diff --git a/published/20170414 5 projects for Raspberry Pi at home.md b/published/20170414 5 projects for Raspberry Pi at home.md new file mode 100644 index 0000000000..3d6f5b2382 --- /dev/null +++ b/published/20170414 5 projects for Raspberry Pi at home.md @@ -0,0 +1,149 @@ +[#]: collector: (lujun9972) +[#]: translator: (warmfrog) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10936-1.html) +[#]: subject: (5 projects for Raspberry Pi at home) +[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home) +[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall) + +5 个可在家中使用的树莓派项目 +====================================== + +![5 projects for Raspberry Pi at home][1] + +[树莓派][2] 电脑可被用来进行多种设置用于不同的目的。显然它在教育市场帮助学生在教室和创客空间中学习编程与创客技巧方面占有一席之地,它在工作场所和工厂中有大量行业应用。我打算介绍五个你可能想要在你的家中构建的项目。 + +### 媒体中心 + +在家中人们常用树莓派作为媒体中心来服务多媒体文件。它很容易搭建,树莓派提供了大量的 GPU(图形处理单元)运算能力来在大屏电视上渲染你的高清电视节目和电影。将 [Kodi][3](从前的 XBMC)运行在树莓派上是一个很棒的方式,它可以播放你的硬盘或网络存储上的任何媒体。你同样可以安装一个插件来播放 YouTube 视频。 + +还有几个略微不同的选择,最常见的是 [OSMC][4](开源媒体中心)和 [LibreELEC][5],都是基于 Kodi 的。它们在放映媒体内容方面表现的都非常好,但是 OSMC 有一个更酷炫的用户界面,而 LibreElec 更轻量级。你要做的只是选择一个发行版,下载镜像并安装到一个 SD 卡中(或者仅仅使用 [NOOBS][6]),启动,然后就准备好了。 + +![LibreElec ][7] + +*LibreElec;树莓派基金会, CC BY-SA* + +![OSMC][8] + +*OSMC.tv, 版权所有, 授权使用* + +在往下走之前,你需要决定[使用哪种树莓派][9]。这些发行版在任何树莓派(1、2、3 或 Zero)上都能运行,视频播放在这些树莓派中的任何一个上都能胜任。除了 Pi 3(和 Zero W)有内置 Wi-Fi,唯一可察觉的不同是用户界面的反应速度,在 Pi 3 上更快。Pi 2 也不会慢太多,所以如果你不需要 Wi-Fi 它也是可以的,但是当切换菜单时,你会注意到 Pi 3 比 Pi 1 和 Zero 表现的更好。 + +### SSH 网关 + +如果你想从外部网络访问你的家庭局域网的电脑和设备,你必须打开这些设备的端口来允许外部访问。在互联网中开放这些端口有安全风险,意味着你总是你总是处于被攻击、滥用或者其他各种未授权访问的风险中。然而,如果你在你的网络中安装一个树莓派,并且设置端口映射来仅允许通过 SSH 访问树莓派,你可以这么用来作为一个安全的网关来跳到网络中的其他树莓派和 PC。 + +大多数路由允许你配置端口映射规则。你需要给你的树莓派一个固定的内网 IP 地址来设置你的路由器端口 22 映射到你的树莓派端口 22。如果你的网络服务提供商给你提供了一个静态 IP 地址,你能够通过 SSH 和主机的 IP 地址访问(例如,`ssh pi@123.45.56.78`)。如果你有一个域名,你可以配置一个子域名指向这个 IP 地址,所以你没必要记住它(例如,`ssh pi@home.mydomain.com`)。 + +![][11] + +然而,如果你不想将树莓派暴露在互联网上,你应该非常小心,不要让你的网络处于危险之中。如果你遵循一些简单的步骤来使它更安全: + +1. 大多数人建议你更换你的登录密码(有道理,默认密码 “raspberry” 是众所周知的),但是这不能阻挡暴力攻击。你可以改变你的密码并添加一个双重验证(所以你需要你的密码*和*一个手机生成的与时间相关的密码),这么做更安全。但是,我相信最好的方法阻止入侵者访问你的树莓派是在你的 SSH 配置中[禁止密码认证][12],这样只能通过 SSH 密匙进入。这意味着任何试图猜测你的密码尝试登录的人都不会成功。只有你的私有密匙可以访问。简单来说,很多人建议将 SSH 端口从默认的 22 换成其他的,但是通过简单的 [Nmap][13] 扫描你的 IP 地址,你信任的 SSH 端口就会暴露。 +2. 最好,不要在这个树莓派上运行其他的软件,这样你不会意外暴露其他东西。如果你想要运行其他软件,你最好在网络中的其他树莓派上运行,它们没有暴露在互联网上。确保你经常升级来保证你的包是最新的,尤其是 `openssh-server` 包,这样你的安全缺陷就被打补丁了。 +3. 
安装 [sshblack][14] 或 [fail2ban][15] 来将任何表露出恶意的用户加入黑名单,例如试图暴力破解你的 SSH 密码。 + +使树莓派安全后,让它在线,你将可以在世界的任何地方登录你的网络。一旦你登录到你的树莓派,你可以用 SSH 访问本地网络上的局域网地址(例如,192.168.1.31)访问其他设备。如果你在这些设备上有密码,用密码就好了。如果它们同样只允许 SSH 密匙,你需要确保你的密匙通过 SSH 转发,使用 `-A` 参数:`ssh -A pi@123.45.67.89`。 + +### CCTV / 宠物相机 + +另一个很棒的家庭项目是安装一个相机模块来拍照和录视频,录制并保存文件,在内网或者外网中进行流式传输。你想这么做有很多原因,但两个常见的情况是一个家庭安防相机或监控你的宠物。 + +[树莓派相机模块][16] 是一个优秀的配件。它提供全高清的相片和视频,包括很多高级配置,很[容易编程][17]。[红外线相机][18]用于这种目的是非常理想的,通过一个红外线 LED(树莓派可以控制的),你就能够在黑暗中看见东西。 + +如果你想通过一定频率拍摄静态图片来留意某件事,你可以仅仅写一个简短的 [Python][19] 脚本或者使用命令行工具 [raspistill][20], 在 [Cron][21] 中规划它多次运行。你可能想将它们保存到 [Dropbox][22] 或另一个网络服务,上传到一个网络服务器,你甚至可以创建一个[web 应用][23]来显示他们。 + +如果你想要在内网或外网中流式传输视频,那也相当简单。在 [picamera 文档][24]中(在 “web streaming” 章节)有一个简单的 MJPEG(Motion JPEG)例子。简单下载或者拷贝代码到文件中,运行并访问树莓派的 IP 地址的 8000 端口,你会看见你的相机的直播输出。 + +有一个更高级的流式传输项目 [pistreaming][25] 也可以,它通过在网络服务器中用 [JSMpeg][26] (一个 JavaScript 视频播放器)和一个用于相机流的单独运行的 websocket。这种方法性能更好,并且和之前的例子一样简单,但是如果要在互联网中流式传输,则需要包含更多代码,并且需要你开放两个端口。 + +一旦你的网络流建立起来,你可以将你的相机放在你想要的地方。我用一个来观察我的宠物龟: + +![Tortoise ][27] + +*Ben Nuttall, CC BY-SA* + +如果你想控制相机位置,你可以用一个舵机。一个优雅的方案是用 Pimoroni 的 [Pan-Tilt HAT][28],它可以让你简单的在二维方向上移动相机。为了与 pistreaming 集成,可以看看该项目的 [pantilthat 分支][29]. + +![Pan-tilt][30] + +*Pimoroni.com, Copyright, 授权使用* + +如果你想将你的树莓派放到户外,你将需要一个防水的外围附件,并且需要一种给树莓派供电的方式。POE(通过以太网提供电力)电缆是一个不错的实现方式。 + +### 家庭自动化或物联网 + +现在是 2017 年(LCTT 译注:此文发表时间),到处都有很多物联网设备,尤其是家中。我们的电灯有 Wi-Fi,我们的面包烤箱比过去更智能,我们的茶壶处于俄国攻击的风险中,除非你确保你的设备安全,不然别将没有必要的设备连接到互联网,之后你可以在家中充分的利用物联网设备来完成自动化任务。 + +市场上有大量你可以购买或订阅的服务,像 Nest Thermostat 或 Philips Hue 电灯泡,允许你通过你的手机控制你的温度或者你的亮度,无论你是否在家。你可以用一个树莓派来催动这些设备的电源,通过一系列规则包括时间甚至是传感器来完成自动交互。用 Philips Hue,你做不到的当你进房间时打开灯光,但是有一个树莓派和一个运动传感器,你可以用 Python API 来打开灯光。类似地,当你在家的时候你可以通过配置你的 Nest 打开加热系统,但是如果你想在房间里至少有两个人时才打开呢?写一些 Python 代码来检查网络中有哪些手机,如果至少有两个,告诉 Nest 来打开加热器。 + +不用选择集成已存在的物联网设备,你可以用简单的组件来做的更多。一个自制的窃贼警报器,一个自动化的鸡笼门开关,一个夜灯,一个音乐盒,一个定时的加热灯,一个自动化的备份服务器,一个打印服务器,或者任何你能想到的。 + +### Tor 协议和屏蔽广告 + +Adafruit 的 [Onion Pi][31] 是一个 [Tor][32] 协议来使你的网络通讯匿名,允许你使用互联网而不用担心窥探者和各种形式的监视。跟随 Adafruit 的指南来设置 Onion Pi,你会找到一个舒服的匿名的浏览体验。 + +![Onion-Pi][33] + +*Onion-pi from Adafruit, Copyright, 授权使用* + +![Pi-hole][34] + +可以在你的网络中安装一个树莓派来拦截所有的网络交通并过滤所有广告。简单下载 [Pi-hole][35] 软件到 Pi 中,你的网络中的所有设备都将没有广告(甚至屏蔽你的移动设备应用内的广告)。 + +树莓派在家中有很多用法。你在家里用树莓派来干什么?你想用它干什么? 
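+如果你想动手试试上文“家庭自动化或物联网”一节里提到的“检查网络中有哪些手机”的点子,下面是一个极简的 Python 示意(LCTT 译注:此段代码为编译时补充的示意,并非原文作者的实现;其中两部手机的固定局域网 IP 以及“至少两人在家”的判断逻辑均为假设,仅作演示):
+
+```
+#!/usr/bin/env python3
+# 极简示意:对每部手机 ping 一次来探测它是否在线,以此估计家中人数
+import subprocess
+
+PHONES = ["192.168.1.20", "192.168.1.21"]  # 假设的手机固定局域网 IP
+
+def is_online(ip):
+    """对给定 IP 发送一次 ping(至多等待 1 秒),返回是否有响应。"""
+    result = subprocess.run(
+        ["ping", "-c", "1", "-W", "1", ip],
+        stdout=subprocess.DEVNULL,
+        stderr=subprocess.DEVNULL,
+    )
+    return result.returncode == 0
+
+at_home = sum(is_online(ip) for ip in PHONES)
+if at_home >= 2:
+    print("至少两人在家:可以在这里调用 Nest 的 API 打开加热器")
+else:
+    print("人数不足,什么也不做")
+```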
+ +在下方评论让我们知道。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home + +作者:[Ben Nuttall][a] +选题:[lujun9972][b] +译者:[warmfrog](https://github.com/warmfrog) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bennuttall +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home) +[2]: https://www.raspberrypi.org/ +[3]: https://kodi.tv/ +[4]: https://osmc.tv/ +[5]: https://libreelec.tv/ +[6]: https://www.raspberrypi.org/downloads/noobs/ +[7]: https://opensource.com/sites/default/files/libreelec_0.png (LibreElec ) +[8]: https://opensource.com/sites/default/files/osmc.png (OSMC) +[9]: https://opensource.com/life/16/10/which-raspberry-pi-should-you-choose-your-project +[10]: mailto:pi@home.mydomain.com +[11]: https://opensource.com/sites/default/files/resize/screenshot_from_2017-04-07_15-13-01-700x380.png +[12]: http://stackoverflow.com/questions/20898384/ssh-disable-password-authentication +[13]: https://nmap.org/ +[14]: http://www.pettingers.org/code/sshblack.html +[15]: https://www.fail2ban.org/wiki/index.php/Main_Page +[16]: https://www.raspberrypi.org/products/camera-module-v2/ +[17]: https://opensource.com/life/15/6/raspberry-pi-camera-projects +[18]: https://www.raspberrypi.org/products/pi-noir-camera-v2/ +[19]: http://picamera.readthedocs.io/ +[20]: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md +[21]: https://www.raspberrypi.org/documentation/linux/usage/cron.md +[22]: https://github.com/RZRZR/plant-cam +[23]: https://github.com/bennuttall/bett-bot +[24]: http://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming +[25]: https://github.com/waveform80/pistreaming +[26]: http://jsmpeg.com/ +[27]: https://opensource.com/sites/default/files/tortoise.jpg (Tortoise) +[28]: https://shop.pimoroni.com/products/pan-tilt-hat +[29]: https://github.com/waveform80/pistreaming/tree/pantilthat +[30]: https://opensource.com/sites/default/files/pan-tilt.gif (Pan-tilt) +[31]: https://learn.adafruit.com/onion-pi/overview +[32]: https://www.torproject.org/ +[33]: https://opensource.com/sites/default/files/onion-pi.jpg (Onion-Pi) +[34]: https://opensource.com/sites/default/files/resize/pi-hole-250x250.png (Pi-hole) +[35]: https://pi-hole.net/ + + + diff --git a/published/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md b/published/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md new file mode 100644 index 0000000000..587c69b785 --- /dev/null +++ b/published/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md @@ -0,0 +1,127 @@ +[#]: collector: (lujun9972) +[#]: translator: (tomjlw) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10939-1.html) +[#]: subject: (How to build a mobile particulate matter sensor with a Raspberry Pi) +[#]: via: (https://opensource.com/article/19/3/mobile-particulate-matter-sensor) +[#]: author: (Stephan Tetzel https://opensource.com/users/stephan) + +如何用树莓派搭建一个颗粒物传感器 +====== + +> 用树莓派、一个廉价的传感器和一个便宜的屏幕监测空气质量。 + +![](https://img.linux.net.cn/data/attachment/album/201906/05/005121bbveeavwgyc1i1gk.jpg) + 
+大约一年前,我写了一篇关于如何使用树莓派和廉价传感器测量[空气质量][2]的文章。我们这几年已在学校里和私下使用了这个项目。然而它有一个缺点:由于它基于无线/有线网,因此它不是便携的。如果你的树莓派、你的智能手机和电脑不在同一个网络的话,你甚至都不能访问传感器测量的数据。 + +为了弥补这一缺陷,我们给树莓派添加了一块小屏幕,这样我们就可以直接从该设备上读取数据。以下是我们如何为我们的移动细颗粒物传感器搭建并配置好屏幕。 + +### 为树莓派搭建好屏幕 + +在[亚马逊][3]、阿里巴巴以及其它来源有许多可以买到的树莓派屏幕,从 ePaper 屏幕到可触控 LCD。我们选择了一个便宜的带触控功能且分辨率为 320*480 像素的[3.5英寸 LCD][3],可以直接插进树莓派的 GPIO 引脚。3.5 英寸屏幕和树莓派几乎一样大,这一点不错。 + +当你第一次启动屏幕打开树莓派的时候,会因为缺少驱动屏幕会保持白屏。你得首先为屏幕安装[合适的驱动][5]。通过 SSH 登入并执行以下命令: + +``` +$ rm -rf LCD-show +$ git clone +$ chmod -R 755 LCD-show +$ cd LCD-show/ +``` + +为你的屏幕执行合适的命令以安装驱动。例如这是给我们 MPI3501 型屏幕的命令: + +``` +$ sudo ./LCD35-show +``` + +这行命令会安装合适的驱动并重启树莓派。 + +### 安装 PIXEL 桌面并设置自动启动 + +以下是我们想要我们项目能够做到的事情:如果树莓派启动,我们想要展现一个有我们空气质量测量数据的网站。 + +首先,安装树莓派的[PIXEL 桌面环境][6]: + +``` +$ sudo apt install raspberrypi-ui-mods +``` + +然后安装 Chromium 浏览器以显示网站: + +``` +$ sudo apt install chromium-browser +``` + +需要自动登录以使测量数据在启动后直接显示;否则你将只会看到登录界面。然而树莓派用户并没有默认设置好自动登录。你可以用 `raspi-config` 工具设置自动登录: + +``` +$ sudo raspi-config +``` + +在菜单中,选择:“3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin”。 + +在启动后用 Chromium 打开我们的网站这块少了一步。创建文件夹 `/home/pi/.config/lxsession/LXDE-pi/`: + +``` +$ mkdir -p /home/pi/config/lxsession/LXDE-pi/ +``` + +然后在该文件夹里创建 `autostart` 文件: + +``` +$ nano /home/pi/.config/lxsession/LXDE-pi/autostart +``` + +并粘贴以下代码: + +``` +#@unclutter +@xset s off +@xset -dpms +@xset s noblank + +# Open Chromium in Full Screen Mode +@chromium-browser --incognito --kiosk +``` + +如果你想要隐藏鼠标指针,你得安装 `unclutter` 包并移除 `autostart` 文件开头的注释。 + +``` +$ sudo apt install unclutter +``` + +![移动颗粒物传感器][7] + +我对去年的代码做了些小修改。因此如果你之前搭建过空气质量项目,确保用[原文章][2]中的指导为 AQI 网站重新下载脚本和文件。 + +通过添加触摸屏,你现在拥有了一个便携的颗粒物传感器!我们在学校用它来检查教室里的空气质量或者进行比较测量。使用这种配置,你无需再依赖网络连接或 WLAN。你可以在任何地方使用这个小型测量站——你甚至可以使用移动电源以摆脱电网。 + +* * * + +这篇文章原来在[开源学校解决方案][8]Open Scool Solutions上发表,获得许可重新发布。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/mobile-particulate-matter-sensor + +作者:[Stephan Tetzel][a] +选题:[lujun9972][b] +译者:[tomjlw](https://github.com/tomjlw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/stephan +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat) +[2]: https://linux.cn/article-9620-1.html +[3]: https://www.amazon.com/gp/search/ref=as_li_qf_sp_sr_tl?ie=UTF8&tag=openschoolsol-20&keywords=lcd%20raspberry&index=aps&camp=1789&creative=9325&linkCode=ur2&linkId=51d6d7676e10d6c7db203c4a8b3b529a +[4]: https://amzn.to/2CcvgpC +[5]: https://github.com/goodtft/LCD-show +[6]: https://linux.cn/article-8459-1.html +[7]: https://opensource.com/sites/default/files/uploads/mobile-aqi-sensor.jpg (Mobile particulate matter sensor) +[8]: https://openschoolsolutions.org/mobile-particulate-matter-sensor/ + diff --git a/published/20190409 5 open source mobile apps.md b/published/20190409 5 open source mobile apps.md new file mode 100644 index 0000000000..e51f6dbc93 --- /dev/null +++ b/published/20190409 5 open source mobile apps.md @@ -0,0 +1,108 @@ +[#]: collector: "lujun9972" +[#]: translator: "fuzheng1998" +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-10931-1.html" +[#]: subject: "5 open source mobile apps" +[#]: via: "https://opensource.com/article/19/4/mobile-apps" +[#]: author: "Chris Hermansen 
https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen" + +5 个可以满足你的生产力、沟通和娱乐需求的开源手机应用 +====== + +> 你可以依靠这些应用来满足你的生产力、沟通和娱乐需求。 + +![](https://img.linux.net.cn/data/attachment/album/201906/03/001949brnq19j5qeqn3onv.jpg) + +像世界上大多数人一样,我的手似乎就没有离开过手机。多亏了我从 Google Play 和 F-Droid 安装的开源移动应用程序,让我的 Android 设备好像提供了无限的沟通、生产力和娱乐服务一样。 + +在我的手机上的许多开源应用程序中,当想听音乐、与朋友/家人和同事联系、或者在旅途中完成工作时,以下五个是我一直使用的。 + +### MPDroid + +一个音乐播放器进程 (MPD)的 Android 控制器。 + +![MPDroid][2] + +MPD 是将音乐从小型音乐服务器电脑传输到大型的黑色立体声音箱的好方法。它直连 ALSA,因此可以通过 ALSA 硬件接口与数模转换器(DAC)对话,它可以通过我的网络进行控制——但是用什么东西控制呢?好吧,事实证明 MPDroid 是一个很棒的 MPD 控制器。它可以管理我的音乐数据库,显示专辑封面,处理播放列表,并支持互联网广播。而且它是开源的,所以如果某些东西不好用的话…… + +MPDroid 可在 [Google Play][4] 和 [F-Droid][5] 上找到。 + +### RadioDroid + +一台能单独使用及与 Chromecast 搭配使用的 Android 网络收音机。 + +![RadioDroid][6] + +RadioDroid 是一个网络收音机,而 MPDroid 则管理我音乐的数据库;从本质上讲,RadioDroid 是 [Internet-Radio.com][7] 的一个前端。此外,通过将耳机插入 Android 设备,通过耳机插孔或 USB 将 Android 设备直接连接到立体声系统,或通过兼容设备使用其 Chromecast 功能,可以享受 RadioDroid。这是一个查看芬兰天气情况,听取排名前 40 的西班牙语音乐,或收到到最新新闻消息的好方法。 + +RadioDroid 可在 [Google Play][8] 和 [F-Droid][9] 上找到。 + +### Signal + +一个支持 Android、iOS,还有桌面系统的安全即时消息客户端。 + +![Signal][10] + +如果你喜欢 WhatsApp,但是因为它与 Facebook [日益密切][11]的关系而感到困扰,那么 Signal 应该是你的下一个产品。Signal 的唯一问题是说服你的朋友们最好用 Signal 取代 WhatsApp。但除此之外,它有一个与 WhatsApp 类似的界面;很棒的语音和视频通话;很好的加密;恰到好处的匿名;并且它受到了一个不打算通过使用软件来获利的基金会的支持。为什么不喜欢它呢? + +Signal 可用于 [Android][12]、[iOS][13] 和 [桌面][14]。 + +### ConnectBot + +Android SSH 客户端。 + +![ConnectBot][15] + +有时我离电脑很远,但我需要登录服务器才能办事。[ConnectBot][16] 是将 SSH 会话搬到手机上的绝佳解决方案。 + +ConnectBot 可在 [Google Play][17] 上找到。 + +### Termux + +有多种熟悉的功能的安卓终端模拟器。 + +![Termux][18] + +你是否需要在手机上运行 `awk` 脚本?[Termux][19] 是个解决方案。如果你需要做终端类的工作,而且你不想一直保持与远程计算机的 SSH 连接,请使用 ConnectBot 将文件放到手机上,然后退出会话,在 Termux 中执行你的操作,用 ConnectBot 发回结果。 + +Termux 可在 [Google Play][20] 和 [F-Droid][21] 上找到。 + +* * * + +你最喜欢用于工作或娱乐的开源移动应用是什么呢?请在评论中分享它们。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/mobile-apps + +作者:[Chris Hermansen][a] +选题:[lujun9972][b] +译者:[fuzheng1998](https://github.com/fuzheng1998) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 +[2]: https://opensource.com/sites/default/files/uploads/mpdroid.jpg "MPDroid" +[3]: https://opensource.com/article/17/4/fun-new-gadget +[4]: https://play.google.com/store/apps/details?id=com.namelessdev.mpdroid&hl=en_US +[5]: https://f-droid.org/en/packages/com.namelessdev.mpdroid/ +[6]: https://opensource.com/sites/default/files/uploads/radiodroid.png "RadioDroid" +[7]: https://www.internet-radio.com/ +[8]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2 +[9]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/ +[10]: https://opensource.com/sites/default/files/uploads/signal.png "Signal" +[11]: https://opensource.com/article/19/3/open-messenger-client +[12]: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms +[13]: https://itunes.apple.com/us/app/signal-private-messenger/id874139669?mt=8 +[14]: https://signal.org/download/ +[15]: 
https://opensource.com/sites/default/files/uploads/connectbot.png "ConnectBot" +[16]: https://connectbot.org/ +[17]: https://play.google.com/store/apps/details?id=org.connectbot +[18]: https://opensource.com/sites/default/files/uploads/termux.jpg "Termux" +[19]: https://termux.com/ +[20]: https://play.google.com/store/apps/details?id=com.termux +[21]: https://f-droid.org/packages/com.termux/ diff --git a/published/20190411 Be your own certificate authority.md b/published/20190411 Be your own certificate authority.md new file mode 100644 index 0000000000..e5f09b6935 --- /dev/null +++ b/published/20190411 Be your own certificate authority.md @@ -0,0 +1,135 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10921-1.html) +[#]: subject: (Be your own certificate authority) +[#]: via: (https://opensource.com/article/19/4/certificate-authority) +[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123) + + +自己成为一个证书颁发机构(CA) +====== + +> 为你的微服务架构或者集成测试创建一个简单的内部 CA。 + +![](https://img.linux.net.cn/data/attachment/album/201905/31/091023sg9s0ss11rsoseqg.jpg) + +传输层安全([TLS][2])模型(有时也称它的旧名称 SSL)基于[证书颁发机构][3]certificate authoritie(CA)的概念。这些机构受到浏览器和操作系统的信任,从而*签名*服务器的的证书以用于验证其所有权。 + +但是,对于内部网络,微服务架构或集成测试,有时候*本地 CA*更有用:一个只在内部受信任的 CA,然后签名本地服务器的证书。 + +这对集成测试特别有意义。获取证书可能会带来负担,因为这会占用服务器几分钟。但是在代码中使用“忽略证书”可能会被引入到生产环境,从而导致安全灾难。 + +CA 证书与常规服务器证书没有太大区别。重要的是它被本地代码信任。例如,在 Python `requests` 库中,可以通过将 `REQUESTS_CA_BUNDLE` 变量设置为包含此证书的目录来完成。 + +在为集成测试创建证书的例子中,不需要*长期的*证书:如果你的集成测试需要超过一天,那么你应该已经测试失败了。 + +因此,计算**昨天**和**明天**作为有效期间隔: + +``` +>>> import datetime +>>> one_day = datetime.timedelta(days=1) +>>> today = datetime.date.today() +>>> yesterday = today - one_day +>>> tomorrow = today - one_day +``` + +现在你已准备好创建一个简单的 CA 证书。你需要生成私钥,创建公钥,设置 CA 的“参数”,然后自签名证书:CA 证书*总是*自签名的。最后,导出证书文件以及私钥文件。 + +``` +from cryptography.hazmat.primitives.asymmetric import rsa +from cryptography.hazmat.primitives import hashes, serialization +from cryptography import x509 +from cryptography.x509.oid import NameOID + + +private_key = rsa.generate_private_key( + public_exponent=65537, + key_size=2048, + backend=default_backend() +) +public_key = private_key.public_key() +builder = x509.CertificateBuilder() +builder = builder.subject_name(x509.Name([ + x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'), +])) +builder = builder.issuer_name(x509.Name([ + x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'), +])) +builder = builder.not_valid_before(yesterday) +builder = builder.not_valid_after(tomorrow) +builder = builder.serial_number(x509.random_serial_number()) +builder = builder.public_key(public_key) +builder = builder.add_extension( + x509.BasicConstraints(ca=True, path_length=None), + critical=True) +certificate = builder.sign( + private_key=private_key, algorithm=hashes.SHA256(), + backend=default_backend() +) +private_bytes = private_key.private_bytes( + encoding=serialization.Encoding.PEM, + format=serialization.PrivateFormat.TraditionalOpenSSL, + encryption_algorithm=serialization.NoEncrption()) +public_bytes = certificate.public_bytes( + encoding=serialization.Encoding.PEM) +with open("ca.pem", "wb") as fout: + fout.write(private_bytes + public_bytes) +with open("ca.crt", "wb") as fout: + fout.write(public_bytes) +``` + +通常,真正的 CA 会需要[证书签名请求][4](CSR)来签名证书。但是,当你是自己的 CA 时,你可以制定自己的规则!可以径直签名你想要的内容。 + +继续集成测试的例子,你可以创建私钥并立即签名相应的公钥。注意 `COMMON_NAME` 需要是 `https` URL 中的“服务器名称”。如果你已配置名称查询,你需要服务器能响应对 `service.test.local` 的请求。 + 
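+(LCTT 译注:照搬本文代码前请留意两处原文笔误——计算有效期的代码中 `tomorrow = today - one_day` 应为 `tomorrow = today + one_day`,否则生成的证书在签发时即已过期;另外文中两处 `serialization.NoEncrption()` 均应为 `serialization.NoEncryption()`。下面接着给出为服务器签名证书的代码。)
+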
+``` +service_private_key = rsa.generate_private_key( + public_exponent=65537, + key_size=2048, + backend=default_backend() +) +service_public_key = service_private_key.public_key() +builder = x509.CertificateBuilder() +builder = builder.subject_name(x509.Name([ + x509.NameAttribute(NameOID.COMMON_NAME, 'service.test.local') +])) +builder = builder.not_valid_before(yesterday) +builder = builder.not_valid_after(tomorrow) +builder = builder.public_key(public_key) +certificate = builder.sign( + private_key=private_key, algorithm=hashes.SHA256(), + backend=default_backend() +) +private_bytes = service_private_key.private_bytes( + encoding=serialization.Encoding.PEM, + format=serialization.PrivateFormat.TraditionalOpenSSL, + encryption_algorithm=serialization.NoEncrption()) +public_bytes = certificate.public_bytes( + encoding=serialization.Encoding.PEM) +with open("service.pem", "wb") as fout: + fout.write(private_bytes + public_bytes) +``` + +现在 `service.pem` 文件有一个私钥和一个“有效”的证书:它已由本地的 CA 签名。该文件的格式可以给 Nginx、HAProxy 或大多数其他 HTTPS 服务器使用。 + +通过将此逻辑用在测试脚本中,只要客户端配置信任该 CA,那么就可以轻松创建看起来真实的 HTTPS 服务器。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/certificate-authority + +作者:[Moshe Zadka][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/moshez/users/elenajon123 +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj +[2]: https://en.wikipedia.org/wiki/Transport_Layer_Security +[3]: https://en.wikipedia.org/wiki/Certificate_authority +[4]: https://en.wikipedia.org/wiki/Certificate_signing_request diff --git a/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md b/published/20190417 Inter-process communication in Linux- Sockets and signals.md similarity index 70% rename from translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md rename to published/20190417 Inter-process communication in Linux- Sockets and signals.md index 4e7a06c983..7a4c304246 100644 --- a/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md +++ b/published/20190417 Inter-process communication in Linux- Sockets and signals.md @@ -1,8 +1,8 @@ [#]: collector: "lujun9972" [#]: translator: "FSSlc" -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-10930-1.html" [#]: subject: "Inter-process communication in Linux: Sockets and signals" [#]: via: "https://opensource.com/article/19/4/interprocess-communication-linux-networking" [#]: author: "Marty Kalin https://opensource.com/users/mkalindepauledu" @@ -10,21 +10,21 @@ Linux 下的进程间通信:套接字和信号 ====== -学习在 Linux 中进程是如何与其他进程进行同步的。 +> 学习在 Linux 中进程是如何与其他进程进行同步的。 -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3) +![](https://img.linux.net.cn/data/attachment/album/201906/02/234437y6gig4tg4yy94356.jpg) -本篇是 Linux 下[进程间通信][1](IPC)系列的第三篇同时也是最后一篇文章。[第一篇文章][2]聚焦在通过共享存储(文件和共享内存段)来进行 IPC,[第二篇文章][3]则通过管道(无名的或者有名的)及消息队列来达到相同的目的。这篇文章将目光从高处(套接字)然后到低处(信号)来关注 IPC。代码示例将用力地充实下面的解释细节。 +本篇是 Linux 下[进程间通信][1](IPC)系列的第三篇同时也是最后一篇文章。[第一篇文章][2]聚焦在通过共享存储(文件和共享内存段)来进行 
IPC,[第二篇文章][3]则通过管道(无名的或者命名的)及消息队列来达到相同的目的。这篇文章将目光从高处(套接字)然后到低处(信号)来关注 IPC。代码示例将用力地充实下面的解释细节。 ### 套接字 -正如管道有两种类型(有名和无名)一样,套接字也有两种类型。IPC 套接字(即 Unix domain socket)给予进程在相同设备(主机)上基于通道的通信能力;而网络套接字给予进程运行在不同主机的能力,因此也带来了网络通信的能力。网络套接字需要底层协议的支持,例如 TCP(传输控制协议)或 UDP(用户数据报协议)。 +正如管道有两种类型(命名和无名)一样,套接字也有两种类型。IPC 套接字(即 Unix 套接字)给予进程在相同设备(主机)上基于通道的通信能力;而网络套接字给予进程运行在不同主机的能力,因此也带来了网络通信的能力。网络套接字需要底层协议的支持,例如 TCP(传输控制协议)或 UDP(用户数据报协议)。 -与之相反,IPC 套接字依赖于本地系统内核的支持来进行通信;特别的,IPC 通信使用一个本地的文件作为套接字地址。尽管这两种套接字的实现有所不同,但在本质上,IPC 套接字和网络套接字的 API 是一致的。接下来的例子将包含网络套接字的内容,但示例服务器和客户端程序可以在相同的机器上运行,因为服务器使用了 localhost(127.0.0.1)这个网络地址,该地址表示的是本地机器上的本地机器的地址。 +与之相反,IPC 套接字依赖于本地系统内核的支持来进行通信;特别的,IPC 通信使用一个本地的文件作为套接字地址。尽管这两种套接字的实现有所不同,但在本质上,IPC 套接字和网络套接字的 API 是一致的。接下来的例子将包含网络套接字的内容,但示例服务器和客户端程序可以在相同的机器上运行,因为服务器使用了 `localhost`(127.0.0.1)这个网络地址,该地址表示的是本地机器上的本地机器地址。 套接字以流的形式(下面将会讨论到)被配置为双向的,并且其控制遵循 C/S(客户端/服务器端)模式:客户端通过尝试连接一个服务器来初始化对话,而服务器端将尝试接受该连接。假如万事顺利,来自客户端的请求和来自服务器端的响应将通过管道进行传输,直到其中任意一方关闭该通道,从而断开这个连接。 -一个`迭代服务器`(只适用于开发)将一直和连接它的客户端打交道:从最开始服务第一个客户端,然后到这个连接关闭,然后服务第二个客户端,循环往复。这种方式的一个缺点是处理一个特定的客户端可能会一直持续下去,使得其他的客户端一直在后面等待。生产级别的服务器将是并发的,通常使用了多进程或者多线程的混合。例如,我台式机上的 Nginx 网络服务器有一个 4 个 worker 的进程池,它们可以并发地处理客户端的请求。在下面的代码示例中,我们将使用迭代服务器,使得我们将要处理的问题达到一个很小的规模,只关注基本的 API,而不去关心并发的问题。 +一个迭代服务器(只适用于开发)将一直和连接它的客户端打交道:从最开始服务第一个客户端,然后到这个连接关闭,然后服务第二个客户端,循环往复。这种方式的一个缺点是处理一个特定的客户端可能会挂起,使得其他的客户端一直在后面等待。生产级别的服务器将是并发的,通常使用了多进程或者多线程的混合。例如,我台式机上的 Nginx 网络服务器有一个 4 个工人worker的进程池,它们可以并发地处理客户端的请求。在下面的代码示例中,我们将使用迭代服务器,使得我们将要处理的问题保持在一个很小的规模,只关注基本的 API,而不去关心并发的问题。 最后,随着各种 POSIX 改进的出现,套接字 API 随着时间的推移而发生了显著的变化。当前针对服务器端和客户端的示例代码特意写的比较简单,但是它着重强调了基于流的套接字中连接的双方。下面是关于流控制的一个总结,其中服务器端在一个终端中开启,而客户端在另一个不同的终端中开启: @@ -108,10 +108,10 @@ int main() { 上面的服务器端程序执行典型的 4 个步骤来准备回应客户端的请求,然后接受其他的独立请求。这里每一个步骤都以服务器端程序调用的系统函数来命名。 - 1. `socket(…)` : 为套接字连接获取一个文件描述符 - 2. `bind(…)` : 将套接字和服务器主机上的一个地址进行绑定 - 3. `listen(…)` : 监听客户端请求 - 4. `accept(…)` :接受一个特定的客户端请求 + 1. `socket(…)`:为套接字连接获取一个文件描述符 + 2. `bind(…)`:将套接字和服务器主机上的一个地址进行绑定 + 3. `listen(…)`:监听客户端请求 + 4. 
`accept(…)`:接受一个特定的客户端请求 上面的 `socket` 调用的完整形式为: @@ -121,7 +121,7 @@ int sockfd = socket(AF_INET,      /* versus AF_LOCAL */                     0);           /* system picks protocol (TCP) */ ``` -第一个参数特别指定了使用的是一个网络套接字,而不是 IPC 套接字。对于第二个参数有多种选项,但 `SOCK_STREAM` 和 `SOCK_DGRAM`(数据报)是最为常用的。基于流的套接字支持可信通道,在这种通道中如果发生了信息的丢失或者更改,都将会被报告。这种通道是双向的,并且从一端到另外一端的有效载荷在大小上可以是任意的。相反的,基于数据报的套接字大多是不可信的,没有方向性,并且需要固定大小的载荷。`socket` 的第三个参数特别指定了协议。对于这里展示的基于流的套接字,只有一种协议选择:TCP,在这里表示的 `0`;。因为对 `socket` 的一次成功调用将返回相似的文件描述符,一个套接字将会被读写,对应的语法和读写一个本地文件是类似的。 +第一个参数特别指定了使用的是一个网络套接字,而不是 IPC 套接字。对于第二个参数有多种选项,但 `SOCK_STREAM` 和 `SOCK_DGRAM`(数据报)是最为常用的。基于流的套接字支持可信通道,在这种通道中如果发生了信息的丢失或者更改,都将会被报告。这种通道是双向的,并且从一端到另外一端的有效载荷在大小上可以是任意的。相反的,基于数据报的套接字大多是不可信的,没有方向性,并且需要固定大小的载荷。`socket` 的第三个参数特别指定了协议。对于这里展示的基于流的套接字,只有一种协议选择:TCP,在这里表示的 `0`。因为对 `socket` 的一次成功调用将返回相似的文件描述符,套接字可以被读写,对应的语法和读写一个本地文件是类似的。 对 `bind` 的调用是最为复杂的,因为它反映出了在套接字 API 方面上的各种改进。我们感兴趣的点是这个调用将一个套接字和服务器端所在机器中的一个内存地址进行绑定。但对 `listen` 的调用就非常直接了: @@ -131,9 +131,9 @@ if (listen(fd, MaxConnects) < 0) 第一个参数是套接字的文件描述符,第二个参数则指定了在服务器端处理一个拒绝连接错误之前,有多少个客户端连接被允许连接。(在头文件 `sock.h` 中 `MaxConnects` 的值被设置为 `8`。) -`accept` 调用默认将是一个拥塞等待:服务器端将不做任何事情直到一个客户端尝试连接它,然后进行处理。`accept` 函数返回的值如果是 `-1` 则暗示有错误发生。假如这个调用是成功的,则它将返回另一个文件描述符,这个文件描述符被用来指代另一个可读可写的套接字,它与 `accept` 调用中的第一个参数对应的接收套接字有所不同。服务器端使用这个可读可写的套接字来从客户端读取请求然后写回它的回应。接收套接字只被用于接受客户端的连接。 +`accept` 调用默认将是一个阻塞等待:服务器端将不做任何事情直到一个客户端尝试连接它,然后进行处理。`accept` 函数返回的值如果是 `-1` 则暗示有错误发生。假如这个调用是成功的,则它将返回另一个文件描述符,这个文件描述符被用来指代另一个可读可写的套接字,它与 `accept` 调用中的第一个参数对应的接收套接字有所不同。服务器端使用这个可读可写的套接字来从客户端读取请求然后写回它的回应。接收套接字只被用于接受客户端的连接。 -在设计上,一个服务器端可以一直运行下去。当然服务器端可以通过在命令行中使用 `Ctrl+C` 来终止它。 +在设计上,服务器端可以一直运行下去。当然服务器端可以通过在命令行中使用 `Ctrl+C` 来终止它。 #### 示例 2. 使用套接字的客户端 @@ -207,25 +207,25 @@ int main() { if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0) ``` -对 `connect` 的调用可能因为多种原因而导致失败,例如客户端拥有错误的服务器端地址或者已经有太多的客户端连接上了服务器端。假如 `connect` 操作成功,客户端将在一个 `for` 循环中,写入它的响应然后读取返回的响应。在经过会话后,服务器端和客户端都将调用 `close` 去关闭可读可写套接字,尽管其中一个关闭操作已经足以关闭他们之间的连接,但此时客户端可能就此关闭,但正如前面提到的那样,服务器端将一直保持开放以处理其他事务。 +对 `connect` 的调用可能因为多种原因而导致失败,例如客户端拥有错误的服务器端地址或者已经有太多的客户端连接上了服务器端。假如 `connect` 操作成功,客户端将在一个 `for` 循环中,写入它的请求然后读取返回的响应。在会话后,服务器端和客户端都将调用 `close` 去关闭这个可读可写套接字,尽管任何一边的关闭操作就足以关闭它们之间的连接。此后客户端可以退出了,但正如前面提到的那样,服务器端可以一直保持开放以处理其他事务。 -从上面的套接示例中,我们看到了请求信息被返回给客户端,这使得客户端和服务器端之间拥有进行丰富对话的可能性。也许这就是套接字的主要魅力。在现代系统中,客户端应用(例如一个数据库客户端)和服务器端通过套接字进行通信非常常见。正如先前提及的那样,本地 IPC 套接字和网络套接字只在某些实现细节上面有所不同,一般来说,IPC 套接字有着更低的消耗和更好的性能。它们的通信 API 基本是一样的。 +从上面的套接字示例中,我们看到了请求信息被回显给客户端,这使得客户端和服务器端之间拥有进行丰富对话的可能性。也许这就是套接字的主要魅力。在现代系统中,客户端应用(例如一个数据库客户端)和服务器端通过套接字进行通信非常常见。正如先前提及的那样,本地 IPC 套接字和网络套接字只在某些实现细节上面有所不同,一般来说,IPC 套接字有着更低的消耗和更好的性能。它们的通信 API 基本是一样的。 ### 信号 -一个信号中断了一个正在执行的程序,在这种意义下,就是用信号与这个程序进行通信。大多数的信号要么可以被忽略(阻塞)或者被处理(通过特别设计的代码)。`SIGSTOP` (暂停)和 `SIGKILL`(立即停止)是最应该提及的两种信号。符号常数拥有整数类型的值,例如 `SIGKILL` 对应的值为 `9`。 +信号会中断一个正在执行的程序,在这种意义下,就是用信号与这个程序进行通信。大多数的信号要么可以被忽略(阻塞)或者被处理(通过特别设计的代码)。`SIGSTOP` (暂停)和 `SIGKILL`(立即停止)是最应该提及的两种信号。这种符号常量有整数类型的值,例如 `SIGKILL` 对应的值为 `9`。 -信号可以在与用户交互的情况下发生。例如,一个用户从命令行中敲了 `Ctrl+C` 来从命令行中终止一个程序;`Ctrl+C` 将产生一个 `SIGTERM` 信号。针对终止,`SIGTERM` 信号可以被阻塞或者被处理,而不像 `SIGKILL` 信号那样。一个进程也可以通过信号和另一个进程通信,这样使得信号也可以作为一种 IPC 机制。 +信号可以在与用户交互的情况下发生。例如,一个用户从命令行中敲了 `Ctrl+C` 来终止一个从命令行中启动的程序;`Ctrl+C` 将产生一个 `SIGTERM` 信号。`SIGTERM` 意即终止,它可以被阻塞或者被处理,而不像 `SIGKILL` 信号那样。一个进程也可以通过信号和另一个进程通信,这样使得信号也可以作为一种 IPC 机制。 考虑一下一个多进程应用,例如 Nginx 网络服务器是如何被另一个进程优雅地关闭的。`kill` 函数: ``` int kill(pid_t pid, int signum); /* declaration */ ``` -bei -可以被一个进程用来终止另一个进程或者一组进程。假如 `kill` 函数的第一个参数是大于 `0` 的,那么这个参数将会被认为是目标进程的 pid(进程 ID),假如这个参数是 `0`,则这个参数将会被识别为信号发送者所属的那组进程。 -`kill` 
的第二个参数要么是一个标准的信号数字(例如 `SIGTERM` 或 `SIGKILL`),要么是 `0` ,这将会对信号做一次询问,确认第一个参数中的 pid 是否是有效的。这样将一个多进程应用的优雅地关闭就可以通过向组成该应用的一组进程发送一个终止信号来完成,具体来说就是调用一个 `kill` 函数,使得这个调用的第二个参数是 `SIGTERM` 。(Nginx 主进程可以通过调用 `kill` 函数来终止其他 worker 进程,然后再停止自己。)就像许多库函数一样,`kill` 函数通过一个简单的可变语法拥有更多的能力和灵活性。 +可以被一个进程用来终止另一个进程或者一组进程。假如 `kill` 函数的第一个参数是大于 `0` 的,那么这个参数将会被认为是目标进程的 `pid`(进程 ID),假如这个参数是 `0`,则这个参数将会被视作信号发送者所属的那组进程。 + +`kill` 的第二个参数要么是一个标准的信号数字(例如 `SIGTERM` 或 `SIGKILL`),要么是 `0` ,这将会对信号做一次询问,确认第一个参数中的 `pid` 是否是有效的。这样优雅地关闭一个多进程应用就可以通过向组成该应用的一组进程发送一个终止信号来完成,具体来说就是调用一个 `kill` 函数,使得这个调用的第二个参数是 `SIGTERM` 。(Nginx 主进程可以通过调用 `kill` 函数来终止其他工人进程,然后再停止自己。)就像许多库函数一样,`kill` 函数通过一个简单的可变语法拥有更多的能力和灵活性。 #### 示例 3. 一个多进程系统的优雅停止 @@ -290,9 +290,9 @@ int main() { 上面的停止程序模拟了一个多进程系统的优雅退出,在这个例子中,这个系统由一个父进程和一个子进程组成。这次模拟的工作流程如下: - * 父进程尝试去 fork 一个子进程。假如这个 fork 操作成功了,每个进程就执行它自己的代码:子进程就执行函数 `child_code`,而父进程就执行函数 `parent_code`。 - * 子进程将会进入一个潜在的无限循环,在这个循环中子进程将睡眠一秒,然后打印一个信息,接着再次进入睡眠状态,以此循环往复。来自父进程的一个 `SIGTERM` 信号将引起子进程去执行一个信号处理回调函数 `graceful`。这样这个信号就使得子进程可以跳出循环,然后进行子进程和父进程之间的优雅终止。在终止之前子,进程将打印一个信息。 - * 在 fork 一个子进程后,父进程将睡眠 5 秒,使得子进程可以执行一会儿;当然在这个模拟中,子进程大多数时间都在睡眠。然后父进程调用 `SIGTERM` 作为第二个参数的 `kill` 函数,等待子进程的终止,然后自己再终止。 + * 父进程尝试去 `fork` 一个子进程。假如这个 `fork` 操作成功了,每个进程就执行它自己的代码:子进程就执行函数 `child_code`,而父进程就执行函数 `parent_code`。 + * 子进程将会进入一个潜在的无限循环,在这个循环中子进程将睡眠一秒,然后打印一个信息,接着再次进入睡眠状态,以此循环往复。来自父进程的一个 `SIGTERM` 信号将引起子进程去执行一个信号处理回调函数 `graceful`。这样这个信号就使得子进程可以跳出循环,然后进行子进程和父进程之间的优雅终止。在终止之前,进程将打印一个信息。 + * 在 `fork` 一个子进程后,父进程将睡眠 5 秒,使得子进程可以执行一会儿;当然在这个模拟中,子进程大多数时间都在睡眠。然后父进程调用 `SIGTERM` 作为第二个参数的 `kill` 函数,等待子进程的终止,然后自己再终止。 下面是一次运行的输出: @@ -309,22 +309,23 @@ Parent sleeping for a time... My child terminated, about to exit myself... ``` -对于信号的处理,上面的示例使用了 `sigaction` 库函数(POSIX 推荐的用法)而不是传统的 `signal` 函数,`signal` 函数有轻便性问题。下面是我们主要关心的代码片段: +对于信号的处理,上面的示例使用了 `sigaction` 库函数(POSIX 推荐的用法)而不是传统的 `signal` 函数,`signal` 函数有移植性问题。下面是我们主要关心的代码片段: - * 假如对 `fork` 的调用成功了,父进程将执行 `parent_code` 函数,而子进程将执行 `child_code` 函数。在给子进程发送信号之前,父进程将会等待 5 秒: +* 假如对 `fork` 的调用成功了,父进程将执行 `parent_code` 函数,而子进程将执行 `child_code` 函数。在给子进程发送信号之前,父进程将会等待 5 秒: -``` - puts("Parent sleeping for a time..."); + ``` +puts("Parent sleeping for a time..."); sleep(5); if (-1 == kill(cpid, SIGTERM)) { ...sleepkillcpidSIGTERM... 
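+/* 以下为补全示意(原文片段在此处截断),与上文完整列表中 parent_code 的处理一致: */
+  perror("kill");
+  exit(-1);
+}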
``` -假如 `kill` 调用成功了,父进程将在子进程终止时做等待,使得子进程不会变成一个僵尸进程。在等待完成后,父进程再退出。 - * `child_code` 函数首先调用 `set_handler` 然后进入它的可能永久睡眠的循环。下面是我们将要查看的 `set_handler` 函数: + 假如 `kill` 调用成功了,父进程将在子进程终止时做等待,使得子进程不会变成一个僵尸进程。在等待完成后,父进程再退出。 -``` - void set_handler() { +* `child_code` 函数首先调用 `set_handler` 然后进入它的可能永久睡眠的循环。下面是我们将要查看的 `set_handler` 函数: + + ``` +void set_handler() {   struct sigaction current;            /* current setup */   sigemptyset(¤t.sa_mask);       /* clear the signal set */   current.sa_flags = 0;                /* for setting sa_handler, not sa_action */ @@ -333,21 +334,22 @@ if (-1 == kill(cpid, SIGTERM)) { } ``` -上面代码的前三行在做相关的准备。第四个语句将为 `graceful` 设定 handler ,它将在调用 `_exit` 来停止之前打印一些信息。第 5 行和最后一行的语句将通过调用 `sigaction` 来向系统注册上面的 handler。`sigaction` 的第一个参数是 `SIGTERM` ,用作终止;第二个参数是当前的 `sigaction` 设定,而最后的参数(在这个例子中是 `NULL` )可被用来保存前面的 `sigaction` 设定,以备后面的可能使用。 + 上面代码的前三行在做相关的准备。第四个语句将为 `graceful` 设定为句柄,它将在调用 `_exit` 来停止之前打印一些信息。第 5 行和最后一行的语句将通过调用 `sigaction` 来向系统注册上面的句柄。`sigaction` 的第一个参数是 `SIGTERM` ,用作终止;第二个参数是当前的 `sigaction` 设定,而最后的参数(在这个例子中是 `NULL` )可被用来保存前面的 `sigaction` 设定,以备后面的可能使用。 使用信号来作为 IPC 的确是一个很轻量的方法,但确实值得尝试。通过信号来做 IPC 显然可以被归入 IPC 工具箱中。 ### 这个系列的总结 在这个系列中,我们通过三篇有关 IPC 的文章,用示例代码介绍了如下机制: + * 共享文件 * 共享内存(通过信号量) - * 管道(有名和无名) + * 管道(命名和无名) * 消息队列 * 套接字 * 信号 -甚至在今天,在以线程为中心的语言,例如 Java、C# 和 Go 等变得越来越流行的情况下,IPC 仍然很受欢迎,因为相比于使用多线程,通过多进程来实现并发有着一个明显的优势:默认情况下,每个进程都有它自己的地址空间,除非使用了基于共享内存的 IPC 机制(为了达到安全的并发,竞争条件在多线程和多进程的时候必须被加上锁。),在多进程中可以排除掉基于内存的竞争条件。对于任何一个写过甚至是通过共享变量来通信的基本多线程程序的人来说,TA 都会知道想要写一个清晰、高效、线程安全的代码是多么具有挑战性。使用单线程的多进程的确是很有吸引力的,这是一个切实可行的方式,使用它可以利用好今天多处理器的机器,而不需要面临基于内存的竞争条件的风险。 +甚至在今天,在以线程为中心的语言,例如 Java、C# 和 Go 等变得越来越流行的情况下,IPC 仍然很受欢迎,因为相比于使用多线程,通过多进程来实现并发有着一个明显的优势:默认情况下,每个进程都有它自己的地址空间,除非使用了基于共享内存的 IPC 机制(为了达到安全的并发,竞争条件在多线程和多进程的时候必须被加上锁),在多进程中可以排除掉基于内存的竞争条件。对于任何一个写过即使是基本的通过共享变量来通信的多线程程序的人来说,他都会知道想要写一个清晰、高效、线程安全的代码是多么具有挑战性。使用单线程的多进程的确是很有吸引力的,这是一个切实可行的方式,使用它可以利用好今天多处理器的机器,而不需要面临基于内存的竞争条件的风险。 当然,没有一个简单的答案能够回答上述 IPC 机制中的哪一个更好。在编程中每一种 IPC 机制都会涉及到一个取舍问题:是追求简洁,还是追求功能强大。以信号来举例,它是一个相对简单的 IPC 机制,但并不支持多个进程之间的丰富对话。假如确实需要这样的对话,另外的选择可能会更合适一些。带有锁的共享文件则相对直接,但是当要处理大量共享的数据流时,共享文件并不能很高效地工作。管道,甚至是套接字,有着更复杂的 API,可能是更好的选择。让具体的问题去指导我们的选择吧。 @@ -360,13 +362,13 @@ via: https://opensource.com/article/19/4/interprocess-communication-linux-networ 作者:[Marty Kalin][a] 选题:[lujun9972][b] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/mkalindepauledu [b]: https://github.com/lujun9972 [1]: https://en.wikipedia.org/wiki/Inter-process_communication -[2]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1 -[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-2 +[2]: https://linux.cn/article-10826-1.html +[3]: https://linux.cn/article-10845-1.html [4]: http://condor.depaul.edu/mkalin diff --git a/published/20190422 4 open source apps for plant-based diets.md b/published/20190422 4 open source apps for plant-based diets.md new file mode 100644 index 0000000000..7399627ee1 --- /dev/null +++ b/published/20190422 4 open source apps for plant-based diets.md @@ -0,0 +1,69 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10926-1.html) +[#]: subject: (4 open source apps for plant-based diets) +[#]: via: (https://opensource.com/article/19/4/apps-plant-based-diets) +[#]: author: (Joshua Allen Holm 
https://opensource.com/users/holmja) + +4 款“吃草”的开源应用 +====== + +> 这些应用使素食者、纯素食主义者和那些想吃得更健康的杂食者找到可以吃的食物。 + +![](https://img.linux.net.cn/data/attachment/album/201906/01/193302nompumppxnmnxirz.jpg) + +减少对肉类、乳制品和加工食品的消费对地球来说更好,也对你的健康更有益。改变你的饮食习惯可能很困难,但是一些开源的 Android 应用可以让你吃的更清淡。无论你是参加[无肉星期一][2]、践行 Mark Bittman 的 [6:00 前的素食][3]指南,还是完全切换到[植物全食饮食][4]whole-food, plant-based diet(WFPB),这些应用能帮助你找出要吃什么、发现素食和素食友好的餐馆,并轻松地将你的饮食偏好传达给他人,来助你更好地走这条路。所有这些应用都是开源的,可从 [F-Droid 仓库][5]下载。 + +### Daily Dozen + +![Daily Dozen app][6] + +[Daily Dozen][7] 提供了医学博士、美国法医学会院士(FACLM) Michael Greger 推荐的项目清单作为健康饮食和生活方式的一部分。Greger 博士建议食用由多种食物组成的基于植物的全食饮食,并坚持日常锻炼。该应用可以让你跟踪你吃的每种食物的份数,你喝了多少份水(或其他获准的饮料,如茶),以及你是否每天锻炼。每类食物都提供食物分量和属于该类别的食物清单。例如,十字花科蔬菜类包括白菜、花椰菜、抱子甘蓝等许多其他建议。 + +### Food Restrictions + +![Food Restrictions app][8] + +[Food Restrictions][9] 是一个简单的应用,它可以帮助你将你的饮食限制传达给他人,即使这些人不会说你的语言。用户可以输入七种不同类别的食物限制:鸡肉、牛肉、猪肉、鱼、奶酪、牛奶和辣椒。每种类别都有“我不吃”和“我过敏”选项。“不吃”选项会显示带有红色 X 的图标。“过敏” 选项显示 “X” 和小骷髅图标。可以使用文本而不是图标显示相同的信息,但文本仅提供英语和葡萄牙语。还有一个选项可以显示一条文字信息,说明用户是素食主义者或纯素食主义者,它比选择选项更简洁、更准确地总结了这些饮食限制。纯素食主义者的文本清楚地提到不吃鸡蛋和蜂蜜,这在选择选项中是没有的。但是,就像选择选项方式的文字版本一样,这些句子仅提供英语和葡萄牙语。 + +### OpenFoodFacts + +![Open Food Facts app][10] + +购买杂货时避免买入不必要的成分可能令人沮丧,但 [OpenFoodFacts][11] 可以帮助简化流程。该应用可让你扫描产品上的条形码,以获得有关产品成分和是否健康的报告。即使产品符合纯素产品的标准,产品仍然可能非常不健康。拥有成分列表和营养成分可让你在购物时做出明智的选择。此应用的唯一缺点是数据是用户贡献的,因此并非每个产品都可有数据,但如果你想回馈项目,你可以贡献新数据。 + +### OpenVegeMap + +![OpenVegeMap app][12] + +使用 [OpenVegeMap][13] 查找你附近的纯素食或素食主义餐厅。此应用可以通过手机的当前位置或者输入地址来搜索。餐厅分类为仅限纯素食者、适合纯素食者,仅限素食主义者,适合素食者,非素食和未知。该应用使用来自 [OpenStreetMap][14] 的数据和用户提供的有关餐馆的信息,因此请务必仔细检查以确保所提供的信息是最新且准确的。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/apps-plant-based-diets + +作者:[Joshua Allen Holm][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/holmja +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 +[2]: https://www.meatlessmonday.com/ +[3]: https://www.amazon.com/dp/0385344740/ +[4]: https://nutritionstudies.org/whole-food-plant-based-diet-guide/ +[5]: https://f-droid.org/ +[6]: https://opensource.com/sites/default/files/uploads/daily_dozen.png (Daily Dozen app) +[7]: https://f-droid.org/en/packages/org.nutritionfacts.dailydozen/ +[8]: https://opensource.com/sites/default/files/uploads/food_restrictions.png (Food Restrictions app) +[9]: https://f-droid.org/en/packages/br.com.frs.foodrestrictions/ +[10]: https://opensource.com/sites/default/files/uploads/openfoodfacts.png (Open Food Facts app) +[11]: https://f-droid.org/en/packages/openfoodfacts.github.scrachx.openfood/ +[12]: https://opensource.com/sites/default/files/uploads/openvegmap.png (OpenVegeMap app) +[13]: https://f-droid.org/en/packages/pro.rudloff.openvegemap/ +[14]: https://www.openstreetmap.org/ diff --git a/published/20190427 Monitoring CPU and GPU Temperatures on Linux.md b/published/20190427 Monitoring CPU and GPU Temperatures on Linux.md new file mode 100644 index 0000000000..1ba2cb1a9b --- /dev/null +++ b/published/20190427 Monitoring CPU and GPU Temperatures on Linux.md @@ -0,0 +1,146 @@ +[#]: collector: (lujun9972) +[#]: translator: (cycoe) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10929-1.html) +[#]: subject: 
(Monitoring CPU and GPU Temperatures on Linux) +[#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/) +[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/) + +在 Linux 上监控 CPU 和 GPU 温度 +====== + +> 本篇文章讨论了在 Linux 命令行中监控 CPU 和 GPU 温度的两种简单方式。 + +由于 [Steam][1](包括 [Steam Play][2],即 Proton)和一些其他的发展,GNU/Linux 正在成为越来越多计算机用户的日常游戏平台的选择。也有相当一部分用户在遇到像[视频编辑][3]或图形设计等(Kdenlive 和 [Blender][4] 是这类应用程序中很好的例子)资源消耗型计算任务时,也会使用 GNU/Linux。 + +不管你是否是这些用户中的一员或其他用户,你也一定想知道你的电脑 CPU 和 GPU 能有多热(如果你想要超频的话更会如此)。如果是这样,那么继续读下去。我们会介绍两个非常简单的命令来监控 CPU 和 GPU 温度。 + +我的装置包括一台 [Slimbook Kymera][5] 和两台显示器(一台 TV 和一台 PC 监视器),使得我可以用一台来玩游戏,另一台来留意监控温度。另外,因为我使用 [Zorin OS][6],我会将关注点放在 Ubuntu 和 Ubuntu 的衍生发行版上。 + +为了监控 CPU 和 GPU 的行为,我们将利用实用的 `watch` 命令在每几秒钟之后动态地得到读数。 + +![][7] + +### 在 Linux 中监控 CPU 温度 + +对于 CPU 温度,我们将结合使用 `watch` 与 `sensors` 命令。一篇关于[此工具的图形用户界面版本][8]的有趣文章已经在 It's FOSS 中介绍过了。然而,我们将在此处使用命令行版本: + +``` +watch -n 2 sensors +``` + +`watch` 保证了读数会在每 2 秒钟更新一次(当然,这个周期值能够根据你的需要去更改): + +``` +Every 2,0s: sensors + +iwlwifi-virtual-0 +Adapter: Virtual device +temp1: +39.0°C + +acpitz-virtual-0 +Adapter: Virtual device +temp1: +27.8°C (crit = +119.0°C) +temp2: +29.8°C (crit = +119.0°C) + +coretemp-isa-0000 +Adapter: ISA adapter +Package id 0: +37.0°C (high = +82.0°C, crit = +100.0°C) +Core 0: +35.0°C (high = +82.0°C, crit = +100.0°C) +Core 1: +35.0°C (high = +82.0°C, crit = +100.0°C) +Core 2: +33.0°C (high = +82.0°C, crit = +100.0°C) +Core 3: +36.0°C (high = +82.0°C, crit = +100.0°C) +Core 4: +37.0°C (high = +82.0°C, crit = +100.0°C) +Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C) +``` + +除此之外,我们还能得到如下信息: + + * 我们有 5 个核心正在被使用(并且当前的最高温度为 37.0℃)。 + * 温度超过 82.0℃ 会被认为是过热。 + * 超过 100.0℃ 的温度会被认为是超过临界值。 + +根据以上的温度值我们可以得出结论,我的电脑目前的工作负载非常小。 + +### 在 Linux 中监控 GPU 温度 + +现在让我们来看看显卡。我从来没使用过 AMD 的显卡,因此我会将重点放在 Nvidia 的显卡上。我们需要做的第一件事是从 [Ubuntu 的附加驱动][10] 中下载合适的最新驱动。 + +在 Ubuntu(Zorin 或 Linux Mint 也是相同的)中,进入“软件和更新 > 附加驱动”选项,选择最新的可用驱动。另外,你可以添加或启用显示卡的官方 ppa(通过命令行或通过“软件和更新 > 其他软件”来实现)。安装驱动程序后,你将可以使用 “Nvidia X Server” 的 GUI 程序以及命令行工具 `nvidia-smi`(Nvidia 系统管理界面)。因此我们将使用 `watch` 和 `nvidia-smi`: + +``` +watch -n 2 nvidia-smi +``` + +与 CPU 的情况一样,我们会在每两秒得到一次更新的读数: + +``` +Every 2,0s: nvidia-smi + +Fri Apr 19 20:45:30 2019 ++-----------------------------------------------------------------------------+ +| Nvidia-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 | +|-------------------------------+----------------------+----------------------+ +| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | +| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | +|===============================+======================+======================| +| 0 GeForce GTX 106... 
Off | 00000000:01:00.0 On | N/A | +| 0% 54C P8 10W / 120W | 433MiB / 6077MiB | 4% Default | ++-------------------------------+----------------------+----------------------+ + ++-----------------------------------------------------------------------------+ +| Processes: GPU Memory | +| GPU PID Type Process name Usage | +|=============================================================================| +| 0 1557 G /usr/lib/xorg/Xorg 190MiB | +| 0 1820 G /usr/bin/gnome-shell 174MiB | +| 0 7820 G ...equest-channel-token=303407235874180773 65MiB | ++-----------------------------------------------------------------------------+ +``` + +从这个表格中我们得到了关于显示卡的如下信息: + + * 它正在使用版本号为 418.56 的开源驱动。 + * 显示卡的当前温度为 54.0℃,并且风扇的使用量为 0%。 + * 电量的消耗非常低:仅仅 10W。 + * 总量为 6GB 的 vram(视频随机存取存储器),只使用了 433MB。 + * vram 正在被 3 个进程使用,他们的 ID 分别为 1557、1820 和 7820。 + +大部分这些事实或数值都清晰地表明,我们没有在玩任何消耗系统资源的游戏或处理大负载的任务。当我们开始玩游戏、处理视频或其他类似任务时,这些值就会开始上升。 + +#### 结论 + +即便我们有 GUI 工具,但我还是发现这两个命令对于实时监控硬件非常的顺手。 + +你将如何去使用它们呢?你可以通过阅读他们的 man 手册来学习更多关于这些工具的使用技巧。 + +你有其他偏爱的工具吗?在评论里分享给我们吧 ;)。 + +玩得开心! + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/ + +作者:[Alejandro Egea-Abellán][a] +选题:[lujun9972][b] +译者:[cycoe](https://github.com/cycoe) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/itsfoss/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/install-steam-ubuntu-linux/ +[2]: https://itsfoss.com/steam-play-proton/ +[3]: https://itsfoss.com/best-video-editing-software-linux/ +[4]: https://www.blender.org/ +[5]: https://slimbook.es/ +[6]: https://zorinos.com/ +[7]: https://itsfoss.com/wp-content/uploads/2019/04/monitor-cpu-gpu-temperature-linux-800x450.png +[8]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/ +[9]: https://itsfoss.com/best-command-line-games-linux/ +[10]: https://itsfoss.com/install-additional-drivers-ubuntu/ +[11]: https://itsfoss.com/review-googler-linux/ +[12]: https://itsfoss.com/wp-content/uploads/2019/04/EGEA-ABELLAN-Alejandro.jpg diff --git a/published/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md b/published/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md new file mode 100644 index 0000000000..457caaa69a --- /dev/null +++ b/published/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md @@ -0,0 +1,112 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10919-1.html) +[#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide]) +[#]: via: (https://itsfoss.com/install-budgie-ubuntu/) +[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/) + +在 Ubuntu 上安装 Budgie 桌面 +====== + +> 在这个逐步的教程中学习如何在 Ubuntu 上安装 Budgie 桌面。 + +在所有[各种 Ubuntu 版本][1]中,[Ubuntu Budgie][2] 是最被低估的版本。它外观优雅,而且需要的资源也不多。 + +阅读这篇 《[Ubuntu Budgie 点评][3]》或观看下面的视频,了解 Ubuntu Budgie 18.04 的外观如何。 + +- [Ubuntu 18.04 Budgie Desktop Tour [It's Elegant]](https://youtu.be/KXgreWOK33k) + +如果你喜欢 [Budgie 桌面][5]但你正在使用其他版本的 Ubuntu,例如默认 Ubuntu 带有 GNOME 桌面,我有个好消息。你可以在当前的 Ubuntu 系统上安装 Budgie 并切换桌面环境。 + +在这篇文章中,我将告诉你到底该怎么做。但首先,对那些不了解 Budgie 的人进行一点介绍。 + +Budgie 桌面环境主要由 [Solus Linux 团队开发][6]。它的设计注重优雅和现代使用。Budgie 适用于所有主流 Linux 发行版,可以让用户在其上尝试体验这种新的桌面环境。Budgie 现在非常成熟,并提供了出色的桌面体验。 + +> 警告 +> +> 在同一系统上安装多个桌面可能会导致冲突,你可能会遇到一些问题,如面板中缺少图标或同一程序的多个图标。 +> +> 你也许不会遇到任何问题。是否要尝试不同桌面由你决定。 + +### 在 Ubuntu 上安装 Budgie + +此方法未在 Linux Mint 上进行测试,因此我建议你 
Mint 上不要按照此指南进行操作。 + +对于正在使用 Ubuntu 的人,Budgie 现在默认是 Ubuntu 仓库的一部分。因此,我们不需要添加任何 PPA 来下载 Budgie。 + +要安装 Budgie,只需在终端中运行此命令即可。我们首先要确保系统已完全更新。 + +``` +sudo apt update && sudo apt upgrade +sudo apt install ubuntu-budgie-desktop +``` + +下载完成后,你将看到选择显示管理器的提示。选择 “lightdm” 以获得完整的 Budgie 体验。 + +![Select lightdm][7] + +安装完成后,重启计算机。然后,你会看到 Budgie 的登录页面。输入你的密码进入主屏幕。 + +![Budgie Desktop Home][8] + +### 切换到其他桌面环境 + +![Budgie login screen][9] + +你可以单击登录名旁边的 Budgie 图标获取登录选项。在那里,你可以在已安装的桌面环境(DE)之间进行选择。就我而言,我看到了 Budgie 和默认的 Ubuntu(GNOME)桌面。 + +![Select your DE][10] + +因此,无论何时你想登录 GNOME,都可以使用此菜单执行此操作。 + +### 如何删除 Budgie + +如果你不喜欢 Budgie 或只是想回到常规的以前的 Ubuntu,你可以如上节所述切换回常规桌面。 + +但是,如果你真的想要删除 Budgie 及其组件,你可以按照以下命令回到之前的状态。 + +**在使用这些命令之前先切换到其他桌面环境:** + +``` +sudo apt remove ubuntu-budgie-desktop ubuntu-budgie* lightdm +sudo apt autoremove +sudo apt install --reinstall gdm3 +``` + +成功运行所有命令后,重启计算机。 + +现在,你将回到 GNOME 或其他你有的桌面。 + +### 你对 Budgie 有什么看法? + +Budgie 是[最佳 Linux 桌面环境][12]之一。希望这个简短的指南帮助你在 Ubuntu 上安装了很棒的 Budgie 桌面。 + +如果你安装了 Budgie,你最喜欢它的什么?请在下面的评论中告诉我们。像往常一样,欢迎任何问题或建议。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-budgie-ubuntu/ + +作者:[Atharva Lele][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/atharva/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/which-ubuntu-install/ +[2]: https://ubuntubudgie.org/ +[3]: https://itsfoss.com/ubuntu-budgie-18-review/ +[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 +[5]: https://github.com/solus-project/budgie-desktop +[6]: https://getsol.us/home/ +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_select_dm.png?fit=800%2C559&ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_homescreen.jpg?fit=800%2C500&ssl=1 +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen.png?fit=800%2C403&ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen_select_de.png?fit=800%2C403&ssl=1 +[11]: https://itsfoss.com/snapd-error-ubuntu/ +[12]: https://itsfoss.com/best-linux-desktop-environments/ diff --git a/published/20161106 Myths about -dev-urandom.md b/published/201905/20161106 Myths about -dev-urandom.md similarity index 100% rename from published/20161106 Myths about -dev-urandom.md rename to published/201905/20161106 Myths about -dev-urandom.md diff --git a/published/20171214 Build a game framework with Python using the module Pygame.md b/published/201905/20171214 Build a game framework with Python using the module Pygame.md similarity index 100% rename from published/20171214 Build a game framework with Python using the module Pygame.md rename to published/201905/20171214 Build a game framework with Python using the module Pygame.md diff --git a/published/20171215 How to add a player to your Python game.md b/published/201905/20171215 How to add a player to your Python game.md similarity index 100% rename from published/20171215 How to add a player to your Python game.md rename to published/201905/20171215 How to add a player to your Python game.md diff --git a/published/20180130 An introduction to the DomTerm terminal emulator for Linux.md b/published/201905/20180130 An introduction to the DomTerm terminal emulator for Linux.md similarity index 100% rename from published/20180130 An 
introduction to the DomTerm terminal emulator for Linux.md rename to published/201905/20180130 An introduction to the DomTerm terminal emulator for Linux.md diff --git a/published/201905/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md b/published/201905/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md new file mode 100644 index 0000000000..3d8083d440 --- /dev/null +++ b/published/201905/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md @@ -0,0 +1,229 @@ +ddgr:一个从终端搜索 DuckDuckGo 的命令行工具 +====== + +在 Linux 中,Bash 技巧非常棒,它使 Linux 中的一切成为可能。 + +对于开发人员或系统管理员来说,它真的很管用,因为他们大部分时间都在使用终端。你知道他们为什么喜欢这种技巧吗? + +因为这些技巧可以提高他们的工作效率,也能使他们工作更快。 + +### 什么是 ddgr + +[ddgr][1] 是一个命令行实用程序,用于从终端搜索 DuckDuckGo。如果设置了 `BROWSER` 环境变量,ddgr 可以在几个基于文本的浏览器中开箱即用。 + +确保你的系统安装了任何一个基于文本的浏览器。你可能知道 [googler][2],它允许用户从 Linux 命令行进行 Google 搜索。 + +它在命令行用户中非常受欢迎,他们期望对隐私敏感的 DuckDuckGo 也有类似的实用程序,这就是 `ddgr` 出现的原因。 + +与 Web 界面不同,你可以指定每页要查看的搜索结果数。 + +**建议阅读:** + +- [Googler – 从 Linux 命令行搜索 Google][2] +- [Buku – Linux 中一个强大的命令行书签管理器][3] +- [SoCLI – 从终端搜索和浏览 StackOverflow 的简单方法][4] +- [RTV(Reddit 终端查看器)- 一个简单的 Reddit 终端查看器][5] + +### 什么是 DuckDuckGo + +DDG 即 DuckDuckGo。DuckDuckGo(DDG)是一个真正保护用户搜索和隐私的互联网搜索引擎。它没有过滤用户的个性化搜索结果,对于给定的搜索词,它会向所有用户显示相同的搜索结果。 + +大多数用户更喜欢谷歌搜索引擎,但是如果你真的担心隐私,那么你可以放心地使用 DuckDuckGo。 + +### ddgr 特性 + + * 快速且干净(没有广告、多余的 URL 或杂物参数),自定义颜色 + * 旨在以最小的空间提供最高的可读性 + * 指定每页显示的搜索结果数 + * 可以在 omniprompt 中导航结果,在浏览器中打开 URL + * 用于 Bash、Zsh 和 Fish 的搜索和选项补完脚本 + * 支持 DuckDuckGo Bang(带有自动补完) + * 直接在浏览器中打开第一个结果(如同 “I’m Feeling Ducky”) + * 不间断搜索:无需退出即可在 omniprompt 中触发新搜索 + * 关键字支持(例如:filetype:mime、site:somesite.com) + * 按时间、指定区域搜索,禁用安全搜索 + * 支持 HTTPS 代理,支持 Do Not Track,可选择禁用用户代理字符串 + * 支持自定义 URL 处理程序脚本或命令行实用程序 + * 全面的文档,man 页面有方便的使用示例 + * 最小的依赖关系 + +### 需要条件 + +`ddgr` 需要 Python 3.4 或更高版本。因此,确保你的系统应具有 Python 3.4 或更高版本。 + +``` +$ python3 --version +Python 3.6.3 +``` + +### 如何在 Linux 中安装 ddgr + +我们可以根据发行版使用以下命令轻松安装 `ddgr`。 + +对于 Fedora ,使用 [DNF 命令][6]来安装 `ddgr`。 + +``` +# dnf install ddgr +``` + +或者我们可以使用 [SNAP 命令][7]来安装 `ddgr`。 + +``` +# snap install ddgr +``` + +对于 LinuxMint/Ubuntu,使用 [APT-GET 命令][8] 或 [APT 命令][9]来安装 `ddgr`。 + +``` +$ sudo add-apt-repository ppa:twodopeshaggy/jarun +$ sudo apt-get update +$ sudo apt-get install ddgr +``` + +对于基于 Arch Linux 的系统,使用 [Yaourt 命令][10]或 [Packer 命令][11]从 AUR 仓库安装 `ddgr`。 + +``` +$ yaourt -S ddgr +或 +$ packer -S ddgr +``` + +对于 Debian,使用 [DPKG 命令][12] 安装 `ddgr`。 + +``` +# wget https://github.com/jarun/ddgr/releases/download/v1.2/ddgr_1.2-1_debian9.amd64.deb +# dpkg -i ddgr_1.2-1_debian9.amd64.deb +``` + +对于 CentOS 7,使用 [YUM 命令][13]来安装 `ddgr`。 + +``` +# yum install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.el7.3.centos.x86_64.rpm +``` + +对于 opensuse,使用 [zypper 命令][14]来安装 `ddgr`。 + +``` +# zypper install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.opensuse42.3.x86_64.rpm +``` + +### 如何启动 ddgr + +在终端上输入 `ddgr` 命令,不带任何选项来进行 DuckDuckGo 搜索。你将获得类似于下面的输出。 + +``` +$ ddgr +``` + +![][16] + +### 如何使用 ddgr 进行搜索 + +我们可以通过两种方式启动搜索。从 omniprompt 或者直接从终端开始。你可以搜索任何你想要的短语。 + +直接从终端: + +``` +$ ddgr 2daygeek +``` + +![][17] + +从 omniprompt: + +![][18] + +### Omniprompt 快捷方式 + +输入 `?` 以获得 omniprompt,它将显示关键字列表和进一步使用 `ddgr` 的快捷方式。 + +![][19] + +### 如何移动下一页、上一页和第一页 + +它允许用户移动下一页、上一页或第一页。 + + * `n`: 移动到下一组搜索结果 + * `p`: 移动到上一组搜索结果 + * `f`: 跳转到第一页 + +![][20] + +### 如何启动新搜索 + +`d` 选项允许用户从 omniprompt 发起新的搜索。例如,我搜索了 “2daygeek website”,现在我将搜索 “Magesh Maruthamuthu” 这个新短语。 + +从 omniprompt: + +``` +ddgr (? 
for help) d magesh maruthmuthu +``` + +![][21] + +### 在搜索结果中显示完整的 URL + +默认情况下,它仅显示文章标题,在搜索中添加 `x` 选项以在搜索结果中显示完整的文章网址。 + +``` +$ ddgr -n 5 -x 2daygeek +``` + +![][22] + +### 限制搜索结果 + +默认情况下,搜索结果每页显示 10 个结果。如果你想为方便起见限制页面结果,可以使用 `ddgr` 带有 `--num` 或 ` -n` 参数。 + +``` +$ ddgr -n 5 2daygeek +``` + +![][23] + +### 网站特定搜索 + +要搜索特定网站的特定页面,使用以下格式。这将从网站获取给定关键字的结果。例如,我们在 2daygeek 网站下搜索 “Package Manager”,查看结果。 + +``` +$ ddgr -n 5 --site 2daygeek "package manager" +``` + +![][24] + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/ddgr-duckduckgo-search-from-the-command-line-in-linux/ + +作者:[Magesh Maruthamuthu][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[wxy](https://github.com/wxy) +选题:[lujun9972](https://github.com/lujun9972) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/magesh/ +[1]:https://github.com/jarun/ddgr +[2]:https://www.2daygeek.com/googler-google-search-from-the-command-line-on-linux/ +[3]:https://www.2daygeek.com/buku-command-line-bookmark-manager-linux/ +[4]:https://www.2daygeek.com/socli-search-and-browse-stack-overflow-from-linux-terminal/ +[5]:https://www.2daygeek.com/rtv-reddit-terminal-viewer-a-simple-terminal-viewer-for-reddit/ +[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[7]:https://www.2daygeek.com/snap-command-examples/ +[8]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[9]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[10]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/ +[11]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/ +[12]:https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/ +[13]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[14]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[15]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[16]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux1.png +[17]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-3.png +[18]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-2.png +[19]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-4.png +[20]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-5a.png +[21]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-6a.png +[22]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-7a.png +[23]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-8.png +[24]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-9a.png diff --git a/published/20180429 The Easiest PDO Tutorial (Basics).md b/published/201905/20180429 The Easiest PDO Tutorial (Basics).md similarity index 100% rename from published/20180429 The Easiest PDO Tutorial (Basics).md rename to published/201905/20180429 The Easiest PDO Tutorial (Basics).md diff --git a/published/20180518 What-s a hero without a villain- How to add one to your Python game.md 
b/published/201905/20180518 What-s a hero without a villain- How to add one to your Python game.md similarity index 100% rename from published/20180518 What-s a hero without a villain- How to add one to your Python game.md rename to published/201905/20180518 What-s a hero without a villain- How to add one to your Python game.md diff --git a/published/20180605 How to use autofs to mount NFS shares.md b/published/201905/20180605 How to use autofs to mount NFS shares.md similarity index 100% rename from published/20180605 How to use autofs to mount NFS shares.md rename to published/201905/20180605 How to use autofs to mount NFS shares.md diff --git a/published/201905/20180611 3 open source alternatives to Adobe Lightroom.md b/published/201905/20180611 3 open source alternatives to Adobe Lightroom.md new file mode 100644 index 0000000000..4cebea72fa --- /dev/null +++ b/published/201905/20180611 3 open source alternatives to Adobe Lightroom.md @@ -0,0 +1,86 @@ +Adobe Lightroom 的三个开源替代品 +======= + +> 摄影师们:在没有 Lightroom 套件的情况下,可以看看这些 RAW 图像处理器。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camera-photography-film.jpg?itok=oe2ixyu6) + +如今智能手机的摄像功能已经完备到多数人认为可以代替传统摄影了。虽然这在傻瓜相机的市场中是个事实,但是对于许多摄影爱好者和专业摄影师看来,一个高端单反相机所能带来的照片景深、清晰度以及真实质感是口袋中的智能手机无法与之相比的。 + +所有的这些功能在便利性上要付出一些很小的代价;就像传统的胶片相机中的反色负片,单反照相得到的 RAW 格式文件必须预先处理才能印刷或编辑;因此对于单反相机,照片的后期处理是无可替代的,并且 首选应用就是 Adobe Lightroom。但是由于 Adobe Lightroom 的昂贵价格、基于订阅的定价模式以及专有许可证都使更多人开始关注其开源替代品。 + +Lightroom 有两大主要功能:处理 RAW 格式的图片文件,以及数字资产管理系统(DAM) —— 通过标签、评星以及其他元数据信息来简单清晰地整理照片。 + +在这篇文章中,我们将介绍三个开源的图片处理软件:Darktable、LightZone 以及 RawTherapee。所有的软件都有 DAM 系统,但没有任何一个具有 Lightroom 基于机器学习的图像分类和标签功能。如果你想要知道更多关于开源的 DAM 系统的软件,可以看 Terry Hacock 的文章:“[开源项目的 DAM 管理][2]”,他分享了他在自己的 [Lunatics!][3] 电影项目研究过的开源多媒体软件。 + +### Darktable + +![Darktable][4] + +类似其他两个软件,Darktable 可以处理 RAW 格式的图像并将它们转换成可用的文件格式 —— JPEG、PNG、TIFF、PPM、PFM 和 EXR,它同时支持 Google 和 Facebook 的在线相册,上传至 Flikr,通过邮件附件发送以及创建在线相册。 + +它有 61 个图像处理模块,可以调整图像的对比度、色调、明暗、色彩、噪点;添加水印;切割以及旋转;等等。如同另外两个软件一样,不论你做出多少次修改,这些修改都是“无损的” —— 你的初始 RAW 图像文件始终会被保存。 + +Darktable 可以从 400 多种相机型号中直接导入照片,以及有 JPEG、CR2、DNG、OpenEXR 和 PFM 等格式的支持。图像在一个数据库中显示,因此你可以轻易地过滤并查询这些元数据,包括了文字标签、评星以及颜色标签。软件同时支持 21 种语言,支持 Linux、MacOS、BSD、Solaris 11/GNOME 以及 Windows(Windows 版本是最新发布的,Darktable 声明它比起其他版本可能还有一些不完备之处,有一些未实现的功能)。 + +Darktable 在开源许可证 [GPLv3][7] 下发布,你可以了解更多它的 [特性][8],查阅它的 [用户手册][9],或者直接去 Github 上看[源代码][10] 。 + +### LightZone + +![LightZone's tool stack][11] + +[LightZone][12] 和其他两个软件类似同样是无损的 RAW 格式图像处理工具:它是跨平台的,有 Windows、MacOS 和 Linux 版本,除 RAW 格式之外,它还支持 JPG 和 TIFF 格式的图像处理。接下来说说 LightZone 其他独特特性。 + +这个软件最初在 2005 年时,是以专有许可证发布的图像处理软件,后来在 BSD 证书下开源。此外,在你下载这个软件之前,你必须注册一个免费账号,以便 LightZone的 开发团队可以跟踪软件的下载数量以及建立相关社区。(许可很快,而且是自动的,因此这不是一个很大的使用障碍。) + +除此之外的一个特性是这个软件的图像处理通常是通过很多可组合的工具实现的,而不是叠加滤镜(就像大多数图像处理软件),这些工具组可以被重新编排以及移除,以及被保存并且复制用到另一些图像上。如果想要编辑图片的部分区域,你还可以通过矢量工具或者根据色彩和亮度来选择像素。 + +想要了解更多,见 LightZone 的[论坛][13] 或者查看 Github上的 [源代码][14]。 + +### RawTherapee + +![RawTherapee][15] + +[RawTherapee][16] 是另一个值得关注的开源([GPL][17])的 RAW 图像处理器。就像 Darktable 和 LightZone,它是跨平台的(支持 Windows、MacOS 和 Linux),一切修改都在无损条件下进行,因此不论你叠加多少滤镜做出多少改变,你都可以回到你最初的 RAW 文件。 + +RawTherapee 采用的是一个面板式的界面,包括一个历史记录面板来跟踪你做出的修改,以方便随时回到先前的图像;一个快照面板可以让你同时处理一张照片的不同版本;一个可滚动的工具面板可以方便准确地选择工具。这些工具包括了一系列的调整曝光、色彩、细节、图像变换以及去马赛克功能。 + +这个软件可以从多数相机直接导入 RAW 文件,并且支持超过 25 种语言,得到了广泛使用。批量处理以及 [SSE][18] 优化这类功能也进一步提高了图像处理的速度以及对 CPU 性能的利用。 + +RawTherapee 还提供了很多其他 [功能][19];可以查看它的 [官方文档][20] 以及 [源代码][21] 了解更多细节。 + +你是否在摄影中使用另外的开源 RAW 图像处理工具?有任何建议和推荐都可以在评论中分享。 + +------ + +via: 
https://opensource.com/alternatives/adobe-lightroom + +作者:[Opensource.com][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[scoutydren](https://github.com/scoutydren) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com +[1]: https://en.wikipedia.org/wiki/Raw_image_format +[2]: https://opensource.com/article/18/3/movie-open-source-software +[3]: http://lunatics.tv/ +[4]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_darkroom1.jpg?itok=0fjk37tC "Darktable" +[5]: http://www.darktable.org/ +[6]: https://www.darktable.org/about/faq/#faq-windows +[7]: https://github.com/darktable-org/darktable/blob/master/LICENSE +[8]: https://www.darktable.org/about/features/ +[9]: https://www.darktable.org/resources/ +[10]: https://github.com/darktable-org/darktable +[11]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ +[12]: http://www.lightzoneproject.org/ +[13]: http://www.lightzoneproject.org/Forum +[14]: https://github.com/ktgw0316/LightZone +[15]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_rawtherapee.jpg?itok=meiuLxPw "RawTherapee" +[16]: http://rawtherapee.com/ +[17]: https://github.com/Beep6581/RawTherapee/blob/dev/LICENSE.txt +[18]: https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions +[19]: http://rawpedia.rawtherapee.com/Features +[20]: http://rawpedia.rawtherapee.com/Main_Page +[21]: https://github.com/Beep6581/RawTherapee diff --git a/published/20180725 Put platforms in a Python game with Pygame.md b/published/201905/20180725 Put platforms in a Python game with Pygame.md similarity index 100% rename from published/20180725 Put platforms in a Python game with Pygame.md rename to published/201905/20180725 Put platforms in a Python game with Pygame.md diff --git a/published/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md b/published/201905/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md similarity index 100% rename from published/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md rename to published/201905/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md diff --git a/published/20181218 Using Pygame to move your game character around.md b/published/201905/20181218 Using Pygame to move your game character around.md similarity index 100% rename from published/20181218 Using Pygame to move your game character around.md rename to published/201905/20181218 Using Pygame to move your game character around.md diff --git a/published/201905/20190107 Aliases- To Protect and Serve.md b/published/201905/20190107 Aliases- To Protect and Serve.md new file mode 100644 index 0000000000..60299f61c5 --- /dev/null +++ b/published/201905/20190107 Aliases- To Protect and Serve.md @@ -0,0 +1,172 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10918-1.html) +[#]: subject: (Aliases: To Protect and Serve) +[#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve) +[#]: author: (Paul Brown https://www.linux.com/users/bro66) + +命令别名:保护和服务 +====== + +> Linux shell 
允许你将命令彼此链接在一起,一次触发执行复杂的操作,并且可以对此创建别名作为快捷方式。
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p)
+
+让我们继续我们的别名系列。到目前为止,你可能已经阅读了我们的[关于别名的第一篇文章][1],并且应该非常清楚,别名是为你省去很多麻烦的最简单方法。例如,你已经看到它们帮助我们减少了输入,让我们看看别名派上用场的其他几个案例。
+
+### 别名即快捷方式
+
+Linux shell 最美妙的事情之一是可以使用数以万计的选项和把命令连接在一起执行真正复杂的操作。好吧,也许这种美妙之处是见仁见智的,但是我们觉得这个功能很实用。
+
+不利的一面是,你经常需要记住那些难以记忆、也难以输入的命令组合。比如说硬盘上的空间非常宝贵,而你想要做一些清理工作。你的第一步可能是寻找隐藏在你的家目录里的东西。你可以用来判断的一个标准是查找不再使用的内容。`ls` 可以帮助你:
+
+```
+ls -lct
+```
+
+上面的命令显示了每个文件和目录的详细信息(`-l`),并显示了每一项上次访问的时间(`-c`),然后它按从最近访问到最少访问的顺序排序这个列表(`-t`)。
+
+这难以记住吗?你可能不会每天都使用 `-c` 和 `-t` 选项,所以也许是吧。无论如何,定义一个别名,如:
+
+```
+alias lt='ls -lct'
+```
+
+会更容易一些。
+
+然后,你也可能希望列表首先显示最旧的文件:
+
+```
+alias lo='lt -F | tac'
+```
+
+![aliases][3]
+
+*图 1:使用 lt 和 lo 别名。*
+
+这里有一些有趣的事情。首先,我们使用别名(`lt`)来创建另一个别名 —— 这是完全可以的。其次,我们将一个新参数传递给 `lt`(后者又通过 `lt` 别名的定义传递给了 `ls`)。
+
+`-F` 选项会将特殊符号附加到项目的名称后,以便更好地区分常规文件(没有符号)和可执行文件(附加了 `*`)、目录文件(以 `/` 结尾),以及所有链接文件、符号链接文件(以 `@` 符号结尾)等等。`-F` 选项源自你还在用单色终端的日子,那时没有其他方法可以轻松看出列表项之间的差异。在这里使用它是因为当你将输出从 `lt` 传递到 `tac` 时,你会丢失 `ls` 的颜色。
+
+第三件我们需要注意的事情是我们使用了管道。当你要将一个命令的输出传递给另外一个命令作为输入时,就会用到管道。在包括 Bash 在内的许多 shell 里,你可以使用管道符(`|`)来做传递。
+
+在这里,你将来自 `lt -F` 的输出导给 `tac`。`tac` 这个命令有点玩笑的意思,你或许听说过 `cat` 命令,它名义上用于将文件彼此连接(con`cat`),而在实践中,它被用于将一个文件的内容打印到终端。`tac` 做的事情一样,但是它是以逆序将接收到的内容输出出来。明白了吗?`cat` 和 `tac`,技术人有时候也挺有趣的。
+
+`cat` 和 `tac` 都能输出通过管道传递过来的内容,在这里,也就是一个按时间顺序排序的文件列表。
+
+那么,在稍稍离题之后,最终我们得到的就是:这个列表会将当前目录中的文件和目录按新旧程度逆序列出(即老的在前)。
+
+最后你需要注意的是,`lt` 在当前目录或任何其他目录都能工作:
+
+```
+# 这可以工作:
+lt
+# 这也可以:
+lt /some/other/directory
+```
+
+……而 `lo` 只能在当前目录奏效:
+
+```
+# 这可以工作:
+lo
+# 而这不行:
+lo /some/other/directory
+```
+
+这是因为 Bash 会展开别名的组分。当你键入:
+
+```
+lt /some/other/directory
+```
+
+Bash 实际上运行的是:
+
+```
+ls -lct /some/other/directory
+```
+
+这是一个有效的 Bash 命令。
+
+而当你键入:
+
+```
+lo /some/other/directory
+```
+
+Bash 试图运行:
+
+```
+ls -lct -F | tac /some/other/directory
+```
+
+这不是一个有效的命令,主要是因为 `/some/other/directory` 是个目录,而 `cat` 和 `tac` 不能用于目录。
+
+### 更多的别名快捷方式
+
+ * `alias lll='ls -R'` 会打印出目录的内容,并深入到子目录里面打印子目录的内容,以及子目录的子目录,等等。这是一个查看一个目录下所有内容的方式。
+ * `alias mkdir='mkdir -pv'` 可以让你一次性创建目录及其子目录。按照 `mkdir` 的基本形式,要创建一个包含子目录的目录,你必须这样:
+
+```
+mkdir newdir
+mkdir newdir/subdir
+```
+
+或这样:
+
+```
+mkdir -p newdir/subdir
+```
+
+而用这个别名你将只需要这样就行:
+
+```
+mkdir newdir/subdir
+```
+
+你的新 `mkdir` 也会告诉你创建子目录时都做了什么。
+
+### 别名也是一种保护
+
+别名的另一个好处是它可以作为防止你意外地删除或覆写已有的文件的保护措施。你可能听说过这样的传言:有个 Linux 新用户以 root 身份运行了:
+
+```
+rm -rf /
+```
+
+整个系统就爆了。而决定输入如下命令的用户:
+
+```
+rm -rf /some/directory/ *
+```
+
+就很好地干掉了他们的家目录的全部内容。这里不小心键入的目录和 `*` 之间的那个空格有时候很容易就会被忽视掉。
+
+这两种情况我们都可以通过 `alias rm='rm -i'` 别名来避免。`-i` 选项会使 `rm` 询问用户是否真的要做这个操作,在你对你的文件系统造成不可弥补的损失之前给你第二次机会。
+
+对于 `cp` 也是一样,它能够覆盖一个文件而不会给你任何提示。创建一个类似 `alias cp='cp -i'` 来保持安全吧。
+
+### 下一次
+
+我们越来越深入到了脚本领域,下一次,我们将沿着这个方向,看看如何在命令行组合命令以给你真正的乐趣,并可靠地解决系统管理员每天面临的问题。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://linux.cn/article-10377-1.html
+[2]: https://www.linux.com/files/images/fig01png-0
+[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig01_0.png?itok=crqTm_va (aliases)
+[4]: 
https://www.linux.com/licenses/category/used-permission diff --git a/published/20190307 How to Restart a Network in Ubuntu -Beginner-s Tip.md b/published/201905/20190307 How to Restart a Network in Ubuntu -Beginner-s Tip.md similarity index 100% rename from published/20190307 How to Restart a Network in Ubuntu -Beginner-s Tip.md rename to published/201905/20190307 How to Restart a Network in Ubuntu -Beginner-s Tip.md diff --git a/published/20190308 Virtual filesystems in Linux- Why we need them and how they work.md b/published/201905/20190308 Virtual filesystems in Linux- Why we need them and how they work.md similarity index 100% rename from published/20190308 Virtual filesystems in Linux- Why we need them and how they work.md rename to published/201905/20190308 Virtual filesystems in Linux- Why we need them and how they work.md diff --git a/published/201905/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/published/201905/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md new file mode 100644 index 0000000000..0ea20b929a --- /dev/null +++ b/published/201905/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md @@ -0,0 +1,56 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10914-1.html) +[#]: subject: (Blockchain 2.0: Blockchain In Real Estate [Part 4]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/) +[#]: author: (ostechnix https://www.ostechnix.com/author/editor/) + +区块链 2.0:房地产区块链(四) +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/03/Blockchain-In-Real-Estate-720x340.png) + +### 区块链 2.0:“更”智能的房地产 + +在本系列的[上一篇文章][1]中我们探讨了区块链的特征,这些区块链将使机构能够将**传统银行**和**融资系统**转换和交织在一起。这部分将探讨**房地产区块链**。房地产业正在走向革命。它是人类已知的交易最活跃、最重要的资产类别之一。然而,由于充满了监管障碍和欺诈、欺骗的无数可能性,它也是最难参与交易的之一。利用适当的共识算法的区块链的分布式分类账本功能被吹捧为这个行业的前进方向,而这个行业传统上被认为其面对变革是保守的。 + +就其无数的业务而言,房地产一直是一个非常保守的行业。这似乎也是理所当然的。2008 年金融危机或 20 世纪上半叶的大萧条等重大经济危机成功摧毁了该行业及其参与者。然而,与大多数具有经济价值的产品一样,房地产行业具有弹性,而这种弹性则源于其保守性。 + +全球房地产市场由价值 228 万亿 [^1] 美元的资产类别组成,出入不大。其他投资资产,如股票、债券和股票合计价值仅为 170 万亿美元。显然,在这样一个行业中实施的交易在很大程度上都是精心策划和执行的。很多时候,房地产也因许多欺诈事件而臭名昭著,并且随之而来的是毁灭性的损失。由于其运营非常保守,该行业也难以驾驭。它受到了法律的严格监管,创造了一个交织在一起的细微差别网络,这对于普通人来说太难以完全理解,使得大多数人无法进入和参与。如果你曾参与过这样的交易,那么你就会知道纸质文件的重要性和长期性。 + +从一个微不足道的开始,虽然是一个重要的例子,以显示当前的记录管理实践在房地产行业有多糟糕,考虑一下[产权保险业务][2] [^3]。产权保险用于对冲土地所有权和所有权记录不可接受且从而无法执行的可能性。诸如此类的保险产品也称为赔偿保险。在许多情况下,法律要求财产拥有产权保险,特别是在处理多年来多次易手的财产时。抵押贷款公司在支持房地产交易时也可能坚持同样的要求。事实上,这种产品自 19 世纪 50 年代就已存在,并且仅在美国每年至少有 1.5 万亿美元的商业价值这一事实证明了一开始的说法。在这种情况下,这些记录的维护方式必须进行改革,区块链提供了一个可持续解决方案。根据[美国土地产权协会][4],平均每个案例的欺诈平均约为 10 万美元,并且涉及交易的所有产权中有 25% 的文件存在问题。区块链允许设置一个不可变的永久数据库,该数据库将跟踪资产本身,记录已经进入的每个交易或投资。这样的分类帐本系统将使包括一次性购房者在内的房地产行业的每个人的生活更加轻松,并使诸如产权保险等金融产品基本上无关紧要。将诸如房地产之类的实物资产转换为这样的数字资产是非常规的,并且目前仅在理论上存在。然而,这种变化迫在眉睫,而不是迟到 [^5]。 + +区块链在房地产中影响最大的领域如上所述,在维护透明和安全的产权管理系统方面。基于区块链的财产记录可以包含有关财产、其所在地、所有权历史以及相关的公共记录的[信息][6]。这将允许房地产交易快速完成,并且无需第三方监控和监督。房地产评估和税收计算等任务成为有形的、客观的参数问题,而不是主观测量和猜测,因为可靠的历史数据是可公开验证的。[UBITQUITY][7] 就是这样一个平台,为企业客户提供定制的基于区块链的解决方案。该平台允许客户跟踪所有房产细节、付款记录、抵押记录,甚至允许运行智能合约,自动处理税收和租赁。 + +这为我们带来了房地产区块链的第二大机遇和用例。由于该行业受到众多第三方的高度监管,除了参与交易的交易对手外,尽职调查和财务评估可能非常耗时。这些流程主要通过离线渠道进行,文书工作需要在最终评估报告出来之前进行数天。对于公司房地产交易尤其如此,这构成了顾问所收取的总计费时间的大部分。如果交易由抵押背书,则这些过程的重复是不可避免的。一旦与所涉及的人员和机构的数字身份相结合,就可以完全避免当前的低效率,并且可以在几秒钟内完成交易。租户、投资者、相关机构、顾问等可以单独验证数据并达成一致的共识,从而验证永久性的财产记录 [^8]。这提高了验证流程的准确性。房地产巨头 RE/MAX 最近宣布与服务提供商 XYO Network Partners 合作,[建立墨西哥房上市地产国家数据库][9]。他们希望有朝一日能够创建世界上最大的(截至目前)去中心化房地产登记中心之一。 + 
+然而,区块链可以带来的另一个重要且可以说是非常民主的变化是投资房地产。与其他投资资产类别不同,即使是小型家庭投资者也可能参与其中,房地产通常需要大量的手工付款才能参与。诸如 ATLANT 和 BitOfProperty 之类的公司将房产的账面价值代币化,并将其转换为加密货币的等价物。这些代币随后在交易所出售,类似于股票和股票的交易方式。[房地产后续产生的任何现金流都会根据其在财产中的“份额”记入贷方或借记给代币所有者][4]。 + +然而,尽管如此,区块链技术仍处于房地产领域的早期采用阶段,目前的法规还没有明确定义它。诸如分布式应用程序、分布式匿名组织(DAO)、智能合约等概念在许多国家的法律领域是闻所未闻的。一旦所有利益相关者充分接受了区块链复杂性的良好教育,就会彻底改革现有的法规和指导方针,这是最务实的前进方式。 同样,这将是一个缓慢而渐进的变化,但是它是一个急需的变化。本系列的下一篇文章将介绍 “智能合约”,例如由 UBITQUITY 和 XYO 等公司实施的那些是如何在区块链中创建和执行的。 + +[^1]: HSBC, “Global Real Estate,” no. April, 2008 +[^3]: D. B. Burke, Law of title insurance. Aspen Law & Business, 2000. +[^5]: M. Swan, O’Reilly – Blockchain. Blueprint for a New Economy – 2015. +[^8]: Deloite, “Blockchain in commercial real estate The future is here ! Table of contents.” + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/ + +作者:[ostechnix][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://linux.cn/article-10689-1.html +[2]: https://www.forbes.com/sites/jordanlulich/2018/06/21/what-is-title-insurance-and-why-its-important/#1472022b12bb +[4]: https://www.cbinsights.com/research/blockchain-real-estate-disruption/#financing +[6]: https://www2.deloitte.com/us/en/pages/financial-services/articles/blockchain-in-commercial-real-estate.html +[7]: https://www.ubitquity.io/ +[9]: https://www.businesswire.com/news/home/20181012005068/en/XYO-Network-Partners-REMAX-M%C3%A9xico-Bring-Blockchain diff --git a/published/20190327 Why DevOps is the most important tech strategy today.md b/published/201905/20190327 Why DevOps is the most important tech strategy today.md similarity index 100% rename from published/20190327 Why DevOps is the most important tech strategy today.md rename to published/201905/20190327 Why DevOps is the most important tech strategy today.md diff --git a/published/201905/20190329 How to manage your Linux environment.md b/published/201905/20190329 How to manage your Linux environment.md new file mode 100644 index 0000000000..0a226f79f7 --- /dev/null +++ b/published/201905/20190329 How to manage your Linux environment.md @@ -0,0 +1,173 @@ +[#]: collector: (lujun9972) +[#]: translator: (robsean) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10916-1.html) +[#]: subject: (How to manage your Linux environment) +[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +如何管理你的 Linux 环境变量 +====== + +> Linux 用户环境变量可以帮助你找到你需要的命令,无须了解系统如何配置的细节而完成大量工作。而这些设置来自哪里和如何被修改它们是另一个话题。 + +![IIP Photo Archive \(CC BY 2.0\)][1] + +在 Linux 系统上的用户账户配置以多种方法简化了系统的使用。你可以运行命令,而不需要知道它们的位置。你可以重新使用先前运行的命令,而不用发愁系统是如何追踪到它们的。你可以查看你的电子邮件,查看手册页,并容易地回到你的家目录,而不用管你在文件系统中身在何方。并且,当需要的时候,你可以调整你的账户设置,以便其更符合你喜欢的方式。 + +Linux 环境设置来自一系列的文件:一些是系统范围(意味着它们影响所有用户账户),一些是处于你的家目录中的配置文件里。系统范围的设置在你登录时生效,而本地设置在其后生效,所以,你在你账户中作出的更改将覆盖系统范围设置。对于 bash 用户,这些文件包含这些系统文件: + +``` +/etc/environment +/etc/bash.bashrc +/etc/profile +``` + +以及一些本地文件: + +``` +~/.bashrc +~/.profile # 如果有 ~/.bash_profile 或 ~/.bash_login 就不会读此文件 +~/.bash_profile +~/.bash_login +``` + +你可以修改本地存在的四个文件的任何一个,因为它们处于你的家目录,并且它们是属于你的。 + +### 查看你的 Linux 环境设置 + +为查看你的环境设置,使用 `env` 命令。你的输出将可能与这相似: + +``` +$ env 
+LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33; +01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32: +*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31: +*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31: +*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01; +31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31: +*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31: +*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31: +*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35: +*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35: +*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35: +*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35: +*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35: +*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35: +*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35: +*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36: +*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36: +*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.spf=00;36: +SSH_CONNECTION=192.168.0.21 34975 192.168.0.11 22 +LESSCLOSE=/usr/bin/lesspipe %s %s +LANG=en_US.UTF-8 +OLDPWD=/home/shs +XDG_SESSION_ID=2253 +USER=shs +PWD=/home/shs +HOME=/home/shs +SSH_CLIENT=192.168.0.21 34975 22 +XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop +SSH_TTY=/dev/pts/0 +MAIL=/var/mail/shs +TERM=xterm +SHELL=/bin/bash +SHLVL=1 +LOGNAME=shs +DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus +XDG_RUNTIME_DIR=/run/user/1000 +PATH=/home/shs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin +LESSOPEN=| /usr/bin/lesspipe %s +_=/usr/bin/env +``` + +虽然你可能会看到大量的输出,上面显示的第一大部分用于在命令行上使用颜色标识各种文件类型。当你看到类似 `*.tar=01;31:` 这样的东西,这告诉你 `tar` 文件将以红色显示在文件列表中,然而 `*.jpg=01;35:` 告诉你 jpg 文件将以紫色显现出来。这些颜色旨在使它易于从一个文件列表中分辨出某些文件。你可以在《[在 Linux 命令行中自定义你的颜色][3]》处学习更多关于这些颜色的定义,和如何自定义它们。 + +当你更喜欢一种不加装饰的显示时,一种关闭颜色显示的简单方法是使用如下命令: + +``` +$ ls -l --color=never +``` + +这个命令可以简单地转换到一个别名: + +``` +$ alias ll2='ls -l --color=never' +``` + +你也可以使用 `echo` 命令来单独地显现某个设置。在这个命令中,我们显示在历史缓存区中将被记忆命令的数量: + +``` +$ echo $HISTSIZE +1000 +``` + +如果你已经移动到某个位置,你在文件系统中的最后位置会被记在这里: + +``` +PWD=/home/shs +OLDPWD=/tmp +``` + +### 作出更改 + +你可以使用一个像这样的命令更改环境设置,但是,如果你希望保持这个设置,在你的 `~/.bashrc` 文件中添加一行代码,例如 `HISTSIZE=1234`。 + +``` +$ export HISTSIZE=1234 +``` + +### “export” 一个变量的本意是什么 + +导出一个环境变量可使设置用于你的 shell 和可能的子 shell。默认情况下,用户定义的变量是本地的,并不被导出到新的进程,例如,子 shell 和脚本。`export` 命令使得环境变量可用在子进程中发挥功用。 + +### 添加和移除变量 + +你可以很容易地在命令行和子 shell 上创建新的变量,并使它们可用。然而,当你登出并再次回来时这些变量将消失,除非你也将它们添加到 `~/.bashrc` 或一个类似的文件中。 + +``` +$ export MSG="Hello, World!" +``` + +如果你需要,你可以使用 `unset` 命令来消除一个变量: + +``` +$ unset MSG +``` + +如果变量是局部定义的,你可以通过加载你的启动文件来简单地将其设置回来。例如: + +``` +$ echo $MSG +Hello, World! +$ unset $MSG +$ echo $MSG + +$ . ~/.bashrc +$ echo $MSG +Hello, World! 
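+
+# 补充示例(假设仍沿用上面的 MSG 变量):把 export 行追加到 ~/.bashrc,
+# 变量就能在重新登录后保留;另外注意,unset 时变量名前不加 $(即 unset MSG)
+$ echo 'export MSG="Hello, World!"' >> ~/.bashrc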
+``` + +### 小结 + +用户账户是用一组恰当的启动文件设立的,创建了一个有用的用户环境,而个人用户和系统管理员都可以通过编辑他们的个人设置文件(对于用户)或很多来自设置起源的文件(对于系统管理员)来更改默认设置。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[robsean](https://github.com/robsean) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/03/environment-rocks-leaves-100792229-large.jpg +[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua +[3]: https://www.networkworld.com/article/3269587/customizing-your-text-colors-on-the-linux-command-line.html +[4]: https://www.facebook.com/NetworkWorld/ +[5]: https://www.linkedin.com/company/network-world diff --git a/published/20190401 What is 5G- How is it better than 4G.md b/published/201905/20190401 What is 5G- How is it better than 4G.md similarity index 100% rename from published/20190401 What is 5G- How is it better than 4G.md rename to published/201905/20190401 What is 5G- How is it better than 4G.md diff --git a/published/20190405 Command line quick tips- Cutting content out of files.md b/published/201905/20190405 Command line quick tips- Cutting content out of files.md similarity index 100% rename from published/20190405 Command line quick tips- Cutting content out of files.md rename to published/201905/20190405 Command line quick tips- Cutting content out of files.md diff --git a/published/20190408 Getting started with Python-s cryptography library.md b/published/201905/20190408 Getting started with Python-s cryptography library.md similarity index 100% rename from published/20190408 Getting started with Python-s cryptography library.md rename to published/201905/20190408 Getting started with Python-s cryptography library.md diff --git a/published/20190408 How to quickly deploy, run Linux applications as unikernels.md b/published/201905/20190408 How to quickly deploy, run Linux applications as unikernels.md similarity index 100% rename from published/20190408 How to quickly deploy, run Linux applications as unikernels.md rename to published/201905/20190408 How to quickly deploy, run Linux applications as unikernels.md diff --git a/published/20190409 Anbox - Easy Way To Run Android Apps On Linux.md b/published/201905/20190409 Anbox - Easy Way To Run Android Apps On Linux.md similarity index 100% rename from published/20190409 Anbox - Easy Way To Run Android Apps On Linux.md rename to published/201905/20190409 Anbox - Easy Way To Run Android Apps On Linux.md diff --git a/published/20190409 How To Install And Configure Chrony As NTP Client.md b/published/201905/20190409 How To Install And Configure Chrony As NTP Client.md similarity index 100% rename from published/20190409 How To Install And Configure Chrony As NTP Client.md rename to published/201905/20190409 How To Install And Configure Chrony As NTP Client.md diff --git a/published/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md b/published/201905/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md similarity index 100% rename from published/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md rename to published/201905/20190409 How To Install And Configure NTP Server And 
NTP Client In Linux.md diff --git a/published/20190411 Installing Ubuntu MATE on a Raspberry Pi.md b/published/201905/20190411 Installing Ubuntu MATE on a Raspberry Pi.md similarity index 100% rename from published/20190411 Installing Ubuntu MATE on a Raspberry Pi.md rename to published/201905/20190411 Installing Ubuntu MATE on a Raspberry Pi.md diff --git a/published/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md b/published/201905/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md similarity index 100% rename from published/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md rename to published/201905/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md diff --git a/published/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md b/published/201905/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md similarity index 100% rename from published/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md rename to published/201905/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md diff --git a/published/20190415 Inter-process communication in Linux- Shared storage.md b/published/201905/20190415 Inter-process communication in Linux- Shared storage.md similarity index 100% rename from published/20190415 Inter-process communication in Linux- Shared storage.md rename to published/201905/20190415 Inter-process communication in Linux- Shared storage.md diff --git a/translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md b/published/201905/20190415 Kubernetes on Fedora IoT with k3s.md similarity index 58% rename from translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md rename to published/201905/20190415 Kubernetes on Fedora IoT with k3s.md index 82729f24a3..a8293d4d3b 100644 --- a/translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md +++ b/published/201905/20190415 Kubernetes on Fedora IoT with k3s.md @@ -1,37 +1,37 @@ [#]: collector: (lujun9972) [#]: translator: (StdioA) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10908-1.html) [#]: subject: (Kubernetes on Fedora IoT with k3s) [#]: via: (https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/) [#]: author: (Lennart Jern https://fedoramagazine.org/author/lennartj/) -使用 k3s 在 Fedora IoT 上运行 Kubernetes +使用 k3s 在 Fedora IoT 上运行 K8S ====== -![][1] +![](https://img.linux.net.cn/data/attachment/album/201905/28/094048yrzlik9oek5rbs5s.jpg) -Fedora IoT 是一个即将发布的、面相物联网的 Fedora 版本。去年 Fedora Magazine 中的《如何使用 Fedora IOT 点亮 LED》一文,第一次介绍了它。从那以后,它与 Fedora Silverblue 一起不断改进,以提供针对面相容器的工作流程的不可变基础操作系统。 +Fedora IoT 是一个即将发布的、面向物联网的 Fedora 版本。去年 Fedora Magazine 的《[如何使用 Fedora IoT 点亮 LED 灯][2]》一文第一次介绍了它。从那以后,它与 Fedora Silverblue 一起不断改进,以提供针对面向容器的工作流的不可变基础操作系统。 -Kubernetes 是一个颇受欢迎的容器编排系统。它可能最常用在那些能够处理巨大负载的强劲硬件上。不过,它也能在像树莓派 3 这样轻量级的设备上运行。我们继续阅读,来了解如何运行它。 +Kubernetes 是一个颇受欢迎的容器编排系统。它可能最常用在那些能够处理巨大负载的强劲硬件上。不过,它也能在像树莓派 3 这样轻量级的设备上运行。让我们继续阅读,来了解如何运行它。 ### 为什么用 Kubernetes? 
-虽然 Kubernetes 在云计算领域风靡一时,但让它在小型单板机上运行可能并不是显而易见的。不过,我们有非常明确的理由来做这件事。首先,这是一个不需要昂贵硬件就可以学习并熟悉 Kubernetes 的好方法;其次,由于它的流行性,市面上有[大量应用][2]进行了预先打包,以用于在 Kubernetes 集群中运行。更不用说,当你遇到问题时,会有大规模的社区用户为你提供帮助。 +虽然 Kubernetes 在云计算领域风靡一时,但让它在小型单板机上运行可能并不是常见的。不过,我们有非常明确的理由来做这件事。首先,这是一个不需要昂贵硬件就可以学习并熟悉 Kubernetes 的好方法;其次,由于它的流行性,市面上有[大量应用][3]进行了预先打包,以用于在 Kubernetes 集群中运行。更不用说,当你遇到问题时,会有大规模的社区用户为你提供帮助。 -最后但同样重要的是,即使是在家庭实验室这样的小规模环境中,容器编排也确实能事情变得更加简单。虽然在学习曲线方面,这一点并不明显,但这些技能在你将来与任何集群打交道的时候都会有帮助。不管你面对的是一个单节点树莓派集群,还是一个大规模的机器学习场,它们的操作方式都是类似的。 +最后但同样重要的是,即使是在家庭实验室这样的小规模环境中,容器编排也确实能够使事情变得更加简单。虽然在学习曲线方面,这一点并不明显,但这些技能在你将来与任何集群打交道的时候都会有帮助。不管你面对的是一个单节点树莓派集群,还是一个大规模的机器学习场,它们的操作方式都是类似的。 #### K3s - 轻量级的 Kubernetes -一个 Kubernetes 的“正常”安装(如果有这么一说的话)对于物联网来说有点沉重。K8s 的推荐内存配置,是每台机器 2GB!不过,我们也有一些替代品,其中一个新人是 [k3s][4]——一个轻量级的 Kubernetes 发行版。 +一个“正常”安装的 Kubernetes(如果有这么一说的话)对于物联网来说有点沉重。K8s 的推荐内存配置,是每台机器 2GB!不过,我们也有一些替代品,其中一个新人是 [k3s][4] —— 一个轻量级的 Kubernetes 发行版。 -K3s 非常特殊,因为它将 etcd 替换成了 SQLite 以满足键值存储需求。还有一点,在于整个 k3s 将使用一个二进制文件分发,而不是每个组件一个。这减少了内存占用并简化了安装过程。基于上述原因,我们只需要 512MB 内存即可运行 k3s,简直适合小型单板电脑! +K3s 非常特殊,因为它将 etcd 替换成了 SQLite 以满足键值存储需求。还有一点,在于整个 k3s 将使用一个二进制文件分发,而不是每个组件一个。这减少了内存占用并简化了安装过程。基于上述原因,我们只需要 512MB 内存即可运行 k3s,极度适合小型单板电脑! ### 你需要的东西 -1. 在虚拟机或实体设备中运行的 Fedora IoT。在[这里][5]可以看到优秀的入门指南。一台机器就足够了,不过两台可以用来测试向集群添加更多节点。 -2. [配置防火墙][6],允许 6443 和 8372 端口的通信。或者,你也可以简单地运行“systemctl stop firewalld”来为这次实验关闭防火墙。 +1. Fedora IoT 运行在虚拟机或实体设备中运行的。在[这里][5]可以看到优秀的入门指南。一台机器就足够了,不过两台可以用来测试向集群添加更多节点。 +2. [配置防火墙][6],允许 6443 和 8372 端口的通信。或者,你也可以简单地运行 `systemctl stop firewalld` 来为这次实验关闭防火墙。 ### 安装 k3s @@ -49,14 +49,14 @@ kubectl get nodes 需要注意的是,有几个选项可以通过环境变量传递给安装脚本。这些选项可以在[文档][7]中找到。当然,你也完全可以直接下载二进制文件来手动安装 k3s。 -对于实验和学习来说,这样已经很棒了,不过单节点的集群也不是一个集群。幸运的是,添加另一个节点并不比设置第一个节点要难。只需要向安装脚本传递两个环境变量,它就可以找到第一个节点,避免运行 k3s 的服务器部分。 +对于实验和学习来说,这样已经很棒了,不过单节点的集群也不能算一个集群。幸运的是,添加另一个节点并不比设置第一个节点要难。只需要向安装脚本传递两个环境变量,它就可以找到第一个节点,而不用运行 k3s 的服务器部分。 ``` curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \ K3S_TOKEN=XXX sh - ``` -上面的 example-url 应被替换为第一个节点的 IP 地址,或一个经过完全限定的域名。在该节点中,(用 XXX 表示的)令牌可以在 /var/lib/rancher/k3s/server/node-token 文件中找到。 +上面的 `example-url` 应被替换为第一个节点的 IP 地址,或一个完全限定域名。在该节点中,(用 XXX 表示的)令牌可以在 `/var/lib/rancher/k3s/server/node-token` 文件中找到。 ### 部署一些容器 @@ -66,19 +66,19 @@ curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \ kubectl create deployment my-server --image nginx ``` -这会从名为“nginx”的容器镜像中创建出一个名叫“my-server”的 [Deployment][8](镜像名默认使用 docker hub 注册中心,以及 latest 标签)。 +这会从名为 `nginx` 的容器镜像中创建出一个名叫 `my-server` 的 [部署][8](默认使用 docker hub 注册中心,以及 `latest` 标签)。 ``` kubectl get pods ``` -为了接触到 pod 中运行的 nginx 服务器,首先将 Deployment 通过一个 [Service][9] 来进行暴露。以下命令将创建一个与 Deployment 同名的 Service。 +为了访问到 pod 中运行的 nginx 服务器,首先通过一个 [服务][9] 来暴露该部署。以下命令将创建一个与该部署同名的服务。 ``` kubectl expose deployment my-server --port 80 ``` -Service 将作为一种负载均衡器和 Pod 的 DNS 记录来工作。比如,当运行第二个 Pod 时,我们只需指定 _my-server_(Service 名称)就可以通过 _curl_ 访问 nginx 服务器。有关如何操作,可以看下面的实例。 +服务将作为一种负载均衡器和 Pod 的 DNS 记录来工作。比如,当运行第二个 Pod 时,我们只需指定 `my-server`(服务名称)就可以通过 `curl` 访问 nginx 服务器。有关如何操作,可以看下面的实例。 ``` # 启动一个 pod,在里面以交互方式运行 bash @@ -90,15 +90,15 @@ curl my-server ### Ingress 控制器及外部 IP -默认状态下,一个 Service 只能获得一个 ClusterIP(只能从集群内部访问),但你也可以通过把它的类型设置为 [LoadBalancer][10] 为服务申请一个外部 IP。不过,并非所有应用都需要自己的 IP 地址。相反,通常可以通过基于 Host 请求头部或请求路径进行路由,从而使多个服务共享一个 IP 地址。你可以在 Kubernetes 使用 [Ingress][11] 完成此操作,而这也是我们要做的。Ingress 也提供了额外的功能,比如无需配置应用,即可对流量进行 TLS 加密。 +默认状态下,一个服务只能获得一个 ClusterIP(只能从集群内部访问),但你也可以通过把它的类型设置为 [LoadBalancer][10] 为该服务申请一个外部 IP。不过,并非所有应用都需要自己的 IP 地址。相反,通常可以通过基于 Host 请求头部或请求路径进行路由,从而使多个服务共享一个 IP 地址。你可以在 
Kubernetes 使用 [Ingress][11] 完成此操作,而这也是我们要做的。Ingress 也提供了额外的功能,比如无需配置应用即可对流量进行 TLS 加密。 -Kubernetes 需要入口控制器来使 Ingress 资源工作,k3s 包含 [Traefik][12] 正是出于此目的。它还包含了一个简单的服务负载均衡器,可以为集群中的服务提供外部 IP。这篇[文档][13]描述了这种服务: +Kubernetes 需要 Ingress 控制器来使 Ingress 资源工作,k3s 包含 [Traefik][12] 正是出于此目的。它还包含了一个简单的服务负载均衡器,可以为集群中的服务提供外部 IP。这篇[文档][13]描述了这种服务: > k3s 包含一个使用可用主机端口的基础服务负载均衡器。比如,如果你尝试创建一个监听 80 端口的负载均衡器,它会尝试在集群中寻找一个 80 端口空闲的节点。如果没有可用端口,那么负载均衡器将保持在 Pending 状态。 > > k3s README -入口控制器已经通过这个负载均衡器暴露在外。你可以使用以下命令找到它正在使用的 IP 地址。 +Ingress 控制器已经通过这个负载均衡器暴露在外。你可以使用以下命令找到它正在使用的 IP 地址。 ``` $ kubectl get svc --all-namespaces @@ -109,13 +109,13 @@ NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) kube-system traefik LoadBalancer 10.43.145.104 10.0.0.8 80:31596/TCP,443:31539/TCP 33d ``` -找到名为 traefik 的 Service。在上面的例子中,我们感兴趣的 IP 是 10.0.0.8。 +找到名为 `traefik` 的服务。在上面的例子中,我们感兴趣的 IP 是 10.0.0.8。 ### 路由传入的请求 -让我们创建一个 Ingress,使它通过基于 Host 头部的路由规则将请求路由至我们的服务器。这个例子中我们使用 [xip.io][14] 来避免必要的 DNS 记录配置工作。它的工作原理是将 IP 地址作为子域包含,以使用10.0.0.8.xip.io的任何子域来达到IP 10.0.0.8。换句话说,my-server.10.0.0.8.xip.io 被用于访问集群中的入口控制器。你现在就可以尝试(使用你自己的 IP,而不是 10.0.0.8)。如果没有入口,你应该会访问到“默认后端”,只是一个写着“404 page not found”的页面。 +让我们创建一个 Ingress,使它通过基于 Host 头部的路由规则将请求路由至我们的服务器。这个例子中我们使用 [xip.io][14] 来避免必要的 DNS 记录配置工作。它的工作原理是将 IP 地址作为子域包含,以使用 `10.0.0.8.xip.io` 的任何子域来达到 IP `10.0.0.8`。换句话说,`my-server.10.0.0.8.xip.io` 被用于访问集群中的 Ingress 控制器。你现在就可以尝试(使用你自己的 IP,而不是 10.0.0.8)。如果没有 Ingress,你应该会访问到“默认后端”,只是一个写着“404 page not found”的页面。 -我们可以使用以下 Ingress 让入口控制器将请求路由到我们的 Web 服务器 Service。 +我们可以使用以下 Ingress 让 Ingress 控制器将请求路由到我们的 Web 服务器的服务。 ``` apiVersion: extensions/v1beta1 @@ -133,17 +133,17 @@ spec: servicePort: 80 ``` -将以上片段保存到 _my-ingress.yaml_ 文件中,然后运行以下命令将其加入集群: +将以上片段保存到 `my-ingress.yaml` 文件中,然后运行以下命令将其加入集群: ``` kubectl apply -f my-ingress.yaml ``` -你现在应该能够在你选择的完全限定域名中访问到 nginx 的默认欢迎页面了。在我的例子中,它是 my-server.10.0.0.8.xip.io。入口控制器会通过 Ingress 中包含的信息来路由请求。对 my-server.10.0.0.8.xip.io 的请求将被路由到 Ingress 中定义为“后端”的 Service 和端口(在本例中为 my-server 和 80)。 +你现在应该能够在你选择的完全限定域名中访问到 nginx 的默认欢迎页面了。在我的例子中,它是 `my-server.10.0.0.8.xip.io`。Ingress 控制器会通过 Ingress 中包含的信息来路由请求。对 `my-server.10.0.0.8.xip.io` 的请求将被路由到 Ingress 中定义为 `backend` 的服务和端口(在本例中为 `my-server` 和 `80`)。 ### 那么,物联网呢? -想象如下场景:你的家伙农场周围有很多的设备。它是一个具有各种硬件功能,传感器和执行器的物联网设备的异构集合。也许某些设备拥有摄像头,天气或光线传感器。其它设备可能会被连接起来,用来控制通风、灯光、百叶窗或闪烁的LED。 +想象如下场景:你的家或农场周围有很多的设备。它是一个具有各种硬件功能、传感器和执行器的物联网设备的异构集合。也许某些设备拥有摄像头、天气或光线传感器。其它设备可能会被连接起来,用来控制通风、灯光、百叶窗或闪烁的 LED。 这种情况下,你想从所有传感器中收集数据,在最终使用它来制定决策和控制执行器之前,也可能会对其进行处理和分析。除此之外,你可能还想配置一个仪表盘来可视化那些正在发生的事情。那么 Kubernetes 如何帮助我们来管理这样的事情呢?我们怎么保证 Pod 在合适的设备上运行? 
@@ -155,13 +155,13 @@ kubectl label nodes = kubectl label nodes node2 camera=available ``` -一旦它们被打上标签,我们就可以轻松地使用 [nodeSelector][15] 为你的工作负载选择合适的节点。拼图的最后一块:如果你想在_所有_合适的节点上运行 Pod,那应该使用 [DaemonSet][16] 而不是 Deployment。换句话说,应为每个使用唯一传感器的数据收集应用程序创建一个 DaemonSet,并使用 nodeSelectors 确保它们仅在具有适当硬件的节点上运行。 +一旦它们被打上标签,我们就可以轻松地使用 [nodeSelector][15] 为你的工作负载选择合适的节点。拼图的最后一块:如果你想在*所有*合适的节点上运行 Pod,那应该使用 [DaemonSet][16] 而不是部署。换句话说,应为每个使用唯一传感器的数据收集应用程序创建一个 DaemonSet,并使用 nodeSelector 确保它们仅在具有适当硬件的节点上运行。 -服务发现功能允许 Pod 通过 Service 名称来寻找彼此,这项功能使得这类分布式系统的管理工作变得易如反掌。你不需要为应用配置 IP 地址或自定义端口,也不需要知道它们。相反,它们可以通过集群中的具名 Service 轻松找到彼此。 +服务发现功能允许 Pod 通过服务名称来寻找彼此,这项功能使得这类分布式系统的管理工作变得易如反掌。你不需要为应用配置 IP 地址或自定义端口,也不需要知道它们。相反,它们可以通过集群中的命名服务轻松找到彼此。 #### 充分利用空闲资源 -随着集群的启动并运行,收集数据并控制灯光和气候可能使你觉得你已经把它完成了。不过,集群中还有大量的计算资源可以用于其它项目。这才是 Kubernetes 真正出彩的地方。 +随着集群的启动并运行,收集数据并控制灯光和气候,可能使你觉得你已经把它完成了。不过,集群中还有大量的计算资源可以用于其它项目。这才是 Kubernetes 真正出彩的地方。 你不必担心这些资源的确切位置,或者去计算是否有足够的内存来容纳额外的应用程序。这正是编排系统所解决的问题!你可以轻松地在集群中部署更多的应用,让 Kubernetes 来找出适合运行它们的位置(或是否适合运行它们)。 @@ -182,14 +182,14 @@ via: https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/ 作者:[Lennart Jern][a] 选题:[lujun9972][b] 译者:[StdioA](https://github.com/StdioA) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://fedoramagazine.org/author/lennartj/ [b]: https://github.com/lujun9972 [1]: https://fedoramagazine.org/wp-content/uploads/2019/04/k3s-1-816x345.png -[2]: https://fedoramagazine.org/turnon-led-fedora-iot/ +[2]: https://linux.cn/article-10380-1.html [3]: https://hub.helm.sh/ [4]: https://k3s.io [5]: https://docs.fedoraproject.org/en-US/iot/getting-started/ diff --git a/published/20190416 Building a DNS-as-a-service with OpenStack Designate.md b/published/201905/20190416 Building a DNS-as-a-service with OpenStack Designate.md similarity index 100% rename from published/20190416 Building a DNS-as-a-service with OpenStack Designate.md rename to published/201905/20190416 Building a DNS-as-a-service with OpenStack Designate.md diff --git a/published/20190416 Detecting malaria with deep learning.md b/published/201905/20190416 Detecting malaria with deep learning.md similarity index 100% rename from published/20190416 Detecting malaria with deep learning.md rename to published/201905/20190416 Detecting malaria with deep learning.md diff --git a/published/20190416 Inter-process communication in Linux- Using pipes and message queues.md b/published/201905/20190416 Inter-process communication in Linux- Using pipes and message queues.md similarity index 100% rename from published/20190416 Inter-process communication in Linux- Using pipes and message queues.md rename to published/201905/20190416 Inter-process communication in Linux- Using pipes and message queues.md diff --git a/published/20190419 Building scalable social media sentiment analysis services in Python.md b/published/201905/20190419 Building scalable social media sentiment analysis services in Python.md similarity index 100% rename from published/20190419 Building scalable social media sentiment analysis services in Python.md rename to published/201905/20190419 Building scalable social media sentiment analysis services in Python.md diff --git a/published/20190419 Getting started with social media sentiment analysis in Python.md b/published/201905/20190419 Getting started with social media sentiment analysis in Python.md similarity index 100% rename from published/20190419 Getting started with social media sentiment 
analysis in Python.md rename to published/201905/20190419 Getting started with social media sentiment analysis in Python.md diff --git a/published/20190419 This is how System76 does open hardware.md b/published/201905/20190419 This is how System76 does open hardware.md similarity index 100% rename from published/20190419 This is how System76 does open hardware.md rename to published/201905/20190419 This is how System76 does open hardware.md diff --git a/published/20190422 2 new apps for music tweakers on Fedora Workstation.md b/published/201905/20190422 2 new apps for music tweakers on Fedora Workstation.md similarity index 100% rename from published/20190422 2 new apps for music tweakers on Fedora Workstation.md rename to published/201905/20190422 2 new apps for music tweakers on Fedora Workstation.md diff --git a/published/20190422 8 environment-friendly open software projects you should know.md b/published/201905/20190422 8 environment-friendly open software projects you should know.md similarity index 100% rename from published/20190422 8 environment-friendly open software projects you should know.md rename to published/201905/20190422 8 environment-friendly open software projects you should know.md diff --git a/published/20190422 Tracking the weather with Python and Prometheus.md b/published/201905/20190422 Tracking the weather with Python and Prometheus.md similarity index 100% rename from published/20190422 Tracking the weather with Python and Prometheus.md rename to published/201905/20190422 Tracking the weather with Python and Prometheus.md diff --git a/published/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md b/published/201905/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md similarity index 100% rename from published/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md rename to published/201905/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md diff --git a/published/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md b/published/201905/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md similarity index 100% rename from published/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md rename to published/201905/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md diff --git a/published/20190425 Automate backups with restic and systemd.md b/published/201905/20190425 Automate backups with restic and systemd.md similarity index 100% rename from published/20190425 Automate backups with restic and systemd.md rename to published/201905/20190425 Automate backups with restic and systemd.md diff --git a/published/20190430 Upgrading Fedora 29 to Fedora 30.md b/published/201905/20190430 Upgrading Fedora 29 to Fedora 30.md similarity index 100% rename from published/20190430 Upgrading Fedora 29 to Fedora 30.md rename to published/201905/20190430 Upgrading Fedora 29 to Fedora 30.md diff --git a/published/20190501 3 apps to manage personal finances in Fedora.md b/published/201905/20190501 3 apps to manage personal finances in Fedora.md similarity index 100% rename from published/20190501 3 apps to manage personal finances in Fedora.md rename to published/201905/20190501 3 apps to manage personal finances in Fedora.md diff --git a/published/20190501 Cisco issues critical security warning for Nexus data-center switches.md 
b/published/201905/20190501 Cisco issues critical security warning for Nexus data-center switches.md similarity index 100% rename from published/20190501 Cisco issues critical security warning for Nexus data-center switches.md rename to published/201905/20190501 Cisco issues critical security warning for Nexus data-center switches.md diff --git a/published/20190501 Write faster C extensions for Python with Cython.md b/published/201905/20190501 Write faster C extensions for Python with Cython.md similarity index 100% rename from published/20190501 Write faster C extensions for Python with Cython.md rename to published/201905/20190501 Write faster C extensions for Python with Cython.md diff --git a/published/20190502 Format Python however you like with Black.md b/published/201905/20190502 Format Python however you like with Black.md similarity index 100% rename from published/20190502 Format Python however you like with Black.md rename to published/201905/20190502 Format Python however you like with Black.md diff --git a/published/20190502 Get started with Libki to manage public user computer access.md b/published/201905/20190502 Get started with Libki to manage public user computer access.md similarity index 100% rename from published/20190502 Get started with Libki to manage public user computer access.md rename to published/201905/20190502 Get started with Libki to manage public user computer access.md diff --git a/published/20190503 API evolution the right way.md b/published/201905/20190503 API evolution the right way.md similarity index 100% rename from published/20190503 API evolution the right way.md rename to published/201905/20190503 API evolution the right way.md diff --git a/published/20190503 Check your spelling at the command line with Ispell.md b/published/201905/20190503 Check your spelling at the command line with Ispell.md similarity index 100% rename from published/20190503 Check your spelling at the command line with Ispell.md rename to published/201905/20190503 Check your spelling at the command line with Ispell.md diff --git a/published/20190503 Say goodbye to boilerplate in Python with attrs.md b/published/201905/20190503 Say goodbye to boilerplate in Python with attrs.md similarity index 100% rename from published/20190503 Say goodbye to boilerplate in Python with attrs.md rename to published/201905/20190503 Say goodbye to boilerplate in Python with attrs.md diff --git a/published/20190504 Add methods retroactively in Python with singledispatch.md b/published/201905/20190504 Add methods retroactively in Python with singledispatch.md similarity index 100% rename from published/20190504 Add methods retroactively in Python with singledispatch.md rename to published/201905/20190504 Add methods retroactively in Python with singledispatch.md diff --git a/published/20190504 Using the force at the Linux command line.md b/published/201905/20190504 Using the force at the Linux command line.md similarity index 100% rename from published/20190504 Using the force at the Linux command line.md rename to published/201905/20190504 Using the force at the Linux command line.md diff --git a/published/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md b/published/201905/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md similarity index 100% rename from published/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md rename to published/201905/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md 
diff --git a/published/20190505 How To Create SSH Alias In Linux.md b/published/201905/20190505 How To Create SSH Alias In Linux.md similarity index 100% rename from published/20190505 How To Create SSH Alias In Linux.md rename to published/201905/20190505 How To Create SSH Alias In Linux.md diff --git a/published/20190505 Kindd - A Graphical Frontend To dd Command.md b/published/201905/20190505 Kindd - A Graphical Frontend To dd Command.md similarity index 100% rename from published/20190505 Kindd - A Graphical Frontend To dd Command.md rename to published/201905/20190505 Kindd - A Graphical Frontend To dd Command.md diff --git a/published/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md b/published/201905/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md similarity index 100% rename from published/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md rename to published/201905/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md diff --git a/published/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md b/published/201905/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md similarity index 100% rename from published/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md rename to published/201905/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md diff --git a/published/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md b/published/201905/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md similarity index 100% rename from published/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md rename to published/201905/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md diff --git a/published/20190506 How to Add Application Shortcuts on Ubuntu Desktop.md b/published/201905/20190506 How to Add Application Shortcuts on Ubuntu Desktop.md similarity index 100% rename from published/20190506 How to Add Application Shortcuts on Ubuntu Desktop.md rename to published/201905/20190506 How to Add Application Shortcuts on Ubuntu Desktop.md diff --git a/published/20190508 How to use advanced rsync for large Linux backups.md b/published/201905/20190508 How to use advanced rsync for large Linux backups.md similarity index 100% rename from published/20190508 How to use advanced rsync for large Linux backups.md rename to published/201905/20190508 How to use advanced rsync for large Linux backups.md diff --git a/published/20190509 21 Best Kali Linux Tools for Hacking and Penetration Testing.md b/published/201905/20190509 21 Best Kali Linux Tools for Hacking and Penetration Testing.md similarity index 100% rename from published/20190509 21 Best Kali Linux Tools for Hacking and Penetration Testing.md rename to published/201905/20190509 21 Best Kali Linux Tools for Hacking and Penetration Testing.md diff --git a/published/20190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md b/published/201905/20190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md similarity index 100% rename from published/20190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md rename to published/201905/20190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md diff --git a/published/20190510 PHP in 2019.md b/published/201905/20190510 PHP in 2019.md similarity index 100% rename from 
published/20190510 PHP in 2019.md rename to published/201905/20190510 PHP in 2019.md diff --git a/published/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md b/published/201905/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md similarity index 100% rename from published/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md rename to published/201905/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md diff --git a/published/20190516 Building Smaller Container Images.md b/published/201905/20190516 Building Smaller Container Images.md similarity index 100% rename from published/20190516 Building Smaller Container Images.md rename to published/201905/20190516 Building Smaller Container Images.md diff --git a/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md b/published/201905/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md similarity index 78% rename from translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md rename to published/201905/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md index b51f375a48..5c51721c84 100644 --- a/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md +++ b/published/201905/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md @@ -1,27 +1,28 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10906-1.html) [#]: subject: (Querying 10 years of GitHub data with GHTorrent and Libraries.io) [#]: via: (https://opensource.com/article/19/5/chaossearch-github-ghtorrent) [#]: author: (Pete Cheslock https://opensource.com/users/petecheslock/users/ghaff/users/payalsingh/users/davidmstokes) 用 GHTorrent 和 Libraries.io 查询 10 年的 GitHub 数据 ====== -> 有一种方法可以在没有任何本地基础设施的情况下使用开源数据集探索 GitHub 数据. 
- -![magnifying glass on computer screen][1] -我一直在寻找新的数据集,以用它们来展示我团队工作的力量。[CHAOSSEARCH][2] 可以将你的 [Amazon S3][3] 对象存储数据转换为完全可搜索的 [Elasticsearch][4] 式集群。使用 Elasticsearch API 或 [Kibana][5] 等工具,你可以查询你找的任何数据。 +> 有一种方法可以在没有任何本地基础设施的情况下使用开源数据集探索 GitHub 数据。 + +![magnifying glass on computer screen](https://img.linux.net.cn/data/attachment/album/201905/27/220200jlzrlz333vkfl8ok.jpg) + +我一直在寻找新的数据集,以用它们来展示我们团队工作的力量。[CHAOSSEARCH][2] 可以将你的 [Amazon S3][3] 对象存储数据转换为完全可搜索的 [Elasticsearch][4] 式集群。使用 Elasticsearch API 或 [Kibana][5] 等工具,你可以查询你所要找的任何数据。 当我找到 [GHTorrent][6] 项目进行探索时,我很兴奋。GHTorrent 旨在通过 GitHub API 构建所有可用数据的离线版本。如果你喜欢数据集,这是一个值得一看的项目,甚至你可以考虑[捐赠一个 GitHub API 密钥][7]。 ### 访问 GHTorrent 数据 -有许多方法可以访问和使用 [GHTorrent 的数据][8],它以 [NDJSON][9] 格式提供。这个项目可以以多种形式提供数据,包括用于恢复到 [MySQL][11] 数据库的 [CSV][10],可以转储所有对象的 [MongoDB][12],以及用于将数据直接导出到 Google 对象存储中的 Google Big Query(免费)。 有一点需要注意:这个数据集有从 2008 年到 2017 年的几乎完整的数据集,但从 2017 年到现在的数据还不完整。这将影响我们确定性查询的能力,但它仍然是一个令人兴奋的信息量。 +有许多方法可以访问和使用 [GHTorrent 的数据][8],它以 [NDJSON][9] 格式提供。这个项目可以以多种形式提供数据,包括用于恢复到 [MySQL][11] 数据库的 [CSV][10],可以转储所有对象的 [MongoDB][12],以及用于将数据直接导出到 Google 对象存储中的 Google Big Query(免费)。 有一点需要注意:这个数据集有从 2008 年到 2017 年几乎完整的数据集,但从 2017 年到现在的数据还不完整。这将影响我们确定性查询的能力,但它仍然是一个令人兴奋的信息量。 -我选择 Google Big Query 来避免自己运行任何数据库,那么我就可以很快下载包括用户和项目在内的完整数据库。 CHAOSSEARCH 可以原生分析 NDJSON 格式,因此在将数据上传到 Amazon S3 之后,我能够在几分钟内对其进行索引。 CHAOSSEARCH 平台不要求用户设置索引模式或定义其数据的映射,它可以发现所有字段本身(字符串、整数等)。 +我选择 Google Big Query 来避免自己运行任何数据库,那么我就可以很快下载包括用户和项目在内的完整数据库。CHAOSSEARCH 可以原生分析 NDJSON 格式,因此在将数据上传到 Amazon S3 之后,我能够在几分钟内对其进行索引。CHAOSSEARCH 平台不要求用户设置索引模式或定义其数据的映射,它可以发现所有字段本身(字符串、整数等)。 随着我的数据完全索引并准备好进行搜索和聚合,我想深入了解看看我们可以发现什么,比如哪些软件语言是 GitHub 项目最受欢迎的。 @@ -49,11 +50,10 @@ ![The rate at which new projects are created on GitHub.][20] -既然我知道了创建的项目的速度以及用于创建这些项目的最流行的语言,我还想知道这些项目选择的开源许可证。遗憾的是,这个 GitHub 项目数据集中并不存在这些数据,但是 [Tidelift][21] 的精彩团队在 [Libraries.io][22] [数据][23] 里发布了一个 GitHub 项目的详细列表,包括使用的许可证以及其中有关开源软件状态的其他详细信息。将此数据集导入 CHAOSSEARCH 只花了几分钟,让我看看哪些开源软件许可证在 GitHub 上最受欢迎: +既然我知道了创建项目的速度以及用于创建这些项目的最流行的语言,我还想知道这些项目选择的开源许可证。遗憾的是,这个 GitHub 项目数据集中并不存在这些数据,但是 [Tidelift][21] 的精彩团队在 [Libraries.io][22] [数据][23] 里发布了一个 GitHub 项目的详细列表,包括使用的许可证以及其中有关开源软件状态的其他详细信息。将此数据集导入 CHAOSSEARCH 只花了几分钟,让我看看哪些开源软件许可证在 GitHub 上最受欢迎: (提醒:这是为了便于阅读而合并的有效 JSON。) - ``` {"aggs":{"2":{"terms":{"field":"Repository License","size":10,"order":{"_count":"desc"}}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["Created Timestamp","Last synced Timestamp","Latest Release Publish Timestamp","Updated Timestamp"],"query":{"bool":{"must":[],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"Repository License":{"query":""}}}]}}} ``` @@ -62,9 +62,9 @@ ![Which open source software licenses are the most popular on GitHub.][24] -如你所见,[MIT 许可证][25] 和 [Apache 2.0 许可证][26] 的开源项目远远超过了其他大多数开源许可证,而 [各种 BSD 和 GPL 许可证][27] 则差得很远。鉴于 GitHub 的开放模式,我不能说我对这些结果感到惊讶。我猜想用户(而不是公司)创建了大多数项目,并且他们使用 MIT 许可证可以使其他人轻松使用、共享和贡献。而鉴于有不少公司希望确保其商标得到尊重并为其业务提供开源组件,那么 Apache 2.0 许可证数量高企的背后也是有道理的。 +如你所见,[MIT 许可证][25] 和 [Apache 2.0 许可证][26] 的开源项目远远超过了其他大多数开源许可证,而 [各种 BSD 和 GPL 许可证][27] 则差得很远。鉴于 GitHub 的开放模式,我不能说我对这些结果感到惊讶。我猜想是用户(而不是公司)创建了大多数项目,并且他们使用 MIT 许可证可以使其他人轻松地使用、共享和贡献。而鉴于有不少公司希望确保其商标得到尊重并为其业务提供开源组件,那么 Apache 2.0 许可证数量高企的背后也是有道理的。 -现在我确定了最受欢迎的许可证,我很想看看到最少使用的许可证。通过调整我的上一个查询,我将前 10 名逆转为最后 10 名,并且只找到了两个使用 [伊利诺伊大学 - NCSA 开源许可证][28] 的项目。我之前从未听说过这个许可证,但它与 Apache 2.0 非常接近。看到了所有 GitHub 项目中使用了多少个不同的软件许可证,这很有意思。 +现在我确定了最受欢迎的许可证,我很想看看最少使用的许可证。通过调整我的上一个查询,我将前 10 名逆转为最后 10 名,并且只找到了两个使用 [伊利诺伊大学 - NCSA 开源许可证][28] 的项目。我之前从未听说过这个许可证,但它与 
Apache 2.0 非常接近。看到所有 GitHub 项目中使用了多少个不同的软件许可证,这很有意思。 ![The University of Illinois/NCSA open source license.][29] @@ -78,7 +78,7 @@ ![The most popular open source licenses used for GitHub JavaScript projects.][30] -尽管使用 `npm init` 创建的 [NPM][31] 模块的默认许可证是 [Internet Systems Consortium(ISC)][32] 的许可证,但你可以看到相当多的这些项目使用 MIT 以及 Apache 2.0 的开源许可证。 +尽管使用 `npm init` 创建的 [NPM][31] 模块的默认许可证是来自 [Internet Systems Consortium(ISC)][32] 的许可证,但你可以看到相当多的这些项目使用 MIT 以及 Apache 2.0 的开源许可证。 由于 Libraries.io 数据集中包含丰富的开源项目内容,并且由于 GHTorrent 数据缺少最近几年的数据(因此缺少有关 Golang 项目的任何细节),因此我决定运行类似的查询来查看 Golang 项目是如何许可他们的代码的。 @@ -94,7 +94,7 @@ Golang 项目与 JavaScript 项目惊人逆转 —— 使用 Apache 2.0 的 Golang 项目几乎是 MIT 许可证的三倍。虽然很难准确地解释为什么会出现这种情况,但在过去的几年中,Golang 已经出现了大规模的增长,特别是在开源和商业化的项目和软件产品公司中。 -正如我们上面所了解的,这些公司中的许多公司都希望强制执行其商标,因此转向 Apache 2.0 许可证是有道理的。 +正如我们上面所了解的,这些公司中的许多公司都希望强制执行其商标策略,因此转向 Apache 2.0 许可证是有道理的。 #### 总结 @@ -104,7 +104,7 @@ Golang 项目与 JavaScript 项目惊人逆转 —— 使用 Apache 2.0 的 Gola 你对数据提出了哪些其他问题,以及你使用了哪些数据集?请在评论或推特上告诉我 [@petecheslock] [34]。 -本文的一个版本最初发布在 [CHAOSSEARCH][35]。 +本文的一个版本最初发布在 [CHAOSSEARCH][35],有更多结果可供发现。 -------------------------------------------------------------------------------- @@ -113,7 +113,7 @@ via: https://opensource.com/article/19/5/chaossearch-github-ghtorrent 作者:[Pete Cheslock][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md b/published/201905/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md similarity index 100% rename from published/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md rename to published/201905/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md diff --git a/published/20190520 PiShrink - Make Raspberry Pi Images Smaller.md b/published/201905/20190520 PiShrink - Make Raspberry Pi Images Smaller.md similarity index 100% rename from published/20190520 PiShrink - Make Raspberry Pi Images Smaller.md rename to published/201905/20190520 PiShrink - Make Raspberry Pi Images Smaller.md diff --git a/published/20190520 xsos - A Tool To Read SOSReport In Linux.md b/published/201905/20190520 xsos - A Tool To Read SOSReport In Linux.md similarity index 100% rename from published/20190520 xsos - A Tool To Read SOSReport In Linux.md rename to published/201905/20190520 xsos - A Tool To Read SOSReport In Linux.md diff --git a/translated/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md b/published/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md similarity index 74% rename from translated/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md rename to published/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md index b825435dcb..4960672f5a 100644 --- a/translated/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md +++ b/published/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (way-ww) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10923-1.html) [#]: subject: (How To Install/Uninstall Listed Packages From A File In Linux?) 
[#]: via: (https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-file-in-linux/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) @@ -10,25 +10,17 @@ 如何在 Linux 上安装/卸载一个文件中列出的软件包? ====== -在某些情况下,你可能想要将一个服务器上的软件包列表安装到另一个服务器上。 +在某些情况下,你可能想要将一个服务器上的软件包列表安装到另一个服务器上。例如,你已经在服务器 A 上安装了 15 个软件包并且这些软件包也需要被安装到服务器 B、服务器 C 上等等。 -例如,你已经在服务器A 上安装了 15 个软件包并且这些软件包也需要被安装到服务器B,服务器C 上等等。 +我们可以手动去安装这些软件但是这将花费大量的时间。你可以手动安装一俩个服务器,但是试想如果你有大概十个服务器呢。在这种情况下你无法手动完成工作,那么怎样才能解决问题呢? -我们可以手动去安装这些软件但是这将花费大量的时间。 - -你可以手动安装一俩个服务器,但是试想如果你有大概十个服务器呢。 - -在这种情况下你无法手动完成工作,那么怎样才能解决问题呢? - -不要担心我们可以帮你摆脱这样的情况和场景。 - -我们在这篇文章中增加了四种方法来克服困难。 +不要担心我们可以帮你摆脱这样的情况和场景。我们在这篇文章中增加了四种方法来克服困难。 我希望这可以帮你解决问题。我已经在 Centos7 和 Ubuntu 18.04 上测试了这些命令。 我也希望这可以在其他发行版上工作。这仅仅需要使用该发行版的官方包管理器命令替代本文中的包管理器命令就行了。 -如果想要 **[检查 Linux 系统上已安装的软件包列表][1]** 请点击链接。 +如果想要 [检查 Linux 系统上已安装的软件包列表][1],请点击链接。 例如,如果你想要在基于 RHEL 系统上创建软件包列表请使用以下步骤。其他发行版也一样。 @@ -53,11 +45,9 @@ apr-util-1.5.2-6.el7.x86_64 apr-1.4.8-3.el7_4.1.x86_64 ``` -### 方法一 : 如何在 Linux 上使用 cat 命令安装文件中列出的包? +### 方法一:如何在 Linux 上使用 cat 命令安装文件中列出的包? -为实现这个目标,我将使用简单明了的第一种方法。 - -为此,创建一个文件并添加上你想要安装的包列表。 +为实现这个目标,我将使用简单明了的第一种方法。为此,创建一个文件并添加上你想要安装的包列表。 出于测试的目的,我们将只添加以下的三个软件包名到文件中。 @@ -69,7 +59,7 @@ mariadb-server nano ``` -只要简单的运行 **[apt 命令][2]** 就能在 Ubuntu/Debian 系统上一次性安装所有的软件包。 +只要简单的运行 [apt 命令][2] 就能在 Ubuntu/Debian 系统上一次性安装所有的软件包。 ``` # apt -y install $(cat /tmp/pack1.txt) @@ -138,20 +128,19 @@ Processing triggers for install-info (6.5.0.dfsg.1-2) ... Processing triggers for man-db (2.8.3-2ubuntu0.1) ... ``` -使用 **[yum 命令][3]** 在基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。 - +使用 [yum 命令][3] 在基于 RHEL (如 Centos、RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。 ``` # yum -y install $(cat /tmp/pack1.txt) ``` -使用以命令在基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。 +使用以命令在基于 RHEL (如 Centos、RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。 ``` # yum -y remove $(cat /tmp/pack1.txt) ``` -使用以下 **[dnf 命令][4]** 在 Fedora 系统上安装文件中列出的软件包。 +使用以下 [dnf 命令][4] 在 Fedora 系统上安装文件中列出的软件包。 ``` # dnf -y install $(cat /tmp/pack1.txt) @@ -163,7 +152,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # dnf -y remove $(cat /tmp/pack1.txt) ``` -使用以下 **[zypper 命令][5]** 在 openSUSE 系统上安装文件中列出的软件包。 +使用以下 [zypper 命令][5] 在 openSUSE 系统上安装文件中列出的软件包。 ``` # zypper -y install $(cat /tmp/pack1.txt) @@ -175,7 +164,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # zypper -y remove $(cat /tmp/pack1.txt) ``` -使用以下 **[pacman 命令][6]** 在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。 +使用以下 [pacman 命令][6] 在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。 ``` # pacman -S $(cat /tmp/pack1.txt) @@ -188,36 +177,35 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... 
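# 说明:-Rs 在删除列出的软件包的同时,也会一并删除不再被其他软件包需要的依赖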
# pacman -Rs $(cat /tmp/pack1.txt) ``` -### 方法二 : 如何使用 cat 和 xargs 命令在 Linux 中安装文件中列出的软件包。 +### 方法二:如何使用 cat 和 xargs 命令在 Linux 中安装文件中列出的软件包。 甚至,我更喜欢使用这种方法,因为这是一种非常简单直接的方法。 -使用以下 apt 命令在基于 Debian 的系统 (如 Debian,Ubuntu和Linux Mint) 上安装文件中列出的软件包。 - +使用以下 `apt` 命令在基于 Debian 的系统 (如 Debian、Ubuntu 和 Linux Mint) 上安装文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs apt -y install ``` -使用以下 apt 命令 从基于 Debian 的系统 (如 Debian,Ubuntu和Linux Mint) 上卸载文件中列出的软件包。 +使用以下 `apt` 命令 从基于 Debian 的系统 (如 Debian、Ubuntu 和 Linux Mint) 上卸载文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs apt -y remove ``` -使用以下 yum 命令在基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。 +使用以下 `yum` 命令在基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs yum -y install ``` -使用以命令从基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。 +使用以命令从基于 RHEL (如 Centos、RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs yum -y remove ``` -使用以下 dnf 命令在 Fedora 系统上安装文件中列出的软件包。 +使用以下 `dnf` 命令在 Fedora 系统上安装文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs dnf -y install @@ -229,7 +217,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # cat /tmp/pack1.txt | xargs dnf -y remove ``` -使用以下 zypper 命令在 openSUSE 系统上安装文件中列出的软件包。 +使用以下 `zypper` 命令在 openSUSE 系统上安装文件中列出的软件包。 ``` @@ -242,7 +230,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # cat /tmp/pack1.txt | xargs zypper -y remove ``` -使用以下 pacman 命令在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。 +使用以下 `pacman` 命令在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs pacman -S @@ -254,17 +242,17 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # cat /tmp/pack1.txt | xargs pacman -Rs ``` -### 方法三 : 如何使用 For Loop 在 Linux 上安装文件中列出的软件包? 
-我们也可以使用 For 循环命令来实现此目的。
+### 方法三:如何使用 For 循环在 Linux 上安装文件中列出的软件包

-安装批量包可以使用以下一条 For 循环的命令。
+我们也可以使用 `for` 循环命令来实现此目的。
+
+安装批量包可以使用以下一条 `for` 循环的命令。

```
# for pack in `cat /tmp/pack1.txt` ; do apt -y install $pack; done
```

-要使用 shell 脚本安装批量包,请使用以下 For 循环。
-
+要使用 shell 脚本安装批量包,请使用以下 `for` 循环。

```
# vi /opt/scripts/bulk-package-install.sh

@@ -275,7 +263,7 @@ do apt -y remove $pack
done
```
-为 bulk-package-install.sh 设置可执行权限。
+为 `bulk-package-install.sh` 设置可执行权限。

```
# chmod +x bulk-package-install.sh
@@ -287,17 +275,17 @@ done
# sh bulk-package-install.sh
```

-### 方法四 : 如何使用 While 循环在 Linux 上安装文件中列出的软件包。
+### 方法四:如何使用 While 循环在 Linux 上安装文件中列出的软件包

-我们也可以使用 While 循环命令来实现目的。
+我们也可以使用 `while` 循环命令来实现目的。

-安装批量包可以使用以下一条 While 循环的命令。
+安装批量包可以使用以下一条 `while` 循环的命令。

```
# file="/tmp/pack1.txt"; while read -r pack; do apt -y install $pack; done < "$file"
```

-要使用 shell 脚本安装批量包,请使用以下 While 循环。
+要使用 shell 脚本安装批量包,请使用以下 `while` 循环。

```
# vi /opt/scripts/bulk-package-install.sh

@@ -309,7 +297,7 @@ do apt -y remove $pack
done < "$file"
```
-为 bulk-package-install.sh 设置可执行权限。
+为 `bulk-package-install.sh` 设置可执行权限。

```
# chmod +x bulk-package-install.sh
@@ -328,13 +316,13 @@ via: https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-fi

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[way-ww](https://github.com/way-ww)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/check-installed-packages-in-rhel-centos-fedora-debian-ubuntu-opensuse-arch-linux/
+[1]: https://linux.cn/article-10116-1.html
[2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
diff --git a/translated/tech/20190520 Getting Started With Docker.md b/published/20190520 Getting Started With Docker.md
similarity index 52%
rename from translated/tech/20190520 Getting Started With Docker.md
rename to published/20190520 Getting Started With Docker.md
index f745962bdd..2349664ad9 100644
--- a/translated/tech/20190520 Getting Started With Docker.md
+++ b/published/20190520 Getting Started With Docker.md
@@ -1,98 +1,98 @@
[#]: collector: "lujun9972"
[#]: translator: "zhang5788"
-[#]: reviewer: " "
-[#]: publisher: " "
-[#]: url: " "
-[#]: subject: "Getting Started With Docker"
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-10940-1.html"
+[#]: subject: "Getting Started With Docker"
[#]: via: "https://www.ostechnix.com/getting-started-with-docker/"
-[#]: author: "sk https://www.ostechnix.com/author/sk/"
+[#]: author: "sk https://www.ostechnix.com/author/sk/"

Docker 入门指南
======

![Getting Started With Docker][1]

-在我们的上一个教程中,我们已经了解[**如何在ubuntu上安装Docker**][1],和如何在[**CentOS上安装Docker**][2]。今天,我们将会了解Docker的一些基础用法。该教程包含了如何创建一个新的docker容器,如何运行该容器,如何从现有的docker容器中创建自己的Docker镜像等Docker 的一些基础知识,操作。所有步骤均在Ubuntu 18.04 LTS server 版本下测试通过。
+在我们的上一个教程中,我们已经了解[如何在 Ubuntu 上安装 Docker][1],和如何在 [CentOS 上安装 Docker][2]。今天,我们将会了解 Docker 的一些基础用法。该教程包含了如何创建一个新的 Docker 容器,如何运行该容器,如何从现有的 Docker 容器中创建自己的 Docker 镜像等 Docker 的一些基础知识、操作。所有步骤均在 Ubuntu 18.04 LTS server 版本下测试通过。

### 入门指南
-在开始指南之前,不要混淆Docker镜像和Docker容器这两个概念。在之前的教程中,我就解释过,Docker镜像是决定Docker容器行为的一个文件,Docker容器则是Docker镜像的运行态或停止态。(译者注:在`macOS`下使用docker终端时,不需要加`sudo`) +在开始指南之前,不要混淆 Docker 镜像和 Docker 容器这两个概念。在之前的教程中,我就解释过,Docker 镜像是决定 Docker 容器行为的一个文件,Docker 容器则是 Docker 镜像的运行态或停止态。(LCTT 译注:在 macOS 下使用 Docker 终端时,不需要加 `sudo`) -##### 1. 搜索Docker镜像 +#### 1、搜索 Docker 镜像 -我们可以从Docker的仓库中获取镜像,例如[**Docker hub**][3], 或者自己创建镜像。这里解释一下,`Docker hub`是一个云服务器,用来提供给Docker的用户们,创建,测试,和保存他们的镜像。 +我们可以从 Docker 仓库中获取镜像,例如 [Docker hub][3],或者自己创建镜像。这里解释一下,Docker hub 是一个云服务器,用来提供给 Docker 的用户们创建、测试,和保存他们的镜像。 -`Docker hub`拥有成千上万个Docker 的镜像文件。你可以在这里搜索任何你想要的镜像,通过`docker search`命令。 +Docker hub 拥有成千上万个 Docker 镜像文件。你可以通过 `docker search`命令在这里搜索任何你想要的镜像。 -例如,搜索一个基于ubuntu的镜像文件,只需要运行: +例如,搜索一个基于 Ubuntu 的镜像文件,只需要运行: ```shell $ sudo docker search ubuntu ``` -**Sample output:** +示例输出: ![][5] -搜索基于CentOS的镜像,运行: +搜索基于 CentOS 的镜像,运行: ```shell -$ sudo docker search ubuntu +$ sudo docker search centos ``` -搜索AWS的镜像,运行: +搜索 AWS 的镜像,运行: ```shell $ sudo docker search aws ``` -搜索`wordpress`的镜像: +搜索 WordPress 的镜像: ```shell $ sudo docker search wordpress ``` -`Docker hub`拥有几乎所有种类的镜像,包含操作系统,程序和其他任意的类型,这些你都能在`docker hub`上找到已经构建完的镜像。如果你在搜索时,无法找到你想要的镜像文件,你也可以自己构建一个,将其发布出去,或者仅供你自己使用。 +Docker hub 拥有几乎所有种类的镜像,包含操作系统、程序和其他任意的类型,这些你都能在 Docker hub 上找到已经构建完的镜像。如果你在搜索时,无法找到你想要的镜像文件,你也可以自己构建一个,将其发布出去,或者仅供你自己使用。 -##### 2. 下载Docker 镜像 +#### 2、下载 Docker 镜像 -下载`ubuntu`的镜像,你需要在终端运行以下命令: +下载 Ubuntu 的镜像,你需要在终端运行以下命令: ```shell $ sudo docker pull ubuntu ``` -这条命令将会从**Docker hub**下载最近一个版本的ubuntu镜像文件。 +这条命令将会从 Docker hub 下载最近一个版本的 Ubuntu 镜像文件。 -**Sample output:** +示例输出: -> ```shell -> Using default tag: latest -> latest: Pulling from library/ubuntu -> 6abc03819f3e: Pull complete -> 05731e63f211: Pull complete -> 0bd67c50d6be: Pull complete -> Digest: sha256:f08638ec7ddc90065187e7eabdfac3c96e5ff0f6b2f1762cf31a4f49b53000a5 -> Status: Downloaded newer image for ubuntu:latest -> ``` +``` +Using default tag: latest +latest: Pulling from library/ubuntu +6abc03819f3e: Pull complete +05731e63f211: Pull complete +0bd67c50d6be: Pull complete +Digest: sha256:f08638ec7ddc90065187e7eabdfac3c96e5ff0f6b2f1762cf31a4f49b53000a5 +Status: Downloaded newer image for ubuntu:latest +``` -![下载docker 镜像][6] +![下载 Docker 镜像][6] -你也可以下载指定版本的ubuntu镜像。运行以下命令: +你也可以下载指定版本的 Ubuntu 镜像。运行以下命令: ```shell $ docker pull ubuntu:18.04 ``` -Dokcer允许在任意的宿主机操作系统下,下载任意的镜像文件,并运行。 +Docker 允许在任意的宿主机操作系统下,下载任意的镜像文件,并运行。 -例如,下载CentOS镜像: +例如,下载 CentOS 镜像: ```shell $ sudo docker pull centos ``` -所有下载的镜像文件,都被保存在`/var/lib/docker`文件夹下。(译者注:不同操作系统存放的文件夹并不是一致的,具体存放位置请在官方查询) +所有下载的镜像文件,都被保存在 `/var/lib/docker` 文件夹下。(LCTT 译注:不同操作系统存放的文件夹并不是一致的,具体存放位置请在官方查询) 查看已经下载的镜像列表,可以使用以下命令: @@ -100,7 +100,7 @@ $ sudo docker pull centos $ sudo docker images ``` -**输出为:** +示例输出: ```shell REPOSITORY TAG IMAGE ID CREATED SIZE @@ -109,17 +109,17 @@ centos latest 9f38484d220f 2 months ago hello-world latest fce289e99eb9 4 months ago 1.84kB ``` -正如你看到的那样,我已经下载了三个镜像文件:**ubuntu**, **CentOS**和**Hello-world**. +正如你看到的那样,我已经下载了三个镜像文件:`ubuntu`、`centos` 和 `hello-world`。 现在,让我们继续,来看一下如何运行我们刚刚下载的镜像。 -##### 3. 
运行Docker镜像 +#### 3、运行 Docker 镜像 -运行一个容器有两种方法。我们可以使用`TAG`或者是`镜像ID`。`TAG`指的是特定的镜像快照。`镜像ID`是指镜像的唯一标识。 +运行一个容器有两种方法。我们可以使用标签或者是镜像 ID。标签指的是特定的镜像快照。镜像 ID 是指镜像的唯一标识。 -正如上面结果中显示,`latest`是所有镜像的一个标签。**7698f282e524**是Ubuntu docker 镜像的`镜像ID`,**9f38484d220f**是CentOS镜像的`镜像ID`,**fce289e99eb9**是hello_world镜像的`镜像ID`。 +正如上面结果中显示,`latest` 是所有镜像的一个标签。`7698f282e524` 是 Ubuntu Docker 镜像的镜像 ID,`9f38484d220f`是 CentOS 镜像的镜像 ID,`fce289e99eb9` 是 hello_world 镜像的 镜像 ID。 -下载完Docker镜像之后,你可以通过下面的命令来使用`TAG`的方式启动: +下载完 Docker 镜像之后,你可以通过下面的命令来使用其标签来启动: ```shell $ sudo docker run -t -i ubuntu:latest /bin/bash @@ -127,12 +127,12 @@ $ sudo docker run -t -i ubuntu:latest /bin/bash 在这条语句中: -* **-t**: 在该容器中启动一个新的终端 -* **-i**: 通过容器中的标准输入流建立交互式连接 -* **ubuntu:latest**:带有标签`latest`的ubuntu容器 -* **/bin/bash** : 在新的容器中启动`BASH Shell` +* `-t`:在该容器中启动一个新的终端 +* `-i`:通过容器中的标准输入流建立交互式连接 +* `ubuntu:latest`:带有标签 `latest` 的 Ubuntu 容器 +* `/bin/bash`:在新的容器中启动 BASH Shell -或者,你可以通过`镜像ID`来启动新的容器: +或者,你可以通过镜像 ID 来启动新的容器: ```shell $ sudo docker run -t -i 7698f282e524 /bin/bash @@ -140,15 +140,15 @@ $ sudo docker run -t -i 7698f282e524 /bin/bash 在这条语句里: -* **7698f282e524** —`镜像ID` +* `7698f282e524` — 镜像 ID -在启动容器之后,将会自动进入容器的`shell`中(注意看命令行的提示符)。 +在启动容器之后,将会自动进入容器的 shell 中(注意看命令行的提示符)。 ![][7] -Docker 容器的`Shell` +*Docker 容器的 Shell* -如果想要退回到宿主机的终端(在这个例子中,对我来说,就是退回到18.04 LTS),并且不中断该容器的执行,你可以按下`CTRL+P `,再按下`CTRL+Q`。现在,你就安全的返回到了你的宿主机系统中。需要注意的是,docker 容器仍然在后台运行,我们并没有中断它。 +如果想要退回到宿主机的终端(在这个例子中,对我来说,就是退回到 18.04 LTS),并且不中断该容器的执行,你可以按下 `CTRL+P`,再按下 `CTRL+Q`。现在,你就安全的返回到了你的宿主机系统中。需要注意的是,Docker 容器仍然在后台运行,我们并没有中断它。 可以通过下面的命令来查看正在运行的容器: @@ -156,7 +156,7 @@ Docker 容器的`Shell` $ sudo docker ps ``` -**Sample output:** +示例输出: ```shell CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES @@ -165,14 +165,14 @@ CONTAINER ID IMAGE COMMAND CREATED ![][8] -列出正在运行的容器 +*列出正在运行的容器* -可以看到: +可以看到: -* **32fc32ad0d54** – `容器 ID` -* **ubuntu:latest** – Docker 镜像 +* `32fc32ad0d54` – 容器 ID +* `ubuntu:latest` – Docker 镜像 -需要注意的是,**`容器ID`和Docker `镜像ID`是不同的** +需要注意的是,容器 ID 和 Docker 的镜像 ID是不同的。 可以通过以下命令查看所有正在运行和停止运行的容器: @@ -192,13 +192,13 @@ $ sudo docker stop $ sudo docker stop 32fc32ad0d54 ``` -如果想要进入正在运行的容器中,你只需要运行 +如果想要进入正在运行的容器中,你只需要运行: ```shell $ sudo docker attach 32fc32ad0d54 ``` -正如你看到的,**32fc32ad0d54**是一个容器的ID。当你在容器中想要退出时,只需要在容器内的终端中输入命令: +正如你看到的,`32fc32ad0d54` 是一个容器的 ID。当你在容器中想要退出时,只需要在容器内的终端中输入命令: ```shell # exit @@ -210,46 +210,44 @@ $ sudo docker attach 32fc32ad0d54 $ sudo docker ps ``` -##### 4. 
构建自己的Docker镜像
+#### 4、构建自己的 Docker 镜像

-Docker不仅仅可以下载运行在线的容器,你也可以创建你的自己的容器。
+Docker 不仅仅可以下载运行在线的容器,你也可以创建你自己的容器。

-想要创建自己的Docker镜像,你需要先运行一个你已经下载完的容器:
+想要创建自己的 Docker 镜像,你需要先运行一个你已经下载完的容器:

```shell
$ sudo docker run -t -i ubuntu:latest /bin/bash
```

-现在,你运行了一个容器,并且进入了该容器。
+现在,你运行了一个容器,并且进入了该容器。然后,在该容器安装任意一个软件或做任何你想做的事情。

-然后,在该容器安装任意一个软件或做任何你想做的事情。
+例如,我们在容器中安装一个 Apache web 服务器。

-例如,我们在容器中安装一个**Apache web 服务器**。
-
-当你完成所有的操作,安装完所有的软件之后,你可以执行以下的命令来构建你自己的Docker镜像:
+在容器中运行以下命令来安装它:

```shell
# apt update
# apt install apache2
```

-同样的,安装和测试所有的你想要安装的软件在容器中。
+同样的,在容器中安装和测试你想要安装的所有软件。

-当你安装完毕之后,返回的宿主机的终端。记住,不要关闭容器。想要返回到宿主机的host而不中断容器。请按下CTRL+P ,再按下CTRL+Q。
+当你安装完毕之后,返回到宿主机的终端。记住,不要关闭容器。想要返回到宿主机而不中断容器,请按下 `CTRL+P`,再按下 `CTRL+Q`。

-从你的宿主机的终端中,运行以下命令如寻找容器的ID:
+从你的宿主机的终端中,运行以下命令来寻找容器的 ID:

```shell
$ sudo docker ps
```

-最后,从一个正在运行的容器中创建Docker镜像:
+最后,从一个正在运行的容器中创建 Docker 镜像:

```shell
$ sudo docker commit 3d24b3de0bfc ostechnix/ubuntu_apache
```

-**输出为:**
+示例输出:

```shell
sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962
```

在这里:

-* **3d24b3de0bfc** — 指ubuntu容器的ID。
-* **ostechnix** — 我们创建的的名称
-* **ubuntu_apache** — 我们创建的镜像
+* `3d24b3de0bfc` — 指 Ubuntu 容器的 ID。
+* `ostechnix` — 我们创建的容器的用户名称
+* `ubuntu_apache` — 我们创建的镜像

-让我们检查一下我们新创建的docker镜像
+让我们检查一下我们新创建的 Docker 镜像:

```shell
$ sudo docker images
```

-**输出为:**
+示例输出:

```shell
REPOSITORY                TAG     IMAGE ID      CREATED             SIZE
ostechnix/ubuntu_apache   latest  ce5aa74a48f1  About a minute ago  191MB
ubuntu                    latest  7698f282e524  2 weeks ago         69.9MB
centos                    latest  9f38484d220f  2 months ago        202MB
hello-world               latest  fce289e99eb9  4 months ago        1.84kB
```

![][9]

-列出所有的docker镜像
+*列出所有的 Docker 镜像*

正如你看到的,这个新的镜像就是我们刚刚在本地系统上从运行的容器上创建的。

```shell
$ sudo docker run -t -i ostechnix/ubuntu_apache /bin/bash
```

-##### 5. 移除容器
+#### 5、删除容器

-如果你在docker上的工作已经全部完成,你就可以删除哪些你不需要的容器。
+如果你在 Docker 上的工作已经全部完成,你就可以删除那些你不需要的容器。

想要删除一个容器,首先,你需要停止该容器。

```shell
$ sudo docker ps
```

-**输出为:**
+示例输出:

```shell
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d24b3de0bfc ubuntu:latest "/bin/bash" 28 minutes ago Up 28 minutes goofy_easley
```

-使用`容器ID`来停止该容器:
+使用容器 ID 来停止该容器:

```shell
$ sudo docker stop 3d24b3de0bfc
```

```shell
$ sudo docker rm 3d24b3de0bfc
```

```shell
$ sudo docker container prune
```

-按下"**Y**",来确认你的操作
+按下 `Y`,来确认你的操作:

```shell
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:

Total reclaimed space: 5B
```

-这个命令仅支持最新的docker。(译者注:仅支持1.25及以上版本的Docker)
+这个命令仅支持最新的 Docker。(LCTT 译注:仅支持 1.25 及以上版本的 Docker)
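+
+对于不支持 `container prune` 的旧版本 Docker,一个大致等价的做法(这里只是一种示意性的写法)是把 `docker ps` 过滤出的容器 ID 传给 `docker rm`:
+
+```shell
+$ sudo docker rm $(sudo docker ps -a -q -f status=exited)
+```
+
+其中 `-q` 让 `docker ps` 只输出容器 ID,`-f status=exited` 则只筛选出已经停止的容器。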
-##### 6. 删除Docker镜像
+#### 6、删除 Docker 镜像

-当你移除完不要的Docker容器后,你也可以删除你不需要的Docker镜像。
+当你删除了不要的 Docker 容器后,你也可以删除你不需要的 Docker 镜像。

列出已经下载的镜像:

```shell
$ sudo docker images
```

-**输出为:**
+示例输出:

```shell
REPOSITORY                TAG     IMAGE ID      CREATED             SIZE
ostechnix/ubuntu_apache   latest  ce5aa74a48f1  About a minute ago  191MB
ubuntu                    latest  7698f282e524  2 weeks ago         69.9MB
centos                    latest  9f38484d220f  2 months ago        202MB
hello-world               latest  fce289e99eb9  4 months ago        1.84kB
```

由上面的命令可以知道,在本地的系统中存在四个镜像。

-使用`镜像ID`来删除镜像。
+使用镜像 ID 来删除镜像。

```shell
$ sudo docker rmi ce5aa74a48f1
```

-**输出为:**
+示例输出:

```shell
Untagged: ostechnix/ubuntu_apache:latest
Deleted: sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962
Deleted: sha256:d21c926f11a64b811dc75391bbe0191b50b8fe142419f7616b3cee70229f14cd
```

-**解决问题**
+#### 解决问题

-Docker禁止我们删除一个还在被容器使用的镜像。
+Docker 禁止我们删除一个还在被容器使用的镜像。

-例如,当我试图删除Docker镜像**b72889fa879c**时,我只能获得一个错误提示:
+例如,当我试图删除 Docker 镜像 `b72889fa879c` 时,我只能获得一个错误提示:

```shell
Error response from daemon: conflict: unable to delete b72889fa879c (must be forced) - image is being used by stopped container dde4dd285377
```

-这是因为这个Docker镜像正在被一个容器使用。
+这是因为这个 Docker 镜像正在被一个容器使用。

所以,我们来检查一个正在运行的容器:

```shell
$ sudo docker ps
```

-**输出为:**
+示例输出:

![][10]

注意,现在并没有正在运行的容器!!!

-查看一下所有的容器(包含所有的正在运行和已经停止的容器):
+查看一下所有的容器(包含所有的正在运行和已经停止的容器):

```shell
$ sudo docker ps -a
```

-**输出为:**
+示例输出:

![][11]

```shell
$ sudo docker rm 12e892156219
```

-我们仍然使用容器ID来删除这些容器。
+我们仍然使用容器 ID 来删除这些容器。

-当我们删除了所有使用该镜像的容器之后,我们就可以删除Docker的镜像了。
+当我们删除了所有使用该镜像的容器之后,我们就可以删除 Docker 的镜像了。

例如:

@@ -438,19 +436,7 @@ $ sudo docker images

想要知道更多的细节,请参阅本指南末尾给出的官方资源的链接或者在评论区进行留言。

-或者,下载以下的关于Docker的电子书来了解更多。
-
-* **Download** – [**Free eBook: "Docker Containerization Cookbook"**][12]
-
-* **Download** – [**Free Guide: "Understanding Docker"**][13]
-
-* **Download** – [**Free Guide: "What is Docker and Why is it So Popular?"**][14]
-
-* **Download** – [**Free Guide: "Introduction to Docker"**][15]
-
-* **Download** – [**Free Guide: "Docker in Production"**][16]
-
-这就是全部的教程了,希望你可以了解Docker的一些基础用法。
+这就是全部的教程了,希望你可以了解 Docker 的一些基础用法。

更多的教程马上就会到来,敬请关注。

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/getting-started-with-docker/

作者:[sk][a]
选题:[lujun9972][b]
译者:[zhang5788](https://github.com/zhang5788)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@@ -482,4 +468,4 @@ via: https://www.ostechnix.com/getting-started-with-docker/
[13]: https://ostechnix.tradepub.com/free/w_pacb32/prgm.cgi?a=1
[14]: https://ostechnix.tradepub.com/free/w_pacb31/prgm.cgi?a=1
[15]: https://ostechnix.tradepub.com/free/w_pacb29/prgm.cgi?a=1
-[16]: https://ostechnix.tradepub.com/free/w_pacb28/prgm.cgi?a=1 \ No newline at end of file
+[16]: https://ostechnix.tradepub.com/free/w_pacb28/prgm.cgi?a=1
diff --git a/published/20190520 Zettlr - Markdown Editor for Writers and Researchers.md b/published/20190520 Zettlr - Markdown Editor for Writers and Researchers.md
new file mode 100644
index 0000000000..487c0f0fe3
--- /dev/null
+++ b/published/20190520 Zettlr - Markdown Editor for Writers and Researchers.md
@@ -0,0 +1,109 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10922-1.html)
+[#]: subject: (Zettlr – Markdown Editor for Writers and Researchers)
+[#]: via: (https://itsfoss.com/zettlr-markdown-editor/)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+Zettlr:适合写作者和研究人员的 Markdown 编辑器 +====== + +有很多[适用于 Linux 的 Markdown 编辑器][1],并且还在继续增加。问题是,像 [Boostnote][2] 一样,大多数是为编码人员设计的,可能不会受到非技术人员的欢迎。让我们看一个想要替代 Word 和昂贵的文字处理器,适用于非技术人员的 Markdown 编辑器。我们来看看 Zettlr 吧。 + +### Zettlr Markdown 编辑器 + +![Zettlr Light Mode][3] + +我可能在网站上提到过一两次,我更喜欢用 [Markdown][4] 写下我的所有文档。它易于学习,不会让你受困于专有文档格式。我还在我的[适合作者的开源工具列表][5]中提到了 Markdown 编辑器。 + +我用过许多 Markdown 编辑器,但是我一直有兴趣尝试新的。最近,我遇到了 Zettlr,一个开源 Markdown 编辑器。 + +[Zettlr][6] 是一位名叫 [Hendrik Erz][7] 的德国社会学家/政治理论家创建的。Hendrik 创建了 Zettlr,因为他对目前的文字处理器感到不满意。他想要可以让他“专注于写作和阅读”的编辑器。 + +在发现 Markdown 之后,他在不同的操作系统上尝试了几个 Markdown 编辑器。但它们都没有他想要的东西。[根据 Hendrik 的说法][8],“但我不得不意识到没有为高效组织大量文本而写的编辑器。大多数编辑都是由编码人员编写的,因此可以满足工程师和数学家的需求。没有为我这样的社会科学、历史或政治学的学生的编辑器。“ + +所以他决定创造自己的。2017 年 11 月,他开始编写 Zettlr。 + +![Zettlr About][9] + +#### Zettlr 功能 + +Zettlr 有许多简洁的功能,包括: + + * 从 [Zotero 数据库][10]导入源并在文档中引用它们 +  * 使用可选的行屏蔽,让你无打扰地专注于写作 +  * 支持代码高亮 +  * 使用标签对信息进行排序 +  * 能够为该任务设定写作目标 +  * 查看一段时间的写作统计 +  * 番茄钟计时器 +  * 浅色/深色主题 +  * 使用 [reveal.js][11] 创建演示文稿 +  * 快速预览文档 +  * 可以在一个项目文件夹中搜索 Markdown 文档,并用热图展示文字搜索密度。 +  * 将文件导出为 HTML、PDF、ODT、DOC、reStructuredText、LaTex、TXT、Emacs ORG、[TextBundle][12] 和 Textpack +  * 将自定义 CSS 添加到你的文档 + +当我写这篇文章时,一个对话框弹出来告诉我最近发布了 [1.3.0 beta][14]。此测试版将包括几个新的主题,以及一大堆修复,新功能和改进。 + +![Zettlr Night Mode][15] + +#### 安装 Zettlr + +目前,唯一可安装 Zettlr 的 Linux 存储库是 [AUR][16]。如果你的 Linux 发行版不是基于 Arch 的,你可以从网站上下载 macOS、Windows、Debian 和 Fedora 的[安装程序][17]。 + +#### 对 Zettlr 的最后一点想法 + +注意:为了测试 Zettlr,我用它来写这篇文章。 + +Zettlr 有许多我希望我之前选择的编辑器 (ghostwriter) 有的简洁的功能,例如为文档设置字数目标。我也喜欢在不打开文档的情况下预览文档的功能。 + +![Zettlr Settings][18] + +我也遇到了几个问题,但这些更多的是因为 Zettlr 与 ghostwriter 的工作方式略有不同。例如,当我尝试从网站复制引用或名称时,它会将内嵌样式粘贴到 Zettlr 中。幸运的是,它有一个“不带样式粘贴”的选项。有几次我在打字时有轻微的延迟。但那可能是因为它是一个 Electron 程序。 + +总的来说,我认为 Zettlr 是第一次使用 Markdown 用户的好选择。它有许多 Markdown 编辑器已有的功能,并为那些只使用过文字处理器的用户增加了一些功能。 + +正如 Hendrik 在 [Zettlr 网站][8]中所说的那样,“让自己摆脱文字处理器的束缚,看看你的写作过程如何通过身边的技术得到改善!” + +如果你觉得 Zettlr 有用,请考虑支持 [Hendrik][19]。正如他在网站上所说,“它是免费的,因为我不相信激烈竞争、早逝的创业文化。我只是想帮忙。” + +你有没有用过 Zettlr?你最喜欢的 Markdown 编辑器是什么?请在下面的评论中告诉我们。 + +如果你觉得这篇文章有趣,请在社交媒体,Hacker News 或 [Reddit][21] 上分享它。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/zettlr-markdown-editor/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-markdown-editors-linux/ +[2]: https://itsfoss.com/boostnote-linux-review/ +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-light-mode.png?fit=800%2C462&ssl=1 +[4]: https://daringfireball.net/projects/markdown/ +[5]: https://itsfoss.com/open-source-tools-writers/ +[6]: https://www.zettlr.com/ +[7]: https://github.com/nathanlesage +[8]: https://www.zettlr.com/about +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-about.png?fit=800%2C528&ssl=1 +[10]: https://www.zotero.org/ +[11]: https://revealjs.com/#/ +[12]: http://textbundle.org/ +[13]: https://itsfoss.com/great-little-book-shelf-review/ +[14]: https://github.com/Zettlr/Zettlr/releases/tag/v1.3.0-beta +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-night-mode.png?fit=800%2C469&ssl=1 +[16]: https://aur.archlinux.org/packages/zettlr-bin/ +[17]: https://www.zettlr.com/download +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-settings.png?fit=800%2C353&ssl=1 
+[19]: https://www.zettlr.com/supporters +[21]: http://reddit.com/r/linuxusersgroup diff --git a/published/20190522 French IT giant Atos enters the edge-computing business.md b/published/20190522 French IT giant Atos enters the edge-computing business.md new file mode 100644 index 0000000000..1be6353555 --- /dev/null +++ b/published/20190522 French IT giant Atos enters the edge-computing business.md @@ -0,0 +1,62 @@ +[#]: collector: (lujun9972) +[#]: translator: (chen-ni) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10934-1.html) +[#]: subject: (French IT giant Atos enters the edge-computing business) +[#]: via: (https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +法国 IT 巨头 Atos 进军边缘计算 +====== + +> Atos 另辟蹊径,通过一种只有行李箱大小的设备 BullSequana Edge 进军边缘计算。 + +![iStock][1] + +法国 IT 巨头 Atos 是最近才开展边缘计算业务的,他们的产品是一个叫做 BullSequana Edge 的小型设备。和竞争对手们的集装箱大小的设备不同(比如说 Vapor IO 和 Schneider Electronics 的产品),Atos 的边缘设备完全可以被放进衣柜里。 + +Atos 表示,他们的这个设备使用人工智能应用提供快速响应,适合需要快速响应的领域比如生产 4.0、自动驾驶汽车、健康管理,以及零售业和机场的安保系统。在这些领域,数据需要在边缘进行实时处理和分析。 + +[延伸阅读:[什么是边缘计算?][2] 以及 [边缘网络和物联网如何重新定义数据中心][3]] + +BullSequana Edge 可以作为独立的基础设施单独采购,也可以和 Atos 的边缘软件捆绑采购,并且这个软件还是非常出色的。Atos 表示 BullSequana Edge 主要支持三种使用场景: + + * AI(人工智能):Atos 的边缘计算机视觉软件为监控摄像头提供先进的特征抽取和分析技术,包括人像、人脸、行为等特征。这些分析可以支持系统做出自动化响应。 + * 大数据:Atos 边缘数据分析系统通过预测性和规范性的解决方案,帮助机构优化商业模型。它使用数据湖的功能,确保数据的可信度和可用性。 + * 容器:Atos 边缘数据容器(EDC)是一种一体化容器解决方案。它可以作为一个去中心化的 IT 系统在边缘运行,并且可以在没有数据中心的环境下自动运行,而不需要现场操作。 + +由于体积小,BullSequana Edge 并不具备很强的处理能力。它装载一个 16 核的 Intel Xeon 中央处理器,可以装备最多两枚英伟达 Tesla T4 图形处理器或者是 FPGA(现场可编程门阵列)。Atos 表示,这就足够让复杂的 AI 模型在边缘进行低延迟的运行了。 + +考虑到数据的敏感性,BullSequana Edge 同时装备了一个入侵感应器,用来在遭遇物理入侵的时候禁用机器。 + +虽然大多数边缘设备都被安放在信号塔附近,但是考虑到边缘系统可能被安放在任何地方,BullSequana Edge 还支持通过无线电、全球移动通信系统(GSM),或者 Wi-Fi 来进行通信。 + +Atos 在美国也许不是一个家喻户晓的名字,但是在欧洲它可以和 IBM 相提并论,并且在过去的十年里已经收购了诸如 Bull SA、施乐 IT 外包以及西门子 IT 等 IT 巨头们。 + +关于边缘网络的延伸阅读: + + * [边缘网络和物联网如何重新定义数据中心][3] + * [边缘计算的最佳实践][4] + * [边缘计算如何提升物联网安全][5] + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[chen-ni](https://github.com/chen-ni) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/01/huawei-18501-edge-gartner-100786331-large.jpg +[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[6]: https://www.facebook.com/NetworkWorld/ +[7]: https://www.linkedin.com/company/network-world diff --git a/published/20190525 4 Ways to Run Linux Commands in Windows.md b/published/20190525 4 Ways to Run Linux Commands in Windows.md new file mode 100644 index 0000000000..88944d79af --- /dev/null +++ b/published/20190525 4 Ways to Run Linux Commands in Windows.md @@ -0,0 +1,119 @@ 
+[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10935-1.html) +[#]: subject: (4 Ways to Run Linux Commands in Windows) +[#]: via: (https://itsfoss.com/run-linux-commands-in-windows/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +在 Windows 中运行 Linux 命令的 4 种方法 +====== + +> 想要使用 Linux 命令,但又不想离开 Windows ?以下是在 Windows 中运行 Linux bash 命令的几种方法。 + +如果你正在课程中正在学习 shell 脚本,那么需要使用 Linux 命令来练习命令和脚本。 + +你的学校实验室可能安装了 Linux,但是你自己没有安装了 [Linux 的笔记本电脑][1],而是像其他人一样的 Windows 计算机。你的作业需要运行 Linux 命令,你或许想知道如何在 Windows 上运行 Bash 命令和脚本。 + +你可以[在双启动模式下同时安装 Windows 和 Linux][2]。此方法能让你在启动计算机时选择 Linux 或 Windows。但是,为了运行 Linux 命令而使用单独分区的麻烦可能不适合所有人。 + +你也可以[使用在线 Linux 终端][3],但你的作业无法保存。 + +好消息是,有几种方法可以在 Windows 中运行 Linux 命令,就像其他常规应用一样。不是很酷吗? + +### 在 Windows 中使用 Linux 命令 + +![](https://img.linux.net.cn/data/attachment/album/201906/04/093809hlz2tblfzt7mbwwl.jpg) + +作为一个热心的 Linux 用户和推广者,我希望看到越来越多的人使用“真正的” Linux,但我知道有时候,这不是优先考虑的问题。如果你只是想练习 Linux 来通过考试,可以使用这些方法之一在 Windows 上运行 Bash 命令。 + +#### 1、在 Windows 10 上使用 Linux Bash Shell + +你是否知道可以在 Windows 10 中运行 Linux 发行版? [Windows 的 Linux 子系统 (WSL)][5] 能让你在 Windows 中运行 Linux。即将推出的 WSL 版本将在 Windows 内部使用真正 Linux 内核。 + +此 WSL 也称为 Bash on Windows,它作为一个常规的 Windows 应用运行,并提供了一个命令行模式的 Linux 发行版。不要害怕命令行模式,因为你的目的是运行 Linux 命令。这就是你所需要的。 + +![Ubuntu Linux inside Windows][6] + +你可以在 Windows 应用商店中找到一些流行的 Linux 发行版,如 Ubuntu、Kali Linux、openSUSE 等。你只需像任何其他 Windows 应用一样下载和安装它。安装后,你可以运行所需的所有 Linux 命令。 + +![Linux distributions in Windows 10 Store][8] + +请参考教程:[在 Windows 上安装 Linux bash shell][9]。 + +#### 2、使用 Git Bash 在 Windows 上运行 Bash 命令 + +你可能知道 [Git][10] 是什么。它是由 [Linux 创建者 Linus Torvalds][11] 开发的版本控制系统。 + +[Git for Windows][12] 是一组工具,能让你在命令行和图形界面中使用 Git。Git for Windows 中包含的工具之一是 Git Bash。 + +Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令,Git Bash 还支持许多 Bash 程序,如 `ssh`、`scp`、`cat`、`find` 等。 + +![Git Bash][13] + +换句话说,你可以使用 Git Bash 运行许多常见的 Linux/Bash 命令。 + +你可以从其网站免费下载和安装 Git for Windows 工具来在 Windows 中安装 Git Bash。 + +- [下载 Git for Windows][12] + +#### 3、使用 Cygwin 在 Windows 中使用 Linux 命令 + +如果要在 Windows 中运行 Linux 命令,那么 Cygwin 是一个推荐的工具。Cygwin 创建于 1995 年,旨在提供一个原生运行于 Windows 中的 POSIX 兼容环境。Cygwin 是由 Red Hat 员工和许多其他志愿者维护的自由开源软件。 + +二十年来,Windows 用户使用 Cygwin 来运行和练习 Linux/Bash 命令。十多年前,我甚至用 Cygwin 来学习 Linux 命令。 + +![Cygwin][14] + +你可以从下面的官方网站下载 Cygwin。我还建议你参考这个 [Cygwin 备忘录][15]来开始使用。 + +- [下载 Cygwin][16] + +#### 4、在虚拟机中使用 Linux + +另一种方法是使用虚拟化软件并在其中安装 Linux。这样,你可以在 Windows 中安装 Linux 发行版(带有图形界面)并像常规 Windows 应用一样运行它。 + +这种方法要求你的系统有大的内存,至少 4GB ,但如果你有超过 8GB 的内存那么更好。这里的好处是你可以真实地使用桌面 Linux。如果你喜欢这个界面,那么你可能会在以后决定[切换到 Linux][17]。 + +![Ubuntu Running in Virtual Machine Inside Windows][18] + +有两种流行的工具可在 Windows 上创建虚拟机,它们是 Oracle VirtualBox 和 VMware Workstation Player。你可以使用两者中的任何一个。就个人而言,我更喜欢 VirtualBox。 + +你可以按照[本教程学习如何在 VirtualBox 中安装 Linux][20]。 + +### 总结 + +运行 Linux 命令的最佳方法是使用 Linux。当选择不安装 Linux 时,这些工具能让你在 Windows 上运行 Linux 命令。都试试看,看哪种适合你。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/run-linux-commands-in-windows/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/get-linux-laptops/ +[2]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ +[3]: 
https://itsfoss.com/online-linux-terminals/ +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/run-linux-commands-in-windows.png?resize=800%2C450&ssl=1 +[5]: https://itsfoss.com/bash-on-windows/ +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/08/install-ubuntu-windows-10-linux-subsystem-10.jpeg?resize=800%2C268&ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/08/install-ubuntu-windows-10-linux-subsystem-4.jpeg?resize=800%2C632&ssl=1 +[9]: https://itsfoss.com/install-bash-on-windows/ +[10]: https://itsfoss.com/basic-git-commands-cheat-sheet/ +[11]: https://itsfoss.com/linus-torvalds-facts/ +[12]: https://gitforwindows.org/ +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/git-bash.png?ssl=1 +[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/cygwin-shell.jpg?ssl=1 +[15]: http://www.voxforge.org/home/docs/cygwin-cheat-sheet +[16]: https://www.cygwin.com/ +[17]: https://itsfoss.com/reasons-switch-linux-windows-xp/ +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ubuntu-running-in-virtual-machine-inside-windows.jpeg?resize=800%2C450&ssl=1 +[20]: https://itsfoss.com/install-linux-in-virtualbox/ diff --git a/published/20190527 20- FFmpeg Commands For Beginners.md b/published/20190527 20- FFmpeg Commands For Beginners.md new file mode 100644 index 0000000000..2a646c6d89 --- /dev/null +++ b/published/20190527 20- FFmpeg Commands For Beginners.md @@ -0,0 +1,471 @@ +[#]: collector: (lujun9972) +[#]: translator: (robsean) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10932-1.html) +[#]: subject: (20+ FFmpeg Commands For Beginners) +[#]: via: (https://www.ostechnix.com/20-ffmpeg-commands-beginners/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +给初学者的 20 多个 FFmpeg 命令示例 +====== + +![FFmpeg Commands](https://img.linux.net.cn/data/attachment/album/201906/03/011553xu323dzu40pb03bx.jpg) + +在这个指南中,我将用示例来阐明如何使用 FFmpeg 媒体框架来做各种各样的音频、视频转码和转换的操作。我已经为初学者汇集了最常用的 20 多个 FFmpeg 命令,我将不时地添加更多的示例来保持更新这个指南。请给这个指南加书签,以后回来检查更新。让我们开始吧,如果你还没有在你的 Linux 系统中安装 FFmpeg,参考下面的指南。 + +* [在 Linux 中安装 FFmpeg][2] + +### 针对初学者的 20 多个 FFmpeg 命令 + +FFmpeg 命令的典型语法是: + +``` +ffmpeg [全局选项] {[输入文件选项] -i 输入_url_地址} ... + {[输出文件选项] 输出_url_地址} ... +``` + +现在我们将查看一些重要的和有用的 FFmpeg 命令。 + +#### 1、获取音频/视频文件信息 + +为显示你的媒体文件细节,运行: + +``` +$ ffmpeg -i video.mp4 +``` + +样本输出: + +``` +ffmpeg version n4.1.3 Copyright (c) 2000-2019 the FFmpeg developers +built with gcc 8.2.1 (GCC) 20181127 +configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3 +libavutil 56. 22.100 / 56. 22.100 +libavcodec 58. 35.100 / 58. 35.100 +libavformat 58. 20.100 / 58. 20.100 +libavdevice 58. 5.100 / 58. 5.100 +libavfilter 7. 40.101 / 7. 40.101 +libswscale 5. 3.100 / 5. 3.100 +libswresample 3. 3.100 / 3. 3.100 +libpostproc 55. 
3.100 / 55. 3.100
+Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
+Metadata:
+major_brand : isom
+minor_version : 512
+compatible_brands: isomiso2avc1mp41
+encoder : Lavf58.20.100
+Duration: 00:00:28.79, start: 0.000000, bitrate: 454 kb/s
+Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/smpte170m), 1920x1080 [SAR 1:1 DAR 16:9], 318 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
+Metadata:
+handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019.
+Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
+Metadata:
+handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019.
+At least one output file must be specified
+```
+
+如你在上面的输出中看到的,FFmpeg 显示该媒体文件信息,以及 FFmpeg 细节,例如版本、配置细节、版权标记、构建参数和库选项等等。
+
+如果你不想看 FFmpeg 标语和其它细节,而仅仅想看媒体文件信息,使用 `-hide_banner` 标志,像下面。
+
+```
+$ ffmpeg -i video.mp4 -hide_banner
+```
+
+样本输出:
+
+![][3]
+
+*使用 FFMpeg 查看音频、视频文件信息。*
+
+看见了吗?现在,它仅显示媒体文件细节。
+
+#### 2、转换视频文件到不同的格式
+
+FFmpeg 是强有力的音频和视频转换器,因此,它能在不同格式之间转换媒体文件。举个例子,要转换 mp4 文件到 avi 文件,运行:
+
+```
+$ ffmpeg -i video.mp4 video.avi
+```
+
+类似地,你可以转换媒体文件到你选择的任何格式。
+
+例如,为转换 YouTube flv 格式视频为 mpeg 格式,运行:
+
+```
+$ ffmpeg -i video.flv video.mpeg
+```
+
+如果你想维持你的源视频文件的质量,使用 `-qscale 0` 参数:
+
+```
+$ ffmpeg -i input.webm -qscale 0 output.mp4
+```
+
+为检查 FFmpeg 的支持格式的列表,运行:
+
+```
+$ ffmpeg -formats
+```
+
+#### 3、转换视频文件到音频文件
+
+要转换一个视频文件到音频文件,只需具体指明输出格式,像 .mp3,或 .ogg,或其它任意音频格式。
+
+下面的命令将转换 input.mp4 视频文件到 output.mp3 音频文件。
+
+```
+$ ffmpeg -i input.mp4 -vn output.mp3
+```
+
+此外,你也可以对输出文件使用各种各样的音频转换编码选项,像下面演示。
+
+```
+$ ffmpeg -i input.mp4 -vn -ar 44100 -ac 2 -ab 320 -f mp3 output.mp3
+```
+
+在这里,
+
+ * `-vn` – 表明我们已经在输出文件中禁用视频录制。
+ * `-ar` – 设置输出文件的音频频率。通常使用的值是 22050 Hz、44100 Hz、48000 Hz。
+ * `-ac` – 设置音频通道的数目。
+ * `-ab` – 表明音频比特率。
+ * `-f` – 输出文件格式。在我们的实例中,它是 mp3 格式。
+
+#### 4、更改视频文件的分辨率
+
+如果你想设置一个视频文件为指定的分辨率,你可以使用下面的命令:
+
+```
+$ ffmpeg -i input.mp4 -filter:v scale=1280:720 -c:a copy output.mp4
+```
+
+或,
+
+```
+$ ffmpeg -i input.mp4 -s 1280x720 -c:a copy output.mp4
+```
+
+上面的命令将设置所给定视频文件的分辨率到 1280×720。
+
+类似地,为转换上面的文件到 640×480 大小,运行:
+
+```
+$ ffmpeg -i input.mp4 -filter:v scale=640:480 -c:a copy output.mp4
+```
+
+或者,
+
+```
+$ ffmpeg -i input.mp4 -s 640x480 -c:a copy output.mp4
+```
+
+这个技巧将帮助你缩放你的视频文件到较小的显示设备上,例如平板电脑和手机。
+
+#### 5、压缩视频文件
+
+把媒体文件压缩得小一些以节省硬盘空间,总是一个好主意。
+
+下面的命令将压缩并减少输出文件的大小。
+
+```
+$ ffmpeg -i input.mp4 -vf scale=1280:-1 -c:v libx264 -preset veryslow -crf 24 output.mp4
+```
+
+请注意,如果你尝试减小视频文件的大小,你将损失视频质量。如果 24 压缩得太厉害,你可以把 `-crf` 的值调低一些。
+
+你也可以通过下面的选项来对音频重新编码:降低比特率并使用双声道,从而减小文件大小。
+
+```
+-ac 2 -c:a aac -strict -2 -b:a 128k
+```
+
+#### 6、压缩音频文件
+
+正像压缩视频文件一样,为节省一些磁盘空间,你也可以使用 `-ab` 标志压缩音频文件。
+
+例如,你有一个 320 kbps 比特率的音频文件。你想通过更改比特率到任意较低的值来压缩它,像下面。
+
+```
+$ ffmpeg -i input.mp3 -ab 128 output.mp3
+```
+
+各种各样可用的音频比特率列表是:
+
+ 1. 96kbps
+ 2. 112kbps
+ 3. 128kbps
+ 4. 160kbps
+ 5. 192kbps
+ 6. 256kbps
+ 7. 320kbps
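+
+作为一个补充的例子(这只是一种基于上面命令的示意性写法,文件名请换成你自己的):如果想把当前目录下所有的 .mp3 文件都压缩为 128kbps,可以配合 shell 的 `for` 循环批量处理:
+
+```
+$ for f in *.mp3; do ffmpeg -i "$f" -ab 128k "compressed_$f"; done
+```
+
+这里的 `-ab 128k` 显式地以 `k`(千比特每秒)为单位指定了音频比特率。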
+
+#### 7、从一个视频文件移除音频流
+
+如果你不想要一个视频文件中的音频,使用 `-an` 标志。
+
+```
+$ ffmpeg -i input.mp4 -an output.mp4
+```
+
+在这里,`-an` 表示没有音频录制。
+
+上面的命令会撤销所有音频相关的标志,因为我们不要来自 input.mp4 的音频。
+
+#### 8、从一个媒体文件移除视频流
+
+类似地,如果你不想要视频流,你可以使用 `-vn` 标志从媒体文件中简单地移除它。`-vn` 代表没有视频录制。换句话说,这个命令转换所给定媒体文件为音频文件。
+
+下面的命令将从所给定媒体文件中移除视频。
+
+```
+$ ffmpeg -i input.mp4 -vn output.mp3
+```
+
+你也可以使用 `-ab` 标志来指出输出文件的比特率,如下面的示例所示。
+
+```
+$ ffmpeg -i input.mp4 -vn -ab 320 output.mp3
+```
+
+#### 9、从视频中提取图像
+
+FFmpeg 的另一个有用的特色是我们可以从一个视频文件中轻松地提取图像。如果你想从一个视频文件中创建一个相册,这可能是非常有用的。
+
+为从一个视频文件中提取图像,使用下面的命令:
+
+```
+$ ffmpeg -i input.mp4 -r 1 -f image2 image-%2d.png
+```
+
+在这里,
+
+ * `-r` – 设置帧速度。即,每秒提取帧到图像的数字。默认值是 25。
+ * `-f` – 表示输出格式,即,在我们的实例中是图像。
+ * `image-%2d.png` – 表明我们如何想命名提取的图像。在这个实例中,命名会像 image-01.png、image-02.png、image-03.png 这样开始。如果你使用 `%3d`,那么图像的命名会像 image-001.png、image-002.png 这样开始。
+
+#### 10、裁剪视频
+
+FFMpeg 允许以我们选择的任何范围裁剪一个给定的媒体文件。
+
+裁剪一个视频文件的语法如下给定:
+
+```
+ffmpeg -i input.mp4 -filter:v "crop=w:h:x:y" output.mp4
+```
+
+在这里,
+
+ * `input.mp4` – 源视频文件。
+ * `-filter:v` – 表示视频过滤器。
+ * `crop` – 表示裁剪过滤器。
+ * `w` – 我们想自源视频中裁剪的矩形的宽度。
+ * `h` – 矩形的高度。
+ * `x` – 我们想自源视频中裁剪的矩形的 x 坐标。
+ * `y` – 矩形的 y 坐标。
+
+比如说你想要一个来自视频的位置 (200,150),且具有 640 像素宽度和 480 像素高度的视频,命令应该是:
+
+```
+$ ffmpeg -i input.mp4 -filter:v "crop=640:480:200:150" output.mp4
+```
+
+请注意,剪切视频将影响质量。除非必要,请勿剪切。
+
+#### 11、转换一个视频的具体的部分
+
+有时,你可能想仅转换视频文件的一个具体的部分到不同的格式。以示例说明,下面的命令将转换所给定视频 input.mp4 文件的开始 10 秒到视频 .avi 格式。
+
+```
+$ ffmpeg -i input.mp4 -t 10 output.avi
+```
+
+在这里,我们以秒具体说明时间。此外,以 `hh.mm.ss` 格式具体说明时间也是可以的。
+
+#### 12、设置视频的屏幕高宽比
+
+你可以使用 `-aspect` 标志设置一个视频文件的屏幕高宽比,像下面。
+
+```
+$ ffmpeg -i input.mp4 -aspect 16:9 output.mp4
+```
+
+通常使用的高宽比是:
+
+ * 16:9
+ * 4:3
+ * 16:10
+ * 5:4
+ * 2:21:1
+ * 2:35:1
+ * 2:39:1
+
+#### 13、添加海报图像到音频文件
+
+你可以添加海报图像到你的文件,以便图像将在播放音频文件时显示。这对托管在视频托管主机或共享网站中的音频文件是有用的。
+
+```
+$ ffmpeg -loop 1 -i inputimage.jpg -i inputaudio.mp3 -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest output.mp4
+```
+
+#### 14、使用开始和停止时间剪下一段媒体文件
+
+可以使用开始和停止时间来剪下一段视频为小段剪辑,我们可以使用下面的命令。
+
+```
+$ ffmpeg -i input.mp4 -ss 00:00:50 -codec copy -t 50 output.mp4
+```
+
+在这里,
+
+ * `-ss` – 表示视频剪辑的开始时间。在我们的示例中,开始时间是第 50 秒。
+ * `-t` – 表示总的持续时间。
+
+当你想使用开始和结束时间从一个音频或视频文件剪切一部分时,它是非常有用的。
+
+类似地,我们可以像下面剪下音频。
+
+```
+$ ffmpeg -i audio.mp3 -ss 00:01:54 -to 00:06:53 -c copy output.mp3
+```
+
+#### 15、切分视频文件为多个部分
+
+一些网站将仅允许你上传具体指定大小的视频。在这样的情况下,你可以切分大的视频文件到多个较小的部分,像下面。
+
+```
+$ ffmpeg -i input.mp4 -t 00:00:30 -c copy part1.mp4 -ss 00:00:30 -codec copy part2.mp4
+```
+
+在这里,
+
+ * `-t 00:00:30` 表示从视频的开始到视频的第 30 秒创建一部分视频。
+ * `-ss 00:00:30` 为视频的下一部分显示开始时间戳。它意味着第 2 部分将从第 30 秒开始,并将持续到原始视频文件的结尾。
+
+#### 16、接合或合并多个视频部分到一个
+
+FFmpeg 也可以接合多个视频部分,并创建一个单个视频文件。
+
+创建包含你想接合文件的准确的路径的 `join.txt`。所有的文件都应该是相同的格式(相同的编码格式)。所有文件的路径应该逐个列出,像下面。
+
+```
+file /home/sk/myvideos/part1.mp4
+file /home/sk/myvideos/part2.mp4
+file /home/sk/myvideos/part3.mp4
+file /home/sk/myvideos/part4.mp4
+```
+
+现在,接合所有文件,使用命令:
+
+```
+$ ffmpeg -f concat -i join.txt -c copy output.mp4
+```
+
+如果你得到一些像下面的错误:
+
+```
+[concat @ 0x555fed174cc0] Unsafe file name '/path/to/mp4'
+join.txt: Operation not permitted
+```
+
+添加 `-safe 0`:
+
+```
+$ ffmpeg -f concat -safe 0 -i join.txt -c copy output.mp4
+```
+
+上面的命令将接合 part1.mp4、part2.mp4、part3.mp4 和 part4.mp4 文件到一个称为 output.mp4 的单个文件中。
+
+#### 17、添加字幕到一个视频文件
+
+我们可以使用 FFmpeg 来添加字幕到视频文件。为你的视频下载正确的字幕,并如下所示添加它到你的视频。
+
+```
+$ ffmpeg -i input.mp4 -i subtitle.srt -map 0 -map 1 -c copy -c:v libx264 -crf 23 -preset veryfast output.mp4
+```
+
+#### 18、预览或测试视频或音频文件
+ +你可能希望通过预览来验证或测试输出的文件是否已经被恰当地转码编码。为完成预览,你可以从你的终端播放它,用命令: + +``` +$ ffplay video.mp4 +``` + +![][7] + +类似地,你可以测试音频文件,像下面所示。 + +``` +$ ffplay audio.mp3 +``` + +![][8] + +#### 19、增加/减少视频播放速度 + +FFmpeg 允许你调整视频播放速度。 + +为增加视频播放速度,运行: + +``` +$ ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4 +``` + +该命令将双倍视频的速度。 + +为降低你的视频速度,你需要使用一个大于 1 的倍数。为减少播放速度,运行: + +``` +$ ffmpeg -i input.mp4 -vf "setpts=4.0*PTS" output.mp4 +``` + +#### 20、创建动画的 GIF + +出于各种目的,我们在几乎所有的社交和专业网络上使用 GIF 图像。使用 FFmpeg,我们可以简单地和快速地创建动画的视频文件。下面的指南阐释了如何在类 Unix 系统中使用 FFmpeg 和 ImageMagick 创建一个动画的 GIF 文件。 + + * [在 Linux 中如何创建动画的 GIF][9] + +#### 21、从 PDF 文件中创建视频 + +我长年累月的收集了很多 PDF 文件,大多数是 Linux 教程,保存在我的平板电脑中。有时我懒得从平板电脑中阅读它们。因此,我决定从 PDF 文件中创建一个视频,在一个大屏幕设备(像一台电视机或一台电脑)中观看它们。如果你想知道如何从一批 PDF 文件中制作一个电影,下面的指南将帮助你。 + + * [在 Linux 中如何从 PDF 文件中创建一个视频][10] + +#### 22、获取帮助 + +在这个指南中,我已经覆盖大多数常常使用的 FFmpeg 命令。它有很多不同的选项来做各种各样的高级功能。要学习更多用法,请参考手册页。 + +``` +$ man ffmpeg +``` + +这就是全部了。我希望这个指南将帮助你入门 FFmpeg。如果你发现这个指南有用,请在你的社交和专业网络上分享它。更多好东西将要来。敬请期待! + +谢谢! + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/20-ffmpeg-commands-beginners/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[robsean](https://github.com/robsean) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2017/05/FFmpeg-Commands-720x340.png +[2]: https://www.ostechnix.com/install-ffmpeg-linux/ +[3]: http://www.ostechnix.com/wp-content/uploads/2017/05/sk@sk_001.png +[4]: https://ostechnix.tradepub.com/free/w_make141/prgm.cgi +[5]: https://ostechnix.tradepub.com/free/w_make75/prgm.cgi +[6]: https://ostechnix.tradepub.com/free/w_make235/prgm.cgi +[7]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_004.png +[8]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_005-3.png +[9]: https://www.ostechnix.com/create-animated-gif-ubuntu-16-04/ +[10]: https://www.ostechnix.com/create-video-pdf-files-linux/ diff --git a/published/20190527 Dockly - Manage Docker Containers From Terminal.md b/published/20190527 Dockly - Manage Docker Containers From Terminal.md new file mode 100644 index 0000000000..44e9dc2c21 --- /dev/null +++ b/published/20190527 Dockly - Manage Docker Containers From Terminal.md @@ -0,0 +1,146 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10925-1.html) +[#]: subject: (Dockly – Manage Docker Containers From Terminal) +[#]: via: (https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +Dockly:从终端管理 Docker 容器 +====== + +![](https://img.linux.net.cn/data/attachment/album/201906/01/144422bfwx1e7fqx1ee11x.jpg) + +几天前,我们发布了一篇指南,其中涵盖了[开始使用 Docker][2] 时需要了解的几乎所有细节。在该指南中,我们向你展示了如何详细创建和管理 Docker 容器。还有一些可用于管理 Docker 容器的非官方工具。如果你看过我们以前的文章,你可能会看到两个基于 Web 的工具,[Portainer][3] 和 [PiCluster][4]。它们都使得 Docker 管理任务在 Web 浏览器中变得更加容易和简单。今天,我遇到了另一个名为 Dockly 的 Docker 管理工具。 + +与上面的工具不同,Dockly 是一个 TUI(文本界面)程序,用于在类 Unix 系统中从终端管理 Docker 容器和服务。它是使用 NodeJS 编写的自由开源工具。在本简要指南中,我们将了解如何安装 Dockly 以及如何从命令行管理 Docker 容器。 + +### 安装 Dockly + +确保已在 Linux 上安装了 NodeJS。如果尚未安装,请参阅以下指南。 + +* [如何在 Linux 上安装 NodeJS][5] + +安装 NodeJS 后,运行以下命令安装 Dockly: + +``` +# npm install -g dockly +``` + +### 使用 Dockly 在终端管理 Docker 容器 + +使用 Dockly 管理 Docker 容器非常简单!你所要做的就是打开终端并运行以下命令: + +``` +# dockly +``` 
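+
+如果运行上面的命令后 Dockly 没能正常启动,一个常见的原因是当前用户无权访问 Docker 守护进程。可以先用下面两条命令做个简单的排查(这里假设 Docker 的套接字位于默认的 /var/run/docker.sock 路径):
+
+```
+# ls -l /var/run/docker.sock
+# docker ps
+```
+
+如果 `docker ps` 能正常列出容器,Dockly 一般也就能正常工作了。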
+ +Dockly 将通过 unix 套接字自动连接到你的本机 docker 守护进程,并在终端中显示正在运行的容器列表,如下所示。 + +![][6] + +*使用 Dockly 管理 Docker 容器* + +正如你在上面的截图中看到的,Dockly 在顶部显示了运行容器的以下信息: + +* 容器 ID, +* 容器名称, +* Docker 镜像, +* 命令, +* 运行中容器的状态, +* 状态。 + +在右上角,你将看到容器的 CPU 和内存利用率。使用向上/向下箭头键在容器之间移动。 + +在底部,有少量的键盘快捷键来执行各种 Docker 管理任务。以下是目前可用的键盘快捷键列表: + +* `=` - 刷新 Dockly 界面, +* `/` - 搜索容器列表视图, +* `i` - 显示有关当前所选容器或服务的信息, +* `回车` - 显示当前容器或服务的日志, +* `v` - 在容器和服务视图之间切换, +* `l` - 在选定的容器上启动 `/bin/bash` 会话, +* `r` - 重启选定的容器, +* `s` - 停止选定的容器, +* `h` - 显示帮助窗口, +* `q` - 退出 Dockly。 + +#### 查看容器的信息 + +使用向上/向下箭头选择一个容器,然后按 `i` 以显示所选容器的信息。 + +![][7] + +*查看容器的信息* + +#### 重启容器 + +如果你想随时重启容器,只需选择它并按 `r` 即可重新启动。 + +![][8] + +*重启 Docker 容器* + +#### 停止/删除容器和镜像 + +如果不再需要容器,我们可以立即停止和/或删除一个或所有容器。为此,请按 `m` 打开菜单。 + +![][9] + +*停止,删除 Docker 容器和镜像* + +在这里,你可以执行以下操作。 + +* 停止所有 Docker 容器, +* 删除选定的容器, +* 删除所有容器, +* 删除所有 Docker 镜像等。 + +#### 显示 Dockly 帮助部分 + +如果你有任何疑问,只需按 `h` 即可打开帮助部分。 + +![][10] + +*Dockly 帮助* + +有关更多详细信息,请参考最后给出的官方 GitHub 页面。 + +就是这些了。希望这篇文章有用。如果你一直在使用 Docker 容器,请试试 Dockly,看它是否有帮助。 + +建议阅读: + + * [如何自动更新正在运行的 Docker 容器][11] + * [ctop:一个 Linux 容器的命令行监控工具][12] + +资源: + + * [Dockly 的 GitHub 仓库][13] + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-720x340.png +[2]: https://www.ostechnix.com/getting-started-with-docker/ +[3]: https://www.ostechnix.com/portainer-an-easiest-way-to-manage-docker/ +[4]: https://www.ostechnix.com/picluster-simple-web-based-docker-management-application/ +[5]: https://www.ostechnix.com/install-node-js-linux/ +[6]: http://www.ostechnix.com/wp-content/uploads/2019/05/Manage-Docker-Containers-Using-Dockly.png +[7]: http://www.ostechnix.com/wp-content/uploads/2019/05/View-containers-information.png +[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Restart-containers.png +[9]: http://www.ostechnix.com/wp-content/uploads/2019/05/stop-remove-containers-and-images.png +[10]: http://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-Help.png +[11]: https://www.ostechnix.com/automatically-update-running-docker-containers/ +[12]: https://www.ostechnix.com/ctop-commandline-monitoring-tool-linux-containers/ +[13]: https://github.com/lirantal/dockly diff --git a/published/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md b/published/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md new file mode 100644 index 0000000000..e853c92615 --- /dev/null +++ b/published/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md @@ -0,0 +1,312 @@ +[#]: collector: (lujun9972) +[#]: translator: (jdh8383) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10938-1.html) +[#]: subject: (How To Check Available Security Updates On Red Hat (RHEL) And CentOS System?) +[#]: via: (https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +如何在 CentOS 或 RHEL 系统上检查可用的安全更新? 
+====== + +![](https://img.linux.net.cn/data/attachment/album/201906/05/003907tljfmy4bnn4qj1tp.jpg) + +当你更新系统时,根据你所在公司的安全策略,有时候可能只需要打上与安全相关的补丁。大多数情况下,这应该是出于程序兼容性方面的考量。那该怎样实践呢?有没有办法让 `yum` 只安装安全补丁呢? + +答案是肯定的,可以用 `yum` 包管理器轻松实现。 + +在这篇文章中,我们不但会提供所需的信息。而且,我们会介绍一些额外的命令,可以帮你获取指定安全更新的详实信息。 + +希望这样可以启发你去了解并修复你列表上的那些漏洞。一旦有安全漏洞被公布,就必须更新受影响的软件,这样可以降低系统中的安全风险。 + +对于 RHEL 或 CentOS 6 系统,运行下面的 [Yum 命令][1] 来安装 yum 安全插件。 + +``` +# yum -y install yum-plugin-security +``` + +在 RHEL 7&8 或是 CentOS 7&8 上面,这个插件已经是 `yum` 的一部分了,不用单独安装。 + +要列出全部可用的补丁(包括安全、Bug 修复以及产品改进),但不安装它们: + +``` +# yum updateinfo list available +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, + : subscription-manager, verify, versionlock +RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 +RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 +RHBA-2015:0626 bugfix 389-ds-base-1.3.3.1-15.el7_1.x86_64 +RHSA-2015:0895 Important/Sec. 389-ds-base-1.3.3.1-16.el7_1.x86_64 +RHBA-2015:1554 bugfix 389-ds-base-1.3.3.1-20.el7_1.x86_64 +RHBA-2015:1960 bugfix 389-ds-base-1.3.3.1-23.el7_1.x86_64 +RHBA-2015:2351 bugfix 389-ds-base-1.3.4.0-19.el7.x86_64 +RHBA-2015:2572 bugfix 389-ds-base-1.3.4.0-21.el7_2.x86_64 +RHSA-2016:0204 Important/Sec. 389-ds-base-1.3.4.0-26.el7_2.x86_64 +RHBA-2016:0550 bugfix 389-ds-base-1.3.4.0-29.el7_2.x86_64 +RHBA-2016:1048 bugfix 389-ds-base-1.3.4.0-30.el7_2.x86_64 +RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64 +``` + +要统计补丁的大约数量,运行下面的命令: + +``` +# yum updateinfo list available | wc -l +11269 +``` + +想列出全部可用的安全补丁但不安装,以下命令用来展示你系统里已安装和待安装的推荐补丁: + +``` +# yum updateinfo list security all +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, + : subscription-manager, verify, versionlock + RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 + RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 + RHSA-2015:0895 Important/Sec. 389-ds-base-1.3.3.1-16.el7_1.x86_64 + RHSA-2016:0204 Important/Sec. 389-ds-base-1.3.4.0-26.el7_2.x86_64 + RHSA-2016:2594 Moderate/Sec. 389-ds-base-1.3.5.10-11.el7.x86_64 + RHSA-2017:0920 Important/Sec. 389-ds-base-1.3.5.10-20.el7_3.x86_64 + RHSA-2017:2569 Moderate/Sec. 389-ds-base-1.3.6.1-19.el7_4.x86_64 + RHSA-2018:0163 Important/Sec. 389-ds-base-1.3.6.1-26.el7_4.x86_64 + RHSA-2018:0414 Important/Sec. 389-ds-base-1.3.6.1-28.el7_4.x86_64 + RHSA-2018:1380 Important/Sec. 389-ds-base-1.3.7.5-21.el7_5.x86_64 + RHSA-2018:2757 Moderate/Sec. 389-ds-base-1.3.7.5-28.el7_5.x86_64 + RHSA-2018:3127 Moderate/Sec. 389-ds-base-1.3.8.4-15.el7.x86_64 + RHSA-2014:1031 Important/Sec. 389-ds-base-libs-1.3.1.6-26.el7_0.x86_64 +``` + +要显示所有待安装的安全补丁: + +``` +# yum updateinfo list security all | grep -v "i" + + RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 + RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 + RHSA-2015:0895 Important/Sec. 389-ds-base-1.3.3.1-16.el7_1.x86_64 + RHSA-2016:0204 Important/Sec. 389-ds-base-1.3.4.0-26.el7_2.x86_64 + RHSA-2016:2594 Moderate/Sec. 389-ds-base-1.3.5.10-11.el7.x86_64 + RHSA-2017:0920 Important/Sec. 389-ds-base-1.3.5.10-20.el7_3.x86_64 + RHSA-2017:2569 Moderate/Sec. 389-ds-base-1.3.6.1-19.el7_4.x86_64 + RHSA-2018:0163 Important/Sec. 389-ds-base-1.3.6.1-26.el7_4.x86_64 + RHSA-2018:0414 Important/Sec. 389-ds-base-1.3.6.1-28.el7_4.x86_64 + RHSA-2018:1380 Important/Sec. 389-ds-base-1.3.7.5-21.el7_5.x86_64 + RHSA-2018:2757 Moderate/Sec. 
389-ds-base-1.3.7.5-28.el7_5.x86_64 +``` + +要统计全部安全补丁的大致数量,运行下面的命令: + +``` +# yum updateinfo list security all | wc -l +3522 +``` + +下面根据已装软件列出可更新的安全补丁。这包括 bugzilla(bug 修复)、CVE(知名漏洞数据库)、安全更新等: + +``` +# yum updateinfo list security + +或者 + +# yum updateinfo list sec + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, + : subscription-manager, verify, versionlock + +RHSA-2018:3665 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-adsl-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-bluetooth-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-config-server-1:1.12.0-8.el7_6.noarch +RHSA-2018:3665 Important/Sec. NetworkManager-glib-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-libnm-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-ppp-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-team-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-tui-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-wifi-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-wwan-1:1.12.0-8.el7_6.x86_64 +``` + +显示所有与安全相关的更新,并且返回一个结果来告诉你是否有可用的补丁: + +``` +# yum --security check-update +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +rhel-7-server-rpms | 2.0 kB 00:00:00 +--> policycoreutils-devel-2.2.5-20.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) +--> smc-raghumalayalam-fonts-6.0-7.el7.noarch from rhel-7-server-rpms excluded (updateinfo) +--> amanda-server-3.3.3-17.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) +--> 389-ds-base-libs-1.3.4.0-26.el7_2.x86_64 from rhel-7-server-rpms excluded (updateinfo) +--> 1:cups-devel-1.6.3-26.el7.i686 from rhel-7-server-rpms excluded (updateinfo) +--> openwsman-client-2.6.3-3.git4391e5c.el7.i686 from rhel-7-server-rpms excluded (updateinfo) +--> 1:emacs-24.3-18.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) +--> augeas-libs-1.4.0-2.el7_4.2.i686 from rhel-7-server-rpms excluded (updateinfo) +--> samba-winbind-modules-4.2.3-10.el7.i686 from rhel-7-server-rpms excluded (updateinfo) +--> tftp-5.2-11.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) +. +. +35 package(s) needed for security, out of 115 available +NetworkManager.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-adsl.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-bluetooth.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-config-server.noarch 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-glib.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-libnm.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-ppp.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +``` + +列出所有可用的安全补丁,并且显示其详细信息: + +``` +# yum info-sec +. +. +=============================================================================== + tzdata bug fix and enhancement update +=============================================================================== + Update ID : RHBA-2019:0689 + Release : 0 + Type : bugfix + Status : final + Issued : 2019-03-28 19:27:44 UTC +Description : The tzdata packages contain data files with rules for various + : time zones. + : + : The tzdata packages have been updated to version + : 2019a, which addresses recent time zone changes. 
+ : Notably: + : + : * The Asia/Hebron and Asia/Gaza zones will start + : DST on 2019-03-30, rather than 2019-03-23 as + : previously predicted. + : * Metlakatla rejoined Alaska time on 2019-01-20, + : ending its observances of Pacific standard time. + : + : (BZ#1692616, BZ#1692615, BZ#1692816) + : + : Users of tzdata are advised to upgrade to these + : updated packages. + Severity : None +``` + +如果你想要知道某个更新的具体内容,可以运行下面这个命令: + +``` +# yum updateinfo RHSA-2019:0163 + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +rhel-7-server-rpms | 2.0 kB 00:00:00 +=============================================================================== + Important: kernel security, bug fix, and enhancement update +=============================================================================== + Update ID : RHSA-2019:0163 + Release : 0 + Type : security + Status : final + Issued : 2019-01-29 15:21:23 UTC + Updated : 2019-01-29 15:23:47 UTC Bugs : 1641548 - CVE-2018-18397 kernel: userfaultfd bypasses tmpfs file permissions + : 1641878 - CVE-2018-18559 kernel: Use-after-free due to race condition in AF_PACKET implementation + CVEs : CVE-2018-18397 + : CVE-2018-18559 +Description : The kernel packages contain the Linux kernel, the core of any + : Linux operating system. + : + : Security Fix(es): + : + : * kernel: Use-after-free due to race condition in + : AF_PACKET implementation (CVE-2018-18559) + : + : * kernel: userfaultfd bypasses tmpfs file + : permissions (CVE-2018-18397) + : + : For more details about the security issue(s), + : including the impact, a CVSS score, and other + : related information, refer to the CVE page(s) + : listed in the References section. + : + : Bug Fix(es): + : + : These updated kernel packages include also + : numerous bug fixes and enhancements. Space + : precludes documenting all of the bug fixes in this + : advisory. See the descriptions in the related + : Knowledge Article: + : https://access.redhat.com/articles/3827321 + Severity : Important +updateinfo info done +``` + +跟之前类似,你可以只查询那些通过 CVE 释出的系统漏洞: + +``` +# yum updateinfo list cves + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, + : subscription-manager, verify, versionlock +CVE-2018-15688 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-adsl-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-bluetooth-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-config-server-1:1.12.0-8.el7_6.noarch +CVE-2018-15688 Important/Sec. NetworkManager-glib-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-libnm-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-ppp-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. 
NetworkManager-team-1:1.12.0-8.el7_6.x86_64 +``` + +你也可以查看那些跟 bug 修复相关的更新,运行下面的命令: + +``` +# yum updateinfo list bugfix | less + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, + : subscription-manager, verify, versionlock +RHBA-2018:3349 bugfix NetworkManager-1:1.12.0-7.el7_6.x86_64 +RHBA-2019:0519 bugfix NetworkManager-1:1.12.0-10.el7_6.x86_64 +RHBA-2018:3349 bugfix NetworkManager-adsl-1:1.12.0-7.el7_6.x86_64 +RHBA-2019:0519 bugfix NetworkManager-adsl-1:1.12.0-10.el7_6.x86_64 +RHBA-2018:3349 bugfix NetworkManager-bluetooth-1:1.12.0-7.el7_6.x86_64 +RHBA-2019:0519 bugfix NetworkManager-bluetooth-1:1.12.0-10.el7_6.x86_64 +RHBA-2018:3349 bugfix NetworkManager-config-server-1:1.12.0-7.el7_6.noarch +RHBA-2019:0519 bugfix NetworkManager-config-server-1:1.12.0-10.el7_6.noarch +``` + +要想得到待安装更新的摘要信息,运行这个: + +``` +# yum updateinfo summary +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +rhel-7-server-rpms | 2.0 kB 00:00:00 +Updates Information Summary: updates + 13 Security notice(s) + 9 Important Security notice(s) + 3 Moderate Security notice(s) + 1 Low Security notice(s) + 35 Bugfix notice(s) + 1 Enhancement notice(s) +updateinfo summary done +``` + +如果只想打印出低级别的安全更新,运行下面这个命令。类似的,你也可以只查询重要级别和中等级别的安全更新。 + +``` +# yum updateinfo list sec | grep -i "Low" + +RHSA-2019:0201 Low/Sec. libgudev1-219-62.el7_6.3.x86_64 +RHSA-2019:0201 Low/Sec. systemd-219-62.el7_6.3.x86_64 +RHSA-2019:0201 Low/Sec. systemd-libs-219-62.el7_6.3.x86_64 +RHSA-2019:0201 Low/Sec. systemd-sysv-219-62.el7_6.3.x86_64 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[jdh8383](https://github.com/jdh8383) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ diff --git a/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md new file mode 100644 index 0000000000..8b4d57d9a8 --- /dev/null +++ b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md @@ -0,0 +1,75 @@ +[#]: collector: (lujun9972) +[#]: translator: (chen-ni) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (When IoT systems fail: The risk of having bad IoT data) +[#]: via: (https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html) +[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) + +When IoT systems fail: The risk of having bad IoT data +====== +As the use of internet of things (IoT) devices grows, the data they generate can lead to significant savings for consumers and new opportunities for businesses. But what happens when errors inevitably crop up? +![Oonal / Getty Images][1] + +No matter what numbers you look at, it’s clear that the internet of things (IoT) continues to worm its way into more and more areas of personal and private life. That growth brings many benefits, but it also poses new risks. A big question is who takes responsibility when things go wrong. 
+ +Perhaps the biggest issue surrounds the use of IoT-generated data to personalize the offering and pricing of various products and services. [Insurance companies have long struggled with how best to use IoT data][2], but last year I wrote about how IoT sensors are beginning to be used to help home insurers reduce water damage losses. And some companies are looking into the potential for insurers to bid for consumers: business based on the risks (or lack thereof) revealed by their smart-home data. + +But some of the biggest progress has come in the area of automobile insurance, where many automobile insurers already let customers install tracking devices in their cars in exchange for discounts for demonstrating safe-driving habits. + +**[ Also read:[Finally, a smart way for insurers to leverage IoT in smart homes][3] ]** + +### **The rise of usage-based insurance** + +Called usage-based insurance (UBI), this “pay-as-you-drive” approach tracks speed, location, and other factors to assess risk and calculate auto insurance premiums. An estimated [50 million U.S. drivers][4] will have enrolled in UBI programs by 2020. + +Not surprisingly, insurers love UBI because it helps them calculate their risks more precisely. In fact, [AIG Ireland is trying to get the country to require UBI for drivers under 25][5]. And demonstrably safe drivers are also often happy save some money. There has been pushback, of course, mostly from privacy advocates and groups who might have to pay more under this model. + +### **What happens when something goes wrong?** + +But there’s another, more worrisome, potential issue: What happens when the data provided by the IoT device is wrong or gets garbled somewhere along the way? Because despite all the automation, error-checking, and so on, occasional errors inevitably slip through the cracks. + +Unfortunately, this isn’t just an academic concern that might someday accidentally cost some careful drivers a few extra bucks on their insurance. It’s already a real-world problem with serious consequences. And just like [the insurance industry still hasn’t figured out who should “own” data generated by customer-facing IoT devices][6], it’s not clear who would take responsibility for dealing with problems with that data. + +Though not strictly an IoT issue, computer “glitches” allegedly led to Hertz rental cars erroneously being reported stolen and innocent renters being arrested and detained. The result? Criminal charges, years of litigation, and finger pointing. Lots and lots of finger pointing. + +With that in mind, it’s easy to imagine, for example, an IoT sensor getting confused and indicating that a car was speeding even while safely under the speed limit. Think of the hassles of trying to fight _that_ in court, or arguing with your insurance company over it. + +(Of course, there’s also the flip side of this problem: Consumers may find ways to hack the data shared by their IoT devices to fraudulently qualify for lower rates or deflect blame for an incident. There’s no real plan in place to deal with _that_ , either.) + +### **Studying the need for government regulation** + +Given the potential impacts of these issues, and the apparent lack of interest in dealing with them from the many companies involved, it seems legitimate to wonder if government intervention is warranted. + +That could be one motivation behind the [reintroduction of the SMART (State of Modern Application, Research, and Trends of) IoT Act][7] by Rep. Bob Latta (R-Ohio). 
[The bill][8], stemming from a bipartisan IoT working group helmed by Latta and Rep. Peter Welch (D-Vt.), passed the House last fall but failed in the Senate. It would require the Commerce Department to study the state of the IoT industry and report back to the House Energy & Commerce and Senate Commerce committees in two years.
+
+In a statement, Latta said, “With a projected economic impact in the trillions of dollars, we need to look at the policies, opportunities, and challenges that IoT presents. The SMART IoT Act will make it easier to understand what the government is doing on IoT policy, what it can do better, and how federal policies can impact the research and discovery of cutting-edge technologies.”
+
+The research is welcome, but the bill may not even pass. Even if it does, with its two-year wait time, the IoT will likely evolve too fast for the government to keep up.
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/08/cloud_connected_smart_cars_by_oonal_gettyimages-692819426_1200x800-100767788-large.jpg
+[2]: https://www.networkworld.com/article/3264655/most-insurance-carriers-not-ready-to-use-iot-data.html
+[3]: https://www.networkworld.com/article/3296706/finally-a-smart-way-for-insurers-to-leverage-iot-in-smart-homes.html
+[4]: https://www.businessinsider.com/iot-is-changing-the-auto-insurance-industry-2015-8
+[5]: https://www.iotforall.com/iot-data-is-disrupting-the-insurance-industry/
+[6]: https://www.sas.com/en_us/insights/articles/big-data/5-challenges-for-iot-in-insurance-industry.html
+[7]: https://www.multichannel.com/news/latta-re-ups-smart-iot-act
+[8]: https://latta.house.gov/uploadedfiles/smart_iot_116th.pdf
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190521 Enterprise IoT- Companies want solutions in these 4 areas.md b/sources/talk/20190521 Enterprise IoT- Companies want solutions in these 4 areas.md
new file mode 100644
index 0000000000..9df4495f05
--- /dev/null
+++ b/sources/talk/20190521 Enterprise IoT- Companies want solutions in these 4 areas.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Enterprise IoT: Companies want solutions in these 4 areas)
+[#]: via: (https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+Enterprise IoT: Companies want solutions in these 4 areas
+======
+Based on customer pain points, PwC identified four areas companies are seeking enterprise solutions for, including energy use and sustainability.
+![Jackie Niam / Getty Images][1]
+
+Internet of things (IoT) vendors and pundits like to crow about the billions and billions of connected devices that make the IoT so ubiquitous and powerful.
But how much of that installed base is really relevant to the enterprise?
+
+To find out, I traded emails with Rob Mesirow, principal at [PwC’s Connected Solutions][2], the firm’s new one-stop shop for IoT solutions, who suggests that consumer adoption may not paint a true picture of the enterprise opportunities. If you remove the health trackers and the smart thermostats from the market, he suggested, there are very few connected devices left.
+
+So, I wondered, what is actually happening on the enterprise side of IoT? What kinds of devices are we talking about, and in what kinds of numbers?
+
+**[ Read also:[Forget 'smart homes,' the new goal is 'autonomous buildings'][3] ]**
+
+“When people talk about the IoT,” Mesirow told me, “they usually focus on [consumer devices, which far outnumber business devices][4]. Yet [connected buildings currently represent only 12% of global IoT projects][5],” he noted, “and that’s without including wearables and smart home projects.” (Mesirow is talking about buildings that “use various IoT devices, including occupancy sensors that determine when people are present in a room in order to keep lighting and temperature controls at optimal levels, lowering energy costs and aiding sustainability goals. Sensors can also detect water and gas leaks and aid in predictive maintenance for HVAC systems.”)
+
+### 4 key enterprise IoT opportunities
+
+More specifically, based on customer pain points, PwC’s Connected Solutions is focusing on a few key opportunities, which Mesirow laid out in a [blog post][6] earlier this year. (Not surprisingly, the opportunities seem tied to [the group’s products][7].)
+
+“A lot of these solutions came directly from our customers’ request,” he noted. “We pre-qualify our solutions with customers before we build them.”
+
+Let’s take a look at the top four areas, along with a quick reality check on how important they are and whether the technology is ready for prime time.
+
+#### **1\. Energy use and sustainability**
+
+The IoT makes it possible to manage buildings and spaces more efficiently, with savings of 25% or more. Occupancy sensors can tell whether anyone is actually in a room, adjusting lighting and temperature to save money and conserve energy.
+
+Connected buildings can also help determine when meeting spaces are available, which can boost occupancy at large businesses and universities by 40% while cutting infrastructure and maintenance costs. Other sensors, meanwhile, can detect water and gas leaks and aid in predictive maintenance for HVAC systems.
+
+**Reality check:** Obviously, much of this technology is not new, but there’s a real opportunity to make it work better by integrating disparate systems and adding better analytics to the data to make planning more effective.
+
+#### **2\. Asset tracking**
+
+“Businesses can also use the IoT to track their assets,” Mesirow told me, “which can range from trucks to hotel luggage carts to medical equipment. It can even assist with monitoring trash by alerting appropriate people when dumpsters need to be emptied.”
+
+Asset trackers can instantly identify the location of all kinds of equipment (saving employee time and boosting productivity), and they can reduce the number of lost, stolen, and misplaced devices and machines as well as provide complete visibility into the location of your assets.
+
+Such trackers can also save employees from wasting time hunting down the devices and machines they need.
For example, PwC noted that during an average hospital shift, more than one-third of nurses spend at least an hour looking for equipment such as blood pressure monitors and insulin pumps. Just as important, location tracking often improves asset optimization, reduces inventory needs, and improves the customer experience.
+
+**Reality check:** Asset tracking offers clear value. The real question is whether a given use case is cost-effective or not, as well as how the data gathered will actually be used. Too often, companies spend a lot of money and effort tracking their assets, but don’t do much with the information.
+
+#### **3\. Security and compliance**
+
+Connected solutions can create better working environments, Mesirow said. “In a hotel, for example, these smart devices can ensure that air and water quality is up to standards, provide automated pest traps, monitor dumpsters and recycling bins, detect trespassers, determine when someone needs assistance, or discover activity in an unauthorized area. Monitoring the water quality of hotel swimming pools can lower chemical and filtering costs,” he said.
+
+Mesirow cited an innovative use case where, in response to workers’ complaints about harassment, hotel operators—in conjunction with the [American Hotel and Lodging Association][8]—are giving their employees portable devices that alert security staff when workers request assistance.
+
+**Reality check:** This seems useful, but the ROI might be difficult to calculate.
+
+#### **4\. Customer experience**
+
+According to PwC, “Sensors, facial recognition, analytics, dashboards, and notifications can elevate and even transform the customer experience. … Using connected solutions, you can identify and reward your best customers by offering perks, reduced wait times, and/or shorter lines.”
+
+Those kinds of personalized customer experiences can potentially boost customer loyalty and increase revenue, Mesirow said, adding that the technology can also make staff deployments more efficient and “enhance safety by identifying trespassers and criminals who are tampering with company property.”
+
+**Reality check:** Creating a great customer experience is critical for businesses today, and this kind of personalized targeting promises to make it more efficient and effective. However, it has to be done in a way that makes customers comfortable and not creeped out. Privacy concerns are very real, especially when it comes to working with facial recognition and other kinds of surveillance technology. For example, [San Francisco recently banned city agencies from using facial recognition][9], and others may follow.
+
+**More on IoT:**
+
+ * [What is the IoT? How the internet of things works][10]
+ * [What is edge computing and how it’s changing the network][11]
+ * [Most powerful Internet of Things companies][12]
+ * [10 Hot IoT startups to watch][13]
+ * [The 6 ways to make money in IoT][14]
+ * [What is digital twin technology? [and why it matters]][15]
+ * [Blockchain, service-centric networking key to IoT success][16]
+ * [Getting grounded in IoT networking and security][17]
+ * [Building IoT-ready networks must become a priority][18]
+ * [What is the Industrial IoT? [And why the stakes are so high]][19]
+
+
+
+Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_by_jackie_niam_gettyimages-996958260_2400x1600-100788446-large.jpg
+[2]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html#get-connected
+[3]: https://www.networkworld.com/article/3309420/forget-smart-homes-the-new-goal-is-autonomous-buildings.html
+[4]: https://www.statista.com/statistics/370350/internet-of-things-installed-base-by-category/
+[5]: https://iot-analytics.com/top-10-iot-segments-2018-real-iot-projects/
+[6]: https://www.digitalpulse.pwc.com.au/five-unexpected-ways-internet-of-things/
+[7]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html
+[8]: https://www.ahla.com/
+[9]: https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html
+[10]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[11]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[12]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[13]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[14]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[15]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[16]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[17]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[18]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[19]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[20]: https://www.facebook.com/NetworkWorld/
+[21]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190522 Experts- Enterprise IoT enters the mass-adoption phase.md b/sources/talk/20190522 Experts- Enterprise IoT enters the mass-adoption phase.md
new file mode 100644
index 0000000000..86d7bf0efe
--- /dev/null
+++ b/sources/talk/20190522 Experts- Enterprise IoT enters the mass-adoption phase.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Experts: Enterprise IoT enters the mass-adoption phase)
+[#]: via: (https://www.networkworld.com/article/3397317/experts-enterprise-iot-enters-the-mass-adoption-phase.html)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+Experts: Enterprise IoT enters the mass-adoption phase
+======
+Dropping hardware prices and 5G boost business internet-of-things deployments; technical
complexity encourages partnerships.
+![Avgust01 / Getty Images][1]
+
+[IoT][2] in general has taken off quickly over the past few years, but experts at the recent IoT World highlighted that the enterprise part of the market has been particularly robust of late – it’s not just an explosion of connected home gadgets anymore.
+
+Donna Moore, chairwoman of the LoRa Alliance, an industry group that works to develop and scale low-power WAN technology for mass usage, said on a panel that she’s never seen growth this fast in the sector. “I’d say we’re now in the early mass adopters [stage],” she said.
+
+**More on IoT:**
+
+ * [Most powerful Internet of Things companies][3]
+ * [10 Hot IoT startups to watch][4]
+ * [The 6 ways to make money in IoT][5]
+ * [What is digital twin technology? [and why it matters]][6]
+ * [Blockchain, service-centric networking key to IoT success][7]
+ * [Getting grounded in IoT networking and security][8]
+ * [Building IoT-ready networks must become a priority][9]
+ * [What is the Industrial IoT? [And why the stakes are so high]][10]
+
+
+
+The technology itself has pushed adoption to these heights, said Graham Trickey, head of IoT for the GSMA, a trade organization for mobile network operators. Along with price drops for wireless connectivity modules, the array of upcoming technologies nestling under the umbrella label of [5G][11] could simplify the process of connecting devices to [edge-computing][12] hardware – and the edge to the cloud or [data center][13].
+
+“Mobile operators are not just providers of connectivity now, they’re farther up the stack,” he said. Technologies like narrow-band IoT and support for highly demanding applications like telehealth are all set to be part of the final 5G spec.
+
+### Partnerships needed to deal with IoT complexity
+
+That’s not to imply that there aren’t still huge tasks facing both companies trying to implement their own IoT frameworks and the creators of the technology underpinning them. For one thing, IoT tech requires a huge array of different sets of specialized knowledge.
+
+“That means partnerships, because you need an expert in your [vertical] area to know what you’re looking for, you need an expert in communications, and you might need a systems integrator,” said Trickey.
+
+Phil Beecher, the president and CEO of the Wi-SUN Alliance (the acronym stands for Smart Ubiquitous Networks, and the group is heavily focused on IoT for the utility sector), concurred with that, arguing that broad ecosystems of different technologies and different partners would be needed. “There’s no one technology that’s going to solve all these problems, no matter how much some parties might push it,” he said.
+
+One of the central problems – [IoT security][14] – is particularly dear to Beecher’s heart, given the consequences of successful hacks of the electrical grid or other utilities. More than one panelist praised the passage of the EU’s General Data Protection Regulation, saying that it offered concrete guidelines for entities developing IoT tech – a crucial consideration for some companies that may not have a lot of in-house expertise in that area.
+
+Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397317/experts-enterprise-iot-enters-the-mass-adoption-phase.html + +作者:[Jon Gold][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Jon-Gold/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_mobile_connections_by_avgust01_gettyimages-1055659210_2400x1600-100788447-large.jpg +[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html +[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html +[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html +[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html +[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html +[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[11]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html +[12]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html?nsdr=true +[13]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html +[14]: https://www.networkworld.com/article/3269736/getting-grounded-in-iot-networking-and-security.html +[15]: https://www.facebook.com/NetworkWorld/ +[16]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md b/sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md new file mode 100644 index 0000000000..be8a4833cc --- /dev/null +++ b/sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md @@ -0,0 +1,97 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The Traffic Jam Whopper project may be the coolest/dumbest IoT idea ever) +[#]: via: (https://www.networkworld.com/article/3396188/the-traffic-jam-whopper-project-may-be-the-coolestdumbest-iot-idea-ever.html) +[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) + +The Traffic Jam Whopper project may be the coolest/dumbest IoT idea ever +====== +Burger King uses real-time IoT data to deliver burgers to drivers stuck in traffic — and it seems to be working. +![Mike Mozart \(CC BY 2.0\)][1] + +People love to eat in their cars. That’s why we invented the drive-in and the drive-thru. 
+
+But despite a fast-food outlet on the corner of every major intersection, it turns out we were only scratching the surface of this idea. Burger King is taking this concept to the next logical step with its new IoT-powered Traffic Jam Whopper project.
+
+I have to admit, when I first heard about this, I thought it was a joke, but apparently the [Traffic Jam Whopper project is totally real][2] and has already passed a month-long test in Mexico City. While the company hasn’t specified a timeline, it plans to roll out the Traffic Jam Whopper project in Los Angeles (where else?) and other traffic-plagued megacities such as São Paulo and Shanghai.
+
+**[ Also read:[Is IoT in the enterprise about making money or saving money?][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
+
+### How Burger King's Traffic Jam Whopper project works
+
+According to [Nations Restaurant News][5], this is how Burger King's Traffic Jam Whopper project works:
+
+The project uses real-time data to target hungry drivers along congested roads and highways for food delivery by couriers on motorcycles.
+
+The system leverages push notifications to the Burger King app and personalized messaging on digital billboards positioned along busy roads close to a Burger King restaurant.
+
+[According to the We Believers agency][6] that put it all together, “By leveraging traffic and drivers’ real-time data [location and speed], we adjusted our billboards’ location and content, displaying information about the remaining time in traffic to order, and personalized updates about deliveries in progress.” The menu is limited to Whopper Combos to speed preparation (though the company plans to offer a wider menu as it works out the kinks).
+
+**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][7] ]**
+
+The company said orders in Mexico City were delivered in an average of 15 minutes. Fortunately (or unfortunately, depending on how you look at it) many traffic jams hold drivers captive for far longer than that.
+
+Once the order is ready, the motorcyclist uses Google Maps and GPS technology embedded into the app to locate the car that made the order. The delivery person then weaves through traffic to hand over the Whopper. (Lane-splitting is legal in California, but I have no idea if there are other potential safety or law-enforcement issues involved here. For drivers ordering burgers, at least, the Burger King app supports voice ordering. I also don’t know what happens if traffic somehow clears up before the burger arrives.)
+
+Here’s a video of the pilot program in Mexico City:
+
+#### **New technology => new opportunities**
+
+Even more amazing, this is not _just_ a publicity stunt. NRN quotes Bruno Cardinali, head of marketing for Burger King Latin America and the Caribbean, claiming the project boosted sales during rush hour, when app orders are normally slow:
+
+“Thanks to The Traffic Jam Whopper campaign, we’ve increased deliveries by 63% in selected locations across the month of April, adding a significant amount of orders per restaurant per day, just during rush hours.”
+
+If nothing else, this project shows that creative thinking really can leverage IoT technology into new businesses. In this case, it’s turning notoriously bad traffic—pretty much required for this process to work—from a problem into an opportunity to generate additional sales during slow periods.
+
+**More on IoT:**
+
+ * [What is the IoT? How the internet of things works][8]
+ * [What is edge computing and how it’s changing the network][9]
+ * [Most powerful Internet of Things companies][10]
+ * [10 Hot IoT startups to watch][11]
+ * [The 6 ways to make money in IoT][12]
+ * [What is digital twin technology? [and why it matters]][13]
+ * [Blockchain, service-centric networking key to IoT success][14]
+ * [Getting grounded in IoT networking and security][15]
+ * [Building IoT-ready networks must become a priority][16]
+ * [What is the Industrial IoT? [And why the stakes are so high]][17]
+
+
+
+Join the Network World communities on [Facebook][18] and [LinkedIn][19] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3396188/the-traffic-jam-whopper-project-may-be-the-coolestdumbest-iot-idea-ever.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/burger-king-gift-card-100797164-large.jpg
+[2]: https://abc7news.com/food/burger-king-to-deliver-to-drivers-stuck-in-traffic/5299073/
+[3]: https://www.networkworld.com/article/3343917/the-big-picture-is-iot-in-the-enterprise-about-making-money-or-saving-money.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://www.nrn.com/technology/tech-tracker-burger-king-deliver-la-motorists-stuck-traffic?cid=
+[6]: https://www.youtube.com/watch?v=LXNgEZV7lNg
+[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
+[8]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[9]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[10]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[11]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[12]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[13]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[14]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[15]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[16]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[17]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[18]: https://www.facebook.com/NetworkWorld/
+[19]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md b/sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md
new file mode 100644
index 0000000000..61ae9e656b
--- /dev/null
+++ b/sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md
@@ -0,0 +1,55 @@
+[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Benchmarks of forthcoming Epyc 2 processor leaked) +[#]: via: (https://www.networkworld.com/article/3397081/benchmarks-of-forthcoming-epyc-2-processor-leaked.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Benchmarks of forthcoming Epyc 2 processor leaked +====== +Benchmarks of AMD's second-generation Epyc server briefly found their way online and show the chip is larger but a little slower than the Epyc 7601 on the market now. +![Gordon Mah Ung][1] + +Benchmarks of engineering samples of AMD's second-generation Epyc server, code-named “Rome,” briefly found their way online and show a very beefy chip running a little slower than its predecessor. + +Rome is based on the Zen 2 architecture, believed to be more of an incremental improvement over the prior generation than a major leap. It’s already known that Rome would feature a 64-core, 128-thread design, but that was about all of the details. + +**[ Also read:[Who's developing quantum computers][2] ]** + +The details came courtesy of SiSoftware's Sandra PC analysis and benchmarking tool. It’s very popular and has been used by hobbyists and benchmarkers alike for more than 20 years. New benchmarks are uploaded to the Sandra database all the time, and what I suspect happened is someone running a Rome sample ran the benchmark, not realizing the results would be uploaded to the Sandra database. + +The benchmarks were from two different servers, a Dell PowerEdge R7515 and a Super Micro Super Server. The Dell product number is not on the market, so this would indicate a future server with Rome processors. The entry has since been deleted, but several sites, including the hobbyist site Tom’s Hardware Guide, managed to [take a screenshot][3]. + +According to the entry, the chip is a mid-range processor with a base clock speed of 1.4GHz, jumping up to 2.2GHz in turbo mode, with 16MB of Level 2 cache and 256MB of Level 3 cache, the latter of which is crazy. The first-generation Epyc had just 32MB of L3 cache. + +That’s a little slower than the Epyc 7601 on the market now, but when you double the number of cores in the same space, something’s gotta give, and in this case, it’s electricity. The thermal envelope was not revealed by the benchmark. Previous Epyc processors ranged from 120 watts to 180 watts. + +Sandra ranked the processor at #3 for arithmetic and #5 for multimedia processing, which makes me wonder what on Earth beat the Rome chip. Interestingly, the servers were running Windows 10, not Windows Server 2019. + +**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]** + +Rome is expected to be officially launched at the massive Computex trade show in Taiwan on May 27 and will begin shipping in the third quarter of the year. + +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. 
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397081/benchmarks-of-forthcoming-epyc-2-processor-leaked.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/11/rome_2-100779395-large.jpg
+[2]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
+[3]: https://www.tomshardware.co.uk/amd-epyc-rome-processor-data-center,news-60265.html
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190523 Cisco ties its security-SD-WAN gear with Teridion-s cloud WAN service.md b/sources/talk/20190523 Cisco ties its security-SD-WAN gear with Teridion-s cloud WAN service.md
new file mode 100644
index 0000000000..2638987b16
--- /dev/null
+++ b/sources/talk/20190523 Cisco ties its security-SD-WAN gear with Teridion-s cloud WAN service.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cisco ties its security/SD-WAN gear with Teridion’s cloud WAN service)
+[#]: via: (https://www.networkworld.com/article/3396628/cisco-ties-its-securitysd-wan-gear-with-teridions-cloud-wan-service.html)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Cisco ties its security/SD-WAN gear with Teridion’s cloud WAN service
+======
+An agreement links Cisco Meraki MX Security/SD-WAN appliances and its Auto VPN technology to Teridion’s cloud-based WAN service that claims to accelerate TCP-based applications by up to 5X.
+![istock][1]
+
+Cisco and Teridion have tied the knot to deliver faster enterprise [software-defined WAN][2] services.
+
+The agreement links [Cisco Meraki][3] MX Security/SD-WAN appliances and its Auto [VPN][4] technology, which lets users quickly bring up and configure secure sessions between branches and data centers, with [Teridion’s cloud-based WAN service][5]. Teridion’s service promises customers better performance and control over traffic running from remote offices over the public internet to the [data center][6]. The service features what Teridion calls “Curated Routing,” which fuses WAN acceleration techniques with route optimization to speed traffic.
+
+**More about SD-WAN**
+
+ * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][7]
+ * [How to pick an off-site data-backup method][8]
+ * [SD-Branch: What it is and why you’ll need it][9]
+ * [What are the options for securing SD-WAN?][10]
+
+
+
+For example, Teridion says its WAN service can accelerate TCP-based applications like file transfers, backups and page loads, by as much as three to five times.
+
+“[The service] improves network performance for UDP-based applications like voice, video, RDP, and VDI. Enterprises can get carrier-grade performance over broadband and dedicated internet access. Depending on the locations of the sites, [customers] can expect to see a 15 to 30 percent reduction in latency.
That’s the difference between a great-quality video conference and an unworkable, choppy mess,” Teridion [stated][11].
+
+Teridion says the Meraki integration creates an IPSec connection from the Cisco Meraki MX to the Teridion edge. “Customers create locations in the Teridion portal and apply the preconfigured Meraki template to them, or just upload a csv file if you have a lot of locations. Then, from each Meraki MX, create a 3rd party IPSec tunnel to the Teridion edge IP addresses that are generated as part of the Teridion configuration.”
+
+The combined Cisco Meraki and Teridion offering brings SD-WAN and security capabilities at the WAN edge that are tightly integrated with a WAN service delivered over cost-effective broadband or dedicated Internet access, said Raviv Levi, director of product management at Cisco Meraki, in a statement. “This brings better reliability and consistency to the enterprise WAN across multiple sites, as well as high-performance access to all SaaS applications and cloud workloads.”
+
+Meraki’s MX family supports everything from SD-WAN and [Wi-Fi][12] features to next-generation [firewall][13] and intrusion prevention in a single package.
+
+Some studies show that by 2021 over 75 percent of enterprise traffic will be SaaS-oriented, so giving branch offices SD-WAN's reliable, secure transportation options will be a necessity, Cisco said when it [upgraded the Meraki][3] boxes last year.
+
+Cisco Meraki isn’t the only SD-WAN service Teridion supports. The company also has agreements with Citrix, Silver Peak, and VMware (VeloCloud). Teridion also has partnerships with over 25 cloud partners, including Google, Amazon Web Services and Microsoft Azure.
+
+[Teridion for Cisco Meraki][14] is available now from authorized Teridion resellers. Pricing starts at $50 per site per month.
+
+Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3396628/cisco-ties-its-securitysd-wan-gear-with-teridions-cloud-wan-service.html
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/02/istock-820219662-100749695-large.jpg
+[2]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
+[3]: https://www.networkworld.com/article/3301169/cisco-meraki-amps-up-throughput-wi-fi-to-sd-wan-family.html
+[4]: https://www.networkworld.com/article/3138952/5-things-you-need-to-know-about-virtual-private-networks.html
+[5]: https://www.networkworld.com/article/3284285/teridion-enables-higher-performing-and-more-responsive-saas-applications.html
+[6]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
+[7]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
+[8]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
+[9]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
+[10]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
+[11]: https://www.teridion.com/blog/teridion-announces-deep-integration-with-cisco-meraki-mx/
+[12]: https://www.networkworld.com/article/3318119/what-to-expect-from-wi-fi-6-in-2019.html
+[13]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
+[14]: https://www.teridion.com/meraki
+[15]: https://www.facebook.com/NetworkWorld/
+[16]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md b/sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md
new file mode 100644
index 0000000000..54ddf76db3
--- /dev/null
+++ b/sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Edge-based caching and blockchain-nodes speed up data transmission)
+[#]: via: (https://www.networkworld.com/article/3397105/edge-based-caching-and-blockchain-nodes-speed-up-data-transmission.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Edge-based caching and blockchain-nodes speed up data transmission
+======
+Using a combination of edge-based data caches and blockchain-like distributed networks, Bluzelle claims it can significantly speed up the delivery of data across the globe.
+![OlgaSalt / Getty Images][1]
+
+The combination of a blockchain-like distributed network, along with the ability to locate data at the edge, will massively speed up future networks, such as those used by the internet of things (IoT), claims Bluzelle in announcing what it says is the first decentralized data delivery network (DDN).
+
+Distributed DDNs will be like content delivery networks (CDNs) that now cache content around the world to speed up the web, but in this case, it will be for data, the Singapore-based company explains. Distributed key-value (blockchain) networks and edge computing built into Bluzelle's system will provide significantly faster delivery than existing caching, the company claims in a press release announcing its product.
+
+“The future of data delivery can only ever be de-centrally distributed,” says Pavel Bains, CEO and co-founder of Bluzelle. It’s because the world requires instant access to data that’s being created at the edge, he argues.
+
+“But delivery is hampered by existing technology,” he says.
+
+**[ Also read:[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3]. ]**
+
+Bluzelle says decentralized caching is the logical next step to generalized data caching, used for reducing latency. “Decentralized caching expands the theory of caching,” the company writes in a [report][4] (Dropbox pdf) on its [website][5]. It says the cache must be expanded from simply being located at one unique location.
+
+“Using a combination of distributed networks, the edge and the cloud, [it’s] thereby increasing the transactional throughput of data,” the company says.
+
+This kind of thing is particularly important in consumer gaming now, where split-second responses from players around the world make or break a game experience, but it will likely be crucial for the IoT, higher-definition media, artificial intelligence, and virtual reality as they gain more of a role in digitization—including in critical enterprise applications.
+
+“Currently applications are limited to data caching technologies that require complex configuration and management of 10-plus-year-old technology constrained to a few data centers,” Bains says. “These were not designed to handle the ever-increasing volumes of data.”
+
+Bains says one of the key selling points of Bluzelle's network is that developers should be able to implement and run networks without having to also physically expand the networks manually.
+
+“Software developers don’t want to react to where their customers come from. Our architecture is designed to always have the data right where the customer is. This provides a superior consumer experience,” he says.
+
+Data caches are around now, but Bluzelle claims its system, written in C++ and available on Linux and Docker containers, among other platforms, is faster than others. It further says that if its system and a more traditional cache were both connected to the same MySQL database in Virginia, say, its users would get the data three to 16 times faster than with a traditional “non-edge-caching” network. Writing updates to all Bluzelle nodes around the world takes 875 milliseconds (ms), it says.
+
+The company has been concentrating its efforts on gaming, and with a test setup in Virginia, it says it was able to deliver data 33 times faster—at 22ms to Singapore—than a normal, cloud-based data cache. That traditional cache (located near the database) took 727ms in the Bluzelle-published test. In a test to Ireland, it claims 16ms, versus 223ms using a traditional cache.
+
+An algorithm is partly the reason for the gains, the company explains. It “allows the nodes to make decisions and take actions without the need for masternodes,” the company says. Masternodes are the server-like parts of blockchain systems.
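+
+As a rough, do-it-yourself way to see the kind of gap Bluzelle is quoting, you can time the same fetch against a distant origin and a nearby cache node. This is a minimal sketch using hypothetical placeholder hostnames and the standard curl tool; it is not Bluzelle's own tooling:
+
+```
+# Compare total fetch time from a distant origin vs. a nearby cache node
+# (origin-virginia.example.com and cache-local.example.com are placeholders)
+for host in origin-virginia.example.com cache-local.example.com; do
+  curl -o /dev/null -s -w "$host: %{time_total}s\n" "https://$host/data/key123"
+done
+```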
+ +**More about edge networking** + + * [How edge networking and IoT will reshape data centers][3] + * [Edge computing best practices][6] + * [How edge computing can help secure the IoT][7] + + + +Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397105/edge-based-caching-and-blockchain-nodes-speed-up-data-transmission.html + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/blockchain_crypotocurrency_bitcoin-by-olgasalt-getty-100787949-large.jpg +[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[4]: https://www.dropbox.com/sh/go5bnhdproy1sk5/AAC5MDoafopFS7lXUnmiLAEFa?dl=0&preview=Bluzelle+Report+-+The+Decentralized+Internet+Is+Here.pdf +[5]: https://bluzelle.com/ +[6]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[7]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[8]: https://www.facebook.com/NetworkWorld/ +[9]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190523 Online performance benchmarks all companies should try to achieve.md b/sources/talk/20190523 Online performance benchmarks all companies should try to achieve.md new file mode 100644 index 0000000000..829fb127f8 --- /dev/null +++ b/sources/talk/20190523 Online performance benchmarks all companies should try to achieve.md @@ -0,0 +1,80 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Online performance benchmarks all companies should try to achieve) +[#]: via: (https://www.networkworld.com/article/3397322/online-performance-benchmarks-all-companies-should-try-to-achieve.html) +[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/) + +Online performance benchmarks all companies should try to achieve +====== +With digital performance more important than ever, companies must ensure their online performance meets customers’ needs. A new ThousandEyes report can help them determine that. +![Thinkstock][1] + +There's no doubt about it: We have entered the experience economy, and digital performance is more important than ever. + +Customer experience is the top brand differentiator, topping price and every other factor. And businesses that provide a poor digital experience will find customers will actively seek a competitor. In fact, recent ZK Research found that in 2018, about two-thirds of millennials changed loyalties to a brand because of a bad experience. (Note: I am an employee of ZK Research.) + +To help companies determine if their online performance is leading, lacking, or on par with some of the top companies, ThousandEyes this week released its [2019 Digital Experience Performance Benchmark Report][2]. 
This document provides a comparative analysis of web, infrastructure, and network performance from the top 20 U.S. digital retail, travel, and media websites. Although this is a small sampling of companies, those three industries are the most competitive when it comes to using their digital platforms for competitive advantage. The aggregated data from this report can be used as an industry-agnostic performance benchmark that all companies should strive to meet.
+
+**[ Read also:[IoT providers need to take responsibility for performance][3] ]**
+
+The methodology of the study was for ThousandEyes to use its own platform to provide an independent view of performance. It uses active monitoring and a global network of monitoring agents to measure application and network layer performance for websites, applications, and services. The company collected data from 36 major cities scattered across the U.S. Six of the locations (Ashburn, Chicago, Dallas, Los Angeles, San Jose, and Seattle) also included vantage points connected to six major broadband ISPs (AT&T, CenturyLink, Charter, Comcast, Cox, and Verizon). This acts as a good proxy for what a user would experience.
+
+The test involved page load tests against the websites of the major companies in retail, media, and travel and looked at several factors, including DNS response time, round-trip latency, network time (one-way latency), HTTP response time, and page load. The averages and median times can be seen in the table below. Those can be considered the average benchmarks that all companies should try to attain.
+
+![][4]
+
+### Choice of content delivery network matters by location
+
+ThousandEyes' report also looked at how the various services that companies use impact web performance. For example, the study measured the performance of the content delivery network (CDN) providers in the 36 markets. It found that in Albuquerque, Akamai and Fastly had the most latency, whereas Edgecast had the least. It also found that in Boston, all of the CDN providers were close. Companies can use this type of data to help them select a CDN. Without it, decision makers are essentially guessing and hoping.
+
+### CDN performance is impacted by ISP
+
+Another useful set of data was cross-referencing CDN performance by ISP, which led to some fascinating information. With Comcast, Akamai, CloudFront, Google, and Incapsula all had high amounts of latency. Only Edgecast and Fastly offered average latency. On the other hand, all of the CDNs worked great with CenturyLink. This tells a buyer, "If my customer base is largely in Comcast’s footprint, I should look at Edgecast or Fastly, or my customers will be impacted."
+
+### DNS and latency directly impact page load times
+
+The ThousandEyes study also confirmed some points that many people believe to be true but until now had no quantifiable evidence to support them. For example, it's widely accepted that DNS response time and network latency to the CDN edge correlate to web performance; the data in the report now supports that belief. ThousandEyes did some regression analysis and fancy math and found that in general, companies that were in the top quartile of HTTP performance had above-average DNS response time and network performance. There were a few exceptions, but in most cases, this is true.
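+
+If you want to spot-check your own site against metrics like these, the basic timings are easy to gather from the command line. This is a minimal sketch, assuming the standard dig, ping, and curl tools and a placeholder example.com hostname; ThousandEyes' platform does the same kind of measurement at scale from distributed vantage points:
+
+```
+# DNS response time (see the ";; Query time:" line in the output)
+dig example.com | grep "Query time"
+
+# Round-trip network latency to the web front end
+ping -c 5 example.com
+
+# DNS lookup, time-to-first-byte, and total HTTP fetch time for one request
+curl -o /dev/null -s -w "DNS: %{time_namelookup}s  TTFB: %{time_starttransfer}s  Total: %{time_total}s\n" https://example.com/
+```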
+
+Based on all the data, the below are the benchmarks for the three infrastructure metrics gathered, and they are what businesses, even ones outside the three verticals studied, should hope to achieve to support a high-quality digital experience.
+
+ * DNS response time: 25 ms
+ * Round-trip network latency: 15 ms
+ * HTTP response time: 250 ms
+
+
+
+### Operations teams need to focus on digital performance
+
+Benchmarking certainly provides value, but the report also offers some recommendations on how operations teams can use the data to improve digital performance. Those include:
+
+ * **Measure your site from distributed user vantage points**. There is no single point that will provide a view of digital performance everywhere. Instead, measure from a range of ISPs in different regions and take a multi-layered approach to visibility (application, network and routing).
+ * **Use internet performance information as a baseline**. Compare your organization's data to the baselines, and if you’re not meeting them in some markets, focus on improvement there.
+ * **Compare performance to industry peers**. In highly competitive industries, it’s important to understand how you rank versus the competition. Don’t be satisfied with hitting the benchmarks if your key competitors exceed them.
+ * **Build a strong performance stack.** The data shows that solid DNS and HTTP response times and low latency are correlated with solid page load times. Focus on optimizing those factors and consider them foundational to digital performance.
+
+
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397322/online-performance-benchmarks-all-companies-should-try-to-achieve.html
+
+作者:[Zeus Kerravala][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Zeus-Kerravala/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2017/07/racing_speed_runners_internet-speed-100728363-large.jpg
+[2]: https://www.thousandeyes.com/research/digital-experience
+[3]: https://www.networkworld.com/article/3340318/iot-providers-need-to-take-responsibility-for-performance.html
+[4]: https://images.idgesg.net/images/article/2019/05/thousandeyes-100797290-large.jpg
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md b/sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md
new file mode 100644
index 0000000000..51098dad33
--- /dev/null
+++ b/sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md
@@ -0,0 +1,93 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Study: Most enterprise IoT transactions are unencrypted)
+[#]: via: (https://www.networkworld.com/article/3396647/study-most-enterprise-iot-transactions-are-unencrypted.html)
+[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)
+
+Study: Most enterprise IoT transactions are unencrypted
+======
+A Zscaler report finds 91.5% of IoT communications within enterprises are in plaintext and so susceptible to interference.
+![HYWARDS / Getty Images][1]
+
+Of the millions of enterprise-[IoT][2] transactions examined in a recent study, the vast majority were sent without the benefit of encryption, leaving the data vulnerable to theft and tampering.
+
+The research by cloud-based security provider Zscaler found that about 91.5 percent of transactions by internet of things devices took place over plaintext, while 8.5 percent were encrypted with [SSL][3]. That means if attackers could intercept the unencrypted traffic, they’d be able to read it and possibly alter it, then deliver it as if it had not been changed.
+
+**[ For more on IoT security, see[our corporate guide to addressing IoT security concerns][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**
+
+Researchers looked through one month’s worth of enterprise traffic traversing Zscaler’s cloud seeking the digital footprints of IoT devices. They found and analyzed 56 million IoT-device transactions over that time, and identified the type of devices, the protocols they used, the servers they communicated with, how often communication went in and out, and general IoT traffic patterns.
+
+The team tried to find out which devices generate the most traffic and the threats they face. They discovered that 1,015 organizations had at least one IoT device. The most common devices were set-top boxes (52 percent), then smart TVs (17 percent), wearables (8 percent), data-collection terminals (8 percent), printers (7 percent), IP cameras and phones (5 percent) and medical devices (1 percent).
+
+While they represented only 8 percent of the devices, data-collection terminals generated 80 percent of the traffic.
+
+The breakdown is that 18 percent of the IoT devices use SSL to communicate all the time, and of the remaining 82 percent, half use it part of the time and half never use it.
+
+The study also found cases of plaintext HTTP being used to authenticate devices and to update software and firmware, as well as use of outdated crypto libraries and weak default credentials.
+
+While IoT devices are common in enterprises, “many of the devices are employee owned, and this is just one of the reasons they are a security concern,” the report says. Without strict policies and enforcement, these devices represent potential vulnerabilities.
+
+**[[Prepare to become a Certified Information Systems Security Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][6] ]**
+
+Another reason employee-owned IoT devices are a concern is that many businesses don’t consider them a threat because no data is stored on them. But if the data they gather is transmitted insecurely, it is at risk.
+
+### 5 tips to protect enterprise IoT
+
+Zscaler recommends these security precautions:
+
+ * Change default credentials to something more secure. As employees bring in devices, encourage them to use strong passwords and to keep their firmware current.
+ * Isolate IoT devices on networks and restrict inbound and outbound network traffic (see the firewall sketch after this list).
+ * Restrict access to IoT devices from external networks and block unnecessary ports from external access.
+ * Apply regular security and firmware updates to IoT devices, and secure network traffic.
+ * Deploy tools to gain visibility into shadow-IoT devices already inside the network so they can be protected.
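+
+To make the isolation and port-blocking tips concrete, here is a minimal sketch using Linux iptables on a gateway. The IoT VLAN interface eth0.20, the 192.168.20.0/24 subnet, and the eth0 uplink are hypothetical placeholders, and a real deployment would adapt the same idea to its own firewall and segmentation tooling:
+
+```
+# Let IoT devices reach the internet over HTTPS and DNS only
+iptables -A FORWARD -i eth0.20 -o eth0 -p tcp --dport 443 -j ACCEPT
+iptables -A FORWARD -i eth0.20 -o eth0 -p udp --dport 53 -j ACCEPT
+
+# Allow replies to established sessions back in, then drop everything else
+iptables -A FORWARD -o eth0.20 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
+iptables -A FORWARD -i eth0.20 -j DROP
+iptables -A FORWARD -o eth0.20 -j DROP
+```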
+ + + +**More on IoT:** + + * [What is edge computing and how it’s changing the network][7] + * [Most powerful Internet of Things companies][8] + * [10 Hot IoT startups to watch][9] + * [The 6 ways to make money in IoT][10] + * [What is digital twin technology? [and why it matters]][11] + * [Blockchain, service-centric networking key to IoT success][12] + * [Getting grounded in IoT networking and security][13] + * [Building IoT-ready networks must become a priority][14] + * [What is the Industrial IoT? [And why the stakes are so high]][15] + + + +Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3396647/study-most-enterprise-iot-transactions-are-unencrypted.html + +作者:[Tim Greene][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Tim-Greene/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/network_security_network_traffic_scanning_by_hywards_gettyimages-673891964_2400x1600-100796830-large.jpg +[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html +[3]: https://www.networkworld.com/article/3045953/5-things-you-need-to-know-about-ssl.html +[4]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html +[5]: https://www.networkworld.com/newsletters/signup.html +[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[7]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[8]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html +[9]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[10]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[11]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[12]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html +[13]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html +[14]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html +[15]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[16]: https://www.facebook.com/NetworkWorld/ +[17]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md b/sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md new file mode 100644 index 0000000000..026b5d8e81 --- /dev/null +++ b/sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md @@ -0,0 +1,121 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Managed WAN and the cloud-native SD-WAN) 
+[#]: via: (https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html)
+[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
+
+Managed WAN and the cloud-native SD-WAN
+======
+The motivation for WAN transformation is clear: today, organizations require improved internet access and last-mile connectivity, additional bandwidth, and a reduction in WAN costs.
+![Gerd Altmann \(CC0\)][1]
+
+In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud presence across on-premises data centers and remote site locations.
+
+The vast majority of organizations that I have consulted with have over 10 locations. And it is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.
+
+A WAN transformation project requires this diversity to be taken into consideration when choosing the SD-WAN vendor best able to satisfy both networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity; there are many more related aspects.
+
+**[ Related:[MPLS explained – What you need to know about multi-protocol label switching][2] ]**
+
+### Motivations for transforming the WAN
+
+The motivation for WAN transformation is clear: today, organizations want improved internet access and last-mile connectivity, plus additional bandwidth, along with a reduction in WAN costs. Replacing Multiprotocol Label Switching (MPLS) with SD-WAN has of course been the main driver of the SD-WAN evolution, but it is only a single piece of the jigsaw puzzle.
+
+Many SD-WAN vendors are quickly brought to their knees when they try to address security and provide direct internet access from remote site locations. The problem is how to ensure optimized cloud access that is secure, with improved visibility and predictable performance, without the high costs associated with MPLS. SD-WAN is not just about connecting locations. Primarily, it needs to combine many other important network and security elements into one seamless worldwide experience.
+
+According to a recent survey of enterprise IT managers from [Cato Networks][3], a staggering 85% will confront use cases in 2019 that are poorly addressed or outright ignored by SD-WAN. Examples include providing secure internet access from any location (50%) and improving visibility into, and control over, mobile access to cloud applications such as Office 365 (46%).
+
+### Issues with traditional SD-WAN vendors
+
+First and foremost, SD-WAN is unable to address the security challenges that arise during a WAN transformation. Such security challenges include protecting against malware and ransomware and implementing the necessary security policies. In addition, SD-WAN lacks the visibility required to police mobile users and remote site locations accessing resources in the public cloud.
+
+To combat this, organizations have to purchase additional equipment. There has always been, and will always be, a high cost associated with buying such security appliances. Furthermore, the additional tools that are needed to protect the remote site locations increase network complexity and reduce visibility. Let’s not forget that this variety of physical appliances requires talented engineers for design, deployment and maintenance.
+
+There will often be a single network cowboy.
This means the network and security configuration, along with the design essentials, are stored in the mind of the engineer, not in a central database where the knowledge can still be accessed if the engineer leaves his or her employment.
+
+The physical appliance approach to SD-WAN makes it hard, if not impossible, to accommodate the future. If the current SD-WAN vendors continue to focus just on connecting devices with physical appliances, they will have limited ability to accommodate, for example, the coming wave of networked IoT devices. With these factors in mind, what are the available options to overcome the SD-WAN shortcomings?
+
+One can opt for a do-it-yourself (DIY) solution or a managed service; the latter has traditionally been the domain of telcos, with the more recent improvements of co-managed and self-managed service categories.
+
+### Option 1: The DIY solution
+
+First, DIY. From my experience of trying to stitch together a global network, this is not only costly but also complex, and it is a very constrained approach to network transformation. We started with physical appliances decades ago, and they were sufficient to an extent. The reason they worked was that they suited the requirements of the time, but our environment has changed since then, and the approach needs to change with the current requirements.
+
+Even back in those days, we always had a breachable perimeter. The perimeter approach to networking and security never really worked, and it was just a matter of time before a bad actor would penetrate the guarded walls.
+
+Securing a global network involves more than just firewalling the devices. A solid security perimeter requires URL filtering, anti-malware and IPS to secure the internet traffic. If you try to deploy all these functions in a single device, such as unified threat management (UTM), you will hit scaling problems. As a result, you will be left with appliance sprawl.
+
+Back in my early days as an engineer, I recall stitching together a global network with a mixture of security and network appliances from a variety of vendors. It was me and just two others who used to get the job done on time, and for a production network, our uptime levels were superior to most.
+
+However, it involved too many late nights and daily flights to our PoPs, and of course the major changes required a forklift. A lot of work had to be done at that time, which made me want to push some or most of the work to a third party.
+
+### Option 2: The managed service solution
+
+Today, there is a growing need for the managed service approach to SD-WAN. Notably, it simplifies the network design, deployment and maintenance activities while offloading the complexity, in line with what most CIOs are talking about today.
+
+A managed service provides a number of benefits, such as the elimination of backhauling to centralized cloud connectors or VPN concentrators. Evidently, backhauling is never favored by a network architect. More often than not, it results in increased latency, congested links, internet chokepoints, and last-mile outages.
+
+A managed service can also authenticate mobile users at the local communication hub rather than at a centralized point, which would increase latency. So what options are available when considering a managed service?
+
+### Telcos: An average service level
+
+Let’s be honest, telcos have a mixed track record, and enterprises rely on them with caution.
Essentially, you are building a network with third-party appliances and services that put the technical expertise outside of the organization.
+
+Secondly, the telco must orchestrate, monitor and manage numerous technical domains, which is likely to introduce further complexity. As a result, troubleshooting requires close coordination with the suppliers, which has an impact on the customer experience.
+
+### Time equals money
+
+Resolving a query can easily take two or three attempts. It’s rare that you will get to the right person straight away. This eventually increases the time to resolve problems. Even for a minor feature change, you have to open tickets. Hence, with telcos, the time required to solve a problem grows.
+
+In addition, it takes time to make major network changes such as opening new locations, which can take up to 45 days. In the same report mentioned above, 71% of the respondents were frustrated with the time telco customer service takes to resolve problems, 73% indicated that deploying new locations requires at least 15 days, and 47% claimed that “high bandwidth costs” are their biggest frustration when working with telcos.
+
+When it comes to lead times for projects, an engineer does not care. Does a project manager care if you have an optimum network design? No, many don’t; most just care about the timeframes. During my career, now spanning 18 years, I have never seen comments from any of my contacts saying “you must adhere to your project manager’s timelines.”
+
+However, in my experience, the project managers have their ways, and lead times do become a big part of your daily job. So as an engineer, a 45-day lead time will certainly hit your brand hard, especially if you are an external consultant.
+
+There is also a problem with bandwidth costs. Telcos need to charge a premium to cover their complexity, and there is always going to be a series of problems when working with them. Let’s face it, they offer an average service level.
+
+### Co-management and self-service management
+
+What is needed is a service that brings the visibility and control of DIY to managed services. This, ultimately, opens the door to co-management and self-service management.
+
+Co-management allows both the telco and the enterprise to make changes to the WAN. Then we have self-service management of the WAN, which gives the enterprise sole control over aspects of its network.
+
+However, these are just sticking plasters covering up the flaws. We need a managed service that not only connects locations but also synthesizes the site connectivity, along with security, mobile access, and cloud access.
+
+### Introducing the cloud-native approach to SD-WAN
+
+There should be a new style of managed service that combines the best of both worlds. It should offer the uptime, predictability and reach of the best telcos along with the cost structure and versatility of cloud providers. All such requirements can be met by what is known as the cloud-native carrier.
+
+Therefore, we should be looking for a platform that can connect and secure all users and resources at scale, no matter where they are positioned. Ultimately, such a platform will limit costs and increase velocity and agility.
+
+This is what a cloud-native carrier can offer you. You could say it’s a new kind of managed service, which is what enterprises are now looking for. A cloud-native carrier service brings the best of cloud services to the world of networking.
This new style of managed service brings to SD-WAN the global reach, self-service, and agility of the cloud, with the ability to easily migrate from MPLS.
+
+In summary, a cloud-native carrier service will improve global connectivity to on-premises and cloud applications, enable secure branch-to-internet access, and integrate cloud data centers both securely and optimally.
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][4]**
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html
+
+作者:[Matt Conran][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Matt-Conran/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2017/03/network-wan-100713693-large.jpg
+[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
+[3]: https://www.catonetworks.com/news/digital-transformation-survey
+[4]: /contributor-network/signup.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md b/sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md
new file mode 100644
index 0000000000..8f6f46b6f2
--- /dev/null
+++ b/sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Moving to the Cloud? SD-WAN Matters!)
+[#]: via: (https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html)
+[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
+
+Moving to the Cloud? SD-WAN Matters!
+======
+
+![istock][1]
+
+This is the first in a two-part blog series that will explore how enterprises can realize the full transformation promise of the cloud by shifting to a business-first networking model powered by a business-driven [SD-WAN][2]. The focus for this installment will be on automating secure IPsec connectivity and intelligently steering traffic to cloud providers.
+
+Over the past several years we’ve seen a major shift in data center strategies where enterprise IT organizations are moving applications and workloads to the cloud, whether private or public. More and more, enterprises are leveraging software-as-a-service (SaaS) applications and infrastructure-as-a-service (IaaS) cloud services from leading providers like [Amazon AWS][3], [Google Cloud][4], [Microsoft Azure][5] and [Oracle Cloud Infrastructure][6]. This represents a dramatic shift in enterprise data traffic patterns, as fewer and fewer applications are hosted within the walls of the traditional corporate data center.
+
+There are several drivers for the shift to IaaS cloud services and SaaS apps, but business agility tops the list for most enterprises. The traditional IT model for provisioning and deprovisioning applications is rigid and inflexible and is no longer able to keep pace with changing business needs.
According to [LogicMonitor’s Cloud Vision 2020][7] study, more than 80 percent of enterprise workloads will run in the cloud by 2020, with more than 40 percent running on public cloud platforms. This major shift in the application consumption model is having a huge [impact on organizations and infrastructure][8]. A recent article entitled “[How Amazon Web Services is luring banks to the cloud][9],” published by CNBC, reported that some companies have already migrated all of their applications and IT workloads to public cloud infrastructures. An interesting fact is that while many enterprises must comply with stringent regulatory compliance mandates such as PCI-DSS or HIPAA, they have still made the move to the cloud. This tells us two things – the maturity of public cloud services and the trust these organizations have in using them are at an all-time high. Again, it is all about speed and agility – without compromising performance, security and reliability.
+
+### **Is there a direct correlation between moving to the cloud and adopting SD-WAN?**
+
+As the cloud enables businesses to move faster, an SD-WAN architecture where top-down business intent is the driver is critical to ensuring success, especially when branch offices are geographically distributed across the globe. Traditional router-centric WAN architectures were never designed to support today’s cloud consumption model for applications in the most efficient way. With a conventional router-centric WAN approach, access to applications residing in the cloud means traversing unnecessary hops, resulting in wasted bandwidth, additional cost, added latency and potentially higher packet loss. In addition, under the existing, traditional WAN model, where management tends to be rigid, complex network changes can be lengthy, whether setting up new branches or troubleshooting performance issues. This leads to inefficiencies and a costly operational model. Therefore, enterprises greatly benefit from taking a business-first WAN approach toward achieving greater agility, in addition to realizing substantial CAPEX and OPEX savings.
+
+A business-driven SD-WAN platform is purpose-built to tackle the challenges inherent to the traditional router-centric model and more aptly support today’s cloud consumption model. This means application policies are defined based on business intent, connecting users securely and directly to applications wherever they reside, without unnecessary extra hops or security compromises. For example, if the application is hosted in the cloud and is trusted, a business-driven SD-WAN can automatically connect users to it without backhauling traffic to a POP or HQ data center. Now, in general this traffic is usually going across an internet link which, on its own, may not be secure. However, the right SD-WAN platform will have a unified stateful firewall built in for local internet breakout, allowing only branch-initiated sessions to enter the branch and providing the ability to service-chain traffic to a cloud-based security service, if necessary, before forwarding it to its final destination. If the application moves and becomes hosted by another provider, or perhaps moves back to the company’s own data center, traffic must be intelligently redirected to wherever the application is now hosted. Without automation and embedded machine learning, dynamic and intelligent traffic steering is impossible.
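+
+To make the idea of business-intent policies concrete, here is a minimal sketch in Python of how a policy table might map an identified application to a forwarding action. It is illustrative only: the application names, trust labels and action strings are invented for this example and do not correspond to any vendor’s actual configuration model.
+
+```python
+# A minimal, illustrative sketch of business-intent traffic steering.
+# The apps, trust labels and actions below are invented; a real SD-WAN
+# derives them from packet classification and centrally managed policy.
+from dataclasses import dataclass
+
+@dataclass
+class AppPolicy:
+    trusted: bool       # do we trust the destination service?
+    cloud_hosted: bool  # is it hosted outside the corporate data center?
+
+POLICIES = {
+    "office365":  AppPolicy(trusted=True, cloud_hosted=True),
+    "legacy-erp": AppPolicy(trusted=True, cloud_hosted=False),
+}
+DEFAULT = AppPolicy(trusted=False, cloud_hosted=True)
+
+def forwarding_action(app: str) -> str:
+    """Map an identified application to a forwarding decision."""
+    policy = POLICIES.get(app, DEFAULT)
+    if policy.cloud_hosted and policy.trusted:
+        # Trusted SaaS: break out to the internet locally, no backhaul.
+        return "local-internet-breakout"
+    if policy.cloud_hosted:
+        # Unknown or untrusted: service-chain via a cloud security service.
+        return "service-chain-to-cloud-security"
+    # On-premises application: carry it over the overlay to the data center.
+    return "overlay-to-data-center"
+
+if __name__ == "__main__":
+    for app in ("office365", "dropbox", "legacy-erp"):
+        print(f"{app:>10} -> {forwarding_action(app)}")
+```
+
+If the application moves to a different provider, only the policy entry changes; the steering logic, like the business intent, stays the same.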
+
+### **A closer look at how the Silver Peak EdgeConnect™ SD-WAN edge platform addresses these challenges:**
+
+**Automate traffic steering and connectivity to cloud providers**
+
+An [EdgeConnect][10] virtual instance is easily spun up in any of the [leading cloud providers][11] through their respective marketplaces. For an SD-WAN to intelligently steer traffic to its destination, it requires insights into both HTTP and HTTPS traffic; it must be able to identify apps on the first packet received in order to steer traffic to the right destination in accordance with business intent. This is a critical capability because once a TCP connection is NAT’d with a public IP address, it cannot be switched, and thus can’t be re-routed, once the connection is established. So, the ability of EdgeConnect to identify, classify and automatically steer traffic to the correct destination based on the first packet – and not the second or tenth packet – will assure application SLAs, minimize the waste of expensive bandwidth and deliver the highest quality of experience.
+
+Another critical capability is automatic performance optimization. Irrespective of which link the traffic ends up traversing based on business intent and the unique requirements of the application, EdgeConnect automatically optimizes application performance without human intervention, correcting for out-of-order packets with Packet Order Correction (POC) and compensating for high-latency conditions that can be related to distance or other issues. This is done using adaptive Forward Error Correction (FEC) and tunnel bonding, in which a virtual tunnel is created over multiple underlay WAN services, resulting in a single logical overlay across which traffic can be dynamically moved between the different paths as conditions on each underlay change. In this [lightboard video][12], Dinesh Fernando, a technical marketing engineer at Silver Peak, explains how EdgeConnect automates tunnel creation between sites and cloud providers, how it simplifies data transfers between multi-clouds, and how it improves application performance.
+
+If your business is global and increasingly dependent on the cloud, the business-driven EdgeConnect SD-WAN edge platform enables seamless multi-cloud connectivity, turning the network into a business accelerant. EdgeConnect delivers:
+
+ 1. A consistent deployment from the branch to the cloud, extending the reach of the SD-WAN into virtual private cloud environments
+ 2. Multi-cloud flexibility, making it easier to initiate and distribute resources across multiple cloud providers
+ 3.
Investment protection by confidently migrating on premise IT resources to any combination of the leading public cloud platforms, knowing their cloud-hosted instances will be fully supported by EdgeConnect + + + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html + +作者:[Rami Rammaha][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Rami-Rammaha/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/istock-899678028-100797709-large.jpg +[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained +[3]: https://www.silver-peak.com/company/tech-partners/cloud/aws +[4]: https://www.silver-peak.com/company/tech-partners/cloud/google-cloud +[5]: https://www.silver-peak.com/company/tech-partners/cloud/microsoft-azure +[6]: https://www.silver-peak.com/company/tech-partners/cloud/oracle-cloud +[7]: https://www.logicmonitor.com/resource/the-future-of-the-cloud-a-cloud-influencers-survey/?utm_medium=pr&utm_source=businesswire&utm_campaign=cloudsurvey +[8]: http://www.networkworld.com/article/3152024/lan-wan/in-the-age-of-digital-transformation-why-sd-wan-plays-a-key-role-in-the-transition.html +[9]: http://www.cnbc.com/2016/11/30/how-amazon-web-services-is-luring-banks-to-the-cloud.html?__source=yahoo%257cfinance%257cheadline%257cheadline%257cstory&par=yahoo&doc=104135637 +[10]: https://www.silver-peak.com/products/unity-edge-connect +[11]: https://www.silver-peak.com/company/tech-partners?strategic_partner_type=69 +[12]: https://www.silver-peak.com/resource-center/automate-connectivity-to-cloud-networking-with-sd-wan diff --git a/sources/talk/20190528 With Cray buy, HPE rules but does not own the supercomputing market.md b/sources/talk/20190528 With Cray buy, HPE rules but does not own the supercomputing market.md new file mode 100644 index 0000000000..07f9eea10c --- /dev/null +++ b/sources/talk/20190528 With Cray buy, HPE rules but does not own the supercomputing market.md @@ -0,0 +1,59 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (With Cray buy, HPE rules but does not own the supercomputing market) +[#]: via: (https://www.networkworld.com/article/3397087/with-cray-buy-hpe-rules-but-does-not-own-the-supercomputing-market.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +With Cray buy, HPE rules but does not own the supercomputing market +====== +In buying supercomputer vendor Cray, HPE has strengthened its high-performance-computing technology, but serious competitors remain. +![Cray Inc.][1] + +Hewlett Packard Enterprise was already the leader in the high-performance computing (HPC) sector before its announced acquisition of supercomputer maker Cray earlier this month. Now it has a commanding lead, but there are still competitors to the giant. + +The news that HPE would shell out $1.3 billion to buy the company came just as Cray had announced plans to build three of the biggest systems yet — all exascale, and all with the same deployment time of 2021. 
Sales had been slowing for HPC systems, but our government, with its endless supply of money, came to the rescue, throwing hundreds of millions at Cray for systems to be built at Lawrence Berkeley National Laboratory, Argonne National Laboratory and Oak Ridge National Laboratory.
+
+**[ Read also:[How to plan a software-defined data-center network][2] ]**
+
+And HPE sees a big revenue opportunity in HPC, a market that was $2 billion in 1990 and is now nearly $30 billion, according to Steve Conway, senior vice president with Hyperion Research, which follows the HPC market. HPE thinks the HPC market will grow to $35 billion by 2021, and it hopes to earn a big chunk of that pie.
+
+“They were solidly in the lead without Cray. They were already in a significant lead over the No. 2 company, Dell. This adds to their lead and gives them access to very high end of market, especially government supercomputers that sell for $300 million to $600 million each,” said Conway.
+
+He’s not exaggerating. Earlier this month the U.S. Department of Energy announced a contract with Cray to build Frontier, an exascale supercomputer at Oak Ridge National Laboratory, sometime in 2021, with a $600 million price tag. Frontier will be powered by AMD Epyc processors and Radeon GPUs, which must have them doing backflips at AMD.
+
+With Cray, HPE is sitting on a lot of technology for the supercomputing and even the high-end, non-HPC market. It had the ProLiant business, the bulk of server sales (and proof the Compaq acquisition wasn’t such a bad idea), Integrity NonStop mission-critical servers, the SGI business it acquired in 2016, plus a variety of systems running everything from Arm to Xeon Scalable processors.
+
+Conway thinks all of those technologies fit in different spaces, so he doubts HPE will try to consolidate any of them. All HPE has said so far is that it will keep the supercomputer products it has now under the Cray business unit.
+
+But the company is still getting something it didn’t have. “It takes a certain kind of technical experience [to do HPC right] and only a few companies able to play at that level. Before this deal, HPE was not one of them,” said Conway.
+
+And in the process, HPE takes Cray away from its many competitors: IBM, Lenovo, Dell/EMC, Huawei (well, not so much now), Super Micro, NEC, Hitachi, Fujitsu, and Atos.
+
+“[The acquisition] doesn’t fundamentally change things because there’s still enough competitors that buyers can have competitive bids. But it’s gotten to be a much bigger market,” said Conway.
+
+Cray sells a lot to government, but Conway thinks there is a new opportunity in the ever-expanding AI race. “Because HPC is indispensable at the forefront of AI, there is a new area for expanding the market,” he said.
+
+Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397087/with-cray-buy-hpe-rules-but-does-not-own-the-supercomputing-market.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/06/the_cray_xc30_piz_daint_system_at_the_swiss_national_supercomputing_centre_via_cray_inc_3x2_978x652-100762113-large.jpg
+[2]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190529 Cisco security spotlights Microsoft Office 365 e-mail phishing increase.md b/sources/talk/20190529 Cisco security spotlights Microsoft Office 365 e-mail phishing increase.md
new file mode 100644
index 0000000000..c1e0493e63
--- /dev/null
+++ b/sources/talk/20190529 Cisco security spotlights Microsoft Office 365 e-mail phishing increase.md
@@ -0,0 +1,92 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cisco security spotlights Microsoft Office 365 e-mail phishing increase)
+[#]: via: (https://www.networkworld.com/article/3398925/cisco-security-spotlights-microsoft-office-365-e-mail-phishing-increase.html)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Cisco security spotlights Microsoft Office 365 e-mail phishing increase
+======
+Cisco blog follows DHS Cybersecurity and Infrastructure Security Agency (CISA) report detailing risks around Office 365 and other cloud services
+![weerapatkiatdumrong / Getty Images][1]
+
+It’s no secret that if you have a cloud-based e-mail service, fighting off the barrage of security issues has become a maddening daily routine.
+
+The leading e-mail service – in [Microsoft’s Office 365][2] package – seems to be getting the most attention from those attackers hellbent on stealing enterprise data or your private information via phishing attacks. Amazon and Google see their share of phishing attempts in their cloud-based services as well.
+
+**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
+
+But attackers are crafting and launching phishing campaigns targeting Office 365 users, [wrote][5] Ben Nahorney, a threat intelligence analyst covering the threat landscape for Cisco Security, in a blog post focusing on the Office 365 phishing issue.
+
+Nahorney wrote of research from security vendor [Agari Data][6], which found that over the last few quarters there has been a steady increase in the number of phishing emails impersonating Microsoft. While Microsoft has long been the most commonly impersonated brand, it now accounts for more than half of all brand impersonations seen in the last quarter.
+
+Recently, cloud security firm Avanan reported in its [annual phishing report][7] that one in every 99 emails is a phishing attack, using malicious links and attachments as the main vector.
“Of the phishing attacks we analyzed, 25 percent bypassed Office 365 security, a number that is likely to increase as attackers design new obfuscation methods that take advantage of zero-day vulnerabilities on the platform,” Avanan wrote.
+
+The attackers attempt to steal a user’s login credentials with the goal of taking over accounts. If successful, attackers can often log into the compromised accounts and perform a wide variety of malicious activities: spreading malware, spam and phishing emails from within the internal network; carrying out tailored attacks such as spear phishing and [business email compromise][8] [a long-standing business scam that uses spear-phishing, social engineering, identity theft and e-mail spoofing]; and targeting partners and customers, Nahorney wrote.
+
+Nahorney wrote that at first glance, this may not seem very different from external email-based attacks. However, there is one critical distinction: the malicious emails are now coming from legitimate accounts.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][9] ]**
+
+“For the recipient, it’s often even someone that they know, eliciting trust in a way that would not necessarily be afforded to an unknown source. To make things more complicated, attackers often leverage ‘conversation hijacking,’ where they deliver their payload by replying to an email that’s already located in the compromised inbox,” Nahorney stated.
+
+The methods used by attackers to gain access to an Office 365 account are fairly straightforward, Nahorney wrote.
+
+“The phishing campaigns usually take the form of an email from Microsoft. The email contains a request to log in, claiming the user needs to reset their password, hasn’t logged in recently or that there’s a problem with the account that needs their attention. A URL is included, enticing the reader to click to remedy the issue,” Nahorney wrote.
+
+Once logged in, nefarious activities can go on unnoticed, as the attacker has what look like authorized credentials.
+
+“This gives the attacker time for reconnaissance: a chance to observe and plan additional attacks. Nor will this type of attack set off a security alert in the same way something like a brute-force attack against a webmail client will, where the attacker guesses password after password until they get in or are detected,” Nahorney stated.
+
+Nahorney suggested the following steps customers can take to protect email:
+
+ * Use multi-factor authentication. If a login attempt requires a secondary authorization before someone is allowed access to an inbox, this will stop many attackers, even those with phished credentials.
+ * Deploy advanced anti-phishing technologies. Some machine-learning technologies can use local identity and relationship modeling alongside behavioral analytics to spot deception-based threats.
+ * Run regular phishing exercises. Regular, mandated phishing exercises across the entire organization will help train employees to recognize phishing emails, so that they don’t click on malicious URLs or enter their credentials into malicious websites.
+
+
+
+### Homeland Security flags Office 365, other cloud email services
+
+The U.S. government, too, has been warning customers of Office 365 and other cloud-based email services that they should be on alert for security risks.
The US Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) this month [issued a report targeting][10] Office 365 and other cloud services saying: + +“Organizations that used a third party have had a mix of configurations that lowered their overall security posture (e.g., mailbox auditing disabled, unified audit log disabled, multi-factor authentication disabled on admin accounts). In addition, the majority of these organizations did not have a dedicated IT security team to focus on their security in the cloud. These security oversights have led to user and mailbox compromises and vulnerabilities.” + +The agency also posted remediation suggestions including: + + * Enable unified audit logging in the Security and Compliance Center. + * Enable mailbox auditing for each user. + * Ensure Azure AD password sync is planned for and configured correctly, prior to migrating users. + * Disable legacy email protocols, if not required, or limit their use to specific users. + + + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3398925/cisco-security-spotlights-microsoft-office-365-e-mail-phishing-increase.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/cso_phishing_social_engineering_security_threat_by_weerapatkiatdumrong_gettyimages-489433130_3x2_2400x1600-100796450-large.jpg +[2]: https://docs.microsoft.com/en-us/office365/securitycompliance/security-roadmap +[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html +[4]: https://www.networkworld.com/newsletters/signup.html +[5]: https://blogs.cisco.com/security/office-365-phishing-threat-of-the-month +[6]: https://www.agari.com/ +[7]: https://www.avanan.com/hubfs/2019-Global-Phish-Report.pdf +[8]: https://www.networkworld.com/article/3195072/fbi-ic3-vile-5b-business-e-mail-scam-continues-to-breed.html +[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[10]: https://www.us-cert.gov/ncas/analysis-reports/AR19-133A +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190529 Nvidia launches edge computing platform for AI processing.md b/sources/talk/20190529 Nvidia launches edge computing platform for AI processing.md new file mode 100644 index 0000000000..f608db970c --- /dev/null +++ b/sources/talk/20190529 Nvidia launches edge computing platform for AI processing.md @@ -0,0 +1,53 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Nvidia launches edge computing platform for AI processing) +[#]: via: (https://www.networkworld.com/article/3397841/nvidia-launches-edge-computing-platform-for-ai-processing.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Nvidia launches edge computing platform for AI processing +====== +EGX platform goes to the edge to do 
as much processing there as possible before sending data upstream to major data centers.
+![Leo Wolfert / Getty Images][1]
+
+Nvidia is launching a new platform, called EGX, designed to bring real-time artificial intelligence (AI) to edge networks. The idea is to put AI computing closer to where sensors collect data, before it is sent to larger data centers.
+
+The edge serves as a buffer for data sent to data centers. It whittles down the data collected and only sends what is relevant up to major data centers for processing. This can mean discarding more than 90% of the data collected, but the trick is knowing which data to keep and which to discard (a minimal sketch of this filtering pattern appears at the end of this article).
+
+“AI is required in this data-driven world,” said Justin Boitano, senior director for enterprise and edge computing at Nvidia, on a press call last Friday. “We analyze data near the source, capture anomalies and report anomalies back to the mothership for analysis.”
+
+**[ Now read[20 hot jobs ambitious IT pros should shoot for][2]. ]**
+
+Boitano said we are hitting a crossover point where there is more compute at the edge than in the cloud, because more work needs to be done there.
+
+EGX systems come from 14 server vendors in a range of form factors, combining AI with networking, security and storage from Mellanox. Boitano said the servers will fit in any industry-standard rack, so they will fit into edge containers from the likes of Vapor IO and Schneider Electric.
+
+EGX scales from Nvidia’s low-power Jetson Nano processor all the way up to Nvidia T4 processors, which can deliver more than 10,000 trillion operations per second (TOPS) for real-time speech recognition and other real-time AI tasks.
+
+Nvidia is also working on a software stack called Nvidia Edge Stack that can be updated constantly, and the software runs in containers, so no reboots are required, just a restart of the container. EGX runs enterprise-grade Kubernetes container platforms like Red Hat OpenShift.
+
+Edge Stack is optimized software that includes Nvidia drivers, a CUDA Kubernetes plugin, a CUDA container runtime, CUDA-X libraries and containerized AI frameworks and applications, including TensorRT, TensorRT Inference Server and DeepStream.
+
+The company is boasting more than 40 early adopters, including BMW Group Logistics, which uses EGX and its own Isaac robotic platforms to handle increasingly complex logistics with real-time efficiency.
+
+Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
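+
+As a rough illustration of the filter-at-the-edge pattern Boitano describes, consider a sensor gateway that keeps a rolling baseline and forwards only statistical outliers upstream. This is a toy sketch, not Nvidia’s EGX software: a real deployment would run a trained model on the GPU rather than a z-score test, and the window size, threshold and simulated sensor feed below are invented. The point is the flow, keeping most data local and reporting anomalies to the mothership.
+
+```python
+# A toy sketch of anomaly filtering at the edge; not Nvidia's EGX software.
+from collections import deque
+from statistics import mean, stdev
+import random
+
+class EdgeAnomalyFilter:
+    """Keep a rolling baseline locally; forward only outliers upstream."""
+
+    def __init__(self, window: int = 200, z_threshold: float = 3.0):
+        self.readings = deque(maxlen=window)
+        self.z_threshold = z_threshold
+
+    def should_forward(self, value: float) -> bool:
+        """True if this reading looks anomalous enough to send upstream."""
+        anomalous = False
+        if len(self.readings) >= 30:  # wait for a usable baseline
+            mu, sigma = mean(self.readings), stdev(self.readings)
+            anomalous = sigma > 0 and abs(value - mu) / sigma > self.z_threshold
+        self.readings.append(value)
+        return anomalous
+
+if __name__ == "__main__":
+    random.seed(1)
+    flt, sent, total = EdgeAnomalyFilter(), 0, 10_000
+    for _ in range(total):
+        reading = random.gauss(20.0, 0.5)   # normal sensor noise
+        if random.random() < 0.01:          # occasional fault spike
+            reading += random.choice((-5.0, 5.0))
+        sent += flt.should_forward(reading)
+    print(f"forwarded {sent}/{total} readings; "
+          f"{100 * (1 - sent / total):.1f}% discarded at the edge")
+```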
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397841/nvidia-launches-edge-computing-platform-for-ai-processing.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_by_leowolfert_gettyimages-689799380_2400x1600-100788464-large.jpg
+[2]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md b/sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md
new file mode 100644
index 0000000000..383fac66ca
--- /dev/null
+++ b/sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md
@@ -0,0 +1,63 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Satellite-based internet possible by year-end, says SpaceX)
+[#]: via: (https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Satellite-based internet possible by year-end, says SpaceX
+======
+Amazon, Tesla-associated SpaceX and OneWeb are emerging as just some of the potential suppliers of a new kind of data-friendly satellite internet service that could bring broadband IoT connectivity to most places on Earth.
+![Getty Images][1]
+
+With SpaceX’s successful launch of an initial array of broadband-internet-carrying satellites last week, and Amazon’s surprising posting of numerous satellite engineering-related job openings on its [job board][2] this month, one might well be asking if the next-generation internet space race is finally getting going. (I first wrote about [OneWeb’s satellite internet plans][3], which it was concocting with Airbus, four years ago.)
+
+This new batch of satellite-driven internet systems, if they work and are eventually switched on, could provide broadband to most places, including previously internet-barren locations, such as rural areas. That would be good for high-bandwidth, low-latency remote internet-of-things (IoT) links and for the increasingly important edge-server connections for verticals like oil and gas and maritime. [Data could even end up getting stored in compliance-friendly outer space, too][4]. Leaky ground-based connections could, perhaps, also become a thing of the past.
+
+Of the principal new internet suppliers, SpaceX has gotten farthest along. That’s in part because it has commercial impetus: it needed to create payload for its numerous rocket projects. The Tesla electric-car-associated company (the two firms share materials science) has not only launched its first tranche of 60 satellites for its own internet constellation, called Starlink, but also successfully launched numerous batches (making up the full constellation of 75 satellites) for Iridium’s replacement, an upgraded constellation called Iridium NEXT.
+ +[The time of 5G is almost here][5] + +Potential competitor OneWeb launched its first six Airbus-built satellites in February. [It has plans for 900 more][6]. SpaceX has been approved for 4,365 more by the FCC, and Project Kuiper, as Amazon’s space internet project is known, wants to place 3,236 satellites in orbit, according to International Telecommunication Union filings [discovered by _GeekWire_][7] earlier this year. [Startup LeoSat, which I wrote about last year, aims to build an internet backbone constellation][8]. Facebook, too, is exploring [space-delivered internet][9]. + +### Why the move to space? + +Laser technical progress, where data is sent in open, free space, rather than via a restrictive, land-based cable or via traditional radio paths, is partly behind this space-internet rush. “Bits travel faster in free space than in glass-fiber cable,” LeoSat explained last year. Additionally, improving microprocessor tech is also part of the mix. + +One important difference from existing older-generation satellite constellations is that this new generation of internet satellites will be located in low Earth orbit (LEO). Initial Starlink satellites will be placed at about 350 miles above Earth, with later launches deployed at 710 miles. + +There’s an advantage to that. Traditional satellites in geostationary orbit, or GSO, have been deployed about 22,000 miles up. That extra distance versus LEO introduces latency and is one reason earlier generations of Internet satellites are plagued by slow round-trip times. Latency didn’t matter when GSO was introduced in 1964, and commercial satellites, traditionally, have been pitched as one-way video links, such as are used by sporting events for broadcast, and not for data. + +And when will we get to experience these new ISPs? “Starlink is targeted to offer service in the Northern U.S. and Canadian latitudes after six launches,” [SpaceX says on its website][10]. Each launch would deliver about 60 satellites. “SpaceX is targeting two to six launches by the end of this year.” + +Global penetration of the “populated world” could be obtained after 24 launches, it thinks. + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. 
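+
+The altitude figures above do most of the work in that latency argument. As a back-of-the-envelope check, the speed of light alone puts a floor under satellite round-trip times. The sketch below ignores processing, queuing and terrestrial legs, so real figures will be higher; it only shows why orbit altitude dominates.
+
+```python
+# Minimum round-trip time for a bent-pipe satellite link:
+# user -> satellite -> ground station and back again (4 one-way legs).
+# Ignores processing, queuing and terrestrial delays, so these are floors.
+C_MILES_PER_SEC = 186_282  # speed of light in vacuum
+
+def min_rtt_ms(altitude_miles: float) -> float:
+    return 4 * altitude_miles / C_MILES_PER_SEC * 1_000
+
+for name, altitude in [("Starlink, initial shells", 350),
+                       ("Starlink, later shells", 710),
+                       ("Geostationary orbit", 22_000)]:
+    print(f"{name:>24}: {min_rtt_ms(altitude):6.1f} ms minimum RTT")
+
+# Geostationary works out to roughly half a second before any real-world
+# overhead, which is why LEO constellations matter for interactive traffic.
+```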
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/10/network_iot_world-map_us_globe_nodes_global-100777483-large.jpg +[2]: https://www.amazon.jobs/en/teams/projectkuiper +[3]: https://www.itworld.com/article/2938652/space-based-internet-starts-to-get-serious.html +[4]: https://www.networkworld.com/article/3200242/data-should-be-stored-data-in-space-firm-says.html +[5]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html +[6]: https://www.airbus.com/space/telecommunications-satellites/oneweb-satellites-connection-for-people-all-over-the-globe.html +[7]: https://www.geekwire.com/2019/amazon-lists-scores-jobs-bellevue-project-kuiper-broadband-satellite-operation/ +[8]: https://www.networkworld.com/article/3328645/space-data-backbone-gets-us-approval.html +[9]: https://www.networkworld.com/article/3338081/light-based-computers-to-be-5000-times-faster.html +[10]: https://www.starlink.com/ +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md b/sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md new file mode 100644 index 0000000000..9b65a6c8dd --- /dev/null +++ b/sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md @@ -0,0 +1,69 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Survey finds SD-WANs are hot, but satisfaction with telcos is not) +[#]: via: (https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html) +[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/) + +Survey finds SD-WANs are hot, but satisfaction with telcos is not +====== +A recent survey of over 400 IT executives by Cato Networks found that legacy telcos might be on the outside looking in for SD-WANs. +![istock][1] + +This week SD-WAN vendor Cato Networks announced the results of its [Telcos and the Future of the WAN in 2019 survey][2]. The study was a mix of companies of all sizes, with 42% being enterprise-class (over 2,500 employees). More than 70% had a network with more than 10 locations, and almost a quarter (24%) had over 100 sites. All of the respondents have a cloud presence, and almost 80% have at least two data centers. The survey had good geographic diversity, with 57% of respondents coming from the U.S. and 24% from Europe. + +Highlights of the survey include the following key findings: + +## **SD-WANs are hot but not a panacea to all networking challenges** + +The survey found that 44% of respondents have already deployed or will deploy an SD-WAN within the next 12 months. This number is up sharply from 25% when Cato ran the survey a year ago. Another 33% are considering SD-WAN but have no immediate plans to deploy. 
The primary drivers for the evolution of the WAN are improved internet access (46%), increased bandwidth (39%), improved last-mile availability (38%) and reduced WAN costs (37%). It’s good to see cost savings drop to fourth in motivation, since there is so much more to SD-WAN. + +[The time of 5G is almost here][3] + +It’s interesting that the majority of respondents believe SD-WAN alone can’t address all challenges facing the WAN. A whopping 85% stated they would be confronting issues not addressed by SD-WAN alone. This includes secure, local internet breakout, improved visibility, and control over mobile access to cloud apps. This indicates that customers are looking for SD-WAN to be the foundation of the WAN but understand that other technologies need to be deployed as well. + +## **Telco dissatisfaction is high** + +The traditional telco has been a point of frustration for network professionals for years, and the survey spelled that out loud and clear. Prior to being an analyst, I held a number of corporate IT positions and found telcos to be the single most frustrating group of companies to deal with. The problem was, there was no choice. If you need MPLS services, you need a telco. The same can’t be said for SD-WANs, though; businesses have more choices. + +Respondents to the survey ranked telco service as “average.” It’s been well documented that we are now in the customer-experience era and “good enough” service is no longer good enough. Regarding pricing, 54% gave telcos a failing grade. Although price isn’t everything, this will certainly open the door to competitive SD-WAN vendors. Respondents gave the highest marks for overall experience to SaaS providers, followed by cloud computing suppliers. Global telcos scored the lowest of all vendor types. + +A look deeper explains the frustration level. The network is now mission-critical for companies, but 48% stated they are able to reach the support personnel with the right expertise to solve a problem only on a second attempt. No retailer, airline, hotel or other type of company could survive this, but telco customers had no other options for years. + +**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]** + +Another interesting set of data points is the speed at which telcos address customer needs. Digital businesses compete on speed, but telco process is the antithesis of fast. Moves, adds and changes take at least one business day for half of the respondents. Also, 70% indicated that opening a new location takes 15 days, and 38% stated it requires 45 days or more. + +## **Security is now part of SD-WAN** + +The use of broadband, cloud access and other trends raise the bar on security for SD-WAN, and the survey confirmed that respondents are skeptical that SD-WANs could address these issues. Seventy percent believe SD-WANs can’t address malware/ransomware, and 49% don’t think SD-WAN helps with enforcing company policies on mobile users. Because of this, network professionals are forced to buy additional security tools from other vendors, but that can drive up complexity. SD-WAN vendors that have intrinsic security capabilities can use that as a point of differentiation. + +## **Managed services are critical to the growth of SD-WANs** + +The survey found that 75% of respondents are using some kind of managed service provider, versus only 25% using an appliance vendor. This latter number was 32% last year. 
I’m not surprised by this shift and expect it to continue. Legacy WANs were inefficient but straightforward to deploy. SD-WANs are highly agile and more cost-effective, but complexity has gone through the roof. Network engineers need to factor in cloud connectivity, distributed security, application performance, broadband connectivity and other issues. Managed services can help businesses enjoy the benefits of SD-WAN while masking the complexity.
+
+Despite the desire to use an MSP, respondents don’t want to give up total control. Eighty percent stated they preferred self-service or co-managed models. This further explains the shift away from telcos, since they typically work with fully managed models.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html
+
+作者:[Zeus Kerravala][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Zeus-Kerravala/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/02/istock-465661573-100750447-large.jpg
+[2]: https://www.catonetworks.com/news/digital-transformation-survey/
+[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md b/sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md
new file mode 100644
index 0000000000..6d955c6485
--- /dev/null
+++ b/sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (IoT Roundup: New research on IoT security, Microsoft leans into IoT)
+[#]: via: (https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+IoT Roundup: New research on IoT security, Microsoft leans into IoT
+======
+Verizon sets up widely available narrow-band IoT service, while most Americans think IoT manufacturers should ensure their products protect personal information.
+As with any technology whose use is expanding at such speed, it can be tough to track exactly what’s going on in the [IoT][1] world – everything from basic usage numbers to customer attitudes to more in-depth slices of the market is constantly changing. Fortunately, the month of May brought several new pieces of research to light, which should help provide at least a partial outline of what’s really happening in IoT.
+
+### Internet of things polls
+
+Not all of the news is good.
An IPSOS Mori poll performed on behalf of the Internet Society and Consumers International (respectively, an umbrella organization for open development and Internet use and a broad-based consumer advocacy group) found that, despite the skyrocketing numbers of smart devices in circulation around the world, more than half of users in large parts of the western world don’t trust those devices to safeguard their privacy. + +**More on IoT:** + + * [What is the IoT? How the internet of things works][2] + * [What is edge computing and how it’s changing the network][3] + * [Most powerful Internet of Things companies][4] + * [10 Hot IoT startups to watch][5] + * [The 6 ways to make money in IoT][6] + * [What is digital twin technology? [and why it matters]][7] + * [Blockchain, service-centric networking key to IoT success][8] + * [Getting grounded in IoT networking and security][9] + * [Building IoT-ready networks must become a priority][10] + * [What is the Industrial IoT? [And why the stakes are so high]][11] + + + +While almost 70 percent of respondents owned connected devices, 55 percent said they didn’t feel their personal information was adequately protected by manufacturers. A further 28 percent said they had avoided using connected devices – smart home, fitness tracking and similar consumer gadgetry – primarily because they were concerned over privacy issues, and a whopping 85 percent of Americans agreed with the argument that manufacturers had a responsibility to produce devices that protected personal information. + +Those concerns are understandable, according to data from the Ponemon Institute, a tech-research organization. Its survey of corporate risk and security personnel, released in early May, found that there have been few concerted efforts to limit exposure to IoT-based security threats, and that those threats are sharply on the rise when compared to past years, with the percentage of organizations that had experienced a data breach related to unsecured IoT devices rising from 15 percent in fiscal 2017 to 26 percent in fiscal 2019. + +Beyond a lack of organizational wherewithal to address those threats, part of the problem in some verticals is technical. Security vendor Forescout said earlier this month that its research showed 40 percent of all healthcare IT environments had more than 20 different operating systems, and more than 30 percent had more than 100 – hardly an ideal situation for smooth patching and updating. 
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[2]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[5]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[6]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[7]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[8]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[9]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[10]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[11]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
diff --git a/sources/talk/20190603 It-s time for the IoT to -optimize for trust.md b/sources/talk/20190603 It-s time for the IoT to -optimize for trust.md
new file mode 100644
index 0000000000..cc5aa9db7c
--- /dev/null
+++ b/sources/talk/20190603 It-s time for the IoT to -optimize for trust.md
@@ -0,0 +1,102 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (It’s time for the IoT to 'optimize for trust')
+[#]: via: (https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+It’s time for the IoT to 'optimize for trust'
+======
+If we can't trust the internet of things (IoT) to gather accurate data and use it appropriately, IoT adoption and innovation are likely to suffer.
+![Bose][1]
+
+One of the strengths of internet of things (IoT) technology is that it can do so many things well. From smart toothbrushes to predictive maintenance on jetliners, the IoT has more use cases than you can count. The result is that various IoT use cases require optimization for particular characteristics, from cost to speed to long life, as well as myriad others.
+
+But in a recent post, "[How the internet of things will change advertising][2]" (which you should definitely read), the always-insightful Stacey Higginbotham tossed in a line that I can’t stop thinking about: “It's crucial that the IoT optimizes for trust." 
+
+**[ Read also: Network World's [corporate guide to addressing IoT security][3] ]**
+
+### Trust is the IoT's most important attribute
+
+Higginbotham was talking about optimizing for trust as opposed to clicks, but really, trust is more important than just about any other value in the IoT. It’s more important than bandwidth usage, more important than power usage, more important than cost, more important than reliability, and even more important than security and privacy (though they are obviously related). In fact, trust is the critical factor in almost every aspect of the IoT.
+
+Don’t believe me? Let’s take a quick look at some recent developments in the field:
+
+For one thing, IoT devices often don’t take good care of the data they collect from you. Over 90% of data transactions on IoT devices are not fully encrypted, according to a new [study from security company Zscaler][4]. The [problem][5], apparently, is that many companies have large numbers of consumer-grade IoT devices on their networks. In addition, many IoT devices are attached to the companies’ general networks, and if that network is breached, the IoT devices and data may also be compromised.
+
+In some cases, ownership of IoT data can raise surprisingly serious trust concerns. According to [Kaiser Health News][6], smartphone sleep apps, as well as smart beds and smart mattress pads, gather amazingly personal information: “It knows when you go to sleep. It knows when you toss and turn. It may even be able to tell when you’re having sex.” And while companies such as Sleep Number say they don’t share the data they gather, their written privacy policies clearly state that they _can_.
+
+### **Lack of trust may lead to new laws**
+
+In California, meanwhile, "lawmakers are pushing for new privacy rules affecting smart speakers” such as the Amazon Echo. According to the _[LA Times][7]_, the idea is “to ensure that the devices don’t record private conversations without permission,” requiring a specific opt-in process. Why is this an issue? Because consumers—and their elected representatives—don’t trust that Amazon, or any IoT vendor, will do the right thing with the data it collects from the IoT devices it sells—perhaps because it turns out that thousands of [Amazon employees have been listening in on what Alexa users are][8] saying to their Echo devices.
+
+The trust issues get even trickier when you consider that Amazon reportedly considered letting Alexa listen to users even without a wake word like “Alexa” or “computer,” and is reportedly working on [wearable devices designed to read human emotions][9] from listening to your voice.
+
+“The trust has been breached,” said California Assemblyman Jordan Cunningham (R-Templeton) to the _LA Times_.
+
+As critics of the bill ([AB 1395][10]) point out, the restrictions matter because voice assistants require this data to improve their ability to correctly understand and respond to requests.
+
+### **Some first steps toward increasing trust**
+
+Perhaps recognizing that the IoT needs to be optimized for trust so that we are comfortable letting it do its job, Amazon recently introduced a new Alexa voice command: “[Delete what I said today][11].”
+
+Moves like that, while welcome, will likely not be enough.
+
+For example, a [new United Nations report][12] suggests that “voice assistants reinforce harmful gender stereotypes” when using female-sounding voices and names like Alexa and Siri. 
Put simply, “Siri’s ‘female’ obsequiousness—and the servility expressed by so many other digital assistants projected as young women—provides a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education.” I'm not sure IoT vendors are eager—or equipped—to tackle issues like that. + +**More on IoT:** + + * [What is the IoT? How the internet of things works][13] + * [What is edge computing and how it’s changing the network][14] + * [Most powerful Internet of Things companies][15] + * [10 Hot IoT startups to watch][16] + * [The 6 ways to make money in IoT][17] + * [What is digital twin technology? [and why it matters]][18] + * [Blockchain, service-centric networking key to IoT success][19] + * [Getting grounded in IoT networking and security][20] + * [Building IoT-ready networks must become a priority][21] + * [What is the Industrial IoT? [And why the stakes are so high]][22] + + + +Join the Network World communities on [Facebook][23] and [LinkedIn][24] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html + +作者:[Fredric Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Fredric-Paul/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/09/bose-sleepbuds-2-100771579-large.jpg +[2]: https://mailchi.mp/iotpodcast/stacey-on-iot-how-iot-changes-advertising?e=6bf9beb394 +[3]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html +[4]: https://www.zscaler.com/blogs/research/iot-traffic-enterprise-rising-so-are-threats +[5]: https://www.csoonline.com/article/3397044/over-90-of-data-transactions-on-iot-devices-are-unencrypted.html +[6]: https://khn.org/news/a-wake-up-call-on-data-collecting-smart-beds-and-sleep-apps/ +[7]: https://www.latimes.com/politics/la-pol-ca-alexa-google-home-privacy-rules-california-20190528-story.html +[8]: https://www.usatoday.com/story/tech/2019/04/11/amazon-employees-listening-alexa-customers/3434732002/ +[9]: https://www.bloomberg.com/news/articles/2019-05-23/amazon-is-working-on-a-wearable-device-that-reads-human-emotions +[10]: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1395 +[11]: https://venturebeat.com/2019/05/29/amazon-launches-alexa-delete-what-i-said-today-voice-command/ +[12]: https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=1 +[13]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html +[14]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[15]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html +[16]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[17]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[18]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[19]: 
https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[20]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[21]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[22]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[23]: https://www.facebook.com/NetworkWorld/
+[24]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20131228 Introduction to Clojure - Modern dialect of Lisp (Part 1).md b/sources/tech/20131228 Introduction to Clojure - Modern dialect of Lisp (Part 1).md
index 5e5f4df763..0fb3c6469d 100644
--- a/sources/tech/20131228 Introduction to Clojure - Modern dialect of Lisp (Part 1).md
+++ b/sources/tech/20131228 Introduction to Clojure - Modern dialect of Lisp (Part 1).md
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (ninifly)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )
diff --git a/sources/tech/20170410 Writing a Time Series Database from Scratch.md b/sources/tech/20170410 Writing a Time Series Database from Scratch.md
deleted file mode 100644
index 7fb7fe9a6a..0000000000
--- a/sources/tech/20170410 Writing a Time Series Database from Scratch.md
+++ /dev/null
@@ -1,439 +0,0 @@
-LuMing Translating
-Writing a Time Series Database from Scratch
-============================================================
-
-
-I work on monitoring. In particular on [Prometheus][2], a monitoring system that includes a custom time series database, and its integration with [Kubernetes][3].
-
-In many ways Kubernetes represents all the things Prometheus was designed for. It makes continuous deployments, auto scaling, and other features of highly dynamic environments easily accessible. The query language and operational model, among many other conceptual decisions, make Prometheus particularly well-suited for such environments. Yet, if monitored workloads become significantly more dynamic, this also puts new strains on the monitoring system itself. With this in mind, rather than doubling back on problems Prometheus already solves well, we specifically aim to increase its performance in environments with highly dynamic, or transient services.
-
-Prometheus's storage layer has historically shown outstanding performance, where a single server is able to ingest up to one million samples per second as several million time series, all while occupying a surprisingly small amount of disk space. While the current storage has served us well, I propose a newly designed storage subsystem that corrects for shortcomings of the existing solution and is equipped to handle the next order of scale.
-
-> Note: I've no background in databases. What I say might be wrong and misleading. You can channel your criticism towards me (fabxc) in #prometheus on Freenode.
-
-### Problems, Problems, Problem Space
-
-First, a quick outline of what we are trying to accomplish and what key problems it raises. For each, we take a look at Prometheus' current approach, what it does well, and which problems we aim to address with the new design.
-
-### Time series data
-
-We have a system that collects data points over time.
-
-```
-identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), ....
-```
-
-Each data point is a tuple of a timestamp and a value. 
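
To make the model concrete, here is a purely illustrative Python sketch (my own, not Prometheus code): each identifier maps to an append-only list of (timestamp, value) tuples.

```
from collections import defaultdict

# Toy model of the interface described above: one series identifier
# maps to an append-only list of (timestamp, value) tuples.
samples = defaultdict(list)

def append_sample(identifier, t, v):
    samples[identifier].append((t, v))

append_sample('requests_total{path="/status", method="GET"}', 1000, 25.0)
append_sample('requests_total{path="/status", method="GET"}', 1015, 26.0)

print(samples['requests_total{path="/status", method="GET"}'])
# [(1000, 25.0), (1015, 26.0)]
```
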
-For the purpose of monitoring, the timestamp is an integer and the value any number. A 64 bit float turns out to be a good representation for counter as well as gauge values, so we go with that. A sequence of data points with strictly monotonically increasing timestamps is a series, which is addressed by an identifier. Our identifier is a metric name with a dictionary of  _label dimensions_ . Label dimensions partition the measurement space of a single metric. Each metric name plus a unique set of labels is its own  _time series_  that has a value stream associated with it.
-
-This is a typical set of series identifiers that are part of metric counting requests:
-
-```
-requests_total{path="/status", method="GET", instance=”10.0.0.1:80”}
-requests_total{path="/status", method="POST", instance=”10.0.0.3:80”}
-requests_total{path="/", method="GET", instance=”10.0.0.2:80”}
-```
-
-Let's simplify this representation right away: A metric name can be treated as just another label dimension — `__name__` in our case. At the query level, it might be treated specially but that doesn't concern our way of storing it, as we will see later.
-
-```
-{__name__="requests_total", path="/status", method="GET", instance=”10.0.0.1:80”}
-{__name__="requests_total", path="/status", method="POST", instance=”10.0.0.3:80”}
-{__name__="requests_total", path="/", method="GET", instance=”10.0.0.2:80”}
-```
-
-When querying time series data, we want to do so by selecting series by their labels. In the simplest case `{__name__="requests_total"}` selects all series belonging to the `requests_total` metric. For all selected series, we retrieve data points within a specified time window.
-In more complex queries, we may wish to select series satisfying several label selectors at once and also represent more complex conditions than equality. For example, negative (`method!="GET"`) or regular expression matching (`method=~"PUT|POST"`).
-
-This largely defines the stored data and how it is recalled.
-
-### Vertical and Horizontal
-
-In a simplified view, all data points can be laid out on a two-dimensional plane. The  _horizontal_  dimension represents the time and the series identifier space spreads across the  _vertical_  dimension.
-
-```
-series
-  ^
-  │   . . . . . . . . . . . . . . . . .   . . . . .   {__name__="request_total", method="GET"}
-  │     . . . . . . . . . . . . . . . . . . . . . .   {__name__="request_total", method="POST"}
-  │         . . . . . . .
-  │       . . .     . . . . . . . . . . . . . . . .   ...
-  │     . . . . . . . . . . . . . . . . . . . . .
-  │     . . . . . . . . . .   . . . . . . . . . . .   {__name__="errors_total", method="POST"}
-  │           . . .   . . . . . . . . .   . . . . .   {__name__="errors_total", method="GET"}
-  │         . . . . . . . . . . . . . .
-  │       . . . . . . . . . .  . . . . . . . . . . .  ...
-  │     . . . . . . . . . . . . . . . . . . . . . .
-  v
-    <-------------------- time --------------------->
-```
-
-Prometheus retrieves data points by periodically scraping the current values for a set of time series. The entity from which we retrieve such a batch is called a  _target_ . Thus, the write pattern is completely vertical and highly concurrent as samples from each target are ingested independently.
-To provide some measurement of scale: A single Prometheus instance collects data points from tens of thousands of  _targets_ , which expose hundreds to thousands of different time series each.
-
-At the scale of collecting millions of data points per second, batching writes is a non-negotiable performance requirement. 
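
A quick hypothetical micro-benchmark illustrates the gap (plain Python, nothing Prometheus-specific): syncing every 16 byte sample to disk on its own versus flushing one batched buffer typically differs by several orders of magnitude.

```
import os
import tempfile
import time

SAMPLE = b"\x00" * 16  # one 16 byte sample
N = 2000

def scattered(path):
    # worst case: force every single sample to disk on its own
    with open(path, "wb") as f:
        for _ in range(N):
            f.write(SAMPLE)
            f.flush()
            os.fsync(f.fileno())

def batched(path):
    # accumulate in memory first, then persist sequentially in one go
    buf = b"".join(SAMPLE for _ in range(N))
    with open(path, "wb") as f:
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())

with tempfile.TemporaryDirectory() as d:
    for fn in (scattered, batched):
        start = time.perf_counter()
        fn(os.path.join(d, fn.__name__))
        print(fn.__name__, time.perf_counter() - start)
```

On typical hardware the batched variant wins by a factor of hundreds or more, which is the motivation for everything that follows.
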
-Writing single data points scattered across our disk would be painfully slow. Thus, we want to write larger chunks of data in sequence.
-This is an unsurprising fact for spinning disks, as their head would have to physically move to different sections all the time. While SSDs are known for fast random writes, they actually can't modify individual bytes but only write in  _pages_  of 4KiB or more. This means writing a 16 byte sample is equivalent to writing a full 4KiB page. This behavior is part of what is known as [ _write amplification_ ][4], which as a bonus causes your SSD to wear out – so it wouldn't just be slow, but literally destroy your hardware within a few days or weeks.
-For more in-depth information on the problem, the ["Coding for SSDs"][5] blog series is an excellent resource. Let's just consider the main takeaway: sequential and batched writes are the ideal write pattern for spinning disks and SSDs alike. A simple rule to stick to.
-
-The querying pattern is significantly more differentiated than the write pattern. We can query a single datapoint for a single series, a single datapoint for 10000 series, weeks of data points for a single series, weeks of data points for 10000 series, etc. So on our two-dimensional plane, queries are neither fully vertical nor horizontal, but a rectangular combination of the two.
-[Recording rules][6] mitigate the problem for known queries but are not a general solution for ad-hoc queries, which still have to perform reasonably well.
-
-We know that we want to write in batches, but the only batches we get are vertical sets of data points across series. When querying data points for a series over a time window, not only would it be hard to figure out where the individual points can be found, we'd also have to read from a lot of random places on disk. With possibly millions of touched samples per query, this is slow even on the fastest SSDs. Reads will also retrieve more data from our disk than the requested 16 byte sample. SSDs will load a full page, HDDs will at least read an entire sector. Either way, we are wasting precious read throughput.
-So ideally, samples for the same series would be stored sequentially so we can just scan through them with as few reads as possible. On top, we only need to know where this sequence starts to access all data points.
-
-There's obviously a strong tension between the ideal pattern for writing collected data to disk and the layout that would be significantly more efficient for serving queries. It is  _the_  fundamental problem our TSDB has to solve.
-
-#### Current solution
-
-Time to take a look at how Prometheus's current storage, let's call it "V2", addresses this problem.
-We create one file per time series that contains all of its samples in sequential order. As appending single samples to all those files every few seconds is expensive, we batch up 1KiB chunks of samples for a series in memory and append those chunks to the individual files, once they are full. This approach solves a large part of the problem. Writes are now batched, samples are stored sequentially. It also enables incredibly efficient compression formats, based on the property that a given sample changes only very little with respect to the previous sample in the same series. Facebook's paper on their Gorilla TSDB describes a similar chunk-based approach and [introduces a compression format][7] that reduces 16 byte samples to an average of 1.37 bytes. 
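
The core trick behind such formats is easy to sketch (an illustration of the idea only, not Gorilla's actual bit-level encoding): scrape timestamps arrive at a nearly fixed interval, so the delta-of-delta stream is almost all zeros, which a bit-level encoder can store in one or two bits per sample; values are XORed against their predecessor in the same spirit.

```
# Scrapes arrive every ~15s: the deltas are nearly constant, so the
# delta-of-delta stream is mostly zeros -- ideal for bit packing.
timestamps = [1000, 1015, 1030, 1045, 1061, 1076]

deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
dods = [b - a for a, b in zip(deltas, deltas[1:])]

print(deltas)  # [15, 15, 15, 16, 15]
print(dods)    # [0, 0, 1, -1]
```
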
-The V2 storage uses various compression formats including a variation of Gorilla’s.
-
-```
- ┌──────────┬─────────┬─────────┬─────────┬─────────┐           series A
- └──────────┴─────────┴─────────┴─────────┴─────────┘
- ┌──────────┬─────────┬─────────┬─────────┬─────────┐           series B
- └──────────┴─────────┴─────────┴─────────┴─────────┘
- . . .
- ┌──────────┬─────────┬─────────┬─────────┬─────────┬─────────┐ series XYZ
- └──────────┴─────────┴─────────┴─────────┴─────────┴─────────┘
-   chunk 1    chunk 2   chunk 3   ...
-```
-
-While the chunk-based approach is great, keeping a separate file for each series is troubling the V2 storage for various reasons:
-
-* We actually need a lot more files than the number of time series we are currently collecting data for. More on that in the section on "Series Churn". With several million files, sooner or later we may run out of [inodes][1] on our filesystem. This is a condition we can only recover from by reformatting our disks, which is as invasive and disruptive as it could be. We generally want to avoid formatting disks specifically to fit a single application.
-* Even when chunked, several thousands of chunks per second are completed and ready to be persisted. This still requires thousands of individual disk writes every second. While it is alleviated by also batching up several completed chunks for a series, this in return increases the total memory footprint of data which is waiting to be persisted.
-* It's infeasible to keep all files open for reads and writes. In particular because ~99% of data is never queried again after 24 hours. If it is queried, though, we have to open up to thousands of files, find and read relevant data points into memory, and close them again. As this would result in high query latencies, data chunks are cached rather aggressively, leading to problems outlined further in the section on "Resource Consumption".
-* Eventually, old data has to be deleted and data needs to be removed from the front of millions of files. This means that deletions are actually write intensive operations. Additionally, cycling through millions of files and analyzing them makes this a process that often takes hours. By the time it completes, it might have to start over again. Oh yeah, and deleting the old files will cause further write amplification for your SSD!
-* Chunks that are currently accumulating are only held in memory. If the application crashes, data will be lost. To avoid this, the memory state is periodically checkpointed to disk, which may take significantly longer than the window of data loss we are willing to accept. Restoring the checkpoint may also take several minutes, causing painfully long restart cycles.
-
-The key takeaway from the existing design is the concept of chunks, which we most certainly want to keep. The most recent chunks always being held in memory is also generally good. After all, the most recent data is queried the most by a large margin.
-Having one file per time series is a concept we would like to find an alternative to.
-
-### Series Churn
-
-In the Prometheus context, we use the term  _series churn_  to describe a situation where a set of time series becomes inactive, i.e. receives no more data points, and a new set of active series appears instead.
-For example, all series exposed by a given microservice instance have a respective “instance” label attached that identifies its origin. If we perform a rolling update of our microservice and swap out every instance with a newer version, series churn occurs. 
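
A tiny sketch (hypothetical, not Prometheus internals) shows why a rolling update has this effect: a series' identity is its complete label set, so changing the instance label retires the old series and creates an entirely new one.

```
# The "instance" label changes when a pod is replaced, so the full
# label set -- and therefore the series identity -- changes with it.
old = frozenset({("__name__", "requests_total"), ("instance", "10.0.0.1:80")})
new = frozenset({("__name__", "requests_total"), ("instance", "10.0.0.7:80")})

print(old == new)  # False: a brand-new time series
```
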
In more dynamic environments those events may happen on an hourly basis. Cluster orchestration systems like Kubernetes allow continuous auto-scaling and frequent rolling updates of applications, potentially creating tens of thousands of new application instances, and with them completely new sets of time series, every day.
-
-```
-series
-  ^
-  │   . . . . . .
-  │   . . . . . .
-  │   . . . . . .
-  │               . . . . . . .
-  │               . . . . . . .
-  │               . . . . . . .
-  │                             . . . . . .
-  │                             . . . . . .
-  │                                         . . . . .
-  │                                         . . . . .
-  │                                         . . . . .
-  v
-    <-------------------- time --------------------->
-```
-
-So even if the entire infrastructure roughly remains constant in size, over time there's a linear growth of time series in our database. While a Prometheus server will happily collect data for 10 million time series, query performance is significantly impacted if data has to be found among a billion series.
-
-#### Current solution
-
-The current V2 storage of Prometheus has an index based on LevelDB for all series that are currently stored. It allows querying series containing a given label pair, but lacks a scalable way to combine results from different label selections.
-For example, selecting all series with label `__name__="requests_total"` works efficiently, but selecting all series with `instance="A" AND __name__="requests_total"` has scalability problems. We will later revisit what causes this and which tweaks are necessary to improve lookup latencies.
-
-This problem is in fact what spawned the initial hunt for a better storage system. Prometheus needed an improved indexing approach for quickly searching hundreds of millions of time series.
-
-### Resource consumption

-Resource consumption is one of the consistent topics when trying to scale Prometheus (or anything, really). But it's not actually the absolute resource hunger that is troubling users. In fact, Prometheus manages an incredible throughput given its requirements. The problem is rather its relative unpredictability and instability in the face of changes. By its architecture the V2 storage slowly builds up chunks of sample data, which causes the memory consumption to ramp up over time. As chunks get completed, they are written to disk and can be evicted from memory. Eventually, Prometheus's memory usage reaches a steady state. That is until the monitored environment changes —  _series churn_  increases the usage of memory, CPU, and disk IO every time we scale an application or do a rolling update.
-If the change is ongoing, it will yet again reach a steady state eventually but it will be significantly higher than in a more static environment. Transition periods are often multiple hours long and it is hard to determine what the maximum resource usage will be.
-
-The approach of having a single file per time series also makes it way too easy for a single query to knock out the Prometheus process. When querying data that is not cached in memory, the files for queried series are opened and the chunks containing relevant data points are read into memory. If the amount of data exceeds the memory available, Prometheus quits rather ungracefully by getting OOM-killed.
-After the query is completed the loaded data can be released again but it is generally cached much longer to serve subsequent queries on the same data faster. The latter is a good thing obviously.
-
-Lastly, we looked at write amplification in the context of SSDs and how Prometheus addresses it by batching up writes to mitigate it. 
Nonetheless, in several places it still causes write amplification by having batches that are too small and not aligning data precisely on page boundaries. For larger Prometheus servers, a reduced hardware lifetime was observed in the real world. Chances are that this is still rather normal for database applications with high write throughput, but we should keep an eye on whether we can mitigate it.
-
-### Starting Over
-
-By now we have a good idea of our problem domain, how the V2 storage solves it, and where its design has issues. We also saw some great concepts that we want to adapt more or less seamlessly. A fair amount of V2's problems can be addressed with improvements and partial redesigns, but to keep things fun (and after carefully evaluating my options, of course), I decided to take a stab at writing an entire time series database — from scratch, i.e. writing bytes to the file system.
-
-The critical concerns of performance and resource usage are a direct consequence of the chosen storage format. We have to find the right set of algorithms and disk layout for our data to implement a well-performing storage layer.
-
-This is where I take the shortcut and drive straight to the solution — skip the headache, failed ideas, endless sketching, tears, and despair.
-
-### V3 — Macro Design
-
-What's the macro layout of our storage? In short, everything that is revealed when running `tree` on our data directory. Just looking at that gives us a surprisingly good picture of what is going on.
-
-```
-$ tree ./data
-./data
-├── b-000001
-│   ├── chunks
-│   │   ├── 000001
-│   │   ├── 000002
-│   │   └── 000003
-│   ├── index
-│   └── meta.json
-├── b-000004
-│   ├── chunks
-│   │   └── 000001
-│   ├── index
-│   └── meta.json
-├── b-000005
-│   ├── chunks
-│   │   └── 000001
-│   ├── index
-│   └── meta.json
-└── b-000006
-    ├── meta.json
-    └── wal
-        ├── 000001
-        ├── 000002
-        └── 000003
-```
-
-At the top level, we have a sequence of numbered blocks, prefixed with `b-`. Each block obviously holds a file containing an index and a "chunk" directory holding more numbered files. The “chunks” directory contains nothing but raw chunks of data points for various series. Just as for V2, this makes reading series data over a time window very cheap and allows us to apply the same efficient compression algorithms. The concept has proven to work well and we stick with it. Obviously, there is no longer a single file per series but instead a handful of files hold chunks for many of them.
-The existence of an “index” file should not be surprising. Let's just assume it contains a lot of black magic allowing us to find labels, their possible values, entire time series and the chunks holding their data points.
-
-But why are there several directories containing the layout of index and chunk files? And why does the last one contain a "wal" directory instead? Understanding those two questions solves about 90% of our problems.
-
-#### Many Little Databases
-
-We partition our  _horizontal_  dimension, i.e. the time space, into non-overlapping blocks. Each block acts as a fully independent database containing all time series data for its time window. Hence, it has its own index and set of chunk files.
-
-```
-
-t0            t1             t2             t3             now
- ┌───────────┐  ┌───────────┐ ┌───────────┐ ┌───────────┐
- │           │  │           │ │           │ │           │                 ┌────────────┐
- │           │  │           │ │           │ │  mutable  │ <─── write ──── ┤ Prometheus │
- │           │  │           │ │           │ │           │                 └────────────┘
- └───────────┘  └───────────┘ └───────────┘ └───────────┘                        ^
-       └──────────────┴───────┬──────┴──────────────┘                            │
-                              │                                                query
-                              │                                                  │
-                            merge ───────────────────────────────────────────────┘
-```
-
-Every block of data is immutable. Of course, we must be able to add new series and samples to the most recent block as we collect new data. For this block, all new data is written to an in-memory database that provides the same lookup properties as our persistent blocks. The in-memory data structures can be updated efficiently. To prevent data loss, all incoming data is also written to a temporary  _write ahead log_ , which is the set of files in our “wal” directory, from which we can re-populate the in-memory database on restart.
-All these files come with their own serialization format, which comes with all the things one would expect: lots of flags, offsets, varints, and CRC32 checksums. Good fun to come up with, rather boring to read about.
-
-This layout allows us to fan out queries to all blocks relevant to the queried time range. The partial results from each block are merged back together to form the overall result.
-
-This horizontal partitioning adds a few great capabilities:
-
-* When querying a time range, we can easily ignore all data blocks outside of this range. It trivially addresses the problem of  _series churn_  by reducing the set of inspected data to begin with.
-* When completing a block, we can persist the data from our in-memory database by sequentially writing just a handful of larger files. We avoid any write-amplification and serve SSDs and HDDs equally well.
-* We keep the good property of V2 that recent chunks, which are queried most, are always hot in memory.
-* Nicely enough, we are also no longer bound to the fixed 1KiB chunk size to better align data on disk. We can pick any size that makes the most sense for the individual data points and chosen compression format.
-* Deleting old data becomes extremely cheap and instantaneous. We merely have to delete a single directory. Remember, in the old storage we had to analyze and re-write up to hundreds of millions of files, which could take hours to converge.
-
-Each block also contains a `meta.json` file. It simply holds human-readable information about the block to easily understand the state of our storage and the data it contains.
-
-##### mmap
-
-Moving from millions of small files to a handful of larger ones allows us to keep all files open with little overhead. This unblocks the usage of [`mmap(2)`][8], a system call that allows us to transparently back a virtual memory region by file contents. For simplicity, you might want to think of it like swap space, just that all our data is on disk already and no writes occur when swapping data out of memory.
-
-This means we can treat all contents of our database as if they were in memory without occupying any physical RAM. Only if we access certain byte ranges in our database files does the operating system lazily load pages from disk. This puts the operating system in charge of all memory management related to our persisted data. Generally, it is more qualified to make such decisions, as it has the full view on the entire machine and all its processes. Queried data can be rather aggressively cached in memory, yet under memory pressure the pages will be evicted. 
If the machine has unused memory, Prometheus will now happily cache the entire database, yet will immediately return it once another application needs it.
-Therefore, queries can no longer easily OOM our process by querying more persisted data than fits into RAM. The memory cache size becomes fully adaptive and data is only loaded once the query actually needs it.
-
-From my understanding, this is how a lot of databases work today and an ideal way to do it if the disk format allows — unless one is confident to outsmart the OS from within the process. We certainly get a lot of capabilities with little work from our side.
-
-#### Compaction
-
-The storage has to periodically "cut" a new block and write the previous one, which is now completed, onto disk. Only after the block was successfully persisted, the write ahead log files, which are used to restore in-memory blocks, are deleted.
-We are interested in keeping each block reasonably short (about two hours for a typical setup) to avoid accumulating too much data in memory. When querying multiple blocks, we have to merge their results into an overall result. This merge procedure obviously comes with a cost and a week-long query should not have to merge 80+ partial results.
-
-To achieve both, we introduce  _compaction_ . Compaction describes the process of taking one or more blocks of data and writing them into a, potentially larger, block. It can also modify existing data along the way, e.g. dropping deleted data, or restructuring our sample chunks for improved query performance.
-
-```
-
-t0             t1            t2             t3             t4             now
- ┌────────────┐  ┌──────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
- │ 1          │  │ 2        │ │ 3         │ │ 4         │ │ 5 mutable │    before
- └────────────┘  └──────────┘ └───────────┘ └───────────┘ └───────────┘
- ┌─────────────────────────────────────────┐ ┌───────────┐ ┌───────────┐
- │ 1              compacted                │ │ 4         │ │ 5 mutable │    after (option A)
- └─────────────────────────────────────────┘ └───────────┘ └───────────┘
- ┌──────────────────────────┐ ┌──────────────────────────┐ ┌───────────┐
- │ 1       compacted        │ │ 3      compacted         │ │ 5 mutable │    after (option B)
- └──────────────────────────┘ └──────────────────────────┘ └───────────┘
-```
-
-In this example we have the sequential blocks `[1, 2, 3, 4]`. Blocks 1, 2, and 3 can be compacted together and the new layout is `[1, 4]`. Alternatively, compact them in pairs into `[1, 3]`. All time series data still exist but now in fewer blocks overall. This significantly reduces the merging cost at query time as fewer partial query results have to be merged.
-
-#### Retention
-
-We saw that deleting old data was a slow process in the V2 storage and took a toll on CPU, memory, and disk alike. How can we drop old data in our block based design? Quite simply, by just deleting the directory of a block that has no data within our configured retention window. In the example below, block 1 can safely be deleted, whereas 2 has to stick around until it falls fully behind the boundary.
-
-```
-                      |
- ┌────────────┐  ┌────┼─────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
- │ 1          │  │ 2  |     │ │ 3         │ │ 4         │ │ 5         │ . . .
- └────────────┘  └────┼─────┘ └───────────┘ └───────────┘ └───────────┘
-                      |
-                      |
-             retention boundary
-```
-
-The older data gets, the larger the blocks may become as we keep compacting previously compacted blocks. An upper limit has to be applied so blocks don’t grow to span the entire database and thus diminish the original benefits of our design.
-Conveniently, this also limits the total disk overhead of blocks that are partially inside and partially outside of the retention window, i.e. block 2 in the example above. When setting the maximum block size at 10% of the total retention window, our total overhead of keeping block 2 around is also bound by 10%.
-
-Summed up, retention deletion goes from very expensive to practically free.
-
-> _If you've come this far and have some background in databases, you might be asking one thing by now: Is any of this new? — Not really; and probably for the better._
->
-> _The pattern of batching data up in memory, tracked in a write ahead log, and periodically flushed to disk is ubiquitous today._
-> _The benefits we have seen apply almost universally regardless of the data's domain specifics. Prominent open source examples following this approach are LevelDB, Cassandra, InfluxDB, or HBase. The key takeaway is to avoid reinventing an inferior wheel, researching proven methods, and applying them with the right twist._
-> _Running out of places to add your own magic dust later is an unlikely scenario._
-
-### The Index
-
-The initial motivation to investigate storage improvements was the problems brought by  _series churn_ . The block-based layout reduces the total number of series that have to be considered for serving a query. So assuming our index lookup was of complexity  _O(n^2)_ , we managed to reduce the  _n_  a fair amount and now have an improved complexity of  _O(n^2)_  — uhm, wait... damnit.
-A quick flashback to "Algorithms 101" reminds us that this, in theory, did not buy us anything. If things were bad before, they are just as bad now. Theory can be depressing.
-
-In practice, most of our queries will already be answered significantly faster. Yet, queries spanning the full time range remain slow even if they just need to find a handful of series. My original idea, dating back way before all this work was started, was a solution to exactly this problem: we need a more capable [ _inverted index_ ][9].
-An inverted index provides a fast lookup of data items based on a subset of their contents. Simply put, I can look up all series that have a label `app="nginx"` without having to walk through every single series and check whether it contains that label.
-
-For that, each series is assigned a unique ID by which it can be retrieved in constant time, i.e. O(1). In this case the ID is our  _forward index_ .
-
-> Example: If the series with IDs 10, 29, and 9 contain the label `app="nginx"`, the inverted index for the label "nginx" is the simple list `[10, 29, 9]`, which can be used to quickly retrieve all series containing the label. Even if there were 20 billion further series, it would not affect the speed of this lookup.
-
-In short, if  _n_  is our total number of series, and  _m_  is the result size for a given query, the complexity of our query using the index is now  _O(m)_ . Queries scaling along the amount of data they retrieve ( _m_ ) instead of the data body being searched ( _n_ ) is a great property as  _m_  is generally significantly smaller.
-For brevity, let’s assume we can retrieve the inverted index list itself in constant time.
-
-Actually, this is almost exactly the kind of inverted index V2 has and a minimum requirement to serve performant queries across millions of series. The keen observer will have noticed that in the worst case, a label exists in all series and thus  _m_  is, again, in  _O(n)_ . This is expected and perfectly fine. 
If you query all data, it naturally takes longer. Things become problematic once we get involved with more complex queries.
-
-#### Combining Labels
-
-Labels associated with millions of series are common. Suppose a horizontally scaling “foo” microservice with hundreds of instances with thousands of series each. Every single series will have the label `app="foo"`. Of course, one generally won't query all series but restrict the query by further labels, e.g. I want to know how many requests my service instances received and query `__name__="requests_total" AND app="foo"`.
-
-To find all series satisfying both label selectors, we take the inverted index list for each and intersect them. The resulting set will typically be orders of magnitude smaller than each input list individually. As each input list has the worst case size O(n), the brute force solution of nested iteration over both lists has a runtime of O(n^2). The same cost applies to other set operations, such as the union (`app="foo" OR app="bar"`). When adding further label selectors to the query, the exponent increases for each to O(n^3), O(n^4), O(n^5), ... O(n^k). A lot of tricks can be played to minimize the effective runtime by changing the execution order. The more sophisticated, the more knowledge about the shape of the data and the relationships between labels is needed. This introduces a lot of complexity, yet does not decrease our algorithmic worst case runtime.
-
-This is essentially the approach in the V2 storage and luckily a seemingly slight modification is enough to gain significant improvements. What happens if we assume that the IDs in our inverted indices are sorted?
-
-Suppose this example of lists for our initial query:
-
-```
-__name__="requests_total"   ->   [ 1000, 1001, 9999, 2000000, 2000001, 2000002, 2000003 ]
-             app="foo"      ->   [ 1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002 ]
-
-             intersection   =>   [ 1000, 1001 ]
-```
-
-The intersection is fairly small. We can find it by setting a cursor at the beginning of each list and always advancing the one at the smaller number. When both numbers are equal, we add the number to our result and advance both cursors. Overall, we scan both lists in this zig-zag pattern and thus have a total cost of  _O(2n) = O(n)_  as we only ever move forward in either list.
-
-The procedure for more than two lists of different set operations works similarly. So the number of  _k_  set operations merely modifies the factor ( _O(k*n)_ ) instead of the exponent ( _O(n^k)_ ) of our worst-case lookup runtime. A great improvement.
-What I described here is a simplified version of the canonical search index used by practically any [full text search engine][10] out there. Every series descriptor is treated as a short "document", and every label (name + fixed value) as a "word" inside of it. We can ignore a lot of additional data typically encountered in search engine indices, such as word position and frequency data.
-Seemingly endless research exists on approaches improving the practical runtime, often making some assumptions about the input data. Unsurprisingly, there are also plenty of techniques to compress inverted indices that come with their own benefits and drawbacks. As our "documents" are tiny and the “words” are hugely repetitive across all series, compression becomes almost irrelevant. For example, a real-world dataset of ~4.4 million series with about 12 labels each has less than 5,000 unique labels. 
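
To tie the last two sections together, here is a minimal runnable sketch (an illustration only, far simpler than the real index): build a sorted postings list per label pair, then intersect two lists with the zig-zag walk described above.

```
from collections import defaultdict

# Forward index: series ID -> label set (IDs handed out in sorted order).
series = {
    1: {("__name__", "requests_total"), ("app", "foo")},
    2: {("__name__", "errors_total"), ("app", "foo")},
    3: {("__name__", "requests_total"), ("app", "bar")},
}

# Inverted index: label pair -> sorted list of series IDs.
inverted = defaultdict(list)
for sid in sorted(series):
    for label in series[sid]:
        inverted[label].append(sid)

def intersect(a, b):
    """Zig-zag intersection of two sorted lists in O(len(a) + len(b))."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

# __name__="requests_total" AND app="foo"
print(intersect(inverted[("__name__", "requests_total")],
                inverted[("app", "foo")]))  # [1]
```

The same loop structure extends to unions and to more than two lists, which is where the O(k*n) bound above comes from.
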
-For our initial storage version, we stick to the basic approach without compression, with just a few simple tweaks added to skip over large ranges of non-intersecting IDs.
-
-While keeping the IDs sorted may sound simple, it is not always a trivial invariant to maintain. For instance, the V2 storage assigns hashes as IDs to new series and we cannot efficiently build up sorted inverted indices.
-Another daunting task is modifying the indices on disk as data gets deleted or updated. Typically, the easiest approach is to simply recompute and rewrite them; the hard part is doing so while keeping the database queryable and consistent. The V3 storage does exactly this by having a separate immutable index per block that is only modified via rewrite on compaction. Only the indices for the mutable blocks, which are held entirely in memory, need to be updated.
-
-### Benchmarking
-
-I started initial development of the storage with a benchmark based on ~4.4 million series descriptors extracted from a real world data set and generated synthetic data points to feed into those series. This iteration just tested the stand-alone storage and was crucial to quickly identify performance bottlenecks and trigger deadlocks only experienced under highly concurrent load.
-
-After the conceptual implementation was done, the benchmark could sustain a write throughput of 20 million data points per second on my Macbook Pro — all while a dozen Chrome tabs and Slack were running. So while this sounded all great it also indicated that there's no further point in pushing this benchmark (or running it in a less random environment for that matter). After all, it is synthetic and thus not worth much beyond a good first impression. Starting out about 20x above the initial design target, it was time to embed this into an actual Prometheus server, adding all the practical overhead and flakes only experienced in more realistic environments.
-
-We actually had no reproducible benchmarking setup for Prometheus, in particular none that allowed A/B testing of different versions. Concerning in hindsight, but [now we have one][11]!
-
-Our tool allows us to declaratively define a benchmarking scenario, which is then deployed to a Kubernetes cluster on AWS. While this is not the best environment for all-out benchmarking, it certainly reflects our user base better than dedicated bare metal servers with 64 cores and 128GB of memory.
-We deploy two Prometheus 1.5.2 servers (V2 storage) and two Prometheus servers from the 2.0 development branch (V3 storage). Each Prometheus server runs on a dedicated machine with an SSD. A horizontally scaled application exposing typical microservice metrics is deployed to worker nodes. Additionally, the Kubernetes cluster itself and the nodes are being monitored. The whole setup is supervised by yet another Meta-Prometheus, monitoring each Prometheus server for health and performance.
-To simulate series churn, the microservice is periodically scaled up and down to remove old pods and spawn new pods, exposing new series. Query load is simulated by a selection of "typical" queries, run against one server of each Prometheus version.
-
-Overall the scaling and querying load as well as the sampling frequency significantly exceed today's production deployments of Prometheus. For instance, we swap out 60% of our microservice instances every 15 minutes to produce series churn. This would likely only happen 1-5 times a day in a modern infrastructure. 
This ensures that our V3 design is capable of handling the workloads of the years ahead. As a result, the performance differences between Prometheus 1.5.2 and 2.0 are larger than in a more moderate environment.
-In total, we are collecting about 110,000 samples per second from 850 targets exposing half a million series at a time.
-
-After leaving this setup running for a while, we can take a look at the numbers. We evaluate several metrics over the first 12 hours within which both versions reached a steady state.
-
-> Be aware of the slightly truncated Y axis in screenshots from the Prometheus graph UI.
-
- ![Heap usage GB](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/heap_usage.png)
-> _Heap memory usage in GB_
-
-Memory usage is the most troubling resource for users today as it is relatively unpredictable and it may cause the process to crash.
-Obviously, the queried servers are consuming more memory, which can largely be attributed to the overhead of the query engine, which will be subject to future optimizations. Overall, Prometheus 2.0's memory consumption is reduced by 3-4x. After about six hours, there is a clear spike in Prometheus 1.5, which aligns with our retention boundary at six hours. As deletions are quite costly, resource consumption ramps up. This will become visible throughout various other graphs below.
-
- ![CPU usage cores](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/cpu_usage.png)
-> _CPU usage in cores/second_
-
-A similar pattern shows up for CPU usage, but the delta between queried and non-queried servers is more significant. Averaging at about 0.5 cores/sec while ingesting about 110,000 samples/second, our new storage becomes almost negligible compared to the cycles spent on query evaluation. In total the new storage needs 3-10 times fewer CPU resources.
-
- ![Disk writes](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_writes.png)
->_Disk writes in MB/second_
-
-By far the most dramatic and unexpected improvement shows up in the write utilization of our disk. It clearly shows why Prometheus 1.5 is prone to wear out SSDs. We see an initial ramp-up as soon as the first chunks are persisted into the series files and a second ramp-up once deletion starts rewriting them. Surprisingly, the queried and non-queried server show a very different utilization.
-Prometheus 2.0 on the other hand, merely writes about a single Megabyte per second to its write ahead log. Writes periodically spike when blocks are compacted to disk. Overall savings: a staggering 97-99%.
-
- ![Disk usage](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_usage.png)
-> _Disk size in GB_
-
-Closely related to disk writes is the total amount of occupied disk space. As we are using almost the same compression algorithm for samples, which is the bulk of our data, they should be about the same. In a more stable setup that would largely be true, but as we are dealing with high  _series churn_ , there's also the per-series overhead to consider.
-As we can see, Prometheus 1.5 ramps up storage space a lot faster before both versions reach a steady state as the retention kicks in. Prometheus 2.0 seems to have a significantly lower overhead per individual series. We can nicely see how space is linearly filled up by the write ahead log and instantaneously drops as it gets compacted. The fact that the lines for both Prometheus 2.0 servers do not exactly match needs further investigation.
-
-This all looks quite promising. The important piece left is query latency. 
The new index should have improved our lookup complexity. What has not substantially changed is processing of this data, e.g. in `rate()` functions or aggregations. Those aspects are part of the query engine.
-
- ![Query latency](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/query_latency.png)
->_99th percentile query latency in seconds_
-
-Expectations are completely met by the data. In Prometheus 1.5 the query latency increases over time as more series are stored. It only levels off once retention starts and old series are deleted. In contrast, Prometheus 2.0 stays in place right from the beginning.
-Some caution must be taken about how this data was collected. The queries fired against the servers were chosen by estimating a good mix of range and instant queries, doing heavier and more lightweight computations, and touching few or many series. It does not necessarily represent a real-world distribution of queries. It is also not representative of queries hitting cold data, and we can assume that all sample data is practically always hot in memory in either storage.
-Nonetheless, we can say with good confidence that the overall query performance became very resilient to series churn and improved by up to 4x in our straining benchmarking scenario. In a more static environment, we can assume query time to be mostly spent in the query engine itself and the improvement to be notably lower.
-
- ![Ingestion rate](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/ingestion_rate.png)
->_Ingested samples/second_
-
-Lastly, a quick look into our ingestion rates of the different Prometheus servers. We can see that both servers with the V3 storage have the same ingestion rate. After a few hours it becomes unstable, which is caused by various nodes of the benchmarking cluster becoming unresponsive due to high load rather than the Prometheus instances. (The fact that both 2.0 lines exactly match is hopefully convincing enough.)
-Both Prometheus 1.5.2 servers start suffering from significant drops in ingestion rate even though more CPU and memory resources are available. The high stress of series churn causes a larger amount of data to not be collected.
-
-But what's the  _absolute maximum_  number of samples per second you could ingest now?
-
-I don't know — and deliberately don't care.
-
-There are a lot of factors that shape the data flowing into Prometheus and there is no single number capable of capturing quality. Maximum ingestion rate has historically been a metric leading to skewed benchmarks and neglect of more important aspects such as query performance and resilience to series churn. The rough assumption that resource usage increases linearly was confirmed by some basic testing. It is easy to extrapolate what could be possible.
-
-Our benchmarking setup simulates a highly dynamic environment stressing Prometheus more than most real-world setups today. The results show we went way above our initial design goal, while running on non-optimal cloud servers. Ultimately, success will be determined by user feedback rather than benchmarking numbers.
-
-> Note:  _At time of writing this, Prometheus 1.6 is in development, which will allow configuring the maximum memory usage more reliably and may notably reduce overall consumption in favor of slightly increased CPU utilization. 
I did not repeat the tests against this as the overall results still hold, especially when facing high series churn._
-
-### Conclusion
-
-Prometheus sets out to handle high cardinality of series and throughput of individual samples. It remains a challenging task, but the new storage seems to position us well for the hyper-scale, hyper-convergent, GIFEE infrastructure of the futu... well, it seems to work pretty well.
-
-A [first alpha release of Prometheus 2.0][12] with the new V3 storage is available for testing. Expect crashes, deadlocks, and other bugs at this early stage.
-
-The code for the storage itself can be found [in a separate project][13]. It's surprisingly agnostic to Prometheus itself and could be useful for a wide range of applications looking for an efficient local storage time series database.
-
-> _There's a long list of people to thank for their contributions to this work. Here they go in no particular order:_
->
-> _The foundational work by Bjoern Rabenstein and Julius Volz on the V2 storage engine and their feedback on V3 was fundamental to everything seen in this new generation._
->
-> _Wilhelm Bierbaum's ongoing advice and insight contributed significantly to the new design. Brian Brazil's continuous feedback ensured that we ended up with a semantically sound approach. Insightful discussions with Peter Bourgon validated the design and shaped this write-up._
->
-> _Not to forget my entire team at CoreOS and the company itself for supporting and sponsoring this work. Thanks to everyone who listened to my ramblings about SSDs, floats, and serialization formats again and again._
-
-
---------------------------------------------------------------------------------
-
-via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/
-
-作者:[Fabian Reinartz ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://twitter.com/fabxc
-[1]:https://en.wikipedia.org/wiki/Inode
-[2]:https://prometheus.io/
-[3]:https://kubernetes.io/
-[4]:https://en.wikipedia.org/wiki/Write_amplification
-[5]:http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/
-[6]:https://prometheus.io/docs/practices/rules/
-[7]:http://www.vldb.org/pvldb/vol8/p1816-teller.pdf
-[8]:https://en.wikipedia.org/wiki/Mmap
-[9]:https://en.wikipedia.org/wiki/Inverted_index
-[10]:https://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices
-[11]:https://github.com/prometheus/prombench
-[12]:https://prometheus.io/blog/2017/04/10/promehteus-20-sneak-peak/
-[13]:https://github.com/prometheus/tsdb
diff --git a/sources/tech/20170414 5 projects for Raspberry Pi at home.md b/sources/tech/20170414 5 projects for Raspberry Pi at home.md
deleted file mode 100644
index 69aeaf32ac..0000000000
--- a/sources/tech/20170414 5 projects for Raspberry Pi at home.md
+++ /dev/null
@@ -1,146 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (5 projects for Raspberry Pi at home)
-[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home)
-[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
-
-5 projects for Raspberry Pi at home
-======
-
-![5 projects for Raspberry Pi at home][1]
-
-The [Raspberry Pi][2] computer can be used in all kinds of settings and for a variety of purposes. 
It obviously has a place in education for helping students with learning programming and maker skills in the classroom and the hackspace, and it has plenty of industrial applications in the workplace and in factories. I'm going to introduce five projects you might want to build in your own home.

### Media center

One of the most common uses for Raspberry Pi in people's homes is behind the TV running media center software serving multimedia files. It's easy to set this up, and the Raspberry Pi provides plenty of GPU (Graphics Processing Unit) power to render HD TV shows and movies to your big screen TV. [Kodi][3] (formerly XBMC) on a Raspberry Pi is a great way to play back any media you have on a hard drive or network-attached storage. You can also install a plugin to play YouTube videos.

There are a few different options available, most prominently [OSMC][4] (Open Source Media Center) and [LibreELEC][5], both based on Kodi. They both perform well at playing media content, but OSMC has a more visually appealing user interface, while LibreElec is much more lightweight. All you have to do is choose a distribution, download the image and install it on an SD card (or just use [NOOBS][6]), boot it up, and you're ready to go.

![LibreElec ][7]

LibreElec; Raspberry Pi Foundation, CC BY-SA

![OSMC][8]

OSMC.tv, Copyright, Used with permission

Before proceeding you'll need to decide [which Raspberry Pi model to use][9]. These distributions will work on any Pi (1, 2, 3, or Zero), and video playback will essentially be matched on each of these. Apart from the Pi 3 (and Zero W) having built-in Wi-Fi, the only noticeable difference is the reaction speed of the user interface, which will be much faster on a Pi 3. A Pi 2 will not be much slower, so that's fine if you don't need Wi-Fi, but the Pi 3 will noticeably outperform the Pi 1 and Zero when it comes to flicking through the menus.

### SSH gateway

If you want to be able to access computers and devices on your home network from outside over the internet, you have to open up ports on those devices to allow outside traffic. Opening ports to the internet is a security risk, meaning you're always at risk of attack, misuse, or any kind of unauthorized access. However, if you install a Raspberry Pi on your network and set up port forwarding to allow only SSH access to that Pi, you can use it as a secure gateway to hop onto other Pis and PCs on the network.

Most routers allow you to configure port-forwarding rules. You'll need to give your Pi a fixed internal IP address and set up port 22 on your router to map to port 22 on your Raspberry Pi. If your ISP provides you with a static IP address, you'll be able to SSH into it with this as the host address (for example, **ssh pi@123.45.56.78**). If you have a domain name, you can configure a subdomain to point to this IP address, so you don't have to remember it (for example, **ssh [pi@home.mydomain.com][10]**).

![][11]

However, if you're going to expose a Raspberry Pi to the internet, you should be very careful not to put your network at risk. There are a few simple procedures you can follow to make it sufficiently secure:

1\. Most people suggest you change your login password (which makes sense, seeing as the default password "raspberry" is well known), but this does not protect against brute-force attacks. You could change your password and add two-factor authentication (so you need your password _and_ a time-dependent passcode generated by your phone), which is more secure. However, I believe the best way to secure your Raspberry Pi from intruders is to [disable password authentication][12] in your SSH configuration, so you allow only SSH key access. This means that anyone trying to SSH in by guessing your password will never succeed. Only with your private SSH key can anyone gain access. Similarly, most people suggest changing the SSH port from the default 22 to something unexpected, but a simple [Nmap][13] scan of your IP address will reveal your true SSH port.

2\. Ideally, you would not run much in the way of other software on this Pi, so you don't end up accidentally exposing anything else. If you want to run other software, you might be better off running it on another Pi on the network that is not exposed to the internet. Ensure that you keep your packages up to date by upgrading regularly, particularly the **openssh-server** package, so that any security vulnerabilities are patched.

3\. Install [sshblack][14] or [fail2ban][15] to blacklist any users who seem to be acting maliciously, such as attempting to brute force your SSH password.

Once you've secured your Raspberry Pi and put it online, you'll be able to log in to your network from anywhere in the world. Once you're on your Raspberry Pi, you can SSH into other devices on the network using their local IP address (for example, 192.168.1.31). If you have passwords on these devices, just use the password. If they're also SSH-key-only, you'll need to ensure your key is forwarded over SSH by using the **-A** flag: **ssh -A pi@123.45.67.89**.
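To make the key-only setup from step 1 concrete, here is a minimal sketch of the relevant `/etc/ssh/sshd_config` directives. The exact file layout varies between Raspbian releases, so treat these lines as an assumption to verify against your own config, and keep an existing SSH session open while you test so you don't lock yourself out.

```
# /etc/ssh/sshd_config — key-only access (a sketch, not a full hardening guide)
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```

After editing, apply the change with `sudo systemctl restart ssh` (the service is called `ssh` on Raspbian), but only after confirming that `ssh-copy-id` has installed your public key and a key-based login actually works.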
### CCTV / pet camera

Another great home project is to set up a camera module to take photos or stream video, capturing and saving files or streaming them internally or to the internet. There are many reasons you might want to do this, but two common use cases are a homemade security camera or monitoring a pet.

The [Raspberry Pi camera module][16] is a brilliant accessory. It provides full HD photo and video, lots of advanced configuration, and is [easy to program][17]. The [infrared camera][18] is ideal for this kind of use, and with an infrared LED (which the Pi can control) you can see in the dark!

If you want to take still images on a regular basis to keep an eye on things, you can just write a short [Python][19] script or use the command line tool [raspistill][20], and schedule it to recur in [Cron][21] (see the sketch at the end of this section). You might want to have it save them to [Dropbox][22] or another web service, upload them to a web server, or you can even create a [web app][23] to display them.

If you want to stream video, internally or externally, that's really easy, too. A simple MJPEG (Motion JPEG) example is provided in the [picamera documentation][24] (under "web streaming"). Just download or copy that code into a file, run it and visit the Pi's IP address at port 8000, and you'll see your camera's output live.

A more advanced streaming project, [pistreaming][25], is also available. It uses [JSMpeg][26] (a JavaScript video player) with the web server and a websocket for the camera stream running separately. This method is more performant and is just as easy to get running as the previous example, but there is more code involved, and if set up to stream on the internet, it requires you to open two ports.

Once you have web streaming set up, you can position the camera where you want it. I have one set up to keep an eye on my pet tortoise:

![Tortoise ][27]

Ben Nuttall, CC BY-SA

If you want to be able to control where the camera actually points, you can do so using servos. A neat solution is to use Pimoroni's [Pan-Tilt HAT][28], which allows you to move the camera easily in two dimensions. To integrate this with pistreaming, see the project's [pantilthat branch][29].

![Pan-tilt][30]

Pimoroni.com, Copyright, Used with permission

If you want to position your Pi outside, you'll need a waterproof enclosure and some way of getting power to the Pi. PoE (Power-over-Ethernet) cables can be a good way of achieving this.
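As promised above, here is a minimal sketch of the timed-stills idea: a crontab entry that calls raspistill every ten minutes. The output path and resolution are assumptions to adjust to your setup, and note that `%` must be escaped as `\%` inside a crontab.

```
# edit the pi user's crontab with: crontab -e
# capture a timestamped 1280x720 still every 10 minutes
*/10 * * * * raspistill -w 1280 -h 720 -o /home/pi/captures/$(date +\%Y\%m\%d-\%H\%M).jpg
```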
### Home automation and IoT

It's 2017 and there are internet-connected devices everywhere, especially in the home. Our lightbulbs have Wi-Fi, our toasters are smarter than they used to be, and our tea kettles are at risk of attack from Russia. As long as you keep your devices secure, or don't connect them to the internet if they don't need to be, then you can make great use of IoT devices to automate tasks around the home.

There are plenty of services you can buy or subscribe to, like Nest Thermostat or Philips Hue lightbulbs, which allow you to control your heating or your lighting from your phone, respectively—whether you're inside or away from home. You can use a Raspberry Pi to boost the power of these kinds of devices by automating interactions with them according to a set of rules involving timing or even sensors. One thing you can't do with Philips Hue is have the lights come on when you enter the room, but with a Raspberry Pi and a motion sensor, you can use a Python API to turn on the lights. Similarly, you can configure your Nest to turn on the heating when you're at home, but what if you only want it to turn on if there are at least two people home? Write some code to check which phones are on the network and, if there are at least two, tell the Nest to turn on the heat (see the sketch below).

You can do a great deal more without integrating existing IoT devices at all, using only simple components: a homemade burglar alarm, an automated chicken coop door opener, a night light, a music box, a timed heat lamp, an automated backup server, a print server, or whatever you can imagine.
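Here is a rough shell sketch of that presence idea, using phone pings for detection and the Philips Hue bridge's local REST API to switch a light. The bridge IP, the API username and the phone addresses are all assumptions you would substitute with your own; a Nest variant would follow the same pattern against the Nest API instead.

```
#!/bin/bash
# presence.sh — turn on Hue light 1 if at least two known phones answer a ping
# (illustrative values: replace BRIDGE, USERNAME and the IPs with your own)
BRIDGE=192.168.1.2
USERNAME=your-hue-api-username
PHONES=(192.168.1.20 192.168.1.21)

home=0
for ip in "${PHONES[@]}"; do
    ping -c 1 -W 1 "$ip" > /dev/null 2>&1 && home=$((home + 1))
done

if [ "$home" -ge 2 ]; then
    curl -s -X PUT "http://$BRIDGE/api/$USERNAME/lights/1/state" \
         -d '{"on": true}'
fi
```

Run it from Cron every few minutes, as with the camera example, and you have a crude but serviceable presence detector.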
### Tor proxy and blocking ads

Adafruit's [Onion Pi][31] is a [Tor][32] proxy that makes your web traffic anonymous, allowing you to use the internet free of snoopers and any kind of surveillance. Follow Adafruit's tutorial on setting up Onion Pi and you're on your way to a peaceful anonymous browsing experience.

![Onion-Pi][33]

Onion-pi from Adafruit, Copyright, Used with permission

![Pi-hole][34]

You can install a Raspberry Pi on your network that intercepts all web traffic and filters out any advertising. Simply download the [Pi-hole][35] software onto the Pi, and all devices on your network will be ad-free (it even blocks in-app ads on your mobile devices).

There are plenty more uses for the Raspberry Pi at home. What do you use Raspberry Pi for at home? What do you want to use it for?

Let us know in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home

作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home)
[2]: https://www.raspberrypi.org/
[3]: https://kodi.tv/
[4]: https://osmc.tv/
[5]: https://libreelec.tv/
[6]: https://www.raspberrypi.org/downloads/noobs/
[7]: https://opensource.com/sites/default/files/libreelec_0.png (LibreElec )
[8]: https://opensource.com/sites/default/files/osmc.png (OSMC)
[9]: https://opensource.com/life/16/10/which-raspberry-pi-should-you-choose-your-project
[10]: mailto:pi@home.mydomain.com
[11]: https://opensource.com/sites/default/files/resize/screenshot_from_2017-04-07_15-13-01-700x380.png
[12]: http://stackoverflow.com/questions/20898384/ssh-disable-password-authentication
[13]: https://nmap.org/
[14]: http://www.pettingers.org/code/sshblack.html
[15]: https://www.fail2ban.org/wiki/index.php/Main_Page
[16]: https://www.raspberrypi.org/products/camera-module-v2/
[17]: https://opensource.com/life/15/6/raspberry-pi-camera-projects
[18]: https://www.raspberrypi.org/products/pi-noir-camera-v2/
[19]: http://picamera.readthedocs.io/
[20]: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md
[21]: https://www.raspberrypi.org/documentation/linux/usage/cron.md
[22]: https://github.com/RZRZR/plant-cam
[23]: https://github.com/bennuttall/bett-bot
[24]: http://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming
[25]: https://github.com/waveform80/pistreaming
[26]: http://jsmpeg.com/
[27]: https://opensource.com/sites/default/files/tortoise.jpg (Tortoise)
[28]: https://shop.pimoroni.com/products/pan-tilt-hat
[29]: https://github.com/waveform80/pistreaming/tree/pantilthat
[30]: https://opensource.com/sites/default/files/pan-tilt.gif (Pan-tilt)
[31]: https://learn.adafruit.com/onion-pi/overview
[32]: https://www.torproject.org/
[33]: https://opensource.com/sites/default/files/onion-pi.jpg (Onion-Pi)
[34]: https://opensource.com/sites/default/files/resize/pi-hole-250x250.png (Pi-hole)
[35]: https://pi-hole.net/

diff --git a/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md b/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md
deleted file mode 100644
index b4f891d16c..0000000000
--- a/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md
+++ /dev/null
@@ -1,233 +0,0 @@

Translating by MjSeve

ddgr – A Command Line Tool To Search DuckDuckGo From The Terminal
======

Bash tricks are really awesome in Linux: they make almost everything possible. They work really well for developers and system admins, because those users spend most of their time in the terminal. Why do they prefer such tricks? Because they improve productivity and let them work faster.

### What Is ddgr

[ddgr][1] is a command-line utility to search DuckDuckGo from the terminal. ddgr works out of the box with several text-based browsers if the `BROWSER` environment variable is set, so make sure your system has a text-based browser installed.

You may know about [googler][2], which allows users to perform Google searches from the Linux command line. It is highly popular among command-line users, and they expected a similar utility for the privacy-aware DuckDuckGo; that's why ddgr came into the picture.

Unlike the web interface, you can specify the number of search results you would like to see per page.

**Suggested Read:**
* [Googler – Google Search From The Linux Command Line][2]
* [Buku – A Powerful Command-line Bookmark Manager for Linux][3]
* [SoCLI – Easy Way To Search And Browse Stack Overflow From The Terminal][4]
* [RTV (Reddit Terminal Viewer) – A Simple Terminal Viewer For Reddit][5]

### What Is DuckDuckGo

DDG stands for DuckDuckGo. DuckDuckGo (DDG) is an Internet search engine that protects users' searches and privacy. It does not filter search results based on a personalized profile; it shows the same search results to all users for a given search term. Most users prefer the Google search engine, but if you really worry about privacy, you can confidently go with DuckDuckGo.

### ddgr Features

* Fast and clean (no ads, stray URLs or clutter), custom color
* Designed to deliver maximum readability at minimum space
* Specify the number of search results to show per page
* Navigate result pages from the omniprompt, open URLs in a browser
* Search and option completion scripts for Bash, Zsh and Fish
* DuckDuckGo Bang support (along with completion)
* Open the first result directly in the browser (as in I'm Feeling Ducky)
* Non-stop searches: fire new searches at the omniprompt without exiting
* Keyword (e.g. filetype:mime, site:somesite.com) support
* Limit search by time, specify region, disable safe search
* HTTPS proxy support, Do Not Track set, optionally disable User Agent
* Support for custom URL handler scripts or command-line utilities
* Comprehensive documentation, man page with handy usage examples
* Minimal dependencies

### Prerequisites

ddgr requires Python 3.4 or later, so make sure your system has a suitable Python 3 version installed.

```
$ python3 --version
Python 3.6.3
```

### How To Install ddgr In Linux

We can easily install ddgr using the following commands, depending on the distribution.

For **`Fedora`**, use the [DNF Command][6] to install ddgr.

```
# dnf install ddgr
```

Alternatively, we can use the [SNAP Command][7] to install ddgr.

```
# snap install ddgr
```

For **`Linux Mint/Ubuntu`**, use the [APT-GET Command][8] or [APT Command][9] to install ddgr.

```
$ sudo add-apt-repository ppa:twodopeshaggy/jarun
$ sudo apt-get update
$ sudo apt-get install ddgr
```

For **`Arch Linux`**-based systems, use the [Yaourt Command][10] or [Packer Command][11] to install ddgr from the AUR repository.

```
$ yaourt -S ddgr
or
$ packer -S ddgr
```

For **`Debian`**, use the [DPKG Command][12] to install ddgr.

```
# wget https://github.com/jarun/ddgr/releases/download/v1.2/ddgr_1.2-1_debian9.amd64.deb
# dpkg -i ddgr_1.2-1_debian9.amd64.deb
```

For **`CentOS 7`**, use the [YUM Command][13] to install ddgr.

```
# yum install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.el7.3.centos.x86_64.rpm
```

For **`openSUSE`**, use the [zypper Command][14] to install ddgr.

```
# zypper install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.opensuse42.3.x86_64.rpm
```
### How To Launch ddgr

Enter the `ddgr` command without any options in the terminal to bring up a DuckDuckGo search. You will get output similar to the below.

```
$ ddgr
```

![][16]

### How To Search Using ddgr

We can initiate a search in two ways: either from the omniprompt or directly from the terminal. You can search for any phrase you want.

Directly from the terminal:

```
$ ddgr 2daygeek
```

![][17]

From the `omniprompt`:

![][18]

### Omniprompt Shortcuts

Enter `?` to bring up the `omniprompt` help, which will show you the list of keywords and shortcuts for working further with ddgr.

![][19]

### How To Move To The Next, Previous, And First Page

ddgr allows the user to move to the next page, the previous page, or the first page of results.

* `n:` Move to the next set of search results
* `p:` Move to the previous set of search results
* `f:` Jump to the first page

![][20]

### How To Initiate A New Search

The "**d**" option allows users to initiate a new search from the omniprompt. For example, I searched for the `2daygeek website`, and now I am going to initiate a new search with the phrase "**Magesh Maruthamuthu**".

From the `omniprompt`:

```
ddgr (? for help) d magesh maruthmuthu
```

![][21]

### Show Complete URLs In Search Results

By default, it shows only the article headings; add the "**x**" option to the search to show complete article URLs in the search results.

```
$ ddgr -n 5 -x 2daygeek
```

![][22]

### Limit Search Results

By default, search results are shown 10 per page. If you want to limit the number of results for your convenience, you can do so by passing the `--num` or `-n` argument to ddgr.

```
$ ddgr -n 5 2daygeek
```

![][23]

### Website-Specific Search

To search pages from a particular website, use the format below. This will fetch the results for the given keywords from that website. For example, we are going to search for "**Package Manager**" on the 2daygeek website. See the results.

```
$ ddgr -n 5 --site 2daygeek "package manager"
```

![][24]
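Two more invocations worth knowing, tying back to the feature list earlier: the "I'm Feeling Ducky" shortcut and DuckDuckGo bangs. The exact flag letters here are stated from memory of ddgr's help output, so treat them as assumptions and confirm with `ddgr -h` on your version.

```
# open the first result directly in the browser ("I'm Feeling Ducky")
$ ddgr -j 2daygeek

# use a DuckDuckGo bang to search Wikipedia (quote it so the shell ignores '!')
$ ddgr '!w linux kernel'
```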
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/ddgr-duckduckgo-search-from-the-command-line-in-linux/

作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/magesh/
[1]:https://github.com/jarun/ddgr
[2]:https://www.2daygeek.com/googler-google-search-from-the-command-line-on-linux/
[3]:https://www.2daygeek.com/buku-command-line-bookmark-manager-linux/
[4]:https://www.2daygeek.com/socli-search-and-browse-stack-overflow-from-linux-terminal/
[5]:https://www.2daygeek.com/rtv-reddit-terminal-viewer-a-simple-terminal-viewer-for-reddit/
[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[7]:https://www.2daygeek.com/snap-command-examples/
[8]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[9]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[10]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
[11]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/
[12]:https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/
[13]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[14]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[16]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux1.png
[17]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-3.png
[18]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-2.png
[19]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-4.png
[20]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-5a.png
[21]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-6a.png
[22]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-7a.png
[23]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-8.png
[24]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-9a.png

diff --git a/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md b/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md
index d04ee6d742..ed85b172af 100644
--- a/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md
+++ b/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md
@@ -1,3 +1,5 @@
+Translating by MjSeve
+
 Manage your workstation with Ansible: Configure desktop settings
 ======

diff --git a/sources/tech/20180629 100 Best Ubuntu Apps.md b/sources/tech/20180629 100 Best Ubuntu Apps.md
deleted file mode 100644
index 581d22b527..0000000000
--- a/sources/tech/20180629 100 Best Ubuntu Apps.md
+++ /dev/null
@@ -1,1184 +0,0 @@

100 Best Ubuntu Apps
======
Earlier this year we published a list of the [20 Best Ubuntu Applications for 2018][1], which many users found very useful. Now we are almost in the second half of 2018, so today we are going to take a look at the 100 best applications for Ubuntu.

![100 Best Ubuntu Apps][2]

Many users who have recently switched to Ubuntu from Microsoft Windows or another operating system face the dilemma of finding the best alternatives to the applications they have been using for years on their previous OS. Ubuntu has thousands of free and open-source applications that perform far better than many paid programs on Windows and other OSes.

The following list features applications in various categories, so that you can find the application that best matches your requirements.

### **1\. Google Chrome Browser**

Almost all Linux distributions feature the Mozilla Firefox web browser by default, and it is a tough competitor to Google Chrome. But Chrome has its own advantages over Firefox, such as direct access to your Google account, from which you can sync bookmarks, browser history, extensions, etc. with the Chrome browser on other operating systems and mobile phones.

![Chrome][3]

Google Chrome features an up-to-date Flash player for Linux, which is not the case for other web browsers on Linux, including Mozilla Firefox and Opera. If you continuously use Chrome on Windows, it is the best choice on Linux too.

### 2\. **Steam**

Gaming on Linux is a real deal now; a few years ago it was a distant dream. In 2013, Valve announced the Steam gaming client for Linux, and everything has changed since then. Earlier, users were hesitant to switch from Windows to Linux just because they would not be able to play their favourite games on Ubuntu, but that is not the case now.

![Steam][4]

Some users might find installing Steam on Linux tricky, but it is worth the effort, as thousands of Steam games are available for Linux. Some popular high-end games like Counter Strike: Global Offensive, Hitman and Dota 2 are available for Linux; you just need to make sure you have the minimum hardware required to play them.

```
$ sudo add-apt-repository multiverse
$ sudo apt-get update
$ sudo apt-get install steam
```

### **3\. WordPress Desktop Client**

Yes, you read that right: WordPress has a dedicated desktop client for Ubuntu, from which you can manage your WordPress sites. You can also write and design posts separately on the desktop client, with no need to switch browser tabs.

![][5]

If you have websites backed by WordPress, this desktop client is a must-have application, as you can keep track of all your WordPress notifications in one single window. You can also check the performance stats of posts on your website. The desktop client is available in Ubuntu Software Center, from where you can download and install it.

### **4\. VLC Media Player**

VLC is a very popular cross-platform, open-source media player which is also available for Ubuntu. What makes VLC the best media player is that it can play audio and video in every format available on the planet without any issue.

![][6]

VLC has a slick user interface which is very easy to use, and apart from that it offers a lot of features, such as online video streaming, audio and video customization, etc.

```
$ sudo add-apt-repository ppa:videolan/master-daily
$ sudo apt update
$ sudo apt-get install vlc qtwayland5
```
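VLC is as scriptable as it is clickable, so here is a small sketch of driving it from the terminal; the file paths are illustrative assumptions, and `cvlc` is the GUI-less wrapper that ships with the `vlc` package.

```
# play a video fullscreen and exit when it finishes
$ vlc --fullscreen --play-and-exit ~/Videos/movie.mp4

# play an album from the command line without opening the interface
$ cvlc ~/Music/album/*.flac
```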
### **5\. Atom Text Editor**

Developed by GitHub, Atom is a free and open-source text editor which can also be used as an Integrated Development Environment (IDE) for coding and editing in major programming languages. Atom's developers claim it to be a completely hackable text editor for the 21st century.

![][7]

Atom Text Editor has one of the best user interfaces, and it is a feature-rich text editor with offerings like auto-completion, syntax highlighting and support for extensions and plug-ins.

```
$ sudo add-apt-repository ppa:webupd8team/atom
$ sudo apt-get update
$ sudo apt-get install atom
```

### **6\. GIMP Photo Editor**

GIMP (GNU Image Manipulation Program) is a free and open-source photo editor for Ubuntu. It is arguably the best alternative to Adobe Photoshop on Windows. If you have been using Adobe Photoshop for a long time and find it difficult to get used to GIMP, you can customize GIMP to look very similar to Photoshop.

![][8]

GIMP is a feature-rich photo editor, and you can always add more features by installing extensions and plug-ins anytime.

```
$ sudo apt-get install gimp
```

### **7\. Google Play Music Desktop Player**

Google Play Music Desktop Player is an open-source music player which is a replica of Google Play Music; you could even say it's better. Google has always lacked a desktop music client, but this third-party app fills the void perfectly.

![][9]

As you can see in the screenshot above, its interface is second to none in terms of look and feel. You just need to sign in to your Google account, and it will then import all your music and favorites into this desktop client. You can download the installation files from its official [website][10] and install them using Software Center.

### **8\. Franz**

Franz is an instant messaging client that combines chat and messaging services into one application. It is one of the modern instant messaging platforms, and it supports Facebook Messenger, WhatsApp, Telegram, HipChat, WeChat, Google Hangouts and Skype integration under one single application.

![][11]

Franz is a complete messaging platform which you can also use in business to manage mass customer service. To install Franz, you need to download the installation package from its [website][12] and open it using Software Center.

### **9\. Synaptic Package Manager**

Synaptic Package Manager is one of the must-have tools on Ubuntu because it provides a graphical interface for the `apt-get` command, which we usually use to install apps on Ubuntu via the terminal (the sketch after this section shows the commands it wraps). It gives tough competition to the default app stores on various Linux distros.

![][13]

Synaptic comes with a very simple user interface which is very fast and easy to use compared to other app stores. On the left-hand side you can browse apps in different categories, from where you can easily install and uninstall them.

```
$ sudo apt-get install synaptic
```
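For orientation, these are roughly the `apt-get` operations Synaptic wraps in its interface, which is useful to know when you drop to a terminal. The package name is just an example.

```
$ sudo apt-get update              # refresh the package lists (Synaptic's "Reload")
$ apt-cache search media player    # search for packages by keyword
$ sudo apt-get install vlc         # mark for installation + "Apply"
$ sudo apt-get remove vlc          # mark for removal + "Apply"
```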
### **10\. Skype**

Skype is a very popular cross-platform video calling application which is now also available for Linux as a Snap app. Skype is an instant messaging application which offers features like voice and video calls, desktop screen sharing, etc.

![][14]

Skype has an excellent user interface which is very similar to the desktop client on Windows and is very easy to use. It could be a very useful app for many switchers from Windows.

```
$ sudo snap install skype
```

### **11\. VirtualBox**

VirtualBox is a cross-platform virtualization application developed by Oracle Corporation. If you love trying out new operating systems, then VirtualBox is a must-have Ubuntu application for you. You can try out Linux or macOS inside a Windows operating system, or Windows and macOS inside Linux.

![][15]

What VirtualBox actually does is let you run a guest operating system virtually on top of a host operating system: it creates a virtual hard drive and installs the guest OS on it. You can download and install VirtualBox directly from Ubuntu Software Center.
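VirtualBox also ships a command-line front end, `VBoxManage`, which can do everything the GUI does. A hedged sketch follows; the VM name, sizes and OS type string are illustrative assumptions.

```
# create and register a new 64-bit Ubuntu VM
$ VBoxManage createvm --name "ubuntu-test" --ostype Ubuntu_64 --register

# give it memory and CPUs
$ VBoxManage modifyvm "ubuntu-test" --memory 2048 --cpus 2

# create a 20 GB virtual disk for it
$ VBoxManage createhd --filename ~/"VirtualBox VMs"/ubuntu-test/ubuntu-test.vdi --size 20480
```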
### **12\. Unity Tweak Tool**

Unity Tweak Tool (Gnome Tweak Tool) is a must-have tool for every Linux user because it gives you the ability to customize the desktop to your needs. You can try new GTK themes, set up desktop hot corners, customize the icon set, tweak the Unity launcher, etc.

![][16]

Unity Tweak Tool can be very useful, as it covers everything from basic to advanced configuration.

```
$ sudo apt-get install unity-tweak-tool
```

### **13\. Ubuntu Cleaner**

Ubuntu Cleaner is a system maintenance tool especially designed to remove packages that are no longer useful, remove unnecessary apps and clean up browser caches. Ubuntu Cleaner has a very simple user interface which is very easy to use.

![][17]

Ubuntu Cleaner is one of the best alternatives to BleachBit, which is also a decent cleaning tool available for Linux distros.

```
$ sudo add-apt-repository ppa:gerardpuig/ppa
$ sudo apt-get update
$ sudo apt-get install ubuntu-cleaner
```

### 14\. Visual Studio Code

Visual Studio Code is a code editor that you will find very similar to Atom Text Editor and Sublime Text if you have already used them. Visual Studio Code proves to be a very good educational tool, as it explains everything from HTML tags to syntax in programming.

![][18]

Visual Studio Code comes with Git integration out of the box, and it has an excellent user interface that you will find very similar to the likes of Atom Text Editor and Sublime Text. You can download and install it from Ubuntu Software Center.

### **15\. Corebird**

If you are looking for a desktop client for Twitter, Corebird is the app you are looking for. It is arguably the best Twitter client available for Linux distros, and it offers features very similar to the Twitter app on your mobile phone.

![][19]

Corebird also gives notifications whenever someone likes or retweets your tweet, or messages you. You can also add multiple Twitter accounts to this client.

```
$ sudo snap install corebird
```

### **16\. Pixbuf**

Pixbuf is a desktop client for the Pixbuf photo community hub, which lets you upload, share and sell your photos. It supports photo sharing to social media networks like Facebook, Pinterest, Instagram, Twitter, etc. and photography services including Flickr, 500px and Youpic.

![][20]

Pixbuf offers features like analytics (stats about clicks, retweets and repins on your photos), scheduled posts and a dedicated iOS extension. It also has a mobile app, so you can always be connected to your Pixbuf account from anywhere. Pixbuf is available for download in Ubuntu Software Center as a snap package.

### **17\. Clementine Music Player**

Clementine is a cross-platform music player and a good competitor to Rhythmbox, the default music player on Ubuntu. It is a fast and easy-to-use music player thanks to its user-friendly interface. It supports audio playback in all the major audio file formats.

![][21]

Apart from playing music from your local library, you can also listen to online radio from Spotify, SKY.fm, Soundcloud, etc. It also offers other features like smart and dynamic playlists, and syncing music from cloud storage like Dropbox, Google Drive, etc.

```
$ sudo add-apt-repository ppa:me-davidsansome/clementine
$ sudo apt-get update
$ sudo apt-get install clementine
```

### **18\. Blender**

Blender is free and open-source 3D creation software which you can use to create 3D printed models, animated films, video games, etc. It comes with an integrated game engine out of the box, which you can use to develop and test video games.

![blender][22]

Blender has a catchy user interface which is easy to use, and it includes features like a built-in render engine, digital sculpting, simulation tools, animation tools and many more. It is one of the best applications you will ever find for Ubuntu, considering that it's free and what it offers.

### **19\. Audacity**

Audacity is an open-source audio editing application which you can use to record and edit audio files. You can record audio from various inputs like a microphone or an electric guitar. It also gives you the ability to edit and trim audio clips according to your needs.

![][23]

Audacity recently released new features for Ubuntu, including theme improvements, a zoom toggle command, etc. Apart from these, it offers various audio effects, including noise reduction, and many more.

```
$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity
$ sudo apt-get update
$ sudo apt-get install audacity
```

### **20\. Vim**

Vim is an Integrated Development Environment which you can use as a standalone application or from the command line for programming in major languages like Python.

![][24]

Most programmers prefer coding in Vim because it is a fast and highly customizable IDE. Initially you might find it difficult to use, but you will quickly get used to it (a minimal configuration sketch follows below).

```
$ sudo apt-get install vim
```
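Because most of Vim's power comes from its configuration, here is a minimal `~/.vimrc` sketch to make a fresh install friendlier; every setting here is a common starting point rather than a requirement.

```
" ~/.vimrc — a minimal starting configuration (all choices illustrative)
syntax on                      " enable syntax highlighting
set number                     " show line numbers
set tabstop=4 shiftwidth=4     " 4-column tabs and indents
set expandtab                  " insert spaces instead of tab characters
filetype plugin indent on      " language-aware indenting and plugins
```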
### **21\. Inkscape**

Inkscape is an open-source and cross-platform vector graphics editor which you will find very similar to CorelDRAW and Adobe Illustrator. Using it you can create and edit vector graphics such as charts, logos, diagrams, illustrations, etc.

![][25]

Inkscape uses Scalable Vector Graphics (SVG), an open XML-based W3C standard, as its primary format. It supports various formats including JPEG, PNG, GIF, PDF, AI (Adobe Illustrator format), VSD, etc.

```
$ sudo add-apt-repository ppa:inkscape.dev/stable
$ sudo apt-get update
$ sudo apt-get install inkscape
```

### **22\. Shotcut**

Shotcut is a free, open-source and cross-platform video editing application developed by Meltytech, LLC on the MLT Multimedia Framework. It is one of the most powerful video editors you will ever find for Linux distros, as it supports all the major audio, video and image formats.

![][26]

It gives you the ability to edit multiple tracks with various file formats using non-linear video editing. It also comes with support for 4K video resolutions and features like various audio and video filters, a tone generator, audio mixing and many others.

```
snap install shotcut --classic
```

### **23\. SimpleScreenRecorder**

SimpleScreenRecorder is a free and lightweight screen video recorder for Ubuntu. This screen recorder can be a very useful tool for you if you are a YouTube creator or application developer.

![][27]

It can capture a video/audio record of the desktop screen or record video games directly. You can set the video resolution, frame rate, etc. before starting the screen recording. It has a simple user interface which you will find very easy to use.

```
$ sudo add-apt-repository ppa:marten-baert/simplescreenrecorder
$ sudo apt-get update
$ sudo apt-get install simplescreenrecorder
```

### **24\. Telegram**

Telegram is a cloud-based instant messaging and VoIP platform which has gained a lot of popularity in recent years. It is an open-source and cross-platform messenger where users can send messages and share videos, photos, audio and other files.

![][28]

Some of the notable features in Telegram are secret chats, voice messages, bots, telescope video messages, live locations and social login. Privacy and security are top priorities in Telegram, so messages in secret chats are end-to-end encrypted.

```
$ sudo snap install telegram-desktop
```

### **25\. ClamTk**

As we know, viruses meant to harm Windows PCs cannot do any harm to Ubuntu, but your machine can still receive and pass on mail from Windows PCs containing harmful files. So it is safe to have some antivirus applications on Linux too.

![][29]

ClamTk is a lightweight malware scanner which scans files and folders on your system and cleans up any harmful files that are found. ClamTk is available as a Snap package and can be downloaded from Ubuntu Software Center.

### **26\. MailSpring**

MailSpring, earlier known as Nylas Mail or Nylas N1, is an open-source email client. It saves all your mail locally on the computer so that you can access it anytime you need. It features advanced search using AND and OR operations, so that you can search for mail based on different parameters.

![][30]

MailSpring comes with an excellent user interface which you will find in only a handful of other mail clients. Privacy and security, a scheduler, a contact manager and a calendar are some of the features MailSpring offers.

### **27\. PyCharm**

PyCharm is one of my favorite Python IDEs after Vim because it has a slick user interface with lots of extension and plug-in support. Basically it comes in two editions: the Community edition, which is free and open-source, and the Professional edition, which is paid.

![][31]

PyCharm is a highly customizable IDE and sports features such as error highlighting, code analysis, integrated unit testing and a Python debugger. PyCharm is the preferred IDE of most Python programmers and developers.

### **28\. Caffeine**

Imagine you are watching something on YouTube or reading a news article and suddenly Ubuntu locks your screen. I know that is very annoying. It happens to many of us, so Caffeine is the tool that will help you block the Ubuntu lock screen or screensaver.

![][32]

Caffeine Inhibitor is a lightweight tool. It adds an icon to the notification bar, from where you can activate or deactivate it easily. No additional setup needs to be done in order to use this tool.

```
$ sudo add-apt-repository ppa:eugenesan/ppa
$ sudo apt-get update
$ sudo apt-get install caffeine -y
```

### **29\. Etcher USB Image Writer**

Etcher is an open-source USB image writer developed by resin.io. It is a cross-platform application which helps you burn image files like ZIP, ISO and IMG to USB storage. If you regularly try out new OSes, Etcher is a must-have tool for you, as it is easy to use and reliable.

![][33]

Etcher has a clean user interface that guides you through the process of burning an image file to a USB drive or SD card in three easy steps: selecting the image file, selecting the USB drive, and finally flashing (writing the files to the USB drive). You can download and install Etcher from its [website][34].
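For reference, what Etcher automates can also be done by hand with `dd`, shown here as a cautionary sketch, since `dd` happily overwrites whatever device you name. The device path is a placeholder: check yours with `lsblk` first.

```
# identify the USB stick (e.g. /dev/sdb) — do NOT guess
$ lsblk

# write the image; replace /dev/sdX with your actual device
$ sudo dd if=ubuntu-18.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```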
### **30\. Neofetch**

Neofetch is a cool system information tool that gives you all the information about your system when you run the `neofetch` command in the terminal. It is a cool tool to have because it gives you information about your desktop environment, kernel version, bash version and the GTK theme you are running.

![][35]

Compared to other system information tools, Neofetch is highly customizable, and you can perform various customizations using the command line.

```
$ sudo add-apt-repository ppa:dawidd0811/neofetch
$ sudo apt-get update
$ sudo apt-get install neofetch
```

### 31\. Liferea

Liferea (Linux Feed Reader) is a free and open-source news aggregator for online news feeds. It is a fast and easy-to-use news aggregator that supports various formats such as RSS/RDF, Atom, etc.

![][36]

Liferea comes with sync support for TinyTinyRSS out of the box, and it gives you the ability to read feeds in offline mode. It is one of the best feed readers you will find for Linux in terms of reliability and flexibility.

```
$ sudo add-apt-repository ppa:ubuntuhandbook1/apps
$ sudo apt-get update
$ sudo apt-get install liferea
```

### 32\. Shutter

It is easy to take screenshots in Ubuntu, but when it comes to editing screenshots, Shutter is the must-have application for you. It helps you capture, edit and share screenshots easily. Using Shutter's selector tool, you can select a particular part of your screen to take a screenshot of.

![][37]

Shutter is a feature-rich screenshot tool which offers features like adding effects to screenshots, drawing lines, etc. It also gives you the option to upload your screenshots to various image hosting sites. You can download and install Shutter directly from Software Center.

### 33\. Weather

Weather is a small application which gives you real-time weather information for your city or any other location in the world. It is a simple and lightweight tool which gives you a detailed forecast of up to 7 days and hourly details for the current and next day.

![][38]

It integrates with the GNOME shell to give you information about current weather conditions at recently searched locations. It has a minimalist user interface which works smoothly on minimal hardware.

### 34\. Ramme

Ramme is a cool unofficial Instagram desktop client which gives you the feel of the Instagram mobile app. It is an Electron-based client, so it replicates the Instagram app and offers features like theme customization, etc.

![][39]

Due to Instagram's API restrictions, you can't upload images using the Ramme client, but you can always browse your Instagram feed, like and comment on posts, and message friends. You can download the Ramme installation files from [GitHub][40].

### **35\. Thunderbird**

Thunderbird is an open-source email client and the default email client in most Linux distributions. Despite parting ways with Mozilla in 2017, Thunderbird is still a very popular and excellent email client on the Linux platform. It comes with features like spam filtering, IMAP and POP email syncing, calendar support, address book integration and many other features out of the box.

![][41]

It is a cross-platform email client with full community support across all supported platforms. You can always change its look and feel thanks to its highly customizable nature.
### **36\. Pidgin**

Pidgin is an instant messaging client where you can log in to different instant messaging networks under a single window. You can log in to instant messaging networks like Google Talk, XMPP, AIM, Bonjour, etc.

![][42]

Pidgin has all the features you can expect in an instant messenger, and you can always enhance its performance by installing additional plug-ins.

```
$ sudo apt-get install pidgin
```

### **37\. Krita**

Krita is a free and open-source digital painting, editing and animation application developed by KDE. It has an excellent user interface with everything placed perfectly, so that you can easily find the tool you need.

![][43]

It uses an OpenGL canvas, which boosts Krita's performance, and it offers many features like different painting tools, animation tools, vector tools, layers and masks and many more. Krita is available in Ubuntu Software Center; you can easily download it from there.

### **38\. Dropbox**

Dropbox is a stand-out player in cloud storage, and its Linux client works really well on Ubuntu once installed properly. While Google Drive comes out of the box on Ubuntu 16.04 LTS and later, Dropbox is still the preferred cloud storage tool on Linux in terms of the features it offers.

![][44]

It always works in the background, backs up new files from your system to cloud storage, and syncs files continuously between your computer and its cloud storage.

```
$ sudo apt-get install nautilus-dropbox
```

### 39\. Kodi

Kodi, formerly known as Xbox Media Center (XBMC), is an open-source media player. You can play music, videos, podcasts and video games both online and offline. This software was first developed for the first generation of the Xbox gaming console and was then slowly ported to personal computers.

![][45]

Kodi has a very impressive video interface which is fast and powerful. It is a highly customizable media player, and by installing additional plug-ins you can access online streaming services like Pandora, Spotify, Amazon Prime Video, Netflix and YouTube.

### **40\. Spotify**

Spotify is one of the best online media streaming services. It provides music, podcast and video streaming, both free and on a paid subscription basis. Earlier, Spotify was not supported on Linux, but now it has its own fully functional desktop client for Ubuntu.

![][46]

Alongside Google Play Music Desktop Player, Spotify is a must-have media player. You just need to log in to your Spotify account to access your favorite online content from anywhere.

### 41\. Brackets

Brackets is an open-source text editor developed by Adobe. It can be used for web development and design in web technologies such as HTML, CSS and JavaScript. It sports a live preview mode, which is a great feature to have, as it gives you a real-time view of your changes as you make them in the code.

![][47]

It is one of the modern text editors on Ubuntu, with a slick user interface which takes web development to a new level. It also offers features like an inline editor and support for popular extensions like Emmet, Beautify, Git, File Icons, etc.
### 42\. Bitwarden

Account safety is a serious concern now, as we see a rise in security breaches in which users' passwords are stolen and important data is compromised. Bitwarden is a recommended tool which stores all your account logins and passwords safe and secure in one place.

![][48]

Bitwarden uses the AES-256-bit encryption technique to store all login details, and only the user has access to the data. It also helps you create strong passwords, which are less likely to be hacked.

### 43\. Terminator

Terminator is an open-source terminal emulator written in Python. It is a cross-platform emulator which gives you multiple terminals in one single window, which is not the case with the default Linux terminal emulator.

![][49]

Other stand-out features of Terminator include automatic logging, drag and drop, and intelligent vertical and horizontal scrolling.

```
$ sudo apt-get install terminator
```

### 44\. Yak Yak

Yak Yak is an open-source unofficial desktop client for the Google Hangouts messenger. It could be a good alternative to Microsoft's Skype, as it comes with a bunch of amazing features out of the box. You can enable desktop notifications and language preferences, and it runs with minimal memory and power requirements.

![][50]

Yak Yak comes with all the features you would expect in any instant messaging app, such as a typing indicator, drag-and-drop media files, and audio/video calling.

### 45\. **Thonny**

Thonny is a simple and lightweight IDE especially designed for beginners in programming. If you are new to programming, this is a must-have IDE for you, because it lets you learn while programming in Python.

![][51]

Thonny is also a great tool for debugging, as it supports live variables during debugging. Apart from this, it offers features like a separate window for executing function calls and a simple user interface.

```
$ sudo apt-get install thonny
```

### **46\. Font Manager**

Font Manager is a lightweight tool for managing, adding or removing fonts on your Ubuntu system. It is specially built for the GNOME desktop environment; users who have no idea how to manage fonts using the command line will find this tool very useful.

![][52]

Gtk+ Font Manager is not meant for professional users; it has a simple user interface which you will find very easy to navigate. You just need to download font files from the internet and add them using Font Manager.

```
$ sudo add-apt-repository ppa:font-manager/staging
$ sudo apt-get update
$ sudo apt-get install font-manager
```

### **47\. Atril Document Viewer**

Atril is a simple document viewer which supports file formats like Portable Document Format (PDF), PostScript (PS), Encapsulated PostScript (EPS), DJVU and DVI. Atril comes bundled with the MATE desktop environment, and it is nearly identical to Evince, the default document viewer on most Linux distros.

![][53]

Atril has a simple and lightweight user interface which is highly customizable and offers features like search and bookmarks, and the UI includes thumbnails on the left-hand side.

```
$ sudo apt-get install atril
```
### **48\. Notepadqq**

If you have ever used Notepad++ on Windows and are looking for a similar program on Linux, don't worry: developers have ported it to Linux as Notepadqq. It is a simple yet powerful text editor which you can use for daily tasks or for programming in various languages.

![][54]

Despite being a simple text editor, it has some amazing features: you can set a dark or light color scheme theme, multiple selection, regular expression search and real-time highlighting.

```
$ sudo add-apt-repository ppa:notepadqq-team/notepadqq
$ sudo apt-get update
$ sudo apt-get install notepadqq
```

### **49\. Amarok**

Amarok is an open-source music player developed under the KDE project. It has an intuitive interface which makes you feel at home, so you can discover your favorite music easily. Besides Clementine, Amarok is a very good choice when you are looking for a perfect music player for Ubuntu.

![][55]

Some of the top features in Amarok include intelligent playlist support and integration with online services like MP3tunes, Last.fm, Magnatune, etc.

### **50\. Cheese**

Cheese is the default Linux webcam application, which can be useful for video chat or instant messaging applications. Apart from that, you can also use this app to take photos and videos with fancy effects.

![][56]

It also sports a burst mode which lets you take multiple snaps in quick succession, and an option to share your photos with friends and family. Cheese comes pre-installed with most Linux distros, but you can also download and install it from Software Center.

### **51\. MyPaint**

MyPaint is a free and open-source raster graphics editor which focuses on digital painting rather than image manipulation and post-processing. It is a cross-platform application and more or less similar to Corel Painter.

![][57]

MyPaint could be a good alternative for those who use the Microsoft Paint application on Windows. It has a simple user interface which is fast and powerful. MyPaint is available in Software Center for download.

### **52\. PlayOnLinux**

PlayOnLinux is a front-end for WINE, which allows you to run Windows applications on Linux. You just need to install Windows applications and games on WINE; you can then easily launch those applications and games using PlayOnLinux.

![][58]

### **53\. Akregator**

Akregator is the default RSS reader for the KDE Plasma environment, developed under the KDE project. It has a simple user interface and integrates with KDE's Konqueror browser, so that you don't need to switch between apps while reading news feeds.

![][59]

Akregator also offers features like desktop notifications, automatic feed fetching, etc. It is one of the best feed readers you will find across most Linux distros.

### **54\. Brave**

Brave is an open-source web browser which blocks ads and trackers, so that you can browse your content fast and safely. What it actually does is pay websites and YouTubers on your behalf. If you prefer contributing to websites and YouTubers rather than seeing advertisements, this browser is for you.

![][60]

This is a new concept, and it could be a good browser for those who prefer safe browsing without compromising important data on the internet.

### **55\. Bitcoin Core**

Bitcoin Core is the official Bitcoin client, which is highly secure and reliable. It keeps track of all your transactions and makes sure all transactions are valid. It keeps Bitcoin miners and banks from taking full control of your Bitcoin wallet.

![][61]

Bitcoin Core also offers other important features like private key backups, cold storage and security notifications.

```
$ sudo add-apt-repository ppa:bitcoin/bitcoin
$ sudo apt-get update
$ sudo apt-get install bitcoin-qt
```
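Bitcoin Core also installs a daemon and a command-line client alongside the GUI, which is handy for checking on a node without opening a window. A small sketch (assuming `bitcoind` is already running and synced):

```
# sync status, block height and chain info
$ bitcoin-cli getblockchaininfo

# wallet balance
$ bitcoin-cli getbalance

# info about connected peers
$ bitcoin-cli getpeerinfo
```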
### **56\. Speedy Duplicate Finder**

Speedy Duplicate Finder is a cross-platform file finder which helps you find duplicate files on your system and free up disk space. It is a smart tool which searches for duplicate files on the entire hard disk, and it features a smart filter which helps you find files by file type, extension or size.

![][62]

It has a simple and clean user interface which is very easy to handle. As soon as you download it from Software Center, you are ready to go with your disk space clean-up.

### **57\. Zulip**

Zulip is a free and open-source group chat application which was acquired by Dropbox. It is written in Python and uses a PostgreSQL database. It was designed and developed to be a good alternative to other chat applications like Slack and HipChat.

![][63]

Zulip is a feature-rich application with features such as drag-and-drop file sharing, group chats, private messaging, image previews and many more. It also supports integration with the likes of GitHub, JIRA, Sentry and hundreds of other services.

### **58\. Okular**

Okular is a cross-platform document viewer developed by KDE for the KDE desktop environment. It is a simple document viewer and supports file formats like Portable Document Format (PDF), PostScript, DjVu, Microsoft Compiled HTML Help, and many other major file formats.

![][64]

Okular is one of the best document viewers you should try on Ubuntu, as it offers features like commenting on PDF documents, drawing lines, highlighting and much more. You can also extract text from a PDF document to a text file.

### **59\. FocusWriter**

FocusWriter is a distraction-free word processor which hides your desktop screen so that you can focus only on writing. As you can see in the screenshot below, the whole Ubuntu screen is hidden; it's just you and your word processor. But you can always access the Ubuntu screen whenever you need it by just moving your mouse cursor to the edges of the screen.

![][65]

It is a lightweight word processor with support for file formats like TXT, RTF and ODT files. It also offers a fully customizable user interface and features like timers and alarms, daily goals, sound effects and support for translation into 20 languages.

### **60\. Guake**

Guake is a cool drop-down terminal for the GNOME desktop environment. Guake appears in a flash whenever you need it and disappears as soon as your task is completed. You just need to press the F12 key to launch or exit it, so launching Guake is way faster than launching a new terminal window.

![][66]

Guake is a feature-rich terminal with features like support for multiple tabs, the ability to save your terminal content to a file in a few clicks, and a fully customizable user interface.

```
$ sudo apt-get install guake
```

### **61\. KDE Connect**

KDE Connect is an awesome application, and I would have loved to list it much higher in this marathon article, but the competition is intense. Anyway, KDE Connect helps you get your Android smartphone notifications directly on your Ubuntu desktop.

![][67]

With KDE Connect you can do a whole lot of other things, like check your phone's battery level, exchange files between your computer and Android phone, sync the clipboard, and send SMS messages. You can also use your phone as a wireless mouse or keyboard.

```
$ sudo add-apt-repository ppa:webupd8team/indicator-kdeconnect
$ sudo apt-get update
$ sudo apt-get install kdeconnect indicator-kdeconnect
```
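KDE Connect also comes with a command-line interface, `kdeconnect-cli`, which is handy for scripting phone interactions. A hedged sketch follows; the flag names are stated from memory, so check `kdeconnect-cli --help` on your install.

```
# list paired and reachable devices with their IDs
$ kdeconnect-cli -l

# ping a device (replace <device-id> with an ID from the list)
$ kdeconnect-cli --ping -d <device-id>

# send a file to the phone
$ kdeconnect-cli --share ~/Pictures/photo.jpg -d <device-id>
```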
CopyQ** - -CopyQ is a simple but very useful clipboard manager which stores the content of the system clipboard whenever it changes, so that you can search it and restore entries whenever you need. It is a great tool to have, as it supports text, images, HTML and other formats. - -![][68] - -CopyQ comes pre-loaded with features like drag and drop, copy/paste, edit, remove, sort, create, etc. It also supports integration with text editors like Vim, so it could be a very useful tool if you are a programmer. - -``` -$ sudo add-apt-repository ppa:hluk/copyq -$ sudo apt-get update -$ sudo apt-get install copyq -``` - -### **63\. Tilix** - -Tilix is a feature-rich, advanced GTK3 tiling terminal emulator. If you use the GNOME desktop environment, then you're going to love Tilix, as it follows the GNOME Human Interface Guidelines. What Tilix offers over the default terminal emulators on most Linux distros is the ability to split the terminal window into multiple terminal panes. - -![][69] - -Tilix offers features like custom links, image support, multiple panes, drag and drop, persistent layouts and many more. It also has support for keyboard shortcuts, and you can customize the shortcuts from the Preferences settings according to your needs. - -``` -$ sudo add-apt-repository ppa:webupd8team/terminix -$ sudo apt-get update -$ sudo apt-get install tilix -``` - -### **64\. Anbox** - -Anbox is an Android emulator which lets you install and run Android apps on your Linux system. It is a free and open-source Android emulator that executes the Android runtime environment using Linux containers. It uses the latest Linux technologies and Android releases, so you can run any Android app like any other native application. - -![][70] - -Anbox is one of the modern and feature-rich emulators, offering features like no limit on application use, a powerful user interface, and seamless integration with the host operating system. - -First you need to install the kernel modules: - -``` -$ sudo add-apt-repository ppa:morphis/anbox-support -$ sudo apt-get update -$ sudo apt install anbox-modules-dkms -``` - -Now install Anbox itself using snap: - -``` -$ snap install --devmode --beta anbox -``` - -### **65\. OpenShot** - -OpenShot is the best open-source video editor you will find for Linux distros. It is a cross-platform video editor which is very easy to use without any compromise in performance. It supports all the major audio, video and image formats. - -![][71] - -OpenShot has a clean user interface and offers features like drag and drop, clip resizing, scaling, trimming, snapping, real-time previews, audio mixing and editing, and many more. - -``` -$ sudo add-apt-repository ppa:openshot.developers/ppa -$ sudo apt-get update -$ sudo apt-get install openshot-qt -``` - -### **66\. Plank** - -If you are looking for a cool and simple dock for your Ubuntu desktop, then Plank should be the #1 choice for you. It is a near-perfect dock and you don't need to make any tweaks after installation, but if you want to, it has a built-in preferences panel where you can customize the theme, dock size and position. - -![][72] - -Despite being a simple dock, Plank offers features like item rearrangement by simple drag and drop, icons for pinned and running apps, and transparent theme support. - -``` -$ sudo add-apt-repository ppa:ricotz/docky -$ sudo apt-get update -$ sudo apt-get install plank -``` - -### **67\. FileZilla** - -FileZilla is a free and cross-platform FTP application which comprises the FileZilla client and server.
It lets you transfer files using FTP and encrypted variants like FTPS and SFTP, and it supports the IPv6 internet protocol. - -![][73] - -It is a simple file transfer application with features like drag and drop, support for various languages used worldwide, a powerful user interface for multitasking, the ability to control and configure transfer speeds, and many more. - -### **68\. Stacer** - -Stacer is an open-source system diagnostic tool and optimizer developed using the Electron framework. It has an excellent user interface, and you can clean cache memory, manage start-up applications, uninstall apps that are no longer needed, and monitor background system processes. - -![][74] - -It also lets you check disk, memory and CPU usage, and gives real-time stats of downloads and uploads. It looks like a tough competitor to Ubuntu Cleaner, but both have unique features that set them apart. - -``` -$ sudo add-apt-repository ppa:oguzhaninan/stacer -$ sudo apt-get update -$ sudo apt-get install stacer -``` - -### **69\. 4K Video Downloader** - -4K Video Downloader is a simple video downloading tool which you can use to download videos, playlists and channels from Vimeo, Facebook, YouTube and other online video streaming sites. It supports downloading YouTube playlists and channels in MP4, MKV, M4A, 3GP and many other video/audio file formats. - -![][75] - -4K Video Downloader is more capable than you might think: apart from normal video downloads, it supports 3D and 360-degree video downloading. It also offers features like in-app proxy setup and direct transfer to iTunes. You can download it from [here][76]. - -### **70\. Qalculate** - -Qalculate is a multi-purpose, cross-platform desktop calculator which is simple but very powerful. It can be used for solving complicated maths problems and equations, currency conversions, and many other daily calculations. - -![][77] - -It has an excellent user interface and offers features such as customizable functions, unit calculations, symbolic calculations, arithmetic, plotting and many other functions you will find in any scientific calculator. - -### **71\. Hiri** - -Hiri is a cross-platform email client developed in the Python programming language. It has a slick user interface, and it can be a good alternative to Microsoft Outlook in terms of features and services offered. It is a great email client that can be used for sending and receiving emails and managing contacts, calendars and tasks. - -![][78] - -It is a feature-rich email client that offers features like an integrated task manager, email synchronization, email rating, email filtering and many more. - -``` -$ sudo snap install hiri -``` - -### **72\. Sublime Text** - -Sublime Text is a cross-platform source code editor programmed in C++ and Python. It has a Python Application Programming Interface (API) and supports all the major programming and markup languages. It is a simple and lightweight text editor which can be used as an IDE, with features like auto-completion, syntax highlighting, split editing, etc. - -![][79] - -Some additional features in this text editor include Goto Anything, Goto Definition, multiple selections, a command palette and a fully customizable user interface. - -``` -$ sudo apt-get install sublime-text -``` - -### **73\. TeXstudio** - -TeXstudio is an integrated writing environment for creating and editing LaTeX documents. It is an open-source editor which offers features like syntax highlighting, an integrated viewer, an interactive spelling checker, code folding, drag and drop, etc.
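- -If you prefer the command line, TeXstudio is also packaged in Ubuntu's universe repository on recent releases, so (assuming the universe repository is enabled on your system) a plain apt install will likely be enough: - -``` -$ sudo apt-get install texstudio -```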
- -![][80] - -It is a cross-platform editor and has a very simple, lightweight user interface which is easy to use. It ships with integration for the BibTeX and BibLaTeX bibliography managers and also an integrated PDF viewer. You can download the TeXstudio installation package from its [website][81] or the Ubuntu Software Center. - -### **74\. QtQR** - -QtQR is a Qt-based application that lets you create and read QR codes in Ubuntu. It is developed in Python and Qt and has a simple and lightweight user interface. You can encode website URLs, email addresses, text, SMS, etc., and you can also decode QR codes using your webcam. - -![][82] - -QtQR could prove to be a useful tool to have if you deal with product sales and services, as few similar tools require so little hardware to function smoothly. - -``` -$ sudo add-apt-repository ppa:qr-tools-developers/qr-tools-stable -$ sudo apt-get update -$ sudo apt-get install qtqr -``` - -### **75\. Kontact** - -Kontact is an integrated personal information manager (PIM) developed by KDE for the KDE desktop environment. It is an all-in-one software suite that integrates KMail, KOrganizer and KAddressBook into a single user interface, from where you can manage all your mail, contacts, schedules, etc. - -![][83] - -It could be a very good alternative to Microsoft Outlook, as it is a fast and highly configurable information manager. It has a very good user interface which you will find very easy to use. - -``` -$ sudo apt-get install kontact -``` - -### **76\. NitroShare** - -NitroShare is a cross-platform, open-source network file sharing application. It lets you easily share files between multiple operating systems on a local network. It is a simple yet powerful application, and it automatically detects other devices running NitroShare on the local network. - -![][84] - -File transfer speed is what makes NitroShare a stand-out file sharing application, as it achieves gigabit speeds on capable hardware. No additional configuration is required; you can start transferring files as soon as you install it. - -``` -$ sudo apt-add-repository ppa:george-edison55/nitroshare -$ sudo apt-get update -$ sudo apt-get install nitroshare -``` - -### **77\. Konversation** - -Konversation is an open-source Internet Relay Chat (IRC) client developed for the KDE desktop environment. It gives speedy access to the Freenode network's channels, where you can find support for most distributions. - -![][85] - -It is a simple chat client with features like support for IPv6 connections, SSL server support, bookmarks, on-screen notifications, UTF-8 detection, and additional themes. It has an easy-to-use GUI which is highly configurable. - -``` -$ sudo apt-get install konversation -``` - -### **78\. Discord** - -If you're a hardcore gamer and play online games frequently, then I have a great application for you. Discord is a free Voice over Internet Protocol (VoIP) application designed especially for online gamers around the world. It is a cross-platform application and can be used for text and audio chats. - -![][86] - -Discord is a very popular VoIP application in the gaming community, and as it is a completely free application, it is a very good competitor to the likes of Skype, Ventrilo and TeamSpeak. It also offers features like crystal-clear voice quality and modern text chat where you can share images, videos and links.
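- -Discord is distributed as a .deb package from its website rather than through the Ubuntu archives, but there is also a community-maintained snap package; assuming snapd is installed on your system, this is likely the quickest way to get it: - -``` -$ sudo snap install discord -```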
- -### **79\. QuiteRSS** - -QuiteRSS is an open-source news aggregator for RSS and Atom news feeds. It is a cross-platform feed reader written in Qt and C++. It has a simple user interface which you can switch between classic and newspaper modes. It comes with an integrated WebKit browser out of the box, so that you can perform all your tasks in a single window. - -![][87] - -QuiteRSS comes with features such as content blocking, automatic scheduled feed updates, OPML import/export, system tray integration and many other features you would expect in any feed reader. - -``` -$ sudo apt-get install quiterss -``` - -### **80\. MPV Media Player** - -MPV is a free and open-source media player based on MPlayer and MPlayer 2. It has a simple user interface where you just need to drag and drop audio/video files to play them, as there is no option to add media files from the GUI. - -![][88] - -One notable thing about MPV is that it can play 4K videos effortlessly, which is not the case with many other media players on Linux distros. It also gives you the ability to play videos from online video streaming sites, including YouTube and Dailymotion. - -``` -$ sudo add-apt-repository ppa:mc3man/mpv-tests -$ sudo apt-get update -$ sudo apt-get install -y mpv -``` - -### **81\. Plume Creator** - -If you're a writer, then Plume Creator is a must-have application for you, because you will not find another app for Ubuntu with features quite like it. Writing and editing stories and chapters is a tedious task, and Plume Creator will ease it for you with the help of some amazing tools it has to offer. - -![][89] - -It is an open-source application with a minimal user interface which you could find confusing in the beginning, but you will get used to it in time. It offers features like editing notes and synopses, export support for HTML and ODT formats, and rich text editing. - -``` -$ sudo apt-get install plume-creator -``` - -### **82\. Chromium Web Browser** - -Chromium is the open-source web browser project on which Google Chrome is based. Chromium is nearly identical to the Google Chrome web browser in terms of appearance and features. It is a lightweight and fast web browser with a minimalist user interface. - -![][90] - -If you use Google Chrome regularly on Windows and are looking for a similar browser for Linux, then Chromium is the best browser for you, as you can log in to your Google account to access all your Google services, including Gmail. - -### **83\. Simple Weather Indicator** - -Simple Weather Indicator is an open-source weather indicator app developed in Python. It automatically detects your location and shows you weather information like temperature, the possibility of rain, humidity, wind speed and visibility. - -![][91] - -The weather indicator comes with configurable options such as location detection, temperature SI units, location visibility on/off, etc. It is a cool app to have that fits in comfortably with your desktop. - -### **84\. SpeedCrunch** - -SpeedCrunch is a fast and high-precision scientific calculator. It comes preloaded with math functions, user-defined functions, and support for complex numbers and unit conversions. It has a simple user interface which is easy to use. - -![][92] - -This scientific calculator has some amazing features like result preview, syntax highlighting and auto-completion. It is a cross-platform calculator with multi-language support. - -### **85\. Scribus** - -Scribus is a free and open-source desktop publishing application that lets you create posters, magazines and books. It is a cross-platform application based on the Qt toolkit and released under the GNU General Public License.
It is a professional application with features like CMYK and ICC color management, a Python-based scripting engine, and PDF creation. - -![][93] - -Scribus comes with a decent user interface which is easy to use and works effortlessly on systems with low-end hardware. It can be downloaded and installed from the Software Center on all the latest Linux distros. - -### **86\. Cura** - -Cura is an open-source 3D printing application developed by David Braam. Ultimaker Cura is the most popular software in the 3D printing world, used by millions of users worldwide. It follows a 3-step 3D printing model: Design, Prepare and Print. - -![][94] - -It is a feature-rich 3D printing application with seamless integration support for CAD plug-ins and add-ons. It is a very simple and easy-to-use tool, so novice artists can start right away. It supports major file formats like STL, 3MF and OBJ. - -### **87\. Nomacs** - -Nomacs is an open-source, cross-platform image viewer which supports all the major image formats, including RAW and PSD images. It can browse images inside ZIP or Microsoft Office files and extract them to a directory. - -![][95] - -Nomacs has a very simple user interface with image thumbnails at the top, and it offers some basic image manipulation features like crop, resize, rotate, color correction, etc. - -``` -$ sudo add-apt-repository ppa:nomacs/stable -$ sudo apt-get update -$ sudo apt-get install nomacs -``` - -### **88\. BitTicker** - -BitTicker is a live Bitcoin-USDT ticker for Ubuntu. It is a simple tool that connects to the bittrex.com market, retrieves the latest BTC-USDT price, and displays it in the system tray next to the Ubuntu clock. - -![][96] - -It is a simple but very useful tool if you invest in Bitcoin regularly and have to study price fluctuations regularly. - -### **89\. Organize My Files** - -Organize My Files is a cross-platform, one-click file organizer, and it is available for Ubuntu as a snap package. It is a simple but powerful tool that will help you find unorganized files with a single click and take them under control. - -![][97] - -It has an intuitive user interface which is powerful and fast. It offers features such as auto organizing, recursive organizing, smart filters and multi-folder organizing. - -### **90\. GnuCash** - -GnuCash is free financial accounting software licensed under the GNU General Public License. It is ideal software for personal and small business accounting. It has a simple user interface which allows you to keep track of bank accounts, stocks, income and expenses. - -![][98] - -GnuCash implements a double-entry bookkeeping system and is based on professional accounting principles to ensure balanced books and accurate reports. - -### **91\. Calibre** - -Calibre is a cross-platform, open-source solution to all your e-book needs. It is a simple e-book organizer that offers features like displaying, creating and editing e-books, organizing existing e-books into virtual libraries, syncing and many more. - -![][99] - -Calibre also helps you convert e-books into whichever format you need and send them to your e-book reading device. Calibre is a very good tool to have if you regularly read and manage e-books. - -### **92\. MATE Dictionary** - -MATE Dictionary is a simple dictionary developed for the MATE desktop environment. You just need to type in a word, and the dictionary will display its meaning and references. - -![][100] - -It is a simple and lightweight online dictionary with a minimalist user interface. - -### **93\.
Converseen** - -Converseen is a free, cross-platform batch image processing application that lets you convert, edit, resize, rotate and flip a large number of images with a single mouse click. It offers features like renaming a bunch of images using a prefix/suffix, extracting images from a Windows icon file, etc. - -![][101] - -It has a very good user interface which is easy to use even if you are a novice user. It can also convert an entire PDF file into a collection of images. - -### **94\. Tiled Map Editor** - -Tiled is a free and open-source level map editor which you can use to edit maps in various projections, such as orthogonal, isometric and hexagonal. This tool can be very useful for game developers during the game development cycle. - -![][102] - -Tiled is a versatile map editor which lets you create power-up positions, map layouts, collision areas, and enemy positions. It saves all the data in the TMX format. - -### **95\. Qmmp** - -Qmmp is a free and open-source audio player developed in Qt and C++. It is a cross-platform audio player with a user interface very similar to Winamp. It has a simple and intuitive user interface, and Winamp skins can be used instead of the default UI. - -![][103] - -It offers features such as automatic album cover fetching, multiple artist support, additional plug-in and add-on support, and other features similar to Winamp. - -``` -$ sudo add-apt-repository ppa:forkotov02/ppa -$ sudo apt-get update -$ sudo apt-get install qmmp qmmp-plugin-pack -``` - -### **96\. Arora** - -Arora is a free and open-source web browser which offers features like a dedicated download manager, bookmarks, a privacy mode and tabbed browsing. - -![][104] - -The Arora web browser was developed by Benjamin C. Meyer, and it is popular among Linux users for its lightweight nature and flexibility. - -### **97\. XnSketch** - -XnSketch is a cool application for Ubuntu that will help you transform your photos into cartoon or sketch images in a few clicks. It sports 18 different effects, such as black strokes, white strokes, pencil sketch and others. - -![][105] - -It has an excellent user interface which you will find easy to use. Some additional features in XnSketch include opacity and edge strength adjustment, and contrast, brightness and saturation adjustment. - -### **98\. Geany** - -Geany is a simple and lightweight text editor which works like an Integrated Development Environment (IDE). It is a cross-platform text editor and supports all the major programming languages, including Python, C++, LaTeX, Pascal, C#, etc. - -![][106] - -Geany has a simple user interface which resembles programming editors like Notepad++. It offers IDE-like features such as code navigation, auto-completion, syntax highlighting, and extension support. - -``` -$ sudo apt-get install geany -``` - -### **99\. Mumble** - -Mumble is another Voice over Internet Protocol application, very similar to Discord. Mumble is also designed primarily for online gamers and uses a client-server architecture for end-to-end chat. Voice quality is very good on Mumble, and it offers end-to-end encryption to ensure privacy. - -![][107] - -Mumble is an open-source application and features a very simple user interface which is easy to use. Mumble is available for download from the Ubuntu Software Center, or via apt (note that the `mumble` package is the desktop client, while `mumble-server` provides the Murmur server): - -``` -$ sudo apt-get install mumble -``` - -### **100\. Deluge** - -Deluge is a cross-platform and lightweight BitTorrent client that can be used for downloading files on Ubuntu.
BitTorrent clients come bundled with many Linux distros, but Deluge is the best of them, with a simple user interface which is easy to use. - -![][108] - -Deluge comes with all the features you will find in BitTorrent clients, but one feature that stands out is the ability to access the client from other devices as well, so that you can download files to your computer even when you are not at home. - -``` -$ sudo add-apt-repository ppa:deluge-team/ppa -$ sudo apt-get update -$ sudo apt-get install deluge -``` - -So these are my picks for the 100 best applications for Ubuntu in 2018, which you should try. All the applications listed here were tested on Ubuntu 18.04 and should work on older versions too. Feel free to share your views about the article at [@LinuxHint][109] and [@SwapTirthakar][110]. - --------------------------------------------------------------------------------- - -via: https://linuxhint.com/100_best_ubuntu_apps/ - -作者：[Swapnil Tirthakar][a] -选题：[lujun9972](https://github.com/lujun9972) -译者：[译者ID](https://github.com/译者ID) -校对：[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://linuxhint.com/author/swapnil/ -[1]:https://linuxhint.com/applications-2018-ubuntu/ -[2]:https://linuxhint.com/wp-content/uploads/2018/06/100-Best-Ubuntu-Apps.png -[3]:https://linuxhint.com/wp-content/uploads/2018/06/Chrome.png -[4]:https://linuxhint.com/wp-content/uploads/2018/06/Steam.png -[5]:https://linuxhint.com/wp-content/uploads/2018/06/Wordpress.png -[6]:https://linuxhint.com/wp-content/uploads/2018/06/VLC.png -[7]:https://linuxhint.com/wp-content/uploads/2018/06/Atom-Text-Editor.png -[8]:https://linuxhint.com/wp-content/uploads/2018/06/GIMP.png -[9]:https://linuxhint.com/wp-content/uploads/2018/06/Google-Play.png -[10]:https://www.googleplaymusicdesktopplayer.com/ -[11]:https://linuxhint.com/wp-content/uploads/2018/06/Franz.png -[12]:https://meetfranz.com/#download -[13]:https://linuxhint.com/wp-content/uploads/2018/06/Synaptic.png -[14]:https://linuxhint.com/wp-content/uploads/2018/06/Skype.png -[15]:https://linuxhint.com/wp-content/uploads/2018/06/VirtualBox.png -[16]:https://linuxhint.com/wp-content/uploads/2018/06/Unity-Tweak-Tool.png -[17]:https://linuxhint.com/wp-content/uploads/2018/06/Ubuntu-Cleaner.png -[18]:https://linuxhint.com/wp-content/uploads/2018/06/Visual-Studio-Code.png -[19]:https://linuxhint.com/wp-content/uploads/2018/06/Corebird.png -[20]:https://linuxhint.com/wp-content/uploads/2018/06/Pixbuf.png -[21]:https://linuxhint.com/wp-content/uploads/2018/06/Clementine.png -[22]:https://linuxhint.com/wp-content/uploads/2016/06/blender.jpg -[23]:https://linuxhint.com/wp-content/uploads/2018/06/Audacity.png -[24]:https://linuxhint.com/wp-content/uploads/2018/06/Vim.png -[25]:https://linuxhint.com/wp-content/uploads/2018/06/Inkscape-1.png -[26]:https://linuxhint.com/wp-content/uploads/2018/06/ShotCut.png -[27]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Screen-Recorder.png -[28]:https://linuxhint.com/wp-content/uploads/2018/06/Telegram.png -[29]:https://linuxhint.com/wp-content/uploads/2018/06/ClamTk.png -[30]:https://linuxhint.com/wp-content/uploads/2018/06/Mailspring.png -[31]:https://linuxhint.com/wp-content/uploads/2018/06/PyCharm.png -[32]:https://linuxhint.com/wp-content/uploads/2018/06/Caffeine.png -[33]:https://linuxhint.com/wp-content/uploads/2018/06/Etcher.png -[34]:https://etcher.io/ -[35]:https://linuxhint.com/wp-content/uploads/2018/06/Neofetch.png
-[36]:https://linuxhint.com/wp-content/uploads/2018/06/Liferea.png -[37]:https://linuxhint.com/wp-content/uploads/2018/06/Shutter.png -[38]:https://linuxhint.com/wp-content/uploads/2018/06/Weather.png -[39]:https://linuxhint.com/wp-content/uploads/2018/06/Ramme.png -[40]:https://github.com/terkelg/ramme/releases -[41]:https://linuxhint.com/wp-content/uploads/2018/06/Thunderbird.png -[42]:https://linuxhint.com/wp-content/uploads/2018/06/Pidgin.png -[43]:https://linuxhint.com/wp-content/uploads/2018/06/Krita.png -[44]:https://linuxhint.com/wp-content/uploads/2018/06/Dropbox.png -[45]:https://linuxhint.com/wp-content/uploads/2018/06/kodi.png -[46]:https://linuxhint.com/wp-content/uploads/2018/06/Spotify.png -[47]:https://linuxhint.com/wp-content/uploads/2018/06/Brackets.png -[48]:https://linuxhint.com/wp-content/uploads/2018/06/Bitwarden.png -[49]:https://linuxhint.com/wp-content/uploads/2018/06/Terminator.png -[50]:https://linuxhint.com/wp-content/uploads/2018/06/Yak-Yak.png -[51]:https://linuxhint.com/wp-content/uploads/2018/06/Thonny.png -[52]:https://linuxhint.com/wp-content/uploads/2018/06/Font-Manager.png -[53]:https://linuxhint.com/wp-content/uploads/2018/06/Atril.png -[54]:https://linuxhint.com/wp-content/uploads/2018/06/Notepadqq.png -[55]:https://linuxhint.com/wp-content/uploads/2018/06/Amarok.png -[56]:https://linuxhint.com/wp-content/uploads/2018/06/Cheese.png -[57]:https://linuxhint.com/wp-content/uploads/2018/06/MyPaint.png -[58]:https://linuxhint.com/wp-content/uploads/2018/06/PlayOnLinux.png -[59]:https://linuxhint.com/wp-content/uploads/2018/06/Akregator.png -[60]:https://linuxhint.com/wp-content/uploads/2018/06/Brave.png -[61]:https://linuxhint.com/wp-content/uploads/2018/06/Bitcoin-Core.png -[62]:https://linuxhint.com/wp-content/uploads/2018/06/Speedy-Duplicate-Finder.png -[63]:https://linuxhint.com/wp-content/uploads/2018/06/Zulip.png -[64]:https://linuxhint.com/wp-content/uploads/2018/06/Okular.png -[65]:https://linuxhint.com/wp-content/uploads/2018/06/Focus-Writer.png -[66]:https://linuxhint.com/wp-content/uploads/2018/06/Guake.png -[67]:https://linuxhint.com/wp-content/uploads/2018/06/KDE-Connect.png -[68]:https://linuxhint.com/wp-content/uploads/2018/06/CopyQ.png -[69]:https://linuxhint.com/wp-content/uploads/2018/06/Tilix.png -[70]:https://linuxhint.com/wp-content/uploads/2018/06/Anbox.png -[71]:https://linuxhint.com/wp-content/uploads/2018/06/OpenShot.png -[72]:https://linuxhint.com/wp-content/uploads/2018/06/Plank.png -[73]:https://linuxhint.com/wp-content/uploads/2018/06/FileZilla.png -[74]:https://linuxhint.com/wp-content/uploads/2018/06/Stacer.png -[75]:https://linuxhint.com/wp-content/uploads/2018/06/4K-Video-Downloader.png -[76]:https://www.4kdownload.com/download -[77]:https://linuxhint.com/wp-content/uploads/2018/06/Qalculate.png -[78]:https://linuxhint.com/wp-content/uploads/2018/06/Hiri.png -[79]:https://linuxhint.com/wp-content/uploads/2018/06/Sublime-text.png -[80]:https://linuxhint.com/wp-content/uploads/2018/06/TeXstudio.png -[81]:https://www.texstudio.org/ -[82]:https://linuxhint.com/wp-content/uploads/2018/06/QtQR.png -[83]:https://linuxhint.com/wp-content/uploads/2018/06/Kontact.png -[84]:https://linuxhint.com/wp-content/uploads/2018/06/Nitro-Share.png -[85]:https://linuxhint.com/wp-content/uploads/2018/06/Konversation.png -[86]:https://linuxhint.com/wp-content/uploads/2018/06/Discord.png -[87]:https://linuxhint.com/wp-content/uploads/2018/06/QuiteRSS.png -[88]:https://linuxhint.com/wp-content/uploads/2018/06/MPU-Media-Player.png 
-[89]:https://linuxhint.com/wp-content/uploads/2018/06/Plume-Creator.png -[90]:https://linuxhint.com/wp-content/uploads/2018/06/Chromium.png -[91]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Weather-Indicator.png -[92]:https://linuxhint.com/wp-content/uploads/2018/06/SpeedCrunch.png -[93]:https://linuxhint.com/wp-content/uploads/2018/06/Scribus.png -[94]:https://linuxhint.com/wp-content/uploads/2018/06/Cura.png -[95]:https://linuxhint.com/wp-content/uploads/2018/06/Nomacs.png -[96]:https://linuxhint.com/wp-content/uploads/2018/06/Bit-Ticker-1.png -[97]:https://linuxhint.com/wp-content/uploads/2018/06/Organize-My-Files.png -[98]:https://linuxhint.com/wp-content/uploads/2018/06/GNU-Cash.png -[99]:https://linuxhint.com/wp-content/uploads/2018/06/Calibre.png -[100]:https://linuxhint.com/wp-content/uploads/2018/06/MATE-Dictionary.png -[101]:https://linuxhint.com/wp-content/uploads/2018/06/Converseen.png -[102]:https://linuxhint.com/wp-content/uploads/2018/06/Tiled-Map-Editor.png -[103]:https://linuxhint.com/wp-content/uploads/2018/06/Qmmp.png -[104]:https://linuxhint.com/wp-content/uploads/2018/06/Arora.png -[105]:https://linuxhint.com/wp-content/uploads/2018/06/XnSketch.png -[106]:https://linuxhint.com/wp-content/uploads/2018/06/Geany.png -[107]:https://linuxhint.com/wp-content/uploads/2018/06/Mumble.png -[108]:https://linuxhint.com/wp-content/uploads/2018/06/Deluge.png -[109]:https://twitter.com/linuxhint -[110]:https://twitter.com/SwapTirthakar diff --git a/sources/tech/20180902 Learning BASIC Like It-s 1983.md b/sources/tech/20180902 Learning BASIC Like It-s 1983.md index 5790ef6e88..83ef0ff982 100644 --- a/sources/tech/20180902 Learning BASIC Like It-s 1983.md +++ b/sources/tech/20180902 Learning BASIC Like It-s 1983.md @@ -1,3 +1,4 @@ +Translating by robsean Learning BASIC Like It's 1983 ====== I was not yet alive in 1983. This is something that I occasionally regret. I am especially sorry that I did not experience the 8-bit computer era as it was happening, because I think the people that first encountered computers when they were relatively simple and constrained have a huge advantage over the rest of us. diff --git a/sources/tech/20190107 Aliases- To Protect and Serve.md b/sources/tech/20190107 Aliases- To Protect and Serve.md deleted file mode 100644 index 783c59dc41..0000000000 --- a/sources/tech/20190107 Aliases- To Protect and Serve.md +++ /dev/null @@ -1,176 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Aliases: To Protect and Serve) -[#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve) -[#]: author: (Paul Brown https://www.linux.com/users/bro66) - -Aliases: To Protect and Serve -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p) - -Happy 2019! Here in the new year, we’re continuing our series on aliases. By now, you’ve probably read our [first article on aliases][1], and it should be quite clear how they are the easiest way to save yourself a lot of trouble. You already saw, for example, that they helped with muscle-memory, but let's see several other cases in which aliases come in handy. - -### Aliases as Shortcuts - -One of the most beautiful things about Linux's shells is how you can use zillions of options and chain commands together to carry out really sophisticated operations in one fell swoop. 
All right, maybe beauty is in the eye of the beholder, but let's agree that this feature is practical. - -The downside is that you often come up with recipes that are hard to remember or cumbersome to type. Say space on your hard disk is at a premium and you want to do some New Year's cleaning. Your first step may be to look for stuff to get rid of in your home directory. One criterion you could apply is to look for stuff you don't use anymore. `ls` can help with that: - -``` -ls -lct -``` - -The instruction above shows the details of each file and directory (`-l`) and also shows when each item was last changed (`-c`). It then orders the list from most recently changed to least recently changed (`-t`). - -Is this hard to remember? You probably don't use the `-c` and `-t` options every day, so perhaps. In any case, defining an alias like - -``` -alias lt='ls -lct' -``` - -will make it easier. - -Then again, you may want to have the list show the oldest files first: - -``` -alias lo='lt -F | tac' -``` - -![aliases][3] - -Figure 1: The lt and lo aliases in action. - -[Used with permission][4] - -There are a few interesting things going on here. First, we are using an alias (`lt`) to create another alias -- which is perfectly okay. Second, we are passing a new parameter to `lt` (which, in turn, gets passed to `ls` through the definition of the `lt` alias). - -The `-F` option appends special symbols to the names of items to better differentiate regular files (that get no symbol) from executable files (that get an `*`), files from directories (which end in `/`), and all of the above from links, symbolic and otherwise (that end in an `@` symbol). The `-F` option is a throwback to the days when terminals were monochrome and there was no other way to easily see the difference between items. You use it here because, when you pipe the output from `lt` through to `tac`, you lose the colors from `ls`. - -The third thing to pay attention to is the use of piping. Piping happens when you pass the output from an instruction to another instruction. The second instruction can then use that output as its own input. In many shells (including Bash), you pipe something using the pipe symbol (`|`). - -In this case, you are piping the output from `lt -F` into `tac`. `tac`'s name is a bit of a joke. You may have heard of `cat`, the instruction that was nominally created to con _cat_ enate files together, but that in practice is used to print out the contents of a file to the terminal. `tac` does the same, but prints out the contents it receives in reverse order. Get it? `cat` and `tac`. Developers, you so funny! - -The thing is, both `cat` and `tac` can also print out stuff piped over from another instruction, in this case, a list of files ordered chronologically. - -So... after that digression, what comes out of the other end is the list of files and directories of the current directory in inverse order of freshness. - -The final thing you have to bear in mind is that, while `lt` will work with the current directory and any other directory... - -``` -# This will work: -lt -# And so will this: -lt /some/other/directory -``` - -... `lo` will only work with the current directory: - -``` -# This will work: -lo -# But this won't: -lo /some/other/directory -``` - -This is because Bash expands aliases into their components. When you type this: - -``` -lt /some/other/directory -``` - -Bash REALLY runs this: - -``` -ls -lct /some/other/directory -``` - -which is a valid Bash command.
- -However, if you type this: - -``` -lo /some/other/directory -``` - -Bash tries to run this: - -``` -ls -lct -F | tac /some/other/directory -``` - -which is not a valid instruction, mainly because _/some/other/directory_ is a directory, and `cat` and `tac` don't do directories. (A function-based workaround for this limitation is sketched at the end of this article.) - -### More Alias Shortcuts - - * `alias lll='ls -R'` prints out the contents of a directory and then drills down and prints out the contents of its subdirectories and the subdirectories of the subdirectories, and so on and so forth. It is a way of seeing everything you have under a directory. - - * `alias mkdir='mkdir -pv'` lets you make directories within directories all in one go. With the base form of `mkdir`, to make a new directory containing a subdirectory you have to do this: - -``` -mkdir newdir -mkdir newdir/subdir -``` - -Or this: - -``` -mkdir -p newdir/subdir -``` - -while with the alias you would only have to do this: - -``` -mkdir newdir/subdir -``` - -Your new `mkdir` will also tell you what it is doing while it is creating new directories. - -### Aliases as Safeguards - -The other thing aliases are good for is as safeguards against erasing or overwriting your files accidentally. At this stage you have probably heard the legendary story about the new Linux user who ran: - -``` -rm -rf / -``` - -as root, and nuked the whole system. Then there's the user who decided that: - -``` -rm -rf /some/directory/ * -``` - -was a good idea and erased the complete contents of their home directory. Notice how easy it is to overlook that space separating the directory path and the `*`. - -Both things can be avoided with the `alias rm='rm -i'` alias. The `-i` option makes `rm` ask the user whether that is what they really want to do, giving you a second chance before it wreaks havoc in your file system. - -The same goes for `cp`, which can overwrite a file without telling you anything. Create an alias like `alias cp='cp -i'` and stay safe! - -### Next Time - -We are moving more and more into scripting territory. Next time, we'll take the next logical step and see how combining instructions on the command line gives you really interesting and sophisticated solutions to everyday admin problems.
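- -As promised above, here is a workaround for the `lo` limitation. An alias cannot place an argument in the middle of a pipeline, but a shell function can. This is a minimal sketch, assuming Bash; the name and the defaulting behavior are just one way to do it: - -``` -# lo as a function: list the given directory (default: the current -# one) oldest first, inserting the argument before the pipe. -lo() { -    ls -lct -F "${1:-.}" | tac -} -``` - -Defined this way, `lo` on its own still lists the current directory, while `lo /some/other/directory` now works too, because the argument lands where `ls` expects it rather than after `tac`.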
- - -------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve - -作者：[Paul Brown][a] -选题：[lujun9972][b] -译者：[译者ID](https://github.com/译者ID) -校对：[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/bro66 -[b]: https://github.com/lujun9972 -[1]: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve -[2]: https://www.linux.com/files/images/fig01png-0 -[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig01_0.png?itok=crqTm_va (aliases) -[4]: https://www.linux.com/licenses/category/used-permission diff --git a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md new file mode 100644 index 0000000000..253ad103b5 --- /dev/null +++ b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @@ -0,0 +1,173 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – Explaining Smart Contracts And Its Types [Part 5]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – Explaining Smart Contracts And Its Types [Part 5] +====== + +![Explaining Smart Contracts And Its Types][1] + +This is the 5th article in the **Blockchain 2.0** series. The previous article of this series explored how we can implement [**Blockchain in real estate**][2]. This post briefly explores the topic of **Smart Contracts** within the domain of blockchains and related technology. Smart contracts, which are basic protocols to verify and create new “blocks” of data on the blockchain, are touted to be a focal point for future developments and applications of the system. However, like all “cure-all” medications, they are not the answer to everything. We explore the concept from the basics to understand what “smart contracts” are and what they’re not. + +### Evolving Contracts + +The world is built on contracts. No individual or firm on earth can function in current society without the use and reuse of contracts. The task of creating, maintaining, and enforcing contracts has become so complicated that entire judicial and legal systems have had to be set up in the name of **“contract law”** to support it. Most contracts are in fact overseen by a “trusted” third party to make sure the stakeholders at both ends are taken care of as per the conditions arrived at. There are contracts that even talk about a third-party beneficiary. Such contracts are intended to have an effect on a third party who is not an active (or participating) party to the contract. Settling and arguing over contractual obligations takes up the bulk of most legal battles that civil lawsuits are involved in. Surely, a better way to take care of contracts would be a godsend for individuals and enterprises alike. Not to mention the enormous paperwork it would save the government in the name of verifications and attestations[1][2]. + +Most posts in this series have looked at how existing blockchain tech is being leveraged today. In contrast, this post will be more about what to expect in the coming years.
A natural discussion about “smart contracts” evolves from the property discussions presented in the previous post. The current post aims to provide an overview of the capabilities of blockchain to automate and carry out “smart” executable programs. Dealing with this issue pragmatically means we’ll first have to define and explore what these “smart contracts” are and how they fit into the existing system of contracts. We look at major present-day applications and projects going on in the field in the next post titled, **“Blockchain 2.0: Ongoing Projects”**. + +### Defining Smart Contracts + +The [**first article of this series**][3] looked at blockchain from a fundamental point of view as a **“distributed ledger”** consisting of blocks of data that are: + + * Tamper-proof + * Non-repudiable (meaning every block of data is explicitly created by someone and that someone cannot deny any accountability for the same) + * Secure and resilient to traditional methods of cyber attack + * Almost permanent (of course this depends on the blockchain protocol overlay) + * Highly redundant, by existing on multiple network nodes or participant systems, so that the failure of one of these nodes will not affect the capabilities of the system in any way, and, + * Offering faster processing depending on the application. + + + +Because every instance of data is securely stored and accessible with suitable credentials, a blockchain network can provide an easy basis for precise verification of facts and information without the need for third-party oversight. Blockchain 2.0 developments also allow for **“distributed applications”** (a term which we’ll be looking at in detail in coming posts). Such distributed applications exist and run on the network as per requirements. They’re called when a user needs them and executed by making use of information that has already been vetted and stored on the blockchain. + +The last paragraph provides a foundation for defining smart contracts. _**The Chamber of Digital Commerce**_ provides a definition for smart contracts which many experts agree on. + +_**“Computer code that, upon the occurrence of a specified condition or conditions, is capable of running automatically according to prespecified functions. The code can be stored and processed on a distributed ledger and would write any resulting change into the distributed ledger”[1].**_ + +Smart contracts are, as mentioned above, simple computer programs working like “if-then” or “if-else if” statements. The “smart” aspect comes from the fact that the predefined inputs for the program come from the blockchain ledger, which, as proven above, is a secure and reliable source of recorded information. The program can call upon external services or sources to get information as well, if need be, to verify the terms of operation, and it will only execute once all the predefined conditions are met. + +It has to be kept in mind that, unlike what the name implies, smart contracts are not usually autonomous entities, nor are they strictly speaking contracts. A very early mention of smart contracts was made by **Nick Szabo** in 1996, where he compared the same with a vending machine accepting payment and delivering the product chosen by the user[3]. The full text can be accessed **[here][4]**. Furthermore, legal frameworks allowing the entry of smart contracts into mainstream contract use are still being developed, and as such the use of the technology is currently limited to areas where legal oversight is less explicit and stringent[4].
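+ +To make the “if-then” framing above concrete, here is a deliberately toy shell sketch of the control flow such a program follows. This is not a real blockchain program: the oracle URL and the condition value are hypothetical placeholders, and an actual smart contract would run on-chain on a platform built for the purpose rather than as a script. + +``` +#!/bin/bash +# Toy sketch of a smart contract's "if-then" skeleton: read a fact +# from a trusted source (an "oracle"), then carry out the +# prespecified action only if the agreed condition is met. +RESULT=$(curl -s https://oracle.example.com/condition)  # hypothetical oracle endpoint + +if [ "$RESULT" = "met" ]; then +    echo "Condition met: executing the agreed action and recording the outcome" +else +    echo "Condition not met: nothing to execute" +fi +```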
+ +### Major types of smart contracts + +Assuming the reader has a basic understanding of contracts and computer programming, and building on from our definition of smart contracts, we can roughly classify smart contracts and protocols into the following major categories. + +##### 1\. Smart legal contracts + +These are presumably the most obvious kind. Most, if not all, contracts are legally enforceable. Without going into too many technicalities, a smart legal contract is one that involves strict legal recourse in case the parties involved in it were to not fulfill their end of the bargain. As previously mentioned, the current legal framework in different countries and contexts lacks sufficient support for smart and automated contracts on the blockchain, and their legal status is unclear. However, once the laws are made, smart contracts can be made to simplify processes which currently involve strict regulatory oversight, such as transactions in the financial and real estate markets, government subsidies, international trade etc. + +##### 2\. DAOs + +**Decentralized Autonomous Organizations**, or DAOs for short, can be loosely defined as communities that exist on the blockchain. The community may be defined by a set of rules arrived at and put into code via smart contracts. Every action by every participant would then be subject to these sets of rules, with the task of enforcing them and reaching recourse in case of a breach left to the program. Multitudes of smart contracts make up these rules, and they work in tandem policing and watching over participants. + +A DAO called the **Genesis DAO** was created by **Ethereum** participants in May 2016. The community was meant to be a crowdfunding and venture capital platform. In a surprisingly short period of time they managed to raise an astounding **$150 million**. However, hacker(s) found loopholes in the system and managed to steal about **$50 million** worth of Ether from the crowdfund investors. The hack and its fallout resulted in a fork of the Ethereum blockchain into two, **Ethereum** and **Ethereum Classic** [5]. + +##### 3\. Application logic contracts (ALCs) + +If you’ve heard about the internet of things in conjunction with the blockchain, chances are that the matter talked about **application logic contracts**, or ALCs for short. Such smart contracts contain application-specific code that works in conjunction with other smart contracts and programs on the blockchain. They aid in communicating with and validating communication between devices (while in the domain of IoT). ALCs are a pivotal piece of every multi-function smart contract and almost always work under a managing program. They find applications everywhere in most of the examples cited here[6][7]. + +_Since development is ongoing in the area, any definition or standard, so to speak, will be fluid and vague at best for now._ + +### How smart contracts work + +To simplify things, let’s proceed by taking an example. + +John and Peter are two individuals debating about the scores in a football match. They have conflicting views about the outcome, with both of them supporting different teams (context). Since both of them need to go elsewhere and won’t be able to finish the match then, John _offers_ Peter a $100 bet that team A will beat team B in the match. Peter _considers_ and _accepts_ the bet while making it clear that they are _bound_ to the terms.
However, neither of them _trusts_ the other to honour the bet, and they have neither the time nor the money to appoint a _third party_ to oversee it. + +Assuming both John and Peter were to use a smart contract platform such as **[Etherparty][5]** to automatically settle the bet: at the time of the contract negotiation, they would both link their blockchain-based identities to the contract and set the terms, making it clear that as soon as the match is over, the program will find out who the winning side is and automatically credit the amount from the loser’s bank account to the winner’s. As soon as the match ends and media outlets report the result, the program will scour the internet for the prescribed sources, identify which team won, and relate it to the terms of the contract; in this case, since team B won, Peter gets the money from John, and after intimating both parties, the program transfers $100 from John’s account to Peter’s. After having executed, the smart contract terminates and remains inactive for all time to come, unless otherwise specified. + +The simplicity of the example aside, the situation involved a classic contract (pay attention to the italicized words), and the participants chose to implement it using a smart contract. All smart contracts basically work on a similar principle, with the program being coded to execute on predefined parameters and to spit out only the expected outputs. The outside sources the smart contract consults for information are often referred to as _oracles_ in the IT world. Oracles are a common part of many smart contract systems worldwide today. + +The use of a smart contract in this situation allowed the participants the following benefits: + + * It was faster than getting together and settling the bet manually. + * It removed the issue of trust from the equation. + * It eliminated the need for a trusted third party to handle the settlement on behalf of the parties involved. + * It cost nothing to execute. + * It is secure in how it handles parameters and sensitive data. + * The associated data will remain on the blockchain platform they ran it on permanently, and future bets can be placed by calling the same function and giving it new inputs. + * Gradually over time, assuming John and Peter develop gambling addictions, the program will help them develop reliable statistics to gauge their winning streaks. + + + +Now that we know **what smart contracts are** and **how they work**, we have yet to address **why we need them**. + +### The need for smart contracts + +As the previous example highlights, we need smart contracts for a variety of reasons. + +##### **Transparency** + +The terms and conditions involved are very clear to the counterparties. Furthermore, since the execution of the program or the smart contract involves certain explicit inputs, users have a very direct way of verifying the factors that would impact them and the contract beneficiaries. + +##### Time Efficiency + +As mentioned, smart contracts go to work immediately once they’re triggered by a control variable or a user call. Since data is made available to the system instantaneously by the blockchain and from other sources in the network, the execution does not take any time at all to verify and process information and settle the transaction.
Transferring land title deeds, for instance, a process which involves manual verification of tons of paperwork and normally takes weeks, can be processed in a matter of minutes or even seconds with the help of smart contract programs working to vet the documents and the parties involved. + +##### Precision + +Since the platform is basically just computer code and everything is predefined, there can be no subjective errors, and all the results will be precise and completely free of human error. + +##### Safety + +An inherent feature of the blockchain is that every block of data is cryptographically secured. This means that even though the data is stored on a multitude of nodes on the network for redundancy, **only the owner of the data has access to see and use it**. Similarly, every process will be completely secure and tamper-proof, with the execution utilizing the blockchain for storing important variables and outcomes along the way. The same also simplifies auditing and regulatory affairs by providing auditors with a native, unchanged and non-repudiable chronological version of the data. + +##### Trust + +The article series started by saying that blockchain adds a much-needed layer of trust to the internet and the services that run on it. The fact that smart contracts will under no circumstances show bias or subjectivity in executing the agreement means that the parties involved are fully bound to the outcomes and can trust the system with no strings attached. This also means that the **“trusted third party”** required in conventional contracts of significant value is not required here. Foul play between the parties involved, and the oversight it necessitates, will be issues of the past. + +##### Cost Effectiveness + +As highlighted in the example, utilizing a smart contract involves minimal costs. Enterprises usually have administrative staff who work exclusively on making sure that the transactions they undertake are legitimate and comply with regulations. If a deal involves multiple parties, duplication of this effort is unavoidable. Smart contracts essentially make the former irrelevant, and duplication is eliminated since both parties can have their due diligence done simultaneously. + +### Applications of Smart Contracts + +Basically, if two or more parties use a common blockchain platform and agree on a set of principles or business logic, they can come together to create a smart contract on the blockchain, and it is executed with no human intervention at all. No one can tamper with the conditions set, and any change, if the original code allows for it, is timestamped and carries the editor’s fingerprint, increasing accountability. Imagine a similar situation at a much larger enterprise scale and you understand what smart contracts are capable of. In fact, a **Capgemini study** from 2016 found that smart contracts could actually be commercially mainstream **“in the early years of the next decade”** [8]. Commercial applications involve uses in insurance, financial markets, IoT, loans, identity management systems, escrow accounts, employment contracts, and patent & royalty contracts, among others. Platforms such as Ethereum, a blockchain designed with smart contracts in mind, allow individual private users to utilize smart contracts free of cost as well. + +A more comprehensive overview of the applications of smart contracts to current technological problems will be presented in the next article of the series by exploring the companies that deal with them. + +### So, what are the drawbacks?
+ +This is not to say that smart contracts come with no concerns regarding their use whatsoever. Such concerns have actually slowed down development in this area as well. The tamper-proof nature of everything blockchain essentially makes it next to impossible to modify existing clauses or add new ones, should the parties involved need to, without a major overhaul or legal recourse. + +Secondly, even though activity on a public blockchain is open for all to see and observe, the personal identities of the parties involved in a transaction are not always known. This anonymity raises questions regarding legal impunity in case either party defaults, especially since current laws and lawmakers are not exactly accommodating of modern technology. + +Thirdly, blockchains and smart contracts are still subject to security flaws in many ways, because the technology, for all the interest in it, is still at a very nascent stage of development. This inexperience with the code and platform is what ultimately led to the DAO incident in 2016. + +All of this is setting aside the significant initial investment that might be needed in case an enterprise or firm needs to adopt a blockchain for its use. The fact that these are initial one-time investments and come with potential savings down the road, however, is what interests people. + +### Conclusion + +Current legal frameworks don’t really support a full-on smart-contract-enabled society, and won’t in the near future, for obvious reasons. A solution is to opt for **“hybrid” contracts** that combine traditional legal texts and documents with smart contract code running on blockchains designed for the purpose[4]. However, even hybrid contracts remain largely unexplored, as innovative legislation is required to bring them to fruition. The applications briefly mentioned here and many more are explored in detail in the [**next post of the series**][6]. + +**References:** + + * **[1] S. C. A. Chamber of Digital Commerce, “Smart contracts – Is the law ready,” no. September, 2018.** + * **[2] [Legal Definition of ius quaesitum tertio][7].** + * **[3][N. Szabo, “Nick Szabo — Smart Contracts: Building Blocks for Digital Markets,” 1996.][4]** + * **[4] Cardozo Blockchain Project, “‘Smart Contracts’ & Legal Enforceability,” vol. 2, p. 28, 2018.** + * **[5][The DAO Heist Undone: 97% of ETH Holders Vote for the Hard Fork.][8]** + * **[6] F. Idelberger, G. Governatori, R. Riveret, and G. Sartor, “Evaluation of Logic-Based Smart Contracts for Blockchain Systems,” 2016, pp. 167–183.** + * **[7][Types of Smart Contracts Based on Applications | Market InsightsTM – Everest Group.][9]** + * **[8] B. Cant et al., “Smart Contracts in Financial Services: Getting from Hype to Reality,” Capgemini Consult., pp.
1–24, 2016.** + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/03/smart-contracts-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/ +[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[4]: http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart_contracts_2.html +[5]: https://etherparty.com/ +[6]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/ +[7]: http://www.legal-glossary.org/ +[8]: https://futurism.com/the-dao-heist-undone-97-of-eth-holders-vote-for-the-hard-fork/ +[9]: https://www.everestgrp.com/2016-10-types-smart-contracts-based-applications-market-insights-36573.html/ diff --git a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md deleted file mode 100644 index 9e85b82f2c..0000000000 --- a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md +++ /dev/null @@ -1,50 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Blockchain 2.0: Blockchain In Real Estate [Part 4]) -[#]: via: (https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/) -[#]: author: (EDITOR https://www.ostechnix.com/author/editor/) - -Blockchain 2.0: Blockchain In Real Estate [Part 4] -====== - -![](https://www.ostechnix.com/wp-content/uploads/2019/03/Blockchain-In-Real-Estate-720x340.png) - -### Blockchain 2.0: Smart‘er’ Real Estate - -The [**previous article**][1] of this series explored the features of blockchain which will enable institutions to transform and interlace **traditional banking** and **financing systems** with it. This part will explore – **Blockchain in real estate**. The real estate industry is ripe for a revolution. It’s among the most actively traded most significant asset classes known to man. However, filled with regulatory hurdles and numerous possibilities of fraud and deceit, it’s also one of the toughest to participate in. The distributed ledger capabilities of the blockchain utilizing an appropriate consensus algorithm are touted as the way forward for the industry which is traditionally regarded as conservative in its attitude to change. - -Real estate has always been a very conservative industry in terms of its myriad operations. Somewhat rightfully so as well. A major economic crisis such as the 2008 financial crisis or the great depression from the early half of the 20th century managed to destroy the industry and its participants. However, like most products of economic value, the real estate industry is resilient and this resilience is rooted in its conservative nature. - -The global real estate market comprises an asset class worth **$228 trillion dollars** [1]. Give or take. Other investment assets such as stocks, bonds, and shares combined are only worth **$170 trillion**. 
Obviously, any and all transactions implemented in such an industry are carefully planned and meticulously executed, for the most part. For the most part, because real estate is also notorious for numerous instances of fraud and the devastating losses which ensue. The industry, because of the very conservative nature of its operations, is also tough to navigate. It's heavily regulated, with complex laws creating an intertwined web of nuances that is just too difficult for an average person to understand fully. This makes entry and participation near impossible for most people. If you've ever been involved in one such deal, you'll know how heavy and long the paper trail is.
-
-This hard reality is now set to change, albeit through a slow and gradual transformation. The very reasons the industry has stuck to its tried and tested roots all this while can finally give way to its modern-day counterpart. The backbone of the real estate industry has always been its paper records. Land deeds, titles, agreements, rental insurance, proofs, declarations, etc., are just the tip of the iceberg here. If you've noticed the pattern, this should be obvious: the distributed ledger technology that is blockchain fits the needs here perfectly. Forget paper records; conventional database systems are also points of major failure. They can be modified by multiple participants, are not tamper-proof or un-hackable, and carry a complicated set of ever-changing regulatory parameters that make auditing and verifying data a nightmare. The blockchain solves all of these issues and more.
-
-Starting with a trivial albeit important example to show just how bad current record management practices are in the real estate sector, consider the **Title Insurance business** [2], [3]. Title insurance is used to hedge against the possibility of the land's titles and ownership records being inadmissible and hence unenforceable. An insurance product such as this is also referred to as an indemnity cover. It is required by law in many cases that properties have title insurance, especially when dealing with property that has changed hands multiple times over the years. Mortgage firms might insist on the same as well when they back real estate deals. The fact that a product of this kind has existed since the 1850s and that it does business worth at least **$1.5 trillion a year in the US alone** is a testament to the statement at the start. A revolution in how these records are maintained is imperative in this situation, and the blockchain provides a sustainable solution. Title fraud averages around $100k per case as per the **American Land Title Association**, and 25% of all titles involved in transactions have an issue regarding their documents[4]. The blockchain allows for setting up an immutable, permanent database that will track the property itself, recording each and every transaction or investment that has gone into it; a minimal sketch of such a record appears below. Such a ledger system will make life easier for everyone involved in the real estate industry, including one-time home buyers, and make financial products such as title insurance basically irrelevant. Converting a physical asset such as real estate to a digital asset like this is unconventional and exists only in theory at the moment. However, such a change is likely to arrive sooner rather than later[5].
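-
-To illustrate the idea, here is a minimal, hypothetical Python sketch of an append-only, hash-chained property record of the kind described above. It is a toy model for this discussion; real platforms add consensus across many nodes, digital signatures, and much more.
-
-```
-import hashlib
-import json
-
-def add_record(chain, record):
-    """Append a record whose hash covers the previous entry,
-    so silently editing history breaks the chain."""
-    prev_hash = chain[-1]["hash"] if chain else "0" * 64
-    entry = {"record": record, "prev_hash": prev_hash}
-    entry["hash"] = hashlib.sha256(
-        json.dumps(entry, sort_keys=True).encode()).hexdigest()
-    chain.append(entry)
-
-def verify(chain):
-    """Recompute every hash; any tampering is detected."""
-    for i, entry in enumerate(chain):
-        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
-        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
-        digest = hashlib.sha256(
-            json.dumps(body, sort_keys=True).encode()).hexdigest()
-        if entry["prev_hash"] != expected_prev or entry["hash"] != digest:
-            return False
-    return True
-
-title_chain = []
-add_record(title_chain, {"parcel": "12-A", "owner": "Seller Ltd."})
-add_record(title_chain, {"parcel": "12-A", "owner": "Buyer Inc."})
-print(verify(title_chain))                      # True
-title_chain[0]["record"]["owner"] = "Mallory"   # tamper with history
-print(verify(title_chain))                      # False
-```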
-
-Among the areas in which blockchain will have the most impact within real estate is, as highlighted above, maintaining a transparent and secure title management system for properties. A blockchain-based record of a property can contain information about the property, its location, its history of ownership, and any related public records[6]. This will permit closing real estate deals quickly and obviates the need for third-party monitoring and oversight. Tasks such as real estate appraisal and tax calculation become matters of tangible, objective parameters rather than subjective measures and guesses, thanks to reliable historical data which is publicly verifiable. **UBITQUITY** is one such platform that offers customized blockchain-based solutions to enterprise customers. The platform allows customers to keep track of all property details, payment records, and mortgage records, and even allows running smart contracts that'll take care of taxation and leasing automatically[7].
-
-This brings us to the second biggest opportunity and use case of blockchains in real estate. Since the sector is highly regulated by numerous third parties apart from the counterparties involved in the trade, due diligence and financial evaluations can be significantly time-consuming. These processes are predominantly carried out using offline channels, and paperwork needs to travel for days before a final evaluation report comes out. This is especially true for corporate real estate deals and forms the bulk of the total billable hours charged by consultants. In case the transaction is backed by a mortgage, duplication of these processes is unavoidable. Once combined with digital identities for the people and institutions involved along with the property, the current inefficiencies can be avoided altogether and transactions can take place in a matter of seconds. The tenants, investors, institutions involved, consultants, etc., could individually validate the data and arrive at a critical consensus, thereby validating the property records in perpetuity[8]. This increases the accuracy of verification manifold. Real estate giant **RE/MAX** has recently announced a partnership with service provider **XYO Network Partners** to build a national database of real estate listings in Mexico. They hope to one day create one of the largest (as yet) decentralized real estate title registries in the world[9].
-
-Another significant and arguably very democratic change that the blockchain can bring about concerns investing in real estate. Unlike other investment asset classes where even small household investors can potentially participate, real estate often requires a large down payment to participate. Companies such as **ATLANT** and **BitOfProperty** tokenize the book value of a property and convert it into equivalents of a cryptocurrency. These tokens are then put up for sale on their exchanges, similar to how stocks and shares are traded. Any cash flow that the property generates afterward is credited or debited to the token owners depending on their "share" in the property[4].
-
-However, even with all of that said, blockchain technology is still in the very early stages of adoption in the real estate sector, and current regulations are not exactly defined for it either[8]. Concepts such as distributed applications, distributed autonomous organizations, smart contracts, etc., are unheard of in the legal domain in many countries.
A complete overhaul of existing regulations and guidelines once all the stakeholders are well educated on the intricacies of the blockchain is the most pragmatic way forward. Again, it’ll be a slow and gradual change to go through, however a much-needed one nonetheless. The next article of the series will look at how **“Smart Contracts”** , such as those implemented by companies such as UBITQUITY and XYO are created and executed in the blockchain. - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/ - -作者:[EDITOR][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/editor/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/blockchain-2-0-redefining-financial-services/ diff --git a/sources/tech/20190329 How to manage your Linux environment.md b/sources/tech/20190329 How to manage your Linux environment.md deleted file mode 100644 index 74aab10896..0000000000 --- a/sources/tech/20190329 How to manage your Linux environment.md +++ /dev/null @@ -1,177 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (robsean) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to manage your Linux environment) -[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all) -[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) - -How to manage your Linux environment -====== - -### Linux user environments help you find the command you need and get a lot done without needing details about how the system is configured. Where the settings come from and how they can be modified is another matter. - -![IIP Photo Archive \(CC BY 2.0\)][1] - -The configuration of your user account on a Linux system simplifies your use of the system in a multitude of ways. You can run commands without knowing where they're located. You can reuse previously run commands without worrying how the system is keeping track of them. You can look at your email, view man pages, and get back to your home directory easily no matter where you might have wandered off to in the file system. And, when needed, you can tweak your account settings so that it works even more to your liking. - -Linux environment settings come from a series of files — some are system-wide (meaning they affect all user accounts) and some are configured in files that are sitting in your home directory. The system-wide settings take effect when you log in and local ones take effect right afterwards, so the changes that you make in your account will override system-wide settings. For bash users, these files include these system files: - -``` -/etc/environment -/etc/bash.bashrc -/etc/profile -``` - -And some of these local files: - -``` -~/.bashrc -~/.profile -- not read if ~/.bash_profile or ~/.bash_login -~/.bash_profile -~/.bash_login -``` - -You can modify any of the local four that exist, since they sit in your home directory and belong to you. - -**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]** - -### Viewing your Linux environment settings - -To view your environment settings, use the **env** command. 
Your output will likely look similar to this: - -``` -$ env -LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33; -01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32: -*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31: -*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31: -*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01; -31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31: -*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31: -*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31: -*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35: -*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35: -*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35: -*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35: -*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35: -*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35: -*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35: -*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36: -*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36: -*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.spf=00;36: -SSH_CONNECTION=192.168.0.21 34975 192.168.0.11 22 -LESSCLOSE=/usr/bin/lesspipe %s %s -LANG=en_US.UTF-8 -OLDPWD=/home/shs -XDG_SESSION_ID=2253 -USER=shs -PWD=/home/shs -HOME=/home/shs -SSH_CLIENT=192.168.0.21 34975 22 -XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop -SSH_TTY=/dev/pts/0 -MAIL=/var/mail/shs -TERM=xterm -SHELL=/bin/bash -SHLVL=1 -LOGNAME=shs -DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus -XDG_RUNTIME_DIR=/run/user/1000 -PATH=/home/shs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin -LESSOPEN=| /usr/bin/lesspipe %s -_=/usr/bin/env -``` - -While you're likely to get a _lot_ of output, the first big section shown above deals with the colors that are used on the command line to identify various file types. When you see something like ***.tar=01;31:** , this tells you that tar files will be displayed in a file listing in red, while ***.jpg=01;35:** tells you that jpg files will show up in purple. These colors are meant to make it easy to pick out certain files from a file listing. You can learn more about these colors are defined and how to customize them at [Customizing your colors on the Linux command line][3]. - -One easy way to turn colors off when you prefer a simpler display is to use a command such as this one: - -``` -$ ls -l --color=never -``` - -That command could easily be turned into an alias: - -``` -$ alias ll2='ls -l --color=never' -``` - -You can also display individual settings using the **echo** command. In this command, we display the number of commands that will be remembered in our history buffer: - -``` -$ echo $HISTSIZE -1000 -``` - -Your last location in the file system will be remembered if you've moved. 
-
-```
-PWD=/home/shs
-OLDPWD=/tmp
-```
-
-### Making changes
-
-You can make changes to environment settings with a command like this, but add a line such as "HISTSIZE=1234" to your ~/.bashrc file if you want to retain this setting.
-
-```
-$ export HISTSIZE=1234
-```
-
-### What it means to "export" a variable
-
-Exporting a variable makes the setting available to your shell and possible subshells. By default, user-defined variables are local and are not exported to new processes such as subshells and scripts. The **export** command makes variables available to child processes; a short sketch after the examples below illustrates this inheritance.
-
-### Adding and removing variables
-
-You can create new variables and make them available to you on the command line and in subshells quite easily. However, these variables will not survive your logging out and then back in again unless you also add them to ~/.bashrc or a similar file.
-
-```
-$ export MSG="Hello, World!"
-```
-
-You can unset a variable if you need to by using the **unset** command:
-
-```
-$ unset MSG
-```
-
-If the variable is defined locally, you can easily set it back up by sourcing your startup file(s). For example:
-
-```
-$ echo $MSG
-Hello, World!
-$ unset MSG
-$ echo $MSG
-
-$ . ~/.bashrc
-$ echo $MSG
-Hello, World!
-```
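-
-The same inheritance rule can be seen from any language that spawns child processes. Here is a small Python sketch that makes the point: a value placed in the process environment is visible to a child shell, while an ordinary variable is not.
-
-```
-import os
-import subprocess
-
-MSG = "local only"                     # ordinary Python variable: this process only
-os.environ["MSG"] = "Hello, World!"    # environment entry: inherited by children
-
-# The child shell sees the exported value.
-out = subprocess.run(["bash", "-c", "echo $MSG"],
-                     capture_output=True, text=True)
-print(out.stdout.strip())              # Hello, World!
-
-del os.environ["MSG"]                  # the equivalent of 'unset MSG'
-out = subprocess.run(["bash", "-c", "echo $MSG"],
-                     capture_output=True, text=True)
-print(repr(out.stdout.strip()))        # ''
-```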
-
-### Wrap-up
-
-User accounts are set up with an appropriate set of startup files for creating a useful user environment, but both individual users and sysadmins can change the default settings by editing their personal setup files (users) or the files from which many of the settings originate (sysadmins).
-
-Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
-
--------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all
-
-作者:[Sandra Henry-Stocker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/03/environment-rocks-leaves-100792229-large.jpg
-[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
-[3]: https://www.networkworld.com/article/3269587/customizing-your-text-colors-on-the-linux-command-line.html
-[4]: https://www.facebook.com/NetworkWorld/
-[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md b/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md
deleted file mode 100644
index 8efc47ae76..0000000000
--- a/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md
+++ /dev/null
@@ -1,126 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (tomjlw)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to build a mobile particulate matter sensor with a Raspberry Pi)
-[#]: via: (https://opensource.com/article/19/3/mobile-particulate-matter-sensor)
-[#]: author: (Stephan Tetzel https://opensource.com/users/stephan)
-
-How to build a mobile particulate matter sensor with a Raspberry Pi
-======
-
-Monitor your air quality with a Raspberry Pi, a cheap sensor, and an inexpensive display.
-
-![Team communication, chat][1]
-
-About a year ago, I wrote about [measuring air quality][2] using a Raspberry Pi and a cheap sensor. We've been using this project in our school and privately for a few years now. However, it has one disadvantage: It is not portable because it depends on a WLAN network or a wired network connection to work. You can't even access the sensor's measurements if the Raspberry Pi and the smartphone or computer are not on the same network.
-
-To overcome this limitation, we added a small screen to the Raspberry Pi so we can read the values directly from the device. Here's how we set up and configured a screen for our mobile fine particulate matter sensor.
-
-### Setting up the screen for the Raspberry Pi
-
-There is a wide range of Raspberry Pi displays available from [Amazon][3], AliExpress, and other sources. They range from ePaper screens to LCDs with touch function. We chose an inexpensive [3.5″ LCD][4] with touch and a resolution of 320×480 pixels that can be plugged directly into the Raspberry Pi's GPIO pins. It's also nice that a 3.5″ display is about the same size as a Raspberry Pi.
-
-The first time you turn on the screen and start the Raspberry Pi, the screen will remain white because the driver is missing. You have to install [the appropriate drivers][5] for the display first. Log in with SSH and execute the following commands:
-
-```
-$ rm -rf LCD-show
-$ git clone https://github.com/goodtft/LCD-show.git
-$ chmod -R 755 LCD-show
-$ cd LCD-show/
-```
-
-Execute the appropriate command for your screen to install the drivers. For example, this is the command for our model MPI3501 screen:
-
-```
-$ sudo ./LCD35-show
-```
-
-This command installs the appropriate drivers and restarts the Raspberry Pi.
-
-### Installing PIXEL desktop and setting up autostart
-
-Here is what we want our project to do: When the Raspberry Pi boots up, we want it to display a small website with our air quality measurements.
-
-First, install the Raspberry Pi's [PIXEL desktop environment][6]:
-
-```
-$ sudo apt install raspberrypi-ui-mods
-```
-
-Then install the Chromium browser to display the website:
-
-```
-$ sudo apt install chromium-browser
-```
-
-Autologin is required for the measured values to be displayed directly after startup; otherwise, you will just see the login screen. However, autologin is not configured for the "pi" user by default. You can configure autologin with the **raspi-config** tool:
-
-```
-$ sudo raspi-config
-```
-
-In the menu, select: **3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin**.
-
-There is still a step missing: starting Chromium with our website right after boot. Create the folder **/home/pi/.config/lxsession/LXDE-pi/**:
-
-```
-$ mkdir -p /home/pi/.config/lxsession/LXDE-pi/
-```
-
-Then create the **autostart** file in this folder:
-
-```
-$ nano /home/pi/.config/lxsession/LXDE-pi/autostart
-```
-
-and paste the following code:
-
-```
-#@unclutter
-@xset s off
-@xset -dpms
-@xset s noblank
-
-# Open Chromium in Full Screen Mode
-@chromium-browser --incognito --kiosk
-```
-
-If you want to hide the mouse pointer, you have to install the package **unclutter** and remove the comment character at the beginning of the **autostart** file:
-
-```
-$ sudo apt install unclutter
-```
-
-![Mobile particulate matter sensor][7]
-
-I've made a few small changes to the code in the last year. So, if you set up the air quality project before, make sure to re-download the script and files for the AQI website using the instructions in the [original article][2].
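-
-Since the kiosk starts at login, Chromium can come up before the local web server that serves the measurements is ready. The small, optional Python guard below waits for the server first; the port number and the idea of launching Chromium from a script are assumptions made for this sketch, so adapt them to your own setup:
-
-```
-import socket
-import subprocess
-import time
-
-HOST, PORT = "127.0.0.1", 8080   # assumed address of the AQI web app
-
-def wait_for_server(host, port, timeout=60):
-    """Return True once something is listening on host:port."""
-    deadline = time.time() + timeout
-    while time.time() < deadline:
-        try:
-            with socket.create_connection((host, port), timeout=2):
-                return True
-        except OSError:
-            time.sleep(1)
-    return False
-
-if wait_for_server(HOST, PORT):
-    subprocess.run(["chromium-browser", "--incognito", "--kiosk",
-                    f"http://{HOST}:{PORT}"])
-else:
-    print("AQI server did not come up in time")
-```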
-
-By adding the touch screen, you now have a mobile particulate matter sensor! We use it at our school to check the quality of the air in the classrooms or to do comparative measurements. With this setup, you are no longer dependent on a network connection or WLAN. You can use the small measuring station everywhere—you can even use it with a power bank to be independent of the power grid.
-
-* * *
-
-_This article originally appeared on [Open School Solutions][8] and is republished with permission._
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/mobile-particulate-matter-sensor
-
-作者:[Stephan Tetzel][a]
-选题:[lujun9972][b]
-译者:[tomjlw](https://github.com/tomjlw)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/stephan
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
-[2]: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi
-[3]: https://www.amazon.com/gp/search/ref=as_li_qf_sp_sr_tl?ie=UTF8&tag=openschoolsol-20&keywords=lcd%20raspberry&index=aps&camp=1789&creative=9325&linkCode=ur2&linkId=51d6d7676e10d6c7db203c4a8b3b529a
-[4]: https://amzn.to/2CcvgpC
-[5]: https://github.com/goodtft/LCD-show
-[6]: https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc
-[7]: https://opensource.com/sites/default/files/uploads/mobile-aqi-sensor.jpg (Mobile particulate matter sensor)
-[8]: https://openschoolsolutions.org/mobile-particulate-matter-sensor/
diff --git a/sources/tech/20190405 Blockchain 2.0 - Ongoing Projects (The State Of Smart Contracts Now) -Part 6.md b/sources/tech/20190405 Blockchain 2.0 - Ongoing Projects (The State Of Smart Contracts Now) -Part 6.md
new file mode 100644
index 0000000000..3674b73954
--- /dev/null
+++ b/sources/tech/20190405 Blockchain 2.0 - Ongoing Projects (The State Of Smart Contracts Now) -Part 6.md
@@ -0,0 +1,120 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Blockchain 2.0 – Ongoing Projects (The State Of Smart Contracts Now) [Part 6])
+[#]: via: (https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/)
+[#]: author: (editor https://www.ostechnix.com/author/editor/)
+
+Blockchain 2.0 – Ongoing Projects (The State Of Smart Contracts Now) [Part 6]
+======
+
+![The State Of Smart Contracts Now][1]
+
+Continuing from our [**earlier post on smart contracts**][2], this post aims to discuss the state of smart contracts and to highlight some of the projects and companies currently undertaking developments in the area. Smart contracts, as discussed in the previous article of the series, are programs that exist and execute themselves on a blockchain network. We explored how smart contracts work and why they are superior to traditional digital platforms. The companies described here operate in a wide variety of industries; however, most of them deal with identity management systems, financial services, crowdfunding systems, etc., as these are the areas thought to be most suitable for switching to blockchain-based database systems.
+
+### Open platforms
+
+Platforms such as **Counterparty** [1] and **Solidity (Ethereum)** are fully public building blocks for developers to create their own smart contracts. Widespread developer participation in such projects has allowed these to become de facto standards for developing smart contracts, designing cryptocurrency token systems, and creating protocols for the blockchains to function. Many commendable projects have derived from them. **Quorum**, by JP Morgan, derived from Ethereum, is one example; **Ripple** is another.
+
+### Managing financial transactions
+
+Transferring cryptocurrencies over the internet is touted to become the norm in the coming years. The shortfalls of doing so are:
+
+  * Identities and wallet addresses are anonymous. The payer has no first recourse if the receiver does not honor the transaction.
+  * Erroneous transactions, if any, cannot be traced.
+  * Cryptographically generated hash keys are difficult for humans to work with, and human error is a prime concern.
+
+Having someone else take in the transaction momentarily and settle it with the receiver after due diligence is preferred in such cases.
+
+**EscrowMyEther** [3] and **PAYFAIR** [4] are two such escrow platforms. Basically, the escrow company takes the agreed-upon amount and sends a token to the receiver. Once the receiver delivers what the payer wants via the same escrow platform, both confirm and the final payment is released. These are used extensively by freelancers and hobbyist collectors online.
+
+### Financial services
+
+Developments in micro-financing and micro-insurance projects will improve the banking infrastructure for much of the world's poor and unbanked. Involving the poorer, "unbanked" sections of society is estimated to increase revenues for the banks and institutions involved by **$380 billion** [5]. This amount exceeds the savings in operational expenses that banks can expect from switching to blockchain DLT.
+
+**BankQu Inc.**, based in the Midwest of the United States, goes by the slogan "Dignity through identity". Their platform allows individuals to set up their own digital identity record, where all their transactions are vetted and processed in real time on the blockchain. Over time, the underlying code records and builds a unique online identity for its users, allowing for ultra-quick transactions and settlements. Case studies exploring how BankQu is helping individuals and companies this way are available [here][3].
+
+**Stratumn** is helping insurance companies offer better insurance services by automating tasks which were earlier micromanaged by humans. Through automation, end-to-end traceability, and efficient data privacy methods, they have radically changed how insurance claims are settled. Improved customer experience along with significant cost reductions presents a win-win situation for clients as well as the firms involved[6].
+
+A similar endeavor is currently being run on a trial basis by the French insurance firm **AXA**. Its product _**"fizzy"**_ allows users to subscribe to the service for a small fee and enter their flight details. If the flight gets delayed or runs into some other issue, the program automatically scours online databases, checks the insurance terms, and credits the insurance amount to the user's account. This eliminates the need for the user or the customer to file a claim after checking the terms manually and, in the long run once such systems become mainstream, will increase accountability from airlines[7][8]. A minimal sketch of this kind of parametric logic follows below.
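+
+The following Python sketch illustrates the flight-delay idea in miniature. It is not AXA's implementation: the delay threshold and payout used here are invented for this illustration, and a real deployment would read the delay from a trusted flight-status feed (an "oracle") rather than a hard-coded number.
+
+```
+from dataclasses import dataclass
+
+@dataclass
+class Policy:
+    flight: str
+    payout: float        # credited automatically on delay
+    threshold_min: int   # delay (minutes) that triggers the payout
+
+def settle(policy, observed_delay_min):
+    """Parametric insurance: no claim form, no adjuster.
+    The payout condition is evaluated mechanically."""
+    if observed_delay_min >= policy.threshold_min:
+        return f"credit {policy.payout} for flight {policy.flight}"
+    return "no payout: delay below threshold"
+
+policy = Policy(flight="AF123", payout=100.0, threshold_min=120)
+print(settle(policy, observed_delay_min=150))
+```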
+
+### Keeping track of ownership rights
+
+It is theoretically possible to track media from creation to end-user consumption utilizing timestamped blocks of data in a DLT. The companies **Peertracks** and **Mycelia** are currently helping musicians publish content without worrying about it being stolen or misused. They help artists sell directly to fans and clients while getting paid for their work, without having to go through rights agencies and record labels[9].
+
+### Identity management platforms
+
+Blockchain-based identity management platforms store your identity on a distributed ledger. Once an account is set up, it is securely encrypted and then sent to all the participating nodes. However, as the owner of the data block, only the user has access to the data. Once your identity is established on the network and you begin transacting, an automated program within the network will verify all the previous transactions associated with your account, send it for regulatory filings after checking the requirements, and execute the settlement automatically, provided the program deems the transaction legitimate. The upside here is that, since the data on the blockchain is tamper-proof and the smart contract checks the input with zero bias (or subjectivity), the transaction doesn't, as previously mentioned, require oversight or approval from anyone and is taken care of instantaneously.
+
+Start-ups like **ShoCard**, **Credits**, and **OneName** are rolling out similar services and are currently in talks with governments and social institutions about integrating them into mainstream use.
+
+Independent projects by developers like **Chris Ellis** and **David Duccini** have developed or proposed alternative identity management systems such as **"[World Citizenship][4]"** and **[IDCoin][5]**, respectively. Mr. Ellis even demonstrated the capabilities of his work by creating passports on a blockchain network[10][11][12][5].
+
+### Resource sharing
+
+**Share & Charge (Slock.It)** is a European blockchain start-up. Their mobile app allows homeowners and other individuals who have invested their money in setting up a charging station to share their resource with other individuals who are looking for a quick charge. This not only lets owners earn back some of their investment, but also gives EV drivers access to significantly more charging points in their nearby geographical area, allowing suppliers to meet demand in a convenient manner. Once a "customer" is done charging their vehicle, the associated hardware creates a secure, timestamped block consisting of the session data, and a smart contract working on the platform automatically credits the corresponding amount of money to the owner's account; a minimal sketch of such a settlement record appears below. All such transactions are recorded, and proper security verifications are kept in place. Interested readers can take a look [here][6] to learn about the technical angle behind their product[13][14]. The company's platform will gradually enable users to share other products and services with individuals in need and earn a passive income from them.
+
+The companies we've looked at here comprise a very short list of ongoing projects that make use of smart contracts and blockchain database systems.
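+
+Here is a small Python sketch of the charging-session settlement described above. The session fields, the per-kWh tariff, and the hash "seal" are simplifications invented for this illustration; the actual Share & Charge data model is described at the link above.
+
+```
+import hashlib
+import json
+import time
+
+RATE_PER_KWH = 0.30   # assumed tariff, for the sketch only
+
+def settle_session(station_owner, driver, kwh):
+    """Build a timestamped settlement record and compute the
+    amount credited to the station owner."""
+    record = {
+        "owner": station_owner,
+        "driver": driver,
+        "kwh": kwh,
+        "amount": round(kwh * RATE_PER_KWH, 2),
+        "timestamp": int(time.time()),
+    }
+    # The hash seals the record: any later edit becomes detectable.
+    record["seal"] = hashlib.sha256(
+        json.dumps(record, sort_keys=True).encode()).hexdigest()
+    return record
+
+print(settle_session("station-42", "driver-7", kwh=18.5))
+```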
+
+Platforms such as these help in building a secure "box" full of information that is accessed only by the users themselves and the overlying code or smart contract. Based on a trigger, the information is vetted and examined in real time, and the algorithm is executed by the system. Such platforms run with minimal human oversight, a much-needed step in the right direction with respect to secure digital automation, something which has never been attempted at this scale previously.
+
+The next post will shed some light on the **different types of blockchains**. Click the following link to know more about this topic.
+
+  * [**Blockchain 2.0 – Public Vs Private Blockchain Comparison**][7]
+
+**References:**
+
+  * **[1] [About | Counterparty][8]**
+  * **[2] [Quorum | J.P. Morgan][9]**
+  * **[3] [Escrow My Ether][10]**
+  * **[4] [PayFair][11]**
+  * **[5] B. Pani, "Blockchain Powered Financial Inclusion," 2016.**
+  * **[6] [STRATUMN | Insurance Claim Automation Across Europe][12]**
+  * **[7] [fizzy][13]**
+  * **[8] [AXA goes blockchain with fizzy | AXA][14]**
+  * **[9] M. Gates, "Blockchain: Ultimate Guide to Understanding Blockchain, Bitcoin, Cryptocurrencies, Smart Contracts and the Future of Money," 2017.**
+  * **[10] [ShoCard Is A Digital Identity Card On The Blockchain | TechCrunch][15]**
+  * **[11] [J. Biggs, "Your Next Passport Could Be On The Blockchain" | TechCrunch][16]**
+  * **[12] [OneName – Namecoin Wiki][17]**
+  * **[13] [Share&Charge launches its app, on-boards over 1,000 charging stations on the blockchain][18]**
+  * **[14] [slock.it – Landing][19]**
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
+
+作者:[editor][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/editor/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/State-Of-Smart-Contracts-720x340.png
+[2]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
+[3]: https://banqu.co/case-study/
+[4]: https://github.com/MrChrisJ/World-Citizenship
+[5]: https://github.com/IDCoin/IDCoin
+[6]: https://blog.slock.it/share-charge-smart-contracts-the-technical-angle-58b93ce80f15
+[7]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
+[8]: https://counterparty.io/platform/
+[9]: https://www.jpmorgan.com/global/Quorum
+[10]: http://escrowmyether.com/
+[11]: https://payfair.io/
+[12]: https://stratumn.com/business-case/insurance-claim-automation-across-europe/
+[13]: https://fizzy.axa/en-gb/
+[14]: https://group.axa.com/en/newsroom/news/axa-goes-blockchain-with-fizzy
+[15]: https://techcrunch.com/2015/05/05/shocard-is-a-digital-identity-card-on-the-blockchain/
+[16]: https://techcrunch.com/2014/10/31/your-next-passport-could-be-on-the-blockchain/
+[17]: https://wiki.namecoin.org/index.php?title=OneName
+[18]: https://blog.slock.it/share-charge-launches-its-app-on-boards-over-1-000-charging-stations-on-the-blockchain-ba8275390309
+[19]: https://slock.it/
diff --git a/sources/tech/20190409 5 open source mobile apps.md b/sources/tech/20190409 5 open source mobile apps.md
deleted file mode 100644
index 679c1a92fc..0000000000
--- a/sources/tech/20190409 5 open source mobile apps.md
+++ /dev/null
@@ -1,131 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (fuzheng1998 )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (5 open source mobile apps)
-[#]: via: (https://opensource.com/article/19/4/mobile-apps)
-[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen)
-
-5 open source mobile apps
-======
-You can count on these apps to meet your needs for productivity,
-communication, and entertainment.
-![][1]
-
-Like most people in the world, I'm rarely further than an arm's reach from my smartphone. My Android device provides a seemingly limitless number of communication, productivity, and entertainment services thanks to the open source mobile apps I've installed from Google Play and F-Droid.
-
-Of the many open source apps on my phone, the following five are the ones I consistently turn to whether I want to listen to music; connect with friends, family, and colleagues; or get work done on the go.
-
-### MPDroid
-
-_An Android controller for the Music Player Daemon (MPD)_
-
-![MPDroid][2]
-
-MPD is a great way to get music from little music server computers out to the big black stereo boxes. It talks straight to ALSA and therefore to the Digital-to-Analog Converter ([DAC][3]) via the ALSA hardware interface, and it can be controlled over my network—but by what? Well, it turns out that MPDroid is a great MPD controller. It manages my music database, displays album art, handles playlists, and supports internet radio. And it's open source, so if something doesn't work…
-
-MPDroid is available on [Google Play][4] and [F-Droid][5].
-
-### RadioDroid
-
-_An Android internet radio tuner that I use standalone and with Chromecast_
-
-![RadioDroid][6]
-
-RadioDroid is to internet radio as MPDroid is to managing my music database; essentially, RadioDroid is a frontend to [Internet-Radio.com][7]. Moreover, RadioDroid can be enjoyed by plugging headphones into the Android device, by connecting the Android device directly to the stereo via the headphone jack or USB, or by using its Chromecast capability with a compatible device. It's a fine way to check the weather in Finland, listen to the Spanish top 40, or hear the latest news from down under.
-
-RadioDroid is available on [Google Play][8] and [F-Droid][9].
-
-### Signal
-
-_A secure messaging client for Android, iOS, and desktop_
-
-![Signal][10]
-
-If you like WhatsApp but are bothered by its [getting-closer-every-day][11] relationship to Facebook, Signal should be your next thing. The only problem with Signal is convincing your contacts they're better off replacing WhatsApp with Signal. But other than that, it has a similar interface; great voice and video calling; great encryption; decent anonymity; and it's supported by a foundation that doesn't plan to monetize your use of the software. What's not to like?
-
-Signal is available for [Android][12], [iOS][13], and [desktop][14].
-
-### ConnectBot
-
-_Android SSH client_
-
-![ConnectBot][15]
-
-Sometimes I'm far away from my computer, but I need to log into the server to do something. [ConnectBot][16] is a great solution for moving SSH sessions onto my phone.
-
-ConnectBot is available on [Google Play][17].
-
-### Termux
-
-_Android terminal emulator with many familiar utilities_
-
-![Termux][18]
-
-Have you ever needed to run an **awk** script on your phone? [Termux][19] is your solution.
If you need to do terminal-type stuff, and you don't want to maintain an SSH connection to a remote computer the whole time, bring the files over to your phone with ConnectBot, quit the session, do your stuff in Termux, and send the results back with ConnectBot. - -Termux is available on [Google Play][20] and [F-Droid][21]. - -* * * - -What are your favorite open source mobile apps for work or fun? Please share them in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/4/mobile-apps - -作者:[Chris Hermansen (Community Moderator)][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 -[2]: https://opensource.com/sites/default/files/uploads/mpdroid.jpg (MPDroid) -[3]: https://opensource.com/article/17/4/fun-new-gadget -[4]: https://play.google.com/store/apps/details?id=com.namelessdev.mpdroid&hl=en_US -[5]: https://f-droid.org/en/packages/com.namelessdev.mpdroid/ -[6]: https://opensource.com/sites/default/files/uploads/radiodroid.png (RadioDroid) -[7]: https://www.internet-radio.com/ -[8]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2 -[9]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/ -[10]: https://opensource.com/sites/default/files/uploads/signal.png (Signal) -[11]: https://opensource.com/article/19/3/open-messenger-client -[12]: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms -[13]: https://itunes.apple.com/us/app/signal-private-messenger/id874139669?mt=8 -[14]: https://signal.org/download/ -[15]: https://opensource.com/sites/default/files/uploads/connectbot.png (ConnectBot) -[16]: https://connectbot.org/ -[17]: https://play.google.com/store/apps/details?id=org.connectbot -[18]: https://opensource.com/sites/default/files/uploads/termux.jpg (Termux) -[19]: https://termux.com/ -[20]: https://play.google.com/store/apps/details?id=com.termux -[21]: https://f-droid.org/packages/com.termux/ diff --git a/sources/tech/20190411 Be your own certificate authority.md b/sources/tech/20190411 Be your own certificate authority.md deleted file mode 100644 index f6ea26aba4..0000000000 --- a/sources/tech/20190411 Be your own certificate authority.md +++ /dev/null @@ -1,135 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Be your own certificate authority) -[#]: via: (https://opensource.com/article/19/4/certificate-authority) -[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123) - -Be your own certificate authority -====== -Create a simple, internal CA for your microservice architecture or -integration testing. -![][1] - -The Transport Layer Security ([TLS][2]) model, which is sometimes referred to by the older name SSL, is based on the concept of [certificate authorities][3] (CAs). These authorities are trusted by browsers and operating systems and, in turn, _sign_ servers' certificates to validate their ownership. 
However, for an intranet, a microservice architecture, or integration testing, it is sometimes useful to have a _local CA_: one that is trusted only internally and, in turn, signs local servers' certificates.
-
-This especially makes sense for integration tests. Getting certificates can be a burden because the servers will only be up for minutes. But having an "ignore certificate" option in the code could allow it to be activated in production, leading to a security catastrophe.
-
-A CA certificate is not much different from a regular server certificate; what matters is that it is trusted by local code. For example, in the **requests** library, this can be done by setting the **REQUESTS_CA_BUNDLE** variable to a directory containing this certificate.
-
-In the example of creating a certificate for integration tests, there is no need for a _long-lived_ certificate: if your integration tests take more than a day, you have already failed.
-
-So, calculate **yesterday** and **tomorrow** as the validity interval:
-
-```
->>> import datetime
->>> one_day = datetime.timedelta(days=1)
->>> today = datetime.datetime.today()
->>> yesterday = today - one_day
->>> tomorrow = today + one_day
-```
-
-Now you are ready to create a simple CA certificate. You need to generate a private key, create a public key, set up the "parameters" of the CA, and then self-sign the certificate: a CA certificate is _always_ self-signed. Finally, write out both the certificate file as well as the private key file.
-
-```
-from cryptography.hazmat.backends import default_backend
-from cryptography.hazmat.primitives.asymmetric import rsa
-from cryptography.hazmat.primitives import hashes, serialization
-from cryptography import x509
-from cryptography.x509.oid import NameOID
-
-private_key = rsa.generate_private_key(
-    public_exponent=65537,
-    key_size=2048,
-    backend=default_backend()
-)
-public_key = private_key.public_key()
-builder = x509.CertificateBuilder()
-# A CA certificate is self-signed: subject and issuer are the same name.
-builder = builder.subject_name(x509.Name([
-    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
-]))
-builder = builder.issuer_name(x509.Name([
-    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
-]))
-builder = builder.not_valid_before(yesterday)
-builder = builder.not_valid_after(tomorrow)
-builder = builder.serial_number(x509.random_serial_number())
-builder = builder.public_key(public_key)
-builder = builder.add_extension(
-    x509.BasicConstraints(ca=True, path_length=None),
-    critical=True)
-certificate = builder.sign(
-    private_key=private_key, algorithm=hashes.SHA256(),
-    backend=default_backend()
-)
-private_bytes = private_key.private_bytes(
-    encoding=serialization.Encoding.PEM,
-    format=serialization.PrivateFormat.TraditionalOpenSSL,
-    encryption_algorithm=serialization.NoEncryption())
-public_bytes = certificate.public_bytes(
-    encoding=serialization.Encoding.PEM)
-with open("ca.pem", "wb") as fout:
-    fout.write(private_bytes + public_bytes)
-with open("ca.crt", "wb") as fout:
-    fout.write(public_bytes)
-```
-
-In general, a real CA will expect a [certificate signing request][4] (CSR) to sign a certificate. However, when you are your own CA, you can make your own rules! Just go ahead and sign what you want.
-
-Continuing with the integration test example, you can create the private keys and sign the corresponding public keys right then. Notice **COMMON_NAME** needs to be the "server name" in the **https** URL. If you've configured name lookup, the needed server will respond on **service.test.local**.
-
-```
-service_private_key = rsa.generate_private_key(
-    public_exponent=65537,
-    key_size=2048,
-    backend=default_backend()
-)
-service_public_key = service_private_key.public_key()
-builder = x509.CertificateBuilder()
-builder = builder.subject_name(x509.Name([
-    x509.NameAttribute(NameOID.COMMON_NAME, 'service.test.local')
-]))
-# The issuer is the CA created above, which also signs this certificate.
-builder = builder.issuer_name(x509.Name([
-    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
-]))
-builder = builder.not_valid_before(yesterday)
-builder = builder.not_valid_after(tomorrow)
-builder = builder.serial_number(x509.random_serial_number())
-builder = builder.public_key(service_public_key)
-certificate = builder.sign(
-    private_key=private_key, algorithm=hashes.SHA256(),
-    backend=default_backend()
-)
-private_bytes = service_private_key.private_bytes(
-    encoding=serialization.Encoding.PEM,
-    format=serialization.PrivateFormat.TraditionalOpenSSL,
-    encryption_algorithm=serialization.NoEncryption())
-public_bytes = certificate.public_bytes(
-    encoding=serialization.Encoding.PEM)
-with open("service.pem", "wb") as fout:
-    fout.write(private_bytes + public_bytes)
-```
-
-Now the **service.pem** file has a private key and a certificate that is "valid": it has been signed by your local CA. The file is in a format that can be given to, say, Nginx, HAProxy, or most other HTTPS servers.
-
-By applying this logic to testing scripts, it's easy to create servers that look like authentic HTTPS servers, as long as the client is configured to trust the right CA.
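-
-As a quick check of the result, a client can be pointed at the local CA bundle. The sketch below assumes an HTTPS server is already running at service.test.local using the **service.pem** file from above:
-
-```
-import requests
-
-# verify= accepts the path to a CA bundle; here, our own local CA.
-response = requests.get("https://service.test.local/", verify="ca.crt")
-print(response.status_code)
-```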
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/4/certificate-authority
-
-作者:[Moshe Zadka (Community Moderator)][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/moshez/users/elenajon123
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj
-[2]: https://en.wikipedia.org/wiki/Transport_Layer_Security
-[3]: https://en.wikipedia.org/wiki/Certificate_authority
-[4]: https://en.wikipedia.org/wiki/Certificate_signing_request
diff --git a/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md b/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md
deleted file mode 100644
index 89f942ce66..0000000000
--- a/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md
+++ /dev/null
@@ -1,166 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Monitoring CPU and GPU Temperatures on Linux)
-[#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/)
-[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
-
-Monitoring CPU and GPU Temperatures on Linux
-======
-
-_**Brief: This article discusses two simple ways of monitoring CPU and GPU temperatures on the Linux command line.**_
-
-Because of **[Steam][1]** (including _[Steam Play][2]_, aka _Proton_) and other developments, **GNU/Linux** is becoming the gaming platform of choice for more and more computer users every day. A good number of users are also going for **GNU/Linux** when it comes to other resource-consuming computing tasks such as [video editing][3] or graphic design (_Kdenlive_ and _[Blender][4]_ are good examples of programs for these).
-
-Whether you are one of those users or otherwise, you are bound to have wondered how hot your computer's CPU and GPU can get (even more so if you do overclocking). If that is the case, keep reading. We will be looking at a couple of very simple commands to monitor CPU and GPU temps.
-
-My setup includes a [Slimbook Kymera][5] and two displays (a TV set and a PC monitor), which allows me to use one for playing games and the other to keep an eye on the temperatures. Also, since I use [Zorin OS][6], I will be focusing on **Ubuntu** and **Ubuntu** derivatives.
-
-To monitor the behaviour of both CPU and GPU, we will be making use of the handy `watch` command to get dynamic readings every set number of seconds.
-
-![][7]
-
-### Monitoring CPU Temperature in Linux
-
-For CPU temps, we will combine `watch` with the `sensors` command. An interesting article about a [GUI version of this tool has already been covered on It's FOSS][8]. However, we will use the terminal version here:
-
-```
-watch -n 2 sensors
-```
-
-`watch` guarantees that the readings will be updated every 2 seconds (and this value can, of course, be changed to whatever best fits your needs):
-
-```
-Every 2,0s: sensors
-
-iwlwifi-virtual-0
-Adapter: Virtual device
-temp1: +39.0°C
-
-acpitz-virtual-0
-Adapter: Virtual device
-temp1: +27.8°C (crit = +119.0°C)
-temp2: +29.8°C (crit = +119.0°C)
-
-coretemp-isa-0000
-Adapter: ISA adapter
-Package id 0: +37.0°C (high = +82.0°C, crit = +100.0°C)
-Core 0: +35.0°C (high = +82.0°C, crit = +100.0°C)
-Core 1: +35.0°C (high = +82.0°C, crit = +100.0°C)
-Core 2: +33.0°C (high = +82.0°C, crit = +100.0°C)
-Core 3: +36.0°C (high = +82.0°C, crit = +100.0°C)
-Core 4: +37.0°C (high = +82.0°C, crit = +100.0°C)
-Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C)
-```
-
-Amongst other things, we get the following information:
-
-  * We have six cores in use at the moment (with the current highest temperature being 37.0ºC).
-  * Values higher than 82.0ºC are considered high.
-  * A value over 100.0ºC is deemed critical.
-
-The values above lead us to the conclusion that the computer's workload is very light at the moment.
-
-### Monitoring GPU Temperature in Linux
-
-Let us turn to the graphics card now. I have never used an **AMD** dedicated graphics card, so I will be focusing on **Nvidia** ones. The first thing to do is download the appropriate, current driver through [additional drivers in Ubuntu][10].
-
-On **Ubuntu** (and its forks such as **Zorin** or **Linux Mint**), going to _Software & Updates_ > _Additional Drivers_ and selecting the most recent one normally suffices. Additionally, you can add/enable the official _ppa_ for graphics cards (either through the command line or via _Software & Updates_ > _Other Software_). After installing the driver you will have at your disposal the _Nvidia X Server_ GUI application along with the command line utility _nvidia-smi_ (Nvidia System Management Interface). So we will use `watch` and `nvidia-smi`:
-
-```
-watch -n 2 nvidia-smi
-```
-
-And, the same as for the CPU, we will get updated readings every two seconds:
-
-```
-Every 2,0s: nvidia-smi
-
-Fri Apr 19 20:45:30 2019
-+-----------------------------------------------------------------------------+
-| Nvidia-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 |
-|-------------------------------+----------------------+----------------------+
-| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
-| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
-|===============================+======================+======================|
-| 0 GeForce GTX 106... Off | 00000000:01:00.0 On | N/A |
-| 0% 54C P8 10W / 120W | 433MiB / 6077MiB | 4% Default |
-+-------------------------------+----------------------+----------------------+
-
-+-----------------------------------------------------------------------------+
-| Processes: GPU Memory |
-| GPU PID Type Process name Usage |
-|=============================================================================|
-| 0 1557 G /usr/lib/xorg/Xorg 190MiB |
-| 0 1820 G /usr/bin/gnome-shell 174MiB |
-| 0 7820 G ...equest-channel-token=303407235874180773 65MiB |
-+-----------------------------------------------------------------------------+
-```
-
-The chart gives the following information about the graphics card:
-
-  * it is using driver version 418.56.
-  * the current temperature of the card is 54.0ºC, with the fan at 0% of its capacity.
-  * the power consumption is very low: only 10W out of a possible 120W.
-  * out of 6 GB of VRAM (video random access memory), it is only using 433 MB.
-  * the used VRAM is being taken up by three processes whose IDs are, respectively, 1557, 1820, and 7820.
-
-Most of these facts/values clearly show that we are not playing any resource-consuming games or dealing with heavy workloads. Should we start playing a game, processing a video, or the like, the values would start to go up.
-
-#### Conclusion
-
-Although there are GUI tools, I find these two commands very handy for checking on your hardware in real time.
-
-What do you make of them? You can learn more about the utilities involved by reading their man pages.
-
-Do you have other preferences? Share them with us in the comments. ;)
-
-Halof!!! (Have a lot of fun!!!).
-
-![avatar][12]
-
-### Alejandro Egea-Abellán
-
-It's FOSS Community Contributor
-
-I developed a liking for electronics, linguistics, herpetology and computers (particularly GNU/Linux and FOSS). I am LPIC-2 certified and currently work as a technical consultant and Moodle administrator in the Department for Lifelong Learning at the Ministry of Education in Murcia, Spain. I am a firm believer in lifelong learning, the sharing of knowledge and computer-user freedom.
-
--------------------------------------------------------------------------------
-
-via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/
-
-作者:[It's FOSS Community][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/itsfoss/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/install-steam-ubuntu-linux/
-[2]: https://itsfoss.com/steam-play-proton/
-[3]: https://itsfoss.com/best-video-editing-software-linux/
-[4]: https://www.blender.org/
-[5]: https://slimbook.es/
-[6]: https://zorinos.com/
-[7]: https://itsfoss.com/wp-content/uploads/2019/04/monitor-cpu-gpu-temperature-linux-800x450.png
-[8]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/
-[9]: https://itsfoss.com/best-command-line-games-linux/
-[10]: https://itsfoss.com/install-additional-drivers-ubuntu/
-[11]: https://itsfoss.com/review-googler-linux/
-[12]: https://itsfoss.com/wp-content/uploads/2019/04/EGEA-ABELLAN-Alejandro.jpg
diff --git a/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md b/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md
deleted file mode 100644
index b7a2707cfe..0000000000
--- a/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md
+++ /dev/null
@@ -1,116 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide])
-[#]: via: (https://itsfoss.com/install-budgie-ubuntu/)
-[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
-
-Installing Budgie Desktop on Ubuntu [Quick Guide]
-======
-
-_**Brief: Learn how to install Budgie desktop on Ubuntu in this step-by-step tutorial.**_
-
-Among all the [various Ubuntu versions][1], [Ubuntu Budgie][2] is the most underrated one. It looks elegant and it's not heavy on resources.
-
-Read this [Ubuntu Budgie review][3] or simply watch this video to see what Ubuntu Budgie 18.04 looks like.
-
-[Subscribe to our YouTube channel for more Linux Videos][4]
-
-If you like [Budgie desktop][5] but you are using some other version of Ubuntu, such as the default Ubuntu with GNOME desktop, I have good news for you. You can install Budgie on your current Ubuntu system and switch the desktop environments.
-
-In this post, I'm going to tell you exactly how to do that. But first, a little introduction to Budgie for those who are unaware of it.
-
-The Budgie desktop environment is developed mainly by the [Solus Linux team][6]. It is designed with a focus on elegance and modern usage. Budgie is available for all major Linux distributions for users to try and experience this new desktop environment. Budgie is pretty mature by now and provides a great desktop experience.
-
-Warning
-
-Installing multiple desktops on the same system MAY result in conflicts, and you may see some issues like missing icons in the panel or multiple icons of the same program.
-
-Then again, you may not see any issues at all. It's your call if you want to try a different desktop.
-
-### Install Budgie on Ubuntu
-
-This method is not tested on Linux Mint, so I recommend that you not follow this guide for Mint.
-
-For those on Ubuntu, Budgie is now a part of the Ubuntu repositories by default. Hence, we don't need to add any PPAs in order to get Budgie.
-
-To install Budgie, simply run this command in terminal.
We'll first make sure that the system is fully updated.
-
-```
-sudo apt update && sudo apt upgrade
-sudo apt install ubuntu-budgie-desktop
-```
-
-When everything is done downloading, you will get a prompt to choose your display manager. Select 'lightdm' to get the full Budgie experience.
-
-![Select lightdm][7]
-
-After the installation is complete, reboot your computer. You will then be greeted by the Budgie login screen. Enter your password to reach the home screen.
-
-![Budgie Desktop Home][8]
-
-### Switching to other desktop environments
-
-![Budgie login screen][9]
-
-You can click the Budgie icon next to your name to get options for login. From there you can select between the installed Desktop Environments (DEs). In my case, I see Budgie and the default Ubuntu (GNOME) DEs.
-
-![Select your DE][10]
-
-So whenever you feel like logging into GNOME, you can do so using this menu.
-
-### How to Remove Budgie
-
-If you don't like Budgie or just want to go back to your regular old Ubuntu, you can switch back to your regular desktop as described in the above section.
-
-However, if you really want to remove Budgie and its components, you can run the following commands to get back to a clean slate.
-
-_**Switch to some other desktop environment before using these commands:**_
-
-```
-sudo apt remove ubuntu-budgie-desktop ubuntu-budgie* lightdm
-sudo apt autoremove
-sudo apt install --reinstall gdm3
-```
-
-After running all the commands successfully, reboot your computer.
-
-Now, you will be back to GNOME or whichever desktop environment you had.
-
-**What do you think of Budgie?**
-
-Budgie is one of the [best desktop environments for Linux][12]. Hope this short guide helped you install the awesome Budgie desktop on your Ubuntu system.
-
-If you did install Budgie, what do you like about it the most? Let us know in the comments below. And as usual, any questions or suggestions are always welcome.
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/install-budgie-ubuntu/ - -作者:[Atharva Lele][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/atharva/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/which-ubuntu-install/ -[2]: https://ubuntubudgie.org/ -[3]: https://itsfoss.com/ubuntu-budgie-18-review/ -[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 -[5]: https://github.com/solus-project/budgie-desktop -[6]: https://getsol.us/home/ -[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_select_dm.png?fit=800%2C559&ssl=1 -[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_homescreen.jpg?fit=800%2C500&ssl=1 -[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen.png?fit=800%2C403&ssl=1 -[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen_select_de.png?fit=800%2C403&ssl=1 -[11]: https://itsfoss.com/snapd-error-ubuntu/ -[12]: https://itsfoss.com/best-linux-desktop-environments/ diff --git a/sources/tech/20190509 5 essential values for the DevOps mindset.md b/sources/tech/20190509 5 essential values for the DevOps mindset.md index 4746d2ffaa..e9dafbd673 100644 --- a/sources/tech/20190509 5 essential values for the DevOps mindset.md +++ b/sources/tech/20190509 5 essential values for the DevOps mindset.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (arrowfeng) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) diff --git a/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md b/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md deleted file mode 100644 index c14652a7f4..0000000000 --- a/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md +++ /dev/null @@ -1,168 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Using Testinfra with Ansible to verify server state) -[#]: via: (https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state) -[#]: author: (Clement Verna https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert) - -Using Testinfra with Ansible to verify server state -====== -Testinfra is a powerful library for writing tests to verify an -infrastructure's state. Coupled with Ansible and Nagios, it offers a -simple solution to enforce infrastructure as code. -![Terminal command prompt on orange background][1] - -By design, [Ansible][2] expresses the desired state of a machine to ensure that the content of an Ansible playbook or role is deployed to the targeted machines. But what if you need to make sure all the infrastructure changes are in Ansible? Or verify the state of a server at any time? - -[Testinfra][3] is an infrastructure testing framework that makes it easy to write unit tests to verify the state of a server. It is a Python library and uses the powerful [pytest][4] test engine. - -### Getting started with Testinfra - -Testinfra can be easily installed using the Python package manager (pip) and a Python virtual environment. 
-
-```
-$ python3 -m venv venv
-$ source venv/bin/activate
-(venv) $ pip install testinfra
-```
-
-Testinfra is also available in the package repositories of Fedora and CentOS using the EPEL repository. For example, on CentOS 7 you can install it with the following commands:
-
-```
-$ yum install -y epel-release
-$ yum install -y python-testinfra
-```
-
-#### A simple test script
-
-Writing tests in Testinfra is easy. Using the code editor of your choice, add the following to a file named **test_simple.py**:
-
-```
-import testinfra
-
-def test_os_release(host):
-    assert host.file("/etc/os-release").contains("Fedora")
-
-def test_sshd_inactive(host):
-    assert host.service("sshd").is_running is False
-```
-
-By default, Testinfra provides a host object to the test case; this object gives access to different helper modules. For example, the first test uses the **file** module to verify the content of the file on the host, and the second test case uses the **service** module to check the state of a systemd service.
-
-To run these tests on your local machine, execute the following command:
-
-```
-(venv)$ pytest test_simple.py
-================================ test session starts ================================
-platform linux -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
-rootdir: /home/cverna/Documents/Python/testinfra
-plugins: testinfra-3.0.0
-collected 2 items
-test_simple.py ..
-
-================================ 2 passed in 0.05 seconds ================================
-```
-
-For a full list of Testinfra's APIs, you can consult the [documentation][5].
-
-### Testinfra and Ansible
-
-One of Testinfra's supported backends is Ansible, which means Testinfra can directly use Ansible's inventory file and a group of machines defined in the inventory to run tests against them.
-
-Let's use the following inventory file as an example:
-
-```
-[web]
-app-frontend01
-app-frontend02
-
-[database]
-db-backend01
-```
-
-We want to make sure that our Apache web server service is running on **app-frontend01** and **app-frontend02**. Let's write the test in a file called **test_web.py** (note that pytest only collects functions whose names start with `test_`):
-
-```
-def test_httpd_service(host):
-    """Check that the httpd service is running on the host"""
-    assert host.service("httpd").is_running
-```
-
-To run this test using Testinfra and Ansible, use the following command:
-
-```
-(venv) $ pip install ansible
-(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible test_web.py
-```
-
-When invoking the tests, we use the Ansible inventory **[web]** group as the targeted machines and also specify that we want to use Ansible as the connection backend.
-
-#### Using the Ansible module
-
-Testinfra also provides a nice API to Ansible that can be used in the tests. The Ansible module enables access to run Ansible plays inside a test and makes it easy to inspect the result of the play.
-
-```
-def test_ansible_play(host):
-    """
-    Verify that a package is installed using Ansible's
-    package module
-    """
-    assert not host.ansible("package", "name=httpd state=present")["changed"]
-```
-
-By default, Ansible's [Check Mode][6] is enabled, which means that Ansible will report what would change if the play were executed on the remote host.
-
-### Testinfra and Nagios
-
-Now that we can easily run tests to validate the state of a machine, we can use those tests to trigger alerts on a monitoring system. This is a great way to catch unexpected changes.
-
-Testinfra offers an integration with [Nagios][7], a popular monitoring solution.
By default, Nagios uses the [NRPE][8] plugin to execute checks on remote hosts, but using Testinfra allows you to run the tests directly from the Nagios master.
-
-To get a Testinfra output compatible with Nagios, we have to use the **\--nagios** flag when triggering the test. We also use the **-qq** pytest flag to enable pytest's **quiet** mode so all the test details will not be displayed.
-
-```
-(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible --nagios -qq test_web.py
-TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.55 seconds
-```
-
-Testinfra is a powerful library for writing tests to verify an infrastructure's state. Coupled with Ansible and Nagios, it offers a simple solution to enforce infrastructure as code. It is also a key component of adding testing during the development of your Ansible roles using [Molecule][9].
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state
-
-作者:[Clement Verna][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
-[2]: https://www.ansible.com/
-[3]: https://testinfra.readthedocs.io/en/latest/
-[4]: https://pytest.org/
-[5]: https://testinfra.readthedocs.io/en/latest/modules.html#modules
-[6]: https://docs.ansible.com/ansible/playbooks_checkmode.html
-[7]: https://www.nagios.org/
-[8]: https://en.wikipedia.org/wiki/Nagios#NRPE
-[9]: https://github.com/ansible/molecule
diff --git a/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md b/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md
new file mode 100644
index 0000000000..d25298583c
--- /dev/null
+++ b/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md
@@ -0,0 +1,219 @@
+[#]: collector: (lujun9972)
+[#]: translator: (arrowfeng)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Disable IPv6 on Ubuntu Linux)
+[#]: via: (https://itsfoss.com/disable-ipv6-ubuntu-linux/)
+[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
+
+How to Disable IPv6 on Ubuntu Linux
+======
+
+Are you looking for a way to **disable IPv6** connections on your Ubuntu machine? In this article, I'll teach you exactly how to do it and why you would consider this option. I'll also show you how to **enable or re-enable IPv6** in case you change your mind.
+
+### What is IPv6 and why would you want to disable IPv6 on Ubuntu?
+
+[Internet Protocol version 6 (IPv6)][1] is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. It was developed in 1998 to replace the **IPv4** protocol.
+
+**IPv6** aims to improve security and performance, while also making sure we don't run out of addresses. It assigns unique addresses globally to every device, storing them in **128 bits**, compared to just 32 bits used by IPv4.
+
+![Disable IPv6 Ubuntu][2]
+
+Although the goal is for IPv4 to be replaced by IPv6, there is still a long way to go. Less than **30%** of the sites on the Internet make IPv6 connectivity available to users (tracked by Google [here][3]). IPv6 can also cause [problems with some applications at times][4].
+
+Since **VPNs** provide global services, the fact that IPv6 uses globally routed addresses (uniquely assigned) and that there (still) are ISPs that don't offer IPv6 support shifts this feature lower down their priority list. This way, they can focus on what matters the most for VPN users: security.
+
+Another possible reason you might want to disable IPv6 on your system is not wanting to expose yourself to various threats. Although IPv6 itself is safer than IPv4, the risks I am referring to are of another nature. If you aren't actively using IPv6 and its features, [having IPv6 enabled leaves you vulnerable to various attacks][5], offering the hacker another possible exploitable tool.
+
+On the same note, configuring basic network rules is not enough. You have to pay the same level of attention to tweaking your IPv6 configuration as you do for IPv4. This can prove to be quite a hassle to do (and also to maintain). With IPv6 comes a suite of problems different to those of IPv4 (many of which can be referenced online, given the age of this protocol), giving your system another layer of complexity.
+
+### Disabling IPv6 on Ubuntu [For Advanced Users Only]
+
+In this section, I'll be covering how you can disable the IPv6 protocol on your Ubuntu machine. Open up a terminal (**default:** CTRL+ALT+T) and let's get to it!
+
+**Note:** _For most of the commands you are going to input in the terminal, you are going to need root privileges (**sudo**)._
+
+Warning!
+
+If you are a regular desktop Linux user and prefer a stable working system, please avoid this tutorial. This is for advanced users who know what they are doing and why they are doing so.
+
+#### 1\. Disable IPv6 using Sysctl
+
+First of all, you can **check** if you have IPv6 enabled with:
+
+```
+ip a
+```
+
+You should see an IPv6 address if it is enabled (the name of your network interface might be different):
+
+![IPv6 Address Ubuntu][7]
+
+You have seen the sysctl command in the tutorial about [restarting the network in Ubuntu][8]. We are going to use it here as well. To **disable IPv6** you only have to input 3 commands:
+
+```
+sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
+sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
+sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1
+```
+
+You can check if it worked using:
+
+```
+ip a
+```
+
+You should see no IPv6 entry:
+
+![IPv6 Disabled Ubuntu][9]
+
+However, this only **temporarily disables IPv6**. The next time your system boots, IPv6 will be enabled again.
+
+One method to make this option persist is modifying **/etc/sysctl.conf**. I'll be using vim to edit the file, but you can use any editor you like.
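+
+For example, to open it with vim (any editor works, as long as you run it with root privileges):
+
+```
+sudo vim /etc/sysctl.conf
+```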
+Make sure you have **administrator rights** (hence the **sudo** in the command above):
+
+![Sysctl Configuration][10]
+
+Add the following lines to the file:
+
+```
+net.ipv6.conf.all.disable_ipv6=1
+net.ipv6.conf.default.disable_ipv6=1
+net.ipv6.conf.lo.disable_ipv6=1
+```
+
+For the settings to take effect use:
+
+```
+sudo sysctl -p
+```
+
+If IPv6 is still enabled after rebooting, you must create (with root privileges) the file **/etc/rc.local** and fill it with:
+
+```
+#!/bin/bash
+# /etc/rc.local
+
+/etc/init.d/procps restart
+
+exit 0
+```
+
+Now use the [chmod command][11] to make the file executable:
+
+```
+sudo chmod 755 /etc/rc.local
+```
+
+What this will do is re-apply (during boot time) the kernel parameters from your sysctl configuration file.
+
+#### 2\. Disable IPv6 using GRUB
+
+An alternative method is to configure **GRUB** to pass kernel parameters at boot time. You'll have to edit **/etc/default/grub**. Once again, make sure you have administrator privileges:
+
+![GRUB Configuration][13]
+
+Now you need to modify **GRUB_CMDLINE_LINUX_DEFAULT** and **GRUB_CMDLINE_LINUX** to disable IPv6 on boot:
+
+```
+GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ipv6.disable=1"
+GRUB_CMDLINE_LINUX="ipv6.disable=1"
+```
+
+Save the file and run:
+
+```
+sudo update-grub
+```
+
+The settings should now persist on reboot.
+
+### Re-enabling IPv6 on Ubuntu
+
+To re-enable IPv6, you'll have to undo the changes you made. To enable IPv6 until reboot, enter:
+
+```
+sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
+sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
+sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0
+```
+
+Otherwise, if you modified **/etc/sysctl.conf** you can either remove the lines you added or change them to:
+
+```
+net.ipv6.conf.all.disable_ipv6=0
+net.ipv6.conf.default.disable_ipv6=0
+net.ipv6.conf.lo.disable_ipv6=0
+```
+
+You can optionally reload these values:
+
+```
+sudo sysctl -p
+```
+
+You should once again see an IPv6 address:
+
+![IPv6 Reenabled in Ubuntu][14]
+
+Optionally, you can remove **/etc/rc.local**:
+
+```
+sudo rm /etc/rc.local
+```
+
+If you modified the kernel parameters in **/etc/default/grub**, go ahead and delete the added options:
+
+```
+GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
+GRUB_CMDLINE_LINUX=""
+```
+
+Now do:
+
+```
+sudo update-grub
+```
+
+**Wrapping Up**
+
+In this guide, I showed you the ways in which you can **disable IPv6** on Linux, as well as giving you an idea of what IPv6 is and why you might want to disable it.
+
+Did you find this article useful? Do you disable IPv6 connectivity? Let us know in the comment section!
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/disable-ipv6-ubuntu-linux/
+
+作者:[Sergiu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://itsfoss.com/author/sergiu/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/IPv6
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/disable_ipv6_ubuntu.png?fit=800%2C450&ssl=1
+[3]: https://www.google.com/intl/en/ipv6/statistics.html
+[4]: https://whatismyipaddress.com/ipv6-issues
+[5]: https://www.internetsociety.org/blog/2015/01/ipv6-security-myth-1-im-not-running-ipv6-so-i-dont-have-to-worry/
+[6]: https://itsfoss.com/remove-drive-icons-from-unity-launcher-in-ubuntu/
+[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu.png?fit=800%2C517&ssl=1
+[8]: https://itsfoss.com/restart-network-ubuntu/
+[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_disabled_ubuntu.png?fit=800%2C442&ssl=1
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/sysctl_configuration.jpg?fit=800%2C554&ssl=1
+[11]: https://linuxhandbook.com/chmod-command/
+[12]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/grub_configuration-1.jpg?fit=800%2C565&ssl=1
+[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu-1.png?fit=800%2C517&ssl=1
diff --git a/sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md b/sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md
new file mode 100644
index 0000000000..8fab8bfcae
--- /dev/null
+++ b/sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Convert Markdown files to word processor docs using pandoc)
+[#]: via: (https://opensource.com/article/19/5/convert-markdown-to-word-pandoc)
+[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt/users/jason-van-gumster/users/kikofernandez)
+
+Convert Markdown files to word processor docs using pandoc
+======
+Living that plaintext life? Here's how to create the word processor
+documents people ask for without having to work in a word processor
+yourself.
+![][1]
+
+If you live your life in [plaintext][2], there invariably comes a time when someone asks for a word processor document. I run into this issue frequently, especially at the Day Job™. Although I've introduced one of the development teams I work with to a [Docs Like Code][3] workflow for writing and reviewing release notes, there are a small number of people who have no interest in GitHub or working with [Markdown][4]. They prefer documents formatted for a certain proprietary application.
+
+The good news is that you're not stuck copying and pasting unformatted text into a word processor document. Using **[pandoc][5]**, you can quickly give people what they want. Let's take a look at how to convert a document from Markdown to a word processor format in [Linux][6] using **pandoc**.
+
+Note that **pandoc** is also available for a wide variety of operating systems, ranging from two flavors of BSD ([NetBSD][7] and [FreeBSD][8]) to Chrome OS, MacOS, and Windows.
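+
+On most Linux distributions, **pandoc** is also in the system package repositories; for instance, on a Debian or Ubuntu system (package name taken from the distro repos), installing it is usually a one-liner:
+
+```
+sudo apt install pandoc
+```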
+ +### Converting basics + +To begin, [install **pandoc**][9] on your computer. Then, crack open a console terminal window and navigate to the directory containing the file that you want to convert. + +Type this command to create an ODT file (which you can open with a word processor like [LibreOffice Writer][10] or [AbiWord][11]): + +**pandoc -t odt filename.md -o filename.odt** + +Remember to replace **filename** with the file's actual name. And if you need to create a file for that other word processor (you know the one I mean), replace **odt** on the command line with **docx**. Here's what this article looks like when converted to an ODT file: + +![Basic conversion results with pandoc.][12] + +These results are serviceable, but a bit bland. Let's look at how to add a bit more style to the converted documents. + +### Converting with style + +**pandoc** has a nifty feature enabling you to specify a style template when converting a marked-up plaintext file to a word processor format. In this file, you can edit a small number of styles in the document, including those that control the look of paragraphs, headings, captions, titles and subtitles, a basic table, and hyperlinks. + +Let's look at the possibilities. + +#### Creating a template + +In order to style your documents, you can't just use _any_ template. You need to generate what **pandoc** calls a _reference_ template, which is the template it uses when converting text files to word processor documents. To create this file, type the following in a terminal window: + +**pandoc -o custom-reference.odt --print-default-data-file reference.odt** + +This command creates a file called **custom-reference.odt**. If you're using that other word processor, change the references to **odt** on the command line to **docx**. + +Open the template file in LibreOffice Writer, and then press **F11** to open LibreOffice Writer's **Styles** pane. Although the [pandoc manual][13] advises against making other changes to the file, I change the page size and add headers and footers when necessary. + +#### Using the template + +So, how do you use that template you just created? There are two ways to do this. + +The easiest way is to drop the template in your **/home** directory's **.pandoc** folder—you might have to create the folder first if it doesn't exist. When it's time to convert a document, **pandoc** uses this template file. See the next section on how to choose from multiple templates if you need more than one. + +The other way to use your template is to type this set of conversion options at the command line: + +**pandoc -t odt file-name.md --reference-doc=path-to-your-file/reference.odt -o file-name.odt** + +If you're wondering what a converted file looks like with a customized template, here's an example: + +![A document converted using a pandoc style template.][14] + +#### Choosing from multiple templates + +Many people only need one **pandoc** template. Some people, however, need more than one. + +At my day job, for example, I use several templates—one with a DRAFT watermark, one with a watermark stating FOR INTERNAL USE, and one for a document's final versions. Each type of document needs a different template. + +If you have similar needs, start the same way you do for a single template, by creating the file **custom-reference.odt**. Rename the resulting file—for example, to **custom-reference-draft.odt** —then open it in LibreOffice Writer and modify the styles. Repeat this process for each template you need. 
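+
+As a quick sketch of that loop (the file names are just examples; the watermarks themselves are added by editing each copy in LibreOffice Writer):
+
+```
+pandoc -o custom-reference.odt --print-default-data-file reference.odt
+cp custom-reference.odt custom-reference-draft.odt      # then add the DRAFT watermark in Writer
+cp custom-reference.odt custom-reference-internal.odt   # then add the FOR INTERNAL USE watermark
+```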
+ +Next, copy the files into your **/home** directory. You can even put them in the **.pandoc** folder if you want to. + +To select a specific template at conversion time, you'll need to run this command in a terminal: + +**pandoc -t odt file-name.md --reference-doc=path-to-your-file/custom-template.odt -o file-name.odt** + +Change **custom-template.odt** to your template file's name. + +### Wrapping up + +To avoid having to remember a set of options I don't regularly use, I cobbled together some simple, very lame one-line scripts that encapsulate the options for each template. For example, I run the script **todraft.sh** to create a word processor document using the template with a DRAFT watermark. You might want to do the same. + +Here's an example of a script using the template containing a DRAFT watermark: + +`pandoc -t odt $1.md -o $1.odt --reference-doc=~/Documents/pandoc-templates/custom-reference-draft.odt` + +Using **pandoc** is a great way to provide documents in the format that people ask for, without having to give up the command line life. This tool doesn't just work with Markdown, either. What I've discussed in this article also allows you to create and convert documents between a wide variety of markup languages. See the **pandoc** site linked earlier for more details. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/convert-markdown-to-word-pandoc + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt/users/jason-van-gumster/users/kikofernandez +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb +[2]: https://plaintextproject.online/ +[3]: https://www.docslikecode.com/ +[4]: https://en.wikipedia.org/wiki/Markdown +[5]: https://pandoc.org/ +[6]: /resources/linux +[7]: https://www.netbsd.org/ +[8]: https://www.freebsd.org/ +[9]: https://pandoc.org/installing.html +[10]: https://www.libreoffice.org/discover/writer/ +[11]: https://www.abisource.com/ +[12]: https://opensource.com/sites/default/files/uploads/pandoc-wp-basic-conversion_600_0.png (Basic conversion results with pandoc.) +[13]: https://pandoc.org/MANUAL.html +[14]: https://opensource.com/sites/default/files/uploads/pandoc-wp-conversion-with-tpl_600.png (A document converted using a pandoc style template.) diff --git a/sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md b/sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md new file mode 100644 index 0000000000..38c11508bf --- /dev/null +++ b/sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Damn! Antergos Linux has been Discontinued) +[#]: via: (https://itsfoss.com/antergos-linux-discontinued/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Damn! Antergos Linux has been Discontinued +====== + +_**Beginner-friendly Arch Linux based distribution Antergos has announced that the project is being discontinued.**_ + +Arch Linux has always been considered a no-go zone for the beginners. 
Antergos challenged this status quo and made Arch Linux accessible to everyone by providing an easier installation method. People who wouldn't dare [install Arch Linux][1] opted for Antergos.
+
+![Antergos provided easy access to Arch with its easy to use GUI tools][2]
+
+The project began in 2012-13 and started gaining popularity around 2014. I used Antergos, liked it, covered it here on It's FOSS and perhaps (slightly) contributed to its popularity. In the last five years, Antergos was downloaded close to a million times.
+
+But for the past year or so, I felt that the project was stagnating. Antergos hardly made any news. Neither the forum nor the social media handles were active. The community around Antergos grew thinner, though a few dedicated users still remain.
+
+### The end of the Antergos Linux project
+
+![][3]
+
+On May 21, 2019, Antergos [announced][4] its discontinuation. Lack of free time was cited as the main reason behind this decision.
+
+> Today, we are announcing the end of this project. As many of you probably noticed over the past several months, we no longer have enough free time to properly maintain Antergos. We came to this decision because we believe that continuing to neglect the project would be a huge disservice to the community.
+>
+> Antergos Team
+
+Antergos developers also mentioned that since the project's code still works, it's an opportunity for interested developers to take what they find useful and start their own projects.
+
+#### What happens to existing Antergos users?
+
+If you are an Antergos user, you don't have to worry a lot. It's not that your system will be unusable from today. Your system will continue to get updates directly from Arch Linux.
+
+The Antergos team plans to release an update to remove the Antergos repositories from your system, along with any Antergos-specific packages that no longer serve a purpose as the project is ending. After that, any packages installed from the Antergos repo that are in the AUR will begin to receive updates from the [AUR][5].
+
+The Antergos forum and wiki will remain functional, but only for some time.
+
+If you think using an 'unmaintained' project is not a good idea, you should switch your distribution. The most appropriate choice would be [Manjaro Linux][7].
+
+Manjaro Linux started around the same time as Antergos. Both Antergos and Manjaro were sort of competitors, as both of them tried to make Arch Linux accessible for everyone.
+
+Manjaro gained a huge userbase in the last few years and its community is thriving. If you want to remain in the Arch domain but don't want to install Arch Linux itself, Manjaro is the best choice for you.
+
+Just note that Manjaro Linux doesn't provide all the updates as immediately as Arch or Antergos. It is a rolling release, but with stability in mind. So the updates are tested first.
+
+#### Inevitable fate for smaller distributions?
+
+_Here's my opinion on the discontinuation of Antergos and other similar open source projects._
+
+Antergos was a niche distribution. It had a smaller but dedicated userbase. The developers cited lack of free time as the main reason for their decision. However, I believe that lack of motivation plays a bigger role in such cases.
+
+What motivates the people behind a project? They start it mostly as a side project and if the project is good, they start gaining users. This growth of the userbase drives their motivation to work on the project.
+
+If the userbase starts declining or stagnates, the motivation takes a hit.
+
+If the userbase keeps on growing, the motivation increases, but only to a certain point. More users require more effort in various tasks around the project. Maintaining the wiki and forum along with social media is itself a challenge, leaving aside the actual code development. The situation becomes overwhelming.
+
+When a project grows to a considerable size, project owners have two choices. The first choice is to form a community of volunteers and start delegating tasks that can be delegated. Having volunteers dedicated to a project is not easy, but it can surely be achieved, as Debian and Manjaro have already done.
+
+The second choice is to create some revenue generation channel around the project. The additional revenue may 'justify' those extra hours and, in some cases, it could drive the developer to work full time on the project. [elementary OS][9] is trying to achieve something similar by developing an ecosystem of 'payable apps' in their software center.
+
+You may argue that money should not be a factor in Free and Open Source Software culture, but the unfortunate truth is that money is always a factor, in every aspect of our lives. I am not saying that a project should be purely driven by money, but a project must be sustainable in every aspect.
+
+We have seen how other smaller but moderately popular Linux distributions like Korora have been discontinued due to lack of free time. [Solus creator Ikey Doherty had to leave the project][10] to focus on his personal life. Developing and maintaining a successful open source project is not an easy task.
+
+That's just my opinion. Please feel free to disagree with it and voice your opinion in the comment section.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/antergos-linux-discontinued/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/install-arch-linux/
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2015/08/Installing_Antergos_Linux_7.png?ssl=1
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/antergos-linux-dead.jpg?resize=800%2C450&ssl=1
+[4]: https://antergos.com/blog/antergos-linux-project-ends/
+[5]: https://itsfoss.com/best-aur-helpers/
+[6]: https://itsfoss.com/peppermint-8-released/
+[7]: https://manjaro.org/
+[8]: https://itsfoss.com/linux-lite-4/
+[9]: https://elementary.io/
+[10]: https://itsfoss.com/ikey-leaves-solus/
diff --git a/sources/tech/20190523 Hardware bootstrapping with Ansible.md b/sources/tech/20190523 Hardware bootstrapping with Ansible.md
new file mode 100644
index 0000000000..94842453cc
--- /dev/null
+++ b/sources/tech/20190523 Hardware bootstrapping with Ansible.md
@@ -0,0 +1,223 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Hardware bootstrapping with Ansible)
+[#]: via: (https://opensource.com/article/19/5/hardware-bootstrapping-ansible)
+[#]: author: (Mark Phillips https://opensource.com/users/markp/users/feeble/users/markp)
+
+Hardware bootstrapping with Ansible
+======
+
+![computer servers processing data][1]
+
+At a recent [Ansible London Meetup][2], I got chatting with somebody about automated hardware builds. _"It's all cloud now!"_ I hear you say. Ah, but for many large organisations it's not: they still have massive data centres full of hardware. Fairly regularly, somebody pops up on our internal mail list and asks, _"Can Ansible do hardware provisioning?"_ Well yes, you can provision hardware with Ansible…
+
+### Requirements
+
+Bootstrapping hardware is mostly about network services. Before we do any operating system (OS) installing, then, we must set up some services. We will need:
+
+  * DHCP
+  * PXE
+  * TFTP
+  * Operating system media
+  * Web server
+
+### Setup
+
+Besides the DHCP configuration, everything else in this article is handled by the Ansible plays included in [this repository][3].
+
+#### DHCP server
+
+I'm writing here on the assumption you can control your DHCP configuration. If you don't have access to your DHCP server, you'll need to ask the owner to set two options. DHCP option 67 needs to be set to **pxelinux.0**, and **next-server** (which is option 66, but you may not need to know that; often a DHCP server will have a field/option for 'next server') needs to be set to the IP address of your TFTP server.
+
+If you do own the DHCP server, I'd suggest using dnsmasq. It's small and simple. I will not cover configuring it here, but look at [the man page][4] and the **\--enable-tftp** option.
+
+#### TFTP
+
+The **next-server** setting for our DHCP server, above, will point to a machine serving [TFTP][5]. Here I've used a [CentOS Linux][6] virtual machine, as it only takes one package (syslinux-tftpboot) and a service to start to have TFTP up and running. We'll stick with the default path, **/var/lib/tftpboot**.
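+
+If you'd rather replicate that step by hand than run the repository's plays, a minimal sketch looks like this (CentOS 7 package and systemd unit names assumed; the plays remain the source of truth):
+
+```
+yum install -y syslinux-tftpboot tftp-server
+systemctl enable --now tftp.socket   # serves files from /var/lib/tftpboot
+```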
+ +#### PXE + +If you're not already familiar with PXE, you might like to take a quick look at [the Wikipedia page][7]. For this article I'll keep it short—we will serve some files over TFTP, which DHCP guides our hardware to. + +You'll want **images/pxeboot/{initrd.img,vmlinuz}** from the OS distribution media for pxeboot. These need to be copied to **/var/lib/tftpboot/pxeboot**. The referenced Ansible plays **do not do this step, **so you need to copy them over yourself. + +We'll also need to serve the OS installation files. There are two approaches to this: 1) install, via HTTP, from the internet or 2) install, again via HTTP, from a local server. For my testing, since I'm on a private LAN (and I guess you are too), the fastest installation method is the second. The easiest way to prepare this is to mount the DVD image and rsync the `images`, **`Packages` **and `repodata` directories to your webserver location. The referenced Ansible plays will install **httpd** but won't copy over these files, so don't forget to do that after running [the play][8]. For this article, we'll once again stick with defaults for simplicity—so files need to be copied to Apache's standard docroot, **/var/www/html**. + +#### Directories + +We should end up with directory structures like this: + +##### PXE/TFTP + + +``` +[root@c7 ~]# tree /var/lib/tftpboot/pxe{b*,l*cfg} +/var/lib/tftpboot/pxeboot +└── 6 +├── initrd.img +└── vmlinuz +``` + +##### httpd + + +``` +[root@c7 ~]# tree -d /var/www/html/ +/var/www/html/ +├── 6 -> centos/6 +├── 7 -> centos/7 +├── centos +│ ├── 6 +│ │ └── os +│ │ └── x86_64 +│ │ ├── images +│ │ │ └── pxeboot +│ │ ├── Packages +│ │ └── repodata +│ └── 7 +│ └── os +│ └── x86_64 +│ ├── images +│ │ └── pxeboot +│ ├── Packages +│ └── repodata +└── ks +``` + +You'll notice my web setup appears a little less simple than the words above! I've pasted my actual structure to give you some ideas. The hardware I'm using is really old, and even getting CentOS 7 to work was horrible (if you're interested, it's due to the lack of [cciss][9] drivers for the HP Smart Array controller—yes, [there is an answer][10], but it takes a lot of faffing to make work), so all examples are of CentOS 6. I also wanted a flexible setup that could install many versions. Here I've done that using symlinks—this arrangement will work just fine for RHEL too, for example. The basic structure is present though—note the images, Packages and repodata directories. + +These paths relate directly to [the PXE menu][11] file we'll serve up and [the kickstart file][12] too. + +#### If you don't have DHCP + +If you can't manage your own DHCP server or the owners of your infrastructure can't help, there is another option. In the past, I've used [iPXE][13] to create a boot image that I've loaded as virtual media. A lot of out-of-band/lights-out-management (LOM) interfaces on modern hardware support this functionality. You can make a custom embedded PXE menu in seconds with iPXE. I won't cover that here, but if it turns out to be a problem for you, then drop me a line [on Twitter][14] and I'll look at doing a follow-up blog post if enough people request it. + +### Installing hardware + +We've got our structure in place now, and we can [kickstart][15] a server. Before we do, we have to add some configuration to the TFTP setup to enable a given piece of hardware to pick up the PXE boot menu. + +It's here we come across a small chicken/egg problem. 
We need a host's MAC address to create a link to the specific piece of hardware we want to kickstart. If the hardware is already running and we can access it with Ansible, that's great—we have a way of finding out the boot interface MAC address via the setup module (see [the reinstall play][16]). If it's a new piece of tin, however, we need to get the MAC address and tell our setup what to do with it. This probably means some manual intervention—booting the server and looking at a screen or maybe getting the MAC from a manifest or such like. Whichever way you get hold of it, we can tell our play about it via the inventory. + +Let's put a custom variable into our simple INI format [inventory file][17], but run a play to set up TFTP… + + +``` +(pip)iMac:ansible-hw-bootstrap$ ansible-inventory --host hp.box +{ +"ilo_ip": "192.168.1.68", +"ilo_password": "administrator" +} +(pip)iMac:ansible-hw-bootstrap$ ansible-playbook plays/install.yml + +PLAY [kickstart] ******************************************************************************************************* + +TASK [Host inventory entry has a MAC address] ************************************************************************** +failed: [ks.box] (item=hp.box) => { +"assertion": "hostvars[item]['mac'] is defined", +"changed": false, +"evaluated_to": false, +"item": "hp.box", +"msg": "Assertion failed" +} + +PLAY RECAP ************************************************************************************************************* +ks.box : ok=0 changed=0 unreachable=0 failed=1 +``` + +Uh oh, play failed. It [contains a check][18] that the host we're about to install actually has a MAC address added. Let's fix that and run the play again… + + +``` +(pip)iMac:ansible-hw-bootstrap$ ansible-inventory --host hp.box +{ +"ilo_ip": "192.168.1.68", +"ilo_password": "administrator", +"mac": "00:AA:BB:CC:DD:EE" +} +(pip)iMac:ansible-hw-bootstrap$ ansible-playbook plays/install.yml + +PLAY [kickstart] ******************************************************************************************************* + +TASK [Host inventory entry has a MAC address] ************************************************************************** +ok: [ks.box] => (item=hp.box) => { +"changed": false, +"item": "hp.box", +"msg": "All assertions passed" +} + +TASK [Set PXE menu to install] ***************************************************************************************** +ok: [ks.box] => (item=hp.box) + +TASK [Reboot target host for PXE boot] ********************************************************************************* +skipping: [ks.box] => (item=hp.box) + +PLAY RECAP ************************************************************************************************************* +ks.box : ok=2 changed=0 unreachable=0 failed=0 +``` + +That worked! What did it do? Looking at the pxelinux.cfg directory under our TFTP root, we can see a symlink… + + +``` +[root@c7 pxelinux.cfg]# pwd +/var/lib/tftpboot/pxelinux.cfg +[root@c7 pxelinux.cfg]# l +total 12 +drwxr-xr-x. 2 root root 65 May 13 14:23 ./ +drwxr-xr-x. 4 root root 4096 May 2 22:13 ../ +-r--r--r--. 1 root root 515 May 2 12:22 00README +lrwxrwxrwx. 1 root root 7 May 13 14:12 01-00-aa-bb-cc-dd-ee -> install +-rw-r--r--. 1 root root 682 May 2 22:07 install +``` + +The **install** file is symlinked to a file named after our MAC address. This is the key, useful piece. It will ensure our hardware with MAC address **00-aa-bb-cc-dd-ee** is served a PXE menu when it boots from its network card. + +So let's boot our machine. 
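+
+As a rough sketch, that boot-from-network nudge can even be tried ad hoc from the command line (a hypothetical example reusing the ilo_ip and ilo_password values from the inventory above, and assuming an 'administrator' iLO login):
+
+```
+ansible localhost -m hpilo_boot -a "host=192.168.1.68 login=administrator password=administrator media=network"
+```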
+ +Usefully, Ansible has some [remote management modules][19]. We're working with an HP server here, so we can use the [hpilo_boot][20] module to save us from having to interact directly with the LOM web interface. + +Let's run the reinstall play on a booted server… + +The neat thing about the **hpilo_boot** module, you'll notice, is it sets the boot medium to be the network. When the installation completes, the server restarts and boots from its hard drive. The eagle-eyed amongst you will have spotted the critical problem with this—what happens if the server boots to its network card again? It will pick up the PXE menu and promptly reinstall itself. I would suggest removing the symlink as a "belt and braces" step then. I will leave that as an exercise for you, dear reader. Hint: I would make the new server do a 'phone home' on boot, to somewhere, which runs a clean-up job. Since you wouldn't need the console open, as I had here to demonstrate what's going on in the background, a 'phone home' job would also give a nice indication that the process completed. Ansible, [naturally][21]. Good luck! + +If you've any thoughts or comments on this process, please let me know. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/hardware-bootstrapping-ansible + +作者:[Mark Phillips][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/markp/users/feeble/users/markp +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data) +[2]: https://www.meetup.com/Ansible-London/ +[3]: https://github.com/phips/ansible-hw-bootstrap +[4]: http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html +[5]: https://en.m.wikipedia.org/wiki/Trivial_File_Transfer_Protocol +[6]: https://www.centos.org +[7]: https://en.m.wikipedia.org/wiki/Preboot_Execution_Environment +[8]: https://github.com/phips/ansible-hw-bootstrap/blob/master/plays/kickstart.yml +[9]: https://linux.die.net/man/4/cciss +[10]: https://serverfault.com/questions/611182/centos-7-x64-and-hp-proliant-dl360-g5-scsi-controller-compatibility +[11]: https://github.com/phips/ansible-hw-bootstrap/blob/master/roles/kickstart/templates/pxe_install.j2#L10 +[12]: https://github.com/phips/ansible-hw-bootstrap/blob/master/roles/kickstart/templates/local6.ks.j2#L3 +[13]: https://ipxe.org +[14]: https://twitter.com/thismarkp +[15]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ch-kickstart2 +[16]: https://github.com/phips/ansible-hw-bootstrap/blob/master/plays/reinstall.yml +[17]: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html +[18]: https://github.com/phips/ansible-hw-bootstrap/blob/master/plays/install.yml#L9 +[19]: https://docs.ansible.com/ansible/latest/modules/list_of_remote_management_modules.html +[20]: https://docs.ansible.com/ansible/latest/modules/hpilo_boot_module.html#hpilo-boot-module +[21]: https://github.com/phips/ansible-demos/tree/master/roles/phone_home diff --git a/sources/tech/20190523 Run your blog on GitHub Pages with Python.md b/sources/tech/20190523 Run your blog on GitHub Pages with Python.md new file mode 100644 index 0000000000..4763e5e215 --- /dev/null +++ b/sources/tech/20190523 Run 
your blog on GitHub Pages with Python.md
@@ -0,0 +1,235 @@
+[#]: collector: (lujun9972)
+[#]: translator: (QiaoN)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Run your blog on GitHub Pages with Python)
+[#]: via: (https://opensource.com/article/19/5/run-your-blog-github-pages-python)
+[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny/users/jasperzanjani/users/jasperzanjani/users/jasperzanjani/users/jnyjny/users/jasperzanjani)
+
+Run your blog on GitHub Pages with Python
+======
+Create a blog with Pelican, a Python-based blogging platform that works
+well with GitHub.
+![Raspberry Pi and Python][1]
+
+[GitHub][2] is a hugely popular web service for source code control that uses [Git][3] to synchronize local files with copies kept on GitHub's servers so you can easily share and back up your work.
+
+In addition to providing a user interface for code repositories, GitHub also enables users to [publish web pages][4] directly from a repository. The website generation package GitHub recommends is [Jekyll][5], written in Ruby. Since I'm a bigger fan of [Python][6], I prefer [Pelican][7], a Python-based blogging platform that works well with GitHub.
+
+Pelican and Jekyll both transform content written in [Markdown][8] or [reStructuredText][9] into HTML to generate static websites, and both generators support themes that allow unlimited customization.
+
+In this article, I'll describe how to install Pelican, set up your GitHub repository, run a quickstart helper, write some Markdown files, and publish your first page. I'll assume that you have a [GitHub account][10], are comfortable with [basic Git commands][11], and want to publish a blog using Pelican.
+
+### Install Pelican and create the repo
+
+First things first, Pelican (and **ghp-import**) must be installed on your local machine. This is super easy with [pip][12], the Python package installation tool (you have pip right?):
+
+```
+$ pip install pelican ghp-import
+```
+
+Next, open a browser and create a new repository on GitHub for your sweet new blog. Name it as follows (substituting your GitHub username for `username` here and throughout this tutorial):
+
+```
+https://GitHub.com/username/username.github.io
+```
+
+Leave it empty; we will fill it with compelling blog content in a moment.
+
+Using a command line (you command line right?), clone your empty Git repository to your local machine:
+
+```
+$ git clone https://github.com/username/username.github.io blog
+$ cd blog
+```
+
+### That one weird trick…
+
+Here's a not-super-obvious trick about publishing web content on GitHub. For user pages (pages hosted in repos named _username.github.io_), the content is served from the **master** branch.
+
+I strongly prefer not to keep all the Pelican configuration files and raw Markdown files in **master**, rather just the web content. So I keep the Pelican configuration and the raw content in a separate branch I like to call **content**. (You can call it whatever you want, but the following instructions will call it **content**.) I like this structure since I can throw away all the files in **master** and re-populate it with the **content** branch.
+
+```
+$ git checkout -b content
+Switched to a new branch 'content'
+```
+
+### Configure Pelican
+
+Now it's time for content configuration. Pelican provides a great initialization tool called **pelican-quickstart** that will ask you a series of questions about your blog.
+
+```
+$ pelican-quickstart
+Welcome to pelican-quickstart v3.7.1.
+ +This script will help you create a new Pelican-based website. + +Please answer the following questions so this script can generate the files +needed by Pelican. + +> Where do you want to create your new web site? [.] +> What will be the title of this web site? Super blog +> Who will be the author of this web site? username +> What will be the default language of this web site? [en] +> Do you want to specify a URL prefix? e.g., (Y/n) n +> Do you want to enable article pagination? (Y/n) +> How many articles per page do you want? [10] +> What is your time zone? [Europe/Paris] US/Central +> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) y +> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) y +> Do you want to upload your website using FTP? (y/N) n +> Do you want to upload your website using SSH? (y/N) n +> Do you want to upload your website using Dropbox? (y/N) n +> Do you want to upload your website using S3? (y/N) n +> Do you want to upload your website using Rackspace Cloud Files? (y/N) n +> Do you want to upload your website using GitHub Pages? (y/N) y +> Is this your personal page (username.github.io)? (y/N) y +Done. Your new project is available at /Users/username/blog +``` + +You can take the defaults on every question except: + + * Website title, which should be unique and special + * Website author, which can be a personal username or your full name + * Time zone, which may not be in Paris + * Upload to GitHub Pages, which is a "y" in our case + + + +After answering all the questions, Pelican leaves the following in the current directory: + + +``` +$ ls +Makefile content/ develop_server.sh* +fabfile.py output/ pelicanconf.py +publishconf.py +``` + +You can check out the [Pelican docs][13] to find out how to use those files, but we're all about getting things done _right now_. No, I haven't read the docs yet either. + +### Forge on + +Add all the Pelican-generated files to the **content** branch of the local Git repo, commit the changes, and push the local changes to the remote repo hosted on GitHub by entering: + + +``` +$ git add . +$ git commit -m 'initial pelican commit to content' +$ git push origin content +``` + +This isn't super exciting, but it will be handy if we need to revert edits to one of these files. + +### Finally getting somewhere + +OK, now you can get bloggy! All of your blog posts, photos, images, PDFs, etc., will live in the **content** directory, which is initially empty. To begin creating a first post and an About page with a photo, enter: + + +``` +$ cd content +$ mkdir pages images +$ cp /Users/username/SecretStash/HotPhotoOfMe.jpg images +$ touch first-post.md +$ touch pages/about.md +``` + +Next, open the empty file **first-post.md** in your favorite text editor and add the following: + + +``` +title: First Post on My Sweet New Blog +date: +author: Your Name Here + +# I am On My Way To Internet Fame and Fortune! + +This is my first post on my new blog. While not super informative it +should convey my sense of excitement and eagerness to engage with you, +the reader! +``` + +The first three lines contain metadata that Pelican uses to organize things. There are lots of different metadata you can put there; again, the docs are your best bet for learning more about the options. + +Now, open the empty file **pages/about.md** and add this text: + + +``` +title: About +date: + +![So Schmexy][my_sweet_photo] + +Hi, I am and I wrote this epic collection of Interweb +wisdom. 
In days of yore, much of this would have been deemed sorcery
+and I would probably have been burned at the stake.
+
+😆
+
+[my_sweet_photo]: {filename}/images/HotPhotoOfMe.jpg
+```
+
+You now have three new pieces of web content in your content directory. Of the content branch. That's a lot of content.
+
+### Publish
+
+Don't worry; the payoff is coming!
+
+All that's left to do is:
+
+ 1. Run Pelican to generate the static HTML files in **output**:
+
+```
+$ pelican content -o output -s publishconf.py
+```
+
+ 2. Use **ghp-import** to add the contents of the **output** directory to the **master** branch:
+
+```
+$ ghp-import -m "Generate Pelican site" --no-jekyll -b master output
+```
+
+ 3. Push the local master branch to the remote repo:
+
+```
+$ git push origin master
+```
+
+ 4. Commit and push the new content to the **content** branch:
+
+```
+$ git add content
+$ git commit -m 'added a first post, a photo and an about page'
+$ git push origin content
+```
+
+### OMG, I did it!
+
+Now the exciting part is here, when you get to view what you've published for everyone to see! Open your browser and enter:
+
+```
+https://username.github.io
+```
+
+Congratulations on your new blog, self-published on GitHub! You can follow this pattern whenever you want to add more pages or articles. Happy blogging.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/run-your-blog-github-pages-python
+
+作者:[Erik O'Shaughnessy][a]
+选题:[lujun9972][b]
+译者:[QiaoN](https://github.com/QiaoN)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jnyjny/users/jasperzanjani/users/jasperzanjani/users/jasperzanjani/users/jnyjny/users/jasperzanjani
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Raspberry Pi and Python)
+[2]: https://github.com/
+[3]: https://git-scm.com
+[4]: https://help.github.com/en/categories/github-pages-basics
+[5]: https://jekyllrb.com
+[6]: https://python.org
+[7]: https://blog.getpelican.com
+[8]: https://guides.github.com/features/mastering-markdown
+[9]: http://docutils.sourceforge.net/docs/user/rst/quickref.html
+[10]: https://github.com/join?source=header-home
+[11]: https://git-scm.com/docs
+[12]: https://pip.pypa.io/en/stable/
+[13]: https://docs.getpelican.com
diff --git a/sources/tech/20190523 Testing a Go-based S2I builder image.md b/sources/tech/20190523 Testing a Go-based S2I builder image.md
new file mode 100644
index 0000000000..a6facd515d
--- /dev/null
+++ b/sources/tech/20190523 Testing a Go-based S2I builder image.md
@@ -0,0 +1,222 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Testing a Go-based S2I builder image)
+[#]: via: (https://opensource.com/article/19/5/source-image-golang-part-3)
+[#]: author: (Chris Collins https://opensource.com/users/clcollins)
+
+Testing a Go-based S2I builder image
+======
+In the third article in this series on Source-to-Image for Golang
+applications, build your application image and take it out for a spin.
+![gopher illustrations][1]
+
+In the first two articles in this series, we explored the general [requirements of a Source To Image (S2I) system][2] and [prepared an environment][3] specifically for a Go (Golang) application. Now let's give it a spin.
+ +### Building the builder image + +Once the Dockerfile and Source-to-Image (S2I) scripts are ready, the Golang builder image can be created with the **docker build** command: + + +``` +`docker build -t golang-builder .` +``` + +This will produce a builder image named **golang-builder** with the context of our current directory. + +### Building the application image + +The golang-builder image is not much use without an application to build. For this exercise, we will build a simple **hello-world** application. + +#### GoHelloWorld + +Let's meet our test app, [GoHelloWorld][4]. Download the latest [version of Go][5] if you want to follow along. There are two important (for this exercise) files in this repository: + + +``` +// goHelloWorld.go +package main + +import "fmt" + +func main() { +fmt.Println("Hello World!") +} +``` + +This is a very basic app, but it will work fine for testing the builder image. We also have a basic test for GoHelloWorld: + + +``` +// goHelloWorld_test.go +package main + +import "testing" + +func TestMain(t *testing.T) { +t.Log("Hello World!") +} +``` + +#### Build the application image + +Building the application image entails running the **s2i build** command with arguments for the repository containing the code to build (or **.** to build with code from the current directory), the name of the builder image to use, and the name of the resulting application image to create. + + +``` +`$ s2i build https://github.com/clcollins/goHelloWorld.git golang-builder go-hello-world` +``` + +To build from a local directory on a filesystem, replace the Git URL with a period to represent the current directory. For example: + + +``` +`$ s2i build . golang-builder go-hello-world` +``` + +_Note:_ If a Git repository is initialized in the current directory, S2I will fetch the code from the repository URL rather than using the local code. This results in local, uncommitted changes not being used when building the image (if you're unfamiliar with what I mean by "uncommitted changes," brush up on your [Git terminology over here][6]). Directories that are not Git-initialized repositories behave as expected. + +#### Run the application image + +Once the application image is built, it can be tested by running it with the **Docker** command. Source-to-Image has replaced the **CMD** in the image with the run script created earlier so it will execute the **/go/src/app/app** binary created during the build process: + + +``` +$ docker run go-hello-world +Hello World! +``` + +Success! We now have a compiled Go application inside a Docker image created by passing the contents of a Git repo to S2I and without needing a special Dockerfile for our application. + +The application image we just built includes not only the application, but also its source code, test code, the S2I scripts, Golang libraries, and _much of the Debian Linux distribution_ (because the Golang image is based on the Debian base image). The resulting image is not small: + + +``` +$ docker images | grep go-hello-world +go-hello-world latest 75a70c79a12f 4 minutes ago 789 MB +``` + +For applications written in languages that are interpreted at runtime and depend on linked libraries, like Ruby or Python, having all the source code and operating system are necessary to run. The build images will be pretty large as a result, but at least we know it will be able to run. With these languages, we could stop here with our S2I builds. 
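+
+If you are curious where all that size comes from, a quick optional check with **docker history** lists each layer in the image and its size, which makes it easy to see how much is inherited from the Golang base image versus added by our own build steps. This is purely an inspection step, not part of the S2I workflow:
+
+```
+$ docker history go-hello-world
+```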
+
+There is the option, however, to more explicitly define the production requirements for the application.
+
+Since the resulting application image would be the same image that would run the production app, I want to ensure the required ports, volumes, and environment variables are added to the Dockerfile for the builder image. By writing these in a declarative way, our app is closer to the [Twelve-Factor App][7] recommended practice. For example, if we were to use the builder image to create application images for a Ruby on Rails application running [Puma][8], we would want to open a port to access the webserver. We should add the line **EXPOSE 3000** in the builder Dockerfile so it can be inherited by all the images generated from it.
+
+But for the Go app, we can do better.
+
+### Build a runtime image
+
+Since our builder image created a statically compiled Go binary with our application, we can create a final "runtime" image containing _only_ the binary and none of the other cruft.
+
+Once the application image is created, the compiled GoHelloWorld app can be extracted and put into a new, empty image using the save-artifacts script.
+
+#### Runtime files
+
+Only the application binary and a Dockerfile are required to create the runtime image.
+
+##### Application binary
+
+Inside of the application image, the save-artifacts script is written to stream a tar archive of the app binary to stdout. We can check the files included in the tar archive created by save-artifacts with the **-vt** flags for tar:
+
+```
+$ docker run go-hello-world /usr/libexec/s2i/save-artifacts | tar -tvf -
+-rwxr-xr-x 1001/root   1997502 2019-05-03 18:20 app
+```
+
+If this results in errors along the lines of "This does not appear to be a tar archive," the save-artifacts script is probably outputting other data in addition to the tar stream, as mentioned above. We must make sure to suppress all output other than the tar stream.
+
+If everything looks OK, we can use **save-artifacts** to copy the binary out of the application image:
+
+```
+$ docker run go-hello-world /usr/libexec/s2i/save-artifacts | tar -xf -
+```
+
+This will copy the app file into the current directory, ready to be added to its own image.
+
+##### Dockerfile
+
+The Dockerfile is extremely simple, with only three lines. The **FROM scratch** source denotes that it uses an empty, blank parent image. The rest of the Dockerfile specifies copying the app binary into **/app** in the image and using that binary as the image **ENTRYPOINT**:
+
+```
+FROM scratch
+COPY app /app
+ENTRYPOINT ["/app"]
+```
+
+Save this Dockerfile as **Dockerfile-runtime**.
+
+Why **ENTRYPOINT** and not **CMD**? We could do either, but since there is nothing else in the image (no filesystem, no shell), we couldn't run anything else anyway.
+
+#### Building the runtime image
+
+With the Dockerfile and binary ready to go, we can build the new runtime image:
+
+```
+$ docker build -f Dockerfile-runtime -t go-hello-world:slim .
+```
+
+The new runtime image is considerably smaller—just 2MB!
+
+```
+$ docker images | grep -e 'go-hello-world *slim'
+go-hello-world slim 4bd091c43816 3 minutes ago 2 MB
+```
+
+We can test that it still works as expected with **docker run**:
+
+```
+$ docker run go-hello-world:slim
+Hello World!
+```
+
+### Bootstrapping s2i with s2i create
+
+While we hand-created all the S2I files in this example, the **s2i** command has a sub-command to help scaffold all the files we might need for a Source-to-Image build: **s2i create**.
+ +Using the **s2i create** command, we can generate a new project, creatively named **go-hello-world-2** in the **./ghw2** directory: + + +``` +$ s2i create go-hello-world-2 ./ghw2 +$ ls ./ghw2/ +Dockerfile Makefile README.md s2i test +``` + +The **create** sub-command creates a placeholder Dockerfile, a README.md with information about how to use Source-to-Image, some example S2I scripts, a basic test framework, and a Makefile. The Makefile is a great way to automate building and testing the Source-to-Image builder image. Out of the box, running **make** will build our image, and it can be extended to do more. For example, we could add steps to build a base application image, run tests, or generate a runtime Dockerfile. + +### Conclusion + +In this tutorial, we have learned how to use Source-to-Image to build a custom Golang builder image, create an application image using **s2i build** , and extract the application binary to create a super-slim runtime image. + +In a future extension to this series, I would like to look at how to use the builder image we created with [OKD][9] to automatically deploy our Golang apps with **buildConfigs** , **imageStreams** , and **deploymentConfigs**. Please let me know in the comments if you are interested in me continuing the series, and thanks for reading. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/source-image-golang-part-3 + +作者:[Chris Collins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clcollins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/go-golang.png?itok=OAW9BXny (gopher illustrations) +[2]: https://opensource.com/article/19/5/source-image-golang-part-1 +[3]: https://opensource.com/article/19/5/source-image-golang-part-2 +[4]: https://github.com/clcollins/goHelloWorld.git +[5]: https://golang.org/doc/install +[6]: /article/19/2/git-terminology +[7]: https://12factor.net/ +[8]: http://puma.io/ +[9]: https://okd.io/ diff --git a/sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md b/sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md new file mode 100644 index 0000000000..06b8fd6de3 --- /dev/null +++ b/sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Choosing the right model for maintaining and enhancing your IoT project) +[#]: via: (https://opensource.com/article/19/5/model-choose-embedded-iot-development) +[#]: author: (Drew Moseley https://opensource.com/users/drewmoseley) + +Choosing the right model for maintaining and enhancing your IoT project +====== +Learn more about these two models: Centralized Golden Master and +Distributed Build System +![][1] + +In today's connected embedded device market, driven by the [Internet of things (IoT)][2], a large share of devices in development are based on Linux of one form or another. The prevalence of low-cost boards with ready-made Linux distributions is a key driver in this. 
Acquiring hardware, building your custom code, connecting the devices to other hardware peripherals and the internet as well as device management using commercial cloud providers has never been easier. A developer or development team can quickly prototype a new application and get the devices in the hands of potential users. This is a good thing and results in many interesting new applications, as well as many questionable ones. + +When planning a system design for beyond the prototyping phase, things get a little more complex. In this post, we want to consider mechanisms for developing and maintaining your base [operating system (OS) image][3]. There are many tools to help with this but we won't be discussing individual tools; of interest here is the underlying model for maintaining and enhancing this image and how it will make your life better or worse. + +There are two primary models for generating these images: + + 1. Centralized Golden Master + 2. Distributed Build System + + + +These categories mirror the driving models for [Source Code Management (SCM)][4] systems, and many of the arguments regarding centralized vs. distributed are applicable when discussing OS images. + +### Centralized Golden Master + +Hobbyist and maker projects primarily use the Centralized Golden Master method of creating and maintaining application images. This fact gives this model the benefit of speed and familiarity, allowing developers to quickly set up such a system and get it running. The speed comes from the fact that many device manufacturers provide canned images for their off-the-shelf hardware. For example, boards from such families as the [BeagleBone][5] and [Raspberry Pi][6] offer ready-to-use OS images and [flashing][7]. Relying on these images means having your system up and running in just a few mouse clicks. The familiarity is due to the fact that these images are generally based on a desktop distro many device developers have already used, such as [Debian][8]. Years of using Linux can then directly transfer to the embedded design, including the fact that the packaging utilities remain largely the same, and it is simple for designers to get the extra software packages they need. + +There are a few downsides of such an approach. The first is that the [golden master image][9] is generally a choke point, resulting in lost developer productivity after the prototyping stage since everyone must wait for their turn to access the latest image and make their changes. In the SCM realm, this practice is equivalent to a centralized system with individual [file locking][10]. Only the developer with the lock can work on any given file. + +![Development flow with the Centralized Golden Master model.][11] + +The second downside with this approach is image reproducibility. This issue is usually managed by manually logging into the target systems, installing packages using the native package manager, configuring applications and dot files, and then modifying the system configuration files in place. Once this process is completed, the disk is imaged using the **dd** utility, or an equivalent, and then distributed. + +Again, this approach creates a minefield of potential issues. For example, network-based package feeds may cease to exist, and the base software provided by the vendor image may change. Scripting can help mitigate these issues. However, these scripts tend to be fragile and break when changes are made to configuration file formats or the vendor's base software packages. 
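+
+To make the fragility concrete, a typical capture script for this model looks something like the sketch below. Everything in it is hypothetical (the host address, package list, and SD card device are placeholders), but the shape is representative:
+
+```
+#!/bin/bash
+# Hypothetical golden-master capture: provision a reference device, then image it.
+set -euo pipefail
+TARGET="pi@192.168.1.50"   # address of the reference board (placeholder)
+
+# Provision over SSH -- breaks silently if package names or config paths change
+ssh "$TARGET" "sudo apt-get update && sudo apt-get install -y nginx"
+ssh "$TARGET" "sudo poweroff"
+
+# After moving the SD card to the build host, capture the image (device assumed)
+sudo dd if=/dev/sdb of="golden-master-$(date +%Y%m%d).img" bs=4M status=progress
+```
+
+Each line encodes an assumption about the vendor image (package feeds, file locations, device names), and any of them can drift out from under the script.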
+ +The final issue that arises with this development model is reliance on third parties. If the hardware vendor's image changes don't work for your design, you may need to invest significant time to adapt. To make matters even more complicated, as mentioned before, the hardware vendors often based their images on an upstream project such as Debian or Ubuntu. This situation introduces even more third parties who can affect your design. + +### Distributed Build System + +This method of creating and maintaining an image for your application relies on the generation of target images separate from the target hardware. The developer workflow here is similar to standard software development using an SCM system; the image is fully buildable by tooling and each developer can work independently. Changes to the system are made via edits to metadata files (scripting, recipes, configuration files, etc) and then the tooling is rerun to generate an updated image. These metadata files are then managed using an SCM system. Individual developers can merge the latest changes into their working copies to produce their development images. In this case, no golden master image is needed and developers can avoid the associated bottleneck. + +Release images are then produced by a build system using standard SCM techniques to pull changes from all the developers. + +![Development flow with the Distributed Build System model.][12] + +Working in this fashion allows the size of your development team to increase without reducing productivity of individual developers. All engineers can work independently of the others. Additionally, this build setup ensures that your builds can be reproduced. Using standard SCM workflows can ensure that, at any future time, you can regenerate a specific build allowing for long term maintenance, even if upstream providers are no longer available. Similar to working with distributed SCM tools however, there is additional policy that needs to be in place to enable reproducible, release candidate images. Individual developers have their own copies of the source and can build their own test images but for a proper release engineering effort, development teams will need to establish merging and branching standards and ensure that all changes targeted for release eventually get merged into a well-defined branch. Many upstream projects already have well-defined processes for this kind of release strategy (for instance, using *-stable and *-next branches). + +The primary downside of this approach is the lack of familiarity. For example, adding a package to the image normally requires creating a recipe of some kind and then updating the definitions so that the package binaries are included in the image. This is very different from running apt while logged into a running system. The learning curve of these systems can be daunting but the results are more predictable and scalable and are likely a better choice when considering a design for a product that will be mass produced. + +Dedicated build systems such as [OpenEmbedded][13] and [Buildroot][14] use this model as do distro packaging tools such as [debootstrap][15] and [multistrap][16]. Newer tools such as [Isar][17], [debos][18], and [ELBE][19] also use this basic model. Choices abound, and it is worth the investment to learn one or more of these packages for your designs. 
The long term maintainability and reproducibility of these systems will reduce risk in your design by allowing you to generate reproducible builds, track all the source code, and remove your dependency on third-party providers continued existence. + +#### Conclusion + +To be clear, the distributed model does suffer some of the same issues as mentioned for the Golden Master Model; especially the reliance on third parties. This is a consequence of using systems designed by others and cannot be completely avoided unless you choose a completely roll-your-own approach which comes with a significant cost in development and maintenance. + +For prototyping and proof-of-concept level design, and a team of just a few developers, the Golden Master Model may well be the right choice given restrictions in time and budget that are present at this stage of development. For low volume, high touch designs, this may be an acceptable trade-off for production use. + +For general production use, the benefits in terms of team size scalability, image reproducibility and developer productivity greatly outweigh the learning curve and overhead of systems implementing the distributed model. Support from board and chip vendors is also widely available in these systems reducing the upfront costs of developing with them. For your next product, I strongly recommend starting the design with a serious consideration of the model being used to generate the base OS image. If you choose to prototype with the golden master model with the intention of migrating to the distributed model, make sure to build sufficient time in your schedule for this effort; the estimates will vary widely depending on the specific tooling you choose as well as the scope of the requirements and the out-of-the-box availability of software packages your code relies on. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/model-choose-embedded-iot-development + +作者:[Drew Moseley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/drewmoseley +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe +[2]: https://en.wikipedia.org/wiki/Internet_of_things +[3]: https://en.wikipedia.org/wiki/System_image +[4]: https://en.wikipedia.org/wiki/Version_control +[5]: http://beagleboard.org/ +[6]: https://www.raspberrypi.org/ +[7]: https://en.wikipedia.org/wiki/Flash_memory +[8]: https://www.debian.org/ +[9]: https://en.wikipedia.org/wiki/Software_release_life_cycle#RTM +[10]: https://en.wikipedia.org/wiki/File_locking +[11]: https://opensource.com/sites/default/files/uploads/cgm1_500.png (Development flow with the Centralized Golden Master model.) +[12]: https://opensource.com/sites/default/files/uploads/cgm2_500.png (Development flow with the Distributed Build System model.) 
+[13]: https://www.openembedded.org/ +[14]: https://buildroot.org/ +[15]: https://wiki.debian.org/Debootstrap +[16]: https://wiki.debian.org/Multistrap +[17]: https://github.com/ilbers/isar +[18]: https://github.com/go-debos/debos +[19]: https://elbe-rfs.org/ diff --git a/sources/tech/20190524 Dual booting Windows and Linux using UEFI.md b/sources/tech/20190524 Dual booting Windows and Linux using UEFI.md new file mode 100644 index 0000000000..b281b6036b --- /dev/null +++ b/sources/tech/20190524 Dual booting Windows and Linux using UEFI.md @@ -0,0 +1,104 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Dual booting Windows and Linux using UEFI) +[#]: via: (https://opensource.com/article/19/5/dual-booting-windows-linux-uefi) +[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/ckrzen) + +Dual booting Windows and Linux using UEFI +====== +A quick rundown of setting up Linux and Windows to dual boot on the same +machine, using the Unified Extensible Firmware Interface (UEFI). +![Linux keys on the keyboard for a desktop computer][1] + +Rather than doing a step-by-step how-to guide to configuring your system to dual boot, I’ll highlight the important points. As an example, I will refer to my new laptop that I purchased a few months ago. I first installed [Ubuntu Linux][2] onto the entire hard drive, which destroyed the pre-installed [Windows 10][3] installation. After a few months, I decided to install a different Linux distribution, and so also decided to re-install Windows 10 alongside [Fedora Linux][4] in a dual boot configuration. I’ll highlight some essential facts to get started. + +### Firmware + +Dual booting is not just a matter of software. Or, it is, but it involves changing your firmware, which among other things tells your machine how to begin the boot process. Here are some firmware-related issues to keep in mind. + +#### UEFI vs. BIOS + +Before attempting to install, make sure your firmware configuration is optimal. Most computers sold today have a new type of firmware known as [Unified Extensible Firmware Interface (UEFI)][5], which has pretty much replaced the other firmware known as [Basic Input Output System (BIOS)][6], which is often included through the mode many providers call Legacy Boot. + +I had no need for BIOS, so I chose UEFI mode. + +#### Secure Boot + +One other important setting is Secure Boot. This feature detects whether the boot path has been tampered with, and stops unapproved operating systems from booting. For now, I disabled this option to ensure that I could install Fedora Linux. According to the Fedora Project Wiki [Features/Secure Boot ][7] Fedora Linux will work with it enabled. This may be different for other Linux distributions —I plan to revisit this setting in the future. + +In short, if you find that you cannot install your Linux OS with this setting active, disable Secure Boot and try again. + +### Partitioning the boot drive + +If you choose to dual boot and have both operating systems on the same drive, you have to break it into partitions. Even if you dual boot using two different drives, most Linux installations are best broken into a few basic partitions for a variety of reasons. Here are some options to consider. + +#### GPT vs MBR + +If you decide to manually partition your boot drive in advance, I recommend using the [GUID Partition Table (GPT)][8] rather than the older [Master Boot Record (MBR)][9]. 
Among the reasons for this change, there are two specific limitations of MBR that GPT doesn’t have:
+
+ * MBR can hold up to 15 partitions, while GPT can hold up to 128.
+ * MBR only supports up to 2 terabytes, while GPT uses 64-bit addresses which allows it to support disks up to 8 million terabytes.
+
+If you have shopped for hard drives recently, then you know that many of today’s drives exceed the 2 terabyte limit.
+
+#### The EFI system partition
+
+If you are doing a fresh installation or using a new drive, there are probably no partitions to begin with. In this case, the OS installer will create the first one, which is the [EFI System Partition (ESP)][10]. If you choose to manually partition your drive using a tool such as [gdisk][11], you will need to create this partition with several parameters. Based on the existing ESP, I set the size to around 500MB and assigned it the ef00 (EFI System) partition type. The UEFI specification requires the format to be FAT32/msdos, most likely because it is supportable by a wide range of operating systems.
+
+![Partitions][12]
+
+### Operating System Installation
+
+Once you accomplish the first two tasks, you can install your operating systems. While I focus on Windows 10 and Fedora Linux here, the process is fairly similar when installing other combinations as well.
+
+#### Windows 10
+
+I started the Windows 10 installation and created a 20 Gigabyte Windows partition. Since I had previously installed Linux on my laptop, the drive had an ESP, which I chose to keep. I deleted all existing Linux and swap partitions to start fresh, and then started my Windows installation. The Windows installer automatically created another small partition—16 Megabytes—called the [Microsoft Reserved Partition (MSR)][13]. Roughly 400 Gigabytes of unallocated space remained on the 512GB boot drive once this was finished.
+
+I proceeded with and completed the Windows 10 installation process, then rebooted into Windows to make sure it was working, created my user account, set up wi-fi, and completed other tasks that need to be done on a first-time OS installation.
+
+#### Fedora Linux
+
+I next moved to install Linux. I started the process, and when it reached the disk configuration steps, I made sure not to change the Windows NTFS and MSR partitions. I also did not change the ESP, but I did set its mount point to **/boot/efi**. I then created the usual ext4 formatted partitions, **/** (root), **/boot**, and **/home**. The last partition I created was Linux **swap**.
+
+As with Windows, I continued and completed the Linux installation, and then rebooted. To my delight, at boot time the [GRand Unified Boot Loader (GRUB)][14] menu provided the choice to select either Windows or Linux, which meant I did not have to do any additional configuration. I selected Linux and completed the usual steps such as creating my user account.
+
+### Conclusion
+
+Overall, the process was painless. In past years, there has been some difficulty navigating the changes from BIOS to UEFI, plus the introduction of features such as Secure Boot. I believe that we have now made it past these hurdles and can reliably set up multi-boot systems.
+
+I don’t miss the [Linux LOader (LILO)][15] anymore!
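+
+As a footnote to the partitioning discussion above, creating an ESP with **gdisk** looks roughly like the abbreviated session below. The device name, partition number, and size are assumptions for illustration; adjust them for your own disk:
+
+```
+$ sudo gdisk /dev/sda
+Command (? for help): n
+Partition number (1-128, default 1): 1
+First sector: <Enter for default>
+Last sector: +500M
+Hex code or GUID (L to show codes, Enter = 8300): ef00
+Command (? for help): w
+$ sudo mkfs.fat -F32 /dev/sda1
+```
+
+The ef00 type code and the FAT32 format match the requirements described earlier.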
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/dual-booting-windows-linux-uefi + +作者:[Alan Formy-Duval][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alanfdoss/users/ckrzen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer) +[2]: https://www.ubuntu.com +[3]: https://www.microsoft.com/en-us/windows +[4]: https://getfedora.org +[5]: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface +[6]: https://en.wikipedia.org/wiki/BIOS +[7]: https://fedoraproject.org/wiki/Features/SecureBoot +[8]: https://en.wikipedia.org/wiki/GUID_Partition_Table +[9]: https://en.wikipedia.org/wiki/Master_boot_record +[10]: https://en.wikipedia.org/wiki/EFI_system_partition +[11]: https://sourceforge.net/projects/gptfdisk/ +[12]: /sites/default/files/u216961/gdisk_screenshot_s.png +[13]: https://en.wikipedia.org/wiki/Microsoft_Reserved_Partition +[14]: https://en.wikipedia.org/wiki/GNU_GRUB +[15]: https://en.wikipedia.org/wiki/LILO_(boot_loader) diff --git a/sources/tech/20190527 4 open source mobile apps for Nextcloud.md b/sources/tech/20190527 4 open source mobile apps for Nextcloud.md new file mode 100644 index 0000000000..c97817e3c1 --- /dev/null +++ b/sources/tech/20190527 4 open source mobile apps for Nextcloud.md @@ -0,0 +1,140 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 open source mobile apps for Nextcloud) +[#]: via: (https://opensource.com/article/19/5/mobile-apps-nextcloud) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) + +4 open source mobile apps for Nextcloud +====== +Increase Nextcloud's value by turning it into an on-the-go information +hub. +![][1] + +I've been using [Nextcloud][2] (and before that, ownCloud), an open source alternative to file syncing and storage services like Dropbox and Google Drive, for many years. It's been both reliable and useful, and it respects my privacy. + +While Nextcloud is great at both syncing and storage, it's much more than a place to dump your files. Thanks to applications that you can fold into Nextcloud, it becomes more of an information hub than a storage space. + +While I usually interact with Nextcloud using the desktop client or in a browser, I'm not always at my computer (or any computer that I trust). So it's important that I can work with Nextcloud using my [LineageOS][3]-powered smartphone or tablet. + +To do that, I use several open source apps that work with Nextcloud. Let's take a look at four of them. + +As you've probably guessed, this article looks at the Android version of those apps. I grabbed mine from [F-Droid][4], although you get them from other Android app markets. You might be able to get some or all of them from Apple's App Store if you're an iOS person. + +### Working with files and folders + +The obvious app to start with is the [Nextcloud sync client][5]. This little app links your phone or tablet to your Nextcloud account. 
+ +![Nextcloud mobile app][6] + +Using the app, you can: + + * Create folders + * Upload one or more files + * Sync files between your device and server + * Rename or remove files + * Make files available offline + + + +You can also tap a file to view or edit it. If your device doesn't have an app that can open the file, then you're out of luck. You can still download it to your phone or tablet though. + +### Reading news feeds + +Remember all the whining that went on when Google pulled the plug on Google Reader in 2013? This despite Google giving users several months to find an alternative. And, yes, there are alternatives. One of them, believe it or not, is Nextcloud. + +Nextcloud has a built-in RSS reader. All you need to do to get started is upload an [OPML][7] file containing your feeds or manually add a site's RSS feed to Nextcloud. + +Going mobile is easy, too, with the Nextcloud [News Reader app][8]. + +![Nextcloud News app][9] + +Unless you configure the app to sync when you start it up, you'll need to swipe down from the top of the app to load updates to your feeds. Depending on how many feeds you have, and how many unread items are in those feeds, syncing takes anywhere from a few seconds to half a minute. + +From there, tap an item to read it in the News app. + +![Nextcloud News app][10] + +You can also add feeds or open what you're reading in your device's default web browser. + +### Reading and writing notes + +I don't use Nextcloud's [Notes][11] app all that often (I'm more of a [Standard Notes][12] person). That said, you might find the Notes app comes in handy. + +How? By giving you a lightweight way to take [plain text][13] notes on your mobile device. The Notes app syncs any notes you have in your Nextcloud account and displays them in chronological order—newest or last-edited notes first. + +![Nextcloud Notes app][14] + +Tap a note to read or edit it. You can also create a note by tapping the **+** button, then typing what you need to type. + +![Nextcloud Notes app][15] + +There's no formatting, although you can add markup (like Markdown) to the note. Once you're done editing, the app syncs your note with Nextcloud. + +### Accessing your bookmarks + +Nextcloud has a decent bookmarking tool, and its [Bookmarks][16] app makes it easy to work with the tool on your phone or tablet. + +![Nextcloud Bookmarks app][17] + +Like the Notes app, Bookmarks displays your bookmarks in chronological order, with the newest appearing first in the list. + +If you tagged your bookmarks in Nextcloud, you can swipe left in the app to see a list of those tags rather than a long list of bookmarks. Tap a tag to view the bookmarks under it. + +![Nextcloud Bookmarks app][18] + +From there, just tap a bookmark. It opens in your device's default browser. + +You can also add a bookmark within the app. To do that, tap the **+** menu and add the bookmark. + +![Nextcloud Bookmarks app][19] + +You can include: + + * The URL + * A title for the bookmark + * A description + * One or more tags + + + +### Is that all? + +Definitely not. There are apps for [Nextcloud Deck][20] (a personal kanban tool) and [Nextcloud Talk][21] (a voice and video chat app). There are also a number of third-party apps that work with Nextcloud. Just do a search for _Nextcloud_ or _ownCloud_ in your favorite app store to track them down. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/mobile-apps-nextcloud + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_cloudagenda_20140341_jehb.png?itok=1NGs3_n4 +[2]: https://nextcloud.com/ +[3]: https://lineageos.org/ +[4]: https://opensource.com/life/15/1/going-open-source-android-f-droid +[5]: https://f-droid.org/en/packages/com.nextcloud.client/ +[6]: https://opensource.com/sites/default/files/uploads/nextcloud-app.png (Nextcloud mobile app) +[7]: http://en.wikipedia.org/wiki/OPML +[8]: https://f-droid.org/en/packages/de.luhmer.owncloudnewsreader/ +[9]: https://opensource.com/sites/default/files/uploads/nextcloud-news.png (Nextcloud News app) +[10]: https://opensource.com/sites/default/files/uploads/nextcloud-news-reading.png (Nextcloud News app) +[11]: https://f-droid.org/en/packages/it.niedermann.owncloud.notes +[12]: https://opensource.com/article/18/12/taking-notes-standard-notes +[13]: https://plaintextproject.online +[14]: https://opensource.com/sites/default/files/uploads/nextcloud-notes.png (Nextcloud Notes app) +[15]: https://opensource.com/sites/default/files/uploads/nextcloud-notes-add.png (Nextcloud Notes app) +[16]: https://f-droid.org/en/packages/org.schabi.nxbookmarks/ +[17]: https://opensource.com/sites/default/files/uploads/nextcloud-bookmarks.png (Nextcloud Bookmarks app) +[18]: https://opensource.com/sites/default/files/uploads/nextcloud-bookmarks-tags.png (Nextcloud Bookmarks app) +[19]: https://opensource.com/sites/default/files/uploads/nextcloud-bookmarks-add.png (Nextcloud Bookmarks app) +[20]: https://f-droid.org/en/packages/it.niedermann.nextcloud.deck +[21]: https://f-droid.org/en/packages/com.nextcloud.talk2 diff --git a/sources/tech/20190527 5 GNOME keyboard shortcuts to be more productive.md b/sources/tech/20190527 5 GNOME keyboard shortcuts to be more productive.md new file mode 100644 index 0000000000..989d69e524 --- /dev/null +++ b/sources/tech/20190527 5 GNOME keyboard shortcuts to be more productive.md @@ -0,0 +1,86 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 GNOME keyboard shortcuts to be more productive) +[#]: via: (https://fedoramagazine.org/5-gnome-keyboard-shortcuts-to-be-more-productive/) +[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/) + +5 GNOME keyboard shortcuts to be more productive +====== + +![][1] + +For some people, using GNOME Shell as a traditional desktop manager may be frustrating since it often requires more action of the mouse. In fact, GNOME Shell is also a [desktop manager][2] designed for and meant to be driven by the keyboard. Learn how to be more efficient with GNOME Shell with these 5 ways to use the keyboard instead of the mouse. + +### GNOME activities overview + +The activities overview can be easily opened using the **Super** key from the keyboard. (The **Super** key usually has a logo on it.) This is really useful when it comes to start an application. 
For example, it’s easy to start the Firefox web browser with the following key sequence **Super + f i r + Enter.** + +![][3] + +### Message tray + +In GNOME, notifications are available in the message tray. This is also the place where the calendar and world clocks are available. To open the message tray using the keyboard use the **Super+m** shortcut. To close the message tray simply use the same shortcut again. + + * ![][4] + + + +### Managing workspaces in GNOME + +Gnome Shell uses dynamic workspaces, meaning it creates additional workspaces as they are needed. A great way to be more productive using Gnome is to use one workspace per application or per dedicated activity, and then use the keyboard to navigate between these workspaces. + +Let’s look at a practical example. To open a Terminal in the current workspace press the following keys: **Super + t e r + Enter.** Then, to open a new workspace press **Super + PgDn**. Open Firefox ( **Super + f i r + Enter)**. To come back to the terminal, use **Super + PgUp**. + +![][5] + +### Managing an application window + +Using the keyboard it is also easy to manage the size of an application window. Minimizing, maximizing and moving the application to the left or the right of the screen can be done with only a few key strokes. Use **Super+**🠝 to maximize, **Super+**🠟 to minimize, **Super+**🠜 and **Super+**🠞 to move the window left and right. + +![][6] + +### Multiple windows from the same application + +Using the activities overview to start an application is very efficient. But trying to open a new window from an application already running only results in focusing on the open window. To create a new window, instead of simply hitting **Enter** to start the application, use **Ctrl+Enter**. + +So for example, to start a second instance of the terminal using the application overview, **Super + t e r + (Ctrl+Enter)**. + +![][7] + +Then you can use **Super+`** to switch between windows of the same application. + +![][8] + +As shown, GNOME Shell is a really powerful desktop environment when controlled from the keyboard. Learning to use these shortcuts and train your muscle memory to not use the mouse will give you a better user experience, and make you more productive when using GNOME. For other useful shortcuts, check out [this page on the GNOME wiki][9]. 
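+
+If you ever want to inspect or remap these bindings from a terminal, they are exposed through **gsettings**. The schema and key names below are the standard GNOME ones, but verify the values on your own system before relying on them:
+
+```
+$ gsettings get org.gnome.desktop.wm.keybindings maximize
+['<Super>Up']
+$ gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-down "['<Super>Page_Down']"
+```
+
+Changes made this way take effect immediately, which makes it easy to experiment until the shortcuts match your muscle memory.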
+
+* * *
+
+_Photo by [1AmFcS][10] on [Unsplash][11]._
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/5-gnome-keyboard-shortcuts-to-be-more-productive/
+
+作者:[Clément Verna][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/cverna/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/5-gnome-keycombos-816x345.jpg
+[2]: https://fedoramagazine.org/gnome-3-32-released-coming-to-fedora-30/
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-10-50.gif
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-11-01.gif
+[5]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-12-57.gif
+[6]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-13-06.gif
+[7]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-13-19.gif
+[8]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-13-22.gif
+[9]: https://wiki.gnome.org/Design/OS/KeyboardShortcuts
+[10]: https://unsplash.com/photos/MuTWth_RnEs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[11]: https://unsplash.com/search/photos/keyboard?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/sources/tech/20190527 A deeper dive into Linux permissions.md b/sources/tech/20190527 A deeper dive into Linux permissions.md
new file mode 100644
index 0000000000..26a132fdf9
--- /dev/null
+++ b/sources/tech/20190527 A deeper dive into Linux permissions.md
@@ -0,0 +1,172 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (A deeper dive into Linux permissions)
+[#]: via: (https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+A deeper dive into Linux permissions
+====
+Sometimes you see more than just the ordinary r, w, x and - designations when looking at file permissions on Linux. How can you get a clearer view of what the uncommon characters are trying to tell you and how do these permissions work?
+![Sandra Henry-Stocker][1]
+
+Sometimes you see more than just the ordinary **r**, **w**, **x** and **-** designations when looking at file permissions on Linux. Instead of **rwx** for the owner, group and other fields in the permissions string, you might see an **s** or **t**, as in this example:
+
+```
+drwxrwsrwt
+```
+
+One way to get a little more clarity on this is to look at the permissions with the **stat** command. The fourth line of stat’s output displays the file permissions both in octal and string format:
+
+```
+$ stat /var/mail
+  File: /var/mail
+  Size: 4096      Blocks: 8          IO Block: 4096   directory
+Device: 801h/2049d      Inode: 1048833     Links: 2
+Access: (3777/drwxrwsrwt)  Uid: ( 0/ root)   Gid: ( 8/ mail)
+Access: 2019-05-21 19:23:15.769746004 -0400
+Modify: 2019-05-21 19:03:48.226656344 -0400
+Change: 2019-05-21 19:03:48.226656344 -0400
+ Birth: -
+```
+
+This output reminds us that there are more than nine bits assigned to file permissions. In fact, there are 12.
And those extra three bits provide a way to assign permissions beyond the usual read, write and execute — 3777 (binary 011111111111), for example, indicates that two extra settings are in use.
+
+The first **1** (second bit) in this particular value represents the SGID (set group ID) and assigns temporary permission to run the file or use the directory with the permissions of the associated group.
+
+```
+011111111111
+ ^
+```
+
+**SGID** gives temporary permissions to the person using the file to act as a member of that group.
+
+The second **1** (third bit) is the “sticky” bit. It ensures that _only_ the owner of the file is able to delete or rename the file or directory.
+
+```
+011111111111
+  ^
+```
+
+Had the permissions been 7777 rather than 3777, we’d have known that the SUID (set UID) field had also been set.
+
+```
+111111111111
+^
+```
+
+**SUID** gives temporary permissions to the user using the file to act as the file owner.
+
+As for the /var/mail directory which we looked at above, all users require some access so some special values are required to provide it.
+
+But now let’s take this a step further.
+
+One of the common uses of the special permission bits is with commands like the **passwd** command. If you look at the /usr/bin/passwd file, you’ll notice that the SUID bit is set, allowing you to change your password (and, thus, the contents of the /etc/shadow file) even when you’re running as an ordinary (not a privileged) user and have no read or write access to this file. Of course, the passwd command is clever enough not to allow you to change other people's passwords unless you are actually running as root or using sudo.
+
+```
+$ ls -l /usr/bin/passwd
+-rwsr-xr-x 1 root root 63736 Mar 22 14:32 /usr/bin/passwd
+$ ls -l /etc/shadow
+-rw-r----- 1 root shadow 2195 Apr 22 10:46 /etc/shadow
+```
+
+Now, let’s look at what you can do with these special permissions.
+
+### How to assign special file permissions
+
+As with many things on the Linux command line, you have some choices on how you make your requests. The **chmod** command allows you to change permissions numerically or using character expressions.
+
+To change file permissions numerically, you might use a command like this to set the setuid and setgid bits:
+
+```
+$ chmod 6775 tryme
+```
+
+Or you might use a command like this:
+
+```
+$ chmod ug+s tryme <== for SUID and SGID permissions
+```
+
+If the file that you are adding special permissions to is a script, you might be surprised that it doesn’t comply with your expectations. Here’s a very simple example:
+
+```
+$ cat tryme
+#!/bin/bash
+
+echo I am $USER
+```
+
+Even with the SUID and SGID bits set and the file owned by root, running a script like this won’t yield the “I am root” response you might expect. Why? Because Linux ignores the set-user-ID and set-group-ID bits on scripts.
+
+```
+$ ls -l tryme
+-rwsrwsrwt 1 root root 29 May 26 12:22 tryme
+$ ./tryme
+I am jdoe
+```
+
+If you try something similar using a compiled program, on the other hand, as with this simple C program, you’ll see a different effect. In this example program, we prompt the user to enter a file and create it for them, giving the file write permission.
+
+```
+#include <stdio.h>   /* printf, scanf, fopen */
+#include <stdlib.h>  /* exit */
+
+int main()
+{
+    FILE *fp;        /* file pointer */
+    char fName[20];
+
+    printf("Enter the name of file to be created: ");
+    scanf("%s", fName);
+
+    /* create the file with write permission */
+    fp = fopen(fName, "w");
+    /* check if file was created */
+    if (fp == NULL)
+    {
+        printf("File not created");
+        exit(0);
+    }
+
+    fclose(fp);      /* close the newly created file */
+    printf("File created successfully\n");
+    return 0;
+}
+```
+
+Once you compile the program and run the commands for both making root the owner and setting the needed permissions, you’ll see that it runs with root authority as expected — leaving a newly created root-owned file. Of course, you must have sudo privileges to run some of the required commands.
+
+```
+$ cc -o mkfile mkfile.c <== compile the program
+$ sudo chown root:root mkfile <== change owner and group to “root”
+$ sudo chmod ug+s mkfile <== add SUID and SGID permissions
+$ ./mkfile <== run the program
+Enter the name of file to be created: empty
+File created successfully
+$ ls -l empty
+-rw-rw-r-- 1 root root 0 May 26 13:15 empty
+```
+
+Notice that the file is owned by root — something that wouldn’t have happened if the program hadn’t run with root authority.
+
+The positions of the uncommon settings in the permissions string (e.g., rw**s**rw**s**rw**t**) can help remind us what each bit means. At least the first "s" (SUID) is in the owner-permissions area and the second (SGID) is in the group-permissions area. Why the sticky bit is a "t" instead of an "s" is beyond me. Maybe the founders imagined referring to it as the "tacky bit" and changed their minds due to the less flattering second definition of the word. In any case, the extra permissions settings provide a lot of additional functionality to Linux and other Unix systems.
+
+Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/shs_rwsr-100797564-large.jpg
+[2]: https://www.facebook.com/NetworkWorld/
+[3]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md b/sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md
new file mode 100644
index 0000000000..8ab4b5b0b8
--- /dev/null
+++ b/sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Blockchain 2.0 – Introduction To Hyperledger Sawtooth [Part 12])
+[#]: via: (https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-sawtooth/)
+[#]: author: (editor https://www.ostechnix.com/author/editor/)
+
+Blockchain 2.0 – Introduction To Hyperledger Sawtooth [Part 12]
+======
+
+![Introduction To Hyperledger Sawtooth][1]
+
+After having discussed the [**Hyperledger Fabric**][2] project in detail on this blog, it's time we moved on to the next project of interest at the Hyperledger camp.
**Hyperledger Sawtooth** is Intel’s contribution to the [**Blockchain**][3] consortium mission to develop enterprise-ready modular distributed ledgers and applications. Sawtooth is another attempt at creating an easy-to-roll-out blockchain ledger for businesses, keeping their resource constraints and security requirements in mind. While platforms such as [**Ethereum**][4] will in theory offer similar functionality when placed in capable hands, Sawtooth readily provides a lot of customizability and is built from the ground up for specific enterprise-level use cases.
+
+The Hyperledger project page has an introductory video detailing the Sawtooth architecture and platform. We’re attaching it here for readers to get a quick round-up about the product.
+
+Moving to the intricacies of the Sawtooth project, there are **five primary and significant differences** between Sawtooth and its alternatives. The rest of this post will explore these differences and at the end will mention an example real-world use case for Hyperledger Sawtooth in managing supply chains.
+
+### Distinction 1: The consensus algorithm – PoET
+
+This is perhaps amongst the most notable and significant changes that Sawtooth brings to the fore. While exploring all the different consensus algorithms that exist for blockchain platforms these days is out of the scope of this post, what is to be noted is that Sawtooth uses a **Proof Of Elapsed Time** (POET) based consensus algorithm. Such a system for validating transactions and blocks on the blockchain is considered to be resource-efficient, unlike other computation-heavy systems which use the likes of **Proof of work** or **Proof of stake** algorithms.
+
+POET is designed to utilize the security and tamper-proof features of modern processors, with reference implementations utilizing **Intel’s trusted execution environment** (TEE) architecture on its modern CPUs. The fact that the execution of the validating program makes use of a TEE, along with a **“lottery”** system that is implemented to choose the **validator** or **node** to fulfill the request, makes the process of creating blocks on the Sawtooth architecture secure and resource-efficient at the same time.
+
+The POET algorithm basically elects a validator randomly based on a stochastic process. The probability of a particular node being selected depends on a host of factors, one of which is the amount of computing resources the said node has contributed to the ledger so far. The chosen validator then proceeds to timestamp the said block of data and shares it with the permissioned nodes in the network so that there remains a reliable record of the blockchain’s immutability. This method of electing the “validator” node was developed by **Intel** and so far, has been shown to exhibit zero bias or error in executing its function.
+
+### Distinction 2: A fully separated level of abstraction between the application level and core system
+
+As mentioned, the Sawtooth platform takes modularity to the next level. Here in the reference implementation that is shared by the [**Hyperledger project**][5] foundation, the core system that enables users to create a distributed ledger and the application run-time environment (the virtual environment where applications developed to run on the blockchain, otherwise known as [**smart contracts**][6] or **chaincode**, execute) are separated by a full level of abstraction.
This essentially means that developers can code applications in any programming language of their choice instead of having to conform to platform-specific languages. The Sawtooth platform supports the following contract languages out of the box: **Java**, **Python**, **C++**, **Go**, **JavaScript**, and **Rust**. This distinction between the core system and the application level is achieved by defining a custom transaction family for each application that is developed on the platform.

A transaction family contains the following:

  * **A transaction processor**: basically, your application's business logic.
  * **A data model**: a system that defines and handles data storage and processing at the system level.
  * **A client-side handler**: handles the end-user side of your application.



Multiple low-level transaction families such as these may be defined in a permissioned blockchain and used in a modular fashion throughout the network. For instance, if a consortium of banks chose to implement it, they could come together and define common functionalities or rules for their applications and then plug and play the transaction families they need in their respective systems, without having to develop everything on their own.

### Distinction 3: SETH

There is little doubt that Ethereum would be one of the key players in any blockchain future. People at the Hyperledger foundation know this well. The **Hyperledger Burrow project** is in fact meant to address the existence of entities working on multiple platforms by providing a way for developers to use Ethereum blockchain specifications to build custom distributed applications using the **EVM** (Ethereum Virtual Machine).

Basically speaking, Burrow lets you customize and deploy Ethereum-based [**DApps**][7] (written in **Solidity**) in non-public blockchains (the kind developed for use at the Hyperledger foundation). The Burrow and Sawtooth projects teamed up and created **SETH**. The **Sawtooth-Ethereum Integration project** (SETH) is meant to add Ethereum (Solidity) smart contract functionality and compatibility to Sawtooth-based distributed ledger networks. A lesser-known benefit of SETH is that applications (DApps) and smart contracts written for the EVM can now be easily ported to Sawtooth.

### Distinction 4: The ACID principle and the ability to batch process

A rather path-breaking feature of Sawtooth is its ability to batch transactions together and then package them into a block. The blocks and transactions are still subject to the **ACID** principle (**a**tomicity, **c**onsistency, **i**solation, and **d**urability). The implication of these two facts is highlighted using the following example.

Let's say you have **6 transactions** to be packaged into **two blocks (4+2)**. Block A has 4 transactions, each of which needs to succeed in order for the next block of 2 transactions to be timestamped and validated. Assuming they succeed, the next block of 2 transactions is processed, and assuming they too are successful, the entire package of 6 transactions is deemed successful and the overall business logic is deemed successful. For instance, assume you're selling a car. Different transactions at the ends of the buyer (block A) and the seller (block B) will need to be completed in order for the trade to be deemed valid. Ownership is transferred only if both sides are successful in carrying out their individual transactions.
+

Such a feature improves accountability on individual ends by separating responsibilities, and, by the same principle, makes faults and errors easier to recognize. The ACID principle is implemented by coding a custom transaction processor and defining a transaction family that will store data in the said block structure.

### Distinction 5: Parallel transaction execution

Blockchain platforms usually follow a serial, **first-come-first-served** route to executing transactions, with a queueing system to match. Sawtooth provides support for both **serial** and **parallel execution** of transactions. Parallel transaction processing offers significant performance gains by reducing overall transaction latencies. Faster transactions are processed alongside slower and bigger transactions on the platform at the same time, instead of keeping transactions of all types waiting.

The methodology followed to implement the parallel transaction paradigm efficiently takes care of double-spending problems and of errors due to multiple changes being made to the same state, by defining a custom scheduler for the network that can identify processes and their predecessors.

### Real-world use case: Supply chain management using Sawtooth with IoT

The Sawtooth **official website** lists seafood traceability as an example use case for the Sawtooth platform. The basic template is applicable to almost all supply-chain-related use cases.

![][8]

Figure 1 From the Hyperledger Sawtooth Official Website

Traditional supply chain management solutions in this space rely largely on manual record keeping, which leaves room for massive fraud, errors, and significant quality control issues. IoT has been cited as a solution to overcome such issues with supply chains in day-to-day use. Inexpensive, GPS-enabled RFID tags can be attached to fresh catch or produce, as the case may be, and scanned automatically at the individual processing centres. Buyers or middlemen can verify and track the information easily, using a client on their mobile device, to know the route their dinner has taken before arriving on their plates.

While tracking seafood may seem to be a first-world problem in countries like India, the improvement an IoT-enabled system can bring to public distribution systems in developing countries would be a welcome change. The application is available for demo at this **[link][9]**, and users are encouraged to fiddle with it and adapt it to suit their needs.
+

**Reference:**

  * [**The Official Hyperledger Sawtooth Docs**][10]



--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-sawtooth/

作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Hyperledger-Sawtooth-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/
[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[4]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
[5]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
[6]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[7]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/
[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Sawtooth.png
[9]: https://sawtooth.hyperledger.org/examples/seafood.html
[10]: https://sawtooth.hyperledger.org/docs/core/releases/1.0/contents.html
diff --git a/sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md b/sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md
new file mode 100644
index 0000000000..a717d05ed8
--- /dev/null
+++ b/sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md
@@ -0,0 +1,300 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Enable Or Disable SSH Access For A Particular User Or Group In Linux?)
[#]: via: (https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/)
[#]: author: (2daygeek http://www.2daygeek.com/author/2daygeek/)

How To Enable Or Disable SSH Access For A Particular User Or Group In Linux?
======

As per your organization's standard policy, you may need to allow only a specific list of users to access the Linux system.

Or you may need to allow only a few groups to access it.

How do you achieve this? What is the best way? And what is the simplest way?

Yes, there are many ways to accomplish this.

However, we should go with the simple and easy method.

It can be done by making the necessary changes in the `/etc/ssh/sshd_config` file.

In this article, we will show you how to do this in detail.

Why are we doing this? For security reasons. Navigate to the following URL to learn more about **[OpenSSH][1]** usage.

### What Is SSH?

OpenSSH stands for OpenBSD Secure Shell. Secure Shell (SSH) is a free, open source networking tool that allows us to access remote systems over an unsecured network using the SSH protocol.

It uses a client-server architecture and handles user authentication, encryption, file transfers between computers, and tunneling.

These tasks can also be accomplished via traditional tools such as telnet or rcp, but those are insecure and transfer passwords in cleartext while performing any action.

### How To Allow A User To Access SSH In Linux?
+

We can allow/enable SSH access for a particular user or a list of users using the following method.

If you would like to allow more than one user, add the users, separated by spaces, on the same line.

To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we are going to allow SSH access for `user3`.

```
# echo "AllowUsers user3" >> /etc/ssh/sshd_config
```

You can double-check this by running the following command.

```
# cat /etc/ssh/sshd_config | grep -i allowusers
AllowUsers user3
```

That's it. Just restart the SSH service for the change to take effect.

```
# systemctl restart sshd

# service sshd restart
```

Simply open a new terminal or session and try to access the Linux system with a different user. `user2` isn't allowed to log in via SSH and will get an error message as shown below.

```
# ssh [email protected]
[email protected]'s password:
Permission denied, please try again.
```

Output:

```
Mar 29 02:00:35 CentOS7 sshd[4900]: User user2 from 192.168.1.6 not allowed because not listed in AllowUsers
Mar 29 02:00:35 CentOS7 sshd[4900]: input_userauth_request: invalid user user2 [preauth]
Mar 29 02:00:40 CentOS7 unix_chkpwd[4902]: password check failed for user (user2)
Mar 29 02:00:40 CentOS7 sshd[4900]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user2
Mar 29 02:00:43 CentOS7 sshd[4900]: Failed password for invalid user user2 from 192.168.1.6 port 42568 ssh2
```

At the same time, `user3` is allowed to log in to the system because it's in the allowed users list.

```
# ssh [email protected]
[email protected]'s password:
[[email protected] ~]$
```

Output:

```
Mar 29 02:01:13 CentOS7 sshd[4939]: Accepted password for user3 from 192.168.1.6 port 42590 ssh2
Mar 29 02:01:13 CentOS7 sshd[4939]: pam_unix(sshd:session): session opened for user user3 by (uid=0)
```

### How To Deny Users SSH Access In Linux?

We can deny/disable SSH access for a particular user or a list of users using the following method.

If you would like to disable more than one user, add the users, separated by spaces, on the same line.

To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we are going to disable SSH access for `user1`.

```
# echo "DenyUsers user1" >> /etc/ssh/sshd_config
```

You can double-check this by running the following command.

```
# cat /etc/ssh/sshd_config | grep -i denyusers
DenyUsers user1
```

That's it. Just restart the SSH service for the change to take effect.

```
# systemctl restart sshd

# service sshd restart
```

Simply open a new terminal or session and try to access the Linux system with the denied user. `user1` is in the DenyUsers list, so you will get an error message as shown below when you try to log in.

```
# ssh [email protected]
[email protected]'s password:
Permission denied, please try again.
+
```

Output:

```
Mar 29 01:53:42 CentOS7 sshd[4753]: User user1 from 192.168.1.6 not allowed because listed in DenyUsers
Mar 29 01:53:42 CentOS7 sshd[4753]: input_userauth_request: invalid user user1 [preauth]
Mar 29 01:53:46 CentOS7 unix_chkpwd[4755]: password check failed for user (user1)
Mar 29 01:53:46 CentOS7 sshd[4753]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1
Mar 29 01:53:48 CentOS7 sshd[4753]: Failed password for invalid user user1 from 192.168.1.6 port 42522 ssh2
```

### How To Allow Groups To Access SSH In Linux?

We can allow/enable SSH access for a particular group or groups using the following method.

If you would like to allow more than one group, add the groups, separated by spaces, on the same line.

To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we are going to allow SSH access for the `2g-admin` group.

```
# echo "AllowGroups 2g-admin" >> /etc/ssh/sshd_config
```

You can double-check this by running the following command.

```
# cat /etc/ssh/sshd_config | grep -i allowgroups
AllowGroups 2g-admin
```

Run the following command to list the users belonging to this group.

```
# getent group 2g-admin
2g-admin:x:1005:user1,user2,user3
```

That's it. Just restart the SSH service for the change to take effect.

```
# systemctl restart sshd

# service sshd restart
```

`user1` is allowed to log in to the system because it belongs to the `2g-admin` group.

```
# ssh [email protected]
[email protected]'s password:
[[email protected] ~]$
```

Output:

```
Mar 29 02:10:21 CentOS7 sshd[5165]: Accepted password for user1 from 192.168.1.6 port 42640 ssh2
Mar 29 02:10:22 CentOS7 sshd[5165]: pam_unix(sshd:session): session opened for user user1 by (uid=0)
```

Likewise, `user2` is allowed to log in to the system because it also belongs to the `2g-admin` group.

```
# ssh [email protected]
[email protected]'s password:
[[email protected] ~]$
```

Output:

```
Mar 29 02:10:38 CentOS7 sshd[5225]: Accepted password for user2 from 192.168.1.6 port 42642 ssh2
Mar 29 02:10:38 CentOS7 sshd[5225]: pam_unix(sshd:session): session opened for user user2 by (uid=0)
```

When you try to log in to the system with other users who are not part of this group, you will get an error message as shown below.

```
# ssh [email protected]
[email protected]'s password:
Permission denied, please try again.
```

Output:

```
Mar 29 02:12:36 CentOS7 sshd[5306]: User ladmin from 192.168.1.6 not allowed because none of user's groups are listed in AllowGroups
Mar 29 02:12:36 CentOS7 sshd[5306]: input_userauth_request: invalid user ladmin [preauth]
Mar 29 02:12:56 CentOS7 unix_chkpwd[5310]: password check failed for user (ladmin)
Mar 29 02:12:56 CentOS7 sshd[5306]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=ladmin
Mar 29 02:12:58 CentOS7 sshd[5306]: Failed password for invalid user ladmin from 192.168.1.6 port 42674 ssh2
```

### How To Deny A Group Access To SSH In Linux?

We can deny/disable SSH access for a particular group or groups using the following method.

If you would like to disable more than one group, add the groups, separated by spaces, on the same line.

To do so, just append the following value to the `/etc/ssh/sshd_config` file.
+

```
# echo "DenyGroups 2g-admin" >> /etc/ssh/sshd_config
```

You can double-check this by running the following commands.

```
# cat /etc/ssh/sshd_config | grep -i denygroups
DenyGroups 2g-admin

# getent group 2g-admin
2g-admin:x:1005:user1,user2,user3
```

That's it. Just restart the SSH service for the change to take effect.

```
# systemctl restart sshd

# service sshd restart
```

`user1` isn't allowed to log in to the system because it belongs to the `2g-admin` group, which is listed in DenyGroups.

```
# ssh [email protected]
[email protected]'s password:
Permission denied, please try again.
```

Output:

```
Mar 29 02:17:32 CentOS7 sshd[5400]: User user1 from 192.168.1.6 not allowed because a group is listed in DenyGroups
Mar 29 02:17:32 CentOS7 sshd[5400]: input_userauth_request: invalid user user1 [preauth]
Mar 29 02:17:38 CentOS7 unix_chkpwd[5402]: password check failed for user (user1)
Mar 29 02:17:38 CentOS7 sshd[5400]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1
Mar 29 02:17:41 CentOS7 sshd[5400]: Failed password for invalid user user1 from 192.168.1.6 port 42710 ssh2
```

Anyone except members of the `2g-admin` group can log in to the system. Hence, the `ladmin` user is allowed to log in.

```
# ssh [email protected]
[email protected]'s password:
[[email protected] ~]$
```

Output:

```
Mar 29 02:19:13 CentOS7 sshd[5432]: Accepted password for ladmin from 192.168.1.6 port 42716 ssh2
Mar 29 02:19:13 CentOS7 sshd[5432]: pam_unix(sshd:session): session opened for user ladmin by (uid=0)
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/

作者:[2daygeek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.2daygeek.com/author/2daygeek/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/ssh-tutorials/
diff --git a/sources/tech/20190531 Why translation platforms matter.md b/sources/tech/20190531 Why translation platforms matter.md
new file mode 100644
index 0000000000..e513267640
--- /dev/null
+++ b/sources/tech/20190531 Why translation platforms matter.md
@@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why translation platforms matter)
[#]: via: (https://opensource.com/article/19/5/translation-platforms)
[#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/annegentle/users/bcotton)

Why translation platforms matter
======
Technical considerations are not the best way to judge a good
translation platform.
![][1]

Language translation enables open source software to be used by people all over the world, and it's a great way for non-developers to get involved in their favorite projects. There are many [translation tools][2] available that you can evaluate according to how well they handle the main functional areas involved in translation: technical interaction capabilities, teamwork support capabilities, and translation support capabilities.
+

Technical interaction considerations include:

  * Supported file formats
  * Synchronization with the source repository
  * Automation support tools
  * Interface possibilities



Support for teamwork (which could also be called "community animation") includes how a platform:

  * Monitors changes (by a translator, on a project, etc.)
  * Follows up on updates pushed by projects
  * Displays the state of the situation
  * Enables (or not) review and validation steps
  * Assists in discussions between translators (within the same team and across languages) and with project maintainers
  * Supports global communication on the platform (news, etc.)



Translator assistance includes:

  * A clear and ergonomic interface
  * A limited number of steps to find a project and start working
  * A simple way to read the flow between translation and distribution
  * Access to a translation memory
  * Glossary enrichment



There are no major differences, though there are some minor ones, between source code management platforms relating to the first two functional areas. I suspect that the last area pertains mainly to source code. However, the data handled is quite different and users are usually much less technically sophisticated than developers, as well as more numerous.

### My recommendation

In my opinion, the GNOME platform offers the best translation platform, for the following reasons:

  * Its site contains both the team organization and the translation platform. It's easy to see who is responsible and their roles on the team. Everything is concentrated on a few screens.
  * It's easy to find what to work on, and you quickly realize you'll have to download files to your computer and send them back once you modify them. It's not very sexy, but the logic is easy to understand.
  * Once you send a file back, the platform can send an alert to the mailing list so the team knows the next steps, and the translation can be easily discussed at the global level (rather than commenting on specific sentences).
  * It supports 297 languages.
  * It shows clear percentages on progress, both on basic sentences and on advanced menus and documentation.



Coupled with a predictable GNOME release schedule, everything is available for the community to work well, because the tool promotes community work.

If we look at the Debian translation team, which has been doing a good job for years translating an unimaginable amount of content for Debian (especially news), we see there is a highly codified translation process based exclusively on emails, with a manual push to the repositories. This team also puts everything into the process, rather than the tools, and—despite the considerable energy this seems to require—it has worked for many years while being among the leading group of languages.

My perception is that the primary issue for a successful translation platform is not its ability to make the unitary (technical, translation) work happen, but how it structures and supports the translation team's processes. This is what gives sustainability.

The production processes are the most important way to structure a team; by putting them together correctly, it's easy for newcomers to understand how the processes work, adopt them, and explain them to the next group of newcomers.

To build a sustainable community, the first consideration must be a tool that supports collaborative work; usability comes after that.
+

This explains my frustration with the [Zanata][3] tool, which is efficient from a technical and interface standpoint, but poor when it comes to helping to structure a community. Given that translation is a community-driven process (possibly one of the most community-driven processes in open source software development), this is a critical problem for me.

* * *

_This article is adapted from "[What's a good translation platform?][4]" originally published on the Jibec Journal and is reused with permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/translation-platforms

作者:[Jean-Baptiste Holcroft][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jibec/users/annegentle/users/bcotton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel
[2]: https://opensource.com/article/17/6/open-source-localization-tools
[3]: http://zanata.org/
[4]: https://jibecfed.fedorapeople.org/blog-hugo/en/2016/09/whats-a-good-translation-platform/
diff --git a/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md b/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md
new file mode 100644
index 0000000000..8c54e5a6ac
--- /dev/null
+++ b/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md
@@ -0,0 +1,214 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set up virtual environments for Python on MacOS)
[#]: via: (https://opensource.com/article/19/6/virtual-environments-python-macos)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg/users/moshez/users/mbbroberg/users/moshez)

How to set up virtual environments for Python on MacOS
======
Save yourself a lot of confusion by managing your virtual environments
with pyenv and virtualenvwrapper.
![][1]

If you're a Python developer and a MacOS user, one of your first tasks upon getting a new computer is to set up your Python development environment. Here is the best way to do it (although we have written about [other ways to manage Python environments on MacOS][2]).

### Preparation

First, open a terminal and enter **xcode-select --install** at its cold, uncaring prompt. Click to confirm, and you'll be all set with a basic development environment. This step is required on MacOS to set up local development utilities, including "many commonly used tools, utilities, and compilers, including make, GCC, clang, perl, svn, git, size, strip, strings, libtool, cpp, what, and many other useful commands that are usually found in default Linux installations," according to [OS X Daily][3].

Next, install [Homebrew][4] by executing the following Ruby script from the internet:


```
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```

If you, like me, have trust issues with arbitrarily running scripts from the internet, click on the script above and take a longer look to see what it does.
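A slightly more cautious pattern, sketched below, is to download the installer to a local file first, read it, and only then run it (the file name `install.rb` is just an arbitrary local choice):


```
# Save the install script locally instead of executing it straight off the network
$ curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install -o install.rb

# Read it (or at least skim it) before running anything
$ less install.rb

# Run it only once you're satisfied with what it does
$ ruby install.rb
```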
+

Once this is done, congratulations, you have an excellent package management tool in Homebrew. Naively, you might think that you next **brew install python** or something. No, haha. Homebrew will give you a version of Python, but the version you get will be out of your control if you let the tool manage your environment for you. You want [pyenv][5], "a tool for simple Python version management," which can be installed on [many operating systems][6]. Run:


```
$ brew install pyenv
```

You want pyenv to run every time you open your prompt, so include the following in your configuration files (by default on MacOS, this is **.bash_profile** in your home directory):


```
$ cd ~/
$ echo 'eval "$(pyenv init -)"' >> .bash_profile
```

By adding this line, every new terminal will initiate pyenv to manage the **PATH** environment variable in your terminal and insert the version of Python you want to run, as opposed to the first one that shows up in the environment. (For more information, read "[How to set your $PATH variable in Linux][7].") Open a new terminal for the updated **.bash_profile** to take effect.

Before installing your favorite version of Python, you'll want to install a couple of helpful tools:


```
$ brew install zlib sqlite
```

The [zlib][8] compression algorithm and the [SQLite][9] database are dependencies for pyenv and often [cause build problems][10] when not configured correctly. Add these exports to your current terminal window to ensure the installation completes:


```
$ export LDFLAGS="-L/usr/local/opt/zlib/lib -L/usr/local/opt/sqlite/lib"
$ export CPPFLAGS="-I/usr/local/opt/zlib/include -I/usr/local/opt/sqlite/include"
```

Now that the preliminaries are done, it's time to install a version of Python that is fit for a modern person in the modern age:


```
$ pyenv install 3.7.3
```

Go have a cup of coffee. From beans you hand-roast. After you pick them. What I'm saying here is it's going to take some time.

### Adding virtual environments

Once it's finished, it's time to make your virtual environments pleasant to use. Without this next step, you will effectively be sharing one Python development environment for every project you work on. Using virtual environments to isolate dependency management on a per-project basis will give us more certainty and reproducibility than Python offers out of the box. For these reasons, install **virtualenvwrapper** into the Python environment:


```
$ pyenv global 3.7.3
# Be sure to keep the $() syntax in this command so it can evaluate
$ $(pyenv which python3) -m pip install virtualenvwrapper
```

Open your **.bash_profile** again and add the following to be sure it works each time you open a new terminal:


```
# We want to regularly go to our virtual environment directory
$ echo 'export WORKON_HOME=~/.virtualenvs' >> .bash_profile
# If in a given virtual environment, make a virtual environment directory
# If one does not already exist
$ echo 'mkdir -p $WORKON_HOME' >> .bash_profile
# Activate the new virtual environment by calling this script
# Note that $USER will substitute for your current user
$ echo '. ~/.pyenv/versions/3.7.3/bin/virtualenvwrapper.sh' >> .bash_profile
```

Close the terminal and open a new one (or run **exec /bin/bash -l** to refresh the current terminal session), and you'll see **virtualenvwrapper** initializing the environment:


```
$ exec /bin/bash -l
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/premkproject
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postmkproject
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/initialize
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/premkvirtualenv
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postmkvirtualenv
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/prermvirtualenv
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postrmvirtualenv
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/predeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postdeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/preactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/get_env_details
```

From now on, all your work should be in a virtual environment, allowing you to use temporary environments to play around with development safely. With this toolchain, you can set up multiple projects and switch between them, depending upon what you're working on at that moment:


```
$ mkvirtualenv test1
Using base prefix '/Users/moshe/.pyenv/versions/3.7.3'
New python executable in /Users/moshe/.virtualenvs/test1/bin/python3
Also creating executable in /Users/moshe/.virtualenvs/test1/bin/python
Installing setuptools, pip, wheel...
done.
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/predeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/postdeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/preactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/postactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/get_env_details
(test1)$ mkvirtualenv test2
Using base prefix '/Users/moshe/.pyenv/versions/3.7.3'
New python executable in /Users/moshe/.virtualenvs/test2/bin/python3
Also creating executable in /Users/moshe/.virtualenvs/test2/bin/python
Installing setuptools, pip, wheel...
done.
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/predeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/postdeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/preactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/postactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/get_env_details
(test2)$ ls $WORKON_HOME
get_env_details postmkvirtualenv premkvirtualenv
initialize postrmvirtualenv prermvirtualenv
postactivate preactivate test1
postdeactivate predeactivate test2
postmkproject premkproject
(test2)$ workon test1
(test1)$
```

The **deactivate** command exits you from the current environment.

### Recommended practices

You may already set up your long-term projects in a directory like **~/src**.
When you start working on a new project, go into this directory, add a subdirectory for the project, then use the power of Bash interpretation to name the virtual environment based on your directory name. For example, for a project named "pyfun":


```
$ mkdir -p ~/src/pyfun && cd ~/src/pyfun
$ mkvirtualenv $(basename $(pwd))
# we will see the environment initialize
(pyfun)$ workon
pyfun
test1
test2
(pyfun)$ deactivate
$
```

Whenever you want to work on this project, go back to that directory and reconnect to the virtual environment by entering:


```
$ cd ~/src/pyfun
(pyfun)$ workon .
```

Since initializing a virtual environment means taking a point-in-time copy of your Python version and the modules that are loaded, you will occasionally want to refresh the project's virtual environment, as dependencies can change dramatically. You can do this safely by deleting the virtual environment because the source code will remain unscathed:


```
$ cd ~/src/pyfun
$ rmvirtualenv $(basename $(pwd))
$ mkvirtualenv $(basename $(pwd))
```

This method of managing virtual environments with pyenv and virtualenvwrapper will save you from uncertainty about which version of Python you are running as you develop code locally. This is the simplest way to avoid confusion—especially when you're working with a larger team.

If you are just beginning to configure your Python environment, read up on how to use [Python 3 on MacOS][2]. Do you have other beginner or intermediate Python questions? Leave a comment and we will consider them for the next article.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/virtual-environments-python-macos

作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mbbroberg/users/moshez/users/mbbroberg/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_snake_file_box.jpg?itok=UuDVFLX-
[2]: https://opensource.com/article/19/5/python-3-default-macos
[3]: http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x/
[4]: https://brew.sh/
[5]: https://github.com/pyenv/pyenv
[6]: https://github.com/pyenv/pyenv/wiki
[7]: https://opensource.com/article/17/6/set-path-linux
[8]: https://zlib.net/
[9]: https://www.sqlite.org/index.html
[10]: https://github.com/pyenv/pyenv/wiki/common-build-problems#build-failed-error-the-python-zlib-extension-was-not-compiled-missing-the-zlib
diff --git a/sources/tech/20190603 How to stream music with GNOME Internet Radio.md b/sources/tech/20190603 How to stream music with GNOME Internet Radio.md
new file mode 100644
index 0000000000..fc21d82d0b
--- /dev/null
+++ b/sources/tech/20190603 How to stream music with GNOME Internet Radio.md
@@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to stream music with GNOME Internet Radio)
[#]: via: (https://opensource.com/article/19/6/gnome-internet-radio)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/r3bl)

How to stream music with GNOME Internet Radio
======
If you're looking for a simple, straightforward interface that gets your
streams playing, try GNOME's Internet Radio plugin.
+
![video editing dashboard][1]

Internet radio is a great way to listen to stations from all over the world. Like many developers, I like to turn on a station as I code. You can listen to internet radio with a media player for the terminal like [MPlayer][2] or [mpv][3], which is what I use to listen via the Linux command line. However, if you prefer using a graphical user interface (GUI), you may want to try [GNOME Internet Radio][4], a nifty plugin for the GNOME desktop. You can find it in the package manager.

![GNOME Internet Radio plugin][5]

Listening to internet radio with a graphical desktop operating system generally requires you to launch an application such as [Audacious][6] or [Rhythmbox][7]. They have nice interfaces, plenty of options, and cool audio visualizers. But if you want a simple, straightforward interface that gets your streams playing, GNOME Internet Radio is for you.

After installing it, a small icon appears in your toolbar, which is where you do all your configuration and management.

![GNOME Internet Radio icons][8]

The first thing I did was go to the Settings menu. I enabled the following two options: show title notifications and show volume adjustment.

![GNOME Internet Radio Settings][9]

GNOME Internet Radio includes a few pre-configured stations, and it is really easy to add others. Just click the (**+**) sign. You'll need to enter a channel name, which can be anything you prefer (including the station name), and the station's stream address. For example, I like to listen to Synthetic FM, so I enter the name "Synthetic FM" and the station's stream URL.

Then click the star next to the stream to add it to your menu.

However you listen to music and whatever genre you choose, it is obvious—coders need their music! The GNOME Internet Radio plugin makes it simple to get your favorite internet radio station queued up.
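And if you ever find yourself back at a terminal, the same stream address works with the command-line players mentioned above. A minimal sketch with mpv and MPlayer follows; the URL below is a made-up placeholder, so substitute your station's real stream address:

```
# Play a stream directly from the command line (placeholder URL)
$ mpv http://stream.example.com/synthetic-fm

# MPlayer accepts a stream URL the same way
$ mplayer http://stream.example.com/synthetic-fm
```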
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/gnome-internet-radio + +作者:[Alan Formy-Duval][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alanfdoss/users/r3bl +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard) +[2]: https://opensource.com/article/18/12/linux-toy-mplayer +[3]: https://mpv.io/ +[4]: https://extensions.gnome.org/extension/836/internet-radio/ +[5]: https://opensource.com/sites/default/files/uploads/packagemanager_s.png (GNOME Internet Radio plugin) +[6]: https://audacious-media-player.org/ +[7]: https://help.gnome.org/users/rhythmbox/stable/ +[8]: https://opensource.com/sites/default/files/uploads/titlebaricons.png (GNOME Internet Radio icons) +[9]: https://opensource.com/sites/default/files/uploads/gnomeinternetradio_settings.png (GNOME Internet Radio Settings) diff --git a/translated/tech/20170410 Writing a Time Series Database from Scratch.md b/translated/tech/20170410 Writing a Time Series Database from Scratch.md new file mode 100644 index 0000000000..3ebf00a14f --- /dev/null +++ b/translated/tech/20170410 Writing a Time Series Database from Scratch.md @@ -0,0 +1,431 @@ +从零写一个时间序列数据库 +============================================================ + + +我从事监控工作。特别是在 [Prometheus][2] 上,监控系统包含一个自定义的时间序列数据库,并且集成在 [Kubernetes][3] 上。 + +在许多方面上 Kubernetes 展现出了所有 Prometheus 的设计用途。它使得持续部署continuous deployments弹性伸缩auto scaling和其他高动态环境highly dynamic environments下的功能可以轻易地访问。在众多概念上的决策中,查询语句和操作模型使得 Prometheus 特别适合这种环境。但是,如果监控的工作负载动态程度显著地增加,这就会给监控系统本身带来新的压力。记住了这一点,而不是回过头来看 Prometheus 已经解决的很好的问题,我们就可以明确目标去提升它高动态或瞬态服务transient services环境下的表现。 + +Prometheus 的存储层在很长一段时间里都展现出卓越的性能,单一服务器就能够以每秒数百多万个时间序列的速度摄入多达一百万个样本,同时只占用了很少的磁盘空间。尽管当前的存储做的很好,但我依旧提出一个新设计的存储子系统,它更正了现存解决方案的缺点,并具备处理更大规模数据的能力。 + +注释:我没有数据库方面的背景。我说的东西可能是错的并让你误入歧途。你可以在 Freenode 的 #prometheus 频道上提出你的批评(fabxc) + +### 问题,难题,问题域 + +首先,快速地概览一下我们要完成的东西和它的关键难题。我们可以先看一下 Prometheus 当前的做法 ,它为什么做的这么好,以及我们打算用新设计解决哪些问题。 + +### 时间序列数据 + +我们有一个收集一段时间数据的系统。 + +``` +identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), .... 
+``` + +每个数据点是一个时间戳和值的元组。在监控中,时间戳是一个整数,值可以是任意数字。64 位浮点数对于计数器和测量值来说是一个好的表示方法,因此我们将会使用它。一系列严格单调递增的时间戳数据点是一个序列,它由标识符所引用。我们的标识符是一个带有标签维度label dimensions字典的度量名称。标签维度分开了单一指标的测量空间。每一个指标名称加上一个独一无二的标签集就成了它自己的时间序列,它有一个与之关联的数据流value stream。 + +这是一个典型的序列标识符series identifiers 集,它是统计请求指标的一部分: + +``` +requests_total{path="/status", method="GET", instance=”10.0.0.1:80”} +requests_total{path="/status", method="POST", instance=”10.0.0.3:80”} +requests_total{path="/", method="GET", instance=”10.0.0.2:80”} +``` + +让我们简化一下表示方法:度量名称可以当作另一个维度标签,在我们的例子中是 `__name__`。对于查询语句,可以对它进行特殊处理,但与我们存储的方式无关,我们后面也会见到。 + +``` +{__name__="requests_total", path="/status", method="GET", instance=”10.0.0.1:80”} +{__name__="requests_total", path="/status", method="POST", instance=”10.0.0.3:80”} +{__name__="requests_total", path="/", method="GET", instance=”10.0.0.2:80”} +``` + +我们想通过标签来查询时间序列数据。在最简单的情况下,使用 `{__name__="requests_total"}` 选择所有属于 `requests_total` 指标的数据。对于所有选择的序列,我们在给定的时间窗口内获取数据点。 +在更复杂的语句中,我们或许想一次性选择满足多个标签的序列,并且表示比相等条件更复杂的情况。例如,非语句(`method!="GET"`)或正则表达式匹配(`method=~"PUT|POST"`)。 + +这些在很大程度上定义了存储的数据和它的获取方式。 + +### 纵与横 + +在简化的视图中,所有的数据点可以分布在二维平面上。水平维度代表着时间,序列标识符域经纵轴展开。 + +``` +series + ^ + │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"} + │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"} + │ . . . . . . . + │ . . . . . . . . . . . . . . . . . . . ... + │ . . . . . . . . . . . . . . . . . . . . . + │ . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"} + │ . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"} + │ . . . . . . . . . . . . . . + │ . . . . . . . . . . . . . . . . . . . ... + │ . . . . . . . . . . . . . . . . . . . . + v + <-------------------- time ---------------------> +``` + +Prometheus 通过定期地抓取一组时间序列的当前值来获取数据点。我们获取到的实体称为目标。因此,写入模式完全地垂直且高度并发,因为来自每个目标的样本是独立摄入的。这里提供一些测量的规模:单一 Prometheus 实例从成千上万的目标中收集数据点,每个数据点都暴露在成百上千个不同的时间序列中。 + +在每秒采集数百万数据点这种规模下,批量写入是一个不能妥协的性能要求。在磁盘上分散地写入单个数据点会相当地缓慢。因此,我们想要按顺序写入更大的数据块。 +对于旋转式磁盘,它的磁头始终得物理上地向不同的扇区上移动,这是一个不足为奇的事实。而我们都知道 SSD 具有快速随机写入的特点,但事实上它不能修改单独的字节,只能写入一页 4KiB 或更多的数据量。这就意味着写入 16 字节的样本相当于写入满满一个 4Kib 的页。这一行为部分上属于[写入放大][4],这种特性会损耗你的 SSD。因此它不仅影响速度,而且还毫不夸张地在几天或几个周内破坏掉你的硬件。 +关于此问题更深层次的资料,[“Coding for SSDs”系列][5]博客是极好的资源。让我们想想有什么收获:顺序写入和批量写入对于旋转式磁盘和 SSD 来说都是理想的写入模式。大道至简。 + +查询模式比起写入模式千差万别。我们可以查询单一序列的一个数据点,也可以为 10000 个序列查询一个数据点,还可以查询一个序列几个周的数据点,甚至是 10000 个序列几个周的数据点。因此在我们的二维平面上,查询范围不是完全水平或垂直的,而是二者形成矩形似的组合。 +[记录规则][6]减轻了已知查询的问题,但对于点对点ad-hoc查询来说并不是一个通用的解决方法。 + +我们知道自己想要批量地写入,但我们得到的仅仅是一系列垂直数据点的集合。当查询一段时间窗口内的数据点时,我们不仅很难弄清楚在哪才能找到这些单独的点,而且不得不从磁盘上大量随机的地方读取。也许一条查询语句会有数百万的样本,即使在最快的 SSD 上也会很慢。读入也会从磁盘上获取更多的数据而不仅仅是 16 字节的样本。SSD 会加载一整页,HDD 至少会读取整个扇区。不论哪一种,我们都在浪费宝贵的读吞吐量。 +因此在理想上,相同序列的样本将按顺序存储,这样我们就能通过尽可能少的读取来扫描它们。在上层,我们仅需要知道序列的起始位置就能访问所有的数据点。 + +显然,将收集到的数据写入磁盘的理想模式与能够显著提高查询效率的布局之间存在着很强的张力。这是我们 TSDB 需要解决的一个基本问题。 + +#### 当前的解法 + +是时候看一下当前 Prometheus 是如何存储数据来解决这一问题的,让我们称它为“V2”。 +我们创建一个时间序列的文件,它包含所有样本并按顺序存储。因为每几秒附加一个样本数据到所有文件中非常昂贵,我们打包 1Kib 样本序列的数据块在内存中,一旦打包完成就附加这些数据块到单独的文件中。这一方法解决了大部分问题。写入目前是批量的,样本也是按顺序存储的。它还支持非常高效的压缩格式,基于给定的同一序列的样本相对之前的数据仅发生非常小的改变这一特性。Facebook 在他们 Gorilla TSDB 上的论文中描述了一个相似的基于数据块的方法,并且[引入了一种压缩格式][7],它能够减少 16 字节的样本到平均 1.37 字节。V2 存储使用了包含 Gorilla 等的各种压缩格式。 + +``` + ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series A + └──────────┴─────────┴─────────┴─────────┴─────────┘ + ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series B + └──────────┴─────────┴─────────┴─────────┴─────────┘ + . . . 
+ ┌──────────┬─────────┬─────────┬─────────┬─────────┬─────────┐ series XYZ + └──────────┴─────────┴─────────┴─────────┴─────────┴─────────┘ + chunk 1 chunk 2 chunk 3 ... +``` + +尽管基于块存储的方法非常棒,但为每个序列保存一个独立的文件会给 V2 存储带来麻烦,因为: + +* 我们实际上需要比当前收集的时间序列数目使用更多的文件。多出的部分在序列分流Series Churn上。拥有几百万个文件,迟早会使用光文件系统中的 [inodes][1]。这种情况我们只可以通过重新格式化来恢复磁盘,这种方式是最具有破坏性的。我们通常想要避免为了适应一个应用程序而格式化磁盘。 +* 即使是分块写入,每秒也会产生几千万块的数据块并且准备持久化。这依然需要每秒数千个次的磁盘写入量。尽管通过为每个序列打包好多个块来缓解,但反过来还是增加了等待持久化数据的总内存占用。 +* 要保持所有文件的打开状态进行读写是不可行的。特别是因为 99% 的数据在 24 小时之后不再会被查询到。如果它还是被查询到,我们就得打开数千个文件,找到并读取相关的数据点到内存中,然后再关掉。这样做就会引起很高的查询延迟,数据块缓存加剧会导致新的问题,这一点在“资源消耗”一节另作讲述。 +* 最终,旧的数据需要被删除并且数据需要从数百万文件的头部删除。这就意味着删除实际上是高强度的写入操作。此外,循环遍历数百万文件并且进行分析通常会导致这一过程花费数小时。当它完成时,可能又得重新来过。喔天,继续删除旧文件又会进一步导致 SSD 产生写入放大。 +* 目前所积累的数据块仅维持在内存中。如果应用崩溃,数据就会丢失。为了避免这种情况,内存状态会定期的保存在磁盘上,这比我们能接受数据丢失的时间要长的多。恢复检查点也会花费数分钟,导致很长的重启周期。 + +我们能够从现有的设计中学到的关键部分是数据块的概念,这一点会依旧延续。最近一段时间的数据块会保持在内存中也大体上不错。毕竟,最近时间段的数据会大量的查询到。一个时间序列对应一个文件,这种概念是我们想要替换掉的。 + +### 序列分流 + +在 Prometheus 的上下文context中,我们使用术语序列分流series churn来描述不活越的时间序列集合,即不再接收数据点,取而代之的是出现一组新的活跃序列。 +例如,由给定微服务实例产生的所有序列都有一个相对的“instance”标签来标识它的起源。如果我们为微服务执行了滚动更新rolling update,并且为每个实例替换一个新的版本,序列分流便会发生。在更加动态的环境中,这些事情基本上每小时都会发生。像 Kubernetes 这样的集群编排Cluster orchestration系统允许应用连续性的自动伸缩和频繁的滚动更新,这样也许会创建成千上万个新的应用程序实例,并且伴随着全新的时间序列集合,每天都是如此。 + +``` +series + ^ + │ . . . . . . + │ . . . . . . + │ . . . . . . + │ . . . . . . . + │ . . . . . . . + │ . . . . . . . + │ . . . . . . + │ . . . . . . + │ . . . . . + │ . . . . . + │ . . . . . + v + <-------------------- time ---------------------> +``` + +所以即便整个基础设施的规模基本保持不变,过一段时间后数据库内的时间序列还是会成线性增长。尽管 Prometheus 很愿意采集 1000 万个时间序列数据,但要想在 10 亿的序列中找到数据,查询效果还是会受到严重的影响。 + +#### 当前解法 + +当前 Prometheus 的 V2 存储系统对所有保存的序列拥有基于 LevelDB 的索引。它允许查询语句含有给定的标签对label pair,但是缺乏可伸缩的方法来从不同的标签选集中组合查询结果。 +例如,从所有的序列中选择标签 `__name__="requests_total"` 非常高效,但是选择  `instance="A" AND __name__="requests_total"` 就有了可伸缩性的问题。我们稍后会重新考虑导致这一点的原因和能够提升查找延迟的调整方法。 + +事实上正是这个问题才催生出了对更好的存储系统的最初探索。Prometheus 需要为查找亿万的时间序列改进索引方法。 + +### 资源消耗 + +当试图量化 Prometheus (或其他任何事情,真的)时,资源消耗是永恒不变的话题之一。但真正困扰用户的并不是对资源的绝对渴求。事实上,由于给定的需求,Prometheus 管理着令人难以置信的吞吐量。问题更在于面对变化时的相对未知性与不稳定性。由于自身的架构设计,V2 存储系统构建样本数据块相当缓慢,这一点导致内存占用随时间递增。当数据块完成之后,它们可以写到磁盘上并从内存中清除。最终,Prometheus 的内存使用到达平衡状态。直到监测环境发生了改变——每次我们扩展应用或者进行滚动更新,序列分流都会增加内存、CPU、磁盘 IO 的使用。如果变更正在进行,那么它最终还是会到达一个稳定的状态,但比起更加静态的环境,它的资源消耗会显著地提高。过渡时间通常为数个小时,而且难以确定最大资源使用量。 + +为每个时间序列保存一个文件这种方法也使得单一查询很容易崩溃 Prometheus 进程。当查询的数据没有缓存在内存中,查询的序列文件就会被打开,然后将含有相关数据点的数据块读入内存。如果数据量超出内存可用量,Prometheus 就会因 OOM 被杀死而退出。 +在查询语句完成之后,加载的数据便可以被再次释放掉,但通常会缓存更长的时间,以便更快地查询相同的数据。后者看起来是件不错的事情。 + +最后,我们看看之前提到的 SSD 的写入放大,以及 Prometheus 是如何通过批量写入来解决这个问题的。尽管如此,在许多地方还是存在因为拥有太多小批量数据以及在页的边界上未精确对齐的数据而导致的写入放大。对于更大规模的 Prometheus 服务器,现实当中发现会缩减硬件寿命的问题。这一点对于数据库应用的高写入吞吐量来说仍然相当普遍,但我们应该放眼看看是否可以解决它。 + +### 重新开始 + +到目前为止我们对于问题域,V2 存储系统是如何解决它的,以及设计上的问题有了一个清晰的认识。我们也看到了许多很棒的想法,这些或多或少都可以拿来直接使用。V2 存储系统相当数量的问题都可以通过改进和部分的重新设计来解决,但为了好玩(当然,在我仔细的验证想法之后),我决定试着写一个完整的时间序列数据库——从头开始,即向文件系统写入字节。 + +性能与资源使用这种最关键的部分直接导致了存储格式的选取。我们需要为数据找到正确的算法和磁盘布局来实现一个高性能的存储层。 + +这就是我解决问题的捷径——跳过令人头疼,失败的想法,数不尽的草图,泪水与绝望。 + +### V3—宏观设计 + +我们存储系统的宏观布局是什么?简而言之,是当我们在数据文件夹里运行 `tree` 命令时显示的一切。看看它能给我们带来怎样一副惊喜的画面。 + +``` +$ tree ./data +./data +├── b-000001 +│ ├── chunks +│ │ ├── 000001 +│ │ ├── 000002 +│ │ └── 000003 +│ ├── index +│ └── meta.json +├── b-000004 +│ ├── chunks +│ │ └── 000001 +│ ├── index +│ └── meta.json +├── b-000005 +│ ├── chunks +│ │ └── 000001 +│ ├── index +│ └── meta.json +└── b-000006 + ├── meta.json + └── wal + ├── 000001 + ├── 000002 + └── 000003 +``` + +在最顶层,我们有一系列以 `b-` 为前缀编号的block。每个块中显然保存了索引文件和含有更多编号文件的 
`chunk` 文件夹。`chunks` 目录只包含不同序列数据点的原始块raw chunks of data points。与 V2存储系统一样,这使得通过时间窗口读取序列数据非常高效并且允许我们使用相同的有效压缩算法。这一点被证实行之有效,我们也打算沿用。显然,这里并不存在含有单个序列的文件,而是一堆保存着许多序列的数据块。 +`index`文件的存在应不足为奇。让我们假设它拥有黑魔法,可以让我们找到标签、可能的值、整个时间序列和存放数据点的数据块。 + +但为什么这里有好几个文件夹都是索引和块文件的布局?并且为什么存在最后一个包含“wal”文件夹?理解这两个疑问便能解决九成的问题 。 + +#### 许多小型数据库 + +我们分割横轴,即将时间域分割为不重叠的块。每一块扮演者完全独立的数据库,它包含该时间窗口所有的时间序列数据。因此,它拥有自己的索引和一系列块文件。 + +``` + +t0 t1 t2 t3 now + ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ + │ │ │ │ │ │ │ │ ┌────────────┐ + │ │ │ │ │ │ │ mutable │ <─── write ──── ┤ Prometheus │ + │ │ │ │ │ │ │ │ └────────────┘ + └───────────┘ └───────────┘ └───────────┘ └───────────┘ ^ + └──────────────┴───────┬──────┴──────────────┘ │ + │ query + │ │ + merge ─────────────────────────────────────────────────┘ +``` + +每一块的数据都是不可变的immutable。当然,当我们采集新数据时,我们必须能向最近的块中添加新的序列和样本。对于该数据块,所有新的数据都将写入内存中的数据库中,它与我们的持久化的数据块一样提供了查找属性。内存中的数据结构可以高效地更新。为了防止数据丢失,所有预传入的数据同样被写入临时的预写日志write ahead log中,这就是 `wal` 文件夹中的一些列文件,我们可以在重新启动时通过它们加载内存数据库。 +所有这些文件都带有序列化格式,有我们所期望的所有东西:许多标志,偏移量,变体和 CRC32 校验。纸上得来终觉浅,绝知此事要躬行。 + +这种布局允许我们扩展查询范围到所有相关的块上。每个块上的部分结果最终合并成完整的结果。 + +这种横向分割增加了一些很棒的功能: + +* 当查询一个时间范围,我们可以简单地忽略所有范围之外的数据块。通过减少需要检查的一系列数据,它可以初步解决序列分流的问题。 +* 当完成一个块,我们可以通过顺序的写入大文件从内存数据库中保存数据。这样可以避免任何的写入放大,并且 SSD 与 HDD 均适用。 +* 我们延续了 V2 存储系统的一个好的特性,最近使用而被多次查询的数据块,总是保留在内存中。 +* 足够好了,我们也不再限定 1KiB 的数据块尺寸来使数据在磁盘上更好地对齐。我们可以挑选对单个数据点和压缩格式最合理的尺寸。 +* 删除旧数据变得极为简单快捷。我们仅仅只需删除一个文件夹。记住,在旧的存储系统中我们不得不花数个小时分析并重写数亿个文件。 + +每个块还包含了 `meta.json` 文件。它简单地保存了关于块的存储状态和包含的数据以供人们简单的阅读。 + +##### mmap + +将数百万个小文件合并为一个大文件使得我们用很小的开销就能保持所有的文件都打开。这就引出了 [`mmap(2)`][8] 的使用,一个允许我们通过文件透明地回传虚拟内存的系统调用。为了简便,你也许想到了交换空间swap space,只是我们所有的数据已经保存在了磁盘上,并且当数据换出内存后不再会发生写入。 + +这意味着我们可以当作所有数据库的内容都保留在内存中却不占用任何物理内存。仅当我们访问数据库文件确定的字节范围时,操作系统从磁盘上惰性加载lazy loads页数据。这使得我们将所有数据持久化相关的内存管理都交给了操作系统。大体上,操作系统已足够资格作出决定,因为它拥有整个机器和进程的视图。查询的数据可以相当积极的缓存进内存,但内存压力会使得页被逐出。如果机器拥有未使用的内存,Prometheus 目前将会高兴地缓存整个数据库,但是一旦其他进程需要,它就会立刻返回。 +因此,查询不再轻易地使我们的进程 OOM,因为查询的是更多的持久化的数据而不是装入内存中的数据。内存缓存大小变得完全自适应,并且仅当查询真正需要时数据才会被加载。 + +就个人理解,如果磁盘格式允许,这就是当今大多数数据库的理想工作方式——除非有人自信的在进程中智胜操作系统。我们做了很少的工作但确实从外面获得了很多功能。 + +#### 压缩 + +存储系统需要定期的“切”出新块并写入之前完成的块到磁盘中。仅在块成功的持久化之后,写之前用来恢复内存块的日志文件(wal)才会被删除。 +我们很乐意将每个块的保存时间设置的相对短一些(通常配置为 2 小时)以避免内存中积累太多的数据。当查询多个块,我们必须合并它们的结果为一个完成的结果。合并过程显然会消耗资源,一个周的查询不应该由 80 多个部分结果所合并。 + +为了实现两者,我们引入压缩compaction。压缩描述了一个过程:取一个或更多个数据块并将其写入一个可能更大的块中。它也可以在此过程中修改现有的数据。例如,清除已经删除的数据,或为提升查询性能重建样本块。 + +``` + +t0 t1 t2 t3 t4 now + ┌────────────┐ ┌──────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ + │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 mutable │ before + └────────────┘ └──────────┘ └───────────┘ └───────────┘ └───────────┘ + ┌─────────────────────────────────────────┐ ┌───────────┐ ┌───────────┐ + │ 1 compacted │ │ 4 │ │ 5 mutable │ after (option A) + └─────────────────────────────────────────┘ └───────────┘ └───────────┘ + ┌──────────────────────────┐ ┌──────────────────────────┐ ┌───────────┐ + │ 1 compacted │ │ 3 compacted │ │ 5 mutable │ after (option B) + └──────────────────────────┘ └──────────────────────────┘ └───────────┘ +``` + +在这个例子中我们有一系列块`[1,2,3,4]`。块 1,2 ,3 可以压缩在一起,新的布局将会是 `[1,4]`。或者,将它们成对压缩为 `[1,3]`。所有的时间序列数据仍然存在,但现在整体上保存在更少的块中。这极大程度地缩减了查询时间的消耗,因为需要合并的部分查询结果变得更少了。 + +#### 保留 + +我们看到了删除旧的数据在 V2 存储系统中是一个缓慢的过程,并且消耗 CPU、内存和磁盘。如何才能在我们基于块的设计上清除旧的数据?相当简单,只要根据块文件夹下的配置的保留窗口里有无数据而删除该文件夹。在下面的例子中,块 1 可以被安全地删除,而块 2 则必须一直保持到界限后面。 + +``` + | + ┌────────────┐ ┌────┼─────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ + │ 1 │ │ 2 | │ │ 3 │ │ 4 │ │ 5 │ . . . 
+ └────────────┘ └────┼─────┘ └───────────┘ └───────────┘ └───────────┘ + | + | + retention boundary +``` + +得到越旧的数据,保存的块也就越大,因为我们会压缩之前的压缩块。因此必须为其设置一个上限,以防数据块扩展到整个数据库而损失我们设计的最初优势。 +方便的是,这一点也限制了部分存在于保留窗口内部分存在于保留窗口外的总磁盘块的消耗。例如上面例子中的块 2。当设置了最大块尺寸为总保留窗口的 10% 后,我们保留块 2 的总开销也有了 10% 的上限。 + +总结一下,保留与删除从非常昂贵到了几乎没有成本。 + +> 如果你读到这里并有一些数据库的背景知识,现在你也许会问:这些都是最新的技术吗?——并不是。而且可能还会做的更好。 + +> 在内存中打包数据,定期的写入日志并刷新磁盘的模式在现在相当普遍。 +> 我们看到的好处无论在什么领域的数据里都是适用的。遵循这一方法最著名的开源案例是 LevelDB,Cassandra,InfluxDB 和 HBase。关键是避免重复发明劣质的轮子,采用经得起验证的方法,并正确地运用它们。 +> 这里仍有地方来添加你自己的黑魔法。 + +### 索引 + +研究存储改进的最初想法是解决序列分流的问题。基于块的布局减少了查询所要考虑的序列总数。因此假设我们索引查找的复杂度是 `O(n^2)`,我们就要设法减少 n 个相当数量的复杂度,之后就有了改进后 `O(n^2)` 的复杂度。——恩,等等...糟糕。 +快速地想想“算法 101”课上提醒我们的,在理论上它并未带来任何好处。如果之前就很糟糕,那么现在也一样。理论是如此的残酷。 + +实际上,我们大多数的查询已经可以相当快地被相应。但是,跨越整个时间范围的查询仍然很慢,尽管只需要找到少部分数据。追溯到所有这些工作之前,最初我用来解决这个问题的想法是:我们需要一个更大容量的[倒排索引][9]。倒排索引基于数据项内容的子集提供了一种快速的查找方式。简单地说,我可以通过标签 `app=”nginx"` 查找所有的序列而无需遍历每个文件来看它是否包含该标签。 + +为此,每个序列被赋上一个唯一的 ID 来在常数时间内获取,例如 O(1)。在这个例子中 ID 就是 我们的正向索引。 + +> 示例:如果 ID 为 10,29 ,9 的序列包含标签 `app="nginx"`,那么 “nginx”的倒排索引就是简单的列表 `[10, 29, 9]`,它就能用来快速地获取所有包含标签的序列。即使有 200 多亿个数据也不会影响查找速度。 + +简而言之,如果 n 是我们序列总数,m 是给定查询结果的大小,使用索引的查询复杂度现在就是 O(m)。查询语句跟随它获取数据的数量 m 而不是被搜索的数据体 n 所扩展是一个很好的特性,因为 m 一般相当小。 +为了简单起见,我们假设可以在常数时间内查找到倒排索引对应的列表。 + +实际上,这几乎就是 V2 存储系统已有的倒排索引,也是提供在数百万序列中查询性能的最低需求。敏锐的人会注意到,在最坏情况下,所有的序列都含有标签,因此 m 又成了 O(n)。这一点在预料之中也相当合理。如果你查询所有的数据,它自然就会花费更多时间。一旦我们牵扯上了更复杂的查询语句就会有问题出现。 + +#### 标签组合 + +数百万个带有标签的数据很常见。假设横向扩展着数百个实例的“foo”微服务,并且每个实例拥有数千个序列。每个序列都会带有标签`app="foo"`。当然,用户通常不会查询所有的序列而是会通过进一步的标签来限制查询。例如,我想知道服务实例接收到了多少请求,那么查询语句便是 `__name__="requests_total" AND app="foo"`。 + +为了找到适应所有标签选择子的序列,我们得到每一个标签的倒排索引列表并取其交集。结果集通常会比任何一个输入列表小一个数量级。因为每个输入列表最坏情况下的尺寸为 O(n),所以在嵌套地为每个列表进行暴力求解brute force solution下,运行时间为 O(n^2)。与其他的集合操作耗费相同,例如取并集 (`app="foo" OR app="bar"`)。当添加更多标签选择子在查询语句上,耗费就会指数增长到 O(n^3), O(n^4), O(n^5), ... O(n^k)。有很多手段都能通过改变执行顺序优化运行效率。越复杂,越是需要关于数据特征和标签之间相关性的知识。这引入了大量的复杂度,但是并没有减少算法的最坏运行时间。 + +这便是 V2 存储系统使用的基本方法,幸运的是,似乎稍微的改动就能获得很大的提升。如果我们假设倒排索引中的 ID 都是排序好的会怎么样? + +假设这个例子的列表用于我们最初的查询: + +``` +__name__="requests_total" -> [ 9999, 1000, 1001, 2000000, 2000001, 2000002, 2000003 ] + app="foo" -> [ 1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002 ] + + intersection => [ 1000, 1001 ] +``` + +它的交集相当小。我们可以为每个列表的起始位置设置游标,每次从最小的游标处移动来找到交集。当二者的数字相等,我们就添加它到结果中并移动二者的游标。总体上,我们以锯齿形扫描两个列表,因此总耗费是 O(2n)=O(n),因为我们总是在一个列表上移动。 + +两个以上列表的不同集合操作也类似。因此 k 个集合操作仅仅改变了因子 O(k*n) 而不是最坏查找运行时间下的指数 O(n^k)。 +我在这里所描述的是任意一个[全文搜索引擎][10]使用的标准搜索索引的简化版本。每个序列描述符都视作一个简短的“文档”,每个标签(名称 + 固定值)作为其中的“单词”。我们可以忽略搜索引擎索引中很多附加的数据,例如单词位置和和频率。 +似乎存在着无止境的研究来提升实际的运行时间,通常都是对输入数据做一些假设。不出意料的是,仍有大量技术来压缩倒排索引,其中各有利弊。因为我们的“文档”比较小,而且“单词”在所有的序列里大量重复,压缩变得几乎无关紧要。例如,一个真实的数据集约有 440 万个序列与大约 12 个标签,每个标签拥有少于 5000 个单独的标签。对于最初的存储版本,我们坚持基本的方法不使用压缩,仅做微小的调整来跳过大范围非交叉的 ID。 + +尽管维持排序好的 ID 听起来很简单,但实践过程中不是总能完成的。例如,V2 存储系统为新的序列赋上一个哈希值来当作 ID,我们就不能轻易地排序倒排索引。另一个艰巨的任务是当磁盘上的数据被更新或删除掉后修改其索引。通常,最简单的方法是重新计算并写入,但是要保证数据库在此期间可查询且具有一致性。V3 存储系统通过每块上独立的不可变索引来解决这一问题,仅通过压缩时的重写来进行修改。只有可变块上的索引需要被更新,它完全保存在内存中。 + +### 基准测试 + +我发起了一个最初版本的基准测试,它基于现实世界数据集中提取的大约 440 万个序列描述符,并生成合成数据点对应到这些序列中。这个方法仅仅测试单独的存储系统,快速的找到高并发负载场景下的运行瓶颈和触发死锁至关重要。 + +在概念性的运用完成之后,基准测试能够在我的 Macbook Pro 上维持每秒 2000 万的吞吐量—并且所有 Chrome 的页面和 Slack 都保持着运行。因此,尽管这听起来都很棒,它这也表明推动这项测试没有的进一步价值。(或者是没有在高随机环境下运行)。毕竟,它是合成的数据,因此在除了好的第一印象外没有多大价值。比起最初的设计目标高出 20 倍,是时候将它部署到真正的 Prometheus 服务器上了,为它添加更多现实环境中的开销和场景。 + +我们实际上没有可重复的 Prometheus 基准测试配置,特别是对于不同版本的 A/B 测试。亡羊补牢为时不晚,[现在就有一个了][11]! 

工具可以让我们声明性地定义基准测试场景,然后部署到 AWS 的 Kubernetes 集群上。尽管对于全面的基准测试来说不是最好环境,但它肯定比 64 核 128GB 内存的专用裸机服务器bare metal servers更能反映出用户基础。我们部署两个 Prometheus 1.5.2 服务器(V2 存储系统)和两个从 2.0 分支继续开发的 Prometheus (V3 存储系统) 。每个 Prometheus 运行在配备 SSD 的专用服务器上。我们将横向扩展的应用部署在了工作节点上,并且让其暴露典型的微服务量。此外,Kubernetes 集群本身和节点也被监控着。整个配置由另一个 Meta-Prometheus 所监督,它监控每个 Prometheus 的健康状况和性能。为了模拟序列分流,微服务定期的扩展和收缩来移除旧的 pods 并衍生新的 pods,生成新的序列。查询负载通过典型的查询选择来模拟,对每个 Prometheus 版本都执行一次。

总体上,伸缩与查询的负载和采样频率一样极大的超出了 Prometheus 的生产部署。例如,我们每隔 15 分钟换出 60% 的微服务实例去产生序列分流。在现代的基础设施上,一天仅大约会发生 1-5 次。这就保证了我们的 V3 设计足以处理未来几年的工作量。就结果而言,Prometheus 1.5.2 和 2.0 之间的性能差异在不温和的环境下会变得更大。
总而言之,我们每秒从 850 个暴露 50 万数据的目标里收集了大约 11 万份样本。

在此配置运行一段时间之后,我们可以看一下数字。我们评估了两个版本在 12 个小时之后到达稳定时的几个指标。

> 请注意从 Prometheus 图形界面的截图中轻微截断的 Y 轴

 ![Heap usage GB](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/heap_usage.png)
> 堆内存使用(GB)

内存资源使用对用户来说是最为困扰的问题,因为它相对的不可预测且能够导致进程崩溃。
显然,被查询的服务器正在消耗内存,这极大程度上归咎于查询引擎的开销,这一点可以当作以后优化的主题。总的来说,Prometheus 2.0 的内存消耗减少了 3-4 倍。大约 6 小时之后,在 Prometheus 1.5 上有一个明显的峰值,与我们设置 6 小时的保留边界相对应。因为删除操作成本非常高,所以资源消耗急剧提升。这一点在下面几张图中均有体现。

 ![CPU usage cores](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/cpu_usage.png)
> CPU 使用(核心/秒)

类似的模式展示 CPU 使用,但是查询的服务器与非查询的服务器之间的差异尤为明显。每秒获取大约 11 万个数据需要 0.5 核心/秒的 CPU 资源,比起评估查询所花费的时间,我们新的存储系统 CPU 消耗可忽略不计。

 ![Disk writes](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_writes.png)
> 磁盘写入(MB/秒)

图片展示出的磁盘利用率取得了令人意想不到的提升。这就清楚的展示了为什么 Prometheus 1.5 很容易造成 SSD 损耗。我们看到最初的上升发生在第一个块被持久化到序列文件中的时期,然后一旦删除操作引发了重写就会带来第二个上升。令人惊讶的是,查询的服务器与非查询的服务器显示出了非常不同的利用率。
Prometheus 2.0 在另一方面,每秒仅仅写入大约一兆字节的日志文件。当块压缩到磁盘之时,写入定期地出现峰值。这在总体上节省了:惊人的 97-99%。

 ![Disk usage](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_usage.png)
> 磁盘大小(GB)

与磁盘写入密切相关的是总磁盘空间占用量。由于我们对样本几乎使用了相同的压缩算法,因此磁盘占用量应当相同。在更为稳定的配置中,这样做很大程度上是正确地,但是因为我们需要处理高序列分流,所以还要考虑每个序列的开销。
如我们所见,Prometheus 1.5 在两个版本达到稳定状态之前,使用的存储空间因保留操作而急速上升。Prometheus 2.0 似乎在每个序列上具有更少的消耗。我们可以清楚的看到写入日志线性地充满整个存储空间,然后当压缩完成后立刻掉下来。事实上对于两个 Prometheus 2.0 服务器,它们的曲线并不是完全匹配的,这一点需要进一步的调查。

前景大好。剩下最重要的部分是查询延迟。新的索引应当优化了查找的复杂度。没有实质上发生改变的是处理数据的过程,例如 `rate()` 函数或聚合。这些就是查询引擎要做的东西了。

 ![Query latency](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/query_latency.png)
> 第 99 个百分位查询延迟(秒)

数据完全符合预期。在 Prometheus 1.5 上,查询延迟随着存储的数据而增加。只有在保留操作开始且旧的序列被删除后才会趋于稳定。作为对比,Prometheus 从一开始就保持在合适的位置。
我们需要花一些心思在数据是如何被采集上,对服务器发出的查询请求通过估计以下方面被选中:查询范围和即时查询的组合,进行或轻或重的计算,访问或多或少的文件。它并不需要代表真实世界里查询的分布。也不能代表冷数据的查询性能,我们可以假设所有的样本数据都是保存在内存中的热数据。
尽管如此,我们可以相当自信地说,整体查询效果对序列分流变得非常有弹性,并且提升了高压基准测试场景下 4 倍的性能。在更为静态的环境下,我们可以假设查询时间大多数花费在了查询引擎上,改善程度明显较低。

 ![Ingestion rate](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/ingestion_rate.png)
> 摄入的样本/秒

最后,快速地看一下不同 Prometheus 服务器的摄入率。我们可以看到搭载 V3 存储系统的两个服务器具有相同的摄入速率。在几个小时之后变得不稳定,这是因为不同的基准测试集群节点由于高负载变得无响应,与 Prometheus 实例无关。(两点之前的曲线完全匹配这一事实希望足够具有说服力)
尽管还有更多 CPU 和内存资源,两个 Prometheus 1.5.2 服务器的摄入率大大降低。序列分流高压导致了无法采集更多的数据。

那么现在每秒可以摄入的绝对最大absolute maximum样本数是多少?
+ +我不知道——而且故意忽略。 + +存在的很多因素都会影响 Prometheus 数据流量,而且没有一个单独的数字能够描述捕获质量。最大摄入率在历史上是一个导致基准出现偏差的度量量,并且忽视了更多重要的层面,例如查询性能和对序列分流的弹性。关于资源使用线性增长的大致猜想通过一些基本的测试被证实。很容易推断出其中的原因。 + +我们的基准测试模拟了高动态环境下 Prometheus 的压力,它比起真实世界中的更大。结果表明,虽然运行在没有优化的云服务器上,但是已经超出了预期的效果。 + +> 注意:在撰写本文的同时,Prometheus 1.6 正在开发当中,它允许更可靠地配置最大内存使用量,并且可能会显著地减少整体的消耗,提高 CPU 使用率。我没有重复进行测试,因为整体结果变化不大,尤其是面对高序列分流的情况。 + +### 总结 + +Prometheus 开始应对高基数序列与单独样本的吞吐量。这仍然是一项富有挑战性的任务,但是新的存储系统似乎向我们展示了未来的一些好东西:超大规模hyper-scale高收敛度hyper-convergent,GIFEE 基础设施。好吧,它似乎运行的不错。 + +第一个配备 V3 存储系统的 [alpha 版本 Prometheus 2.0][12] 已经可以用来测试了。在早期阶段预计还会出现崩溃,死锁和其他 bug。 + +存储系统的代码可以在[这个单独的项目中找到][13]。Prometheus 对于寻找高效本地存储时间序列数据库的应用来说可能非常有用,之一点令人非常惊讶。 + +> 这里需要感谢很多人作出的贡献,以下排名不分先后: + +> Bjoern Rabenstein 和 Julius Volz 在 V2 存储引擎上的打磨工作以及 V3 存储系统的反馈,这为新一代的设计奠定了基础。 + +> Wilhelm Bierbaum 对新设计不断的建议与见解作出了很大的贡献。Brian Brazil 不断的反馈确保了我们最终得到的是语义上合理的方法。与 Peter Bourgon 深刻的讨论验证了设计并形成了这篇文章。 + +> 别忘了我们整个 CoreOS 团队与公司对于这项工作的赞助与支持。感谢所有那些听我一遍遍唠叨 SSD,浮点数,序列化格式的同学。 + + +-------------------------------------------------------------------------------- + +via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/ + +作者:[Fabian Reinartz ][a] +译者:[译者ID](https://github.com/LuuMing) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://twitter.com/fabxc +[1]:https://en.wikipedia.org/wiki/Inode +[2]:https://prometheus.io/ +[3]:https://kubernetes.io/ +[4]:https://en.wikipedia.org/wiki/Write_amplification +[5]:http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/ +[6]:https://prometheus.io/docs/practices/rules/ +[7]:http://www.vldb.org/pvldb/vol8/p1816-teller.pdf +[8]:https://en.wikipedia.org/wiki/Mmap +[9]:https://en.wikipedia.org/wiki/Inverted_index +[10]:https://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices +[11]:https://github.com/prometheus/prombench +[12]:https://prometheus.io/blog/2017/04/10/promehteus-20-sneak-peak/ +[13]:https://github.com/prometheus/tsdb diff --git a/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md b/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md deleted file mode 100644 index 1ac86027b9..0000000000 --- a/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md +++ /dev/null @@ -1,83 +0,0 @@ -# Adobe Lightroom 的三个开源替代 - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camera-photography-film.jpg?itok=oe2ixyu6) - -如今智能手机的摄像功能已经完备到多数人认为可以代替传统摄影了。虽然这在傻瓜相机的市场中是个事实,但是对于许多摄影爱好者和专业摄影师看来,一个高端单反相机所能带来的照片景深,清晰度以及真实质感是无法和口袋中的智能手机相比的。 - -所有的这些功能在便利性上仅有很小的代价;就像传统的胶片相机中的反色负片,单反照相得到的RAW格式文件必须预先处理才能印刷或编辑;因此对于单反相机,照片的后期处理是无可替代的,因此Adobe Lightroom便无可替代。但是Adobe Lightroom的昂贵价格,月付的订阅费用以及专有许可证都使更多人开始关注其开源替代的软件。 - -Lightroom 有两大主要功能:处理 RAW 格式的图片文件,以及数字资产管理系统(DAM:Digital Asset Management) —— 通过标签,评星以及其他的元数据信息来简单清晰地整理照片。 - -在这篇文章中,我们将介绍三个开源的图片处理软件:Darktable,LightZone 以及 RawTherapee。所有的软件都有 DAM 系统,但没有任何一个有 Lightroom 基于机器学习的图像分类和标签功能。如果你想要知道更多关于 开源的 DAM 系统的软件,可以看 Terry Hacock 的文章:"[开源项目的 DAM 管理][2]“,他分享了他在自己的 [_Lunatics!_][3] 电影项目研究过的开源多媒体软件。 - -### Darktable - -![Darktable][4] - -类似其他两个软件,darktable 可以处理RAW 格式的图像并将他们转换成可用的文件格式—— JPEG,PNG,TIFF, PPM, PFM 和 EXR,它同时支持Google 和 Facebook 的在线相册,上传至Flikr,通过邮件附件发送以及创建在线相册。 - -它有 61 个图像处理模块,可以调整图像的对比度、色调、明暗、色彩、噪点;添加水印;切割以及旋转;等等。如同另外两个软件一样,不论你做出多少次修改,这些修改都是”无损的“ —— 你的初始 RAW 图像文件始终会被保存。 - -Darktable 可以从 400 多种相机型号中直接导入照片,以及有 JPEG,CR2,DNG ,OpenEXR和PFM等格式的支持。图像在一个数据库中显示,因此你可以轻易地filter并查询这些元数据,包括了文字标签,评星以及颜色标签。软件同时支持21种语言,支持 
Linux,MacOS,BSD,Solaris 11/GNOME 以及 Windows (Windows 版本是最新发布的,Darktable 声明它比起其他版本可能还有一些不完备之处,有一些未实现的功能) - -Darktable 在开源证书 [GPLv3][7] 下被公开,你可以了解更多它的 [特性][8],查阅它的 [用户手册][9],或者直接去 Github 上看[源代码][10] 。 - -### LightZone - -![LightZone's tool stack][11] - - [LightZone][12] 和其他两个软件类似同样是无损的 RAW 格式图像处理工具:它跨平台,有 Windows,MacOS 和 Linux 版本,除 RAW 格式之外,它还支持 JPG 和 TIFF 格式的图像处理。接下来说LightZone 其他的特性。 - -这个软件最初是一个在专有许可证下的图像处理软件,后来在 BSD 证书下开源。以及,在你下载这个软件之前,你必须注册一个免费账号。因此 LightZone的 开发团队可以跟踪软件的下载数量以及建立相关社区。(许可很快,而且是自动的,因此这不是一个很大的使用障碍。) - -除此之外的一个特性是这个软件的图像处理通常是通过很多可组合的工具实现的,而不是叠加滤镜(就像大多数图像处理软件),这些工具组可以被重新编排以及移除,以及被保存并且复制到另一些图像。如果想要编辑图片的一部分,你还可以通过矢量工具或者根据色彩和亮度来选择像素。 - -想要了解更多,见 LightZone 的[论坛][13] 或者查看Github上的 [源代码][14]。 - -### RawTherapee - -![RawTherapee][15] - -[RawTherapee][16] 是另一个开源([GPL][17])的RAW图像处理器。就像 Darktable 和 LightZone,它是跨平台的(支持 Windows,MacOS 和 Linux),一切修改都在无损条件下进行,因此不论你叠加多少滤镜做出多少改变,你都可以回到你最初的 RAW 文件。 - -RawTherapee 采用的是一个面板式的界面,包括一个历史记录面板来跟踪你做出的修改方便随时回到先前的图像;一个快照面板可以让你同时处理一张照片的不同版本;一个可滚动的工具面板方便准确选择工具。这些工具包括了一系列的调整曝光、色彩、细节、图像变换以及去马赛克功能。 - -这个软件可以从多数相机直接导入 RAW 文件,并且支持超过25种语言得以广泛使用。批量处理以及 [SSE][18] 优化这类功能也进一步提高了图像处理的速度以及 CPU 性能。 - -RawTherapee 还提供了很多其他 [功能][19];可以查看它的 [官方文档][20] 以及 [源代码][21] 了解更多细节。 - -你是否在摄影中使用另一个开源的 RAW图像处理工具?有任何建议和推荐都可以在评论中分享。 - ------- - -via: https://opensource.com/alternatives/adobe-lightroom - -作者:[Opensource.com][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[scoutydren](https://github.com/scoutydren) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com -[1]: https://en.wikipedia.org/wiki/Raw_image_format -[2]: https://opensource.com/article/18/3/movie-open-source-software -[3]: http://lunatics.tv/ -[4]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_darkroom1.jpg?itok=0fjk37tC "Darktable" -[5]: http://www.darktable.org/ -[6]: https://www.darktable.org/about/faq/#faq-windows -[7]: https://github.com/darktable-org/darktable/blob/master/LICENSE -[8]: https://www.darktable.org/about/features/ -[9]: https://www.darktable.org/resources/ -[10]: https://github.com/darktable-org/darktable -[11]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ "LightZone's tool stack" -[12]: http://www.lightzoneproject.org/ -[13]: http://www.lightzoneproject.org/Forum -[14]: https://github.com/ktgw0316/LightZone -[15]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_rawtherapee.jpg?itok=meiuLxPw "RawTherapee" -[16]: http://rawtherapee.com/ -[17]: https://github.com/Beep6581/RawTherapee/blob/dev/LICENSE.txt -[18]: https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions -[19]: http://rawpedia.rawtherapee.com/Features -[20]: http://rawpedia.rawtherapee.com/Main_Page -[21]: https://github.com/Beep6581/RawTherapee diff --git a/translated/tech/20180629 100 Best Ubuntu Apps.md b/translated/tech/20180629 100 Best Ubuntu Apps.md new file mode 100644 index 0000000000..c82f5cc099 --- /dev/null +++ b/translated/tech/20180629 100 Best Ubuntu Apps.md @@ -0,0 +1,1195 @@ +warmfrog translating + +100 Best Ubuntu Apps +====== + +今年早些时候我们发布了一个 [2018 年最好的 20 个 Ubuntu 应用][1]列表,可能对很多用户来说都很有用。现在我们几乎到 2018 年下半年了,所以今天我们打算看一下 Ubuntu 上最好的 100 个应用,你可能会觉得有帮助。 + +![100 Best Ubuntu Apps][2] + +很多用户最近从 Microsoft Windows 转换到了 Ubuntu,可能面临着这样一个困境:寻找它们之前使用了数年的操作系统上的应用软件的最好替代应用。Ubuntu 
拥有上千个免费使用和开源应用软件比 Windows 和其它 OS 上的付费软件运行的更好。 + +下列列表归纳了各种分类下很多应用软件的功能特点,因此,你可以找到匹配你的需求的最好的应用。 + +### **1\. Google Chrome 浏览器** + +几乎所有 Linux 发行版默认安装了 Mozilla Firefox 网络浏览器,并且它是 Google Chrome 的强力竞争对手。但是 Chrome 相对 Firefox 有它自己的优点,给了你 Google 账户的入口,你可以通过它来同步你的书签,浏览历史,扩展。例如从其它操作系统和手机的 Chrome 浏览器同步。 + +![Chrome][3] + +Google Chrome 为 Linux 集成了最新的 Flash 播放器,其它 Linux 上的浏览器像 Mozilla Firefox 和 Opera 网络浏览器则不是这样。如果你在 Windows 上经常使用 Chrome,那么在 Linux 上也用它是最好的选择。 + +### 2\. **Steam** + +现在在 Linux 上玩游戏已经不是问题了,这在很多年前还是一个遥不可及的梦。在 2013 年,Valve 发布了 Linux 上的 Steam 游戏客户端,此后一切都变了。早期用户犹豫着从 Windows 转到 Linux,只是因为它们不能在 Ubuntu 上玩它们最喜欢的游戏,但是现在已经不是这样了。 + +![Steam][4] + +一些用户可能发现在 Linux 上安装 Steam 有点棘手,但如果能在 Linux 上玩上千的 Steam 游戏时这么做就是值得的。一些流行的像 Counter Strike 这样的高端游戏:Global Offensive, Hitman, Dota 2 在 Linux 上都能获取,你只需要确保你满足玩这些游戏的最小硬件需求。 + +``` +$ sudo add-apt-repository multiverse + +$ sudo apt-get update + +$ sudo apt-get install steam +``` + +### **3\. WordPress 桌面客户端** + +是的,没错,WordPress 有它专有的 Ubuntu 平台的客户端,你可以用来管理你的 WordPress 站点。你同样可以在桌面客户端上单独写和设计桌面客户端而不用转到浏览器。 + +![][5] + +如果你有 WordPress 支持的站点,那么这个桌面客户端能够让你在单个窗口内追踪所有的 WordPress 通知。你同样可以检查站点的博客性能状态。桌面客户端可以在 Ubuntu 软件中心中获取,你可以在那里下载和安装。 + +### **4\. VLC 媒体播放器** + +VLC 是一个非常流行的跨平台的开源媒体播放器,同样在 Ubuntu 中可以获取。使得 VLC 成为一个最好的媒体播放器的原因是它能够播放地球上能够获得的任何音频视频格式,而且不会出现任何问题。 + +![][6] + +VLC 有一个平滑的用户界面,易于使用,除此之外,它提供了很多特点包括在线视频流,音频和视频自定义,等。 + +``` +$ sudo add-apt-repository ppa:videolan/master-daily +$ sudo apt update +$ sudo apt-get install vlc qtwayland5 +``` + +### **5\. Atom 文本编辑器** + +由 Github 开发,Atom 是一个免费和开源的文本编辑器,它同样能够被用做集成开发环境(IDE)来进行主流编程语言的编码和编辑。Atom 开发者声称它是 21 世纪的完全可控制的文本编辑器。 + +![][7] + +Atom 文本编辑器属于最好的用户界面之一,它是一个富文本编辑器提供了自动补全,语法高亮、扩展与插件支持。 + +``` +$ sudo add-apt-repository ppa:webupd8team/atom +$ sudo apt-get update +$ sudo apt-get install atom +``` + +### **6\. GIMP 图像编辑器** + +GIMP (GNU 图形操作程序)是 Ubuntu 上免费和开源的图像编辑器。有争议说它是 Windows 上 Adobe Photoshop 的最好替代品。如果你过去经常用 Adobe Photoshop,会觉得很难习惯 GIMP,但是你可以自定义 GIMP 使它看起来与 Photoshop 非常相似。 + +![][8] + +GIMP 是一个功能丰富的图片编辑器,你可以随时通过安装扩展和插件来使用附加的功能。 + +``` +$ sudo apt-get install gimp +``` + +### **7\. Google Play 音乐桌面播放器** + +Google Play 音乐桌面播放器是一个开源的音乐播放器,它是 Google Play Music 的一个替代品,或者说更好。Google 总是缺少桌面的音乐客户端,但第三方的应用完美的填充了空白。 + +![][9] + +就像你在截屏里看到的,它的界面在外观和感觉上都是首屈一指的。你仅仅需要登录 Google 账户,之后会导入所有你的音乐和你的最爱到桌面客户端里。你可以从它的官方 [站点][10]下载安装文件并使用软件中心安装它。 + +### **8\. Franz** + +Franz 是一个即时消息客户端,结合了聊天和信息服务到一个应用中。它是现代化的即时消息平台之一,在单个应用中支持 Facebook Messenger, WhatsApp, Telegram, 微信,Google Hangouts, Skype。 + +![][11] + +Franz 是一个完整的消息平台,你同样可以在商业中用它来管理大量的客户服务。为了安装 Franz,你需要从它的[网站][12]下载安装包,在软件中心中打开。 + +### **9\. Synaptic 包管理器** + + Synaptic 包管理器是 Ubuntu 上必有工具之一,因为它为我们通常在命令行界面安装软件的 ‘apt-get’ 命令提供了用户图形界面。它是各种 Linux 发行版中默认的应用的强力对手。 + +![][13] + +Synaptic 拥有非常简单的用户图形界面,相比其它的应用商店非常快并易于使用。左手边你可以浏览不同分类的各种应用,也可以轻松安装和卸载。 + +``` +$ sudo apt-get install synaptic +``` + +### **10\. Skype** + +Skype 是一个非常流行的跨平台视频电话应用,如今在 Linux 系统的 Snap 应用中可以获取了。Skype 是一个即时通信应用,它提供了视频和音频通话,桌面共享等特点。 + +![][14] + +Skype 有一个优秀的用户图形界面,与 Windows 上的桌面客户端非常相似,易于使用。它对于从 Windows 上转换来的用户来说非常有用。 + +``` +$ sudo snap install skype +``` + +### **13\. VirtualBox** + +VirtualBox 是由 Oracle 公司开发的跨平台的虚拟化软件应用。如果你喜欢尝试新的操作系统,那么 VirtualBox 是为你准备的必备的 Ubuntu 应用。你可以尝试 Windows 内的 Linux,Mac 或者 Linux 系统中的 Windows 和 Mac。 + +![][15] + +VB 实际做的是让你在宿机操作系统里可视化的运行顾客操作系统。它创建虚拟硬盘并在上面安装顾客操作系统。你可以在 Ubuntu 软件中心直接下载和安装。 + +### **12\. 
Unity Tweak 工具** + +Unity Tweak 工具(Gnome Tweak 工具)对于每个 Linux 用户都是必须有的,因为它给了用户根据需要自定义桌面的能力。你可以尝试新的 GTK 主题,设置桌面角落,自定义图标集,改变 unity 启动器,等。 + +![][16] + +Unity Tweak 工具对于用户来说可能非常有用,因为它包含了从基础到高级配置的所有内容。 + +``` +$ sudo apt-get install unity-tweak-tool +``` + +### **13\. Ubuntu Cleaner** + +Ubuntu Cleaner是一个系统管理工具,尤其被设计为移除不再使用的包,移除不必要的应用和清理浏览器缓存的。Ubuntu Cleaner 有易于使用的简易用户界面。 + +![][17] + +Ubuntu Cleaner是 BleachBit 最好的替代品之一,BleachBit 是 Linux 发行版上的相当好的清理工具。 + +``` +$ sudo add-apt-repository ppa:gerardpuig/ppa +$ sudo apt-get update +$ sudo apt-get install ubuntu-cleaner +``` + +### 14\. Visual Studio Code + +Visual Studio Code 是一个代码编辑器,你会发现它与 Atom 文本编辑器和 Sublime Text 非常相似,如果你曾用过的话。Visual Studio Code 证明是非常好的教育工具,因为它解释了所有东西,从 HTML 标签到编程中的语法。 + +![][18] + +Visual Studio 自身集成了 Git,它有优秀的用户界面,你会发现它与 Atom Text Editor 和 Sublime Text 非常相似。你可以从 Ubuntu 软件中心下载和安装它。 + +### **15\. Corebird** + +如果你在找你可以用 Twitter 的桌面客户端,那 Corebird Twitter 客户端就是你在找的。它被争议是 Linux 发行版下可获得的最好的 Twitter 客户端,它提供了与你手机上的 Twitter 应用非常相似的功能。 + +![][19] + + 当有人喜欢或者转发你的 tweet 或者给你发消息时,Corebird Twitter 客户端同样会给你通知。你同样可以在这个客户端上添加多个 Twitter 账户。 + +``` +$ sudo snap install corebird +``` + +### **16\. Pixbuf** + +Pixbuf 是来自 Pixbuf 图片社区中心的一个桌面客户端,让你上传,分享和出售你的相片。它支持图片共享到社交媒体像 Facebook,Pinterest,Instagram,Twitter,等等,也包括照相服务像 Flickr,500px and Youpic。 + +![][20] + +Pixbuf提供了分析等功能,可以让你统计点击量、转发量、照片的回复、预定的帖子、专用的iOS 扩展。它同样有移动应用,因此你可以在任何地方连接到你的 Pixbuf 账户。 Pixbuf 在 Ubuntu 软件中心以 Snap 包的形式获得。 + +### **17\. Clementine 音乐播放器** + +Clementine 是一个跨平台的音乐播放器,并且是 Ubuntu 上默认音乐播放器 Rhythmbox 的良好竞争者。多亏它的友好的界面,它很快速并易于使用。它支持所有音频文件格式的声音回放。 + +![][21] + +除了播放本地库中的音乐,你也可以在线听 Spotify, SKY.fm, Soundcloud 等的广播。它也支持其它的功能像智能动态播放列表,从像 Dropbox,Google Drive 这样的云存储中同步音乐。 + +``` +$ sudo add-apt-repository ppa:me-davidsansome/clementine +$ sudo apt-get update +$ sudo apt-get install clementine +``` + +### **18\. Blender** + +Blender 是一个免费和开源的 3D 创建应用软件,你可以用来创建 3D 打印模型,动画电影,视频游戏,等。它自身集成了游戏引擎,你可以用来开发和测试视频游戏。 + +![blender][22] + +Blender 拥有赏心悦目的用户界面,易于使用,它包括了内置的渲染引擎,数字雕刻,仿真工具,动画工具,还有很多。考虑到它免费和它的特点,你甚至会认为它可能是 Ubuntu 上最好的应用之一。 + +### **19\. Audacity** + +Audacity 是一个开源的音频编辑应用,你可以用来记录、编辑音频文件。你可以从各种输入中录入音频,包括麦克风,电子吉它,等等。根据你的需要,它提供了编辑和裁剪音频的能力。 + +![][23] + +最近 Audacity 发布了 Ubuntu 上的新版本,新特点包括主题提升、放缩命令等。除了这些,它还提供了降噪等更多特点。 + +``` +$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity +$ sudo apt-get update +$ sudo apt-get install audacity +``` + +### **20\. Vim** + +Vim 是一个集成开发环境,你可以用作一个独立的应用或各种像 Python 等主流编程语言的命令行接口。 + +![][24] + +大多数程序员喜欢在 Vim 中编代码,因为它快速并且是一个可高度定制的集成开发环境。最初你可能觉得有点难用,但你会很快习惯它。 + +``` +$ sudo apt-get install vim +``` + +### **21\. Inkscape** + +Inkscape 是一个开源和跨平台的矢量图形编辑器,你会觉得它和 Corel Draw 和 Adobe Illustrator 很相似。用它可以创建和编辑矢量图形例如柱形图、logo、图表、插图等。 + +![][25] + +Inkscape 使用规模矢量图形(SVG),一个基于 XML 的 W3C 标准格式。它只是各种格式包括 JPEG、PNG、GIF、PDF、AI(Adobe Illustrator 格式)、VSD 等等。 + +``` +$ sudo add-apt-repository ppa:inkscape.dev/stable +$ sudo apt-get update +$ sudo apt-get install inkscape +``` + +### **22\. Shotcut** + +Shotcut 是一个免费、开源的跨平台的 Meltytech,LLC 在 MLT 多媒体框架下开发的视频编辑应用。你会发现它是 Linux 发行版上最强大的视频编辑器之一,因为它支持所有主要的音频,视频,图片格式。 + +![][26] + +它给了非线性编辑的各种文件格式多轨道视频的能力。它支持 4K 视频分辨率和各种音频,视频过滤,语气生成、音频混合和很多其它的。 + +``` +snap install shotcut -- classic +``` + +### **23\. SimpleScreenRecorder** + +SimpleScreenRecorder 是 Ubuntu 上的一个免费和轻量级的屏幕录制工具。屏幕录制非常有用,如果你是 YouTube 创作者或应用开发者。 + +![][27] + +它可以捕获桌面屏幕的视频/音频记录或直接录制视频游戏。在屏幕录制前你可以设置视频分辨率、帧率等。它有简单的用户界面,你会发现非常易用。 + +``` +$ sudo add-apt-repository ppa:marten-baert/simplescreenrecorder +$ sudo apt-get update +$ sudo apt-get install simplescreenrecorder +``` + +### **24\. 
Telegram** + +Telegram 是一个基于云的即时通信和网络电话平台,近年来非常流行。它是开源和跨平台的,用户可以用来发送消息,共享视频,图片,音频和其它文件。 + +![][28] + +Telegram 中容易发现的特点是加密聊天,语音信息,远程视频通话,在线位置和社交登录。在 Telegram 中隐私和安全拥有最高优先级,因此,所有你发送和接收的是端对端加密的。 + +``` +$ sudo snap install telegram-desktop +``` + +我们所知道的危害 Windows PC 的病毒不能危害 Ubuntu,因为在 Windows PC 中接收到的邮件的破坏性文件会破坏 Windows PC。因此,在 Linux 上有一些抗病毒应用是安全的。 + +![][29] + +ClamTk 是一个轻量级的病毒扫描器,可以扫描系统中的文件和文件夹并清理发现的有害文件。ClamTk 可以 Snap 包的形式获得,可以从 Ubuntu 软件中心下载。 + +### **26\. MailSpring** + +早期的 MailSpring 以 Nylas Mail 或 Nylas N1 而著名,是开源的邮件客户端。它保存所有的邮件在电脑本地,因此你可以在任何需要的时候访问它。它提供了高级搜索的功能,使用与或操作,因此你可以基于不同的参数搜索邮件。 + +![][30] + +MailSpring 有着和其它易于上手的邮件客户端同样优秀的用户界面。MailSpring 同样提供了私密性、安全性、规划期、通讯录管理、日历等功能特点。 + +### **27\. PyCharm** + +PyCharm 是我最喜欢的继 Vim 之后的 Python IDE 之一,因为它有优雅的用户界面,有很多扩展和插件支持。基本上,它有两个版本,一个是免费和开源的社区版,另一个是付费的专业版。 + +![][31] + +PyCharm 是高度自定义的 IDE 并且有很多功能像错误高亮、代码分析、集成单元测试和 Python 调试器等。PyCharm 对于大多数 Python 程序员和开发者来说是首选的。 + +### **28\. Caffeine** + +想象一下你在 Youtube 上看视频或阅读一篇新文章,突然你的 Ubuntu 锁屏了,我知道它很烦人。我们很多人都会遇到这种情况,所以 Caffeine 是一个阻止 Ubuntu 锁屏或屏幕保护程序的工具。 + +![][32] + +Caffeine Inhibitor 是一个轻量级的工具,它在通知栏添加图标,你可以在那里轻松的激活或禁止它。不需要额外的设置。 + +``` +$ sudo add-apt-repository ppa:eugenesan/ppa +$ sudo apt-get update +$ sudo apt-get install caffeine -y +``` + +### **29\. Etcher USB 镜像写入器** + +Etcher 是一个有 resin.io 开发的 USB 镜像写入器。它是一个跨平台的应用,帮助你将 ZIP、ISO、IMG 格式的镜像文件写入到 USB 存储中。如果你总是尝试新的操作系统,那么 Ethcher 是你必有的简单可靠的系统。 + +![][33] + +Etcher 有干净的用户界面指导你在三步内烧录镜像到 USB 驱动或 SD 卡的过程。步骤包括选择镜像文件,选择 USB 驱动 和最终的 flash(写文件到 USB 驱动)。你可以从它的[官网][34]下载和安装 Etcher。 + +### **30\. Neofetch** + +Neofetch 是一个酷炫的系统信息工具,通过在终端中运行 “neofetch” 命令,它会给你关于你的系统的所有信息。它酷是因为它给你关于桌面环境,内核版本,bash 版本和你正在运行的 GTK 主题信息。 + +![][35] + +与其它系统信息工具比较,Nefetch 是高度自定义的工具。你可以使用命令行进行各种自定义。 + +``` +$ sudo add-apt-repository ppa:dawidd0811/neofetch +$ sudo apt-get update +$ sudo apt-get update install neofetch +``` + +### 31\. Liferea + +Liferea(Linux 热点阅读器)是一个免费和开源的新闻聚集工具,用于在线新闻订阅。使用新的聚集工具非常快捷和简单,支持各种格式例如 RSS/RDF,Atom 等。 + +![][36] +Liferea 自带与 TinyTinyRSS 的同步支持,它给了你离线阅读的能力。你会发现,就可靠性和灵活性而言,它是 Linux 上最好的订阅工具之一。 + + +``` +$ sudo add-apt-repository ppa:ubuntuhandbook1/apps +$ sudo apt-get update +$ sudo apt-get install liferea +``` + +### 32\. Shutter + +在 Ubuntu 中很容易截屏,但当编辑截屏时 Shutter 是你必不可少的应用。它帮助你捕获,编辑和轻松的共享截屏。使用 Shutter 的选择工具,你可以选择屏幕的特定区域来截屏。 + +![][37] + +Shutter 是一个功能强大的截图工具,提供了添加截图效果,画线等功能。它同样给你上传截屏到各种图像保存站点的选项。你可以直接在 Ubuntu 软件中心中下载和安装。 + +### 33\. Weather + +Weather 是一个小的应用,给你关于你的城市或世界上其它位置的实时天气信息。它简单而且轻量级,给你超过 7 天的详细天气预报和今明两天的每个小时的细节信息。 + +![][38] + +它集成在 GNOME shell 中,给你关于最近搜索位置的当前天气状态。它有最小的用户界面,在最小硬件需求下运行很顺畅。 + +### 34\. Ramme + +Ramme 是一个很酷的非官方的 Instagram 桌面客户端,给你带来 Instagram 移动端的感觉。它是基于 Electron 的客户端,所以它替代了 Instagram 应用提供了主题自定义的功能。 + +![][39] + +但是由于 Instagram 的 API 限制,你不能使用 Ramme 客户端上传图像,但你总是可以通过订阅 Instagram,喜欢和评论,给好友发消息。你可以从 [Github]][40] 下载 Ramme 安装文件。 + +### **35\. Thunderbird** + +Thunderbird 是一个开源的邮件客户端,是很多 Linux 发行版的默认邮件客户端。尽管在 2017 年与 Mozilla 分离,Thunderbird 仍然是 Linux 平台非常流行的最好的邮件客户端。它自带特点像垃圾短信过滤,IMAP 和 POP 邮件同步,日历支持,通讯录集成和很多其它特定。 + +![][41] + +它是一个跨平台的邮件客户端,在所有支持的平台上完全由社区提供支持。多亏它的高度自定义特点,你总是可以改变它的外观和观感。 + +### **36\. Pidgin** + +Pidgin 是一个即时信息客户端,你能够在单个窗口下登录不同的即时网络。你可以登录到像 Google Talk,XMPP,AIM,Bonjour 等。 + +![][42] + +Pidgin 拥有所有你期待的即时通信的特点,你总是可以通过安装额外的插件来提升它的性能。 + +``` +$ sudo apt-get install pidgin +``` + +### **37\. Krita** + +Krita 是由 KDE 开发的免费和开源的数字打印,编辑和动画应用。它有优秀的用户界面,每个组件都放的很完美,因此你可以找到你需要的。 + +![][43] + +它使用 OpenGL 画布,这提升了 Krita 的性能,并且提供了很多特点相不同的绘画工具、动画工具、矢量工具、层、罩等很多。可在 Ubuntu 软件中心获取 Krita 并下载。 + +### **38\. 
Dropbox** + +Dropbox 是一个出色的云存储播放器,一旦安装,它在 Ubuntu 中运行得非常好。即使 Google Drive 在 Ubuntu 16.04 LTS 和以后的版本中运行得不错,就 Dropbox 提供的特点而言,Dropbox 仍然是 Linux 上的首选云存储工具。 + +![][44] +它总是在后台运行,备份你系统上的新文件到云存储,持续在你的电脑和云存储间同步。 + +``` +$ sudo apt-get install nautilus-dropbox +``` + +### 39\. Kodi + +Kodi 的前身是人们熟知的 Xbox 媒体中心(XBMC),是一个开源的媒体播放器。你可以在线或离线播放音乐、视频、播客、视频游戏等。这个软件最初是为第一代的 Xbox 游戏控制台开发的,之后慢慢地面向了个人电脑。 + +![][45] + +Kodi 有令人印象深刻的视频接口,快速而强大。它是可高度定制的媒体播放器,你可以通过安装插件,来获取在线流服务像 Pandora、 Spotify、Amazon Prime Video、Netflix and YouTube。 + +### **40\. Spotify** + +Spotify 是最好的在线媒体流站点之一。它提供免费和付费音乐、播客、视频流服务。早期的 Spotify 不支持 Linux,但现在它有全功能的 Ubuntu 客户端。 + +![][46] + + 与 Google Play 音乐播放器一样,Spotify 是必不可少的媒体播放器。你只需要登录你的 Spotify 账户,就能在任何地方获取你最爱的在线内容。 + +### 41\. Brackets + +Brackets 是一个有 Adobe 开发的开源的文本编辑器。它可被用来进行 web 开发和设计,例如 HTML,CSS 和 JavaScript。它随改变实时预览是一个很棒的特点,当你在脚本中修改时,你可以获得实时预览效果。 + +![][47] + +它是 Ubuntu 上的现代文本编辑器之一,拥有平滑的用户界面,这将 web 开发任务带到新的水平。它同样提供行内编辑器的特点,支持流行的扩展像 Emmet、Beautify、Git、File Icons 等等。 + +### 42\. Bitwarden + +现今,安全问题事件增加,用户密码被盗后,重要的数据受到连累,因此,账户安全性必须严肃对待。推荐你使用 Bitwarden,将你的所有账户和登录密码安全的存在一个地方。 + +![][48] + +Bitwarden 使用 AES-256 加密技术来存储所有的登录细节,只有用户可以访问这些数据。它同样帮你创建健壮的密码,因为弱密码容易被黑。 + +### 43\. Terminator + +Terminator 是一个开源终端模拟器,用 Java 语言开发的。它是一个跨平台的模拟器,允许你在单个窗口有多个终端,在 Linux 默认的终端模拟器中不是这样。 + +![][49] + +Terminator 其它杰出的特点包括自动日志、拖、丢、智能垂直和水平滚动等。 + +``` +$ sudo apt-get install terminator +``` + +### 44\. Yak Yak + +Yak Yak 是一个开源的非官方的 Google Hangouts 消息的桌面客户端。它可能是一个不错的 Microsort 的 Skype 的替代品,自身拥有很多让人吃惊的特点。你可以允许桌面通知,语言偏好,工作在最小内存或电源需求等。 + +![][50] + +Yak Yak 拥有你期待的所有任何即时消息应用的所有特点,例如类型指示、拖、拽媒体文件,音/视频电话。 + +### 45\. **Thonny** + +Thonny 是一个简单和轻量级的 IDE,尤其是为编程的初学者设计的。如果你是编程初学者,这是你必备的 IDE,因为当用 Python 编程的时候它会帮你学习。 + +![][51] + +Thonny 同样是一个很棒的调试工具,它支持调试过程中的活变量,除此之外,它还提供了执行函数调用是分离窗口、简易用户界面的特点。 + +``` +$ sudo apt-get install thonny +``` + +### **46\. Font Manager** + +Font Manager 是一个轻量级的工具,用于管理、添加、移除你的 Ubuntu 系统上的字体。尤其是为 Gnome 桌面环境构建的,当用户不知道如何在命令行管理字体的会发现这个工具非常有用。 + +![][52] + +Gtk+ Font Manager 不是为专业用户准备的,它有简单的用户界面,你会发现很容易导航。你只需要从网上下载字体文件,并使用 Font Manager 添加它们。 + +$ sudo add-apt-repository ppa:font-manager/staging +$ sudo apt-get update +$ sudo apt-get install font-manager + +### **47\. Atril Document Viewer** + +Atril 是一个简单的文件查看器,支持便携文件格式(PDF)、PostScript(PS)、Encapsulated PostScript(EPS)、DJVU 和 DVI。Atril 绑定在 MATE 桌面环境中,它比大多数 Linux 发行版中默认的文件查看器 Evince 更理想。 + +![][53] + +Atril 用简单和轻量级的用户界面,可高度自定义,提供了搜索、书签、UI 左侧的缩略图等特点。 + +``` +$ sudo apt-get install atril +``` + +### **48\. Notepadqq** + +如果你曾在 Windows 上用过 Notepad++,并且在 Linux 上寻找相似的程序,别担心,开发者们已经将它移植到 Linux,叫 Notepadqq。它是一个简单且强大的文本编辑器,你可以在日常生活中用它完成各种语言的任务。 + +![][54] + +尽管作为一个简单的文本编辑器,它有一些令人惊奇的特点,例如,你可以设置主题为暗黑或明亮模式、多选、正则搜索和实时高亮。 + +``` +$ sudo add-apt-repository ppa:notpadqq-team/notepadqq +$ sudo apt-get update +$ sudo apt-get install notepadqq +``` + +### **49\. Amarok** + +Amarok 是在 KDE 项目下开发的一个开源音乐播放器。它有直观的界面,让你感觉在家一样,因此你可以轻易的发现你最喜爱的音乐。除了 Clementine,当你在寻找 Ubuntu 上的完美的音乐播放器时,Amarok 是一个很棒的选择。 + +![][55] + +Amarok 上的一些顶尖的特点包括智能播放列表支持,集成在线服务像 MP3tunes、Last.fm、 Magnatune 等。 + +### **50\. Cheese** + +Cheese 是 Linux 默认的网络摄像头应用,在视频聊天或即时消息应用中非常有用。除了这些,你还可以用这个应用来照相或拍视频,附带一些迷人的特效。 + +![][56] + +它同样提供闪拍模式,让你快速拍摄多张相片,并提供你共享给你的朋友和家人的选项。Cheese 预装在大多数的 Linux 发行版中,但是你同样可以在软件中心下载它。 + +### **51\. MyPaint** + +MyPaint 是一个免费和开源的光栅图形编辑器,关注于数字绘画而不是图像操作和相片处理。它是跨平台的应用,与 Corel Painter 很相似。 + +![][57] + +MyPaint 可能是 Windows 上的 Microsoft Paint 应用的很好的替代。它有简单的用户界面,快速而强大。MyPaint 可以软件中心下载。 + +### **52\. 
PlayOnLinux** + +PlayOnLinux 是 WINE 模拟器的前端,允许你在 Linux 上运行 Windows 应用。你只需要在 WINE 中安装 Windows 应用,之后你就可以轻松的使用 PlayOnLinux 启动应用和游戏了。 + +![][58] + +### **53\. Akregator** + +Akregator 是在 KDE 项目下为 KDE Plasma 环境开发的默认的 RSS 阅读器。它有简单的用户界面,自带了 KDE 的 Konqueror 浏览器,所以你不需要在阅读新闻提要时切换应用。 + +Akregator 同样提供了桌面通知、自动提要等功能。你会发现在大多数 Linux 发行版中它是最好的提要阅读器。 + +### **54\. Brave** + +Brave 是一个开源的 web 浏览器,阻挡了广告和追踪,所以你可以快速和安全的浏览你的内容。它实际做的是代表你像网站和油管主播支付了费用。如果你喜欢像网站和和油管主播共享而不是看广告,这个浏览器更适合你。 + +![][60] + +对于那些想要安全浏览但有不想错过互联网上重要数据的人来说,这是一个新概念,一个不错的浏览器。 + +### **55\. Bitcoin Core** + +Bitcoin Core 是一个官方的客户端,非常安全和可靠。它持续追踪你的所有交易以保证你的所有交易都是有效的。它限制比特币矿工和银行完全掌控你的比特币钱包。 + +![][61] + +Bitcoin Core 同样提供了其它重要的特点像私钥备份、冷存储、安全通知等。 + +``` +$ sudo add-apt-repository ppa:bitcoin/bitcoin +$ sudo apt-get update +$ sudo apt-get install bitcoin-qt +``` + +### **56\. Speedy Duplicate Finder** + +Speedy Duplicate Finder 是一个跨平台的文件查找工具,用来帮助你查找你的系统上的重复文件,清理磁盘空间。它是一个智能工具,在整个硬盘上搜索重复文件,同样提供智能过滤功能,根据文件类型、扩展或大小帮你找到文件。 + +![][62] + +它有一个简单和整洁的用户界面,易于上手。从软件中心下载完后你就可以开始磁盘空间清理了。 + +### **57\. Zulip** + +Zulip 是一个免费和开源的群聊应用,被 Dropbox 获得了。它是用 Python 写的,用 PostgreSQL 数据库。它被设计和开发为其它聊天应用像 Slack 和 HipChat 的好的替代品。 + +![][63] + +Zulip 功能丰富,例如拖拽文件、群聊、私密聊天、图像预览等。它供养集成了 Github、JIRA、Sentry 和上百种其它服务。 + +### **58\. Okular** + +Okular 是 KDE 为 KDE 桌面环境开发的跨平台的文件查看器。它是一个简单的文件查看器,支持 Portable Document Format (PDF), PostScript, DjVu, Microsoft Compiled HTML help 和很多其它文件格式。 + +![][64] + +Okular 是在 Ubuntu 上你应该尝试的最好的文件查看器之一,它提供了 PDF 文件评论、画线、高亮等很多功能。你同样可以从 PDF 文件中提取文本文件。 + +### **59\. FocusWriter** + +FocusWriter 是一个集中注意力的字处理工具,隐藏了你的桌面屏幕,因此你能够专注写作。正如你看到的屏幕截图,整个 Ubuntu 屏被隐藏了,只有你和你的字处理工具。但你总是可以进入 Ubuntu 屏幕,当你需要的时候,只需要将光标移动到屏幕的边缘。 + +![][65] + +它是一个轻量级的字处理其,支持 TXT、RTF、ODT 文件格式。它同样提供可完全定制的用户界面,还有定时器、警报、每日目标、声音效果等特点,支持翻译为 20 种语言。 + +### **60\. Guake** + +Guake 是为 GNOME 桌面环境准备的酷炫的下拉式终端。当你任务完成后,你需要它消失时, Guake 会闪一下。你只需要按 F12 按钮来启动或退出,这样启动 Guake 币启动一个新的终端窗口更快。 + +![][66] + +Guake 是一个特点丰富的终端,支持多栏,只需要点击几下就能将你的终端内容保存到文件,并且有完全可定制的用户界面。 + +``` +$ sudo apt-get install guake +``` +### **61\. KDE Connect** + +KDE Connect 在 Ubuntu 上是一个很棒的应用,我很想在这篇马拉松文章中将它提早列出来,但是竞争激烈。总之 KDE Connect 可以将你的 Android 智能手机的通知直接转到 Ubuntu 桌面来。 + +![][67] + +有了 KDE Connect,你可以做很多事,例如检查手机电池水平,在电脑和 Android 手机间交换文件,剪贴板同步,发送短信,你还可以将你的手机当作无线鼠标或键盘。 + +``` +$ sudo add-apt-repository ppa:webupd8team/indicator-kedeconnect +$ sudo apt-get update +$ sudo apt-get install kdeconnect indicator-kdeconnect +``` + +### **62\. CopyQ** + +CopyQ 是一个简单但是非常有用的剪切板管理器,它保存你的系统剪切板内容,无论你做了什么改变,你都可以在你需要的时候恢复它。它是一个很棒的工具,支持文本、图像、HTML、和其它格式。 + +![][68] + +CopyQ 自身有很多功能像拖拽、复制/拷贝、编辑、移除、排序、创建等。它同样支持集成文本编辑器,像 Vim,所以如果你是程序员,这非常有用。 + +``` +$ sudo add-apt-repository ppa:hluk/copyq +$ sudo apt-get update +$ sudo apt-get install copyq +``` + +### **63\. Tilix** + +Tilix 是一个功能丰富的高级 GTX3 瓷砖终端模拟器。如果你使用 GNOME 桌面环境,那你会爱上 Tilix,因为它遵循了 GNOME 人类界面指导。Tilix 模拟器与大多数 Linux 发行版上默认终端模拟器相比,它给了你分离终端窗口为多个终端面板。 + +![][69] + +Tilix 提供了自定义链接、图片支持、多面板、拖拽、持续布局等功能。它同样支持键盘快捷方式,你可以根据你的需要在偏好设置中自定义快捷方式。 + +``` +$ sudo add-apt-repository ppa:webupd8team/terminix +$ sudo apt-get update +$ sudo apt-get install tilix +``` + +### **64\. Anbox** + +Anbox 是一个 Android 模拟器,让你在 Linux 系统中安装和运行 Android 应用。它是免费和开源的 Android 模拟器,通过使用 Linux 容器来执行 Android 运行时环境。它使用最新的 Linux 技术 和 Android 发布版,所以你可以运行任何原生的 Android 应用。 + +![][70] + +Anbox 是现代和功能丰富的模拟器之一,提供的功能包括无限制的应用使用,强大的用户界面,与宿主系统无缝集成。 + +首先你需要安装内核模块。 + +``` +$ sudo add-apt-repository ppa:morphis/anbox-support +$ sudo apt-get update +$ sudo apt install anbox-modules-dkms Now install Anbox using snap +$ snap install --devmode -- beta anbox +``` + +### **65\. 
OpenShot** + +你会发现 OpenShot 是 Linux 发行版中最好的开源视频编辑器。它是跨平台的视频编辑器,易于使用,不在性能方面妥协。它支持所有主流的音频、视频、图像格式。 + +![][71] + +OpenShot 有干净的用户界面,功能有拖拽、裁剪大小调整、大小调整、裁剪、快照、实时预览、音频混合和编辑等多种功能。 + +``` +$ sudo add-apt-repository ppa:openshot.developers/ppa +$ sudo apt-get update +$ sudo apt-get install openshot -qt +``` + +### **66\. Plank** + +如果你在为你的 Ubuntu 桌面寻找一个 Dock 导航栏,那 Plank 应该是一个选择。它是完美的,安装后你不需要任何的修改,除非你想这么做,它有内置的偏好面板,你可以自定义主题,Dock 大小和位置。 + +![][72] + +尽管是一个简单的导航栏,Plank 提供了通过拖拽来重新安排的功能,固定和运行应用图标,转换主题支持。 + + +``` +$ sudo add-apt-repository ppa:ricotz/docky +$ sudo apt-get update +$ sudo apt-get install plank +``` + +### **67\. Filezilla** + +Filezilla 是一个免费和跨平台的 FTP 应用,包括 Filezilla 客户端和服务器。它让你使用 FTP 和加密的 FTP 像 FTPS 和 SFTP 传输文件,支持 IPv6 网络协议。 + +![][73] + +它是一个简单的文件传输应用,支持拖拽,支持世界范围的各种语言,多任务的强大用户界面,控制和配置传输速度。 + +### **68\. Stacer** + +Stacer 是一个开源的系统诊断和优化工具,使用 Electron 开发框架开发的。它有一个优秀的用户界面,你可以清理缓存内存,启动应用,卸载不需要的应用,掌控后台系统进程。 + +![][74] + +它同样让你检查磁盘,内存和 CPU 使用情况,给你下载和上传的实时状态。它看起来像 Ubuntu clener 的强力竞争者,但是两者都有独特的特点。 + +``` +$ sudo add-apt-repository ppa:oguzhaninan/stacer +$ sudo apt-get update +$ sudo apt-get install stacer +``` + +### **69\. 4K Video Downloader** + +4K Video Downloader 是一个简单的视频下载工具,你可以用来从 Vimeo、Facebook、YouTube 和其它在线视频流站点下载视频,播放列表,频道。它支持下载 YouTuBe 播放列表和频道,以 MP4, MKV, M4A, 3GP 和很多其它 音/视频格式。 + +![][75] + +4K Video Downloader 不是你想的那么简单,除了正常的视频下载,它支持 3D 和 360 度 视频下载。它同样提供内置应用协议链接,直连 iTunes。你可以从[这里][76]下载。 + +### 70\. **Qalculate** + +Qalculate 是一个多目的、跨平台的桌面计算器,简单但是强大。它可以用来解决复杂的数学问题和等式,货币汇率转换和很多其它日常计算。 + +![][77] + +它有优秀的用户界面,提供了自定义功能,单元计算,符号计算,算数,画图,和很多你可以在科学计算器上发现的功能。 + +### **71\. Hiri** + +Hiri 是一个跨平台的邮件客户端,使用 Python 语言开发的。它有平滑的用户界面,就它的功能和服务而言,是 Micorsoft Outlook 的很好的替代品。这是很棒的邮件客户端,可以用来发送和接收邮件,管理通讯录,日历和任务。 + +![][78] + +它是一个丰富特点的邮件客户端,提供的功能有集成的任务管理器,邮件同步,邮件评分,邮件过滤等多种功能。 + + +``` +$ sudo snap install hiri +``` + +### **72\. Sublime Text** + +Sublime Text 是一个跨平台的源代码编辑器,用 C++ 和 Python 写的。它有 Python 语言编程接口(API),支持所有主流的编程语言和标记语言。它是简单轻量级的文本编辑器,可被用作 IDE,包含自动补全,语法高亮,分窗口编辑等功能。 + +![][79] + +这个文本编辑器包括一些额外特点:去任何地方、去定义、多选、命令模板和完全定制的用户界面。 + +``` +$ sudo apt-get install sublime-text +``` + +### **73\. TeXstudio** + +Texstudio 是一个创建和编辑 LaTex 文件的集成写作环境。它是开源的编辑器,提供了语法高亮、集成查看、交互式拼写检查、代码折叠、拖拽等特点。 + +![][80] + +它是跨平台的编辑器,有简单轻量级的用户界面,易于使用。它集成了 BibTex和 BibLatex 目录管理器,同样有集成的 PDF 查看器。你可以从[官网][81]和 Ubuntu 软件中心下载 Texstudio。 + +### **74\. QtQR** + +QtQR 是一个基于 Qt 的应用,让你在 Ubuntu 中创建和读取二维码。它是用 Python 和 Qt 开发的,有简单和轻量级的用户界面,你可以编码网站地址、邮件、文本、短消息等。你也可以用网络相机解码二维码。 + +![][82] + +如果你经常处理产品销售和服务,QtQR 会证明是有用的工具,但是我不认为在最小硬件要求下有和 QtQR 这样相似并顺畅运行的应用了。 + +``` +$ sudo add-apt-repository ppa: qr-tools-developers/qr-tools-stable +$ sudo apt-get update +$ sudo apt-get install qtqr +``` + +### **75\. Kontact** + +Kontact 是一个由 KDE 为 KDE 桌面环境开发的集成的个人信息管理器(PIM)。它集成了多个软件到一个集合中,集成了 KMail、KOrganizer和 KAddressBook 到一个用户界面,你可以管理所有的邮件、通讯录、日程表等。 + +![][83] + +它可能是 Microsoft Outlook 的非常好的替代,因为它快速且高度高配置的消息管理工具。它有很好的用户界面,你会发现很容易上手。 + +``` +$ sudo apt-get install kontact +``` + +### **76\. NitroShare** + +NitroShare 是一个跨平台、开源的网络文件共享应用。它让你轻松的在局域网的多个操作系统中共享文件。它简单而强大,当在局域网中运行 NitroShare 时,它会自动侦查其它设备。 + +![][84] + +文件传输速度让 NitroShare 称为一个杰出的文件共享应用,它在能够胜任的硬件中能够达到 GB 级的传输速度。没有必要额外配置,安装完成后你就可以开始文件传输。 + +``` +$ sudo apt-add-repository ppa:george-edison55/nitroshare +$ sudo apt-get update +$ sudo apt-get install nitroshare +``` + +### **77\. 
Konversation** + +Konversation 是一个为 KDE 桌面环境开发的开源的网络中继聊天(IRC)客户端。它给了到 Freenode 网络平岛的快速入口,你可以为大多数发行版找到支持。 + +![][85] + +它是一个简单的聊天客户端,支持 IPv6 链接, SSL 服务器支持,书签,屏幕通知,UTF-8 检测和另外的主题。它易于使用的 GUI 是高度可配置的。 + +``` +$ sudo apt-get install konversation +``` + +### **78\. Discord** + +如果你是硬核游戏玩家,经常玩在线游戏,我有一个很棒的应用推荐给你。Discord 是一个免费的网络电话应用,尤其是为在线游戏者们设计的。它是一个跨平台的应用,可用来文字或语音聊天。 + +![][86] + +Discord 是一个非常流行的语音通话应用游戏社区,因为它是完全免费的,它是 Skype,Ventrilo,Teamspeak 的很好的竞争者。它同样提供清晰的语音质量,现代的文本聊天,你可以共享图片,视频和链接。 + +### **79\. QuiteRSS** + +QuiteRSS是一个开源的 RSS 的新闻聚合和 Atom 新闻要点应用。它是跨平台的要点阅读器,用 Qt 和 C++ 写的。它有简单的用户界面,你可以改变为经典或者报纸模式。它集成了 webkit 浏览器,因此,你可以在单个窗口执行所有任务。 + +![][87] + +QuiteRSS 自带了很多功能,像内容过滤,自动规划摘要,导入/到处 OPML,系统托盘集成和很多其它你期待任何要点阅读器有的特点。 + +``` +$ sudo apt-get install quiterss +``` + +### **80\. MPV Media Player** + +MPV 是一个免费和开源的媒体播放器,基于 MPlayer 和 MPlayer 2。它有简单的用户界面,用户之需要拖拽音/视频文件来播放,因为在 GUI 上没有添加媒体文件的选项。 + +![][88] + +关于 MPV 的一件事是它可以轻松播放 4K 视频,Linux 发行版中的其它媒体播放器可能不是这样。它同样给了用户播放在线视频流站点像 YouTube 和 Dailymotion 的能力。 + +``` +$ sudo add-apt-repository ppa:mc3man/mpv-tests +$ sudo apt-get update +$ sudo apt-get install -y mpv +``` + +### **81\. Plume Creator** + +如果你是一个作家,那么 Plume Creator 是你必不可少的应用,因为你在 Ubuntu 上找不到其它应用像 Plume Creater 这样。写作和编辑故事、章节是繁重的任务,在 Plume Creator 这个令人惊奇的工具的帮助下,将大大帮你简化这个任务。 + +![][89] + +它是一个开源的应用,拥有最小的用户界面,开始你可能觉得困惑,但不久你就会习惯的。它提供的功能有:编辑笔记,摘要,导出 HTML 和 ODT 格式支持,富文本编辑。 + +``` +$ sudo apt-get install plume-creator +``` + +### **82\. Chromium Web Browser** + +Chromium 是一个由 Google 开发和发布的开源 web 浏览器。Chromium 就其外观和特点而言,很容易被误认为是 Chrome。它是轻量级和快速的网络浏览器,拥有最小用户界面。 + +### **83\. Simple Weather Indicator** + +Simple Weather Indicator 是用 Python 开发的开源天气提示应用。它自动侦查你的位置,并显示你天气信息像温度,下雨的可能性,湿度,风速和可见度。 + +![][91] + +Weather indicator 自带一些可配置项,例如位置检测,温度 SI 单元,位置可见度开关等等。它是一个酷应用,可以根据你的桌面来调整舒服。 + +### **84\. SpeedCrunch** + +SpeedCrunch 是一个快速和高精度的科学计算器。它预置了数学函数,用户定义函数,复数和单位转换支持。它有简单的用户界面并易于使用。 + +![][92] + +这个科学计算器有令人吃惊的特点像结果预览,语法高亮和自动补全。它是跨平台的,并有多语言支持。 + + +### **85\. Scribus** + +Scribus 是一个免费和开源的桌面出版应用,允许你创建宣传海报,杂志和图书。它是基于 Qt 工具器的跨平台英语,在 GNU 通用公共证书下发布。它是一个专业的应用,拥有 CMYK 和 ICC 颜色管理的功能,基于 Python 的脚本引擎,和 PDF 创建。 + +![][93] + +Scribus 有相当好的用户界面,易于使用,轻松在低配置下使用。在所有的最新的 Linux 发行版的软件中心可以下载它。 + +### **86.** **Cura** + +Cura 是一个由 David Braam 开发的开源的 3D 打印应用。Ultimaker Cura 是 3D 打印世界最流行的软件,并世界范围的百万用户使用。它遵守 3D 打印模型的 3 步:设计,准备,打印。 + +![][94] + +它是功能丰富的 3D 打印应用,为 CAD 插件和扩展提供了无缝支持。它是一款简单易于使用的工具,新手艺术家可以马上开始了。它支持主流的文件格式像 STL,3MF,和 OBJ。 + +### **87\. Nomacs** + +Nomacs 是一款开源、跨平台的图像浏览器,支持所有主流的图片格式包括 RAW 和 psd 图像。它可以浏览 zip 文件中的图像或者 Microsoft Office 文件并提取到目录。 + +![][95] + +Nomacs 拥有非常简单的用户界面,图像缩略图在顶部,它提供了一些基本的图像操作特点像修改,调整大小,旋转,颜色纠正等。 + +``` +$ sudo add-apt-repository ppa:nomacs/stable +$ sudo apt-get update +$ sudo apt-get install nomacs +``` + +### **88\. BitTicker** + +BitTicker 是一个 Ubuntu 上的在线比特币-USDT 收报。它是一个简单的工具,能够连接到 bittrex.com 市场,提取最新的 BTC-USDT 并在系统托盘显示 Ubuntu 时钟。 + +![][96] + +它是简单但有用的工具,如果你经常有规律的投资比特币并且必须了解价格波动的话。 + +### **89\. Organize My Files** + +Organize My Files 是一个跨平台的一键点击文件组织工具,它在 Ubuntu 中以 Snap 包的形式获得。它简单但是强大,帮助你找到未组织的文件,通过简单点击来掌控它们。 + +![][97] + +它有易于理解的用户界面,强大且快速。它提供了很多功能:自动组织,递归组织,智能过滤和多层文件夹组织等。 + +### **90\. GnuCash** + +GnuCash 是一个金融账户软件,在 GNU 公共证书下免费使用。它是个人和商业账户的理想软件。它有简单的用户界面允许你追踪银行账户,股票,收入和花费。 + +![][98] + +GnuCash 实现了双入口记账系统,基于专业的账户原则来确保帐薄的平衡和报告的精确。 + +v### **91\. Calibre** + +Calibre 是一个跨平台开源的面向你所有电子书需要的解决方案。它是一个简单的电子书管理器,提供显示、创建、编辑电子书、组织已存在的电子书到虚拟书库、同步和其它更多。 + +![][99] + +Calibre 同样帮你转换电子书到你需要的格式并发送到你的电子书阅读设备。Calibre 是一个很棒的工具,如果你经常阅读和管理电子书的话。 + +### **92\. 
MATE Dictionary** + +MATE Dictionary 是一个简单的词典,基本上是为 MATE 桌面环境开发的。你可以仅仅输入字然后这个字典会显示意思和引用。 + +![][100] + +它简单和轻量级的在线词典,有最小的用户接口。 + +### **93\. Converseen** + +Converseen 是免费的跨平台的批量图片处理应用,允许你转换、编辑、调整、旋转、裁剪大量图像,仅仅一次鼠标调集。它提供了图像批量重命名的功能,使用前缀或后缀,从一个 Windows 图标文件提取图像等。 + +![][101] + +它有很好的用户界面,易于使用,即时你是一个新手。它也能够转换整个 PDF 文件为图像集合。 + +### **94\. Tiled Map Editor** + +Tiled 是一个免费的水平映射图编辑器,允许你编辑各种形式的投影映射图,例如正交、等轴、和六角的。这个工具对于游戏开发者在游戏引擎开发周期内可能非常有用。 + +![][102] + +Tiled 是一个通用的映射图编辑器,让你创建强大的促进位置,图布局,碰撞区域和敌人位置。它保存所有的数据为 tmx 格式。 + +### **95.** **Qmmp** + +Qmmp 是一个免费和开源的音频播放器,用 C++ 和 Qt 开发的。它是跨平台的音频播放器,界面与 Winamp 很相似。 它有简单的直观的用户界面,Winamp 皮肤可替代默认的 UI。 + +![][103] + +它提供了自动唱片集封面获取,多艺术家支持,另外的插件和扩展支持和其它与 Winamp 相似的特点。 + +``` +$ sudo add-apt-repository ppa:forkotov02/ppa +$ sudo apt-get update +$ sudo apt-get install qmmp qmmp-q4 qmmp-plugin-pack-qt4 +``` + +### **96\. Arora** + +Arora 是一个免费开源的 web 浏览器,提供了专有的下载管理器,书签,私密模式,选项卡式浏览。 + +![][104] + +Arora web 浏览器由 Benjamin C. Meyer 开发,它由于它的轻量自然灵活的特点在 Linux 用户间很受欢迎。 + +### **97\. XnSketch** + +XnSketch 是一个 Ubuntu 上的酷应用,只需要几次点击,就能帮你转换你的图片为卡通或素描图。它展现了 18 个不同的效果例如:锐化、白描、铅笔画和其它的。 + +![][105] + +它有优秀的用户界面,易于使用。一些额外的特点包括容量和边缘强度调整,对比,亮度,饱和度调整。 + +### **98\. Geany** + +Geany 是一个简单轻量级的文本编辑器,像一个集成开发环境。它是跨平台的文本编辑器,支持所有主流编程语言包括Python、C++、LaTex、Pascal、 C#、 etc。 + +``` +$ sudo apt-get install geany +``` + +### **99\. Mumble** + +Mumble 是另一个 IP 上的语音应用相似与 Discord。Mumble 同样最初是为在线游戏者使用的,在端到端的聊天上用了客户端-服务器架构。声音质量在 Mumble 上非常好,它提供了端到端的加密来确保私密性。 + +![][107] + +Mumble 是一个开源应用,有简单的用户界面,易于使用。Mumble 在 Ubuntu 软件中心可以下载。 + +``` +$ sudo apt-get install mumble-server +``` + +### **100\. Deluge** + +Deluge 是一个跨平台的轻量级的 BitTorrent 客户端,能够用来在 Ubuntu 上下载文件。BitTorrent 客户端与很多 Linux 发行版一起发行,但 Deluge 是最好的 BitTorrent 客户端,界面简单,易于使用。 + +![][108] + +Deluge 拥有你发现的其它 BitTorrent 客户端有的所有功能,但是一个功能特别突出,它给了用户从其它设备进入客户端的能力,这样你可以远程下载文件了。 + +``` +$ sudo add-apt-repository ppa:deluge-team/ppa +$ sudo apt-get update +$ sudo apt-get install deluge +``` + +所以这些就是 2018 年我为大家选择的 Ubuntu 上最好的 100 个应用了。所有列出的应用都在 Ubuntu 18.04 上测试了,肯定在老版本上也能运行。请在 [@LinuxHint][109] 和 [@SwapTirthakar][110] 自由分享你的观点。 + +-------------------------------------------------------------------------------- + +via: https://linuxhint.com/100_best_ubuntu_apps/ + +作者:[Swapnil Tirthakar][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[warmfrog](https://github.com/warmfrog) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://linuxhint.com/author/swapnil/ +[1]:https://linuxhint.com/applications-2018-ubuntu/ +[2]:https://linuxhint.com/wp-content/uploads/2018/06/100-Best-Ubuntu-Apps.png +[3]:https://linuxhint.com/wp-content/uploads/2018/06/Chrome.png +[4]:https://linuxhint.com/wp-content/uploads/2018/06/Steam.png +[5]:https://linuxhint.com/wp-content/uploads/2018/06/Wordpress.png +[6]:https://linuxhint.com/wp-content/uploads/2018/06/VLC.png +[7]:https://linuxhint.com/wp-content/uploads/2018/06/Atom-Text-Editor.png +[8]:https://linuxhint.com/wp-content/uploads/2018/06/GIMP.png +[9]:https://linuxhint.com/wp-content/uploads/2018/06/Google-Play.png +[10]:https://www.googleplaymusicdesktopplayer.com/ +[11]:https://linuxhint.com/wp-content/uploads/2018/06/Franz.png +[12]:https://meetfranz.com/#download +[13]:https://linuxhint.com/wp-content/uploads/2018/06/Synaptic.png +[14]:https://linuxhint.com/wp-content/uploads/2018/06/Skype.png +[15]:https://linuxhint.com/wp-content/uploads/2018/06/VirtualBox.png +[16]:https://linuxhint.com/wp-content/uploads/2018/06/Unity-Tweak-Tool.png 
+[17]:https://linuxhint.com/wp-content/uploads/2018/06/Ubuntu-Cleaner.png +[18]:https://linuxhint.com/wp-content/uploads/2018/06/Visual-Studio-Code.png +[19]:https://linuxhint.com/wp-content/uploads/2018/06/Corebird.png +[20]:https://linuxhint.com/wp-content/uploads/2018/06/Pixbuf.png +[21]:https://linuxhint.com/wp-content/uploads/2018/06/Clementine.png +[22]:https://linuxhint.com/wp-content/uploads/2016/06/blender.jpg +[23]:https://linuxhint.com/wp-content/uploads/2018/06/Audacity.png +[24]:https://linuxhint.com/wp-content/uploads/2018/06/Vim.png +[25]:https://linuxhint.com/wp-content/uploads/2018/06/Inkscape-1.png +[26]:https://linuxhint.com/wp-content/uploads/2018/06/ShotCut.png +[27]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Screen-Recorder.png +[28]:https://linuxhint.com/wp-content/uploads/2018/06/Telegram.png +[29]:https://linuxhint.com/wp-content/uploads/2018/06/ClamTk.png +[30]:https://linuxhint.com/wp-content/uploads/2018/06/Mailspring.png +[31]:https://linuxhint.com/wp-content/uploads/2018/06/PyCharm.png +[32]:https://linuxhint.com/wp-content/uploads/2018/06/Caffeine.png +[33]:https://linuxhint.com/wp-content/uploads/2018/06/Etcher.png +[34]:https://etcher.io/ +[35]:https://linuxhint.com/wp-content/uploads/2018/06/Neofetch.png +[36]:https://linuxhint.com/wp-content/uploads/2018/06/Liferea.png +[37]:https://linuxhint.com/wp-content/uploads/2018/06/Shutter.png +[38]:https://linuxhint.com/wp-content/uploads/2018/06/Weather.png +[39]:https://linuxhint.com/wp-content/uploads/2018/06/Ramme.png +[40]:https://github.com/terkelg/ramme/releases +[41]:https://linuxhint.com/wp-content/uploads/2018/06/Thunderbird.png +[42]:https://linuxhint.com/wp-content/uploads/2018/06/Pidgin.png +[43]:https://linuxhint.com/wp-content/uploads/2018/06/Krita.png +[44]:https://linuxhint.com/wp-content/uploads/2018/06/Dropbox.png +[45]:https://linuxhint.com/wp-content/uploads/2018/06/kodi.png +[46]:https://linuxhint.com/wp-content/uploads/2018/06/Spotify.png +[47]:https://linuxhint.com/wp-content/uploads/2018/06/Brackets.png +[48]:https://linuxhint.com/wp-content/uploads/2018/06/Bitwarden.png +[49]:https://linuxhint.com/wp-content/uploads/2018/06/Terminator.png +[50]:https://linuxhint.com/wp-content/uploads/2018/06/Yak-Yak.png +[51]:https://linuxhint.com/wp-content/uploads/2018/06/Thonny.png +[52]:https://linuxhint.com/wp-content/uploads/2018/06/Font-Manager.png +[53]:https://linuxhint.com/wp-content/uploads/2018/06/Atril.png +[54]:https://linuxhint.com/wp-content/uploads/2018/06/Notepadqq.png +[55]:https://linuxhint.com/wp-content/uploads/2018/06/Amarok.png +[56]:https://linuxhint.com/wp-content/uploads/2018/06/Cheese.png +[57]:https://linuxhint.com/wp-content/uploads/2018/06/MyPaint.png +[58]:https://linuxhint.com/wp-content/uploads/2018/06/PlayOnLinux.png +[59]:https://linuxhint.com/wp-content/uploads/2018/06/Akregator.png +[60]:https://linuxhint.com/wp-content/uploads/2018/06/Brave.png +[61]:https://linuxhint.com/wp-content/uploads/2018/06/Bitcoin-Core.png +[62]:https://linuxhint.com/wp-content/uploads/2018/06/Speedy-Duplicate-Finder.png +[63]:https://linuxhint.com/wp-content/uploads/2018/06/Zulip.png +[64]:https://linuxhint.com/wp-content/uploads/2018/06/Okular.png +[65]:https://linuxhint.com/wp-content/uploads/2018/06/Focus-Writer.png +[66]:https://linuxhint.com/wp-content/uploads/2018/06/Guake.png +[67]:https://linuxhint.com/wp-content/uploads/2018/06/KDE-Connect.png +[68]:https://linuxhint.com/wp-content/uploads/2018/06/CopyQ.png 
+[69]:https://linuxhint.com/wp-content/uploads/2018/06/Tilix.png +[70]:https://linuxhint.com/wp-content/uploads/2018/06/Anbox.png +[71]:https://linuxhint.com/wp-content/uploads/2018/06/OpenShot.png +[72]:https://linuxhint.com/wp-content/uploads/2018/06/Plank.png +[73]:https://linuxhint.com/wp-content/uploads/2018/06/FileZilla.png +[74]:https://linuxhint.com/wp-content/uploads/2018/06/Stacer.png +[75]:https://linuxhint.com/wp-content/uploads/2018/06/4K-Video-Downloader.png +[76]:https://www.4kdownload.com/download +[77]:https://linuxhint.com/wp-content/uploads/2018/06/Qalculate.png +[78]:https://linuxhint.com/wp-content/uploads/2018/06/Hiri.png +[79]:https://linuxhint.com/wp-content/uploads/2018/06/Sublime-text.png +[80]:https://linuxhint.com/wp-content/uploads/2018/06/TeXstudio.png +[81]:https://www.texstudio.org/ +[82]:https://linuxhint.com/wp-content/uploads/2018/06/QtQR.png +[83]:https://linuxhint.com/wp-content/uploads/2018/06/Kontact.png +[84]:https://linuxhint.com/wp-content/uploads/2018/06/Nitro-Share.png +[85]:https://linuxhint.com/wp-content/uploads/2018/06/Konversation.png +[86]:https://linuxhint.com/wp-content/uploads/2018/06/Discord.png +[87]:https://linuxhint.com/wp-content/uploads/2018/06/QuiteRSS.png +[88]:https://linuxhint.com/wp-content/uploads/2018/06/MPU-Media-Player.png +[89]:https://linuxhint.com/wp-content/uploads/2018/06/Plume-Creator.png +[90]:https://linuxhint.com/wp-content/uploads/2018/06/Chromium.png +[91]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Weather-Indicator.png +[92]:https://linuxhint.com/wp-content/uploads/2018/06/SpeedCrunch.png +[93]:https://linuxhint.com/wp-content/uploads/2018/06/Scribus.png +[94]:https://linuxhint.com/wp-content/uploads/2018/06/Cura.png +[95]:https://linuxhint.com/wp-content/uploads/2018/06/Nomacs.png +[96]:https://linuxhint.com/wp-content/uploads/2018/06/Bit-Ticker-1.png +[97]:https://linuxhint.com/wp-content/uploads/2018/06/Organize-My-Files.png +[98]:https://linuxhint.com/wp-content/uploads/2018/06/GNU-Cash.png +[99]:https://linuxhint.com/wp-content/uploads/2018/06/Calibre.png +[100]:https://linuxhint.com/wp-content/uploads/2018/06/MATE-Dictionary.png +[101]:https://linuxhint.com/wp-content/uploads/2018/06/Converseen.png +[102]:https://linuxhint.com/wp-content/uploads/2018/06/Tiled-Map-Editor.png +[103]:https://linuxhint.com/wp-content/uploads/2018/06/Qmmp.png +[104]:https://linuxhint.com/wp-content/uploads/2018/06/Arora.png +[105]:https://linuxhint.com/wp-content/uploads/2018/06/XnSketch.png +[106]:https://linuxhint.com/wp-content/uploads/2018/06/Geany.png +[107]:https://linuxhint.com/wp-content/uploads/2018/06/Mumble.png +[108]:https://linuxhint.com/wp-content/uploads/2018/06/Deluge.png +[109]:https://twitter.com/linuxhint +[110]:https://twitter.com/SwapTirthakar + + + + + + + + + + + + + + + + + + diff --git a/translated/tech/20190422 4 open source apps for plant-based diets.md b/translated/tech/20190422 4 open source apps for plant-based diets.md deleted file mode 100644 index 96107a526a..0000000000 --- a/translated/tech/20190422 4 open source apps for plant-based diets.md +++ /dev/null @@ -1,67 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (4 open source apps for plant-based diets) -[#]: via: (https://opensource.com/article/19/4/apps-plant-based-diets) -[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja) - -4 款基于植物饮食的开源应用 -====== -这些应用使素食者、纯素食主义者和那些想吃得更健康的杂食者找到可以吃的食物。 -![][1] - 
-减少对肉类、乳制品和加工食品的消费对地球来说更好,也对你的健康更有益。改变你的饮食习惯可能很困难,但是一些开源的 Android 应用可以帮助你切换成基于植物的饮食。无论你是参加[无肉星期一][2],参加 Mark Bittman 的 [6:00 前的素食][3]指南,还是完全切换到[全植物性饮食][4],这些应用能帮助你找出要吃什么,发现素食和素食友好的餐馆,并轻松地将你的饮食偏好传达给他人,来助你更好地走这条路。所有这些应用都是开源的,可从 [F-Droid 仓库][5]下载。 - -### Daily Dozen - -![Daily Dozen app][6] - -[Daily Dozen][7] 提供了医学博士、美国法律医学院院士 Michael Greger 推荐的项目清单,作为健康饮食和生活方式的一部分。Greger 博士建议食用全食,由多种食物组成的基于植物的饮食,并坚持日常锻炼。该应用可以让你跟踪你吃的每种食物的份数,你喝了多少份水(或其他获准的饮料,如茶),以及你是否每天锻炼。每类食物都提供食物分量和属于该类别的食物清单。例如,十字花科蔬菜类包括白菜、花椰菜、芽甘蓝等许多其他建议。 - -### Food Restrictions - -![Food Restrictions app][8] - -[Food Restrictions][9] 是一个简单的应用,它可以帮助你将饮食限制传达给他人,即使这些人不会说你的语言。用户可以输入七种不同类别的食物限制:鸡肉、牛肉、猪肉、鱼、奶酪、牛奶和辣椒。每种类别都有“我不吃”和“我过敏”选项。“不吃”选项会显示带有红色 X 的图标。“过敏” 选项显示 X 和小骷髅图标。可以使用文本而不是图标显示相同的信息,但文本仅提供英语和葡萄牙语。还有一个选项可以显示一条文字信息,说明用户是素食主义者或纯素食主义者,它比选择更简洁、更准确地总结了这些饮食限制。纯素食主义者的文本清楚地提到不吃鸡蛋和蜂蜜,这在挑选中是没有的。但是,就像挑选方式的文字版本一样,这些句子仅提供英语和葡萄牙语。 - -### OpenFoodFacts - -![Open Food Facts app][10] - -购买杂货时避免不必要的成分可能令人沮丧,但 [OpenFoodFacts][11] 可以帮助简化流程。该应用可让你扫描产品上的条形码,以获得有关产品成分和是否健康的报告。即使产品符合纯素产品的标准,产品仍然可能非常不健康。拥有成分列表和营养成分可让你在购物时做出明智的选择。此应用的唯一缺点是数据是用户贡献的,因此并非每个产品都可有数据,但如果你想回馈项目,你可以贡献新数据。 - -### OpenVegeMap - -![OpenVegeMap app][12] - -使用 [OpenVegeMap][13] 查找你附近的纯素食或素食主义餐厅。此应用可以通过手机的当前位置或者输入地址来搜索。餐厅分类为仅限纯素食者、纯素友好,仅限素食主义者,素食友好者,非素食和未知。该应用使用来自 [OpenStreetMap][14] 的数据和用户提供的有关餐馆的信息,因此请务必仔细检查以确保所提供的信息是最新且准确的。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/4/apps-plant-based-diets - -作者:[Joshua Allen Holm ][a] -选题:[lujun9972][b] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/holmja -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 -[2]: https://www.meatlessmonday.com/ -[3]: https://www.amazon.com/dp/0385344740/ -[4]: https://nutritionstudies.org/whole-food-plant-based-diet-guide/ -[5]: https://f-droid.org/ -[6]: https://opensource.com/sites/default/files/uploads/daily_dozen.png (Daily Dozen app) -[7]: https://f-droid.org/en/packages/org.nutritionfacts.dailydozen/ -[8]: https://opensource.com/sites/default/files/uploads/food_restrictions.png (Food Restrictions app) -[9]: https://f-droid.org/en/packages/br.com.frs.foodrestrictions/ -[10]: https://opensource.com/sites/default/files/uploads/openfoodfacts.png (Open Food Facts app) -[11]: https://f-droid.org/en/packages/openfoodfacts.github.scrachx.openfood/ -[12]: https://opensource.com/sites/default/files/uploads/openvegmap.png (OpenVegeMap app) -[13]: https://f-droid.org/en/packages/pro.rudloff.openvegemap/ -[14]: https://www.openstreetmap.org/ diff --git a/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md b/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md new file mode 100644 index 0000000000..c0c631ccd2 --- /dev/null +++ b/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md @@ -0,0 +1,158 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using Testinfra with Ansible to verify server state) +[#]: via: (https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state) +[#]: author: (Clement Verna 
https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert)

使用 Testinfra 和 Ansible 验证服务器状态
======
Testinfra 是一个功能强大的库,可用于编写测试来验证基础设施的状态。与 Ansible 和 Nagios 相结合,它为基础架构即代码(IaC)提供了一个简单的解决方案。
![Terminal command prompt on orange background][1]

根据设计,[Ansible][2] 传递的是机器的期望状态,以确保 Ansible playbook 或角色的内容部署到目标机器上。但是,如果你需要确保所有基础架构的变更都经由 Ansible 完成,或者想随时验证服务器的状态,该怎么办?

[Testinfra][3] 是一个基础架构测试框架,它可以让你轻松编写单元测试来验证服务器的状态。它是一个 Python 库,使用强大的 [pytest][4] 测试引擎。

### 开始使用 Testinfra

可以使用 Python 包管理器(pip)和 Python 虚拟环境轻松安装 Testinfra。


```
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install testinfra
```

Testinfra 也可以通过 EPEL 仓库安装到 Fedora 和 CentOS 中。例如,在 CentOS 7 上,你可以使用以下命令安装它:


```
$ yum install -y epel-release
$ yum install -y python-testinfra
```

#### 一个简单的测试脚本

在 Testinfra 中编写测试很容易。使用你选择的代码编辑器,将以下内容添加到名为 **test_simple.py** 的文件中:


```
import testinfra

def test_os_release(host):
    assert host.file("/etc/os-release").contains("Fedora")

def test_sshd_inactive(host):
    assert host.service("sshd").is_running is False
```

默认情况下,Testinfra 为测试用例提供了一个 `host` 对象,该对象能访问不同的辅助模块。例如,第一个测试使用 **file** 模块来验证主机上文件的内容,第二个测试用例使用 **service** 模块来检查 systemd 服务的状态。

要在本机运行这些测试,请执行以下命令:


```
(venv)$ pytest test_simple.py
================================ test session starts ================================
platform linux -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
rootdir: /home/cverna/Documents/Python/testinfra
plugins: testinfra-3.0.0
collected 2 items
test_simple.py ..

================================ 2 passed in 0.05 seconds ================================
```

有关 Testinfra API 的完整列表,你可以参考[文档][5]。

### Testinfra 和 Ansible

Testinfra 支持的后端之一是 Ansible,这意味着 Testinfra 可以直接使用 Ansible 的 inventory 文件和 inventory 中定义的一组机器来对它们进行测试。

我们使用以下 inventory 文件作为示例:


```
[web]
app-frontend01
app-frontend02

[database]
db-backend01
```

我们希望确保我们的 Apache Web 服务器在 **app-frontend01** 和 **app-frontend02** 上运行。让我们在名为 **test_web.py** 的文件中编写测试(注意函数名要以 `test_` 开头,pytest 才会收集并执行它):


```
def test_httpd_service(host):
    """Check that the httpd service is running on the host"""
    assert host.service("httpd").is_running
```

要使用 Testinfra 和 Ansible 运行此测试,请使用以下命令:


```
(venv) $ pip install ansible
(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible test_web.py
```

在调用测试时,我们使用 Ansible inventory 的 **[web]** 组作为目标计算机,并指定我们要使用 Ansible 作为连接后端。

#### 使用 Ansible 模块

Testinfra 还为 Ansible 提供了一个很好的可用于测试的 API。该 Ansible 模块能够在测试中运行 Ansible play,并且能够轻松检查 play 的状态。

```
def test_ansible_play(host):
    """
    Verify that a package is installed using Ansible
    package module
    """
    assert not host.ansible("package", "name=httpd state=present")["changed"]
```

默认情况下,Ansible 的[检查模式][6]已启用,这意味着 Ansible 将报告在远程主机上执行 play 时会发生的变化。

### Testinfra 和 Nagios

既然我们可以轻松地运行测试来验证机器的状态,就可以利用这些测试在监控系统上触发警报,这是捕获意外变更的好方法。

Testinfra 提供了与 [Nagios][7] 的集成,Nagios 是一种流行的监控解决方案。默认情况下,Nagios 使用 [NRPE][8] 插件对远程主机进行检查,但使用 Testinfra 可以直接从 Nagios master 上运行测试。

要使 Testinfra 的输出与 Nagios 兼容,我们必须在触发测试时使用 **\--nagios** 标志。我们还使用 **-qq** 这个 pytest 标志来启用 pytest 的**静默**模式,这样就不会显示所有测试细节。

```
(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible --nagios -qq test.py
TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.55 seconds
```

Testinfra 是一个功能强大的库,可用于编写测试以验证基础架构的状态。与 Ansible 和 Nagios 相结合,它为基础架构即代码(IaC)提供了一个简单的解决方案。它也是使用 [Molecule][9] 开发 Ansible 角色过程中添加测试的关键组件。
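
在结束之前,这里再补充一个示意性的例子:结合 pytest 的参数化功能,可以用少量代码批量验证一组软件包和服务的状态。请注意,这只是基于上文 API 的一个草图,其中的文件名 `test_packages.py`、包名和服务名都是假设的示例,需要按你的实际环境调整:

```
# test_packages.py:一个示意性的 Testinfra 测试,包名与服务名均为假设的示例
import pytest

# parametrize 会为列表中的每个包名生成一个独立的测试用例
@pytest.mark.parametrize("name", ["httpd", "openssh-server"])
def test_package_installed(host, name):
    assert host.package(name).is_installed

def test_httpd_enabled_and_running(host):
    # 检查 httpd 服务是否设置为开机自启且正在运行
    svc = host.service("httpd")
    assert svc.is_enabled
    assert svc.is_running
```

运行方式与前面的例子相同,例如 `py.test --hosts=web --ansible-inventory=inventory --connection=ansible test_packages.py`。参数化的好处在于每个软件包都会作为单独的测试用例汇报结果,一旦失败,可以立刻看出是哪个包出了问题。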
+-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state + +作者:[Clement Verna][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background) +[2]: https://www.ansible.com/ +[3]: https://testinfra.readthedocs.io/en/latest/ +[4]: https://pytest.org/ +[5]: https://testinfra.readthedocs.io/en/latest/modules.html#modules +[6]: https://docs.ansible.com/ansible/playbooks_checkmode.html +[7]: https://www.nagios.org/ +[8]: https://en.wikipedia.org/wiki/Nagios#NRPE +[9]: https://github.com/ansible/molecule diff --git a/translated/tech/20190522 Securing telnet connections with stunnel.md b/translated/tech/20190522 Securing telnet connections with stunnel.md new file mode 100644 index 0000000000..cc637cc495 --- /dev/null +++ b/translated/tech/20190522 Securing telnet connections with stunnel.md @@ -0,0 +1,201 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Securing telnet connections with stunnel) +[#]: via: (https://fedoramagazine.org/securing-telnet-connections-with-stunnel/) +[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/) + +使用 stunnel 保护 telnet 连接 +====== + +![][1] + +Telnet 是一种客户端-服务端协议,通过 TCP 的 23 端口连接到远程服务器。Telnet 并不加密数据,被认为是不安全的,因为数据是以明文形式发送的,所以密码很容易被嗅探。但是,仍有老旧系统需要使用它。这就是用到 **stunnel** 的地方。 + +stunnel 旨在为使用不安全连接协议的程序增加 SSL 加密。本文将以 telnet 为例介绍如何使用它。 + +### 服务端安装 + +[使用 sudo][2] 安装 stunnel 以及 telnet 的服务端和客户端: + +``` +sudo dnf -y install stunnel telnet-server telnet +``` + +添加防火墙规则,在提示时输入你的密码: + +``` +firewall-cmd --add-service=telnet --perm +firewall-cmd --reload +``` + +接下来,生成 RSA 私钥和 SSL 证书: + +``` +openssl genrsa 2048 > stunnel.key +openssl req -new -key stunnel.key -x509 -days 90 -out stunnel.crt +``` + +系统将一次提示你输入以下信息。当询问 _Common Name_ 时,你必须输入正确的主机名或 IP 地址,但是你可以按**回车**键跳过其他所有内容。 + +``` +You are about to be asked to enter information that will be +incorporated into your certificate request. +What you are about to enter is what is called a Distinguished Name or a DN. +There are quite a few fields but you can leave some blank +For some fields there will be a default value, +If you enter '.', the field will be left blank. 
+----- +Country Name (2 letter code) [XX]: +State or Province Name (full name) []: +Locality Name (eg, city) [Default City]: +Organization Name (eg, company) [Default Company Ltd]: +Organizational Unit Name (eg, section) []: +Common Name (eg, your name or your server's hostname) []: +Email Address [] +``` + +将 RSA 密钥和 SSL 证书合并到单个 _.pem_ 文件中,并将其复制到 SSL 证书目录: + +``` +cat stunnel.crt stunnel.key > stunnel.pem +sudo cp stunnel.pem /etc/pki/tls/certs/ +``` + +现在可以定义服务和用于加密连接的端口了。选择尚未使用的端口。此例使用 450 端口进行隧道传输 telnet。编辑或创建 _/etc/stunnel/telnet.conf_ : + +``` +cert = /etc/pki/tls/certs/stunnel.pem +sslVersion = TLSv1 +chroot = /var/run/stunnel +setuid = nobody +setgid = nobody +pid = /stunnel.pid +socket = l:TCP_NODELAY=1 +socket = r:TCP_NODELAY=1 +[telnet] +accept = 450 +connect = 23 +``` + +**accept** 选项是服务器将监听传入 **accept** 请求的接口。**connect** 选项是 telnet 服务器的内部监听接口。 + +接下来,创建一个 systemd 单元文件的副本来覆盖原来的版本: + +``` +sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system +``` + +编辑 _/etc/systemd/system/stunnel.service_ 来添加两行。这些行在启动时为服务创建 chroot 监狱。 + +``` +[Unit] +Description=TLS tunnel for network daemons +After=syslog.target network.target + +[Service] +ExecStart=/usr/bin/stunnel +Type=forking +PrivateTmp=true +ExecStartPre=-/usr/bin/mkdir /var/run/stunnel +ExecStartPre=/usr/bin/chown -R nobody:nobody /var/run/stunnel + +[Install] +WantedBy=multi-user.target +``` + +接下来,配置 SELinux 以在你刚刚指定的新端口上监听 telnet: + +``` +sudo semanage port -a -t telnetd_port_t -p tcp 450 +``` + +最后,添加新的防火墙规则: + +``` +firewall-cmd --add-port=450/tcp --perm +firewall-cmd --reload +``` + +现在你可以启用并启动 telnet 和 stunnel。 + +``` +systemctl enable telnet.socket stunnel@telnet.service --now +``` + +要注意 _systemctl_ 命令是有序的。systemd 和 stunnel 包默认提供额外的[模板单元文件][3]。该模板允许你将 stunnel 的多个配置文件放到 _/etc/stunnel_ 中,并使用文件名启动该服务。例如,如果你有一个 _foobar.conf_ 文件,那么可以使用 _systemctl start stunnel@foobar.service_ 启动该 stunnel 实例,而无需自己编写任何单元文件。 + +如果需要,可以将此 stunnel 模板服务设置为在启动时启动: + +``` +systemctl enable stunnel@telnet.service +``` + +### 客户端安装 + +本文的这部分假设你在客户端系统上以普通用户([拥有 sudo 权限][2])身份登录。安装 stunnel 和 telnet 客户端: + +``` +dnf -y install stunnel telnet +``` + +将 _stunnel.pem_ 从远程服务器复制到客户端的 _/etc/pki/tls/certs_ 目录。在此例中,远程 telnet 服务器的 IP 地址为 192.168.1.143。 + +``` +sudo scp myuser@192.168.1.143:/etc/pki/tls/certs/stunnel.pem +/etc/pki/tls/certs/ +``` + +创建 _/etc/stunnel/telnet.conf_: + +``` +cert = /etc/pki/tls/certs/stunnel.pem +client=yes +[telnet] +accept=450 +connect=192.168.1.143:450 +``` + +**accept** 选项是用于 telnet 会话的端口。**connect** 选项是你远程服务器的 IP 地址以及监听的端口。 + +接下来,启用并启动 stunnel: + +``` +systemctl enable stunnel@telnet.service --now +``` + +测试你的连接。由于有一条已建立的连接,你会 telnet 到 _localhost_ 而不是远程 telnet 服务器的主机名或者 IP 地址。 + +``` +[user@client ~]$ telnet localhost 450 +Trying ::1... +telnet: connect to address ::1: Connection refused +Trying 127.0.0.1... +Connected to localhost. +Escape character is '^]'. 
+ +Kernel 5.0.9-301.fc30.x86_64 on an x86_64 (0) +server login: myuser +Password: XXXXXXX +Last login: Sun May 5 14:28:22 from localhost +[myuser@server ~]$ +``` + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/securing-telnet-connections-with-stunnel/ + +作者:[Curt Warfield][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/rcurtiswarfield/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/stunnel-816x345.jpg +[2]: https://fedoramagazine.org/howto-use-sudo/ +[3]: https://fedoramagazine.org/systemd-template-unit-files/ diff --git a/translated/tech/20190527 How to write a good C main function.md b/translated/tech/20190527 How to write a good C main function.md new file mode 100644 index 0000000000..8cce949bfc --- /dev/null +++ b/translated/tech/20190527 How to write a good C main function.md @@ -0,0 +1,479 @@ +[#]: collector: (lujun9972) +[#]: translator: (MjSeven) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to write a good C main function) +[#]: via: (https://opensource.com/article/19/5/how-write-good-c-main-function) +[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny) + +如何写好 C main 函数 +====== +学习如何构造一个 C 文件并编写一个 C main 函数来处理命令行参数,像 champ 一样。 +(to 校正:champ 是一个命令行程序吗?但我并没有找到,这里按一个程序来解释了) +![Hand drawing out the word "code"][1] + +我知道,现在孩子们用 Python 和 JavaScript 编写他们疯狂的“应用程序”。但是不要这么快就否定 C 语言-- 它能够提供很多东西,并且简洁。如果你需要速度,用 C 语言编写可能就是你的答案。如果你正在寻找工作保障或者学习如何捕获[空指针解引用][2],C 语言也可能是你的答案!在本文中,我将解释如何构造一个 C 文件并编写一个 C main 函数来处理像 champ 这样的命令行参数。 + +**我**:一个顽固的 Unix 系统程序员。 +**你**:一个有编辑器,C 编译器,并有时间打发的人。 + +_让我们开工吧。_ + +### 一个无聊但正确的 C 程序 + +![Parody O'Reilly book cover, "Hating Other People's Code"][3] + +一个 C 程序以 **main()** 函数开头,通常保存在名为 **main.c** 的文件中。 + +``` +/* main.c */ +int main(int argc, char *argv[]) { + +} +``` + +这个程序会 _编译_ 但不 _执行_ 任何操作。 + +``` +$ gcc main.c +$ ./a.out -o foo -vv +$ +``` + +正确但无聊。 + +### main 函数是唯一的。 + +**main()** 函数是程序开始执行时执行的第一个函数,但不是第一个执行的函数。_第一个_ 函数是 **_start()**,它通常由 C 运行库提供,在编译程序时自动链接。此细节高度依赖于操作系统和编译器工具链,所以我假装没有提到它。 + +**main()** 函数有两个参数,通常称为 **argc** 和 **argv**,并返回一个有符号整数。大多数 Unix 环境都希望程序在成功时返回 **0**(零),失败时返回 **-1**(负一)。 + +参数 | 名称 | 描述 +---|---|--- +argc | 参数个数 | 参数向量的个数 +argv | 参数向量 | 字符指针数组 + +参数向量 **argv** 是调用程序的命令行的标记化表示形式。在上面的例子中,**argv** 将是以下字符串的列表: + + +``` +`argv = [ "/path/to/a.out", "-o", "foo", "-vv" ];` +``` + +参数向量保证在第一个索引中始终至少有一个字符串 **argv[0]**,这是执行程序的完整路径。 + +### main.c 文件的剖析 + +当我从头开始编写 **main.c** 时,它的结构通常如下: + +``` +/* main.c */ +/* 0 copyright/licensing */ +/* 1 includes */ +/* 2 defines */ +/* 3 external declarations */ +/* 4 typedefs */ +/* 5 全局变量声明 */ +/* 6 函数原型 */ + +int main(int argc, char *argv[]) { +/* 7 命令行解析 */ +} + +/* 8 函数声明 */ +``` + +下面我将讨论这些编号的各个部分,除了编号为 0 的那部分。如果你必须把版权或许可文本放在源代码中,那就放在那里。 + +另一件我不想谈论的事情是注释。 + +``` +"Comments lie." +\- A cynical but smart and good looking programmer. +``` + +使用有意义的函数名和变量名而不是注释。 + +为了迎合程序员固有的惰性,一旦添加了注释,维护负荷就会增加一倍。如果更改或重构代码,则需要更新或扩展注释。随着时间的推移,代码会发生变化,与注释所描述的内容完全不同。 + +如果你必须写注释,不要写关于代码正在做 _什么_,相反,写下 _为什么_ 代码需要这样写。写一些你想在五年后读到的注释,那时你已经将这段代码忘得一干二净。世界的命运取决于你。_不要有压力。_ + +#### 1\. 
+#### 1\. Includes
+
+我添加到 **main.c** 文件的第一个东西是各种 include 文件，它们为程序提供 C 标准库的大量函数和变量。C 标准库能做很多事情。浏览一下 **/usr/include** 中的头文件，了解它们可以为你做些什么。
+
+**#include** 字符串是一条 [C 预处理程序][4]（cpp）指令，它会将所引用的文件完整地包含到当前文件中。C 中的头文件通常以 **.h** 扩展名命名，且不应包含任何可执行代码，只有宏、定义、typedef、外部变量和函数原型。字符串 **`<header.h>`** 告诉 cpp 在系统定义的头文件路径（通常是 **/usr/include** 目录）中查找名为 **header.h** 的文件。
+
+```
+/* main.c */
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <libgen.h>
+#include <errno.h>
+#include <string.h>
+#include <getopt.h>
+#include <sys/types.h>
+```
+
+以下是我默认会包含的最小全局 include 集合：
+
+#include 文件 | 提供的东西
+---|---
+stdio | 提供 FILE、stdin、stdout、stderr 和 fprintf() 函数系列
+stdlib | 提供 malloc()、calloc() 和 realloc()
+unistd | 提供 EXIT_FAILURE、EXIT_SUCCESS
+libgen | 提供 basename() 函数
+errno | 定义外部 errno 变量及其可以取的所有值
+string | 提供 memcpy()、memset() 和 strlen() 函数系列
+getopt | 提供外部的 optarg、opterr、optind 变量和 getopt() 函数
+sys/types | 类型定义的快捷方式，如 uint32_t 和 uint64_t
+
+#### 2\. Defines
+
+```
+/* main.c */
+<...>
+
+#define OPTSTR "vi:o:f:h"
+#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]"
+#define ERR_FOPEN_INPUT "fopen(input, r)"
+#define ERR_FOPEN_OUTPUT "fopen(output, w)"
+#define ERR_DO_THE_NEEDFUL "do_the_needful blew up"
+#define DEFAULT_PROGNAME "george"
+```
+
+现在这些定义看起来意义还不大，但 **OPTSTR** 指明了这个程序将接受的命令行选项。参考 [**getopt(3)**][5] man 页面，了解 **OPTSTR** 将如何影响 **getopt()** 的行为。
+
+**USAGE_FMT** 定义了一个 **printf()** 风格的格式字符串，它在 **usage()** 函数中被引用。
+
+我还喜欢以 **#define** 的形式把字符串常量集中放在文件的这一部分。需要时，把它们收集在一起可以更容易地修正拼写、复用消息以及进行消息国际化。
+
+最后，在命名 **#define** 时全部使用大写字母，以便和变量名、函数名区分开。如果需要，可以把单词连在一起，或用下划线分隔，只要确保全部大写就行。
+
+#### 3\. 外部声明
+
+```
+/* main.c */
+<...>
+
+extern int errno;
+extern char *optarg;
+extern int opterr, optind;
+```
+
+**extern** 声明把这个名称带入当前编译单元（也就是所谓的“文件”）的命名空间，并允许程序访问该变量。这里我们引入了三个整数变量和一个字符指针的定义。**getopt()** 函数会用到几个 **opt** 前缀的变量，而 **errno** 则被 C 标准库用作带外通信通道，用来传达函数失败的原因。
+
+#### 4\. Typedefs
+
+```
+/* main.c */
+<...>
+
+typedef struct {
+  int           verbose;
+  uint32_t      flags;
+  FILE         *input;
+  FILE         *output;
+} options_t;
+```
+
+在外部声明之后，我喜欢为结构体、联合体和枚举声明 **typedef**。给 **typedef** 命名本身就自成一派传统。我非常喜欢用 **_t** 后缀来表示这个名称是一种类型。在这个例子中，我把 **options_t** 声明为一个包含 4 个成员的 **struct**。C 是一种对空白不敏感的编程语言，因此我用空格把字段名排列在同一列中，我就是喜欢它这个样子。对于指针声明，我会在名称前面加上星号，以明确它是一个指针。
+
+#### 5\. 全局变量声明
+
+```
+/* main.c */
+<...>
+
+int dumb_global_variable = -11;
+```
+
+全局变量是个坏主意，你永远不应该使用它们。但如果你不得不用，请在这里声明，并确保给它们一个默认值。说真的，_不要使用全局变量_。
+
+#### 6\. 函数原型
+
+```
+/* main.c */
+<...>
+
+void usage(char *progname, int opt);
+int do_the_needful(options_t *options);
+```
+
+这里放的是函数原型。我在编写函数时会把函数本身放在 **main()** 函数之后而不是之前，所以需要在这里先写出它们的原型。早期的 C 编译器使用单遍策略，这意味着程序中用到的每个符号（变量或函数名称）都必须先声明后使用。现代编译器几乎都是多遍编译器，它们会在生成代码之前构建出完整的符号表，因此并不严格要求函数原型。但是，有时你无法决定你的代码要用什么编译器，所以还是写好函数原型，继续前进吧。
+
+当然，我总是会包含一个 **usage()** 函数，当 **main()** 函数无法理解你从命令行传入的内容时，就会调用它。
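+
+为了直观地感受原型的作用，可以看看下面这个很小的示意程序（`greet()` 是虚构的函数名）：
+
+```
+#include <stdio.h>
+
+void greet(char *name);   /* 函数原型：提前告诉编译器函数的签名 */
+
+int main(int argc, char *argv[]) {
+    /* 如果没有上面的原型，严格的编译器会在这里
+       报 “implicit declaration of function” */
+    greet(argc > 1 ? argv[1] : "world");
+    return 0;
+}
+
+void greet(char *name) {   /* 函数本身放在 main() 之后 */
+    printf("hello, %s\n", name);
+}
+```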
+#### 7\. 命令行解析
+
+```
+/* main.c */
+<...>
+
+int main(int argc, char *argv[]) {
+    int opt;
+    options_t options = { 0, 0x0, stdin, stdout };
+
+    opterr = 0;
+
+    while ((opt = getopt(argc, argv, OPTSTR)) != EOF)
+        switch(opt) {
+            case 'i':
+                if (!(options.input = fopen(optarg, "r"))) {
+                    perror(ERR_FOPEN_INPUT);
+                    exit(EXIT_FAILURE);
+                    /* NOTREACHED */
+                }
+                break;
+
+            case 'o':
+                if (!(options.output = fopen(optarg, "w"))) {
+                    perror(ERR_FOPEN_OUTPUT);
+                    exit(EXIT_FAILURE);
+                    /* NOTREACHED */
+                }
+                break;
+
+            case 'f':
+                options.flags = (uint32_t)strtoul(optarg, NULL, 16);
+                break;
+
+            case 'v':
+                options.verbose += 1;
+                break;
+
+            case 'h':
+            default:
+                usage(basename(argv[0]), opt);
+                /* NOTREACHED */
+                break;
+        }
+
+    if (do_the_needful(&options) != EXIT_SUCCESS) {
+        perror(ERR_DO_THE_NEEDFUL);
+        exit(EXIT_FAILURE);
+        /* NOTREACHED */
+    }
+
+    return EXIT_SUCCESS;
+}
+```
+
+好吧，代码有点多。**main()** 函数的目的是收集用户提供的参数，执行最基本的输入验证，然后把收集到的参数传递给使用它们的函数。这个示例声明了一个用默认值初始化的 **options** 变量，然后解析命令行，按需更新 **options**。
+
+**main()** 函数的核心是一个 **while** 循环，它使用 **getopt()** 来遍历 **argv**，寻找命令行选项及其参数（如果有的话）。文件前面定义的 **OPTSTR** 是驱动 **getopt()** 行为的模板。**opt** 变量接受 **getopt()** 找到的任何命令行选项的字符值，程序对检测到命令行选项的响应发生在 **switch** 语句中。
+
+现在你可能会问：既然预期的是 8 位的 **char**，为什么 **opt** 被声明为 32 位的 **int**？事实上 **getopt()** 返回的是一个 **int**，当它到达 **argv** 末尾时会取负值，我用 **EOF**（_文件结束_ 标记）与之匹配。是的，**char** 是有符号的，但我还是喜欢让变量和函数返回值的类型保持一致。
+
+当检测到一个已知的命令行选项时，就会发生该选项特定的行为。有些选项带有一个参数，它们在 **OPTSTR** 中以一个尾随冒号标记。当一个选项带参数时，**argv** 中的下一个字符串会通过外部定义的变量 **optarg** 提供给程序。我使用 **optarg** 来打开文件进行读写，或者把命令行参数从字符串转换成整数值。
+
+这里有几个关于风格的要点：
+
+  * 将 **opterr** 初始化为 0，禁止 **getopt** 输出 **?**。
+  * 在 **main()** 的中间使用 **exit(EXIT_FAILURE);** 或 **exit(EXIT_SUCCESS);**。
+  * **/* NOTREACHED */** 是我喜欢的一个 lint 指令。
+  * 在返回 int 类型的函数末尾使用 **return EXIT_SUCCESS;**。
+  * 对隐式类型转换使用显式强制转换。
+
+这个程序编译之后，它的命令行签名看起来是这样的：
+
+```
+$ ./a.out -h
+a.out [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]
+```
+
+事实上，这就是编译后的 **usage()** 函数向 **stderr** 输出的内容。
+
+#### 8\. 函数声明
+
+```
+/* main.c */
+<...>
+
+void usage(char *progname, int opt) {
+    fprintf(stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME);
+    exit(EXIT_FAILURE);
+    /* NOTREACHED */
+}
+
+int do_the_needful(options_t *options) {
+
+    if (!options) {
+        errno = EINVAL;
+        return EXIT_FAILURE;
+    }
+
+    if (!options->input || !options->output) {
+        errno = ENOENT;
+        return EXIT_FAILURE;
+    }
+
+    /* XXX do needful stuff */
+
+    return EXIT_SUCCESS;
+}
+```
+
+最后是我编写的并非样板的函数。在本例中，函数 **do_the_needful()** 接受一个指向 **options_t** 结构体的指针。我先验证 **options** 指针不为 **NULL**，然后继续验证 **input** 和 **output** 结构成员。如果其中一个测试失败，就返回 **EXIT_FAILURE**，并且通过把外部全局变量 **errno** 设置为常见的错误代码，向调用者指明失败的大致原因。调用者可以使用便捷函数 **perror()**，根据 **errno** 的值输出人类可读的错误消息。
+
+函数几乎总是要以某种方式验证它们的输入。如果完整验证的代价很高，那就尝试只验证一次，然后把验证过的数据视为不可变。**usage()** 函数在 **fprintf()** 调用中用一个条件赋值验证了 **progname** 参数。**usage()** 函数反正都要退出，所以我懒得再设置 **errno**，也懒得为没拿到正确的程序名而大吵一场。
+
+在这里，我要避免的最大错误是解引用 **NULL** 指针。这会导致操作系统向我的进程发送一个名为 **SIGSEGV** 的特殊信号，让进程不可避免地死亡。用户最不想看到的就是由 **SIGSEGV** 引起的崩溃。检查 **NULL** 指针要好得多：既能输出更有意义的错误消息，又能优雅地关闭程序。
+
+有些人会抱怨在函数体中有多个 **return** 语句，他们争论“控制流的连续性”之类的东西。老实说，如果函数中间出现了错误，那就是返回错误条件的好时机。为了只留一个 return 而写一大堆嵌套的 **if** 语句，绝不是一个“好主意”™。
+
+最后，如果你编写的函数接受四个或更多参数，请考虑把它们绑定到一个结构体中，并传递一个指向该结构体的指针。这会让函数签名更简单、更容易记住，以后调用时也不容易出错。它还能让函数调用稍微快一些，因为需要复制到函数栈中的东西更少了。在实践中，只有当函数被调用数百万甚至数十亿次时，这一点才值得考虑。如果觉得这说不通，也不用担心。
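+
+用一个小小的假想例子来说明最后这一点（函数名和字段名都是虚构的）：
+
+```
+#include <stdint.h>
+
+/* 与其传一长串参数…… */
+int render(int x, int y, int width, int height, uint32_t color);
+
+/* ……不如把它们收进一个结构体，只传一个指针： */
+typedef struct {
+  int       x, y;
+  int       width, height;
+  uint32_t  color;
+} render_args_t;
+
+int render_args(render_args_t *args);
+```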
+### 等等，你不是说不要写注释吗!?!!
+
+在 **do_the_needful()** 函数中，我写了一种特殊类型的注释，它的设计意图是占位符，而不是说明代码：
+
+```
+/* XXX do needful stuff */
+```
+
+当你正写在兴头上时，有时并不想停下来处理某段特别棘手的代码，宁可以后再来写它。这就是我给自己留面包屑的地方：我插入一条带 **XXX** 前缀的注释，以及一段描述需要做什么的简短说明。之后等我有更多时间了，就在源代码中搜索 **XXX**。用什么前缀并不重要，只要确保它不太可能以函数名或变量的形式出现在代码库的其他上下文中。
+
+### 把它们放在一起
+
+好吧，把这个程序编译出来之后，它 _仍然_ 几乎什么也做不了。但是现在，你有了一个坚实的骨架，可以在它之上构建你自己的命令行解析 C 程序。
+
+```
+/* main.c - the complete listing */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <libgen.h>
+#include <errno.h>
+#include <string.h>
+#include <getopt.h>
+#include <sys/types.h>
+
+#define OPTSTR "vi:o:f:h"
+#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]"
+#define ERR_FOPEN_INPUT "fopen(input, r)"
+#define ERR_FOPEN_OUTPUT "fopen(output, w)"
+#define ERR_DO_THE_NEEDFUL "do_the_needful blew up"
+#define DEFAULT_PROGNAME "george"
+
+extern int errno;
+extern char *optarg;
+extern int opterr, optind;
+
+typedef struct {
+  int           verbose;
+  uint32_t      flags;
+  FILE         *input;
+  FILE         *output;
+} options_t;
+
+int dumb_global_variable = -11;
+
+void usage(char *progname, int opt);
+int do_the_needful(options_t *options);
+
+int main(int argc, char *argv[]) {
+    int opt;
+    options_t options = { 0, 0x0, stdin, stdout };
+
+    opterr = 0;
+
+    while ((opt = getopt(argc, argv, OPTSTR)) != EOF)
+        switch(opt) {
+            case 'i':
+                if (!(options.input = fopen(optarg, "r"))) {
+                    perror(ERR_FOPEN_INPUT);
+                    exit(EXIT_FAILURE);
+                    /* NOTREACHED */
+                }
+                break;
+
+            case 'o':
+                if (!(options.output = fopen(optarg, "w"))) {
+                    perror(ERR_FOPEN_OUTPUT);
+                    exit(EXIT_FAILURE);
+                    /* NOTREACHED */
+                }
+                break;
+
+            case 'f':
+                options.flags = (uint32_t)strtoul(optarg, NULL, 16);
+                break;
+
+            case 'v':
+                options.verbose += 1;
+                break;
+
+            case 'h':
+            default:
+                usage(basename(argv[0]), opt);
+                /* NOTREACHED */
+                break;
+        }
+
+    if (do_the_needful(&options) != EXIT_SUCCESS) {
+        perror(ERR_DO_THE_NEEDFUL);
+        exit(EXIT_FAILURE);
+        /* NOTREACHED */
+    }
+
+    return EXIT_SUCCESS;
+}
+
+void usage(char *progname, int opt) {
+    fprintf(stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME);
+    exit(EXIT_FAILURE);
+    /* NOTREACHED */
+}
+
+int do_the_needful(options_t *options) {
+
+    if (!options) {
+        errno = EINVAL;
+        return EXIT_FAILURE;
+    }
+
+    if (!options->input || !options->output) {
+        errno = ENOENT;
+        return EXIT_FAILURE;
+    }
+
+    /* XXX do needful stuff */
+
+    return EXIT_SUCCESS;
+}
+```
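+
+不妨顺手编译、运行一下这个骨架，确认它确实能工作（文件名 `skeleton` 是随手取的，输出仅为示意）：
+
+```
+$ gcc -Wall -o skeleton main.c
+$ ./skeleton -h
+skeleton [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]
+```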
+
+现在，你已经准备好编写更易于维护的 C 代码了。如果你有任何问题或反馈，请在评论中分享。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/how-write-good-c-main-function
+
+作者：[Erik O'Shaughnessy][a]
+选题：[lujun9972][b]
+译者：[MjSeven](https://github.com/MjSeven)
+校对：[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jnyjny
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code")
+[2]: https://www.owasp.org/index.php/Null_Dereference
+[3]: https://opensource.com/sites/default/files/uploads/hatingotherpeoplescode-big.png (Parody O'Reilly book cover, "Hating Other People's Code")
+[4]: https://en.wikipedia.org/wiki/C_preprocessor
+[5]: https://linux.die.net/man/3/getopt
+[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
+[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
+[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
+[9]: http://www.opengroup.org/onlinepubs/009695399/functions/strtoul.html
+[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
diff --git a/translated/tech/20190529 NVMe on Linux.md b/translated/tech/20190529 NVMe on Linux.md
new file mode 100644
index 0000000000..36ccc1a0fa
--- /dev/null
+++ b/translated/tech/20190529 NVMe on Linux.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: (warmfrog)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (NVMe on Linux)
+[#]: via: (https://www.networkworld.com/article/3397006/nvme-on-linux.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+Linux 上的 NVMe
+===============
+
+如果你还没注意到，一些极速的固态磁盘技术已经可以用在 Linux 和其他操作系统上了。
+![Sandra Henry-Stocker][1]
+
+NVMe 代表“非易失性内存高速通道”（non-volatile memory express），它是一种主机控制器接口和存储协议，用于加速企业和客户端系统与固态驱动器（SSD）之间的数据传输。它通过电脑的 PCIe（高速外设组件互连）总线工作。每当我看到这些名词，我就满心“羡慕”，而羡慕的原因很重要。
+
+使用 NVMe，数据传输的速度比旋转磁盘快得多。事实上，NVMe 驱动器能比 SATA SSD 快 7 倍，也就是比我们今天很多人在用的固态硬盘还要快 7 倍。这意味着，如果你用一块 NVMe 驱动器作为启动盘，你的系统就能启动得非常快。事实上，如今任何购买新系统的人可能都不会考虑那些没有自带 NVMe 的机器，不管是服务器还是个人电脑。
+
+### NVMe 在 Linux 下能工作吗？
+
+能！NVMe 自 Linux 内核 3.3 版本起就被支持了。然而，如果要升级系统，通常同时需要一个 NVMe 控制器和一块 NVMe 磁盘。一些外置磁盘也可以用，但是要把它们接到系统上，需要的可不只是普通的 USB 接口。
+
+先用下列命令检查内核版本：
+
+```
+$ uname -r
+5.0.0-15-generic
+```
+
+如果你的系统已经用上了 NVMe，你会看到一个设备（例如 /dev/nvme0），但前提是你安装了 NVMe 控制器。如果没有，你可以用下列命令获取系统支持 NVMe 的相关信息：
+
+```
+$ modinfo nvme | head -6
+filename: /lib/modules/5.0.0-15-generic/kernel/drivers/nvme/host/nvme.ko
+version: 1.0
+license: GPL
+author: Matthew Wilcox
+srcversion: AA383008D5D5895C2E60523
+alias: pci:v0000106Bd00002003sv*sd*bc*sc*i*
+```
+
+### 了解更多
+
+如果你想了解极速的 NVMe 存储的更多细节，可参阅 _[PCWorld][3]_ 上的文章。
+
+规范、白皮书和其他资源可在 [NVMexpress.org][4] 获取。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397006/nvme-on-linux.html
+
+作者：[Sandra Henry-Stocker][a]
+选题：[lujun9972][b]
+译者：[warmfrog](https://github.com/warmfrog)
+校对：[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/nvme-100797708-large.jpg
+[2]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb
+[3]: https://www.pcworld.com/article/2899351/everything-you-need-to-know-about-nvme.html
+[4]: https://nvmexpress.org/
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world