From 13c9edd44a0325bf40fb579f61a45f695d7d54ba Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:00:49 +0800 Subject: [PATCH 01/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190520=20How=20To?= =?UTF-8?q?=20Map=20Oracle=20ASM=20Disk=20Against=20Physical=20Disk=20And?= =?UTF-8?q?=20LUNs=20In=20Linux=3F=20sources/tech/20190520=20How=20To=20Ma?= =?UTF-8?q?p=20Oracle=20ASM=20Disk=20Against=20Physical=20Disk=20And=20LUN?= =?UTF-8?q?s=20In=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Against Physical Disk And LUNs In Linux.md | 229 ++++++++++++++++++ 1 file changed, 229 insertions(+) create mode 100644 sources/tech/20190520 How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux.md diff --git a/sources/tech/20190520 How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux.md b/sources/tech/20190520 How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux.md new file mode 100644 index 0000000000..4e9df8a0ff --- /dev/null +++ b/sources/tech/20190520 How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux.md @@ -0,0 +1,229 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux?) +[#]: via: (https://www.2daygeek.com/shell-script-map-oracle-asm-disks-physical-disk-lun-in-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux? +====== + +You might already know about ASM, Device Mapper Multipathing (DM-Multipathing) if you are working quit long time as a Linux administrator. + +There are multiple ways to check these information. However, you will be getting part of the information when you use the default commands. + +It doesn’t show you all together in the single output. + +If you want to check all together in the single output then we need to write a small shell script to achieve this. + +We have added two shell script to get those information and you can use which one is suitable for you. + +Major and Minor numbers can be used to match the physical devices in Linux system. + +This tutorial helps you to find which ASM disk maps to which Linux partition or DM Device. + +If you want to **[manage Oracle ASM disks][1]** (such as start, enable, stop, list, query and etc) then navigate to following URL. + +### What Is ASMLib? + +ASMLib is an optional support library for the Automatic Storage Management feature of the Oracle Database. + +Automatic Storage Management (ASM) simplifies database administration and greatly reduces kernel resource usage (e.g. the number of open file descriptors). + +It eliminates the need for the DBA to directly manage potentially thousands of Oracle database files, requiring only the management of groups of disks allocated to the Oracle Database. + +ASMLib allows an Oracle Database using ASM more efficient and capable access to the disk groups it is using. + +### What Is Device Mapper Multipathing (DM-Multipathing)? + +Device Mapper Multipathing or DM-multipathing is a Linux host-side native multipath tool, which allows us to configure multiple I/O paths between server nodes and storage arrays into a single device by utilizing device-mapper. + +### Method-1 : Shell Script To Map ASM Disks To Physical Devices? + +In this shell script we are using for loop to achieve the results. + +Also, we are not using any ASM related commands. 
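Before looking at the script, it may help to see the manual equivalent for a single disk, so it is clear what is being automated. Unlike the Method-1 script below, this one-off check does use the oracleasm tooling, and the disk name and the [253, 1] major/minor pair are only illustrative values taken from the sample outputs later in this article:

```
# /etc/init.d/oracleasm querydisk -d MP4E6D_DATA01

# ls -l /dev/* | grep ^b | grep "253, *1"
```

The first command prints the [major, minor] pair of the ASM disk, and the second one greps all block devices for that same pair, which reveals the matching DM device. The script below repeats this lookup for every ASM disk and prints one line per disk.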
+ +``` +# vi asm_disk_mapping.sh + +#!/bin/bash + +ls -lh /dev/oracleasm/disks > /tmp/asmdisks1.txt + +for ASMdisk in `cat /tmp/asmdisks1.txt | tail -n +2 | awk '{print $10}'` + +do + +minor=$(grep -i "$ASMdisk" /tmp/asmdisks1.txt | awk '{print $6}') + +major=$(grep -i "$ASMdisk" /tmp/asmdisks1.txt | awk '{print $5}' | cut -d"," -f1) + +phy_disk=$(ls -l /dev/* | grep ^b | grep "$major, *$minor" | awk '{print $10}') + +echo "ASM disk $ASMdisk is associated on $phy_disk [$major, $minor]" + +done +``` + +Set an executable permission to port_scan.sh file. + +``` +$ chmod +x asm_disk_mapping.sh +``` + +Finally run the script to achieve this. + +``` +# sh asm_disk_mapping.sh + +ASM disk MP4E6D_DATA01 is associated on /dev/dm-1 +3600a0123456789012345567890234q11 [253, 1] +ASM disk MP4E6E_DATA02 is associated on /dev/dm-2 +3600a0123456789012345567890234q12 [253, 2] +ASM disk MP4E6F_DATA03 is associated on /dev/dm-3 +3600a0123456789012345567890234q13 [253, 3] +ASM disk MP4E70_DATA04 is associated on /dev/dm-4 +3600a0123456789012345567890234q14 [253, 4] +ASM disk MP4E71_DATA05 is associated on /dev/dm-5 +3600a0123456789012345567890234q15 [253, 5] +ASM disk MP4E72_DATA06 is associated on /dev/dm-6 +3600a0123456789012345567890234q16 [253, 6] +ASM disk MP4E73_DATA07 is associated on /dev/dm-7 +3600a0123456789012345567890234q17 [253, 7] +``` + +### Method-2 : Shell Script To Map ASM Disks To Physical Devices? + +In this shell script we are using while loop to achieve the results. + +Also, we are using ASM related commands. + +``` +# vi asm_disk_mapping_1.sh + +#!/bin/bash + +/etc/init.d/oracleasm listdisks > /tmp/asmdisks.txt + +while read -r ASM_disk + +do + +major="$(/etc/init.d/oracleasm querydisk -d $ASM_disk | awk -F[ '{ print $2 }'| awk -F] '{ print $1 }' | cut -d"," -f1)" + +minor="$(/etc/init.d/oracleasm querydisk -d $ASM_disk | awk -F[ '{ print $2 }'| awk -F] '{ print $1 }' | cut -d"," -f2)" + +phy_disk="$(ls -l /dev/* | grep ^b | grep "$major, *$minor" | awk '{ print $10 }')" + +echo "ASM disk $ASM_disk is associated on $phy_disk [$major, $minor]" + +done < /tmp/asmdisks.txt +``` + +Set an executable permission to port_scan.sh file. + +``` +$ chmod +x asm_disk_mapping_1.sh +``` + +Finally run the script to achieve this. + +``` +# sh asm_disk_mapping_1.sh + +ASM disk MP4E6D_DATA01 is associated on /dev/dm-1 +3600a0123456789012345567890234q11 [253, 1] +ASM disk MP4E6E_DATA02 is associated on /dev/dm-2 +3600a0123456789012345567890234q12 [253, 2] +ASM disk MP4E6F_DATA03 is associated on /dev/dm-3 +3600a0123456789012345567890234q13 [253, 3] +ASM disk MP4E70_DATA04 is associated on /dev/dm-4 +3600a0123456789012345567890234q14 [253, 4] +ASM disk MP4E71_DATA05 is associated on /dev/dm-5 +3600a0123456789012345567890234q15 [253, 5] +ASM disk MP4E72_DATA06 is associated on /dev/dm-6 +3600a0123456789012345567890234q16 [253, 6] +ASM disk MP4E73_DATA07 is associated on /dev/dm-7 +3600a0123456789012345567890234q17 [253, 7] +``` + +### How To List Oracle ASM Disks? + +If you would like to list only Oracle ASM disk then use the below command to List available/created Oracle ASM disks in Linux. + +``` +# oracleasm listdisks + +ASM_Disk1 +ASM_Disk2 +ASM_Disk3 +ASM_Disk4 +ASM_Disk5 +ASM_Disk6 +ASM_Disk7 +``` + +### How To List Oracle ASM Disks Against Major And Minor Number? + +If you would like to map Oracle ASM disks against major and minor number then use the below commands to List available/created Oracle ASM disks in Linux. 
+ +``` +# for ASMdisk in `oracleasm listdisks`; do /etc/init.d/oracleasm querydisk -d $ASMdisk; done + +Disk "ASM_Disk1" is a valid Disk on device [253, 1] +Disk "ASM_Disk2" is a valid Disk on device [253, 2] +Disk "ASM_Disk3" is a valid Disk on device [253, 3] +Disk "ASM_Disk4" is a valid Disk on device [253, 4] +Disk "ASM_Disk5" is a valid Disk on device [253, 5] +Disk "ASM_Disk6" is a valid Disk on device [253, 6] +Disk "ASM_Disk7" is a valid Disk on device [253, 7] +``` + +Alternatively, we can get the same results using the ls command. + +``` +# ls -lh /dev/oracleasm/disks + +total 0 +brw-rw---- 1 oracle oinstall 253, 1 May 19 14:44 ASM_Disk1 +brw-rw---- 1 oracle oinstall 253, 2 May 19 14:44 ASM_Disk2 +brw-rw---- 1 oracle oinstall 253, 3 May 19 14:44 ASM_Disk3 +brw-rw---- 1 oracle oinstall 253, 4 May 19 14:44 ASM_Disk4 +brw-rw---- 1 oracle oinstall 253, 5 May 19 14:44 ASM_Disk5 +brw-rw---- 1 oracle oinstall 253, 6 May 19 14:44 ASM_Disk6 +brw-rw---- 1 oracle oinstall 253, 7 May 19 14:44 ASM_Disk7 +``` + +### How To List Physical Disks Against LUNs? + +If you would like to map physical disks against LUNs then use the below command. + +``` +# multipath -ll | grep NETAPP + +3600a0123456789012345567890234q11 dm-1 NETAPP,LUN C-Mode +3600a0123456789012345567890234q12 dm-2 NETAPP,LUN C-Mode +3600a0123456789012345567890234q13 dm-3 NETAPP,LUN C-Mode +3600a0123456789012345567890234q14 dm-4 NETAPP,LUN C-Mode +3600a0123456789012345567890234q15 dm-5 NETAPP,LUN C-Mode +3600a0123456789012345567890234q16 dm-6 NETAPP,LUN C-Mode +3600a0123456789012345567890234q17 dm-7 NETAPP,LUN C-Mode +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/shell-script-map-oracle-asm-disks-physical-disk-lun-in-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/start-stop-restart-enable-reload-oracleasm-service-linux-create-scan-list-query-rename-delete-configure-oracleasm-disk/ From 11cb9b0485829b7d9b472e87b2c331aee041be04 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:01:01 +0800 Subject: [PATCH 02/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190520=20xsos=20?= =?UTF-8?q?=E2=80=93=20A=20Tool=20To=20Read=20SOSReport=20In=20Linux=20sou?= =?UTF-8?q?rces/tech/20190520=20xsos=20-=20A=20Tool=20To=20Read=20SOSRepor?= =?UTF-8?q?t=20In=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...sos - A Tool To Read SOSReport In Linux.md | 406 ++++++++++++++++++ 1 file changed, 406 insertions(+) create mode 100644 sources/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md diff --git a/sources/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md b/sources/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md new file mode 100644 index 0000000000..af4c38f06d --- /dev/null +++ b/sources/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md @@ -0,0 +1,406 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (xsos – A Tool To Read SOSReport In Linux) +[#]: via: (https://www.2daygeek.com/xsos-a-tool-to-read-sosreport-in-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +xsos – A Tool To Read 
SOSReport In Linux
======

We are all already familiar with **[sosreport][1]**. It is used to collect system information that can be used for diagnostics.

Red Hat support advises us to provide a sosreport when we raise a case with them, so that they can analyze the current system status.

It collects all kinds of reports that can help the user identify the root cause of an issue.

We can easily extract a sosreport, but it is still difficult to read, since it creates a separate file for everything.

If you are looking for a performance bottleneck tool, then I would recommend you check the **[oswbb (OSWatcher) utility][2]**.

So, what is the best way to read it all together, with syntax highlighting, in Linux?

Yes, it can be achieved via the xsos tool.

### What Is sosreport?

The sosreport command is a tool that collects a bunch of configuration details, system information and diagnostic information from a running system (especially RHEL & OEL systems).

It helps technical support engineers analyze the system in many aspects.

The report contains a bunch of information about the system such as boot information, filesystem, memory, hostname, installed rpms, system IP, networking details, OS version, installed kernel, loaded kernel modules, list of open files, list of PCI devices, mount points and their details, running process information, process tree output, system routing, all the configuration files located in the /etc folder, and all the log files located in the /var folder.

Generating the report takes a while, and the time depends on your system installation and configuration.

Once completed, sosreport generates a compressed archive file under the /tmp directory.

### What Is xsos?

[xsos][3] is a tool that helps the user easily read a sosreport on Linux systems. In other words, we can call it a sosreport examiner.

It instantly summarizes system info from a sosreport or from a running system.

xsos attempts to make this easy, parsing, calculating and formatting data from dozens of files (and commands) to give you a detailed overview of a system.

You can instantly summarize your system information by running the following command.

```
# curl -Lo ./xsos bit.ly/xsos-direct; chmod +x ./xsos; ./xsos -ya
```

[![][4]![][4]][5]

### How To Install xsos In Linux?

We can easily install xsos using one of the following two methods.

If you are looking for the latest bleeding-edge version, use the following steps.

```
# curl -Lo /usr/local/bin/xsos bit.ly/xsos-direct

# chmod +x /usr/local/bin/xsos
```

This is the recommended method to install xsos. It installs xsos from an rpm package.

```
# yum install http://people.redhat.com/rsawhill/rpms/latest-rsawaroha-release.rpm

# yum install xsos
```

### How To Use xsos In Linux?

Once xsos is installed by one of the above methods, simply run the xsos command without any options, which shows you basic information about your system.
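If you first want a quick sanity check that the installation worked, and a list of every available switch, the help output covers it (this assumes your xsos build supports the usual `--help` option):

```
# xsos --help
```

Run with no options at all, xsos prints a summary like the following.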
+ +``` +# xsos + +OS + Hostname: CentOS7.2daygeek.com + Distro: [redhat-release] CentOS Linux release 7.6.1810 (Core) + [centos-release] CentOS Linux release 7.6.1810 (Core) + [os-release] CentOS Linux 7 (Core) 7 (Core) + RHN: (missing) + RHSM: (missing) + YUM: 2 enabled plugins: fastestmirror, langpacks + Runlevel: N 5 (default graphical) + SELinux: enforcing (default enforcing) + Arch: mach=x86_64 cpu=x86_64 platform=x86_64 + Kernel: + Booted kernel: 3.10.0-957.el7.x86_64 + GRUB default: 3.10.0-957.el7.x86_64 + Build version: + Linux version 3.10.0-957.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red + Hat 4.8.5-36) (GCC) ) #1 SMP Thu Nov 8 23:39:32 UTC 2018 + Booted kernel cmdline: + root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet + LANG=en_US.UTF-8 + GRUB default kernel cmdline: + root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet + LANG=en_US.UTF-8 + Taint-check: 0 (kernel untainted) + - - - - - - - - - - - - - - - - - - - + Sys time: Sun May 12 10:05:21 CDT 2019 + Boot time: Sun May 12 09:50:20 CDT 2019 (epoch: 1557672620) + Time Zone: America/Chicago + Uptime: 15 min, 1 user + LoadAvg: [1 CPU] 0.00 (0%), 0.04 (4%), 0.09 (9%) + /proc/stat: + procs_running: 2 procs_blocked: 0 processes [Since boot]: 6423 + cpu [Utilization since boot]: + us 1%, ni 0%, sys 1%, idle 99%, iowait 0%, irq 0%, sftirq 0%, steal 0% +``` + +### How To Use xsos Command To View Generated sosreport Output In Linux? + +We need the sosreport to read further using xsos command. To do so, navigate the following URL to install and generate sosreport on Linux. + +Yes, i have already generated a sosreport and file is below. + +``` +# ls -lls -lh /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa.tar.xz +9.8M -rw-------. 1 root root 9.8M May 12 10:13 /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa.tar.xz +``` + +Run the following command to untar it. + +``` +# tar xf sosreport-CentOS7-01-1005-2019-05-12-pomeqsa.tar.xz +``` + +To view all the info, run xsos with `-a, --all` switch. + +``` +# xsos --all /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +``` + +To view the bios info, run xsos with `-b, --bios` switch. + +``` +# xsos --bios /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +DMIDECODE + BIOS: + Vend: innotek GmbH + Vers: VirtualBox + Date: 12/01/2006 + BIOS Rev: + FW Rev: + System: + Mfr: innotek GmbH + Prod: VirtualBox + Vers: 1.2 + Ser: 0 + UUID: 002f47b8-2af2-48f5-be1d-67b67e03514c + CPU: + 0 of 0 CPU sockets populated, 0 cores/0 threads per CPU + 0 total cores, 0 total threads + Mfr: + Fam: + Freq: + Vers: + Memory: + Total: 0 MiB (0 GiB) + DIMMs: 0 of 0 populated + MaxCapacity: 0 MiB (0 GiB / 0.00 TiB) +``` + +To view the system basic info such as hostname, distro, SELinux, kernel info, uptime, etc, run xsos with `-o, --os` switch. 
+ +``` +# xsos --os /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +OS + Hostname: CentOS7.2daygeek.com + Distro: [redhat-release] CentOS Linux release 7.6.1810 (Core) + [centos-release] CentOS Linux release 7.6.1810 (Core) + [os-release] CentOS Linux 7 (Core) 7 (Core) + RHN: (missing) + RHSM: (missing) + YUM: 2 enabled plugins: fastestmirror, langpacks + SELinux: enforcing (default enforcing) + Arch: mach=x86_64 cpu=x86_64 platform=x86_64 + Kernel: + Booted kernel: 3.10.0-957.el7.x86_64 + GRUB default: 3.10.0-957.el7.x86_64 + Build version: + Linux version 3.10.0-957.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red + Hat 4.8.5-36) (GCC) ) #1 SMP Thu Nov 8 23:39:32 UTC 2018 + Booted kernel cmdline: + root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet + LANG=en_US.UTF-8 + GRUB default kernel cmdline: + root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet + LANG=en_US.UTF-8 + Taint-check: 536870912 (see https://access.redhat.com/solutions/40594) + 29 TECH_PREVIEW: Technology Preview code is loaded + - - - - - - - - - - - - - - - - - - - + Sys time: Sun May 12 10:12:22 CDT 2019 + Boot time: Sun May 12 09:50:20 CDT 2019 (epoch: 1557672620) + Time Zone: America/Chicago + Uptime: 22 min, 1 user + LoadAvg: [1 CPU] 1.19 (119%), 0.27 (27%), 0.14 (14%) + /proc/stat: + procs_running: 8 procs_blocked: 2 processes [Since boot]: 9005 + cpu [Utilization since boot]: + us 1%, ni 0%, sys 1%, idle 99%, iowait 0%, irq 0%, sftirq 0%, steal 0% +``` + +To view the kdump configuration, run xsos with `-k, --kdump` switch. + +``` +# xsos --kdump /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +KDUMP CONFIG + kexec-tools rpm version: + kexec-tools-2.0.15-21.el7.x86_64 + Service enablement: + UNIT STATE + kdump.service enabled + kdump initrd/initramfs: + 13585734 Feb 19 05:51 initramfs-3.10.0-957.el7.x86_64kdump.img + Memory reservation config: + /proc/cmdline { crashkernel=auto } + GRUB default { crashkernel=auto } + Actual memory reservation per /proc/iomem: + 2a000000-340fffff : Crash kernel + kdump.conf: + path /var/crash + core_collector makedumpfile -l --message-level 1 -d 31 + kdump.conf "path" available space: + System MemTotal (uncompressed core size) { 1.80 GiB } + Available free space on target path's fs { 22.68 GiB } (fs=/) + Panic sysctls: + kernel.sysrq [bitmask] = "16" (see proc man page) + kernel.panic [secs] = 0 (no autoreboot on panic) + kernel.hung_task_panic = 0 + kernel.panic_on_oops = 1 + kernel.panic_on_io_nmi = 0 + kernel.panic_on_unrecovered_nmi = 0 + kernel.panic_on_stackoverflow = 0 + kernel.softlockup_panic = 0 + kernel.unknown_nmi_panic = 0 + kernel.nmi_watchdog = 1 + vm.panic_on_oom [0-2] = 0 (no panic) +``` + +To view the information about CPU, run xsos with `-c, --cpu` switch. + +``` +# xsos --cpu /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +CPU + 1 logical processors + 1 Intel Core i7-6700HQ CPU @ 2.60GHz (flags: aes,constant_tsc,ht,lm,nx,pae,rdrand) +``` + +To view about memory utilization, run xsos with `-m, --mem` switch. + +``` +# xsos --mem /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +MEMORY + Stats graphed as percent of MemTotal: + MemUsed ▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊..................... 58.8% + Buffers .................................................. 0.6% + Cached ▊▊▊▊▊▊▊▊▊▊▊▊▊▊▊................................... 29.9% + HugePages .................................................. 
0.0% + Dirty .................................................. 0.7% + RAM: + 1.8 GiB total ram + 1.1 GiB (59%) used + 0.5 GiB (28%) used excluding Buffers/Cached + 0.01 GiB (1%) dirty + HugePages: + No ram pre-allocated to HugePages + LowMem/Slab/PageTables/Shmem: + 0.09 GiB (5%) of total ram used for Slab + 0.02 GiB (1%) of total ram used for PageTables + 0.01 GiB (1%) of total ram used for Shmem + Swap: + 0 GiB (0%) used of 2 GiB total +``` + +To view the added disks information, run xsos with `-d, --disks` switch. + +``` +# xsos --disks /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +STORAGE + Whole Disks from /proc/partitions: + 2 disks, totaling 40 GiB (0.04 TiB) + - - - - - - - - - - - - - - - - - - - - - + Disk Size in GiB + ---- ----------- + sda 30 + sdb 10 +``` + +To view the network interface configuration, run xsos with `-e, --ethtool` switch. + +``` +# xsos --ethtool /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +ETHTOOL + Interface Status: + enp0s10 0000:00:0a.0 link=up 1000Mb/s full (autoneg=Y) rx ring 256/4096 drv e1000 v7.3.21-k8-NAPI / fw UNKNOWN + enp0s9 0000:00:09.0 link=up 1000Mb/s full (autoneg=Y) rx ring 256/4096 drv e1000 v7.3.21-k8-NAPI / fw UNKNOWN + virbr0 N/A link=DOWN rx ring UNKNOWN drv bridge v2.3 / fw N/A + virbr0-nic tap link=DOWN rx ring UNKNOWN drv tun v1.6 / fw UNKNOWN +``` + +To view the information about IP address, run xsos with `-i, --ip` switch. + +``` +# xsos --ip /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +IP4 + Interface Master IF MAC Address MTU State IPv4 Address + ========= ========= ================= ====== ===== ================== + lo - - 65536 up 127.0.0.1/8 + enp0s9 - 08:00:27:0b:bc:e9 1500 up 192.168.1.8/24 + enp0s10 - 08:00:27:b2:08:91 1500 up 192.168.1.9/24 + virbr0 - 52:54:00:ae:01:94 1500 up 192.168.122.1/24 + virbr0-nic virbr0 52:54:00:ae:01:94 1500 DOWN - + +IP6 + Interface Master IF MAC Address MTU State IPv6 Address Scope + ========= ========= ================= ====== ===== =========================================== ===== + lo - - 65536 up ::1/128 host + enp0s9 - 08:00:27:0b:bc:e9 1500 up fe80::945b:8333:f4bc:9723/64 link + enp0s10 - 08:00:27:b2:08:91 1500 up fe80::7ed4:1fab:23c3:3790/64 link + virbr0 - 52:54:00:ae:01:94 1500 up - - + virbr0-nic virbr0 52:54:00:ae:01:94 1500 DOWN - - +``` + +To view the running processes via ps, run xsos with `-p, --ps` switch. + +``` +# xsos --ps /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa +PS CHECK + Total number of threads/processes: + 501 / 171 + Top users of CPU & MEM: + USER %CPU %MEM RSS + root 20.6% 14.1% 0.30 GiB + gdm 0.3% 16.8% 0.33 GiB + postfix 0.0% 0.6% 0.01 GiB + polkitd 0.0% 0.6% 0.01 GiB + daygeek 0.0% 0.2% 0.00 GiB + colord 0.0% 0.4% 0.01 GiB + Uninteruptible sleep threads/processes (0/0): + [None] + Defunct zombie threads/processes (0/0): + [None] + Top CPU-using processes: + USER PID %CPU %MEM VSZ-MiB RSS-MiB TTY STAT START TIME COMMAND + root 6542 15.6 4.2 875 78 pts/0 Sl+ 10:11 0:07 /usr/bin/python /sbin/sosreport + root 7582 3.0 0.1 10 2 pts/0 S 10:12 0:00 /bin/bash /usr/sbin/dracut --print-cmdline + root 7969 0.7 0.1 95 4 ? Ss 10:12 0:00 /usr/sbin/certmonger -S -p + root 7889 0.4 0.2 24 4 ? Ss 10:12 0:00 /usr/lib/systemd/systemd-hostnamed + gdm 3866 0.3 7.1 2856 131 ? Sl 09:50 0:04 /usr/bin/gnome-shell + root 8553 0.2 0.1 47 3 ? S 10:12 0:00 /usr/lib/systemd/systemd-udevd + root 6971 0.2 0.4 342 9 ? Sl 10:12 0:00 /usr/sbin/abrt-dbus -t133 + root 3200 0.2 0.9 982 18 ? Ssl 09:50 0:02 /usr/sbin/libvirtd + root 2855 0.1 0.1 88 3 ? 
Ss 09:50 0:01 /sbin/rngd -f + rtkit 2826 0.0 0.0 194 2 ? SNsl 09:50 0:00 /usr/libexec/rtkit-daemon + Top MEM-using processes: + USER PID %CPU %MEM VSZ-MiB RSS-MiB TTY STAT START TIME COMMAND + gdm 3866 0.3 7.1 2856 131 ? Sl 09:50 0:04 /usr/bin/gnome-shell + root 6542 15.6 4.2 875 78 pts/0 Sl+ 10:11 0:07 /usr/bin/python /sbin/sosreport + root 3264 0.0 1.2 271 23 tty1 Ssl+ 09:50 0:00 /usr/bin/X :0 -background + root 3200 0.2 0.9 982 18 ? Ssl 09:50 0:02 /usr/sbin/libvirtd + root 3189 0.0 0.9 560 17 ? Ssl 09:50 0:00 /usr/bin/python2 -Es /usr/sbin/tuned + gdm 4072 0.0 0.9 988 17 ? Sl 09:50 0:00 /usr/libexec/gsd-media-keys + gdm 4076 0.0 0.8 625 16 ? Sl 09:50 0:00 /usr/libexec/gsd-power + gdm 4056 0.0 0.8 697 16 ? Sl 09:50 0:00 /usr/libexec/gsd-color + root 2853 0.0 0.7 622 14 ? Ssl 09:50 0:00 /usr/sbin/NetworkManager --no-daemon + gdm 4110 0.0 0.7 544 14 ? Sl 09:50 0:00 /usr/libexec/gsd-wacom + Top thread-spawning processes: + # USER PID %CPU %MEM VSZ-MiB RSS-MiB TTY STAT START TIME COMMAND + 17 root 3200 0.2 0.9 982 18 ? - 09:50 0:02 /usr/sbin/libvirtd + 12 root 6542 16.1 4.5 876 83 pts/0 - 10:11 0:07 /usr/bin/python /sbin/sosreport + 10 gdm 3866 0.3 7.1 2856 131 ? - 09:50 0:04 /usr/bin/gnome-shell + 7 polkitd 2864 0.0 0.6 602 13 ? - 09:50 0:01 /usr/lib/polkit-1/polkitd --no-debug + 6 root 2865 0.0 0.0 203 1 ? - 09:50 0:00 /usr/sbin/gssproxy -D + 5 root 3189 0.0 0.9 560 17 ? - 09:50 0:00 /usr/bin/python2 -Es /usr/sbin/tuned + 5 root 2823 0.0 0.3 443 6 ? - 09:50 0:00 /usr/libexec/udisks2/udisksd + 5 gdm 4102 0.0 0.2 461 5 ? - 09:50 0:00 /usr/libexec/gsd-smartcard + 4 root 3215 0.0 0.2 470 4 ? - 09:50 0:00 /usr/sbin/gdm + 4 gdm 4106 0.0 0.2 444 5 ? - 09:50 0:00 /usr/libexec/gsd-sound +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/xsos-a-tool-to-read-sosreport-in-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/how-to-create-collect-sosreport-in-linux/ +[2]: https://www.2daygeek.com/oswbb-how-to-install-and-configure-oswatcher-black-box-for-system-diagnostics/ +[3]: https://github.com/ryran/xsos +[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[5]: https://www.2daygeek.com/wp-content/uploads/2019/05/xsos-a-tool-to-read-sosreport-in-linux-1.jpg From 5f11a40c62c99f26d8c76a37ef0bd2aa3b23e5cd Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:02:04 +0800 Subject: [PATCH 03/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190513=20Manage?= =?UTF-8?q?=20business=20documents=20with=20OpenAS2=20on=20Fedora=20source?= =?UTF-8?q?s/tech/20190513=20Manage=20business=20documents=20with=20OpenAS?= =?UTF-8?q?2=20on=20Fedora.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...siness documents with OpenAS2 on Fedora.md | 153 ++++++++++++++++++ 1 file changed, 153 insertions(+) create mode 100644 sources/tech/20190513 Manage business documents with OpenAS2 on Fedora.md diff --git a/sources/tech/20190513 Manage business documents with OpenAS2 on Fedora.md b/sources/tech/20190513 Manage business documents with OpenAS2 on Fedora.md new file mode 100644 index 0000000000..c8e82151ef --- /dev/null +++ b/sources/tech/20190513 Manage business documents with OpenAS2 on 
Fedora.md @@ -0,0 +1,153 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Manage business documents with OpenAS2 on Fedora) +[#]: via: (https://fedoramagazine.org/manage-business-documents-with-openas2-on-fedora/) +[#]: author: (Stuart D Gathman https://fedoramagazine.org/author/sdgathman/) + +Manage business documents with OpenAS2 on Fedora +====== + +![][1] + +Business documents often require special handling. Enter Electronic Document Interchange, or **EDI**. EDI is more than simply transferring files using email or http (or ftp), because these are documents like orders and invoices. When you send an invoice, you want to be sure that: + +1\. It goes to the right destination, and is not intercepted by competitors. +2\. Your invoice cannot be forged by a 3rd party. +3\. Your customer can’t claim in court that they never got the invoice. + +The first two goals can be accomplished by HTTPS or email with S/MIME, and in some situations, a simple HTTPS POST to a web API is sufficient. What EDI adds is the last part. + +This article does not cover the messy topic of formats for the files exchanged. Even when using a standardized format like ANSI or EDIFACT, it is ultimately up to the business partners. It is not uncommon for business partners to use an ad-hoc CSV file format. This article shows you how to configure Fedora to send and receive in an EDI setup. + +### Centralized EDI + +The traditional solution is to use a Value Added Network, or **VAN**. The VAN is a central hub that transfers documents between their customers. Most importantly, it keeps a secure record of the documents exchanged that can be used as evidence in disputes. The VAN can use different transfer protocols for each of its customers + +### AS Protocols and MDN + +The AS protocols are a specification for adding a digital signature with optional encryption to an electronic document. What it adds over HTTPS or S/MIME is the Message Disposition Notification, or **MDN**. The MDN is a signed and dated response that says, in essence, “We got your invoice.” It uses a secure hash to identify the specific document received. This addresses point #3 without involving a third party. + +The [AS2 protocol][2] uses HTTP or HTTPS for transport. Other AS protocols target [FTP][3] and [SMTP][4]. AS2 is used by companies big and small to avoid depending on (and paying) a VAN. + +### OpenAS2 + +OpenAS2 is an open source Java implemention of the AS2 protocol. It is available in Fedora since 28, and installed with: + +``` +$ sudo dnf install openas2 +$ cd /etc/openas2 +``` + +Configuration is done with a text editor, and the config files are in XML. The first order of business before starting OpenAS2 is to change the factory passwords. + +Edit _/etc/openas2/config.xml_ and search for _ChangeMe_. Change those passwords. The default password on the certificate store is _testas2_ , but that doesn’t matter much as anyone who can read the certificate store can read _config.xml_ and get the password. + +### What to share with AS2 partners + +There are 3 things you will exchange with an AS2 peer. + +#### AS2 ID + +Don’t bother looking up the official AS2 standard for legal AS2 IDs. While OpenAS2 implements the standard, your partners will likely be using a proprietary product which doesn’t. While AS2 allows much longer IDs, many implementations break with more than 16 characters. 
Using otherwise legal AS2 ID chars like ‘:’ that can appear as path separators on a proprietary OS is also a problem. Restrict your AS2 ID to upper and lower case alpha, digits, and ‘_’ with no more than 16 characters. + +#### SSL certificate + +For real use, you will want to generate a certificate with SHA256 and RSA. OpenAS2 ships with two factory certs to play with. Don’t use these for anything real, obviously. The certificate file is in PKCS12 format. Java ships with _keytool_ which can maintain your PKCS12 “keystore,” as Java calls it. This article skips using _openssl_ to generate keys and certificates. Simply note that _sudo keytool -list -keystore as2_certs.p12_ will list the two factory practice certs. + +#### AS2 URL + +This is an HTTP URL that will access your OpenAS2 instance. HTTPS is also supported, but is redundant. To use it you have to uncomment the https module configuration in _config.xml_ , and supply a certificate signed by a public CA. This requires another article and is entirely unnecessary here. + +By default, OpenAS2 listens on 10080 for HTTP and 10443 for HTTPS. OpenAS2 can talk to itself, so it ships with two partnerships using __ as the AS2 URL. If you don’t find this a convincing demo, and can install a second instance (on a VM, for instance), you can use private IPs for the AS2 URLs. Or install [Cjdns][5] to get IPv6 mesh addresses that can be used anywhere, resulting in AS2 URLs like _http://[fcbf:fc54:e597:7354:8250:2b2e:95e6:d6ba]:10080_. + +Most businesses will also want a list of IPs to add to their firewall. This is actually [bad practice][6]. An AS2 server has the same security risk as a web server, meaning you should isolate it in a VM or container. Also, the difficulty of keeping mutual lists of IPs up to date grows with the list of partners. The AS2 server rejects requests not signed by a configured partner. + +### OpenAS2 Partners + +With that in mind, open _partnerships.xml_ in your editor. At the top is a list of “partners.” Each partner has a name (referenced by the partnerships below as “sender” or “receiver”), AS2 ID, certificate, and email. You need a partner definition for yourself and those you exchange documents with. You can define multiple partners for yourself. OpenAS2 ships with two partners, OpenAS2A and OpenAS2B, which you’ll use to send a test document. + +### OpenAS2 Partnerships + +Next is a list of “partnerships,” one for each direction. Each partnership configuration includes the sender, receiver, and the AS2 URL used to send the documents. By default, partnerships use synchronous MDN. The MDN is returned on the same HTTP transaction. You could uncomment the _as2_receipt_option_ for asynchronous MDN, which is sent some time later. Use synchronous MDN whenever possible, as tracking pending MDNs adds complexity to your application. + +The other partnership options select encryption, signature hash, and other protocol options. A fully implemented AS2 receiver can handle any combination of options, but AS2 partners may have incomplete implementations or policy requirements. For example, DES3 is a comparatively weak encryption algorithm, and may not be acceptable. It is the default because it is almost universally implemented. + +If you went to the trouble to set up a second physical or virtual machine for this test, designate one as OpenAS2A and the other as OpenAS2B. Modify the _as2_url_ on the OpenAS2A-to-OpenAS2B partnership to use the IP (or hostname) of OpenAS2B, and vice versa for the OpenAS2B-to-OpenAS2A partnership. 
Unless they are using the FedoraWorkstation firewall profile, on both machines you’ll need: + +``` +# sudo firewall-cmd --zone=public --add-port=10080/tcp +``` + +Now start the _openas2_ service (on both machines if needed): + +``` +# sudo systemctl start openas2 +``` + +### Resetting the MDN password + +This initializes the MDN log database with the factory password, not the one you changed it to. This is a packaging bug to be fixed in the next release. To avoid frustration, here’s how to change the h2 database password: + +``` +$ sudo systemctl stop openas2 +$ cat >h2passwd <<'DONE' +#!/bin/bash +AS2DIR="/var/lib/openas2" +java -cp "$AS2DIR"/lib/h2* org.h2.tools.Shell \ + -url jdbc:h2:"$AS2DIR"/db/openas2 \ + -user sa -password "$1" <testdoc <<'DONE' +This is not a real EDI format, but is nevertheless a document. +DONE +$ sudo chown openas2 testdoc +$ sudo mv testdoc /var/spool/openas2/toOpenAS2B +$ sudo journalctl -f -u openas2 +... log output of sending file, Control-C to stop following log +^C +``` + +OpenAS2 does not send a document until it is writable by the _openas2_ user or group. As a consequence, your actual business application will copy, or generate in place, the document. Then it changes the group or permissions to send it on its way, to avoid sending a partial document. + +Now, on the OpenAS2B machine, _/var/spool/openas2/OpenAS2A_OID-OpenAS2B_OID/inbox_ shows the message received. That should get you started! + +* * * + +_Photo by _[ _Beatriz Pérez Moya_][7]_ on _[_Unsplash_][8]_._ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/manage-business-documents-with-openas2-on-fedora/ + +作者:[Stuart D Gathman][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/sdgathman/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/openas2-816x345.jpg +[2]: https://en.wikipedia.org/wiki/AS2 +[3]: https://en.wikipedia.org/wiki/AS3_(networking) +[4]: https://en.wikipedia.org/wiki/AS1_(networking) +[5]: https://fedoramagazine.org/decentralize-common-fedora-apps-cjdns/ +[6]: https://www.ld.com/as2-part-2-best-practices/ +[7]: https://unsplash.com/photos/XN4T2PVUUgk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[8]: https://unsplash.com/search/photos/documents?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText From 3ac3aac3d6130ab7e68c337dde1db75c00069d89 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:02:23 +0800 Subject: [PATCH 04/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190518=20Change?= =?UTF-8?q?=20Power=20Modes=20in=20Ubuntu=20with=20Slimbook=20Battery=20Op?= =?UTF-8?q?timizer=20sources/tech/20190518=20Change=20Power=20Modes=20in?= =?UTF-8?q?=20Ubuntu=20with=20Slimbook=20Battery=20Optimizer.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Ubuntu with Slimbook Battery Optimizer.md | 108 ++++++++++++++++++ 1 file changed, 108 insertions(+) create mode 100644 sources/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md diff --git a/sources/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md b/sources/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md new file mode 100644 index 0000000000..874cd4ccf1 --- /dev/null +++ b/sources/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md @@ -0,0 +1,108 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Change Power Modes in Ubuntu with Slimbook Battery Optimizer) +[#]: via: (https://itsfoss.com/slimbook-battry-optimizer-ubuntu/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Change Power Modes in Ubuntu with Slimbook Battery Optimizer +====== + +_**Brief: Slimbook Battery is a nifty applet indicator that allows you to quickly change the power mode on your Linux laptop and thus save battery life.**_ + +[Slimbook][1], the Spanish computer vendor that sells [laptops preloaded with Linux][2], has released a handy little application to optimize battery performance in Ubuntu-based Linux distributions. + +Since Slimbook sells its own Linux systems, they have created a few applications to tweak the performance of Linux on their hardware. This battery optimizer is one such tool. + +You don’t need to buy a Slimbook product to use this nifty application because Slimbook has made it available via [their official PPA][3]. + +### Slimbook battery optimizer application + +The application is called Slimbook Battery. It is basically an applet indicator that sits on the top panel and gives you quick access to various power/battery modes. + +![Slimbook Battery Mode Ubuntu][4] + +You might have seen it in Windows where you can put your laptop in one of the power modes. Slimbook Battery also offers similar battery saving modes here: + + * Energy Saving: For maximum battery saving + * Balanced: A compromise between performance and power saving + * Maximum Performance: For maximum performance obviously + + + +You can configure all these modes from the advanced mode: + +![Configure various power modes][5] + +If you feel like you have messed up the configuration, you can set things back to normal with ‘restore default values’ option. + +You can also change the general configuration of the application like auto-start, default power mode etc. + +![Slimbook Battery general configuration][6] + +Skimbook has a dedicated page that provides more information on various power saving parameters. If you want to configure things on your own, you should refer to [this page][7]. + +I have noticed that the interface of Slimbook Battery needs some improvements. For example, the ‘question mark’ icon on some parameters should be clickable to provide more information. But clicking the question mark icon doesn’t do anything at the time of writing this article. + +Altogether, Slimbook Battery is a handy app you can use for quickly switching the power mode. If you decide to install it on Ubuntu and other Ubuntu-based distributions such as Linux Mint, elementary OS etc, you can use its official [PPA][8]. + +[][9] + +Suggested read Ubuntu Forums Hacked, User Data Stolen!!! 
+ +#### Install Slimbook Batter in Ubuntu-based distributions + +Open a terminal and use the following commands one by one: + +``` +sudo add-apt-repository ppa:slimbook/slimbook +sudo apt update +sudo apt install slimbookbattery +``` + +Once installed, search for Slimbook Battery in the menu: + +![Start Slimbook Battery Optimizer][10] + +When you click on it to start it, you’ll find it in the top panel. From here, you can select your desired power mode. + +![Slimbook Battery power mode][4] + +#### Remove Slimbook Battery + +If you don’t want to use this application, you can remove it using the following commands: + +``` +sudo apt remove slimbookbattery +sudo add-apt-repository -r ppa:slimbook/slimbook +``` + +In my opinion, such applications serves a certain purpose and should be encouraged. This tool gives you an easy way to change the power mode and at the same time, it gives you more tweaking options for various performance settings. + +Did you use Slimbook Battery? What’s your experience with it? + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/slimbook-battry-optimizer-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://slimbook.es/en/ +[2]: https://itsfoss.com/get-linux-laptops/ +[3]: https://launchpad.net/~slimbook/+archive/ubuntu/slimbook +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/slimbook-battery-mode-ubuntu.jpg?resize=800%2C400&ssl=1 +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/slimbook-battery-optimizer-2.jpg?ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/slimbook-battery-optimizer-1.jpg?ssl=1 +[7]: https://slimbook.es/en/tutoriales/aplicaciones-slimbook/398-slimbook-battery-3-application-for-optimize-battery-of-your-laptop +[8]: https://itsfoss.com/ppa-guide/ +[9]: https://itsfoss.com/ubuntu-forums-hacked-again/ +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/slimbook-battery-optimizer.jpg?ssl=1 From d514878a3657fc06176f6e1668c8f4e42f607785 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:02:32 +0800 Subject: [PATCH 05/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190517=2010=20Pla?= =?UTF-8?q?ces=20Where=20You=20Can=20Buy=20Linux=20Computers=20sources/tec?= =?UTF-8?q?h/20190517=2010=20Places=20Where=20You=20Can=20Buy=20Linux=20Co?= =?UTF-8?q?mputers.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...laces Where You Can Buy Linux Computers.md | 311 ++++++++++++++++++ 1 file changed, 311 insertions(+) create mode 100644 sources/tech/20190517 10 Places Where You Can Buy Linux Computers.md diff --git a/sources/tech/20190517 10 Places Where You Can Buy Linux Computers.md b/sources/tech/20190517 10 Places Where You Can Buy Linux Computers.md new file mode 100644 index 0000000000..36e3a0972b --- /dev/null +++ b/sources/tech/20190517 10 Places Where You Can Buy Linux Computers.md @@ -0,0 +1,311 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (10 Places Where You Can Buy Linux Computers) +[#]: via: (https://itsfoss.com/get-linux-laptops/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +10 Places Where You Can Buy Linux Computers 
+====== + +_**Looking for Linux laptops? Here I list some online shops that either sell Linux computers or specialize only in Linux systems.**_ + +Almost all the computers (except Apple) sold these days come with Windows preinstalled on it. The standard procedure for Linux users is to buy such a computer and then either remove Windows and install Linux or [dual boot Linux with Windows][1]. + +But you don’t always have to go through Windows. You can buy Linux computers as well. + +But why buy a computer preinstalled with Linux when you can easily install Linux on any computer? Here are some reasons: + + * A computer with Windows always has an extra cost for the Windows license. You can avoid that. + * Computers preinstalled with Linux are well-tested for hardware compatibility. You can be sure that your system will have WiFi and Bluetooth working instead of figuring these things on your own. + * Buying Linux laptops and desktops supports Linux indirectly. More sale indicates that there is a demand for Linux products and thus more vendors may incline to provide Linux as a choice of operating system. + + + +If you are looking to get a new Linux laptop, let me suggest you a few manufacturers and vendors that provide ready-to-use Linux systems. + +![][2] + +### 10 places to buy Linux laptops and computers + +A couple of disclaimer/information before you see the list of shops offering computers with Linux preloaded. + +Please make a purchase on your own decision. I am simply listing the Linux computer sellers here, I cannot vouch for their product quality, after sale service or other such things. + +This is not a ranking list. The items listed here are in no particular order. The numbers are used for the purpose of counting the items, not ranking them. + +Let’s see from where you can get desktops and laptops with Linux preinstalled. + +#### 1\. Dell + +![Dell XPS Ubuntu | Image Credit: Lifehacker][3] + +Dell has been offering Ubuntu laptops for several years now. Their flagship product XPS features a Developer Edition series that comes with Ubuntu preinstalled. + +If you read my [review of Dell XPS Ubuntu edition][4], you know that I loved this laptop. It’s been more than two years and this laptop is still in great condition and performance has not deteriorated. + +Dell XPS is an expensive device with a price tag of over $1000. If that’s out of your budget, Dell also has inexpensive offering in its Inspiron laptop range. + +Do note that Dell doesn’t display the Ubuntu/Linux laptops on its website. Unless you already know that Linux laptops are offered by Dell, you wouldn’t be able to find them. + +So, go to Dell’s website and enter Ubuntu in its search box to see the products that ship with Ubuntu Linux preinstalled. + +**Availability** : Most part of the world. + +[Dell][5] + +#### 2\. System76 + +[System76][6] is a prominent name in the Linux computers world. This US-based company specializes in high-end computing devices that run Linux. Their targeted user-base is software developers. + +Initially, System76 used to offer Ubuntu on their machines. In 2017, they released their own Linux distribution [Pop!_OS][7] based on Ubuntu. Since then, Pop!_OS is the default OS on their machine with Ubuntu still available as a choice. + +Apart from performance, System76 has put a great emphasis on the design of its computer. Their [Thelio desktop series][8] has a handcrafted wooden design. + +![System76 Thelio Desktop][9] + +You may check their Linux laptops offering [here][10]. 
They also offer [Linux-based mini PCs][11] and [servers][12]. + +Did I mention that System76 manufactures its computers in America instead of the obvious choice of China and Taiwan? The products are on the expensive side, perhaps for this reason. + +**Availability** : USA and 60 other countries. Extra custom duty may be applicable outside the US. More info [here][13]. + +[System76][6] + +#### 3\. Purism + +Purism is a US-based company that takes pride in creating products and services that help you secure your data and privacy. That’s the reason why Purism calls itself a ‘Social Purpose Corporation’. + +[][14] + +Suggested read How To Use Google Drive In Linux + +Purism started with a crowdfunding campaign for creating a high-end open source laptop with (almost) no proprietary software. The [successful $250,000 crowdfunding campaign][15] gave birth to [Librem 15][16] laptop in 2015. + +![Purism Librem 13][17] + +Later Purism released a 13″ version called [Librem 13][18]. Purism also created a Linux distribution [Pure OS][19] keeping privacy and security in mind. + +[Pure OS can run on both desktop and mobile devices][20] and it is the default choice of operating system on its Librem laptops and [Librem 5 Linux phone][21]. + +Purism gets its components from China, Taiwan, Japan, and the United States and builds/assemble them in the US. All their devices have hardware kill switches to turn off the microphone/camera and wireless/bluetooth. + +**Availability** : Worldwide with free international shipping. Custom duty may cost extra. + +[Purism][22] + +#### 4\. Slimbook + +Slimbook is a Linux computer vendor based in Spain. Slimbook came to limelight after launching the [first KDE branded laptop][23]. + +Their offering is not limited to just KDE Neon. They offer Ubuntu, Kubuntu, Ubuntu MATE, Linux Mint and Spanish distributions like [Lliurex][24] and [Max][25]. You can also choose Windows at an additional cost or opt for no operating system at all. + +Slimbook has a wide variety of Linux laptops, desktops and mini PCs available. An iMac like 24″ [curved monitor that has in-built CPU][26] is an awesome addition to their collection. + +![Slimbook Kymera Aqua Liquid Cool Linux Computer][27] + +Want a liquid cooled Linux computer? Slimbook’s [Kymera Aqua][28] is for you. + +**Availability** : Worldwide but may cost extra in shipping and custom duty + +[Slimbook][29] + +#### 5\. TUXEDO Computers + +Another European candidate in this list of Linux computer vendors. [TUXEDO Computers][30] is based out of Germany and mainly focuses on German users and then European users. + +TUXEDO Computers only uses Linux and the computers are ‘manufactured in Germany’ and come with 5 years of guarantee and lifetime support. + +TUXEDO Computers has put up some real good effort in customizing its hardware to run on Linux. And if you ever run into trouble or want to start afresh, you have the system recovery option to restore factory settings automatically. + +![Tuxedo Computers supports a wide variety of distributions][31] + +TUXEDO Computers has a number of Linux laptops, desktops, mini-PCs available. They have both Intel and AMD processors. Apart from the computers, TUXEDO Computers also has a range of Linux supported accessories like docking stations, DVD/Blue-Ray burners, power bank and other peripheral devices. + +**Availability** : Free shipping in Germany and Europe (for orders above 150 Euro). Extra shipping charges and custom duty for non-EU countries. More info [here][32]. 
+ +[TUXEDO Computers][33] + +#### 6\. Vikings + +[Vikings][34] is based in Germany (instead of Scandinavia :D). Certified by [Free Software Foundation][35], Vikings focuses exclusively on Libre-friendly hardware. + +![Vikings’s products are certified by Free Software Foundation][36] + +The Linux laptops and desktops by Vikings come with [coreboot][37] or [Libreboot][38] instead of proprietary boot systems like BIOS and UEFI. You can also buy [server hardware][39] running no proprietary software. + +Vikings also has other accessories like router, docking station etc. The products are assembled in Germany. + +**Availability** : Worldwide (except North Korea). Non-EU countries may charge custom duty. More information [here][40]. + +[Vikings][41] + +#### 7\. Ubuntushop.be + +No! It’s not the official Ubuntu Shop even though it has Ubuntu in its name. Ubuntushop is based in Belgium and originally started selling computers installed with Ubuntu. + +Today, you can get laptops preloaded with Linux distributions like Mint, Manjaro, elementrayOS. You can also request a distribution of your choice to be installed on the system you buy. + +![][42] + +One unique thing about Ubuntushop is that all of its computers come with default Tails OS live option. So even if it has a Linux distribution installed for regular use, you can always choose to boot into the Tails OS (without live USB). [Tails OS][43] is a Debian based distribution that deletes all traces of its use after logging out and it uses Tor network by default. + +[][44] + +Suggested read Things to do After Installing Ubuntu 18.04 and 18.10 + +Unlike many other big players on this list, I feel that Ubuntushop is more of a ‘domestic operation’ where someone manually assembles your computer and installs Linux on it. But they have done quite some job on providing options like easy re-install, own cloud server etc. + +Got an old PC, send it to them while buying a new Linux computer and they will send it back to you after installing [lightweight Linux][45] on it so that the old computer is recycled and can still be put to some use. + +**Availability** : Belgium and rest of Europe. + +[Ubuntushop.be][46] + +#### 8\. Minifree + +[Minifree][47], short for Ministry of Freedom, is a company registered in England. + +You can guess that Minifree focuses on the freedom. It provides secure and privacy-respcting computers that come with [Libreboot][38] instead of BIOS or UEFI. + +Minifree devices are certified by [Free Software Foundation][48] which means that you can be sure that your computer adhere to guidelines and principals of Free and Open Source Software. + +![][49] + +Unlike many other Linux laptops vendors on this list, computers from Minifree are not super-expensive. You can get a Libreboot Linux laptop running [Trisquel GNU/Linux][50] from 200 euro. + +Apart from laptops, Minifree also has a range of accessories like a Libre Router, tablet, docking station, batteries, keyboard, mouse etc. + +If you care to run only 100% free software like [Richard Stallman][51], Minifree is for you. + +**Availability** : Worldwide. Shipping information is available [here][52]. + +[Minifree][47] + +#### 9\. Entroware + +[Entroware][53] is another UK-based vendor that specializes in Linux-based laptops, desktop and servers. + +Like many others on the list, Entroware also has Ubuntu as its choice of Linux distribution. [Ubuntu MATE is also available as a choice on Entroware Linux laptops][54]. 
+ +![][55] + +Apart from laptops, desktop and servers, Entroware also has their [mini-PC Aura][56] and the iMac style [monitor with built-in CPU Ares][57]. + +Availability: UK, Ireland France, Germany, Italy, Spain + +[Entroware][58] + +#### 10\. Juno Computers + +This is a new Linux laptop vendor on our list. Juno Computers is also based in UK and offers computers preinstalled with Linux. elementary OS, Ubuntu and Solus OS are the choices of Linux distributions here. + +Juno offers a range of laptops and a mini-PC called Olympia. Like almost all the mini-PCs offered by other vendors here, Olympia is also basically [Intel NUC][59]. + +The main highlight from Juno Computers is a low-cost Chromebook alternative, Juve that costs £299. It runs a dual-booted system with Solus/elementray with an Android-based desktop operating system, [Prime OS][60]. + +![Juve With Android-based Prime Os][61] + +Availability: UK, USA, Canada, Mexico, Most part of South America and Europe, Australia, New Zealand, some part of Asia and Africa. More information [here][62]. + +[Juno Computers][63] + +#### Honorable mentions + +I have listed 10 places to get Linux computers but there are several other such shops available. I cannot include all of them in the main list and a couple of them seem to be out of stock for most products. However, I am going to mention them here so that you may check them on your own: + + * [ZaReason][64] + * [Libiquity][65] + * [StationX][66] + * [Linux Certified][67] + * [Think Penguin][68] + + + +Other mainstream computer manufacturers like Acer, Lenovo etc may also have some Linux systems in their catalog so you may check their products as well. + +Have you ever bought a Linux computer? Where did you buy it? How’s your experience with it? Is it worth buying a Linux laptop? Do share your thoughts. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/get-linux-laptops/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ +[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/buy-linux-laptops.jpeg?resize=800%2C450&ssl=1 +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/dell-xps-ubuntu.jpg?resize=800%2C450&ssl=1 +[4]: https://itsfoss.com/dell-xps-13-ubuntu-review/ +[5]: https://www.dell.com +[6]: https://system76.com/ +[7]: https://itsfoss.com/pop-os-linux-review/ +[8]: https://system76.com/desktops +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/system76-thelio-desktop.jpg?ssl=1 +[10]: https://system76.com/laptops +[11]: https://itsfoss.com/4-linux-based-mini-pc-buy-2015/ +[12]: https://system76.com/servers +[13]: https://system76.com/shipping +[14]: https://itsfoss.com/use-google-drive-linux/ +[15]: https://www.crowdsupply.com/purism/librem-15 +[16]: https://puri.sm/products/librem-15/ +[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/purism-librem-13.jpg?resize=800%2C471&ssl=1 +[18]: https://puri.sm/products/librem-13/ +[19]: https://www.pureos.net/ +[20]: https://itsfoss.com/pureos-convergence/ +[21]: https://itsfoss.com/librem-linux-phone/ +[22]: https://puri.sm/ +[23]: https://itsfoss.com/slimbook-kde/ +[24]: https://distrowatch.com/table.php?distribution=lliurex +[25]: https://en.wikipedia.org/wiki/MAX_(operating_system) +[26]: https://slimbook.es/en/aio-curve-all-in-one-for-gnu-linux +[27]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Slimbook-Kymera-Aqua-Liquid-Cool-Linux-Computer.jpg?ssl=1 +[28]: https://slimbook.es/en/kymera-aqua-the-gnu-linux-computer-with-custom-water-cooling +[29]: https://slimbook.es/en/ +[30]: https://www.tuxedocomputers.com/ +[31]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/tuxedo-computers.jpeg?resize=800%2C400&ssl=1 +[32]: https://www.tuxedocomputers.com/en/Shipping-Returns.tuxedo +[33]: https://www.tuxedocomputers.com/en# +[34]: https://store.vikings.net/index.php?route=common/home +[35]: https://www.fsf.org +[36]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/vikings-computer.jpeg?resize=800%2C450&ssl=1 +[37]: https://www.coreboot.org/ +[38]: https://libreboot.org/ +[39]: https://store.vikings.net/libre-friendly-hardware/the-server-1u +[40]: https://store.vikings.net/index.php?route=information/information&information_id=8 +[41]: https://store.vikings.net/libre-friendly-hardware +[42]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/manjarobook-by-ubuntushop.jpeg?ssl=1 +[43]: https://tails.boum.org/ +[44]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/ +[45]: https://itsfoss.com/lightweight-linux-beginners/ +[46]: https://www.ubuntushop.be/index.php/en/ +[47]: https://minifree.org/ +[48]: https://www.fsf.org/ +[49]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/minifree.jpg?resize=800%2C550&ssl=1 +[50]: https://trisquel.info/ +[51]: https://en.wikipedia.org/wiki/Richard_Stallman +[52]: https://minifree.org/shipping-costs/ +[53]: https://www.entroware.com/ +[54]: https://itsfoss.com/ubuntu-mate-entroware/ +[55]: 
https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/entroware.jpg?resize=800%2C450&ssl=1 +[56]: https://itsfoss.com/ubuntu-entroware-aura-mini-pc/ +[57]: https://www.entroware.com/store/ares +[58]: https://www.entroware.com/store/index.php?route=common/home +[59]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (Intel NUC) +[60]: https://primeos.in/ +[61]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/juve-with-prime-os.jpeg?ssl=1 +[62]: https://junocomputers.com/shipping +[63]: https://junocomputers.com/ +[64]: https://zareason.com/ +[65]: https://libiquity.com/ +[66]: https://stationx.rocks/ +[67]: https://www.linuxcertified.com/linux_laptops.html +[68]: https://www.thinkpenguin.com/ From 7cbbb8f1aa43d39ca4374812e1e63259da589794 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:02:52 +0800 Subject: [PATCH 06/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190513=20How=20to?= =?UTF-8?q?=20SSH=20into=20a=20Raspberry=20Pi=20[Beginner=E2=80=99s=20Tip]?= =?UTF-8?q?=20sources/tech/20190513=20How=20to=20SSH=20into=20a=20Raspberr?= =?UTF-8?q?y=20Pi=20-Beginner-s=20Tip.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...SSH into a Raspberry Pi -Beginner-s Tip.md | 130 ++++++++++++++++++ 1 file changed, 130 insertions(+) create mode 100644 sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md diff --git a/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md b/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md new file mode 100644 index 0000000000..d1bcf06138 --- /dev/null +++ b/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md @@ -0,0 +1,130 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to SSH into a Raspberry Pi [Beginner’s Tip]) +[#]: via: (https://itsfoss.com/ssh-into-raspberry/) +[#]: author: (Chinmay https://itsfoss.com/author/chinmay/) + +How to SSH into a Raspberry Pi [Beginner’s Tip] +====== + +_**In this Raspberry Pi article series, you’ll learn how to enable SSH in Raspberry Pi and then how to SSH into a Raspberry Pi device.**_ + +Out of all the things you can do with [Raspberry Pi][1], using it as a server in a home network is very popular. The tiny footprint and low power consumption makes it a perfect device to run light weight servers. + +One of the things you should be able to do in such a case is run commands on your Raspberry Pi without needing to plug in a display, keyboard, mouse and having to move yourself to the location of your Raspberry Pi each time. + +You achieve this by logging into your Raspberry Pi via SSH ([Secure Shell][2]) from any other computer, your laptop, desktop or even your phone. Let me show you how + +### How to SSH into Raspberry Pi + +![][3] + +I assume that you are [running Raspbian on your Pi][4] and have successfully connected to a network via Ethernet or WiFi. It’s important that your Raspberry Pi is connected to a network otherwise you won’t be able to connect to it via SSH (sorry for stating the obvious). + +#### Step 1: Enable SSH on Raspberry Pi + +SSH is disabled by default in Raspberry Pi, hence you’ll have to enable it when you turn on the Pi after a fresh installation of Raspbian. + +First go to the Raspberry Pi configuration window by navigating through the menu. 
+
+![Raspberry Pi Menu, Raspberry Pi Configuration][5]
+
+Now, go to the interfaces tab, enable SSH and restart your Pi.
+
+![Enable SSH on Raspberry Pi][6]
+
+You can also enable SSH via the terminal, without the GUI. Just enter the command _**sudo raspi-config**_ and then go to Advanced Options to enable SSH.
+
+#### Step 2. Find the IP Address of Raspberry Pi
+
+In most cases, your Raspberry Pi will be assigned a local IP address that looks like **192.168.x.x** or **10.x.x.x**. You can [use various Linux commands to find the IP address][7].
+
+Suggested read: [This Linux Malware Targets Unsecure Raspberry Pi Devices][8]
+
+I am using the good old ifconfig command here but you can also use _**ip address**_.
+
+```
+ifconfig
+```
+
+![Raspberry Pi Network Configuration][9]
+
+This command lists all the active network adapters and their configuration. The first entry (**eth0**) shows the IP address as **192.168.2.105**, which is valid. I have used Ethernet to connect my Raspberry Pi to the network, hence it is listed under **eth0**. If you use WiFi, check the entry named ‘**wlan0**’ instead.
+
+You can also find out the IP address by other means like checking the network devices list on your router/modem.
+
+#### Step 3. SSH into your Raspberry Pi
+
+Now that you have enabled SSH and found out your IP address, you can go ahead and SSH into your Raspberry Pi from any other computer. You’ll also need the username and the password for the Raspberry Pi.
+
+The default username and password are:
+
+ * username: pi
+ * password: raspberry
+
+
+
+If you have changed the default password, then use the new password instead of the above. Ideally, you should change the default password. In the past, a [malware infected thousands of Raspberry Pi devices that were using the default username and password][8].
+
+Open a terminal (on Mac and Linux) on the computer from which you want to SSH into your Pi and type the command below. On Windows, you can use an SSH client like [PuTTY][10].
+
+Here, use the IP address you found out in the previous step.
+
+```
+ssh pi@192.168.2.105
+```
+
+_**Note: Make sure your Raspberry Pi and the computer you are using to SSH into your Raspberry Pi are connected to the same network**_.
+
+![SSH through terminal][11]
+
+You’ll see a warning the first time; type **yes** and press enter.
+
+![Type the password \(default is ‘raspberry‘\)][12]
+
+Now, type in the password and press enter.
+
+![Successful Login via SSH][13]
+
+On a successful login you’ll be presented with the terminal of your Raspberry Pi. Now you can run any command on your Raspberry Pi through this terminal remotely (within the current network) without having to access your Raspberry Pi physically.
+
+Suggested read: [Speed Up Ubuntu Unity On Low End Systems (Quick Tip)][14]
+
+Furthermore, you can also set up SSH keys so that you don’t have to type in the password every time you log in via SSH, but that’s a different topic altogether (a minimal example is sketched below, after the conclusion).
+
+I hope you were able to SSH into your Raspberry Pi after following this tutorial. Let me know how you plan to use your Raspberry Pi in the comments below!
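+
+If you want to take the SSH keys route mentioned above, here is a minimal sketch. It assumes the computer you connect from runs OpenSSH (Linux or macOS) and reuses the example IP address from this tutorial; adjust both to your setup:
+
+```
+# On the computer you connect FROM (not on the Pi):
+# 1. Generate a key pair if you do not already have one
+ssh-keygen -t ed25519
+
+# 2. Copy the public key to the Pi (enter the Pi's password one last time)
+ssh-copy-id pi@192.168.2.105
+
+# 3. From now on, this should log you in without prompting for a password
+ssh pi@192.168.2.105
+```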
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/ssh-into-raspberry/ + +作者:[Chinmay][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/chinmay/ +[b]: https://github.com/lujun9972 +[1]: https://www.raspberrypi.org/ +[2]: https://en.wikipedia.org/wiki/Secure_Shell +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ssh-into-raspberry-pi.png?resize=800%2C450&ssl=1 +[4]: https://itsfoss.com/tutorial-how-to-install-raspberry-pi-os-raspbian-wheezy/ +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Raspberry-pi-configuration.png?ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/enable-ssh-raspberry-pi.png?ssl=1 +[7]: https://linuxhandbook.com/find-ip-address/ +[8]: https://itsfoss.com/raspberry-pi-malware-threat/ +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/ifconfig-rapberry-pi.png?ssl=1 +[10]: https://itsfoss.com/putty-linux/ +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/SSH-into-pi-warning.png?fit=800%2C199&ssl=1 +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/SSH-into-pi-password.png?fit=800%2C202&ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/SSH-into-Pi-successful-login.png?fit=800%2C306&ssl=1 +[14]: https://itsfoss.com/speed-up-ubuntu-unity-on-low-end-system/ From dd1c951c1b45f00d6b04813dd0083980519b546a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:03:04 +0800 Subject: [PATCH 07/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190517=20Announci?= =?UTF-8?q?ng=20Enarx=20for=20running=20sensitive=20workloads=20sources/te?= =?UTF-8?q?ch/20190517=20Announcing=20Enarx=20for=20running=20sensitive=20?= =?UTF-8?q?workloads.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...g Enarx for running sensitive workloads.md | 83 +++++++++++++++++++ 1 file changed, 83 insertions(+) create mode 100644 sources/tech/20190517 Announcing Enarx for running sensitive workloads.md diff --git a/sources/tech/20190517 Announcing Enarx for running sensitive workloads.md b/sources/tech/20190517 Announcing Enarx for running sensitive workloads.md new file mode 100644 index 0000000000..81d021f7d7 --- /dev/null +++ b/sources/tech/20190517 Announcing Enarx for running sensitive workloads.md @@ -0,0 +1,83 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Announcing Enarx for running sensitive workloads) +[#]: via: (https://opensource.com/article/19/5/enarx-security) +[#]: author: (Mike Bursell https://opensource.com/users/mikecamel/users/wgarry155) + +Announcing Enarx for running sensitive workloads +====== +Enarx leverages the capabilities of a TEE to change the trust model for +your application. +![cubes coming together to create a larger cube][1] + +Running software is something that most of us do without thinking about it. We run in "on premises"—our own machines—or we run it in the cloud - on somebody else's machines. We don't always think about what those differences mean, or about what assumptions we're making about the securtiy of the data that's being processed, or even of the software that's doing that processing. 
Specifically, when you run software (a "workload") on a system (a "host") on the cloud or on your own premises, there are lots and lots of layers. You often don't see those layers, but they're there. + +Here's an example of the layers that you might see in a standard cloud virtualisation architecture. The different colours represent different entities that "own" different layers or sets of layers. + +![Layers in a standard cloud virtualisation architecture][2] + +Here's a similar diagram depicting a standard cloud container architecture. As before, each different colour represents a different "owner" of a layer or set of layers. + +![Standard cloud container architecture][3] + +These owners may be of very different types, from hardware vendors to OEMs to cloud service providers (CSPs) to middleware vendors to operating system vendors to application vendors to you, the workload owner. And for each workload that you run, on each host, the exact list of layers is likely to be different. And even when they're the same, the versions of the layers instances may be different, whether it's a different BIOS version, a different bootloader, a different kernel version, or whatever else. + +Now, in many contexts, you might not worry about this, and your CSP goes out of its way to abstract these layers and their version details away from you. But this is a security article, for security people, and that means that anybody who's reading this probably does care. + +The reason we care is not just the different versions and the different layers, but the number of different things—and different entities—that we need to trust if we're going to be happy running any sort of sensitive workload on these types of stacks. I need to trust every single layer, and the owner of every single layer, not only to do what they say they will do, but also not to be compromised. This is a _big_ stretch when it comes to running my sensitive workloads. + +### What's Enarx? + +Enarx is a new project that is trying to address this problem of having to trust all of those layers. A few of us at Red Hat have been working on it for a few months now. My colleague Nathaniel McCallum demoed an early incarnation of it at [Red Hat Summit 2019][4] in Boston, and we're ready to start announcing it to the world. We have code, we have a demo, we have a GitHub repository, we have a logo: what more could a project want? Well, people—but we'll get to that. + +![Enarx logo][5] + +With Enarx, we made the decision that we wanted to allow people running workloads to be able to reduce the number of layers—and owners—that they need to trust to the absolute minimum. We plan to use trusted execution environments ("TEEs"—see "[Oh, how I love my TEE (or do I?)][6]") to provide an architecture that looks a little more like this: + +![Enarx architecture][7] + +In a world like this, you have to trust the CPU and firmware, and you need to trust some middleware—of which Enarx is part—but you don't need to trust all of the other layers, because we will leverage the capabilities of the TEE to ensure the integrity and confidentiality of your application. The Enarx project will provide attestation of the TEE, so that you know you're running on a true and trusted TEE, and will provide open source, auditable code to help you trust the layer directly beneath your application. + +The initial code is out there—working on AMD's SEV TEE at the momen—and enough of it works now that we're ready to tell you about it. 
+ +Making sure that your application meets your own security requirements is down to you. :-) + +### How do I find out more? + +The easiest way to learn more is to visit the [Enarx GitHub][8]. + +We'll be adding more information there—it's currently just code—but bear with us: there are only a few of us on the project at the moment. A blog is on the list of things we'd like to have, but we wanted to get things started. + +We'd love to have people in the community getting involved in the project. It's currently quite low-level and requires quite a lot of knowledge to get running, but we'll work on that. You will need some specific hardware to make it work, of course. Oh, and if you're an early boot or a low-level KVM hacker, we're _particularly_ interested in hearing from you. + +I will, of course, respond to comments on this article. + +* * * + +_This article was originally published on[Alice, Eve, and Bob][9] and is reprinted with the author's permission._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/enarx-security + +作者:[Mike Bursell ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mikecamel/users/wgarry155 +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube) +[2]: https://opensource.com/sites/default/files/uploads/classic-cloud-virt-arch-1.png (Layers in a standard cloud virtualisation architecture) +[3]: https://opensource.com/sites/default/files/uploads/cloud-container-arch.png (Standard cloud container architecture) +[4]: https://www.redhat.com/en/summit/2019 +[5]: https://opensource.com/sites/default/files/uploads/enarx.png (Enarx logo) +[6]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/ +[7]: https://opensource.com/sites/default/files/uploads/reduced-arch.png (Enarx architecture) +[8]: https://github.com/enarx +[9]: https://aliceevebob.com/2019/05/07/announcing-enarx/ From 61716f452702cc2a6fdf8e213a16984f87cf43f5 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:03:15 +0800 Subject: [PATCH 08/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190517=20Using=20?= =?UTF-8?q?Testinfra=20with=20Ansible=20to=20verify=20server=20state=20sou?= =?UTF-8?q?rces/tech/20190517=20Using=20Testinfra=20with=20Ansible=20to=20?= =?UTF-8?q?verify=20server=20state.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...fra with Ansible to verify server state.md | 168 ++++++++++++++++++ 1 file changed, 168 insertions(+) create mode 100644 sources/tech/20190517 Using Testinfra with Ansible to verify server state.md diff --git a/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md b/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md new file mode 100644 index 0000000000..c14652a7f4 --- /dev/null +++ b/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md @@ -0,0 +1,168 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using Testinfra with Ansible to verify server state) +[#]: via: 
(https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state)
+[#]: author: (Clement Verna https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert)
+
+Using Testinfra with Ansible to verify server state
+======
+Testinfra is a powerful library for writing tests to verify an
+infrastructure's state. Coupled with Ansible and Nagios, it offers a
+simple solution to enforce infrastructure as code.
+![Terminal command prompt on orange background][1]
+
+By design, [Ansible][2] expresses the desired state of a machine to ensure that the content of an Ansible playbook or role is deployed to the targeted machines. But what if you need to make sure all the infrastructure changes are in Ansible? Or verify the state of a server at any time?
+
+[Testinfra][3] is an infrastructure testing framework that makes it easy to write unit tests to verify the state of a server. It is a Python library and uses the powerful [pytest][4] test engine.
+
+### Getting started with Testinfra
+
+Testinfra can be easily installed using the Python package manager (pip) and a Python virtual environment.
+
+
+```
+$ python3 -m venv venv
+$ source venv/bin/activate
+(venv) $ pip install testinfra
+```
+
+Testinfra is also available in the package repositories of Fedora and CentOS using the EPEL repository. For example, on CentOS 7 you can install it with the following commands:
+
+
+```
+$ yum install -y epel-release
+$ yum install -y python-testinfra
+```
+
+#### A simple test script
+
+Writing tests in Testinfra is easy. Using the code editor of your choice, add the following to a file named **test_simple.py**:
+
+
+```
+import testinfra
+
+def test_os_release(host):
+    assert host.file("/etc/os-release").contains("Fedora")
+
+def test_sshd_inactive(host):
+    assert host.service("sshd").is_running is False
+```
+
+By default, Testinfra provides a host object to the test case; this object gives access to different helper modules. For example, the first test uses the **file** module to verify the content of the file on the host, and the second test case uses the **service** module to check the state of a systemd service.
+
+To run these tests on your local machine, execute the following command:
+
+
+```
+(venv)$ pytest test_simple.py
+================================ test session starts ================================
+platform linux -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
+rootdir: /home/cverna/Documents/Python/testinfra
+plugins: testinfra-3.0.0
+collected 2 items
+test_simple.py ..
+
+================================ 2 passed in 0.05 seconds ================================
+```
+
+For a full list of Testinfra's APIs, you can consult the [documentation][5].
+
+### Testinfra and Ansible
+
+One of Testinfra's supported backends is Ansible, which means Testinfra can directly use Ansible's inventory file and a group of machines defined in the inventory to run tests against them.
+
+Let's use the following inventory file as an example:
+
+
+```
+[web]
+app-frontend01
+app-frontend02
+
+[database]
+db-backend01
+```
+
+We want to make sure that our Apache web server service is running on **app-frontend01** and **app-frontend02**.
Let's write the test in a file called **test_web.py**:
+
+
+```
+def test_httpd_service(host):
+    """Check that the httpd service is running on the host."""
+    assert host.service("httpd").is_running
+```
+
+To run this test using Testinfra and Ansible, use the following command:
+
+
+```
+(venv) $ pip install ansible
+(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible test_web.py
+```
+
+When invoking the tests, we use the Ansible inventory **[web]** group as the targeted machines and also specify that we want to use Ansible as the connection backend.
+
+#### Using the Ansible module
+
+Testinfra also provides a nice API to Ansible that can be used in the tests. The Ansible module lets you run Ansible plays inside a test and makes it easy to inspect the result of the play.
+
+
+```
+def test_ansible_play(host):
+    """
+    Verify that a package is installed using Ansible's
+    package module.
+    """
+    assert not host.ansible("package", "name=httpd state=present")["changed"]
+```
+
+By default, Ansible's [Check Mode][6] is enabled, which means that Ansible will report what would change if the play were executed on the remote host.
+
+### Testinfra and Nagios
+
+Now that we can easily run tests to validate the state of a machine, we can use those tests to trigger alerts on a monitoring system. This is a great way to catch unexpected changes.
+
+Testinfra offers an integration with [Nagios][7], a popular monitoring solution. By default, Nagios uses the [NRPE][8] plugin to execute checks on remote hosts, but using Testinfra allows you to run the tests directly from the Nagios master.
+
+To get a Testinfra output compatible with Nagios, we have to use the **--nagios** flag when triggering the test. We also use the **-qq** pytest flag to enable pytest's **quiet** mode so all the test details will not be displayed.
+
+
+```
+(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible --nagios -qq test_web.py
+TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.55 seconds
+```
+
+Testinfra is a powerful library for writing tests to verify an infrastructure's state. Coupled with Ansible and Nagios, it offers a simple solution to enforce infrastructure as code. It is also a key component of adding testing during the development of your Ansible roles using [Molecule][9].
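+
+As a closing sketch, the same suite can be wired into a cron job or CI pipeline with a small wrapper script. The directory path below is hypothetical, the flags are exactly the ones used above, and the non-zero exit code on failure is what the scheduler or pipeline reacts to:
+
+```
+#!/bin/bash
+# verify_web.sh -- run the Testinfra suite against the [web] group and
+# fail loudly (non-zero exit code) if any check does not pass.
+set -euo pipefail
+
+cd /opt/infra-tests          # hypothetical directory holding the inventory and tests
+source venv/bin/activate     # the virtual environment created earlier
+
+py.test --hosts=web \
+        --ansible-inventory=inventory \
+        --connection=ansible \
+        --nagios -qq test_web.py
+```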
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state + +作者:[Clement Verna][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background) +[2]: https://www.ansible.com/ +[3]: https://testinfra.readthedocs.io/en/latest/ +[4]: https://pytest.org/ +[5]: https://testinfra.readthedocs.io/en/latest/modules.html#modules +[6]: https://docs.ansible.com/ansible/playbooks_checkmode.html +[7]: https://www.nagios.org/ +[8]: https://en.wikipedia.org/wiki/Nagios#NRPE +[9]: https://github.com/ansible/molecule From 6346e592872ec9e55be9e80bed212b825fd8ca47 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:03:33 +0800 Subject: [PATCH 09/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190516=20System76?= =?UTF-8?q?'s=20secret=20sauce=20for=20success=20sources/tech/20190516=20S?= =?UTF-8?q?ystem76-s=20secret=20sauce=20for=20success.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...516 System76-s secret sauce for success.md | 71 +++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 sources/tech/20190516 System76-s secret sauce for success.md diff --git a/sources/tech/20190516 System76-s secret sauce for success.md b/sources/tech/20190516 System76-s secret sauce for success.md new file mode 100644 index 0000000000..9409de535f --- /dev/null +++ b/sources/tech/20190516 System76-s secret sauce for success.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (System76's secret sauce for success) +[#]: via: (https://opensource.com/article/19/5/system76-secret-sauce) +[#]: author: (Don Watkins https://opensource.com/users/don-watkins/users/don-watkins) + +System76's secret sauce for success +====== +Linux computer maker's approach to community-informed software and +hardware development embodies the open source way. +![][1] + +In [_The Open Organization_][2], Jim Whitehurst says, "show passion for the purpose of your organization and constantly drive interest in it. People are drawn to and generally, want to follow passionate people." Carl Richell, the founder and CEO of Linux hardware maker [System76][3], pours that secret sauce to propel his company in the world of open hardware, Linux, and open source. + +Carl demonstrates quiet confidence and engages the team at System76 in a way that empowers their creative synergy. During a recent visit to System76's Denver factory, I could immediately tell that the employees love what they do, what they produce, and their interaction with each other and their customers, and Carl sets that example. They are as they [describe themselves][4]: a diverse team of creators, makers, and builders; a small company innovating the next big things; and a group of extremely hard-core nerds. 
+ +### A revolutionary approach + +In 2005, Carl had a vision, which began as talk over some beers, to produce desktop and laptop computers that come installed with Linux. He's transformed that idea into a highly successful company founded on the [belief][5] that "the computer and operating system are the most powerful and versatile tools ever created." And by producing the best tools, System76 can inspire the curious to make their greatest discovery or complete their greatest project. + +![System 76 founder and CEO Carl Richell][6] + +Carl Richell's enthusiasm was obvious at System 76's [Thelio launch event][7]. + +System76 lives up to its name, which was inspired by the American Revolution of 1776. The company views itself as a leader in the open source revolution, granting people freedom and independence from proprietary hardware and software. + +But the revolution does not end there; it continues with the company's business practices and diverse environment that aims to close the gender gap in technology leadership. Eight of the company's 28 employees are women, including vice president of marketing Louisa Bisio, creative manager Kate Hazen, purchasing manager May Liu, head of technical support Emma Marshall, and manufacturing control and logistics manager Sarah Zinger. + +### Community-informed design + +The staff members' passion and ingenuity for making the Linux experience enjoyable for customers creates an outstanding culture. Because the company believes the Linux desktop deserves a dedicated PC manufacturer, in 2018, it brought manufacturing in-house. This allows System76's engineers to make design changes more quickly, based on their frequent interactions with Linux users to learn about their needs and wants. It also opens up its parts and process to the public, including publishing design files under GPL on [GitHub][8], consistent with its commitment to openness and open source. + +For example, when System76 decided to create its own version of Linux, [Pop!_OS][9], it hosted online meetings to discuss and learn what features and software its customers wanted. This decision to work closely with the community has been instrumental in making Pop!_OS successful. + +System76 again turned to the community when it began developing [Thelio][10], its new line of desktop computers. Marketing VP Louisa Bisio says, "Taking a similar approach to open hardware has been great. We started in-house design in 2016, prototyping different desktop designs. Then we moved from prototyping acrylic to sheet metal. Then the first few prototypes of Thelio were presented to our [Superfan][11] attendees in 2017, and their feedback was really important in adjusting the desktop designs and progressing Thelio iterations forward." + +Thelio is the product of research and development focusing on high-quality components and design. It features a unique cabling layout, innovative airflow within the computer case, and the Thelio Io open hardware SATA controller. Many of System76's customers use platforms like [CUDA][12] to do their work; to support them, System76 works backward and pulls out proprietary functionality, piece by piece, until everything is open. + +### Open roads ahead + +Manufacturing open laptops are on the long-range roadmap, but the company is actively working on an open motherboard and maintaining Pop!_OS and System76 drivers, which are open. This commitment to openness, customer-driven design, and culture give System 76 a unique place in computer manufacturing. 
All of this stems from founder Carl Richell and his philosophy "that technology should be open and accessible to everyone." [As Carl says][13], "open hardware benefits all of us. It's how we further advance technology and make it more available to everyone." + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/system76-secret-sauce + +作者:[Don Watkins ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bubblehands_fromRHT_520_0612LL.png?itok=_iQ2dO3S +[2]: https://www.amazon.com/Open-Organization-Igniting-Passion-Performance/dp/1511392460 +[3]: https://system76.com/ +[4]: https://system76.com/about +[5]: https://system76.com/pop +[6]: https://opensource.com/sites/default/files/uploads/carl_richell.jpg (System 76 founder and CEO Carl Richell) +[7]: https://trevgstudios.smugmug.com/System76/121418-Thelio-Press-Event/i-w6XNmKS +[8]: https://github.com/system76 +[9]: https://opensource.com/article/18/1/behind-scenes-popos-linux +[10]: https://system76.com/desktops +[11]: https://system76.com/superfan +[12]: https://en.wikipedia.org/wiki/CUDA +[13]: https://opensource.com/article/19/4/system76-hardware From 2bb98a105647fddff4ff2e250852b57c703175ab Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:03:43 +0800 Subject: [PATCH 10/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190516=20Create?= =?UTF-8?q?=20flexible=20web=20content=20with=20a=20headless=20management?= =?UTF-8?q?=20system=20sources/tech/20190516=20Create=20flexible=20web=20c?= =?UTF-8?q?ontent=20with=20a=20headless=20management=20system.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ntent with a headless management system.md | 113 ++++++++++++++++++ 1 file changed, 113 insertions(+) create mode 100644 sources/tech/20190516 Create flexible web content with a headless management system.md diff --git a/sources/tech/20190516 Create flexible web content with a headless management system.md b/sources/tech/20190516 Create flexible web content with a headless management system.md new file mode 100644 index 0000000000..df58e96d0d --- /dev/null +++ b/sources/tech/20190516 Create flexible web content with a headless management system.md @@ -0,0 +1,113 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create flexible web content with a headless management system) +[#]: via: (https://opensource.com/article/19/5/headless-cms) +[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta) + +Create flexible web content with a headless management system +====== +Get the versatility and freedom to deliver content however you think is +best. +![Browser of things][1] + +In recent years, we’ve witnessed an explosion in the number of technological devices that deliver web-based content to users. Smartphones, tablets, smartwatches, and more—all with progressively advancing technical capabilities and support for an ever-widening list of operating systems and web browsers—swarm anew onto the market each year. 
+ +What does this trend have to do with web development and headless versus traditional Content Management Systems (CMS)? Quite a lot. + +### CMS creates the internet + +A CMS is an application or set of computer programs used to manage digital content like images, videos, blog posts—essentially anything you would post on a website. An obvious example of a CMS is [WordPress][2]. + +The word "manage" is used broadly here. It can refer to creating, editing, or updating any kind of digital content on a website, as well as indexing the site to make it easily searchable. + +So, a CMS essentially separates the content displayed on a website from how that content is displayed. It also allows you, the website administrator, to set permissions on who can access, edit, modify, or otherwise manage that content. + +Suppose you want to post a new blog entry, update or correct something in an old post, write on your Facebook page, share a social media link to a video or article, or embed a video, music file, or pre-written set of text into a page on your website. If you have ever done anything like this, you have made use of CMS features. + +### Traditional CMS architecture: Benefits and flaws + +There are two major components that make up a CMS: the Content Management Application (CMA) and the Content Delivery Application (CDA). The CMA pertains to the front-end portion of the website. This is what allows authors or other content managers to edit and create content without help from a web developer. The CDA pertains to the back end portion of a website. By organizing and compiling content to make website content updates possible, it automates the function of a website administrator. + +Traditionally, these two pieces are joined into a single unit as a "coupled" CMS architecture. A **coupled CMS** uses a specific front-end delivery system (CMA) built into the application itself. The term "coupled" comes from the fact that the front-end framework—the templates and layout of the pages and how those pages respond to being opened in certain browsers—is coupled to the website’s content. In other words, in a coupled CMS architecture the Content Management Application (CMA) and Content Delivery Application (CDA) are inseparably merged. + +#### Benefits of the traditional CMS + +Coupled architecture does offer advantages, mainly in simplicity and ease of use for those who are not technically sophisticated. This fact explains why a platform like WordPress, which retains a traditional CMS setup, [remains so popular][3] for those who create websites or blogs. + +Further simplifying the web development process [are website builder applications][4], such as [Wix][5] and [Squarespace][6], which allow you to build drag-and-drop websites. The most popular of these builders use open source libraries but are themselves closed source. These sites allow almost anyone who can find the internet to put a website together without wading through the relatively short weeds of a CMS environment. While builder applications were [the object of derision][7] not so long ago amongst many in the open source community—mainly because they tended to give websites a generic and pre-packaged look and feel—they have grown increasingly functional and variegated. + +#### Security is an issue + +However, for all but the simplest web apps, a traditional CMS architecture results in inflexible technology. 
Modifying a static website or web app with a traditional CMS requires tremendous time and effort to produce updates, patches, and installations, preventing developers from keeping up with the growing number of devices and browsers. + +Furthermore, coupled CMSs have two built-in security flaws: + +**Risk #1** : Since content management and delivery are bound together, hackers who breach your website through the front end automatically gain access to the back-end database. This lack of separation between data and its presentation increases the likelihood that data will be stolen. Depending on the kind of user data stored on your website’s servers, a large-scale theft could be catastrophic. + +**Risk #2** : The risk of successful [Distributed Denial of Service][8] (DDoS) attacks increases without a separate system for delivering content to your website. DDoS attacks flood content delivery networks with so many traffic requests that they become overwhelmed and go offline. If your content delivery network is separated from your actual web servers, attackers will be less able to bring down your site. + +To avoid these problems, developers have introduced headless and decoupled CMSs. + +### Comparing headless and decoupled CMSs + +The "head" of a CMS is a catch-all term for the Content Delivery Application. Therefore, a CMS without one—and so with no way of delivering content to a user—is called "headless." + +This lack of an established delivery method gives headless CMSs enormous versatility. Without a CDA there is no pre-established delivery method, so developers can design separate frameworks as the need arises. The problem of constantly patching your website, web apps, and other code to guarantee compatibility disappears. + +Another option, a **decoupled CMS** , includes many of the same features and benefits as a headless CMS, but there is one crucial difference. Where a headless CMS leaves it entirely to the developer to deliver and present content to their users, a decoupled CMS offers pre-established delivery tools that developers can either take or leave. Decoupled CMSs thus offer both the simplicity of the traditional CMS and the versatility of the headless ones. + +In short, a decoupled CMS is sometimes called a **hybrid CMS ****since it's a hybrid of the coupled and headless designs. Decoupled CMSs are not a new concept. As far back as 2015, PHP core repository developer David Buchmann was [calling on devs][9] to decouple their CMSs to meet a wider set of challenges. + +### Security improvements with a headless CMS + +Perhaps the most important point to make about headless versus decoupled content management architectures, and how they both differ from traditional architecture, is the added security benefit. In both the headless and decoupled designs, content and user data are located on a separate back-end system protected by a firewall. The user can’t access the content management application itself. + +However, it's important to keep in mind that the major consequence of this change in architectures is that since the architecture is fragmented, developers have to fill in the gaps and design content delivery and presentation mechanisms on their own. This means that whether you opt to go headless or decoupled, your developer needs to understand security. While separating content management and content delivery gives hackers one fewer vector through which to attack, this isn’t a security benefit in itself. The burden will be on your devs to properly secure your resulting CDA. 
+ +A firewall protecting the back end provides a [crucial layer of security][10]. Headless and decoupled architectures can distribute your content among multiple databases, so if you take advantage of this possibility you can lower the chance of successful DDoS attacks even further. Open source headless CMS can also benefit from the installation of a [Linux VPN][11] or Linux kernel firewall management tool like [iptables][12]. All of these options combine to provide the added security developers need to create no matter what kind of CDA or back end setup they choose. + +Benefits aside, keep in mind that headless CMS platforms are a fairly new tech. Before making the switch to headless or decoupled, consider whether the host you’re using can support your added security so that you can host your application behind network security systems to block attempts at unauthorized access. If they cannot, a host change might be in order. When evaluating new hosts, also consider any existing contracts or security and compliance restrictions in place (GDPR, CCPA, etc.) which could cause migration troubles. + +### Open source options + +As you can see, headless architecture offers designers the versatility and freedom to deliver content however they think best. This spirit of freedom fits naturally with the open source paradigm in software design, in which all source code is available to public view and may be taken and modified by anyone for any reason. + +There are a number of open source headless CMS platforms that allow developers to do just that: [Mura,][13] [dotCMS][14], and [Cockpit CMS][15] to name a few. For a deeper dive into the world of open source headless CMS platforms, [check out this article][16]. + +### Final thoughts + +For web designers and developers, the idea of a headless CMS marks a significant rethinking of how sites are built and delivered. Moving to this architecture is a great way to future-proof your website against changing preferences and whatever tricks future hackers may cook up, while at the same time creating a seamless user experience no matter what device or browser is used. You might also take a look at [this guide][17] for UX tips on designing your website in a way that meshes with headless and decoupled architectures. 
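+
+To make the firewall and iptables advice above concrete, here is a minimal sketch. It assumes, purely for illustration, that the back-end content API listens on port 8055 and that only two front-end delivery servers should ever reach it; the addresses and port are placeholders:
+
+```
+# Allow the two front-end hosts to reach the content API port...
+iptables -A INPUT -p tcp --dport 8055 -s 203.0.113.10 -j ACCEPT
+iptables -A INPUT -p tcp --dport 8055 -s 203.0.113.11 -j ACCEPT
+# ...and drop that port for everyone else.
+iptables -A INPUT -p tcp --dport 8055 -j DROP
+```
+
+The same idea carries over to nftables, a cloud provider's security groups, or whatever firewalling layer your host exposes; the point is that the management back end never needs to be reachable from the open internet.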
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/headless-cms + +作者:[Sam Bocetta][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/sambocetta +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things) +[2]: https://wordpress.org/ +[3]: https://kinsta.com/wordpress-market-share/ +[4]: https://hostingcanada.org/website-builders/ +[5]: https://www.wix.com/ +[6]: https://www.squarespace.com +[7]: https://arstechnica.com/information-technology/2016/11/wordpress-and-wix-trade-shots-over-alleged-theft-of-open-source-code/ +[8]: https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/ +[9]: https://opensource.com/business/15/3/decoupling-your-cms +[10]: https://www.hostpapa.com/blog/security/why-your-small-business-needs-a-firewall/ +[11]: https://surfshark.com/download/linux +[12]: https://www.linode.com/docs/security/firewalls/control-network-traffic-with-iptables/ +[13]: https://www.getmura.com/ +[14]: https://dotcms.com/ +[15]: https://getcockpit.com/ +[16]: https://www.cmswire.com/web-cms/13-headless-cmss-to-put-on-your-radar/ +[17]: https://medium.com/@mat_walker/tips-for-content-modelling-with-the-headless-cms-contentful-7e886a911962 From 1549059cd509415d08f8700011ca0ec502c32fbe Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:04:14 +0800 Subject: [PATCH 11/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190515=20How=20to?= =?UTF-8?q?=20manage=20access=20control=20lists=20with=20Ansible=20sources?= =?UTF-8?q?/tech/20190515=20How=20to=20manage=20access=20control=20lists?= =?UTF-8?q?=20with=20Ansible.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...anage access control lists with Ansible.md | 139 ++++++++++++++++++ 1 file changed, 139 insertions(+) create mode 100644 sources/tech/20190515 How to manage access control lists with Ansible.md diff --git a/sources/tech/20190515 How to manage access control lists with Ansible.md b/sources/tech/20190515 How to manage access control lists with Ansible.md new file mode 100644 index 0000000000..692dd70599 --- /dev/null +++ b/sources/tech/20190515 How to manage access control lists with Ansible.md @@ -0,0 +1,139 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to manage access control lists with Ansible) +[#]: via: (https://opensource.com/article/19/5/manage-access-control-lists-ansible) +[#]: author: (Taz Brown https://opensource.com/users/heronthecli) + +How to manage access control lists with Ansible +====== +Automating ACL management with Ansible's ACL module is a smart way to +strengthen your security strategy. +![Data container block with hexagons][1] + +Imagine you're a new DevOps engineer in a growing agile environment, and recently your company has experienced phenomenal growth. To support expansion, the company increased hiring by 25% over the last quarter and added 5,000 more servers and network devices to its infrastructure. 
The company now has over 13,000 users, and you need a tool to scale the existing infrastructure and manage your large number of users and their thousands of files and directories. The company decided to adopt [Ansible][2] company-wide to manage [access control lists (ACLs)][3] and answer the call to manage files, directories, and permissions effectively.
+
+Ansible can be used for a multitude of administration and maintenance tasks and, as a DevOps engineer or administrator, it's likely you've been tasked with using it to manage ACLs.
+
+### About managing ACLs
+
+ACLs allow regular users to share their files and directories selectively with other users and groups. With ACLs, a user can grant others the ability to read, write, and execute files and directories without leaving those filesystem elements open.
+
+ACLs are set and removed at the command line using the **setfacl** utility. The command is usually followed by the name of a file or directory. To set permissions, you would use the Linux command **setfacl -m d:o:rx <file/directory>** (e.g., **setfacl -m d:o:rx Music/**). To view the current permissions on a directory, you would use the command **getfacl <file/directory>** (e.g., **getfacl Music/**). To remove an ACL from a file or directory, you would type the command **# setfacl -x <acl-entry> <file/directory>** (to remove only the specified ACL from the file/directory) or **# setfacl -b <file/directory>** (to remove all ACLs from the file/directory).
+
+Only the owner assigned to the file or directory can set ACLs. (It's important to understand this before you, as the admin, take on Ansible to manage your ACLs.) There are also default ACLs, which control directory access; if a file inside a directory has no ACL, then the default ACL is applied.
+
+
+```
+sudo setfacl -m d:o:rx Music
+getfacl Music/
+# file: Music/
+# owner: root
+# group: root
+user::rwx
+group::---
+other::---
+default:user::rwx
+default:group::---
+default:other::r-x
+```
+
+### Enter Ansible
+
+So how can Ansible, in all its wisdom, tackle the task of applying permissions to users, files, directories, and more? Ansible can play nicely with ACLs, just as it does with a lot of features, utilities, APIs, etc. Ansible has an out-of-the-box [ACL module][3] that allows you to create playbooks/roles around granting a user access to a file, removing ACLs for users on a specific file, setting default ACLs for users on files, or obtaining ACLs on particular files.
+
+Anytime you are administering ACLs, you should use the best practice of "least privilege," meaning you should give a user access only to what they need to perform their role or execute a task, and no more. Restraint and minimizing the attack surface are critical. The more access extended, the higher the risk of unauthorized access to company assets.
+
+Here's an example Ansible playbook:
+
+![Ansible playbook][4]
+
+As an admin, automating ACL management demands that your Ansible playbooks can scale across your infrastructure to increase speed, improve efficiency, and reduce the time it takes to achieve your goals. There will be times when you need to determine the ACL for a specific file. This is essentially the same as using **getfacl <file_name>** in Linux.
If you want to determine the ACLs of many specific files, start with a playbook that looks like this:
+
+
+```
+---
+- hosts: all
+  tasks:
+    - name: obtain the acl for a specific file
+      acl:
+        path: /etc/logrotate.d
+        user_nfsv4_acls: true
+      register: acl_info
+```
+
+You can use the following playbook to set permissions on files/directories:
+
+![Ansible playbook][5]
+
+This playbook grants user access to a file:
+
+
+```
+- hosts:
+  become: yes
+  gather_facts: no
+  tasks:
+    - name: Grant user Shirley read access to a file
+      acl:
+        path: /etc/foo.conf
+        entity: shirley
+        etype: user
+        permissions: r
+        state: present
+```
+
+And this playbook grants user access to a directory:
+
+
+```
+---
+- hosts: all
+  become: yes
+  gather_facts: no
+  tasks:
+    - name: setting permissions on directory and user
+      acl:
+        path: /path/to/scripts/directory
+        entity: "{{ item }}"
+        etype: user
+        permissions: rwx
+        state: present
+      loop:
+        - www-data
+        - root
+```
+
+### Security realized?
+
+Applying ACLs to files and users is a practice you should take seriously in your role as a DevOps engineer. Security best practices and formal compliance often get little or no attention. When you allow access to files with sensitive data, you are always risking that the data will be tampered with, stolen, or deleted. Therefore, data protection must be a focal point in your security strategy. Ansible can be part of your security automation strategy, as demonstrated here, and your ACL application is as good a place to start as any.
+
+Automating your security practices will, of course, go beyond just managing ACLs; it might also involve [SELinux][6] configuration, cryptography, security, and compliance. Remember that Ansible also allows you to define your systems for security, whether it's locking down users and groups (e.g., managing ACLs), setting firewall rules, or applying custom security policies.
+
+Your security strategy should start with a baseline plan. As a DevOps engineer or admin, you should examine the current security strategy (or the lack thereof), then chart your plan for automating security in your environment.
+
+### Conclusion
+
+Using Ansible to manage your ACLs as part of your overall security automation strategy depends on the size of both the company you work for and the infrastructure you manage. Permissions, users, and files can quickly get out of control, potentially placing your security in peril and putting the company in a position you definitely don't want it to be in.
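+
+As a last practical note, the same ACL module used in the playbooks above can also be driven from an Ansible ad-hoc command, which is a handy way to test a change (and verify it with plain getfacl) before committing it to a playbook. The host group, path, and permissions here are placeholders for illustration:
+
+```
+# Apply the ACL once, as an ad-hoc task (become root with -b):
+ansible web -b -m acl -a "path=/srv/app/shared entity=shirley etype=user permissions=rx state=present"
+
+# Then confirm on the target hosts what was actually set:
+ansible web -b -m command -a "getfacl /srv/app/shared"
+```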
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/manage-access-control-lists-ansible + +作者:[Taz Brown ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/heronthecli +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_container_block.png?itok=S8MbXEYw (Data container block with hexagons) +[2]: https://opensource.com/article/19/2/quickstart-guide-ansible +[3]: https://docs.ansible.com/ansible/latest/modules/acl_module.html +[4]: https://opensource.com/sites/default/files/images/acl.yml_.png (Ansible playbook) +[5]: https://opensource.com/sites/default/files/images/set_filedir_permissions.png (Ansible playbook) +[6]: https://opensource.com/article/18/8/cheat-sheet-selinux From 406f6d1f7c8a13d669e76540782319c706328121 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:18:35 +0800 Subject: [PATCH 12/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190517=20HPE=20to?= =?UTF-8?q?=20buy=20Cray,=20offer=20HPC=20as=20a=20service=20sources/talk/?= =?UTF-8?q?20190517=20HPE=20to=20buy=20Cray,=20offer=20HPC=20as=20a=20serv?= =?UTF-8?q?ice.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...HPE to buy Cray, offer HPC as a service.md | 68 +++++++++++++++++++ 1 file changed, 68 insertions(+) create mode 100644 sources/talk/20190517 HPE to buy Cray, offer HPC as a service.md diff --git a/sources/talk/20190517 HPE to buy Cray, offer HPC as a service.md b/sources/talk/20190517 HPE to buy Cray, offer HPC as a service.md new file mode 100644 index 0000000000..a1dafef683 --- /dev/null +++ b/sources/talk/20190517 HPE to buy Cray, offer HPC as a service.md @@ -0,0 +1,68 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (HPE to buy Cray, offer HPC as a service) +[#]: via: (https://www.networkworld.com/article/3396220/hpe-to-buy-cray-offer-hpc-as-a-service.html) +[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/) + +HPE to buy Cray, offer HPC as a service +====== +High-performance computing offerings from HPE plus Cray could enable things like AI, ML, high-speed financial trading, creation digital twins for entire enterprise networks. +![Cray Inc.][1] + +HPE has agreed to buy supercomputer-maker Cray for $1.3 billion, a deal that the companies say will bring their corporate customers high-performance computing as a service to help with analytics needed for artificial intelligence and machine learning, but also products supporting high-performance storage, compute and software. + +In addition to bringing HPC capabilities that can blend with and expand HPE’s current products, Cray brings with it customers in government and academia that might be interested in HPE’s existing portfolio as well. + +**[ Now read:[Who's developing quantum computers][2] ]** + +The companies say they expect to close the cash deal by the end of next April. + +The HPC-as-a-service would be offered through [HPE GreenLake][3], the company’s public-, private-, hybrid-cloud service. Such a service could address periodic enterprise need for fast computing that might otherwise be too expensive, says Tim Zimmerman, an analyst with Gartner. 
+ +Businesses could use the service, for example, to create [digital twins][4] of their entire networks and use them to test new code to see how it will impact the network before deploying it live, Zimmerman says. + +Cray has HPC technology that HPE Labs might be exploring on its own, but that can be brought to market in a much quicker timeframe. + +HPE says that overall, buying cray give it technologies needed for massively data-intensive workloads such as AI and ML that is used for engineering services, transaction-based trading by financial firms, pharmaceutical research and academic studies into weather and genomes, for instance, Zimmerman says. + +As HPE puts it, Cray supercomputing platforms “have the ability to handle massive data sets, converged modelling, simulation, AI and analytics workloads.” + +Cray is working on [what it says will be the world’s fastest supercomputer][5] when it’s finished in 2021, cranking out 1.5 exaflops. The current fastest supercomputer is 143.5 petaflops. [Click [here][6] to see the current top 10 fastest supercomputers.] + +In general, HPE says it hopes to create a comprehensive line of products to support HPC infrastructure including “compute, high-performance storage, system interconnects, software and services.” + +Together, the talent in the two companies and their combined technologies should be able to increase innovation, HPE says. + +Earlier this month, HPE’s CEO Antonio Neri said in [an interview with _Network World_][7] that the company will be investing $4 billion over four years in a range of technology to boost “connectivity, security, and obviously cloud and analytics.” In laying out the company’s roadmap he made no specific mention of HPC. + +HPE net revenues last fiscal year were $30.9 billion. Cray’s total revenue was $456 million, with a gross profit of $130 million. + +The acquisition will pay $35 per share for Cray stock. + +Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3396220/hpe-to-buy-cray-offer-hpc-as-a-service.html + +作者:[Tim Greene][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Tim-Greene/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/06/the_cray_xc30_piz_daint_system_at_the_swiss_national_supercomputing_centre_via_cray_inc_3x2_978x652-100762113-large.jpg +[2]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html +[3]: https://www.networkworld.com/article/3280996/hpe-adds-greenlake-hybrid-cloud-to-enterprise-service-offerings.html +[4]: https://www.networkworld.com/article/3280225/what-is-digital-twin-technology-and-why-it-matters.html +[5]: https://www.networkworld.com/article/3373539/doe-plans-worlds-fastest-supercomputer.html +[6]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html +[7]: https://www.networkworld.com/article/3394879/hpe-s-ceo-lays-out-his-technology-vision.html +[8]: https://www.facebook.com/NetworkWorld/ +[9]: https://www.linkedin.com/company/network-world From d3da7124905f7feafe3deaa8c02da3bfab0babfb Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:19:00 +0800 Subject: [PATCH 13/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190517=20The=20mo?= =?UTF-8?q?dern=20data=20center=20and=20the=20rise=20in=20open-source=20IP?= =?UTF-8?q?=20routing=20suites=20sources/talk/20190517=20The=20modern=20da?= =?UTF-8?q?ta=20center=20and=20the=20rise=20in=20open-source=20IP=20routin?= =?UTF-8?q?g=20suites.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e rise in open-source IP routing suites.md | 140 ++++++++++++++++++ 1 file changed, 140 insertions(+) create mode 100644 sources/talk/20190517 The modern data center and the rise in open-source IP routing suites.md diff --git a/sources/talk/20190517 The modern data center and the rise in open-source IP routing suites.md b/sources/talk/20190517 The modern data center and the rise in open-source IP routing suites.md new file mode 100644 index 0000000000..02063687a0 --- /dev/null +++ b/sources/talk/20190517 The modern data center and the rise in open-source IP routing suites.md @@ -0,0 +1,140 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The modern data center and the rise in open-source IP routing suites) +[#]: via: (https://www.networkworld.com/article/3396136/the-modern-data-center-and-the-rise-in-open-source-ip-routing-suites.html) +[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/) + +The modern data center and the rise in open-source IP routing suites +====== +Open source enables passionate people to come together and fabricate work of phenomenal quality. This is in contrast to a single vendor doing everything. +![fdecomite \(CC BY 2.0\)][1] + +As the cloud service providers and search engines started with the structuring process of their business, they quickly ran into the problems of managing the networking equipment. Ultimately, after a few rounds of getting the network vendors to understand their problems, these hyperscale network operators revolted. 
+
+Primarily, what the operators were looking for was a level of control in managing their network which the network vendors couldn’t offer. The revolution burned the path that introduced open networking and network disaggregation to the world of networking. Let us first learn about disaggregation, followed by open networking.
+
+### Disaggregation
+
+The concept of network disaggregation involves breaking up the vertical networking landscape into individual pieces, where each piece can be used in the best way possible. The hardware can be separated from the software, along with open or closed IP routing suites. This enables the network operators to use best-of-breed hardware, software and applications.
+
+**[ Now see [7 free network tools you must have][2]. ]**
+
+Networking has always been built as an appliance and not as a platform. The mindset is that the network vendor builds an appliance and, because it is a specialized appliance, they completely control what you can and cannot do on that box. In plain words, they will not enable anything that is not theirs. As a result, they act as gatekeepers and not gate-enablers.
+
+Network disaggregation empowers the network operators with the ability to lay hands on the features they need when they need them. This is impossible with non-disaggregated hardware.
+
+### Disaggregation leads to using best-of-breed
+
+In the traditional vertically integrated networking market, you’re forced to live with the software because you like the hardware, or vice-versa. But network disaggregation drives different people to develop things that matter to them. This allows multiple groups of people to connect, with each one focused on doing what he or she does best. Switching silicon manufacturers can provide the best merchant silicon. Routing suites can be provided by those who are the best at that. And the OS vendors can provide the glue that enables all of these to work well together.
+
+With disaggregation, people are driven to do what they are good at. One company does the hardware, another does the software, and yet another does the IP routing suite. Hence, today the networking world looks more like the server world.
+
+### Open source
+
+Within this rise of the modern data center, there is another element driving network disaggregation: the notion of open source. Open source is “denoting software for which the original source code is made freely available; it may be redistributed and modified.” It enables passionate people to come together and fabricate work of phenomenal quality. This is in contrast to a single vendor doing everything.
+
+As a matter of fact, the networking world has always been very vendor driven. However, the advent of open source gives like-minded people the opportunity to control the features, rather than the vendor. This eliminates the element of vendor lock-in, thereby enabling interesting work. Open source allows more than one company to be involved.
+
+### Open source in the data center
+
+The traditional enterprise and data center networks were primarily designed around bridging and Spanning Tree Protocol (STP). However, the modern data center is driven by IP routing and the Clos topology. As a result, you need a strong IP routing suite.
+
+That was the point where the need surfaced for an open-source routing suite that can help drive the modern data center. The primary open-source routing suites are [FRRouting (FRR)][3], BIRD, GoBGP and ExaBGP.
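+
+To make this concrete, the sketch below shows what a minimal FRR BGP configuration for a single data-center leaf switch could look like. The ASN, loopback address and interface names (swp1/swp2) are hypothetical, and it assumes bgpd has been enabled in /etc/frr/daemons on a typical packaged install, so treat it as an illustration rather than a drop-in config.
+
+```
+router bgp 65101
+ ! BGP unnumbered: peer over the interfaces themselves, no per-link IP addressing required
+ neighbor swp1 interface remote-as external
+ neighbor swp2 interface remote-as external
+ !
+ address-family ipv4 unicast
+  ! advertise the loopback that identifies this leaf
+  network 10.0.0.11/32
+ exit-address-family
+```
+
+A configuration like this can be spun up in a VM or container for lab testing and inspected with `vtysh -c "show bgp summary"`.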
+ +Open-source IP routing protocol suites are slowly but steadily gaining acceptance and are used in data centers of various sizes. Why? It is because they allow a community of developers and users to work on finding solutions to common problems. Open-source IP routing protocol suites equip them to develop the specific features that they need. It also helps the network operators to create simple designs that make sense to them, as opposed to having everything controlled by the vendor. They also enable routing suites to run on compute nodes. Kubernetes among others uses this model of running a routing protocol on a compute node. + +Today many startups are using FRR. Out of all of the IP routing suites, FRR is preferred in the data center as the primary open-source IP routing protocol suite. Some traditional network vendors have even demonstrated the use of FRR on their networking gear. + +There are lots of new features currently being developed for FRR, not just by the developers but also by the network operators. + +### Use cases for open-source routing suites + +When it comes to use-cases, where do IP routing protocol suites sit? First and foremost, if you want to do any type of routing in the disaggregated network world, you need an IP routing suite. + +Some operators are using FRR at the edge of the network as well, thereby receiving full BGP feeds. Many solutions which use Intel’s DPDK for packet forwarding use FRR as the control plane, receiving full BGP feeds. In addition, there are other vendors using FRR as the core IP routing suite for a full leaf and spine data center architecture. You can even get a version of FRR on pfSense which is a free and open source firewall. + +We need to keep in mind that reference implementations are important. Open source allows you to test at scale. But vendors don’t allow you to do that. However, with FRR, we have the ability to spin up virtual machines (VMs) or even containers by using software like Vagrant to test your network. Some vendors do offer software versions, but they are not fully feature-compatible. + +Also, with open source you do not need to wait. This empowers you with flexibility and speed which drives the modern data center. + +### Deep dive on FRRouting (FRR) + +FRR is a Linux foundation project. In a technical Linux sense, FRR is a group of daemons that work together, providing a complete routing suite that includes BGP, IS-IS, LDP, OSPF, BFD, PIM, and RIP. + +Each one of these daemons communicate with the common routing information base (RIB) daemon called Zebra in order to interface with the OS and to resolve conflicts between the multiple routing protocols providing the same information. Interfacing with the OS is used to receive the link up/down events, to add and delete routes etc. + +### FRRouting (FRR) components: Zebra + +Zebra is the RIB of the routing systems. It knows everything about the state of the system relevant to routing and is able to pass and disseminate this information to all the interested parties. + +The RIB in FRR acts just like a traditional RIB. When a route wins, it goes into the Linux kernel data plane where the forwarding occurs. All of the routing protocols run as separate processes and each of them have their source code in FRR. + +For example, when BGP starts up, it needs to know, for instance, what kind of virtual routing and forwarding (VRF) and IP interfaces are available. Zebra collects and passes this information back to the interested daemons. 
It passes all the relevant information about the state of the machine.
+
+Furthermore, you can also register information with Zebra. For example, if a particular route changes, the daemon can be informed. This can also be used for reverse path forwarding (RPF). FRR doesn't need to do a pull when changes happen on the network.
+
+There are myriad ways to control Linux and its state. Sometimes you have to use options like the Netlink bus, and sometimes you may need to read state from the Linux proc file system. The goal of Zebra is to gather all this data for the upper-level protocols.
+
+### FRR supports remote data planes
+
+FRR also has the ability to manage remote data planes. So, what does this mean? Typically, the data forwarding plane and the routing protocols run on the same box. Another model, adopted by OpenFlow and SDN for example, is one in which the data forwarding plane can be on one box while FRR runs on a different box on behalf of the first box and pushes the computed routing state to the first box. In other words, the data plane and the control plane run on different boxes.
+
+If you examine the traditional world, it’s like having one large chassis with different line cards and the ability to install routes in those different line cards. FRR operates with the same model: one control plane with the capability to serve three boxes, if needed. It does this via the forwarding plane manager.
+
+### Forwarding plane manager
+
+Zebra can either install routes directly into the data plane of the box it is running on or use a forwarding plane manager to install routes on a remote box. When it installs a route, the forwarding plane manager abstracts the data that describes the route and the next hops. It then pushes the data to a remote system, where the remote machine processes it and programs the ASIC appropriately.
+
+After the data is abstracted, you can use whatever protocol you want in order to push the data to the remote machine. You can even include the data in an email.
+
+### What is holding people back from open source?
+
+For the last 30 years, the networking world has meant going to a vendor to solve a problem. Now, with open-source routing suites such as FRR, there is a major shift in the mindset as to how you approach troubleshooting.
+
+This can create a fear of not being able to use it properly, because with open source you are the one who has to fix it. That can seem scary and daunting at first, but it doesn’t necessarily have to be. Also, to switch to FRR on traditional network gear, you need the vendor to enable it, but they may be reluctant because it competes with their own platforms, which can be another roadblock.
+
+### The future of FRR
+
+If we examine FRR from the use-case perspective of the data center, FRR is feature-complete. Anyone building an IP-based data center will find that FRR has everything they need. The latest 7.0 release of FRR adds YANG/NETCONF support, BGP enhancements and OpenFabric.
+
+FRR is not just about providing features, boosting performance or being the same as or better than the traditional network vendor’s software; it is also about simplifying the process for the end user.
+
+Because the modern data center is focused on automation and ease of use, FRR has made progress that the vendors have not yet caught up with. FRR is very automation friendly. For example, FRR takes BGP and makes it automation-friendly without having to change the protocol. It supports BGP unnumbered in a way that is unmatched by any other vendor suite.
This is where the vendors are trying to catch up. + +Also, while troubleshooting, FRR shows peer’s and host’s names and not just the IP addresses. This allows you to understand without having spent much time. However, vendors show the peer’s IP addresses which can be daunting when you need to troubleshoot. + +FRR provides the features that you need to run an efficient network and data center. It makes easier to configure and manage the IP routing suite. Vendors just add keep adding features over features whether they are significant or not. Then you need to travel the certification paths that teach you how to twiddle 20 million nobs. How many of those networks are robust and stable? + +FRR is about supporting features that matter and not every imaginable feature. FRR is an open source project that brings like-minded people together, good work that is offered isn’t turned away. As a case in point, FRR has an open source implementation of EIGRP. + +The problem surfaces when you see a bunch of things, you think you need them. But in reality, you should try to keep the network as simple as possible. FRR is laser-focused on the ease of use and simplifying the use rather than implementing features that are mostly not needed to drive the modern data center. + +For more information and to contribute, why not join the [FRR][4] [mailing list group][4]. + +**This article is published as part of the IDG Contributor Network.[Want to Join?][5]** + +Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3396136/the-modern-data-center-and-the-rise-in-open-source-ip-routing-suites.html + +作者:[Matt Conran][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Matt-Conran/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/12/modular_humanoid_polyhedra_connections_structure_building_networking_by_fdecomite_cc_by_2-0_via_flickr_1200x800-100782334-large.jpg +[2]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html +[3]: https://frrouting.org/community/7.0-launch.html +[4]: https://frrouting.org/#participate +[5]: /contributor-network/signup.html +[6]: https://www.facebook.com/NetworkWorld/ +[7]: https://www.linkedin.com/company/network-world From 5280040401a0eea1e7bd6ed3c81af514849c3e0c Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:20:44 +0800 Subject: [PATCH 14/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190516=20Will=205?= =?UTF-8?q?G=20be=20the=20first=20carbon-neutral=20network=3F=20sources/ta?= =?UTF-8?q?lk/20190516=20Will=205G=20be=20the=20first=20carbon-neutral=20n?= =?UTF-8?q?etwork.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
5G be the first carbon-neutral network.md | 88 +++++++++++++++++++ 1 file changed, 88 insertions(+) create mode 100644 sources/talk/20190516 Will 5G be the first carbon-neutral network.md diff --git a/sources/talk/20190516 Will 5G be the first carbon-neutral network.md b/sources/talk/20190516 Will 5G be the first carbon-neutral network.md new file mode 100644 index 0000000000..decacfac5d --- /dev/null +++ b/sources/talk/20190516 Will 5G be the first carbon-neutral network.md @@ -0,0 +1,88 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Will 5G be the first carbon-neutral network?) +[#]: via: (https://www.networkworld.com/article/3395465/will-5g-be-the-first-carbon-neutral-network.html) +[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/) + +Will 5G be the first carbon-neutral network? +====== +Increased energy consumption in new wireless networks could become ecologically unsustainable. Engineers think they have solutions that apply to 5G, but all is not certain. +![Dushesina/Getty Images][1] + +If wireless networks transfer 1,000 times more data, does that mean they will use 1,000 times more energy? It probably would with the old 4G LTE wireless technologies— LTE doesn’t have much of a sleep-standby. But with 5G, we might have a more energy-efficient option. + +More customers want Earth-friendly options, and engineers are now working on how to achieve it — meaning 5G might introduce the first zero-carbon networks. It’s not all certain, though. + +**[ Related:[What is 5G wireless? And how it will change networking as we know it][2] ]** + +“When the 4G technology for wireless communication was developed, not many people thought about how much energy is consumed in transmitting bits of information,” says Emil Björnson, associate professor of communication systems at Linkoping University, [in an article on the school’s website][3]. + +Standby was never built into 4G, Björnson explains. Reasons include overbuilding — the architects wanted to ensure connections didn’t fail, so they just kept the power up. The downside to that redundancy was that almost the same amount of energy is used whether the system is transmitting data or not. + +“We now know that this is not necessary,” Björnson says. 5G networks don’t use much power during periods of low traffic, and that reduces power consumption. + +Björnson says he knows how to make future-networks — those 5G networks that one day may become the enterprise broadband replacement — super efficient even when there is heavy use. Massive-MIMO (multiple-in, multiple-out) antennas are the answer, he says. That’s hundreds of connected antennas taking advantage of multipath. + +I’ve written before about some of Björnson's Massive-MIMO ideas. He thinks [Massive-MIMO will remove all capacity ceilings from wireless networks][4]. However, he now adds calculations to his research that he claims prove that the Massive-MIMO antenna technology will also reduce power use. He and his group are actively promoting their academic theories in a paper ([pdf][5]). + +**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]** + +### Nokia's plan to reduce wireless networks' CO2 emissions + +Björnson's isn’t the only 5G-aimed eco-concept out there. Nokia points out that it isn't just radios transmitting that use electricity. 
Cooling is actually the main electricity hog, says the telcommunications company, which is one of the world’s principal manufacturers of mobile network equipment. + +Nokia says the global energy cost of Radio Access Networks (RANs) in 2016 (the last year numbers were available), which includes base transceiver stations (BTSs) needed by mobile networks, was around $80 billion. That figure increases with more users coming on stream, something that’s probable. Of the BTS’s electricity use, about 90% “converts to waste heat,” [Harry Kuosa, a marketing executive, writes on Nokia’s blog][7]. And base station sites account for about 80% of a mobile network’s entire energy use, Nokia expands on its website. + +“A thousand-times more traffic that creates a thousand-times higher energy costs is unsustainable,” Nokia says in its [ebook][8] on the subject, “Turning the zero carbon vision into business opportunity,” and it’s why Nokia plans liquid-cooled 5G base stations among other things, including chip improvements. It says the liquid-cooling can reduce CO2 emissions by up to 80%. + +### Will those ideas work? + +Not all agree power consumption can be reduced when implementing 5G, though. Gabriel Brown of Heavy Reading, quotes [in a tweet][9] a China Mobile executive as saying that 5G BTSs will use three times as much power as 4G LTE ones because the higher frequencies used in 5G mean one needs more BTS units to provide the same geographic coverage: For physics reasons, higher frequencies equals shorter range. + +If, as is projected, 5G develops into the new enterprise broadband for the internet of things (IoT), along with associated private networks covering everything else, then these eco- and cost-important questions are going to be salient — and they need answers quickly. 5G will soon be here, and [Gartner estimates that 60% of organizations will adopt it][10]. + +**More about 5G networks:** + + * [How enterprises can prep for 5G networks][11] + * [5G vs 4G: How speed, latency and apps support differ][12] + * [Private 5G networks are coming][13] + * [5G and 6G wireless have security issues][14] + * [How millimeter-wave wireless could help support 5G and IoT][15] + + + +Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3395465/will-5g-be-the-first-carbon-neutral-network.html + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/01/4g-versus-5g_horizon_sunrise-100784230-large.jpg +[2]: https://www.networkworld.com/article/3203489/lan-wan/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html +[3]: https://liu.se/en/news-item/okningen-av-mobildata-kraver-energieffektivare-nat +[4]: https://www.networkworld.com/article/3262991/future-wireless-networks-will-have-no-capacity-limits.html +[5]: https://arxiv.org/pdf/1812.01688.pdf +[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture +[7]: https://www.nokia.com/blog/nokia-has-ambitious-plans-reduce-network-power-consumption/ +[8]: https://pages.nokia.com/2364.Zero.Emissions.ebook.html?did=d000000001af&utm_campaign=5g_in_action_&utm_source=twitter&utm_medium=organic&utm_term=0dbf430c-1c94-47d7-8961-edc4f0ba3270 +[9]: https://twitter.com/Gabeuk/status/1099709788676636672?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1099709788676636672&ref_url=https%3A%2F%2Fwww.lightreading.com%2Fmobile%2F5g%2Fpower-consumption-5g-basestations-are-hungry-hungry-hippos%2Fd%2Fd-id%2F749979 +[10]: https://www.gartner.com/en/newsroom/press-releases/2018-12-18-gartner-survey-reveals-two-thirds-of-organizations-in +[11]: https://www.networkworld.com/article/3306720/mobile-wireless/how-enterprises-can-prep-for-5g.html +[12]: https://www.networkworld.com/article/3330603/mobile-wireless/5g-versus-4g-how-speed-latency-and-application-support-differ.html +[13]: https://www.networkworld.com/article/3319176/mobile-wireless/private-5g-networks-are-coming.html +[14]: https://www.networkworld.com/article/3315626/network-security/5g-and-6g-wireless-technologies-have-security-issues.html +[15]: https://www.networkworld.com/article/3291323/mobile-wireless/millimeter-wave-wireless-could-help-support-5g-and-iot.html +[16]: https://www.facebook.com/NetworkWorld/ +[17]: https://www.linkedin.com/company/network-world From d84d3f59358b75a0435eb7a20e9a751bba9ecffa Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:21:01 +0800 Subject: [PATCH 15/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190515=20IBM=20ov?= =?UTF-8?q?erhauls=20mainframe-software=20pricing,=20adds=20hybrid,=20priv?= =?UTF-8?q?ate-cloud=20services=20sources/talk/20190515=20IBM=20overhauls?= =?UTF-8?q?=20mainframe-software=20pricing,=20adds=20hybrid,=20private-clo?= =?UTF-8?q?ud=20services.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ng, adds hybrid, private-cloud services.md | 77 +++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 sources/talk/20190515 IBM overhauls mainframe-software pricing, adds hybrid, private-cloud services.md diff --git a/sources/talk/20190515 IBM overhauls mainframe-software pricing, adds hybrid, private-cloud services.md b/sources/talk/20190515 IBM overhauls mainframe-software pricing, adds hybrid, private-cloud services.md new file mode 100644 index 0000000000..b69109641d --- /dev/null +++ 
b/sources/talk/20190515 IBM overhauls mainframe-software pricing, adds hybrid, private-cloud services.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (IBM overhauls mainframe-software pricing, adds hybrid, private-cloud services) +[#]: via: (https://www.networkworld.com/article/3395776/ibm-overhauls-mainframe-software-pricing-adds-hybrid-private-cloud-services.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +IBM overhauls mainframe-software pricing, adds hybrid, private-cloud services +====== +IBM brings cloud consumption model to the mainframe, adds Docker container extensions +![Thinkstock][1] + +IBM continues to adopt new tools and practices for its mainframe customers to keep the Big Iron relevant in a cloud world. + +First of all, the company switched-up its 20-year mainframe software pricing scheme to make it more palatable to hybrid and multicloud users who might be thinking of moving workloads off the mainframe and into the cloud. + +**[ Check out[What is hybrid cloud computing][2] and learn [what you need to know about multi-cloud][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]** + +Specifically IBM rolled out Tailored Fit Pricing for the IBM Z mainframe which offers two consumption-based pricing models that can help customers cope with ever-changing workload – and hence software – costs. + +Tailored Fit Pricing removes the need for complex and restrictive capping, which typically weakens responsiveness and can impact service level availability, IBM said. IBM’s standard monthly mainframe licensing model calculates costs as a “rolling four-hour average” (R4HA) which would determine cost based on a customer’s peak usage during the month. Customers would many time cap usage to keep costs down, experts said + +Systems can now be configured to support optimal response times and service level agreements, rather than artificially slowing down workloads to manage software licensing costs, IBM stated. + +Predicting demand for IT services can be a major challenge and in the era of hybrid and multicloud, everything is connected and workload patterns constantly change, wrote IBM’s Ross Mauri, General Manager, IBM Z in a [blog][5] about the new pricing and services. “In this environment, managing demand for IT services can be a major challenge. As more customers shift to an enterprise IT model that incorporates on-premises, private cloud and public we’ve developed a simple cloud pricing model to drive the transformation forward.” + +[Tailored Fit Pricing][6] for IBM Z comes in two flavors, the Enterprise Consumption Solution and the Enterprise Capacity Solution. + +IBM said the Enterprise Consumption model is a tailored usage-based pricing model, where customers pay only for what they use, removing the need for complex and restrictive capping, IBM said. + +The Enterprise Capacity model lets customers mix and match workloads to help maximize use of the full capacity of the platform. Charges are referenced to the overall size of the physical environment and are calculated based on the estimated mix of workloads running, while providing the flexibility to vary actual usage across workloads, IBM said. + +The software pricing changes should be a welcome benefit to customers, experts said. 
+ +“By making access to Z mainframes more flexible and ‘cloud-like,’ IBM is making it less likely that customers will consider shifting Z workloads to other systems and environments. As cloud providers become increasingly able to support mission critical applications, that’s a big deal,” wrote Charles King, president and principal analyst for Pund-IT in a [blog][7] about the IBM changes. + +“A notable point about both models is that discounted growth pricing is offered on all workloads – whether they be 40-year old Assembler programs or 4-day old JavaScript apps. This is in contrast to previous models which primarily rewarded only brand-new applications with growth pricing. By thinking outside the Big Iron box, the company has substantially eased the pain for its largest clients’ biggest mainframe-related headaches,” King wrote. + +IBM’s Tailored Fit Pricing supports an increasing number of enterprises that want to continue to grow and build new services on top of this mission-critical platform, wrote [John McKenny][8] vice president of strategy for ZSolutions Optimization at BMC Software. “In not yet released results from the 2019 BMC State of the Mainframe Survey, 62% of the survey respondents reported that they are planning to expand MIPS/MSU consumption and are growing their mainframe workloads. For customers with no current plans for growth, the affordability and cost-competitiveness of the new pricing model will re-ignite interest in also using this platform as an integral part of their hybrid cloud strategies.” + +In addition to the pricing, IBM announced some new services that bring the mainframe closer to cloud workloads. + +First, IBM rolled out z/OS Container Extensions (zCX), which makes it possible to run Linux on Z applications that are packaged as Docker Container images on z/OS. Application developers can develop and data centers can operate popular open source packages, Linux applications, IBM software, and third-party software together with z/OS applications and data, IBM said. zCX will let customers use the latest open source tools, popular NoSQL databases, analytics frameworks, application servers, and so on within the z/OS environment. + +“With z/OS Container Extensions, customers will be able to access the most recent development tools and processes available in Linux on the Z ecosystem, giving developers the flexibility to build new, cloud-native containerized apps and deploy them on z/OS without requiring Linux or a Linux partition,” IBM’s Mauri stated. + +Big Blue also rolled out z/OS Cloud Broker which will let customers access and deploy z/OS resources and services on [IBM Cloud Private][9]. [IBM Cloud Private][10] is the company’s Kubernetes-based Platform as a Service (PaaS) environment for developing and managing containerized applications. IBM said z/OS Cloud Broker is designed to help cloud application developers more easily provision and deprovision apps in z/OS environments. + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3395776/ibm-overhauls-mainframe-software-pricing-adds-hybrid-private-cloud-services.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.techhive.com/images/article/2015/08/thinkstockphotos-520137237-100610459-large.jpg +[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html +[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html +[4]: https://www.networkworld.com/newsletters/signup.html +[5]: https://www.ibm.com/blogs/systems/ibm-z-defines-the-future-of-hybrid-cloud/ +[6]: https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS219-014&appname=USN +[7]: https://www.pund-it.com/blog/ibm-reinvents-the-z-mainframe-again/ +[8]: https://www.bmc.com/blogs/bmc-supports-ibm-tailored-fit-pricing-ibm-z/ +[9]: https://www.ibm.com/marketplace/cloud-private-on-z-and-linuxone +[10]: https://www.networkworld.com/article/3340043/ibm-marries-on-premises-private-and-public-cloud-data.html +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world From e6678516cdc50b4c2aac712d0e2ea51481e89ab5 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:21:32 +0800 Subject: [PATCH 16/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190515=20Extreme?= =?UTF-8?q?=20addresses=20networked-IoT=20security=20sources/talk/20190515?= =?UTF-8?q?=20Extreme=20addresses=20networked-IoT=20security.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...xtreme addresses networked-IoT security.md | 71 +++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 sources/talk/20190515 Extreme addresses networked-IoT security.md diff --git a/sources/talk/20190515 Extreme addresses networked-IoT security.md b/sources/talk/20190515 Extreme addresses networked-IoT security.md new file mode 100644 index 0000000000..1ad756eded --- /dev/null +++ b/sources/talk/20190515 Extreme addresses networked-IoT security.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Extreme addresses networked-IoT security) +[#]: via: (https://www.networkworld.com/article/3395539/extreme-addresses-networked-iot-security.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Extreme addresses networked-IoT security +====== +The ExtremeAI security app features machine learning that can understand typical behavior of IoT devices and alert when it finds anomalies. +![Getty Images][1] + +[Extreme Networks][2] has taken the wraps off a new security application it says will use machine learning and artificial intelligence to help customers effectively monitor, detect and automatically remediate security issues with networked IoT devices. + +The application – ExtremeAI security—features machine-learning technology that can understand typical behavior of IoT devices and automatically trigger alerts when endpoints act in unusual or unexpected ways, Extreme said. 
+ +**More about edge networking** + + * [How edge networking and IoT will reshape data centers][3] + * [Edge computing best practices][4] + * [How edge computing can help secure the IoT][5] + + + +Extreme said that the ExtremeAI Security application can tie into all leading threat intelligence feeds, and had close integration with its existing [Extreme Workflow Composer][6] to enable automatic threat mitigation and remediation. + +The application integrates the company’s ExtremeAnalytics application which lets customers view threats by severity, category, high-risk endpoints and geography. An automated ticketing feature integrates with variety of popular IT tools such as Slack, Jira, and ServiceNow, and the application interoperates with many popular security tools, including existing network taps, the vendor stated. + +There has been an explosion of new endpoints ranging from million-dollar smart MRI machines to five-dollar sensors, which creates a complex and difficult job for network and security administrators, said Abby Strong, vice president of product marketing for Extreme. “We need smarter, secure and more self-healing networks especially where IT cybersecurity resources are stretched to the limit.” + +Extreme is trying to address an issue that is important to enterprise-networking customers: how to get actionable, usable insights as close to real-time as possible, said Rohit Mehra, Vice President of Network Infrastructure at IDC. “Extreme is melding automation, analytics and security that can look at network traffic patterns and allow the system to take action when needed.” + +The ExtremeAI application, which will be available in October, is but one layer of IoT security Extreme offers. Already on the market, its [Defender for IoT][7] package, which includes a Defender application and adapter, lets customers monitor, set policies and isolate IoT devices across an enterprise. + +**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]** + +The Extreme AI and Defender packages are now part of what the company calls Extreme Elements, which is a menu of its new and existing Smart OmniEdge, Automated Campus and Agile Data Center software, hardware and services that customers can order to build a manageable, secure system. + +Aside from the applications, the Elements include Extreme Management Center, the company’s network management software; the company’s x86-based intelligent appliances, including the ExtremeCloud Appliance; and [ExtremeSwitching X465 premium][9], a stackable multi-rate gigabit Ethernet switch. + +The switch and applications are just the beginning of a very busy time for Extreme. In its [3Q earnings cal][10]l this month company CEO Ed Meyercord noted Extreme was in the “early stages of refreshing 70 percent of our products” and seven different products will become generally available this quarter – a record for Extreme, he said. + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3395539/extreme-addresses-networked-iot-security.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/iot_security_tablet_conference_digital-100787102-large.jpg +[2]: https://www.networkworld.com/article/3289508/extreme-facing-challenges-girds-for-future-networking-battles.html +[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[6]: https://www.extremenetworks.com/product/workflow-composer/ +[7]: https://www.extremenetworks.com/product/extreme-defender-for-iot/ +[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[9]: https://community.extremenetworks.com/extremeswitching-exos-223284/extremexos-30-2-and-smart-omniedge-premium-x465-switches-are-now-available-7823377 +[10]: https://seekingalpha.com/news/3457137-extreme-networks-minus-15-percent-quarterly-miss-light-guidance +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world From b855033e8c1aa53187aa753efed44cd3c5794084 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:22:48 +0800 Subject: [PATCH 17/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190514=20Las=20Ve?= =?UTF-8?q?gas=20targets=20transport,=20public=20safety=20with=20IoT=20dep?= =?UTF-8?q?loyments=20sources/talk/20190514=20Las=20Vegas=20targets=20tran?= =?UTF-8?q?sport,=20public=20safety=20with=20IoT=20deployments.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ort, public safety with IoT deployments.md | 65 +++++++++++++++++++ 1 file changed, 65 insertions(+) create mode 100644 sources/talk/20190514 Las Vegas targets transport, public safety with IoT deployments.md diff --git a/sources/talk/20190514 Las Vegas targets transport, public safety with IoT deployments.md b/sources/talk/20190514 Las Vegas targets transport, public safety with IoT deployments.md new file mode 100644 index 0000000000..84a563c8bc --- /dev/null +++ b/sources/talk/20190514 Las Vegas targets transport, public safety with IoT deployments.md @@ -0,0 +1,65 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Las Vegas targets transport, public safety with IoT deployments) +[#]: via: (https://www.networkworld.com/article/3395536/las-vegas-targets-transport-public-safety-with-iot-deployments.html) +[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/) + +Las Vegas targets transport, public safety with IoT deployments +====== + +![Franck V. 
\(CC0\)][1]
+
+The city of Las Vegas’ pilot program with NTT and Dell, designed to crack down on wrong-way driving on municipal roads, is just part of the big plans that Sin City has for leveraging IoT tech in the future, according to the city's director of technology, Michael Sherwood, who sat down with Network World at the IoT World conference in Silicon Valley this week.
+
+The system uses smart cameras and does most of its processing at the edge, according to Sherwood. The only information that gets sent back to the city’s private cloud is metadata – aggregated information about overall patterns, for decision-making and targeting purposes, not data about individual traffic incidents and wrong-way drivers.
+
+**[ Also see [What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3].]**
+
+It’s an important public safety consideration, he said, but it’s a small part of the larger IoT-enabled framework that the city envisions for the future.
+
+“Our goal is to make our data open to the public, not only for transparency purposes, but to help spur development and create new applications to make Vegas a better place to live,” said Sherwood.
+
+[The city’s public data repository][4] already boasts a range of relevant data, some IoT-generated, some not. And efforts to make that data store more open have already begun to bear fruit, according to Sherwood. For example, one hackathon about a year ago resulted in an Alexa app that tells users how many traffic lights are out, by tracking energy usage data via the city’s portal, among other applications.
+
+As with IoT in general, Sherwood said that the city’s efforts have been bolstered by an influx of operational talent. Rather than hiring additional IT staff to run the new systems, they’ve brought in experts from the traffic department to help get the most out of the framework.
+
+Another idea for leveraging the city’s traffic data involves tracking the status of the curb. Given the rise of Uber and Lyft and other on-demand transportation services, linking a piece of camera-generated information like "rideshares are parked along both sides of this street" directly into a navigation app could help truck drivers avoid gridlock.
+
+“We’re really looking to make the roads a living source of information,” Sherwood said.
+
+**Safer parks**
+
+Las Vegas is also pursuing related public safety initiatives. One pilot project aims to make public parks safer by installing infrared cameras so authorities can tell whether people are in parks after hours without incurring undue privacy concerns, given that facial recognition is very tricky in infrared.
+
+It’s the test-and-see method of IoT development, according to Sherwood.
+
+“That’s a way of starting with an IoT project: start with one park. The cost to do something like this is not astronomical, and it allows you to gauge some other information from it,” he said.
+
+The city has also worked to keep the costs of these projects low or even show a return on investment, Sherwood added. Workforce development programs could train municipal workers to do simple maintenance on smart cameras in parks or along roadways, and the economic gains made from the successful use of the systems ought to outweigh deployment and operational outlay.
+
+“If it’s doing its job, those efficiencies should cover the system’s cost,” he said.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3395536/las-vegas-targets-transport-public-safety-with-iot-deployments.html + +作者:[Jon Gold][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Jon-Gold/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/07/pedestrian-walk-sign_go_start_begin_traffic-light_by-franck-v-unsplaash-100765089-large.jpg +[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[4]: https://opendata.lasvegasnevada.gov/ +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world From d2d3754d49c7fb54db301140e6bb162f517a5abe Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:23:13 +0800 Subject: [PATCH 18/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190514=20Brillio?= =?UTF-8?q?=20and=20Blue=20Planet=20Partner=20to=20Bring=20Network=20Autom?= =?UTF-8?q?ation=20to=20the=20Enterprise=20sources/talk/20190514=20Brillio?= =?UTF-8?q?=20and=20Blue=20Planet=20Partner=20to=20Bring=20Network=20Autom?= =?UTF-8?q?ation=20to=20the=20Enterprise.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ng Network Automation to the Enterprise.md | 56 +++++++++++++++++++ 1 file changed, 56 insertions(+) create mode 100644 sources/talk/20190514 Brillio and Blue Planet Partner to Bring Network Automation to the Enterprise.md diff --git a/sources/talk/20190514 Brillio and Blue Planet Partner to Bring Network Automation to the Enterprise.md b/sources/talk/20190514 Brillio and Blue Planet Partner to Bring Network Automation to the Enterprise.md new file mode 100644 index 0000000000..e821405199 --- /dev/null +++ b/sources/talk/20190514 Brillio and Blue Planet Partner to Bring Network Automation to the Enterprise.md @@ -0,0 +1,56 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Brillio and Blue Planet Partner to Bring Network Automation to the Enterprise) +[#]: via: (https://www.networkworld.com/article/3394687/brillio-and-blue-planet-partner-to-bring-network-automation-to-the-enterprise.html) +[#]: author: (Rick Hamilton, Senior Vice President, Blue Planet Software ) + +Brillio and Blue Planet Partner to Bring Network Automation to the Enterprise +====== +Rick Hamilton, senior vice president of Blue Planet, a division of Ciena, explains how partnering with Brillio brings the next generation of network capabilities to enterprises—just when they need it most. +![Kritchanut][1] + +![][2] + +_Rick Hamilton, senior vice president of Blue Planet, a division of Ciena, explains how partnering with Brillio brings the next generation of network capabilities to enterprises—just when they need it most._ + +In February 2019, we announced that Blue Planet was evolving into a more independent division, helping us increase our focus on innovative intelligent automation solutions that help our enterprise and service provider customers accelerate and achieve their business transformation goals. 
+ +Today we’re excited to make another leap forward in delivering these benefits to enterprises of all types via our partnership with digital transformation services and solutions leader Brillio. Together, we are co-creating intelligent cloud and network management solutions that increase service visibility and improve service assurance by effectively leveraging the convergence of cloud, IoT, and AI. + +**Accelerating digital transformation in the enterprise** + +Enterprises continue to look toward cloud services to create new and incremental revenue streams based on innovative solution offerings and on-demand product/solution delivery models, and to optimize their infrastructure investments. In fact, Gartner predicts that enterprise IT spending for cloud-based offerings will continue to grow faster than non-cloud IT offerings, making up 28% of spending by 2022, up from 19% in 2018. + +As enterprises adopt cloud, they realize there are many challenges associated with traditional approaches to operating and managing complex and hybrid multi-cloud environments. Our partnership with Brillio enables us to help these organizations across industries such as manufacturing, logistics, retail, and financial services meet their technical and business needs with high-impact solutions that improve customer experiences, drive operational efficiencies, and improve quality of service. + +This is achieved by combining the Blue Planet intelligent automation platform and the Brillio CLIP™services delivery excellence platform and user-centered design (UCD) lead solution framework. Together, we offer end-to-end visibility of application and infrastructure assets in a hybrid multi-cloud environment and provide service assurance and self-healing capabilities that improve network and service availability. + +**Partnering on research and development** + +Brillio will also partner with Blue Planet on longer-term R&D efforts. As one of a preferred product engineering services providers, Brillio will work closely with our engineering team to develop and deliver network intelligence and automation solutions to help enterprises build dynamic, programmable infrastructure that leverage analytics and automation to realize the Adaptive Network vision. + +Of course, a partnership like this is a two-way street, and we consider Brillio’s choice to work with us to be a testament to our expertise, vision, and execution. In the words of Brillio Chairman and CEO Raj Mamodia, “Blue Planet’s experience in end-to-end service orchestration coupled with Brillio’s expertise in cloudification, user-centered enterprise solutions design, and rapid software development delivers distinct advantages to the industry. Through integration of technologies like cloud, IoT, and AI into our combined solutions, our partnership spurs greater innovation and helps us address the large and growing enterprise networking automation market.” + +Co-creating intelligent hybrid cloud and network management solutions with Brillio is key to advancing enterprise digital transformation initiatives. Partnering with Brillio helps us address the plethora of challenges facing enterprises today on their digital journey. Our partnership enables Blue Planet to achieve faster time-to-market and greater efficiency in developing new solutions to enable enterprises to continue to thrive and grow. 
+ +[Learn more about Blue Planet here][3] + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3394687/brillio-and-blue-planet-partner-to-bring-network-automation-to-the-enterprise.html + +作者:[Rick Hamilton, Senior Vice President, Blue Planet Software][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/istock-952625346-100796314-large.jpg +[2]: https://images.idgesg.net/images/article/2019/05/rick-100796315-small.jpg +[3]: https://www.blueplanet.com/?utm_campaign=X1058319&utm_source=NWW&utm_term=BPWeb_Brillio&utm_medium=sponsoredpost3Q19 From dc706e33bf99540607430dd42c7302e400c8590c Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:23:23 +0800 Subject: [PATCH 19/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190514=20Mobility?= =?UTF-8?q?=20and=20SD-WAN,=20Part=201:=20SD-WAN=20with=204G=20LTE=20is=20?= =?UTF-8?q?a=20Reality=20sources/talk/20190514=20Mobility=20and=20SD-WAN,?= =?UTF-8?q?=20Part=201-=20SD-WAN=20with=204G=20LTE=20is=20a=20Reality.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Part 1- SD-WAN with 4G LTE is a Reality.md | 64 +++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 sources/talk/20190514 Mobility and SD-WAN, Part 1- SD-WAN with 4G LTE is a Reality.md diff --git a/sources/talk/20190514 Mobility and SD-WAN, Part 1- SD-WAN with 4G LTE is a Reality.md b/sources/talk/20190514 Mobility and SD-WAN, Part 1- SD-WAN with 4G LTE is a Reality.md new file mode 100644 index 0000000000..1ecd68fa41 --- /dev/null +++ b/sources/talk/20190514 Mobility and SD-WAN, Part 1- SD-WAN with 4G LTE is a Reality.md @@ -0,0 +1,64 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mobility and SD-WAN, Part 1: SD-WAN with 4G LTE is a Reality) +[#]: via: (https://www.networkworld.com/article/3394866/mobility-and-sd-wan-part-1-sd-wan-with-4g-lte-is-a-reality.html) +[#]: author: (Francisca Segovia ) + +Mobility and SD-WAN, Part 1: SD-WAN with 4G LTE is a Reality +====== + +![istock][1] + +Without a doubt, 5G — the fifth generation of mobile wireless technology — is the hottest topic in wireless circles today. You can’t throw a stone without hitting 5G news. While telecommunications providers are in a heated competition to roll out 5G, it’s important to reflect on current 4G LTE (Long Term Evolution) business solutions as a preview of what we have learned and what’s possible. + +This is part one of a two-part blog series that will explore the [SD-WAN][2] journey through the evolution of these wireless technologies. + +### **Mobile SD-WAN is a reality** + +4G LTE commercialization continues to expand. According to [the GSM (Groupe Spéciale Mobile) Association][3], 710 operators have rolled out 4G LTE in 217 countries, reaching 83 percent of the world’s population. The evolution of 4G is transforming the mobile industry and is setting the stage for the advent of 5G. + +Mobile connectivity is increasingly integrated with SD-WAN, along with MPLS and broadband WAN services today. 
4G LTE represents a very attractive transport alternative, as a backup or even an active member of the WAN transport mix to connect users to critical business applications. And in some cases, 4G LTE might be the only choice in locations where fixed lines aren’t available or reachable. Furthermore, an SD-WAN can optimize 4G LTE connectivity and bring new levels of performance and availability to mobile-based business use cases by selecting the best path available across several 4G LTE connections. + +### **Increasing application performance and availability with 4G LTE** + +Silver Peak has partnered with [BEC Technologies][4] to create a joint solution that enables customers to incorporate one or more low-cost 4G LTE services into any [Unity EdgeConnect™][5] SD-WAN edge platform deployment. All the capabilities of the EdgeConnect platform are supported across LTE links including packet-based link bonding, dynamic path control, path conditioning along with the optional [Unity Boost™ WAN Optimization][6] performance pack. This ensures always-consistent, always-available application performance even in the event of an outage or degraded service. + +EdgeConnect also incorporates sophisticated NAT traversal technology that eliminates the requirement for provisioning the LTE service with extra-cost static IP addresses. The Silver Peak [Unity Orchestrator™][7] management software enables the prioritization of LTE bandwidth usage based on branch and application requirements – active-active or backup-only. This solution is ideal in retail point-of-sale and other deployment use cases where always-available WAN connectivity is critical for the business. + +### **Automated SD-WAN enables innovative services** + +An example of an innovative mobile SD-WAN service is [swyMed’s DOT Telemedicine Backpack][8] powered by the EdgeConnect [Ultra Small][9] hardware platform. This integrated telemedicine solution enables first responders to connect to doctors and communicate patient vital statistics and real-time video anywhere, any time, greatly improving and expediting care for emergency patients. Using a lifesaving backpack provisioned with two LTE services from different carriers, EdgeConnect continuously monitors the underlying 4G LTE services for packet loss, latency and jitter. In the case of transport failure or brownout, EdgeConnect automatically initiates a sub-second failover so that voice, video and data connections continue without interruption over the remaining active 4G service. By bonding the two LTE links together with the EdgeConnect SD-WAN, swyMed can achieve an aggregate signal quality in excess of 90 percent, bringing mobile telemedicine to areas that would have been impossible in the past due to poor signal strength. + +To learn more about SD-WAN and the unique advantages that SD-WAN provides to enterprises across all industries, visit the [SD-WAN Explained][2] page on our website. + +### **Prepare for the 5G future** + +In summary, the adoption of 4G LTE is a reality. Service providers are taking advantage of the distinct benefits of SD-WAN to offer managed SD-WAN services that leverage 4G LTE. + +As the race for the 5G gains momentum, service providers are sure to look for ways to drive new revenue streams to capitalize on their initial investments. Stay tuned for part 2 of this 2-blog series where I will discuss how SD-WAN is one of the technologies that can help service providers to transition from 4G to 5G and enable the monetization of a new wave of managed 5G services. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3394866/mobility-and-sd-wan-part-1-sd-wan-with-4g-lte-is-a-reality.html + +作者:[Francisca Segovia][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/istock-952414660-100796279-large.jpg +[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained +[3]: https://www.gsma.com/futurenetworks/resources/all-ip-statistics/ +[4]: https://www.silver-peak.com/resource-center/edgeconnect-4glte-solution-bec-technologies +[5]: https://www.silver-peak.com/products/unity-edge-connect +[6]: https://www.silver-peak.com/products/unity-boost +[7]: https://www.silver-peak.com/products/unity-orchestrator +[8]: https://www.silver-peak.com/resource-center/mobile-telemedicine-helps-save-lives-streaming-real-time-clinical-data-and-patient +[9]: https://www.silver-peak.com/resource-center/edgeconnect-us-ec-us-specification-sheet From 8dd3bb25fdd7a0cdb41b9a0f876a0a274642d894 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:24:23 +0800 Subject: [PATCH 20/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190513=20When=20t?= =?UTF-8?q?o=20be=20concerned=20about=20memory=20levels=20on=20Linux=20sou?= =?UTF-8?q?rces/tech/20190513=20When=20to=20be=20concerned=20about=20memor?= =?UTF-8?q?y=20levels=20on=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... concerned about memory levels on Linux.md | 121 ++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 sources/tech/20190513 When to be concerned about memory levels on Linux.md diff --git a/sources/tech/20190513 When to be concerned about memory levels on Linux.md b/sources/tech/20190513 When to be concerned about memory levels on Linux.md new file mode 100644 index 0000000000..3306793c9f --- /dev/null +++ b/sources/tech/20190513 When to be concerned about memory levels on Linux.md @@ -0,0 +1,121 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (When to be concerned about memory levels on Linux) +[#]: via: (https://www.networkworld.com/article/3394603/when-to-be-concerned-about-memory-levels-on-linux.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +When to be concerned about memory levels on Linux +====== +Memory management on Linux systems is complicated. Seeing high usage doesn’t necessarily mean there’s a problem. There are other things you should also consider. +![Qfamily \(CC BY 2.0\)][1] + +Running out of memory on a Linux system is generally _not_ a sign that there's a serious problem. Why? Because a healthy Linux system will cache disk activity in memory, basically gobbling memory that isn't being used, which is a very good thing. + +In other words, it doesn't allow memory to go to waste. It uses the spare memory to increase disk access speed, and it does this _without_ taking memory away from running applications. This memory caching, as you might well imagine, is hundreds of times faster than working directly with the hard-disk drives (HDD) and significantly faster than solid-state drives. 
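+
+If you'd like to see this caching in action, one simple way is to read the kernel's own accounting in /proc/meminfo. The exact numbers will, of course, vary from one system to another:
+
+```
+# Buffers and Cached show memory the kernel is borrowing for disk caching;
+# MemAvailable is the kernel's estimate of what could be handed back to
+# applications on demand.
+$ grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached)' /proc/meminfo
+```
+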
Full or near full memory normally means that a system is running as efficiently as it can — not that it's running into problems.
+
+**[ Also see:[Must-know Linux Commands][2] ]**
+
+### How caching works
+
+Disk caching simply means that a system is taking advantage of unused resources (free memory) to speed up disk reads and writes. Applications don't lose anything and most of the time can acquire more memory whenever they need it. In addition, disk caching does not cause applications to resort to using swap. Instead, memory used for disk caching is always returned immediately when needed, and the disk content is updated.
+
+### Major and minor page faults
+
+Linux systems allocate memory to processes by breaking physical memory into chunks called "pages" and then mapping those pages into process virtual memory. Pages that appear to no longer be used may be removed from memory — even if the related process is still running. When a process needs a page that is no longer mapped or no longer in memory, a fault is generated. So, "fault" does not mean "error" but instead means "unavailable," and faults play an important role in memory management.
+
+A minor fault means the page is in memory but not allocated to the requesting process or not marked as present in the memory management unit. A major fault means the page is no longer in memory.
+
+If you'd like to get a feel for how often minor and major page faults occur, try a **ps** command like this one. Note that we're asking for the fields related to page faults and the commands to be listed. Numerous lines were omitted from the output. The MINFL column displays the number of minor faults, while MAJFL shows the number of major faults.
+
+```
+$ ps -eo min_flt,maj_flt,cmd
+ MINFL MAJFL CMD
+230760 150 /usr/lib/systemd/systemd --switched-root --system --deserialize 18
+ 0 0 [kthreadd]
+ 0 0 [rcu_gp]
+ 0 0 [rcu_par_gp]
+ 0 0 [kworker/0:0H-kblockd]
+ ...
+ 166 20 gpg-agent --homedir /var/lib/fwupd/gnupg --use-standard-socket --daemon
+ 525 1 /usr/libexec/gvfsd-trash --spawner :1.16 /org/gtk/gvfs/exec_spaw/0
+ 4966 4 /usr/libexec/gnome-terminal-server
+ 3617 0 bash
+ 0 0 [kworker/1:0H-kblockd]
+ 927 0 gdm-session-worker [pam/gdm-password]
+```
+
+To report on a single process, you might try a command like this:
+
+```
+$ ps -o min_flt,maj_flt 1
+ MINFL MAJFL
+230064 150
+```
+
+You can also add other fields such as the process owner's UID and GID.
+
+```
+$ ps -o min_flt,maj_flt,cmd,args,uid,gid 1
+ MINFL MAJFL CMD COMMAND UID GID
+230064 150 /usr/lib/systemd/systemd -- /usr/lib/systemd/systemd -- 0 0
+```
+
+### How full is full?
+
+One way to get a better handle on how memory is being used is with the **free -m** command. The **-m** option reports the numbers in mebibytes (MiBs) instead of bytes.
+
+```
+$ free -m
+ total used free shared buff/cache available
+Mem: 3244 3069 35 49 140 667
+Swap: 3535 0 3535
+```
+
+Note that "free" (unused) memory can be running low while "available" (available for starting new applications) might report a larger number. The distinction between these two fields is well worth paying attention to. Available means that it can be recovered and used when needed, while free means that it's available now.
+
+### When to worry
+
+If performance on a Linux system appears to be good — applications are responsive, the command line shows no indications of a problem — chances are the system's in good shape. Keep in mind that some applications might be slowed down for a reason that doesn't affect the overall system.
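+
+One low-effort way to keep an eye on this is to sample memory and swap activity for a couple of minutes and watch the trend rather than a single snapshot. The **vmstat** command (part of the standard procps tools on most distributions) works well for this:
+
+```
+# Report memory and swap statistics every 5 seconds, 24 times (about 2 minutes).
+# The si and so columns show swap-in and swap-out activity; values that sit
+# at or near zero mean the system is not under real memory pressure.
+$ vmstat 5 24
+```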
+ +An excessive number of hard faults may indeed indicate a problem, but balance this with observed performance. + +A good rule of thumb is to worry when available memory is close to zero or when the "swap used" field grows or fluctuates noticeably. Don't worry if the "available" figure is a reasonable percentage of the total memory available as it is in the example from above repeated here: + +``` +$ free -m + total used free shared buff/cache available +Mem: 3244 3069 35 49 140 667 +Swap: 3535 0 3535 +``` + +### Linux performance is complicated + +All that aside, memory on a Linux system can fill up and performance can slow down. Just don't take one report on memory usage as an indication that your system's in trouble. + +Memory management on Linux systems is complicated because of the measures taken to ensure the best use of system resources. Don't let the initial appearance of full memory trick you into believing that your system is in trouble when it isn't. + +**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][3] ]** + +Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3394603/when-to-be-concerned-about-memory-levels-on-linux.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/full-swimming-pool-100796221-large.jpg +[2]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html +[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua +[4]: https://www.facebook.com/NetworkWorld/ +[5]: https://www.linkedin.com/company/network-world From 9d077cf2d282069cacdcacb454a8b4eb00991d78 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:24:46 +0800 Subject: [PATCH 21/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190513=20Top=20au?= =?UTF-8?q?to=20makers=20rely=20on=20cloud=20providers=20for=20IoT=20sourc?= =?UTF-8?q?es/talk/20190513=20Top=20auto=20makers=20rely=20on=20cloud=20pr?= =?UTF-8?q?oviders=20for=20IoT.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
makers rely on cloud providers for IoT.md | 53 +++++++++++++++++++ 1 file changed, 53 insertions(+) create mode 100644 sources/talk/20190513 Top auto makers rely on cloud providers for IoT.md diff --git a/sources/talk/20190513 Top auto makers rely on cloud providers for IoT.md b/sources/talk/20190513 Top auto makers rely on cloud providers for IoT.md new file mode 100644 index 0000000000..5adf5f65a7 --- /dev/null +++ b/sources/talk/20190513 Top auto makers rely on cloud providers for IoT.md @@ -0,0 +1,53 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top auto makers rely on cloud providers for IoT) +[#]: via: (https://www.networkworld.com/article/3395137/top-auto-makers-rely-on-cloud-providers-for-iot.html) +[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/) + +Top auto makers rely on cloud providers for IoT +====== + +For the companies looking to implement the biggest and most complex [IoT][1] setups in the world, the idea of pairing up with [AWS][2], [Google Cloud][3] or [Azure][4] seems to be one whose time has come. Within the last two months, BMW and Volkswagen have both announced large-scale deals with Microsoft and Amazon, respectively, to help operate their extensive network of operational technology. + +According to Alfonso Velosa, vice president and analyst at Gartner, part of the impetus behind those two deals is that the automotive sector fits in very well with the architecture of the public cloud. Public clouds are great at collecting and processing data from a diverse array of different sources, whether they’re in-vehicle sensors, dealerships, mechanics, production lines or anything else. + +**[ RELATED:[What hybrid cloud means in practice][5]. | Get regularly scheduled insights by [signing up for Network World newsletters][6]. ]** + +“What they’re trying to do is create a broader ecosystem. They think they can leverage the capabilities from these folks,” Velosa said. + +### Cloud providers as IoT partners + +The idea is automated analytics for service and reliability data, manufacturing and a host of other operational functions. And while the full realization of that type of service is still very much a work in progress, it has clear-cut advantages for big companies – a skilled partner handling tricky implementation work, built-in capability for sophisticated analytics and security, and, of course, the ability to scale up in a big way. + +Hence, the structure of the biggest public clouds has upside for many large-scale IoT deployments, not just the ones taking place in the auto industry. The cloud giants have vast infrastructures, with multiple points of presence all over the world. 
+ +To continue reading this article register now + +[Get Free Access][7] + +[Learn More][8] Existing Users [Sign In][7] + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3395137/top-auto-makers-rely-on-cloud-providers-for-iot.html + +作者:[Jon Gold][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Jon-Gold/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html +[2]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html +[3]: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html +[4]: https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html +[5]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice +[6]: https://www.networkworld.com/newsletters/signup.html +[7]: javascript:// +[8]: /learn-about-insider/ From 7960b918ea62ccbe292560c117ffd048e1268957 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 13:24:58 +0800 Subject: [PATCH 22/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190513=20HPE?= =?UTF-8?q?=E2=80=99s=20CEO=20lays=20out=20his=20technology=20vision=20sou?= =?UTF-8?q?rces/talk/20190513=20HPE-s=20CEO=20lays=20out=20his=20technolog?= =?UTF-8?q?y=20vision.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...PE-s CEO lays out his technology vision.md | 162 ++++++++++++++++++ 1 file changed, 162 insertions(+) create mode 100644 sources/talk/20190513 HPE-s CEO lays out his technology vision.md diff --git a/sources/talk/20190513 HPE-s CEO lays out his technology vision.md b/sources/talk/20190513 HPE-s CEO lays out his technology vision.md new file mode 100644 index 0000000000..c9a8de9c8a --- /dev/null +++ b/sources/talk/20190513 HPE-s CEO lays out his technology vision.md @@ -0,0 +1,162 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (HPE’s CEO lays out his technology vision) +[#]: via: (https://www.networkworld.com/article/3394879/hpe-s-ceo-lays-out-his-technology-vision.html) +[#]: author: (Eric Knorr ) + +HPE’s CEO lays out his technology vision +====== +In an exclusive interview, HPE CEO Antonio Neri unpacks his portfolio of technology initiatives, from edge computing to tomorrow’s memory-driven architecture. +![HPE][1] + +Like Microsoft's Satya Nadella, HPE CEO Antonio Neri is a technologist with a long history of leading initiatives in his company. Meg Whitman, his former boss at HPE, showed her appreciation of Neri’s acumen by promoting him to HPE Executive Vice President in 2015 – and gave him the green light to acquire [Aruba][2], [SimpliVity][3], [Nimble Storage][4], and [Plexxi][5], all of which added key items to HPE’s portfolio. + +Neri succeeded Whitman as CEO just 16 months ago. In a recent interview with Network World, Neri’s engineering background was on full display as he explained HPE’s technology roadmap. 
First and foremost, he sees a huge opportunity in [edge computing][6], into which HPE is investing $4 billion over four years to further develop edge “connectivity, security, and obviously cloud and analytics.” + +**More about edge networking** + + * [How edge networking and IoT will reshape data centers][7] + * [Edge computing best practices][8] + * [How edge computing can help secure the IoT][9] + + + +Although his company abandoned its public cloud efforts in 2015, Neri is also bullish on the self-service “cloud experience,” which he asserts HPE is already implementing on-prem today in a software-defined, consumption-driven model. More fundamentally, he believes we are on the brink of a memory-driven computing revolution, where storage and memory become one and, depending on the use case, various compute engines are brought to bear on zettabytes of data. + +This interview, conducted by Network World Editor-in-Chief Eric Knorr and edited for length and clarity, digs into Neri’s technology vision. [A companion interview on CIO][10] centers on Neri’s views of innovation, management, and company culture. + +**Eric Knorr: ** Your biggest and highest profile investment so far has been in edge computing. My understanding of edge computing is that we’re really talking about mini-data centers, defined by IDC as less than 100 square feet in size. What’s the need for a $4 billion investment in that? + +**Antonio Neri:** It’s twofold. We focus first on connectivity. Think about Aruba as a platform company, a cloud-enabled company. Now we offer branch solutions and edge data center solutions that include [wireless][11], LAN, [WAN][12] connectivity and soon [5G][13]. We give you a control plane so that that connectivity experience can be seen consistently the same way. All the policy management, the provisioning and the security aspects of it. + +**Knorr:** Is 5G a big focus? + +**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][14] ]** + +**Neri:** It’s a big focus for us. What customers are telling us is that it’s hard to get 5G inside the building. How you do hand off between 5G and Wi-Fi and give them the same experience? Because the problem is that we have LAN, wireless, and WAN already fully integrated into the control plane, but 5G sits over here. If you are an enterprise, you have to manage these two pipes independently. + +With the new spectrum, though, they are kind of comingling anyway. [Customers ask] why don’t you give me [a unified] experience on top of that, with all this policy management and cloud-enablement, so I can provision the right connectivity for the right use case? A sensor can use a lower radio access or [Bluetooth][15] or other type of connectivity because you don’t need persistent connectivity and you don’t have the power to do it. + +In some cases, you just put a SIM on it, and you have 5G, but in another one it’s just wireless connectivity. Wi-Fi connectivity is significantly lower cost than 5G. The use cases will dictate what type of connectivity you need, but the reality is they all want one experience. And we can do that because we have a great platform and a great partnership with MSPs, telcos, and providers. + +**Knorr:** So it sounds like much of your investment is going into that integration. + +**Neri:** The other part is how we provide the ability to provision the right cloud computing at the edge for the right use cases. Think about, for example, a manufacturing floor. 
We can converge the OT and IT worlds through a converged infrastructure aspect that digitizes the analog process into a digital process. We bring the cloud compute in there, which is fully virtualized and containerized, we integrate Wi-Fi connectivity or LAN connectivity, and we eliminate all these analog processes that are multi-failure touchpoints because you have multiple things that have to come together. + +That’s a great example of a cloud at the edge. And maybe that small cloud is connected to a big cloud which could be in the large data center, which the customer owns – or it can be one of the largest public cloud providers. + +**Knorr:** It’s difficult to talk about the software-defined data center and private cloud without talking about [VMware][16]. Where do your software-defined solutions leave off and where does VMware begin? + +**Neri:** Where we stop is everything below the hypervisor, including the software-defined storage and things like SimpliVity. That has been the advantage we’ve had with [HPE OneView][17], so we can provision and manage the infrastructure-life-cycle and software-defined aspects at the infrastructure level. And let’s not forget security, because we’ve integrated [silicon root of trust][18] into our systems, which is a good advantage for us in the government space. + +Then above that we continue to develop capabilities. Customers want choice. That’s why [the partnership with Nutanix][19] was important. We offer an alternative to vSphere and vCloud Foundation with Nutanix Prism and Acropolis. + +**Knorr:** VMware has become the default for the private cloud, though. + +**Neri:** Obviously, VMware owns 60 percent of the on-prem virtualized environment, but more and more, containers are becoming the way to go in a cloud-native approach. For us, we own the full container stack, because we base our solution on Kubernetes. We deploy that. That’s why the partnership with Nutanix is important. With Nutanix, we offer KVM and the Prism stack and then we’re fully integrated with HPE OneView for the rest of the infrastructure. + +**Knorr:** You also offer GKE [Google [Kubernetes][20] Engine] on-prem. + +**Neri:** Correct. We’re working with Google on the next version of that. + +**Knorr:** How long do you think it will be before you start seeing Kubernetes and containers on bare metal? + +**Neri:** It’s an interesting question. Many customers tell us it’s like going back to the future. It’s like we’re paying this tax on the virtualization layer. + +**Knorr:** Exactly. + +**Neri:** I can go bare metal and containers and be way more efficient. It is a little bit back to the future. But it’s a different future. + +**Knorr:** And it makes the promise of [hybrid cloud][21] a little more real. I know HPE has been very bullish on hybrid. + +**Neri:** We have been the one to say the world would be hybrid. + +**Knorr:** But today, how hybrid is hybrid really? I mean, you have workloads in the public cloud, you have workloads in a [private cloud][22]. Can you really rope it all together into hybrid? + +**Neri:** I think you have to have portability eventually. + +**Knorr:** Eventually. It’s not really true now, though. + +**Neri:** No, not true now. If you look at it from the software brokering perspective that makes hybrid very small. We know this eventually has to be all connected, but it’s not there yet. More and more of these workloads have to go back and forth. + +If you ask me what the CIO role of the future will look like, it would be a service provider. 
I wake up in the morning, have a screen that says – oh, you know what? Today it’s cheaper to run that app here. I just slice it there and then it just moves. Whatever attributes on the data I want to manage and so forth – oh, today I have capacity here and by the way, why are you not using it? Slide it back here. That’s the hybrid world. + +Many people, when they started with the cloud, thought, “I’ll just virtualize everything,” but that’s not the cloud. You’re [virtualizing][23], but you have to make it self-service. Obviously, cloud-native applications have developed that are different today. That’s why containers are definitely a much more efficient way, and that’s why I agree that the bare-metal piece of this is coming back. + +**Knorr:** Do you worry about public cloud incursions into the [data center][24]? + +**Neri:** It’s happening. Of course I’m worried. But what at least gives me comfort is twofold. One is that the customer wants choice. They don’t want to be locked in. Service is important. It’s one thing to say: Here’s the system. The other is: Who’s going to maintain it for me? Who is going to run it for me? And even though you have all the automation tools in the world, somebody has to watch this thing. Our job is to bring the public-cloud experience on prem, so that the customer has that choice. + +**Knorr:** Part of that is economics. + +**Neri:** When you look at economics it’s no longer just the cost of compute anymore. What we see more and more is the cost of the data bandwidth back and forth. That’s why the first question a customer asks is: Where should I put my data? And that dictates a lot of things, because today the data transfer bill is way higher than the cost of renting a VM. + +The other thing is that when you go on the public cloud you can spin up a VM, but the problem is if you don’t shut it off, the bill keeps going. We brought, in the context of [composability][25], the ability to shut it off automatically. That’s why composability is important, because we can run, first of all, multi-workloads in the same infrastructure – whether it’s bare metal, virtualized or containerized. It’s called composable because the software layers of intelligence compose the right solutions from compute, storage, fabric and memory to that workload. When it doesn’t need it, it gives it back. + +**Knorr:** Is there any opportunity left at the hardware level to innovate? + +**Neri:** That’s why we think about memory-driven computing. Today we have a very CPU-centric approach. This is a limiting factor, and the reality is, if you believe data is the core of the architecture going forward, then the CPU can’t be the core of the architecture anymore. + +You have a bunch of inefficiency by moving data back and forth across the system, which also creates energy waste and so forth. What we are doing is basically rearchitecting this for once in 70 years. We take memory and storage and collapse the two into one, so this becomes one central pool, which is nonvolatile and becomes the core. And then we bring the right computing capability to the data. + +In an AI use case, you don’t move the data. You bring accelerators or GPUs to the data. For general purpose, you may use an X86, and maybe in video transcoding, you use an ARM-based architecture. The magic is this: You can do this on zettabytes of data and the benefit is there is no waste, very little power to keep it alive, and it’s persistent. + +We call this the Generation Z fabric, which is based on a data fabric and silicon photonics. 
Now we go from copper, which is generating a lot of waste and a lot of heat and energy, to silicon photonics. So we not only scale this to zettabytes, we can do massive amounts of computation by bringing the right compute at the speed that’s needed to the data – and we solve a cost and scale problem too, because copper today costs a significant amount of money, and gold-plated connectors are hundreds of dollars. + +We’re going to actually implement this capability in silicon photonics in our current architectures by the end of the year. In Synergy, for example, which is a composable blade system, at the back of the rack you can swap from Ethernet to silicon photonics. It was designed that way. We already prototyped this in a simple 2U chassis with 160 TB of memory and 2000 cores. We were able to process a billion-record database with 55 million combinations of algorithms in less than a minute. + +**Knorr:** So you’re not just focusing on the edge, but the core, too. + +**Neri:** As you go down from the cloud to the edge, that architecture actually scales to the smallest things. You can do it on a massive scale or you can do it on a small scale. We will deploy these technologies in our systems architectures now. Once the whole ecosystem is developed, because we also need an ISV ecosystem that can code applications in this new world or you’re not taking advantage of it. Also, the current Linux kernel can only handle so much memory, so you have to rewrite the kernel. We are working with two universities to do that. + +The hardware will continue to evolve and develop, but there still is a lot of innovation that has to happen. What’s holding us back, honestly, is the software. + +**Knorr:** And that’s where a lot of your investment is going? + +**Neri:** Correct. Exactly right. Systems software, not application software. It’s the system software that makes this infrastructure solution-oriented, workload-optimized, autonomous and efficient. + +Join the Network World communities on [Facebook][26] and [LinkedIn][27] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3394879/hpe-s-ceo-lays-out-his-technology-vision.html + +作者:[Eric Knorr][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/antonio-neri_hpe_new-100796112-large.jpg +[2]: https://www.networkworld.com/article/2891130/aruba-networks-is-different-than-hps-failed-wireless-acquisitions.html +[3]: https://www.networkworld.com/article/3158784/hpe-buying-simplivity-for-650-million-to-boost-hyperconvergence.html +[4]: https://www.networkworld.com/article/3177376/hpe-to-pay-1-billion-for-nimble-storage-after-cutting-emc-ties.html +[5]: https://www.networkworld.com/article/3273113/hpe-snaps-up-hyperconverged-network-hcn-vendor-plexxi.html +[6]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html +[7]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[8]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[9]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[10]: https://www.cio.com/article/3394598/hpe-ceo-antonio-neri-rearchitects-for-the-future.html +[11]: https://www.networkworld.com/article/3238664/80211-wi-fi-standards-and-speeds-explained.html +[12]: https://www.networkworld.com/article/3248989/what-is-a-wide-area-network-a-definition-examples-and-where-wans-are-headed.html +[13]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html +[14]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11 +[15]: https://www.networkworld.com/article/3235124/internet-of-things-definitions-a-handy-guide-to-essential-iot-terms.html +[16]: https://www.networkworld.com/article/3340259/vmware-s-transformation-takes-hold.html +[17]: https://www.networkworld.com/article/2174203/hp-expands-oneview-into-vmware-environs.html +[18]: https://www.networkworld.com/article/3199826/hpe-highlights-innovation-in-software-defined-it-security-at-discover.html +[19]: https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html +[20]: https://www.infoworld.com/article/3268073/what-is-kubernetes-container-orchestration-explained.html +[21]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html +[22]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html +[23]: https://www.networkworld.com/article/3285906/whats-the-future-of-server-virtualization.html +[24]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html +[25]: https://www.networkworld.com/article/3266106/what-is-composable-infrastructure.html +[26]: https://www.facebook.com/NetworkWorld/ +[27]: https://www.linkedin.com/company/network-world From 8a452c05616b743fe619adf614f6ead0f6d531e7 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 20 May 2019 16:37:15 +0800 Subject: [PATCH 23/24] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190520=20Blockcha?= 
=?UTF-8?q?in=202.0=20=E2=80=93=20Explaining=20Distributed=20Computing=20A?= =?UTF-8?q?nd=20Distributed=20Applications=20[Part=2011]=20sources/tech/20?= =?UTF-8?q?190520=20Blockchain=202.0=20-=20Explaining=20Distributed=20Comp?= =?UTF-8?q?uting=20And=20Distributed=20Applications=20-Part=2011.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...g And Distributed Applications -Part 11.md | 88 +++++++++++++++++++ 1 file changed, 88 insertions(+) create mode 100644 sources/tech/20190520 Blockchain 2.0 - Explaining Distributed Computing And Distributed Applications -Part 11.md diff --git a/sources/tech/20190520 Blockchain 2.0 - Explaining Distributed Computing And Distributed Applications -Part 11.md b/sources/tech/20190520 Blockchain 2.0 - Explaining Distributed Computing And Distributed Applications -Part 11.md new file mode 100644 index 0000000000..c34effe6be --- /dev/null +++ b/sources/tech/20190520 Blockchain 2.0 - Explaining Distributed Computing And Distributed Applications -Part 11.md @@ -0,0 +1,88 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – Explaining Distributed Computing And Distributed Applications [Part 11]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – Explaining Distributed Computing And Distributed Applications [Part 11] +====== + +![Explaining Distributed Computing And Distributed Applications][1] + +### How DApps serve the purpose of [Blockchain 2.0][2] + +**Blockchain 1.0** was about introducing the “blockchain” into the list of modern buzzwords along with the advent of **bitcoin**. Multiple white papers detailing bitcoin’s underlying blockchain network specified the use of the blockchain for other uses as well. Although most of the said uses was around the basic concept of using the blockchain as a **decentralized medium** for storage, a use that stems from this property is utilizing it for carrying out **Distributed computing** on top of this layer. + +**DApps** or **Distributed Applications** are computer programs that are stored and run on a distributed storage system such as the [**Ethereum**][3] blockchain for instance. To understand how DApps function and how they’re different from traditional applications on your desktop or phone, we’ll need to delve into what distributed computing is. This post will explore some fundamental concepts of distributed computing and the role of blockchains in executing the said objective. Furthermore, well also look at a few applications or DApps, in blockchain lingo, to get a hang of things. + +### What is Distributed Computing? + +We’re assuming many readers are familiar with multi-threaded applications and multi-threading in general. Multi-threading is the reason why processor manufacturers are forever hell bent on increasing the core count on their products. Fundamentally speaking, some applications such as video rendering software suites are capable of dividing their work (in this case rendering effects and video styles) into multiple chunks and parallelly get them processed from a supporting computing system. This reduces the lead time on getting the work done and is generally more efficient in terms of time, money and energy usage. 
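+
+As a rough illustration of this "divide the work into chunks" idea (the input file name below is just a placeholder), even a small shell script can run independent pieces of a job in parallel across the available CPU cores and then wait for all of them to finish:
+
+```
+#!/bin/bash
+# Split a (hypothetical) large input file into 4 roughly equal chunks,
+# keeping lines intact, then compress each chunk as a separate
+# background job so the chunks are processed in parallel.
+split -n l/4 big-input.txt chunk_
+
+for piece in chunk_*; do
+    gzip "$piece" &    # any CPU-heavy command could go here
+done
+
+wait    # continue only after every background job has finished
+echo "All chunks processed in parallel."
+```
+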
Some applications, such as games, however, cannot make use of this approach, since processing and responses need to happen in real time based on user input rather than through planned execution. Nonetheless, the fact that more processing power can be extracted from existing hardware using these computing methods remains true and significant.
+
+Even supercomputers are basically a bunch of powerful CPUs tied together in a circuit to enable faster processing, as mentioned above. The average core count on flagship CPUs from the leading manufacturers AMD and Intel has in fact gone up in the last few years, because increasing the core count has recently been the only practical way to claim better processing and justify upgrades to their product lines. That said, the fact remains that distributed computing and the related concept of parallel computing are the only legitimate ways to improve processing capabilities in the near future. There are minor differences between distributed and parallel computing models as well, but those are beyond the scope of this post.
+
+Another method of getting many computers to execute programs simultaneously is to connect them through the internet and have a cloud-based program implemented in parts by all of the participating systems. This is the basic principle behind distributed applications.
+
+For a primer on what parallel computing is and how it works, interested readers may visit [this][4] webpage. For a more detailed study of the topic, readers with a background in computer science may refer to [this][5] website and the accompanying book.
+
+### What are DApps or Distributed Applications
+
+An application that can make use of the capabilities offered by a distributed computing system is called a **distributed application**. The execution and structure of such an application's back end need to be carefully designed in order to be compatible with the system.
+
+The blockchain presents an opportunity to store data in a distributed system of participating nodes. Building on this opportunity, we can logically construct systems and applications running on such a network (think about how you used to download files via the Torrent protocol).
+
+Such decentralized applications present a lot of benefits over conventional applications that typically run from a central server. Some highlights are:
+
+ * DApps run on a network of participating nodes, and any user request is passed through those nodes to provide the user with the requested functionality. _**The program is executed on the network instead of on a single computer or server**_.
+ * DApps will have codified methods of filtering and executing requests so as to be fair and transparent when users interact with them. To create a new block of data in the chain, the block has to be approved by the participating nodes via a **consensus algorithm**. This fundamental idea of peer-to-peer approval applies to DApps as well. By extension of this principle, a DApp cannot provide different outputs to the same query or input. All users are given the same priority unless explicitly stated otherwise, and all users receive similar results from the DApp. This will prove to be important in developing better industry practices for insurance and finance companies.
A microlending DApp, for instance, cannot single out borrowers and offer them different interest rates on any basis other than their credit history. This also means that all users eventually pay for the operations they request uniformly, depending on the computational complexity of the task they pass on to the application. For instance, combing through 10,000 entries of data will cost proportionately more than combing through, say, 100. The payment or incentivisation system might differ between applications and blockchain protocols, though.
+ * Most DApps are redundant and fail-safe by default. If you're using a service that runs on a central server, a failure at the server end will freeze the application. Think of a service such as PayPal, for instance. If the PayPal server in your region fails for some reason and the central server cannot reroute your request, your payment will not go through. With a DApp, however, even if multiple participating nodes in the blockchain die, you will still find the application live and running, provided at least one node is alive. This suits applications that, by definition, are supposed to be live all the time. Emergency services, insurance, and communications are some key areas where investors hope such DApps will bring in much-needed reliability.
+ * DApps are usually cost-effective because they do not require a central server to be maintained for their functionality. Once they become mainstream, the mean computing cost of running tasks on them is also expected to decrease.
+ * As mentioned, DApps keep running for as long as at least one participant node is live on the chain. This essentially means that DApps cannot be censored or forcibly shut down.
+
+
+
+The list of features above may seem short, but combine them with all the other capabilities of the blockchain, the advancement of wireless network access, and the increasing capabilities of millions of smartphones, and we have in our hands nothing less than a paradigm shift in how the apps we rely on work.
+
+We will look deeper into how DApps function, and how you can build your own DApps on the Ethereum blockchain, in an upcoming post. To give you an idea of the DApp environment right now, we present four carefully chosen examples that are fairly advanced and popular.
+
+##### 1\. BITCOIN (or any Cryptocurrency)
+
+We're quite sure readers did not expect BITCOIN to appear in a list of applications in this post. The point we're trying to make, however, is that any cryptocurrency currently running on a blockchain backbone can be termed a DApp. Cryptocurrencies are in fact the most popular DApp format out there, and a revolutionary one at that.
+
+##### 2\. [MELON][6]
+
+We've talked about how asset management can become an easier task by utilizing the blockchain and [**smart contracts**][7]. **Melon** is a company that aims to provide its users with relevant, usable tools to manage and maximize the returns from the assets they own. They specialize in cryptographic assets for now, with plans to move to real digitized assets in the future.
+
+##### 3\. [Request][8]
+
+**Request** is primarily a ledger system that handles financial transactions, invoicing, and taxation, among other things. Working with other compatible databases and systems, it is also capable of verifying payer data and statistics.
Large corporations which typically have a significant number of defaulting customers will find it easier to handle their operations with a system such as this. + +##### 4\. [CryptoKitties][9] + +Known the world over as the video game that broke the Ethereum blockchain, **CryptoKitties** is a video game that runs on the Ethereum blockchain. The video game identifies each user individually by building your own digital profiles and gives you unique **virtual cats** in return. The game went viral and due to the sheer number of users it actually managed to slow down the Ethereum blockchain and its transaction capabilities. Transactions took longer than usual with users having to pay significantly extra money for simple transactions even. Concerns regarding scalability of the Ethereum blockchain have been raised by several stakeholders since then. + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Distributed-Computing-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ +[4]: https://www.techopedia.com/definition/7/distributed-computing-system +[5]: https://www.distributed-systems.net/index.php/books/distributed-systems-3rd-edition-2017/ +[6]: https://melonport.com/ +[7]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[8]: https://request.network/en/use-cases/ +[9]: https://www.cryptokitties.co/ From a6b5bfd91c28a1472145a020f5c281ebdaf4501d Mon Sep 17 00:00:00 2001 From: David Dai Date: Mon, 20 May 2019 17:31:22 +0800 Subject: [PATCH 24/24] Apply for translating Kubernetes on Fedora IoT with k3s --- sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md b/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md index 5650e80aee..c2f75bc1d4 100644 --- a/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md +++ b/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (StdioA) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -183,7 +183,7 @@ via: https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/ 作者:[Lennart Jern][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[StdioA](https://github.com/StdioA) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出