Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-08-03 10:29:14 +08:00
commit 58f5ce18d7
8 changed files with 1023 additions and 449 deletions


@ -0,0 +1,421 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11179-1.html)
[#]: subject: (Bash aliases you can't live without)
[#]: via: (https://opensource.com/article/19/7/bash-aliases)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
不可或缺的 Bash 别名
======
> 厌倦了一遍又一遍地输入相同的长命令?你觉得在命令行上工作效率低下吗?Bash 别名可以为你创造一个与众不同的世界。
![](https://img.linux.net.cn/data/attachment/album/201908/03/095855ip0h0jpi5u9t3r00.jpg)
Bash 别名是一种用新的命令补充或覆盖 Bash 命令的方法。Bash 别名使用户可以轻松地在 [POSIX][2] 终端中自定义其体验。它们通常定义在 `$HOME/.bashrc` 或 `$HOME/.bash_aliases` 中(后者由 `$HOME/.bashrc` 加载)。
大多数发行版在新用户帐户的默认 `.bashrc` 文件中至少添加了一些流行的别名。这些可以用来简单演示 Bash 别名的语法:
```
alias ls='ls -F'
alias ll='ls -lh'
```
但并非所有发行版都附带预先添加好的别名。如果你想手动添加别名,则必须将它们加载到当前的 Bash 会话中:
```
$ source ~/.bashrc
```
否则,你可以关闭终端并重新打开它,以便重新加载其配置文件。
通过 Bash 初始化脚本中定义的那些别名,你可以键入 `ll` 而得到 `ls -l` 的结果;而当你键入 `ls` 时,得到的也不再是原来的 [ls][3] 的普通输出。
那些别名很棒,但它们只是浅尝辄止。以下是十大 Bash 别名,一旦你试过它们,你会发现再也不能离开它们。
### 首先设置
在开始之前,创建一个名为 `~/.bash_aliases` 的文件:
```
$ touch ~/.bash_aliases
```
然后,确认这些代码出现在你的 `~/.bashrc` 文件当中:
```
if [ -e "$HOME/.bash_aliases" ]; then
    source "$HOME/.bash_aliases"
fi
```
如果你想亲自尝试本文中的任何别名,请将它们输入到 `.bash_aliases` 文件当中,然后使用 `source ~/.bashrc` 命令将它们加载到当前 Bash 会话中。
### 按文件大小排序
如果你最初是通过 GNOME 中的 Nautilus、MacOS 中的 Finder 或 Windows 中的资源管理器这类 GUI 文件管理器开始接触计算机的,那么你很可能习惯了按文件大小排序文件列表。在终端上你也可以做到这一点,但相应的命令不是很简洁。
将此别名添加到 GNU 系统上的配置中:
```
alias lt='ls --human-readable --size -1 -S --classify'
```
此别名用一条 `ls` 命令来实现 `lt`:它在单独一列中显示每个项目的大小,按大小对其进行排序,并用符号标明文件类型。加载这个新别名,然后试一下:
```
$ source ~/.bashrc
$ lt
total 344K
140K configure*
 44K aclocal.m4
 36K LICENSE
 32K config.status*
 24K Makefile
 24K Makefile.in
 12K config.log
8.0K README.md
4.0K info.slackermedia.Git-portal.json
4.0K git-portal.spec
4.0K flatpak.path.patch
4.0K Makefile.am*
4.0K dot-gitlab.ci.yml
4.0K configure.ac*
   0 autom4te.cache/
   0 share/
   0 bin/
   0 install-sh@
   0 compile@
   0 missing@
   0 COPYING@
```
在 MacOS 或 BSD 上,`ls` 命令没有相同的选项,因此这个别名可以改为:
```
alias lt='du -sh * | sort -h'
```
这个版本的结果稍有不同:
```
$ du -sh * | sort -h
0       compile
0       COPYING
0       install-sh
0       missing
4.0K    configure.ac
4.0K    dot-gitlab.ci.yml
4.0K    flatpak.path.patch
4.0K    git-portal.spec
4.0K    info.slackermedia.Git-portal.json
4.0K    Makefile.am
8.0K    README.md
12K     config.log
16K     bin
24K     Makefile
24K     Makefile.in
32K     config.status
36K     LICENSE
44K     aclocal.m4
60K     share
140K    configure
476K    autom4te.cache
```
实际上,即使在 Linux 上,上面这个命令也很有用,因为使用 `ls` 列出的目录和符号链接的大小为 0,这可能不是你真正想要的信息。使用哪个,取决于你自己的喜好。
*感谢 Brad Alexander 提供的这个别名的思路。*
### 只查看挂载的驱动器
`mount` 命令过去很简单。只需一个命令,你就可以获得计算机上所有已挂载的文件系统的列表,它经常用于概览连接到工作站有哪些驱动器。在过去看到超过三、四个条目就会令人印象深刻,因为大多数计算机没有那么多的 USB 端口,因此这个结果还是比较好查看的。
现在计算机有点复杂,有 LVM、物理驱动器、网络存储和虚拟文件系统,`mount` 的结果就很难一目了然:
```
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=8131024k,nr_inodes=2032756,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
[...]
/dev/nvme0n1p2 on /boot type ext4 (rw,relatime,seclabel)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
[...]
gvfsd-fuse on /run/user/100977/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=100977,group_id=100977)
/dev/sda1 on /run/media/seth/pocket type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
/dev/sdc1 on /run/media/seth/trip type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
```
要解决这个问题,试试这个别名:
```
alias mnt="mount | awk -F' ' '{ printf \"%s\t%s\n\",\$1,\$3; }' | column -t | grep -E '^/dev/' | sort"
```
此别名使用 `awk` 按列解析 `mount` 的输出,将输出减少到你可能想要查找的内容(挂载了哪些硬盘驱动器,而不是文件系统):
```
$ mnt
/dev/mapper/fedora-root  /
/dev/nvme0n1p1           /boot/efi
/dev/nvme0n1p2           /boot
/dev/sda1                /run/media/seth/pocket
/dev/sdc1                /run/media/seth/trip
```
在 MacOS 上,`mount` 命令不提供非常详细的输出,因此这个别名可能过度精简了。但是,如果你更喜欢简洁的报告,请尝试以下方法:
```
alias mnt='mount | grep -E ^/dev | column -t'
```
结果:
```
$ mnt
/dev/disk1s1  on  /                (apfs,  local,  journaled)
/dev/disk1s4  on  /private/var/vm  (apfs,  local,  noexec,     journaled,  noatime,  nobrowse)
```
### 在你的 grep 历史中查找命令
有时你好不容易弄清楚了如何在终端完成某件事,并觉得自己永远不会忘记你刚学到的东西。然后,一个小时过去之后你就完全忘记了你做了什么。
搜索 Bash 历史记录是每个人不时要做的事情。如果你确切地知道要搜索的内容,可以使用 `Ctrl + R` 对历史记录进行反向搜索,但有时你无法记住要查找的确切命令。
这是使该任务更容易的别名:
```
alias gh='history|grep'
```
这是如何使用的例子:
```
$ gh bash
482 cat ~/.bashrc | grep _alias
498 emacs ~/.bashrc
530 emacs ~/.bash_aliases
531 source ~/.bashrc
```
### 按修改时间排序
每个星期一都会这样:你坐在你的电脑前开始工作,你打开一个终端,你发现你已经忘记了上周五你在做什么。你需要的是列出最近修改的文件的别名。
你可以使用 `ls` 命令创建别名,以帮助你找到上次离开的位置:
```
alias left='ls -t -1'
```
输出很简单,但如果你愿意,可以加上 `-l`(长格式)选项来显示更详细的信息。这个别名的输出如下:
```
$ left
demo.jpeg
demo.xcf
design-proposal.md
rejects.txt
brainstorm.txt
query-letter.xml
```
### 文件计数
如果你需要知道目录中有多少文件,那么相应的解决方案是 UNIX 命令构造的最经典示例之一:使用 `ls` 命令列出文件,用 `-1` 选项将其输出控制为只有一列,然后通过管道把输出交给 `wc`(单词计数)命令,以计算有多少行。
这是 UNIX 理念的精彩演示:用户可以用小型的系统组件构建自己的解决方案。不过,如果你碰巧每天都要做几次,这个命令组合输入起来也很繁琐;而且如果不加 `-R` 选项,它就无法统计子目录中的文件,加上 `-R` 又会在输出中引入额外的行,导致计数失准。
而这个别名使这个过程变得简单:
```
alias count='find . -type f | wc -l'
```
这个别名会计算文件,忽略目录,但**不会**忽略目录的内容。如果你有一个包含两个目录的项目文件夹,每个目录包含两个文件,则该别名将返回 4因为整个项目中有 4 个文件。
```
$ ls
foo   bar
$ count
4
```
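如果你只想统计当前目录第一层的文件、不把子目录里的内容算进来,可以借助 `find` 的 `-maxdepth` 选项。下面是一个示意性的变体(别名 `count1` 是随手取的名字):

```shell
# 只统计当前目录本身的文件,不深入子目录
alias count1='find . -maxdepth 1 -type f | wc -l'
```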
### 创建 Python 虚拟环境
你用 Python 编程吗?
你用 Python 编写了很多程序吗?
如果是这样,那么你就知道创建 Python 虚拟环境至少需要 53 次击键。
这个数字里有 49 次是多余的,这很容易通过两个名为 `ve` 和 `va` 的新别名来解决:
```
alias ve='python3 -m venv ./venv'
alias va='source ./venv/bin/activate'
```
运行 `ve` 会创建一个名为 `venv` 的新目录,其中包含 Python 3 的常用虚拟环境文件系统。`va` 别名则在当前 shell 中激活该环境:
```
$ cd my-project
$ ve
$ va
(venv) $
```
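如果你愿意更进一步,也可以把这两步合并成一个函数。下面只是一个示意性的写法(函数名 `mkve` 是随手取的):

```shell
# 创建并立即激活虚拟环境;创建失败时不会执行 activate
mkve() {
    python3 -m venv ./venv && . ./venv/bin/activate
}
```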
### 增加一个复制进度条
每个人都会吐槽进度条,因为它们似乎总是不合时宜。然而在内心深处,我们似乎都想要它们。UNIX 的 `cp` 命令没有进度条,但它有一个显示详细信息的 `-v` 选项,它会把每个被复制的文件名回显到终端。这是一个相当不错的技巧,但是当你复制一个大文件、想知道该文件还有多少内容尚未传输时,它的作用就没那么大了。
`pv` 命令可以在复制期间提供进度条,但它并不常用。另一方面,`rsync` 命令包含在几乎所有的 POSIX 系统的默认安装中,并且它被普遍认为是远程和本地复制文件的最智能方法之一。
更好的是,它有一个内置的进度条。
```
alias cpv='rsync -ah --info=progress2'
```
像使用 `cp` 命令一样使用此别名:
```
$ cpv bigfile.flac /run/media/seth/audio/
          3.83M 6%  213.15MB/s    0:00:00 (xfr#4, to-chk=0/4)
```
使用此命令的一个有趣的副作用是 `rsync` 无需 `-r` 标志就可以复制文件和目录,而 `cp` 则需要。
### 避免意外删除
你不应该使用 `rm` 命令。`rm` 手册甚至这样说:
> **警告:**如果使用 `rm` 删除文件,通常可以恢复该文件的内容。如果你想要更加确保内容真正无法恢复,请考虑使用 `shred`
如果要删除文件,则应将文件移动到“废纸篓”,就像使用桌面时一样。
POSIX 使这很简单,因为垃圾桶是文件系统中可访问的一个实际位置。该位置可能会发生变化,具体取决于你的平台:在 [FreeDesktop][4] 上,“垃圾桶”位于 `~/.local/share/Trash`,而在 MacOS 上则是 `~/.Trash`,但无论如何,它只是一个目录,你可以将文件藏在那个看不见的地方,直到你准备永久删除它们为止。
这个简单的别名提供了一种从终端将文件扔进垃圾桶的方法:
```
alias tcn='mv --force -t ~/.local/share/Trash '
```
该别名使用了一个鲜为人知的 `mv` 选项 `-t`,它让你先指定移动的目标目录,再列出要移动的文件,而不必像通常那样把目标放在最后。现在,你可以使用这个新命令将文件和文件夹移动到系统垃圾桶:
```
$ ls
foo  bar
$ tcn foo
$ ls
bar
```
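需要注意,`-t` 是 GNU `mv` 的扩展选项,MacOS 或 BSD 的 `mv` 并没有它。如果需要跨平台,可以改用一个小函数来实现同样的效果,下面是一个示意写法(按前文假定 MacOS 的垃圾桶位于 `~/.Trash`):

```shell
# 把任意数量的文件或目录移入垃圾桶(MacOS 示例路径)
tcn() {
    mv -f "$@" ~/.Trash/
}
```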
现在文件已经“消失”了,直到你惊出一身冷汗,才意识到你还需要它。此时,你可以从系统垃圾桶中把它抢救回来;顺便别忘了感谢 Bash 和 `mv` 的开发人员。
**注意:**如果你需要一个具有更好的 FreeDesktop 兼容性的更强大的垃圾桶命令,请参阅 [Trashy][5]。
### 简化 Git 工作流
每个人都有自己独特的工作流程,但无论如何,通常都会有重复的任务。如果你经常使用 Git那么你可能会发现自己经常重复的一些操作序列。也许你会发现自己回到主分支并整天一遍又一遍地拉取最新的变化或者你可能发现自己创建了标签然后将它们推到远端抑或可能完全是其它的什么东西。
无论让你厌倦一遍遍输入的 Git 魔咒是什么,你都可以通过 Bash 别名减轻一些痛苦。很大程度上由于它能够将参数传递给钩子Git 拥有着丰富的内省命令,可以让你不必在 Bash 中执行那些丑陋冗长的命令。
例如,虽然你可能很难在 Bash 中找到项目的顶级目录(就 Bash 而言,它是一个完全随意的名称,因为计算机的绝对顶级是根目录),但 Git 可以通过简单的查询找到项目的顶级目录。如果你研究过 Git 钩子,你会发现自己能够找到 Bash 一无所知的各种信息,而你可以利用 Bash 别名来利用这些信息。
这里有一个别名:无论你当前在项目中的哪个目录下工作,它都会切换到 Git 项目的顶级目录,检出主分支,并执行 `git pull`:
```
alias startgit='cd `git rev-parse --show-toplevel` && git checkout master && git pull'
```
这种别名绝不是一个普遍有用的别名,但它演示了一个相对简单的别名如何能够消除大量繁琐的导航、命令和等待提示。
一个更简单、可能也更通用的别名,可以让你返回 Git 项目的顶级目录。这个别名非常有用,因为当你在一个项目上工作时,该项目多少会成为你的“临时家目录”。回“家”应该像回真正的家一样简单,这里有一个别名:
```
alias cg='cd `git rev-parse --show-toplevel`'
```
现在,命令 `cg` 将你带到 Git 项目的顶部,无论你下潜的目录结构有多深。
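如果担心在 Git 仓库之外误用这个别名(那时 `git rev-parse` 会失败,反引号展开为空,`cd` 会把你带回家目录),可以把它改写成一个稍微健壮些的函数,下面是一个示意写法:

```shell
# 回到 Git 项目的顶级目录;不在仓库中时报错而不是乱跳
cg() {
    local top
    top=$(git rev-parse --show-toplevel 2>/dev/null) || {
        echo "not inside a git repository" >&2
        return 1
    }
    cd "$top"
}
```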
### 切换目录并同时查看目录内容
(据称)曾经一位著名科学家提出过,我们可以通过收集极客输入 `cd` 后跟 `ls` 消耗的能量来解决地球上的许多能量问题。
这是一种常见的用法,因为通常当你更改目录时,你都会有查看周围的内容的冲动或需要。
但是在你的计算机的目录树中移动并不一定是一个走走停停的过程。
这是一个作弊,因为它根本不是别名,但它是探索 Bash 功能的一个很好的借口。虽然别名非常适合快速替换一个命令,但 Bash 也允许你在 `.bashrc` 文件中添加本地函数(或者你加载到 `.bashrc` 中的单独函数文件,就像你的别名文件一样)。
为了保持模块化,创建一个名为 `~/.bash_functions` 的新文件,然后让你的 `.bashrc` 加载它:
```
if [ -e "$HOME/.bash_functions" ]; then
    source "$HOME/.bash_functions"
fi
```
在该函数文件中,添加这些代码:
```
function cl() {
    DIR="$*"
    # 如果没有给出目录参数,则回到家目录
    if [ $# -lt 1 ]; then
        DIR="$HOME"
    fi
    # 换成你喜欢的 ls 命令
    builtin cd "${DIR}" && ls -F --color=auto
}
```
将函数加载到 Bash 会话中,然后尝试:
```
$ source ~/.bash_functions
$ cl Documents
foo bar baz
$ pwd
/home/seth/Documents
$ cl ..
Desktop  Documents  Downloads
[...]
$ pwd
/home/seth
```
函数比别名更灵活,但有了这种灵活性,你就有责任确保代码有意义并达到你的期望。别名是简单的,所以要保持简单而有用。要正式修改 Bash 的行为,请使用保存到 `PATH` 环境变量中某个位置的函数或自定义的 shell 脚本。
附注,还有一些巧妙的奇技淫巧可以把 `cd` 加 `ls` 的序列实现为别名,所以如果你足够耐心,即使是一个简单的别名也有无穷的玩法。
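比如,下面就是这类技巧中常见的一种:在别名里内联定义一个函数并立即调用它,从而让“别名”也能接收参数(别名和函数名 `cl2`、`_cl2` 都是随手取的,仅作示意):

```shell
# 别名本身不接受位置参数;内联函数可以绕过这一限制
alias cl2='_cl2(){ cd "${1:-$HOME}" && ls -F; }; _cl2'
```

效果与前面的函数版本相同,只是绕道别名实现,算不上更好,但很好玩。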
### 开始别名化和函数化吧
能够定制自己的环境,使得 Linux 变得如此有趣;能够提高效率,使得 Linux 可以改变生活。从简单的别名开始,进而尝试函数,并在评论中分享你离不开的别名吧!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/bash-aliases
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://opensource.com/article/19/7/master-ls-command
[4]: https://www.freedesktop.org/wiki/
[5]: https://gitlab.com/trashy/trashy


@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IBM fuses its software with Red Hat's to launch hybrid-cloud juggernaut)
[#]: via: (https://www.networkworld.com/article/3429596/ibm-fuses-its-software-with-red-hats-to-launch-hybrid-cloud-juggernaut.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
IBM fuses its software with Red Hat's to launch hybrid-cloud juggernaut
======
IBM is starting a potentially huge run at hybrid cloud by tying more than 100 of its products to the Red Hat OpenShift platform.
![Hans \(CC0\)][1]
IBM has wasted no time aligning its own software with its newly acquired [Red Hat technology][2], saying its portfolio would be transformed to work cloud natively and augmented to run on Red Hat's OpenShift platform.
IBM in July [finalized its $34 billion][3] purchase of Red Hat and says it will use the Linux powerhouse's open-source know-how and Linux expertise to grow larger scale hybrid-cloud customer projects and to create a web of partnerships to simplify carrying them out.
**[ Check out [What is hybrid cloud computing][4] and learn [what you need to know about multi-cloud][5]. | Get regularly scheduled insights by [signing up for Network World newsletters][6]. ]**
The effort has started with IBM bundling Red Hat's Kubernetes-based OpenShift Container Platform with more than 100 IBM products in what it calls Cloud Paks. OpenShift lets enterprise customers deploy and manage containers on the infrastructure of their choice, be it private or public clouds, including AWS, Microsoft Azure, Google Cloud Platform, Alibaba and IBM Cloud.
The prepackaged Cloud Paks include a secured Kubernetes container and containerized IBM middleware designed to let customers quickly spin up enterprise-ready containers, the company said.
Five Cloud Paks exist today: Cloud Pak for Data, Application, Integration, Automation and Multicloud Management. The Paks will ultimately include IBM's DB2, WebSphere, [API Connect][7], Watson Studio, [Cognos Analytics][8] and more.
In addition, IBM said it will bring the Red Hat OpenShift Container Platform over to IBM Z mainframes and IBM LinuxONE. Together these two platforms power about 30 billion transactions a day globally, [IBM said][9].  Some of the goals here are to increase container density and help customers build containerized applications that can scale vertically and horizontally.
“The vision is for OpenShift-enabled IBM software to become the foundational building blocks clients can use to transform their organizations and build across hybrid, multicloud environments,” Hillery Hunter, VP & CTO IBM Cloud said in an [IBM blog][10] about the announcement.
OpenShift is the underlying Kubernetes and Container orchestration layer that supports the containerized software, she wrote, and placing the Cloud Paks atop Red Hat OpenShift gives IBM a broad reach immediately. "OpenShift is also where the common services such as logging, metering, and security that IBM Cloud Paks leverage let businesses effectively manage and understand their workloads,” Hunter stated.
Analysts said the moves were expected but still extremely important for the company to ensure this acquisition is successful.
“We expect IBM and Red Hat will do the obvious stuff first, and that's what this mostly is,” said Lee Doyle, principal analyst at Doyle Research. "The challenge will be getting deeper integrations and taking the technology to the next level. What they do in the next six months to a year will be critical.”
Over the last few years IBM has been evolving its strategy to major on cloud computing and cognitive computing. Its argument against cloud providers like AWS, Microsoft Azure, and Google Cloud is that only 20 percent of enterprise workloads have so far moved to the cloud: the easy 20 percent. The rest are the difficult 80 percent of workloads that are complex, legacy applications, often mainframe based, that have run banking and big business for decades, wrote David Terrar, executive advisor for [Bloor Research][11]. "How do you transform those?"
That background gives IBM enterprise expertise and customer relationships competitors don't. “IBM has been talking hybrid cloud and multicloud to these customers for a while, and the Red Hat move is like an injection of steroids to the strategy," Terrar wrote. "When you add in its automation and cognitive positioning with Watson, and the real-world success with enterprise-grade blockchain implementations like TradeLens and the Food Trust network, I'd argue that IBM is positioning itself as the Enterprise Cloud Company.”
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3429596/ibm-fuses-its-software-with-red-hats-to-launch-hybrid-cloud-juggernaut.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/06/moon-2117426_1280-100726933-large.jpg
[2]: https://www.networkworld.com/article/3317517/the-ibm-red-hat-deal-what-it-means-for-enterprises.html
[3]: https://www.networkworld.com/article/3316960/ibm-closes-34b-red-hat-deal-vaults-into-multi-cloud.html
[4]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[5]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[6]: https://www.networkworld.com/newsletters/signup.html
[7]: https://www.ibm.com/cloud/api-connect
[8]: https://www.ibm.com/products/cognos-analytics
[9]: https://www.ibm.com/blogs/systems/announcing-our-direction-for-red-hat-openshift-for-ibm-z-and-linuxone/?cm_mmc=OSocial_Twitter-_-Systems_Systems+-+LinuxONE-_-WW_WW-_-OpenShift+IBM+Z+and+LinuxONE+BLOG+still+image&cm_mmca1=000001BT&cm_mmca2=10009456&linkId=71365692
[10]: https://www.ibm.com/blogs/think/2019/08/ibm-software-on-any-cloud/
[11]: https://www.bloorresearch.com/
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world


@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Self-organizing micro robots may soon swarm the industrial IoT)
[#]: via: (https://www.networkworld.com/article/3429200/self-organizing-micro-robots-may-soon-swarm-the-industrial-iot.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Self-organizing micro robots may soon swarm the industrial IoT
======
Masses of ant-like, self-organizing, microbots could soon perform tasks such as push objects in factories, swarm industrial production-line trouble-spots, and report environmental data.
![Marc Delachaux / 2019 EPFL][1]
Minuscule robots that can jump and crawl could soon be added to the [industrial internet of things][2] arsenal. The devices, a kind of printed circuit board with leg-like appendages, wouldn't need wide networks to function but would self-organize and communicate efficiently, mainly with one another.
Breakthrough inventions announced recently make the likelihood of these ant-like helpers a real possibility.
**[ Also see: [What is edge computing?][3] and [How edge networking and IoT will reshape data centers][4] ]**
### Vibration-powered micro robots
The first invention is the ability to harness vibration from ultrasound and other sources, such as piezoelectric actuators, to get micro robots to respond to commands. The piezoelectric effect is when some kinds of materials generate an electrical charge in response to mechanical stresses.
[Researchers at Georgia Tech have created 3D-printed micro robots][5] that are vibration powered. Only 0.07 inches long, the ant-like devices — which they call "micro-bristle-bots" — have four or six spindly legs and can respond to differing quivering frequencies and move uniquely, based on their leg design.
Researchers say the microbots could be used to sense environmental changes and to move materials.
“As the micro-bristle-bots move up and down, the vertical motion is translated into a directional movement by optimizing the design of the legs,” says assistant professor Azadeh Ansari in a Georgia Tech article. Steering will be accomplished by frequencies and amplitudes. Jumping and swimming might also be possible, the researchers say.
### Self-organizing micro robots that traverse any surface
In another advancement, scientists at Ecole Polytechnique Fédérale de Lausanne (EPFL) say they have overcome limitations on locomotion and can now get [tiny, self-organizing robot devices to traverse any kind of surface][6]. Pushing objects in factories could be one use.
The robots already jump, and now they self-organize. The Swiss school's PCB-with-legs robots, en masse, figure out for themselves how many fellow microbots to recruit for a particular job. Additionally, the ad hoc, swarming, and self-organizing nature of the group means it can't fail catastrophically; substitute robots get marshalled and join the work environment as necessary.
Ad hoc networks are the way to go for robots. One advantage to an ad hoc network in IoT is that one can distribute the sensors randomly, and the sensors, which are basically nodes, figure out how to communicate. Routers don't get involved. The nodes sample to find out which other nodes are nearby, including how much bandwidth is needed.
The concept works on the same principle as how a marketer samples public opinion by just asking a representative group what they think, not everyone. Ants, too, size their nests like that: they bump into other ants, never really counting all of their neighbors.
It's a strong networking concept for locations where the sensor can get moved inadvertently. [I used the example of environmental sensors being strewn randomly in an active volcano][7] when I wrote about the theory some years ago.
The Swiss robots (developed in conjunction with Osaka University) use the same concept. They, too, can travel to places requiring environmental observation. Heat spots in a factory are one example. The collective intelligence also means one could conceivably eliminate GPS or visual feedback, which is unlike current aerial drone technology.
### Even smaller industrial robots
University of Pennsylvania professor Marc Miskin, presenting at the American Physical Society in March, said he is working on even smaller robots.
“They could crawl into cellphone batteries and clean and rejuvenate them,” writes [Kenneth Chang in a New York Times article][8]. “Millions of them in a petri dish could be used to test ideas in networking and communications.”
**More about edge networking:**
* [How edge networking and IoT will reshape data centers][4]
* [Edge computing best practices][9]
* [How edge computing can help secure the IoT][10]
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3429200/self-organizing-micro-robots-may-soon-swarm-the-industrial-iot.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/epfl-robot-ants-100807019-large.jpg
[2]: https://www.networkworld.com/article/3243928/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[5]: https://www.news.gatech.edu/2019/07/16/tiny-vibration-powered-robots-are-size-worlds-smallest-ant
[6]: https://actu.epfl.ch/news/robot-ants-that-can-jump-communicate-and-work-toge/
[7]: https://www.networkworld.com/article/3098572/how-new-ad-hoc-networks-will-organize.html
[8]: https://www.nytimes.com/2019/04/30/science/microbots-robots-silicon-wafer.html
[9]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[10]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world


@ -1,449 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bash aliases you can't live without)
[#]: via: (https://opensource.com/article/19/7/bash-aliases)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Bash aliases you can't live without
======
Tired of typing the same long commands over and over? Do you feel
inefficient working on the command line? Bash aliases can make a world
of difference.
![bash logo on green background][1]
A Bash alias is a method of supplementing or overriding Bash commands with new ones. Bash aliases make it easy for users to customize their experience in a [POSIX][2] terminal. They are often defined in **$HOME/.bashrc** or **$HOME/.bash_aliases** (which must be loaded by **$HOME/.bashrc**).
Most distributions add at least some popular aliases in the default **.bashrc** file of any new user account. These are simple ones to demonstrate the syntax of a Bash alias:
```
alias ls='ls -F'
alias ll='ls -lh'
```
Not all distributions ship with pre-populated aliases, though. If you add aliases manually, then you must load them into your current Bash session:
```
$ source ~/.bashrc
```
Otherwise, you can close your terminal and re-open it so that it reloads its configuration file.
With those aliases defined in your Bash initialization script, you can then type **ll** and get the results of **ls -l**, and when you type **ls**, you no longer get the output of plain old [ls][3].
Those aliases are great to have, but they just scratch the surface of what's possible. Here are the top 10 Bash aliases that, once you try them, you won't be able to live without.
### Set up first
Before beginning, create a file called **~/.bash_aliases**:
```
$ touch ~/.bash_aliases
```
Then, make sure that this code appears in your **~/.bashrc** file:
```
if [ -e "$HOME/.bash_aliases" ]; then
    source "$HOME/.bash_aliases"
fi
```
If you want to try any of the aliases in this article for yourself, enter them into your **.bash_aliases** file, and then load them into your Bash session with the **source ~/.bashrc** command.
### Sort by file size
If you started your computing life with GUI file managers like Nautilus in GNOME, the Finder in MacOS, or Explorer in Windows, then you're probably used to sorting a list of files by their size. You can do that in a terminal as well, but it's not exactly succinct.
Add this alias to your configuration on a GNU system:
```
alias lt='ls --human-readable --size -1 -S --classify'
```
This alias replaces **lt** with an **ls** command that displays the size of each item, and then sorts it by size, in a single column, with a notation to indicate the kind of file. Load your new alias, and then try it out:
```
$ source ~/.bashrc
$ lt
total 344K
140K configure*
 44K aclocal.m4
 36K LICENSE
 32K config.status*
 24K Makefile
 24K Makefile.in
 12K config.log
8.0K README.md
4.0K info.slackermedia.Git-portal.json
4.0K git-portal.spec
4.0K flatpak.path.patch
4.0K Makefile.am*
4.0K dot-gitlab.ci.yml
4.0K configure.ac*
   0 autom4te.cache/
   0 share/
   0 bin/
   0 install-sh@
   0 compile@
   0 missing@
   0 COPYING@
```
On MacOS or BSD, the **ls** command doesn't have the same options, so this alias works instead:
```
alias lt='du -sh * | sort -h'
```
The results of this version are a little different:
```
$ du -sh * | sort -h
0       compile
0       COPYING
0       install-sh
0       missing
4.0K    configure.ac
4.0K    dot-gitlab.ci.yml
4.0K    flatpak.path.patch
4.0K    git-portal.spec
4.0K    info.slackermedia.Git-portal.json
4.0K    Makefile.am
8.0K    README.md
12K     config.log
16K     bin
24K     Makefile
24K     Makefile.in
32K     config.status
36K     LICENSE
44K     aclocal.m4
60K     share
140K    configure
476K    autom4te.cache
```
In fact, even on Linux, that command is useful, because using **ls** lists directories and symlinks as being 0 in size, which may not be the information you actually want. It's your choice.
_Thanks to Brad Alexander for this alias idea._
### View only mounted drives
The **mount** command used to be so simple. With just one command, you could get a list of all the mounted filesystems on your computer, and it was frequently used for an overview of what drives were attached to a workstation. It used to be impressive to see more than three or four entries because most computers don't have many more USB ports than that, so the results were manageable.
Computers are a little more complicated now, and between LVM, physical drives, network storage, and virtual filesystems, the results of **mount** can be difficult to parse:
```
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=8131024k,nr_inodes=2032756,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
[...]
/dev/nvme0n1p2 on /boot type ext4 (rw,relatime,seclabel)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
[...]
gvfsd-fuse on /run/user/100977/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=100977,group_id=100977)
/dev/sda1 on /run/media/seth/pocket type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
/dev/sdc1 on /run/media/seth/trip type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
```
To solve that problem, try an alias like this:
```
alias mnt="mount | awk -F' ' '{ printf \"%s\t%s\n\",\$1,\$3; }' | column -t | grep -E '^/dev/' | sort"
```
This alias uses **awk** to parse the output of **mount** by column, reducing the output to what you're probably looking for (what hard drives, and not file systems, are mounted):
```
$ mnt
/dev/mapper/fedora-root  /
/dev/nvme0n1p1           /boot/efi
/dev/nvme0n1p2           /boot
/dev/sda1                /run/media/seth/pocket
/dev/sdc1                /run/media/seth/trip
```
On MacOS, the **mount** command doesn't provide terribly verbose output, so an alias may be overkill. However, if you prefer a succinct report, try this:
```
alias mnt='mount | grep -E ^/dev | column -t'
```
The results:
```
$ mnt
/dev/disk1s1  on  /                (apfs,  local,  journaled)
/dev/disk1s4  on  /private/var/vm  (apfs,  local,  noexec,     journaled,  noatime,  nobrowse)
```
### Find a command in your grep history
Sometimes you figure out how to do something in the terminal, and promise yourself that you'll never forget what you've just learned. Then an hour goes by, and you've completely forgotten what you did.
Searching through your Bash history is something everyone has to do from time to time. If you know exactly what you're searching for, you can use **Ctrl+R** to do a reverse search through your history, but sometimes you can't remember the exact command you want to find.
Here's an alias to make that task a little easier:
```
alias gh='history|grep'
```
Here's an example of how to use it:
```
$ gh bash
482 cat ~/.bashrc | grep _alias
498 emacs ~/.bashrc
530 emacs ~/.bash_aliases
531 source ~/.bashrc
```
### Sort by modification time
It happens every Monday: You get to work, you sit down at your computer, you open a terminal, and you find you've forgotten what you were doing last Friday. What you need is an alias to list the most recently modified files.
You can use the **ls** command to create an alias to help you find where you left off:
```
alias left='ls -t -1'
```
The output is simple, although you can extend it with the **-l** (long format) option if you prefer. The alias, as listed, displays this:
```
$ left
demo.jpeg
demo.xcf
design-proposal.md
rejects.txt
brainstorm.txt
query-letter.xml
```
### Count files
If you need to know how many files you have in a directory, the solution is one of the most classic examples of UNIX command construction: You list files with the **ls** command, control its output to be only one column with the **-1** option, and then pipe that output to the **wc** (word count) command to count how many lines of single files there are.
It's a brilliant demonstration of how the UNIX philosophy allows users to build their own solutions using small system components. This command combination is also a lot to type if you happen to do it several times a day, and it doesn't exactly work for a directory of directories without using the **-R** option, which introduces new lines to the output and renders the exercise useless.
Instead, this alias makes the process easy:
```
alias count='find . -type f | wc -l'
```
This one counts files, ignoring directories, but _not_ the contents of directories. If you have a project folder containing two directories, each of which contains two files, the alias returns four, because there are four files in the entire project.
```
$ ls
foo   bar
$ count
4
```
### Create a Python virtual environment
Do you code in Python?
Do you code in Python a lot?
If you do, then you know that creating a Python virtual environment requires, at the very least, 53 keystrokes.
That's 49 too many, but that's easily circumvented with two new aliases called **ve** and **va**:
```
alias ve='python3 -m venv ./venv'
alias va='source ./venv/bin/activate'
```
Running **ve** creates a new directory, called **venv**, containing the usual virtual environment filesystem for Python3. The **va** alias activates the environment in your current shell:
```
$ cd my-project
$ ve
$ va
(venv) $
```
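If you prefer a single command for both steps, the pair can also be folded into a Bash function; the following is a sketch (the name **vv** is an invention, and it assumes **python3** is on your PATH):

```shell
# Hypothetical helper: create ./venv if it does not exist, then activate it.
vv() {
    [ -d ./venv ] || python3 -m venv ./venv
    source ./venv/bin/activate
}
```

Unlike an alias, a function like this also works inside shell scripts.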
### Add a copy progress bar
Everybody pokes fun at progress bars because they're infamously inaccurate. And yet, deep down, we all seem to want them. The UNIX **cp** command has no progress bar, but it does have a **-v** option for verbosity, meaning that it echoes the name of each file being copied to your terminal. That's a pretty good hack, but it doesn't work so well when you're copying one big file and want some indication of how much of the file has yet to be transferred.
The **pv** command provides a progress bar during copy, but it's not common as a default application. On the other hand, the **rsync** command is included in the default installation of nearly every POSIX system available, and it's widely recognized as one of the smartest ways to copy files both remotely and locally.
Better yet, it has a built-in progress bar.
```
alias cpv='rsync -ah --info=progress2'
```
Using this alias is the same as using the **cp** command:
```
$ cpv bigfile.flac /run/media/seth/audio/
          3.83M 6%  213.15MB/s    0:00:00 (xfr#4, to-chk=0/4)
```
An interesting side effect of using this command is that **rsync** copies both files and directories without the **-r** flag that **cp** would otherwise require.
### Protect yourself from file removal accidents
You shouldn't use the **rm** command. The **rm** manual even says so:
> _Warning_: If you use rm to remove a file, it is usually possible to recover the contents of that file. If you want more assurance that the contents are truly unrecoverable, consider using shred.
If you want to remove a file, you should move the file to your Trash, just as you do when using a desktop.
POSIX makes this easy, because the Trash is an accessible, actual location in your filesystem. That location may change, depending on your platform: On a [FreeDesktop][4] system, the Trash is located at **~/.local/share/Trash**, while on MacOS it's **~/.Trash**, but either way, it's just a directory into which you place files that you want out of sight until you're ready to erase them forever.
This simple alias provides a way to toss files into the Trash bin from your terminal:
```
alias tcn='mv --force -t ~/.local/share/Trash '
```
This alias uses a little-known **mv** flag that enables you to provide the file you want to move as the final argument, ignoring the usual requirement for that file to be listed first. Now you can use your new command to move files and folders to your system Trash:
```
$ ls
foo  bar
$ tcn foo
$ ls
bar
```
Now the file is "gone," but only until you realize in a cold sweat that you still need it. At that point, you can rescue the file from your system Trash; be sure to tip the Bash and **mv** developers on the way out.
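One caveat with the alias: **mv** fails if the Trash directory does not exist yet (for example, on a brand-new account). A hypothetical function version (the name **tcnf** and the path handling are assumptions, and strict FreeDesktop compliance would involve a **Trash/files** subdirectory plus **.trashinfo** records, which the tool in the note below handles) can create the directory on demand:

```shell
# Sketch: same idea as the tcn alias, but create the Trash directory first.
tcnf() {
    local trash="${XDG_DATA_HOME:-$HOME/.local/share}/Trash"
    mkdir -p "$trash"
    mv --force -t "$trash" "$@"
}
```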
**Note:** If you need a more robust **Trash** command with better FreeDesktop compliance, see [Trashy][5].
### Simplify your Git workflow
Everyone has a unique workflow, but there are usually repetitive tasks no matter what. If you work with Git on a regular basis, then there's probably some sequence you find yourself repeating pretty frequently. Maybe you find yourself going back to the master branch and pulling the latest changes over and over again during the day, or maybe you find yourself creating tags and then pushing them to the remote, or maybe it's something else entirely.
No matter what Git incantation you've grown tired of typing, you may be able to alleviate some pain with a Bash alias. Largely thanks to its ability to pass arguments to hooks, Git has a rich set of introspective commands that save you from having to perform uncanny feats in Bash.
For instance, while you might struggle to locate, in Bash, a project's top-level directory (which, as far as Bash is concerned, is an entirely arbitrary designation, since the absolute top level to a computer is the root directory), Git knows its top level with a simple query. If you study up on Git hooks, you'll find yourself able to find out all kinds of information that Bash knows nothing about, but you can leverage that information with a Bash alias.
Here's an alias to find the top level of a Git project, no matter where in that project you are currently working, and then to change directory to it, change to the master branch, and perform a Git pull:
```
alias startgit='cd `git rev-parse --show-toplevel` && git checkout master && git pull'
```
This kind of alias is by no means a universally useful alias, but it demonstrates how a relatively simple alias can eliminate a lot of laborious navigation, commands, and waiting for prompts.
A simpler, and probably more universal, alias returns you to the Git project's top level. This alias is useful because when you're working on a project, that project more or less becomes your "temporary home" directory. It should be as simple to go "home" as it is to go to your actual home, and here's an alias to do it:
```
alias cg='cd `git rev-parse --show-toplevel`'
```
Now the command **cg** takes you to the top of your Git project, no matter how deep into its directory structure you have descended.
### Change directories and view the contents at the same time
It was once (allegedly) proposed by a leading scientist that we could solve many of the planets energy problems by harnessing the energy expended by geeks typing **cd** followed by **ls**.
It's a common pattern, because generally when you change directories, you have the impulse or the need to see what's around.
But "walking" your computer's directory tree doesn't have to be a start-and-stop process.
This one's cheating, because it's not an alias at all, but it's a great excuse to explore Bash functions. While aliases are great for quick substitutions, Bash allows you to add local functions in your **.bashrc** file (or a separate functions file that you load into **.bashrc**, just as you do your aliases file).
To keep things modular, create a new file called **~/.bash_functions** and then have your **.bashrc** load it:
```
if [ -e $HOME/.bash_functions ]; then
    source $HOME/.bash_functions
fi
```
In the functions file, add this code:
```
function cl() {
    DIR="$*"
    # if no DIR given, go home
    if [ $# -lt 1 ]; then
        DIR=$HOME
    fi
    # use your preferred ls command
    builtin cd "${DIR}" && ls -F --color=auto
}
```
Load the function into your Bash session and then try it out:
```
$ source ~/.bash_functions
$ cl Documents
foo bar baz
$ pwd
/home/seth/Documents
$ cl ..
Desktop  Documents  Downloads
[...]
$ pwd
/home/seth
```
Functions are much more flexible than aliases, but with that flexibility comes the responsibility for you to ensure that your code makes sense and does what you expect. Aliases are meant to be simple, so keep them easy, but useful. For serious modifications to how Bash behaves, use functions or custom shell scripts saved to a location in your **PATH**.
For the record, there _are_ some clever hacks to implement the **cd** and **ls** sequence as an alias, so if you're patient enough, then the sky is the limit even using humble aliases.
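As a taste of those hacks, here is one common workaround (a sketch, not the only approach): a plain alias cannot accept arguments, so the alias defines a throwaway function and calls it immediately:

```shell
# Hypothetical alias: wrap the cd-then-ls sequence in a function so the
# alias can take a directory argument; the function removes itself after.
alias cdls='f() { builtin cd "$1" && ls -F; unset -f f; }; f'
```

Note that in scripts (as opposed to interactive shells), alias expansion must first be enabled with `shopt -s expand_aliases`.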
### Start aliasing and functioning
Customizing your environment is what makes Linux fun, and increasing your efficiency is what makes Linux life-changing. Get started with simple aliases, graduate to functions, and post your must-have aliases in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/bash-aliases
Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://opensource.com/article/19/7/master-ls-command
[4]: https://www.freedesktop.org/wiki/
[5]: https://gitlab.com/trashy/trashy

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with the BBC Microbit)
[#]: via: (https://opensource.com/article/19/8/getting-started-bbc-microbit)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
Getting started with the BBC Microbit
======
Tiny open hardware board makes it easy for anyone to learn to code with
fun projects.
![BBC Microbit board][1]
Whether you are a maker, a teacher, or someone looking to expand your Python skillset, the [BBC:Microbit][2] has something for you. It was designed by the British Broadcasting Corporation to support computer education in the United Kingdom.
The [open hardware board][3] is half the size of a credit card and packed with an ARM processor, a three-axis accelerometer, a three-axis magnetometer, a Micro USB port, a 25-pin edge connector, and 25 LEDs in a 5x5 array.
I purchased my Microbit online for $19.99. It came in a small box and included a battery pack and a USB-to-Micro USB cable. It connects to my Linux laptop very easily and shows up as a USB drive.
![BBC Microbit board][4]
### Start coding
Microbit's website offers several ways to start exploring and [coding][5] quickly, including links to [third-party code editors][6] and its two official editors: [Microsoft MakeCode][7] and [MicroPython][8], which both operate in any browser using any computer (including a Chromebook). MakeCode is a block coding editor, like the popular Scratch interface, and MicroPython is a Python 3 implementation that includes a small subset of the Python library and is designed to work on microcontrollers. Both save your created code as a HEX file, which you can download and copy to the device, just as you would with any other file you are writing to a USB drive.
The [documentation][9] suggests using the [Mu Python editor][10], which I [wrote about][11] last year, because it's designed to work with the Microbit. One advantage of the Mu editor is it uses the Python REPL (read-evaluate-print loop) to enter code directly to the device, rather than having to download and copy it over.
When you're writing code for the Microbit, it's important to begin each program with **from microbit import ***. This is true even when you're using the REPL in Mu because it imports all the objects and functions in the Microbit library.
![Beginning a Microbit project][12]
### Example projects
The documentation provides a wealth of code examples and [projects][13] that got me started hacking these incredible devices right away.
You can start out easy by getting the Microbit to say "Hello." Load your new code using the **Flash** button at the top of the Mu editor.
![Flash button loads new code][14]
There are many built-in [images][15] you can load, and you can make your own, too. To display an image, enter the code **display.show(Image.IMAGE)** where IMAGE is the name of the image you want to use. For example, **display.show(Image.HEART)** shows the built-in heart image.
The **sleep** command adds time between display commands, which I found useful for making the display work a little slower.
Here is a simple **for** loop with images and a scrolling banner for my favorite NFL football team, the Buffalo Bills. In the code, **display** is a Python object that controls the 25 LEDs on the front of the Microbit. The **show** method within the **display** object indicates which image to show. The **scroll** within the **display** object scrolls the string _"The Buffalo Bills are my team"_ across the LED array.
![Code for Microbit to display Buffalo Bills tribute][16]
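In text form, the sort of program shown in the screenshot looks roughly like this MicroPython sketch (it runs on the Microbit itself, not on your PC, and the image choices and timings here are illustrative rather than a transcription):

```python
# MicroPython sketch for the Microbit; requires the device.
from microbit import display, sleep, Image

for i in range(3):                # show the sequence three times
    display.show(Image.HEART)
    sleep(1000)                   # pause one second between frames
    display.show(Image.HAPPY)
    sleep(1000)

display.scroll("The Buffalo Bills are my team")
```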
The Microbit also has two buttons, Button A and Button B, that can be programmed to perform many tasks. Here is a simple example.
![Code to program Microbit buttons][17]
By attaching a speaker, the device can speak, beep, and play music. You can also program it to function as a compass and accelerometer and to respond to gestures and movement. Check the documentation for more information about these and other capabilities.
### Get involved
Research studies have found that 90% of [teachers in Denmark][18] and [students in the United Kingdom][19] learned to code by using the Microbit. As pressure mounts for programming to become a larger part of the K-12 school curriculum, inexpensive devices like the Microbit can play an important role in achieving that goal. If you want to get involved with the Microbit, be sure to join its [developer community][20].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/getting-started-bbc-microbit
Author: [Don Watkins][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bbc_microbit_board_hardware.jpg?itok=3HiIupG- (BBC Microbit board)
[2]: https://microbit.org/
[3]: https://tech.microbit.org/hardware/
[4]: https://opensource.com/sites/default/files/uploads/image-1.jpg (BBC Microbit board)
[5]: https://microbit.org/code/
[6]: https://microbit.org/code-alternative-editors/
[7]: https://makecode.microbit.org/
[8]: https://python.microbit.org/v/1.1
[9]: https://microbit-micropython.readthedocs.io/en/latest/tutorials/introduction.html
[10]: https://codewith.mu/en/
[11]: https://opensource.com/article/18/8/getting-started-mu-python-editor-beginners
[12]: https://opensource.com/sites/default/files/uploads/microbit1_frommicrobitimport.png (Beginning a Microbit project)
[13]: https://microbit.org/ideas/
[14]: https://opensource.com/sites/default/files/uploads/microbit2_hello.png (Flash button loads new code)
[15]: https://microbit-micropython.readthedocs.io/en/latest/tutorials/images.html
[16]: https://opensource.com/sites/default/files/uploads/microbit3_bills.png (Code for Microbit to display Buffalo Bills tribute)
[17]: https://opensource.com/sites/default/files/uploads/microbit4_buttons.png (Code to program Microbit buttons)
[18]: https://microbit.org/assets/2019-03-05-ultrabit.pdf
[19]: https://www.bbc.co.uk/mediacentre/latestnews/2017/microbit-first-year
[20]: https://tech.microbit.org/get-involved/where-to-find/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (New research article type embeds live code and data)
[#]: via: (https://opensource.com/article/19/8/reproducible-research)
[#]: author: (Giuliano Maciocci https://opensource.com/users/gmaciocci)
New research article type embeds live code and data
======
The non-profit scientific research publication platform eLife recently
announced the Reproducible Document Stack (RDS).
![magnifying glass on computer screen][1]
While science is supposed to be about building on each other's findings to improve our understanding of the world around us, reproducing and reusing previously published results remains challenging, even in the age of the internet. The basic format of the scientific paper—the primary means through which scientists communicate their findings—has more or less remained the same since the first papers were published in the 18th century.
This is particularly problematic because, thanks to the technological advancements in research over the last two decades, the richness and sophistication of the methods used by researchers have far outstripped the publishing industry's ability to publish them in full. Indeed, the Methods section in research articles remains primarily a huge section of text that does not reflect the complexity or facilitate the reuse of the methods used to obtain the published results.
### Working together on a solution
To counter these challenges, eLife [teamed up][2] with [Substance][3] and [Stencila][4] in 2017 to develop a stack of open source tools for authoring, compiling, and publishing computationally reproducible manuscripts online. Our vision for the project is to create a new type of research article that embeds live code, data, and interactive figures within the flow of the traditional manuscript and to provide authors and publishers with the tools to support this new format throughout the publishing lifecycle.
As a result of our collaboration, we published eLife's [first computationally reproducible article][5] in February 2019. It was based on [a paper][6] in the [Reproducibility Project: Cancer Biology][7] collection. The reproducible version of the article showcases some of the possibilities with the new RDS tools: scientists can share the richness of their research more fully, telling the complete story of their work, while others can directly interact with the authors, interrogate them, and build on their code and data with minimal effort.
The response from the research community to the release of our first reproducible manuscript was overwhelmingly positive. Thousands of scientists explored the paper's inline code re-execution abilities by manipulating its plots, and several authors approached us directly to ask how they might publish a reproducible version of their own manuscripts.
Encouraged by this interest and feedback, [in May we announced][8] our roadmap towards an open, scalable infrastructure for the publication of computationally reproducible articles. The goal of this next phase in the RDS project is to ship researcher-centered, publisher-friendly open source solutions that will allow for the hosting and publication of reproducible documents, at scale, by anyone. This includes:
1. Developing conversion, rendering, and authoring tools to allow researchers to compose articles from multiple starting points, including GSuite tools, Microsoft Word, and Jupyter notebooks
2. Optimizing containerization tools to provide reliable and performant reproducible computing environments
3. Building the backend infrastructure needed to enable the options for live-code re-execution in the browser and PDF export at the same time
4. Formalizing an open, portable format (DAR) for reproducible document archives
### What's next, and how can you get involved?
Our first step is to publish reproducible articles as companions of already accepted papers. We will endeavor to accept submissions of reproducible manuscripts in the form of DAR files by the end of 2019. You can learn more about the key areas of innovation in this next phase of development in our article "[Reproducible Document Stack: Towards a scalable solution for reproducible articles][8]."
The RDS project is being built with three core principles in mind:
* **Openness:** We prioritize building on top of existing open technologies as well as engaging and involving a community of open source technologists and researchers to create an inclusive tool stack that evolves continuously based on user needs.
* **Interoperability:** We want to make it easy for scientists to create and for publishers to publish reproducible documents from multiple starting points.
* **Modularity:** We're developing tools within the stack in such a way that they can be taken out and integrated into other publisher workflows.
And you can help. We welcome all developers and researchers who wish to contribute to this exciting project. Since the release of eLife's first reproducible article, we have been actively collecting feedback from both the research and open source communities, and this has been (and will continue to be) crucial to shaping the development of the RDS.
If you'd like to stay up to date on our progress, please sign up for the [RDS community newsletter][9]. For any questions or comments, please [contact us][10]. We look forward to having you with us on the journey.
* * *
_This article is based in part on "[Reproducible Document Stack: Towards a scalable solution for reproducible articles][8]" by Giuliano Maciocci, Emmy Tsang, Nokome Bentley, and Michael Aufreiter._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/reproducible-research
Author: [Giuliano Maciocci][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/gmaciocci
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://elifesciences.org/for-the-press/e6038800/elife-supports-development-of-open-technology-stack-for-publishing-reproducible-manuscripts-online
[3]: https://substance.io/
[4]: https://stenci.la/
[5]: https://repro.elifesciences.org/example.html
[6]: https://elifesciences.org/articles/30274
[7]: https://osf.io/e81xl/wiki/home/
[8]: https://elifesciences.org/labs/b521cf4d/reproducible-document-stack-towards-a-scalable-solution-for-reproducible-articles
[9]: https://crm.elifesciences.org/crm/RDS-updates
[10]: mailto:innovation@elifesciences.org

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding file paths and how to use them in Linux)
[#]: via: (https://opensource.com/article/19/8/understanding-file-paths-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Understanding file paths and how to use them in Linux
======
If you're used to drag-and-drop environments, then file paths may be
frustrating. Learn here how they work, and some basic tricks to make
using them easier.
![open source button on keyboard][1]
A file path is the human-readable representation of a file or folder's location on a computer system. You've seen file paths, although you may not realize it, on the internet: An internet URL, despite ancient battles fought by proprietary companies like AOL and CompuServe, is actually just a path to a (sometimes dynamically created) file on someone else's computer. For instance, when you navigate to example.com/index.html, you are actually viewing the HTML file index.html, probably located in the **var** directory on the example.com server. Files on your computer have file paths, too, and this article explains how to understand them, and why they're important.
When computers became a household item, they took on increasingly stronger analogies to real-world models. For instance, instead of accounts and directories, personal computers were said to have _desktops_ and _folders_, and eventually, people developed the latent impression that the computer was a window into a virtual version of the real world. It's a useful analogy, because everyone is familiar with the concept of desktops and file cabinets, while fewer people understand digital storage and memory addresses.
Imagine for a moment that you invented computers or operating systems. You would probably have created a way to group common files together, because humans love to classify and organize things. Since all files on a computer are on the hard drive, the biggest container you probably would have designated is the drive itself; that is, all files on a drive are in the drive.
As it turns out, the creators of UNIX had the same instinct, only they called these units of organization _directories_ or _folders_. All files on your computer's drive are in the system's base (root) directory. Even external drives are brought into this root directory, just as you might place important related items into one container if you were organizing your office space or hobby room.
Files and folders on Linux are given names containing the usual components like the letters, numbers, and other characters on a keyboard. But when a file is inside a folder, or a folder is inside another folder, the **/** character shows the relationship between them. That's why you often see files listed in the format **/usr/bin/python3** or **/etc/os-release**. The forward slashes indicate that one item is stored inside of the item preceding it.
Every file and folder on a [POSIX][2] system can be expressed as a path. If I have the file **penguin.jpg** in the **Pictures** folder within my home directory, and my username is **seth**, then the file path can be expressed as **/home/seth/Pictures/penguin.jpg**.
Most users interact primarily with their home directory, so the tilde (**~**) character is used as a shorthand. That fact means that I can express my example penguin picture as either **/home/seth/Pictures/penguin.jpg** or as **~/Pictures/penguin.jpg**.
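A quick sketch of that expansion at work (the path is illustrative, and the file itself need not exist for the shell to expand it):

```shell
# The shell rewrites a leading ~ to the value of $HOME before the
# command ever runs, so the two spellings below name the same file.
short=~/Pictures/penguin.jpg
full="$HOME/Pictures/penguin.jpg"
echo "$short"
echo "$full"
```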
### Practice makes perfect
Computers use file paths whether you're thinking of what that path is or not. There's not necessarily a reason for you to have to think of files in terms of a path. However, file paths are part of a useful framework for understanding how computers work, and learning to think of files in a path can be useful if you're looking to become a developer (you need to understand the paths to support libraries), a web designer (file paths ensure you're pointing your HTML to the appropriate CSS), a system administrator, or just a power user.
#### When in doubt, drag and drop
If you're not used to thinking of the structure of your hard drive as a path, then it can be difficult to construct a full path for an arbitrary file. On Linux, most file managers either natively display (or have the option to) the full file path to where you are, which helps reinforce the concept on a daily basis:
![Dolphin file manager][3]
If you're using a terminal, it might help to know that modern terminals, unlike the teletype machines they emulate, can accept files by way of drag-and-drop. When you're copying a file to a server over SSH, for instance, and you're not certain of how to express the file path, try dragging the file from your GUI file manager into your terminal. The GUI object representing the file gets translated into a text file path in the terminal:
![Terminals accept drag and drop actions][4]
Don't waste time typing in guesses. Just drag and drop.
#### **Tab** is your friend
On a system famous for eschewing three-letter commands when two or even one-letter commands will do, rest assured that no seasoned POSIX user _ever_ types out everything. In the Bash shell, the **Tab** key means _autocomplete_, and autocomplete never lies. For instance, to type the example **penguin.jpg** file's location, you can start with:
```
$ ~/Pi
```
and then press the **Tab** key. As long as there is only one item starting with Pi, the folder **Pictures** autocompletes for you.
If there are two or more items starting with the letters you attempt to autocomplete, then Bash displays what those items are. You manually type more until you reach a unique string that the shell can safely autocomplete. The best thing about this process isn't necessarily that it saves you from typing (though that's definitely a selling point), but that autocomplete is never wrong. No matter how much you fight the computer to autocomplete something that isn't there, in the end, you'll find that autocomplete understands paths better than anyone.
Assume that you, in a fit of late-night reorganization, move **penguin.jpg** from your **~/Pictures** folder to your **~/Spheniscidae** directory. You fall asleep and wake up refreshed, but with no memory that you've reorganized, so you try to copy **~/Pictures/penguin.jpg** to your web server, in the terminal, using autocomplete.
No matter how much you pound on the **Tab** key, Bash refuses to autocomplete. The file you want simply does not exist in the location where you think it exists. That feature can be helpful when you're trying to point your web page to a font or CSS file _you were sure_ you'd uploaded, or when you're pointing a compiler to a library you're _100% positive_ you already compiled.
#### This isn't your grandma's autocompletion
If you like Bash's autocompletion, you'll come to scoff at it once you try the autocomplete in [Zsh][5]. The Z shell, along with the [Oh My Zsh][6] site, provides a dynamic experience filled with plugins for specific programming languages and environments, visual themes packed with useful feedback, and a vibrant community of passionate shell users:
![A modest Zsh configuration.][7]
If you're a visual thinker and find the display of most terminals stagnant and numbing, Zsh may well change the way you interact with your computer.
### Practice more
File paths are important on any system. You might be a visual thinker who prefers to think of files as literal documents inside of literal folders, but the computer sees files and folders as named tags in a pool of data. The way it identifies one collection of data from another is by following its designated path. If you understand these paths, you can also come to visualize them, and you can speak the same language as your OS, making file operations much, much faster.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/understanding-file-paths-linux
Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx (open source button on keyboard)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://opensource.com/sites/default/files/images/dolphin-file-path.jpg
[4]: https://opensource.com/sites/default/files/images/terminal-drag-drop.jpg
[5]: https://opensource.com/article/18/9/tips-productivity-zsh
[6]: https://ohmyz.sh/
[7]: https://opensource.com/sites/default/files/uploads/zsh-simple.jpg (A modest Zsh configuration.)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use Postfix to get email from your Fedora system)
[#]: via: (https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
Use Postfix to get email from your Fedora system
======
![][1]
Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent ([MTA][2]) isn't properly configured, you might not be getting the notifications. Postfix is an MTA [that's easy to configure and known for a strong security record][3]. Follow these steps to ensure that email notifications sent from local services will get routed to your internet email account through the Postfix MTA.
### Install packages
Use _dnf_ to install the required packages ([you configured][4] _[sudo][4]_[, right?][4]):
```
$ sudo -i
# dnf install postfix mailx
```
If you previously had a different MTA configured, you may need to set Postfix to be the system default. Use the _alternatives_ command to set your system default MTA:
```
$ sudo alternatives --config mta
There are 2 programs which provide 'mta'.
Selection Command
*+ 1 /usr/sbin/sendmail.sendmail
2 /usr/sbin/sendmail.postfix
Enter to keep the current selection[+], or type selection number: 2
```
### Create a _password_maps_ file
You will need to create a Postfix lookup table entry containing the email address and password of the account that you want to use for sending email:
```
# MY_EMAIL_ADDRESS=glb@gmail.com
# MY_EMAIL_PASSWORD=abcdefghijklmnop
# MY_SMTP_SERVER=smtp.gmail.com
# MY_SMTP_SERVER_PORT=587
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
# chmod 600 /etc/postfix/password_maps
# unset MY_EMAIL_PASSWORD
# history -c
```
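The variables above are expanded inside the double-quoted string, so the line appended to the file has the form `[server]:port address:password`. If you want to preview that line before writing anything under `/etc/postfix`, you can echo it with placeholder values (all of the values below are hypothetical):

```shell
# Placeholder values only -- substitute your real server and credentials
MY_EMAIL_ADDRESS=user@example.com
MY_EMAIL_PASSWORD=apppassword123
MY_SMTP_SERVER=smtp.example.com
MY_SMTP_SERVER_PORT=587

# Print the line that would be appended to /etc/postfix/password_maps
echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD"
```

This prints `[smtp.example.com]:587 user@example.com:apppassword123`, which matches the format Postfix expects in the lookup table.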
If you are using a Gmail account, you'll need to configure an “app password” for Postfix, rather than using your Gmail password. See “[Sign in using App Passwords][5]” for instructions on configuring an app password.
Next, you must run the _postmap_ command against the Postfix lookup table to create or update the hashed version of the file that Postfix actually uses:
```
# postmap /etc/postfix/password_maps
```
The hashed version will have the same file name but it will be suffixed with _.db_.
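One way to confirm the hashed table is usable is to query the entry back with `postmap -q`; the lookup key is the bracketed server and port. This is a sketch to run on the Postfix host (the server name is the Gmail example from above; the guard makes it a no-op on machines without Postfix):

```shell
# Query the entry back from the hashed lookup table, then confirm
# the .db file exists; skip gracefully if postmap isn't installed
if command -v postmap >/dev/null 2>&1; then
    postmap -q "[smtp.gmail.com]:587" hash:/etc/postfix/password_maps
    ls -l /etc/postfix/password_maps.db
else
    echo "postmap not available on this machine"
fi
```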
### Update the _main.cf_ file
Update Postfix's _main.cf_ configuration file to reference the Postfix lookup table you just created. Edit the file and add these lines:
```
relayhost = smtp.gmail.com:587
smtp_tls_security_level = verify
smtp_tls_mandatory_ciphers = high
smtp_tls_verify_cert_match = hostname
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/password_maps
```
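If you prefer not to edit _main.cf_ by hand, the same settings can be applied from the command line with `postconf -e`, which writes each `name = value` pair into the configuration file for you (a sketch, assuming the Gmail relay from the example above):

```shell
# Apply the same main.cf settings non-interactively; each argument
# is one "name = value" pair written into /etc/postfix/main.cf
sudo postconf -e \
    "relayhost = smtp.gmail.com:587" \
    "smtp_tls_security_level = verify" \
    "smtp_tls_mandatory_ciphers = high" \
    "smtp_tls_verify_cert_match = hostname" \
    "smtp_sasl_auth_enable = yes" \
    "smtp_sasl_security_options = noanonymous" \
    "smtp_sasl_password_maps = hash:/etc/postfix/password_maps"
```

Either approach produces the same configuration; `postconf -e` just avoids hand-editing mistakes.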
The example assumes you're using Gmail for the _relayhost_ setting, but you can substitute the correct hostname and port for the mail host to which your system should hand off mail for sending.
For the most up-to-date details about the above configuration options, see the man page:
```
$ man postconf.5
```
### Enable, start, and test Postfix
After you have updated the main.cf file, enable and start the Postfix service:
```
# systemctl enable --now postfix.service
```
You can then exit your _sudo_ session as root using the _exit_ command or **Ctrl+D**. You should now be able to test your configuration with the _mail_ command:
```
$ echo 'It worked!' | mail -s "Test: $(date)" glb@gmail.com
```
### Update services
If you have services like [logwatch][6], [mdadm][7], [fail2ban][8], [apcupsd][9] or [certwatch][10] installed, you can now update their configurations so that their email notifications will go to your internet email address.
Optionally, you may want to configure all email that is sent to your local system's root account to go to your internet email address. Add this line to the _/etc/aliases_ file on your system (you'll need to use _sudo_ to edit this file, or switch to the _root_ account first):
```
root: glb+root@gmail.com
```
Now run this command to re-read the aliases:
```
# newaliases
```
* TIP: If you are using Gmail, you can [add an alpha-numeric mark][11] between your username and the **@** symbol as demonstrated above to make it easier to identify and filter the email that you will receive from your computer(s).
### Troubleshooting
**View the mail queue:**
```
$ mailq
```
**Clear all email from the queues:**
```
# postsuper -d ALL
```
**Filter the configuration settings for interesting values:**
```
$ postconf | grep "^relayhost\|^smtp_"
```
**View the _postfix/smtp_ logs:**
```
$ journalctl --no-pager -t postfix/smtp
```
**Reload _postfix_ after making configuration changes:**
```
$ systemctl reload postfix
```
* * *
_Photo by _[_Sharon McCutcheon_][12]_ on [Unsplash][13]_.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/
作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/postfix-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Message_transfer_agent
[3]: https://en.wikipedia.org/wiki/Postfix_(software)
[4]: https://fedoramagazine.org/howto-use-sudo/
[5]: https://support.google.com/accounts/answer/185833
[6]: https://src.fedoraproject.org/rpms/logwatch
[7]: https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/
[8]: https://fedoraproject.org/wiki/Fail2ban_with_FirewallD
[9]: https://src.fedoraproject.org/rpms/apcupsd
[10]: https://www.linux.com/learn/automated-certificate-expiration-checks-centos
[11]: https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-more-from-your.html
[12]: https://unsplash.com/@sharonmccutcheon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[13]: https://unsplash.com/search/photos/envelopes?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText