Mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-01-28 (merge commit 67553cb8ae).
[#]: collector: (lujun9972)
[#]: translator: (silentdawn-zz)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12459-1.html)
[#]: subject: (4 Python tools for getting started with astronomy)
[#]: via: (https://opensource.com/article/19/10/python-astronomy-open-data)
[#]: author: (Gina Helfrich, Ph.D. https://opensource.com/users/ginahelfrich)

4 Python tools for getting started with astronomy
======
> Explore the universe with NumPy, SciPy, Scikit-Image, and Astropy.

![](https://img.linux.net.cn/data/attachment/album/202007/27/223146sjfgbj9jfu9m1z2c.jpg)

### Astronomy and Python

Python is a great language for science, and for astronomy in particular. Packages such as [NumPy][5], [SciPy][6], [Scikit-Image][7], and [Astropy][8] (to name just a few) amply demonstrate Python's suitability for astronomy, and there are many use cases. (NumPy, Astropy, and SciPy are NumFOCUS fiscally sponsored projects; Scikit-Image is an affiliated project.) I left astronomy research more than a decade ago to become a software developer, but the evolution of these packages has kept me interested. Many of my former colleagues in astronomy use most, if not all, of them in their research. I, for one, have written professional astronomy software packages for instruments on the Very Large Telescope (VLT) in Chile.

What has struck me recently is how good these Python packages have become: anyone can now easily write [data reduction][9] scripts that produce high-quality data products. Astronomical data is easy to come by, and much of it is publicly available; all you have to do is look for it.

For example, ESO, which runs the VLT, offers data downloads directly on its website: go to [www.eso.org/UserPortal][10] and create an account on the front page. If you want SPHERE data, you can download the complete dataset for any nearby star that hosts an exoplanet or a proto-stellar disc. For any Pythonista, it is an exciting prospect to reduce the data and make the planet or disc, hidden deep in the noise, visible.

I encourage you to download ESO or other astronomical imaging data and start exploring. Here are a few tips:

1. Start with a high-quality dataset. Look at papers about nearby stars that host exoplanets or proto-stellar discs, then search for data on sites such as <http://archive.eso.org/wdb/wdb/eso/sphere/query>. Note that some data on that site is marked red and some green; the red data is not yet public, and the corresponding "release date" column shows when it will become available.
2. Read up on the instrument that produced the data you are using. Try to get a basic understanding of how the data was acquired and what a standard reduction should look like. All telescopes and instruments have public documentation about this.
3. Be aware of the standard problems with astronomical data, and correct for them:
   1. The data comes in FITS files. Read it into a NumPy array using `pyfits` or `astropy` (which includes `pyfits`). In some cases the data is three-dimensional and needs to be collapsed to 2D along the z axis with `numpy.median`. Some SPHERE data gives you two copies of the same patch of sky in the same image (each taken through a different filter); use **indexing** and **slicing** to separate them.
   2. The master dark and the bad pixel map. All instruments take special exposures with the shutter fully closed (no light at all); use **NumPy masked arrays** to extract a bad pixel map from them. The bad pixel map is very important: you need to keep track of the bad pixels as you assemble the final clean image. In some cases it also helps you subtract the dark background from the raw science data.
   3. Instruments typically also take a master flat frame: one or more images of a uniform, monochromatic standard light source. Divide all raw data by the master flat before further processing (again, with NumPy masked arrays this is just a simple division).
   4. For planet imaging, making a planet visible against a bright star relies on a coronagraph and angular differential imaging. This step requires identifying the optical centre of the image, one of the trickier parts of the process; use `skimage.feature.blob_dog` to find artificial helper images in the raw frames as aids.
4. Be patient. It takes a while to understand the data format and how to handle it; plotting the pixel data, or histograms of it, will help. Persistence pays off, and you will learn a lot about imaging data and how to process it.
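As a minimal NumPy-only sketch of steps 3.1 through 3.3 above (the array shapes, pixel values, and hot-pixel position here are made up for illustration; with real data you would read the arrays from FITS files via `astropy.io.fits` and find centres with `skimage.feature.blob_dog`):

```python
import numpy as np

# Fake a raw data cube; with real data: cube = astropy.io.fits.getdata("science.fits")
rng = np.random.default_rng(0)
cube = rng.normal(100.0, 5.0, size=(10, 32, 32))   # 3-D stack of raw frames

science = np.median(cube, axis=0)                  # collapse along the z axis

master_dark = np.full((32, 32), 10.0)              # shutter-closed exposure
master_flat = np.full((32, 32), 2.0)               # uniform-light response
bad_pixels = np.zeros((32, 32), dtype=bool)
bad_pixels[5, 7] = True                            # one known hot pixel

# Mask bad pixels, subtract the dark, divide by the flat;
# masked-array arithmetic keeps the bad pixel flagged throughout.
masked = np.ma.masked_array(science, mask=bad_pixels)
reduced = (masked - master_dark) / master_flat

assert reduced.shape == (32, 32)
assert reduced.mask[5, 7]          # the bad pixel is still tracked
```

The same mask then travels through every later step of the reduction, which is exactly why the bad pixel map matters when assembling the final clean image.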
Using NumPy, SciPy, Astropy, and scikit-image together, with patience and persistence, makes it entirely possible to achieve major discoveries by analyzing the vast amount of available astronomical data. Who knows: you might be the first to find a previously overlooked exoplanet. Good luck!

---

NumFOCUS is a nonprofit organization that maintains a unique collection of outstanding open source tools for scientific computing and data science. To learn about our mission and code, visit [numfocus.org][3]. If you are interested in joining the NumFOCUS community in person, check out a local [PyData event][4] near you.

This article is based on a [talk][11] by Pivigo CTO [Ole Moeller-Nilsson][12]; it was originally published on the NumFOCUS blog and is republished here with permission. If you would like to support NumFOCUS, you can [donate][13] or attend one of the [PyData events][4] held around the world.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/python-astronomy-open-data

Author: [Gina Helfrich, Ph.D.][a]
Topic selection: [lujun9972][b]
Translator: [silentdawn-zz](https://github.com/silentdawn-zz)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/ginahelfrich
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/space_stars_cosmos_person.jpg?itok=XUtz_LyY (Person looking up at the stars)
[2]: https://numfocus.org/blog
[3]: https://numfocus.org
[4]: https://pydata.org/
[5]: http://numpy.scipy.org/
[6]: http://www.scipy.org/
[7]: http://scikit-image.org/
[8]: http://www.astropy.org/
[9]: https://en.wikipedia.org/wiki/Data_reduction
[10]: http://www.eso.org/UserPortal
[11]: https://www.slideshare.net/OleMoellerNilsson/pydata-lonon-finding-planets-with-python
[12]: https://twitter.com/olly_mn
[13]: https://numfocus.org/donate
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12458-1.html)
[#]: subject: (What you need to know about automation testing in CI/CD)
[#]: via: (https://opensource.com/article/20/7/automation-testing-cicd)
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)

What you need to know about automation testing in CI/CD
======
> Continuous integration and continuous delivery are driven by testing. Here is how.

![](https://img.linux.net.cn/data/attachment/album/202007/27/210026blobu65f77accbob.jpg)

> "If everything seems under control, you're just not going fast enough." —Mario Andretti

Test automation means continuously focusing on detecting defects, errors, and bugs as early and as fast as possible in the software development process. It is done using tools that hold quality as the highest value, tools designed to *ensure* quality rather than merely pursue it.

One of the most compelling features of a continuous integration/continuous delivery (CI/CD) solution, also known as a DevOps pipeline, is the ability to test more frequently without adding more manual work for developers or operators. Let's talk about why that matters.

### Why automate testing in CI/CD?

Agile teams iterate faster to deliver software and customer satisfaction at higher velocity, and that pressure can jeopardize quality. Global competition has created a *low tolerance* for defects while increasing the pressure on agile teams for *faster iterations* of software delivery. What is the industry's solution to relieve this pressure? [DevOps][2].

DevOps is a big idea with many definitions, but one technology essential to DevOps success is CI/CD. Designing a continuous cycle of improvement into the software development process creates new opportunities for testing.

### What does this mean for testers?

For testers, this typically means they must:

  * Test earlier and more often (with automation)
  * Continuously test "real-world" workflows (automated and manual)

More specifically, the role of any form of testing, whether run by the developers writing the code or designed by teams of quality assurance engineers, is to take advantage of the CI/CD infrastructure to increase quality while moving fast.

### What else do testers need to do?

To get specific, testers are responsible for:

  * Testing new and existing software applications
  * Verifying and validating functionality by evaluating software against system requirements
  * Developing and maintaining reusable automated tests using automated testing tools
  * Collaborating with all members of the scrum team to understand the features being developed and the technical designs being implemented, in order to design and develop accurate, high-quality automated tests
  * Analyzing documented user requirements and developing, or assisting in designing, test plans for moderately to highly complex software or IT systems
  * Developing automated tests and working with functional teams to review and evaluate test scenarios
  * Working with technical teams to determine the right approach to test automation in a development environment
  * Collaborating with teams to understand and resolve software problems revealed by automated testing, and responding to suggestions for modifications or enhancements
  * Participating in backlog grooming, estimation, and other agile scrum ceremonies
  * Assisting in defining standards and procedures that support testing activities and materials (such as scripts, configurations, utilities, tools, plans, and results)

Testing is a tough job, but it is an essential part of building software effectively.

### What kinds of continuous testing matter?

You can use several types of testing. The different types are not rigid boundaries between disciplines; rather, they are different ways of expressing how to test. Comparing the types is less important than having coverage of every type:

  * **Functional testing:** Ensures the software has the functionality its requirements call for
  * **Unit testing:** Tests smaller units/components of the software in isolation to check their functionality
  * **Load testing:** Tests the software's performance under heavy load or use
  * **Stress testing:** Determines the software's breaking point when under stress (maximum load)
  * **Integration testing:** Tests the output of a combined or integrated group of components
  * **Regression testing:** Tests the whole application's functionality whenever any component, no matter how small, is modified
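To make the unit-testing layer above concrete, here is a hypothetical function with two tests in the style a CI/CD pipeline would run automatically on every push (the function and test names are invented for illustration; in practice a pipeline step would execute a runner such as `pytest`):

```python
def normalize_username(raw: str) -> str:
    """Trim surrounding whitespace and lowercase a username."""
    return raw.strip().lower()

def test_normalize_strips_and_lowercases():
    assert normalize_username("  Alice ") == "alice"

def test_normalize_is_idempotent():
    # Applying the function twice gives the same result as once.
    assert normalize_username(normalize_username("BoB")) == "bob"

# A test runner would discover and run these automatically;
# calling them directly here makes the sketch self-contained.
test_normalize_strips_and_lowercases()
test_normalize_is_idempotent()
```

Because tests like these are cheap and fast, the pipeline can run them on every commit, which is exactly the "test earlier and more often" practice described above.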
### Summary

Any software development process that includes continuous testing is on its way toward establishing the critical feedback loop needed to go fast and build software that works. Most importantly, the practice builds quality into the CI/CD pipeline and implies an understanding of the connection between increasing speed and reducing risk and waste in the software development lifecycle.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/automation-testing-cicd

Author: [Taz Brown][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/heronthecli
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
[2]: https://opensource.com/resources/devops
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started as an open source builder and more industry trends)
[#]: via: (https://opensource.com/article/20/7/open-source-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Getting started as an open source builder and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a principal communication strategist at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are three of my and their favorite articles from that update.

## [Open source builders: Getting started][2]

> “Eventually I found myself wanting to make code changes myself,” Liz says. One of her first contributions was an authorization extension for the Django framework. “I remember being worried that the maintainers might not want a change from a complete stranger, so it was exciting and something of a relief that it was accepted,” she adds. “It’s always a great feeling to get approval and even thanks for your contribution.”

**The impact**: This series of interviews with open source maintainers (the quote is from [Liz Rice][3]) is an intersection of motivations and experiences of jumping into open source. It's also a nod to the myth of the genius lone-wolf developer: you can go a long way by yourself, but you'll get further and build better things if you know how to work well with other people. Further still if you figure out how to persuade and inspire them.

## [Fluent Bit v1.5: Lightweight and high-performance log processor][4]

> One of the biggest highlights of this major release is the joint work of different companies contributing with Fluent Bit core maintainers to bring improved and new connectors for observability cloud services provided by Google, Amazon, LogDNA, New Relic and Sumo Logic within others.

**The impact**: To "collect data/logs from different sources, unify and send them to multiple destinations" is as tedious a task as you can come across, yet it's one shared by both the hyperscalers and their customers. Exhibit A: a prime example of open source working exactly as intended. Congrats to the Fluent Bit team on this release!

## [How Kubernetes empowered Nubank engineers to deploy 700 times a week][5]

> As a result, deployment has gone from 90 minutes to 15 minutes for production environments. And that, says Nobre, was “the main benefit because it helps the developer experience.” Today, Nubank engineers are deploying 700 times a week. “For a bank you would say that’s insane,” Capaverde says with a laugh. “But it’s not insane because with Kubernetes and canary deployments, it’s easier to roll back a change because it’s also faster to deploy. People are shipping more often and with more confidence.”

**The impact:** This feels like a win and a loss to me. Sure, they lowered the cost of making a change in a way that gave people more confidence to try things out. But their developers can no longer run a 10k while waiting for their deployment to finish and can now only fit in a single TED talk.

_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._

--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/open-source-industry-trends

Author: [Tim Hildred][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://idk.dev/open-source-builders-getting-started/
[3]: https://twitter.com/lizrice
[4]: https://www.cncf.io/blog/2020/07/14/fluent-bit-v1-5-lightweight-and-high-performance-log-processor/
[5]: https://www.cncf.io/blog/2020/07/10/how-kubernetes-empowered-nubank-engineers-to-deploy-200-times-a-week/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Are newer medical IoT devices less secure than old ones?)
[#]: via: (https://www.networkworld.com/article/3568608/are-newer-medical-iot-devices-less-secure-than-old-ones.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Are newer medical IoT devices less secure than old ones?
======
Legacy medical IoT devices may lack security features, but newer ones built around commodity components can have a whole different set of vulnerabilities that are better understood by attackers.
Experts differ on whether older connected medical devices or newer ones are more to blame for making healthcare networks more vulnerable to cyberattack.

The classic narrative of insecure [IoT][1] centers on the integration of older devices into the network. In some industries, those devices pre-date the [internet][2], sometimes by a considerable length of time, so it’s hardly surprising that businesses face a lot of challenges in securing them against remote compromise.

Even if those devices aren’t quite that old, they often lack key capabilities – in particular, remote software updates and configurable password protection – that would help IT staff defend them against modern threats.

That might not be strictly true in regards to the medical field, according to Richard Staynings, chief security strategist for medical IoT security startup Cylera. There has, he argues, been an explosion in the number and variety of medical IoT devices in recent years, and many of those gadgets are at least as insecure as the true legacy equipment in the field.

In some cases, said Staynings, the older devices might actually be considerably more secure than those of more recent vintage – particularly those based on dated technology, like older versions of electrically erasable programmable read-only memory (EEPROM).
“The older systems were written in EEPROM – you need an EEPROM reader to mess with them,” he said. “The codebase is not on the Internet for hackers to look at, and you need physical access to the EEPROM to rewrite it.”

In contrast, the newer devices frequently use software and hardware components that are much more familiar to potential attackers. “They’re more ubiquitous in their design and construction – they use [consumer off-the-shelf] operating systems, like Windows embedded, which is still being used believe it or not, and they are much more vulnerable to attack than a legacy system,” Staynings said.

Insecurity in the current generation of medical IoT hardware also carries the potential for ongoing problems, not just immediate ones. While IT assets get replaced rapidly, IoT devices often have much longer replacement cycles. “Medical devices have the half-life of plutonium,” said Staynings. “They just don’t go away.”
Other experts are less sold on Staynings’ characterization of the threat to medical IoT, arguing that the idea that newer devices pose a greater threat than older ones flies in the face of recent efforts to make them safer. Keith Mularski, who directs an advisory cybersecurity practice at Ernst & Young, described Staynings’ assertion as “surprising,” noting that the regulatory landscape for connected medical devices is quickly moving standards in a positive direction.

“The FDA has some pretty stringent guidelines that before devices can go to market, you need to put together threat modeling – looking at security architecture, vectors, and so on – and then in addition to that, the FDA is getting ready to require third-party pen testing in premarket submissions,” said Mularski. “With legacy devices, those premarket submissions aren’t nearly as complete.”

Mularski does concede that some particularly vulnerable old devices are often more isolated on the network by design, in part because they’re more recognizable as vulnerable assets. Windows 95-vintage x-ray machines, for example, are easy to spot as a potential target for a bad actor.

“For the most part, I think most of the hospital environments do a good job at recognizing that they have these old devices, and which ones are more vulnerable,” he said.
This underlines a point most experts agree on: simple awareness of the potential security flaws on a given network is central to securing healthcare networks. Greg Murphy is the CEO of Ordr, a network visibility and security startup based in Santa Clara. He said that both Mularski and Staynings have points in their favor.

“Anyone who minimizes the issue of legacy devices needs to walk a mile in the shoes of the biomedical engineering department at a hospital,” he said. “[But] on the flip side, new devices that are being connected to the network have huge vulnerabilities themselves. Many manufacturers themselves don’t know what vulnerabilities their devices have.”

The only real way to address the issues, said Murphy, is at the network level – trying to make everything secure at the device level might be a near-impossibility in many cases, and even getting an accurate picture of every device connected to a network often requires the use of an automated solution.

“This is not a problem of human scale anymore,” he said.

Both Mularski and Staynings concurred on this point as well. Regardless of which devices on a particular network are the most vulnerable, it’s worth remembering that cybercriminals generally aren’t particular about what they compromise, as long as they’re able to gain access.

“There may be an attacker that comes across these devices, runs a scan and happens to see [a vulnerability], but we really haven’t seen specific targeting of medical devices,” said Mularski. “It’s important to make sure that companies that have medical devices are enumerating their network, tracking their devices.”
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3568608/are-newer-medical-iot-devices-less-secure-than-old-ones.html

Author: [Jon Gold][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[2]: https://www.networkworld.com/article/3410588/evolution-of-the-internet-celebrating-50-years-since-arpanet.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Green Data Centers – Scaling environmental sustainability for business and consumers collectively)
[#]: via: (https://www.networkworld.com/article/3568670/green-data-centers-scaling-environmental-sustainability-for-business-and-consumers-collectively.html)
[#]: author: (QTS https://www.networkworld.com/author/Andy-Patrizio/)

Green Data Centers – Scaling environmental sustainability for business and consumers collectively
======
10 Tips for Choosing the Best Green Data Center
Regardless of how you identify politically, environmental sustainability has become an undeniable business directive. Global warming from carbon emissions, rising sea levels, and images of pollution are increasing public and shareholder pressure on corporations to take an active role in finding solutions and to be accountable by setting goals and publicly documenting results.

In the IT industry, reducing electrical power generation from fossil fuels is priority #1, followed closely by water conservation and waste management.

Multi-tenant data centers are among the largest per capita consumers of electric power. Based on current estimates, data centers in the U.S. alone will consume approximately 73 thousand megawatts (MW) in 2020.

To put this in perspective, one megawatt is enough to power 700 households. A single data center can use power equivalent to a small city and typically requires a significant amount of water for cooling.

Worldwide, it’s estimated that data centers consume about three percent of the global electric supply and account for about two percent of total greenhouse gas (GHG) emissions.

Data center efficiency and sustainability now transcend companies, geographies, and workloads. There is no simple solution, and the challenge is compounded as the massive digitization of data globally creates a parallel demand for energy.

IDC predicts the world’s data will grow from 33 zettabytes in 2017 to 175 zettabytes in 2025, and the amount of energy used by data centers continues to double every four years – meaning they have the fastest-growing carbon footprint of any area within the IT sector.

Technological advancements are difficult to forecast, but several models predict data center energy usage could surpass 10% of the global electricity supply by 2030 if left unchecked.
For all of these reasons, the creation of green, sustainable, multi-tenant data centers has become essential in both an environmental and a business sense. Green data centers are built on pillars of commitment to innovative green and renewable strategies, water reclamation, recycling and waste management, and more. They do not contain obsolete systems (such as inactive or underused servers) and take advantage of newer, more efficient technologies.

Taking cues from the hyperscale data center operators, green data centers recognize the need to lead with modular, energy-efficient data center designs from the onset, adopt the latest in building technology, and influence the overall supply chain for the sourcing of materials for these green data centers.

Benefits such as cost reduction, increased efficiency, and the knowledge that you are a better corporate citizen are obvious. What is not readily apparent is that by moving into a green multi-tenant data center, sustainability benefits are also passed on to the businesses and consumers who collectively benefit from the data center’s green IT infrastructure.

The economies of scale are extremely significant. Instead of a business (such as a large online retailer) attempting to deploy its own green IT environment to power its service delivery, it outsources to a green data center that can do it cheaper and better. The sustainability benefits are then passed along to all the consumers using its services, and there could be hundreds of businesses like this in a single green data center.

In addition, when you deal with a true green data center that is serious about sustainability, the benefits go far beyond the requirement that your power be renewable. There are environmental and philanthropic benefits that can be linked with your outsourced IT infrastructure.

The best green data center operators are starting to formally document and report their progress in Environmental, Social and Governance (ESG) reports made public annually. For conventional enterprises and data centers that do not have measurable sustainability as part of their governance, it is coming.

[QTS][1] is one of the few data center companies holding themselves accountable as global citizens and committing to sustainability best practices that are impactful, achievable, and will ultimately set the standard for the data center industry in the years to come.
QTS has committed to minimizing its data center carbon footprint by using as much renewable fuel, reclaimed water, and recycled material as possible, implementing a methodical sustainability approach that features energy-efficiency measures and renewable energy procurement, all backed by continuous innovation.

Knowing that transparency is fundamental to accountability, QTS is documenting and publicly reporting on sustainability goals, metrics, and best practices – one of only a few data center companies to do so.

To support this, QTS recently published its second Environmental, Social and Governance (ESG) Initiatives report ([accessed here][2]), which documents the industry’s first formal commitment to provide 100% renewable energy across all of its data centers by 2025.

In 2019, QTS won numerous awards, including the coveted GRESB benchmark, which ranked QTS the #1 sustainable data center company among all data centers globally for its ESG initiatives.

Today QTS has seven data centers running on 100% renewable energy. Approximately 30% of its overall data center power requirements are now sourced from renewable energy sources, representing over 300 million kilowatt-hours (kWh) of renewable power. [According to the EPA][3], this makes QTS one of the largest users of green power among all data center companies and the 12th largest user among Top Tech & Telecom companies.
**10 Tips for Choosing a Green Data Center**

For those operating on-premises legacy data centers and looking to move into green data centers, or for organizations already outsourcing to a less-than-green provider, here are 10 tips for evaluating green data center providers:

  * Check the provider’s ESG scores and reviews from ranking organizations such as GRESB, the Carbon Disclosure Project (CDP), RE100, the EPA Green Power Partnership, and Sustainalytics, and look for documented commitments to 100% renewable energy.
  * Look for innovation in power, such as the use of AI to forecast power consumption and analyze data output, humidity, temperature, and other important statistics to improve efficiency, drive down costs, and reduce total power consumption.
  * Check the EPA Green Power ranking to find the data centers leading in renewable power commitments.
  * Look for industry-first zero-water cooling solutions powered by 100% renewable wind and solar power.
  * Renewable energy should be impactful and cost-effective. Look for data centers with innovative power-procurement models that allow them to purchase renewable energy at or below the price of conventionally produced power.
  * Look for innovative companies that create an open dialogue with customers to gather feedback on sustainability initiatives and help the customers meet their ESG goals.
  * Look for data center operators that work closely with utilities to develop tariffs and legislation that make it easier and more cost-effective for everyone to procure renewable energy.
  * Look for providers with innovative philanthropic programs, such as the “Grow with QTS” program that plants more than 20,000 trees in the Sierra Mountains every year on behalf of its customers, or its HumanKind program that promotes clean-water solutions in developing regions.
  * Look for providers actively speaking and participating in leading organizations such as the EPA’s Green Power Partnership, REBA, the Data Center Coalition’s energy committee, and RE100.
  * Look for providers touting on-site physical features such as smart temperature and lighting controls, rainwater reclamation, recycling and waste initiatives, and EV charging stations.
No industry, and no individual company, is perfect. But, when necessary, an industry can align around a core set of principles that benefit its members, their communities, and generations to come. The threat of global climate change makes it impossible for data center providers to ignore their role in this global concern. Committing to action today, as a unified industry, benefits all.

The fact that so many businesses are more environmentally aware means that what green, sustainable data centers can offer is becoming an increasingly important criterion for choosing a data center provider.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3568670/green-data-centers-scaling-environmental-sustainability-for-business-and-consumers-collectively.html

Author: [QTS][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.qtsdatacenters.com/
[2]: https://www.qtsdatacenters.com/resources/brochures/esg-initiatives-2019
[3]: https://www.epa.gov/greenpower/green-power-partnership-top-30-tech-telecom
[#]: collector: (lujun9972)
[#]: translator: (silentdawn-zz)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Never forget your password with this Python encryption algorithm)
[#]: via: (https://opensource.com/article/20/6/python-passwords)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

Never forget your password with this Python encryption algorithm
======
> This password-protection scheme, implemented in Python and based on Shamir's Secret Sharing algorithm, guards against both theft by hackers and your own forgetfulness.

![Searching for code][1]

Many people use password managers to securely store the many passwords they use. A critical part of a password manager is the master password. The master password protects all the other passwords, which makes it a risk in itself: anyone who knows your master password can roam freely through everything it protects. Naturally, to keep the master password safe, you choose something hard to guess, commit it to memory, and do all the other [things][2] you can think of.

But what if the master password leaks, or you forget it? Maybe you take a month-long trip to a lovely island untouched by modern technology, and after a happy day of splashing in the water, just as you reach for a delicious pineapple, you suddenly cannot recall your password. Was it "山尖一寺一壶酒" (a classic Chinese mnemonic for π)? Or "一去二三里,烟村四五家" (a line from an old counting poem)? It seemed very clever when you picked it; it seems much less clever now.

![XKCD comic on password strength][3]

([XKCD][4], [CC BY-NC 2.5][5])

Of course, you would never tell anyone else your master password; that is the first rule of password management. Is there a workaround that spares you this unbearable predicament?

Try **[Shamir's Secret Sharing][6]**, an algorithm that splits a secret into pieces in such a way that the secret can be recovered only by putting the pieces back together.
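The story below builds this up step by step using the `mod` package; as a compact preview, here is a from-scratch sketch of the split-and-recover idea over a prime field (the function and parameter names are mine, not the article's, and `pow(x, -1, P)` for modular inverses requires Python 3.8+):

```python
import random

# Prime modulus for the finite field (the same Mersenne prime used below).
P = 2**521 - 1

def split_secret(secret, n_shares, threshold):
    """Split `secret` into n_shares points; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [
        (x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
        for x in range(1, n_shares + 1)
    ]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the integers mod P."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = split_secret(12345, n_shares=5, threshold=3)
assert recover_secret(shares[:3]) == 12345   # any three shards suffice
```

Fewer than `threshold` shares reveal nothing about the secret, which is exactly the property the king is about to rely on.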
Let's take a look at Shamir's Secret Sharing in action through a story of ancient times and modern times.

This story does assume some knowledge of cryptography. You can brush up on it with this [introduction to cryptography and public key infrastructure][7].

### A story of secrets in ancient times

In an ancient kingdom, it came to pass that the king had a secret. A terrible secret:
```
|
||||
def int_from_bytes(s):
|
||||
acc = 0
|
||||
for b in s:
|
||||
acc = acc * 256
|
||||
acc += b
|
||||
return acc
|
||||
|
||||
secret = int_from_bytes("terrible secret".encode("utf-8"))
|
||||
```
|
||||
|
||||
So terrible, the king could entrust it to none of his offspring. He had five of them but knew that there would be dangers on the road ahead. The king knew his children would need the secret to protect the kingdom after his death, but he could not bear the thought of the secret being known for two decades, while they were still mourning him.
|
||||
|
||||
So he used powerful magic to split the secret into five shards. He knew that it was possible that one child or even two would not respect his wishes, but he did not believe three of them would:
|
||||
|
||||
|
||||
```
|
||||
from mod import Mod
|
||||
from os import urandom
|
||||
```
|
||||
|
||||
The king was well-versed in the magical arts of [finite fields][8] and _randomness_. As a wise king, he used Python to split the secret.
|
||||
|
||||
The first thing he did was choose a large prime—the 13th [Mersenne Prime][9] (`2**521 - 1`)—and ordered it be written in letters 10 feet high, wrought of gold, above the palace:
|
||||
|
||||
|
||||
```
|
||||
`P = 2**521 - 1`
|
||||
```
|
||||
|
||||
This was not part of the secret: it was _public data_.
|
||||
|
||||
The king knew that if `P` is a prime, numbers modulo `P` form a mathematical [field][10]: they can be added, multiplied, subtracted, and divided as long as the divisor is not zero.
|
||||
|
||||
As a busy king, he used the PyPI [package `mod`][11], which implements modulus arithmetic.
|
||||
|
||||
He made sure his terrible secret was less than `P`:
|
||||
|
||||
|
||||
```
|
||||
`secret < P`[/code] [code]`TRUE`
|
||||
```
|
||||
|
||||
And he converted it to its modulus `mod P`:
|
||||
|
||||
|
||||
```
|
||||
`secret = mod.Mod(secret, P)`
|
||||
```
|
||||
|
||||
In order to allow three offspring to reconstruct the secret, the king had to generate two more parts to mix together:
|
||||
|
||||
|
||||
```
|
||||
polynomial = [secret]
|
||||
for i in range(2):
|
||||
polynomial.append(Mod(int_from_bytes(urandom(16)), P))
|
||||
len(polynomial)
|
||||
|
||||
[/code] [code]`3`
|
||||
```
|
||||
|
||||
The king next needed to evaluate this [polynomial][12] at random points. Evaluating a polynomial is calculating `polynomial[0] + polynomial[1]*x + polynomial[2]*x**2 ...`
|
||||
|
||||
While there are third-party modules to evaluate polynomials, they do not work with finite fields. The king needed to write the evaluation code himself:
|
||||
|
||||
|
||||
```
|
||||
def evaluate(coefficients, x):
|
||||
acc = 0
|
||||
power = 1
|
||||
for c in coefficients:
|
||||
acc += c * power
|
||||
power *= x
|
||||
return acc
|
||||
```
|
||||
|
||||
Next, the king evaluated the polynomial at five different points, to give one piece to each offspring:
|
||||
|
||||
|
||||
```
|
||||
shards = {}
|
||||
for i in range(5):
|
||||
x = Mod(int_from_bytes(urandom(16)), P)
|
||||
y = evaluate(polynomial, x)
|
||||
shards[i] = (x, y)
|
||||
```
|
||||
|
||||
Sadly, as the king feared, not all his offspring were honest and true. Two of them, shortly after his death, tried to figure out the terrible secret from the parts they had. Try as they could, they did not succeed. However, when the others learned this, they exiled them from the kingdom forever:
|
||||
|
||||
|
||||
```
|
||||
del shards[2]
|
||||
del shards[3]
|
||||
```
|
||||
|
||||
Twenty years later, as the king had decreed, the oldest sibling and the two youngest came together to figure out their father's terrible secret. They put together their shards:
|
||||
|
||||
|
||||
```
|
||||
`retrieved = list(shards.values())`
|
||||
```
|
||||
|
||||
For 40 days and 40 nights, they struggled with finding the king's secret. No easy task was it before them. Like the king, they knew Python, but none were as wise as he.
|
||||
|
||||
Finally, the answer came to them.
|
||||
|
||||
The retrieval code is based on a concept called [lagrange interpolation][13]. It evaluates a polynomial at `0` based on its values in `n` other places, where `n` is the degree of the polynomial. The way it works is that you can explicitly find a formula for a polynomial that is `1` at `t[0]` and `0` at `t[i]` for `i` different from `0`. Since evaluating a polynomial is a linear function, you evaluate each of _these_ polynomials and interpolate the results of the evaluations with the values the polynomial has:
```
from functools import reduce
from operator import mul

def retrieve_original(secrets):
    x_s = [s[0] for s in secrets]
    acc = Mod(0, P)
    for i in range(len(secrets)):
        others = list(x_s)
        cur = others.pop(i)
        factor = Mod(1, P)
        for el in others:
            factor *= el * (el - cur).inverse()
        acc += factor * secrets[i][1]
    return acc
```
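For readers who want the math behind `retrieve_original` (a gloss added here, not part of the original story): each shard `(x_i, y_i)` is a point on the polynomial, and the secret is the polynomial's value at zero, recovered by Lagrange interpolation over the field of integers modulo `P`:

```latex
% Secret recovery: f(0) from k points (x_i, y_i) of a degree-(k-1) polynomial
f(0) \;=\; \sum_{i=1}^{k} y_i \prod_{\substack{1 \le j \le k \\ j \ne i}} \frac{x_j}{x_j - x_i} \pmod{P}
```

The inner product is exactly what the loop `factor *= el * (el - cur).inverse()` accumulates, with `cur` in the role of \(x_i\) and `el` in the role of \(x_j\).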
It is no surprise this took them 40 days and 40 nights—this code is pretty complicated! But they ran it on the surviving shards, waiting with bated breath:

```
retrieved_secret = retrieve_original(retrieved)
```

Did the children get the correct secret?

```
retrieved_secret == secret
```
```
True
```

The beauty of math's magic is that it works reliably every time! The children, now older and able to understand their father's choices, used the terrible secret to defend the kingdom. The kingdom prospered and grew.
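To see the whole split-and-recover cycle in one place, here is a compact, dependency-free recap (an illustrative sketch added here, not code from the article) that replaces the `mod` package with plain integers and Python's three-argument `pow` for modular inverses:

```python
# Split an integer secret into 5 shards so any 3 recover it, working with
# plain integers modulo P instead of the `mod` package.
import random

P = 2**521 - 1  # the 13th Mersenne prime, as in the story

def split(secret, k=3, n=5):
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(1, P) for _ in range(k - 1)]
    shards = []
    for _ in range(n):
        x = random.randrange(1, P)
        y = sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
        shards.append((x, y))
    return shards

def recover(shards):
    # Lagrange interpolation at x = 0; pow(a, P - 2, P) is a's inverse mod P.
    acc = 0
    for i, (xi, yi) in enumerate(shards):
        factor = 1
        for j, (xj, _) in enumerate(shards):
            if j != i:
                factor = factor * xj * pow(xj - xi, P - 2, P) % P
        acc = (acc + yi * factor) % P
    return acc

secret = int.from_bytes("terrible secret".encode("utf-8"), "big")
shards = split(secret)
assert recover(shards[:3]) == secret  # any three shards suffice
```

Any three of the five shards reconstruct the secret; two reveal nothing, because two points do not determine a degree-two polynomial.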
### A modern story of Shamir's Secret Sharing

In modern times, many of us are also burdened with a terrible secret: the master password to our password manager. While few people have one person they can trust completely with their deepest, darkest secrets, many can find a group of five where it is unlikely three will break their trust together.

Luckily, in these modern times, we do not need to split our secrets ourselves, as the king did. Through the modern technology of _open source_, we can use software that already exists.

Let's say you have five people you trust—not absolutely, but quite a bit: your best friend, your spouse, your mom, a close colleague, and your lawyer.

You can install and run the program `ssss` to split the key:

```
$ echo 'long legs travel fast' | ssss-split -t 3 -n 5
Generating shares using a (3,5) scheme with dynamic security level.
Enter the secret, at most 128 ASCII characters: Using a 168 bit security level.
1-797842b76d80771f04972feb31c66f3927e7183609
2-947925f2fbc23dc9bca950ef613da7a4e42dc1c296
3-14647bdfc4e6596e0dbb0aa6ab839b195c9d15906d
4-97c77a805cd3d3a30bff7841f3158ea841cd41a611
5-17da24ad63f7b704baed220839abb215f97d95f4f8
```

Ah, a strong, powerful master password: `long legs travel fast`. Never can it be entrusted to a single soul, but you can send the five shards to your five guardians.
* You send `1` to your best friend, F.
* You send `2` to your spouse, S.
* You send `3` to your mom, M.
* You send `4` to your colleague, C.
* You send `5` to your lawyer, L.

Now, say you go on a family vacation. For a month, you frolic on the warm sands of the beach. While you frolic, you touch not one electronic device. Soon enough, your powerful master password is forgotten.

Your loving spouse and your dear mother were with you on vacation. They kept their shards safe in their password managers—and they have forgotten _their passwords_.

This is fine.

You contact your best friend, F, who gives you `1-797842b76d80771f04972feb31c66f3927e7183609`. Your colleague, who covered all your shifts, is glad to have you back and gives you `4-97c77a805cd3d3a30bff7841f3158ea841cd41a611`. Your lawyer charges you $150 per hour, goes into their password manager, and digs up `5-17da24ad63f7b704baed220839abb215f97d95f4f8`.

With those three pieces, you run:

```
$ ssss-combine -t 3
Enter 3 shares separated by newlines:
Share [1/3]: 1-797842b76d80771f04972feb31c66f3927e7183609
Share [2/3]: 4-97c77a805cd3d3a30bff7841f3158ea841cd41a611
Share [3/3]: 5-17da24ad63f7b704baed220839abb215f97d95f4f8
Resulting secret: long legs travel fast
```

And so, with the technology of _open source_, you too can live like a king!

### Share safely for your safety

Password management is an essential skill for today's online life. Create a complex password, of course, but don't stop there. Use the handy Shamir's Secret Sharing algorithm to safely share it with others.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/6/python-passwords

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
[2]: https://monitor.firefox.com/security-tips
[3]: https://opensource.com/sites/default/files/uploads/password_strength-xkcd.png (XKCD comic on password strength)
[4]: https://imgs.xkcd.com/comics/password_strength.png
[5]: https://creativecommons.org/licenses/by-nc/2.5/
[6]: https://en.wikipedia.org/wiki/Secret_sharing#Shamir's_scheme
[7]: https://opensource.com/article/18/5/cryptography-pki
[8]: https://en.wikipedia.org/wiki/Finite_field
[9]: https://en.wikipedia.org/wiki/Mersenne_prime
[10]: https://en.wikipedia.org/wiki/Field_(mathematics)
[11]: https://pypi.org/project/mod/
[12]: https://en.wikipedia.org/wiki/Polynomial
[13]: https://www.math.usm.edu/lambers/mat772/fall10/lecture5.pdf
105
sources/tech/20200727 5 open source IDE tools for Java.md
Normal file
@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 open source IDE tools for Java)
[#]: via: (https://opensource.com/article/20/7/ide-java)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)

5 open source IDE tools for Java
======

Java IDE tools offer plenty of ways to create a programming environment based on your unique needs and preferences.

![woman on laptop sitting at the window][1]

[Java][2] frameworks make life easier for programmers by streamlining their work. These frameworks were designed and developed to run any application on any server environment; that includes dynamic behaviors in terms of parsing annotations, scanning descriptors, loading configurations, and launching the actual services on a Java virtual machine (JVM). Controlling this much scope requires more code, making it difficult to minimize memory footprint or speed up startup times for new applications. Regardless, Java consistently ranks in the top three programming languages in use today, with a community of seven to ten million developers, according to the [TIOBE Index][3].

With all that code written in Java, there are some great options for integrated development environments (IDEs) that give developers all the tools needed to effectively write, lint, test, and run Java applications.

Below, I introduce—in alphabetical order—my five favorite open source IDE tools for writing Java and how to configure their basics.

### BlueJ

[BlueJ][4] provides an integrated educational Java development environment for Java beginners. It also aids in developing small-scale software using the Java Development Kit (JDK). Installation options for a variety of versions and operating systems are available [here][5].

Once you install the BlueJ IDE on your laptop, start a new project. Click on New Project in the Project menu, then begin writing Java code from New Class. Sample methods and skeleton code will be generated as below:

![BlueJ IDE screenshot][6]

BlueJ not only provides an interactive graphical user interface (GUI) for teaching Java programming courses in schools but also allows developers to invoke functions (i.e., objects, methods, parameters) without source code compilation.

### Eclipse

[Eclipse][7] is one of the most famous desktop Java IDEs, and it supports a variety of programming languages such as C/C++, JavaScript, and PHP. It also allows developers to add unlimited extensions from the Eclipse Marketplace for more development conveniences. The [Eclipse Foundation][8] provides a web IDE called [Eclipse Che][9] for DevOps teams to spin up an agile software development environment with hosted workspaces on multiple cloud platforms.

[The download][10] is available here; then you can create a new project or import an existing project from a local directory. Find more Java development tips in [this article][11].

![Eclipse IDE screenshot][12]

### IntelliJ IDEA

[IntelliJ IDEA CE (Community Edition)][13] is the open source version of IntelliJ IDEA, providing an IDE for multiple programming languages (i.e., Java, Groovy, Kotlin, Rust, Scala). IntelliJ IDEA CE is also very popular among experienced developers for refactoring existing source code, running code inspections, building test cases with JUnit or TestNG, and building code with Maven or Ant. Downloadable binaries are available [here][14].

IntelliJ IDEA CE comes with some unique features; I particularly like the API tester. For example, if you implement a REST API with a Java framework, IntelliJ IDEA CE allows you to test the API's functionality via the Swing GUI designer:

![IntelliJ IDEA screenshot][15]

IntelliJ IDEA CE is open source, but the company behind it has a commercial option. Find more differences between the Community Edition and the Ultimate edition [here][16].

### NetBeans IDE

[NetBeans IDE][17] is an integrated Java development environment that allows developers to craft modular applications for standalone, mobile, and web architectures with supported web technologies (i.e., HTML5, JavaScript, and CSS). NetBeans IDE lets developers set up multiple views to manage projects, tools, and data efficiently, and it helps them collaborate on software development—using Git integration—when a new developer joins the project.

Download binaries are available [here][18] for multiple platforms (i.e., Windows, macOS, Linux). Once you install the IDE tool in your local environment, the New Project wizard helps you create a new project. For example, the wizard generates skeleton code (with sections to fill in, like `// TODO code application logic here`), and then you can add your own application code.

### VSCodium

[VSCodium][19] is a lightweight, free source code editor that developers can install on a variety of OS platforms (i.e., Windows, macOS, Linux); it is an open source alternative build of [Visual Studio Code][20]. It was also designed and developed to support a rich ecosystem for multiple programming languages (i.e., Java, C++, C#, PHP, Go, Python, .NET). For high code quality, it provides debugging, intelligent code completion, syntax highlighting, and code refactoring by default.

There are many download options available in the [repository][21]. When you run the editor, you can add new features and themes by clicking on the Extensions icon in the activity bar on the left side or by pressing Ctrl+Shift+X on the keyboard. For example, Quarkus Tools for Visual Studio Code comes up when you type "quarkus" in the search box. The extension provides helpful tools for [writing Java with Quarkus in VS Code][22]:

![VSCodium IDE screenshot][23]

### Wrapping up

Java being one of the most widely used programming languages and environments, these five are just a fraction of the open source IDE tools available for Java developers. It can be hard to know which one to choose. As always, it depends on your specific needs and goals—what kinds of workloads (web, mobile, messaging, data transaction) you want to implement and what runtimes (local, cloud, Kubernetes, serverless) you will deploy to using the IDE's extended features. While the wealth of options out there can be overwhelming, it does mean that you can probably find one that suits your particular circumstances and preferences.

Do you have a favorite open source Java IDE? Share it in the comments!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/ide-java

作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://opensource.com/resources/java
[3]: https://www.tiobe.com/tiobe-index/
[4]: https://www.bluej.org/about.html
[5]: https://www.bluej.org/versions.html
[6]: https://opensource.com/sites/default/files/uploads/5_open_source_ide_tools_to_write_java_and_how_you_begin_it.png (BlueJ IDE screenshot)
[7]: https://www.eclipse.org/ide/
[8]: https://www.eclipse.org/
[9]: https://opensource.com/article/19/10/cloud-ide-che
[10]: https://www.eclipse.org/downloads/
[11]: https://opensource.com/article/19/10/java-basics
[12]: https://opensource.com/sites/default/files/uploads/os_ide_2.png (Eclipse IDE screenshot)
[13]: https://www.jetbrains.com/idea/
[14]: https://www.jetbrains.org/display/IJOS/Download
[15]: https://opensource.com/sites/default/files/uploads/os_ide_3.png (IntelliJ IDEA screenshot)
[16]: https://www.jetbrains.com/idea/features/editions_comparison_matrix.html
[17]: https://netbeans.org/
[18]: https://netbeans.org/downloads/8.2/rc/
[19]: https://vscodium.com/
[20]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[21]: https://github.com/VSCodium/vscodium#downloadinstall
[22]: https://opensource.com/article/20/4/java-quarkus-vs-code
[23]: https://opensource.com/sites/default/files/uploads/os_ide_5.png (VSCodium IDE screenshot)
@ -0,0 +1,143 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Analyze your web server log files with this Python tool)
[#]: via: (https://opensource.com/article/20/7/python-lars)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)

Analyze your web server log files with this Python tool
======

This Python module can collect website usage logs in multiple formats and output well-structured data for analysis.

![Person standing in front of a giant computer screen with numbers, data][1]

Ever wanted to know how many visitors you've had to your website? Or which pages, articles, or downloads are the most popular? If you're self-hosting your blog or website, whether you use Apache, Nginx, or even Microsoft IIS (yes, really), [lars][2] is here to help.

Lars is a web server-log toolkit for [Python][3]. That means you can use Python to parse log files retrospectively (or in real time) using simple code, and do whatever you want with the data—store it in a database, save it as a CSV file, or analyze it right away using more Python.

Lars is another hidden gem written by [Dave Jones][4]. I first saw Dave present lars at a local Python user group. Then a few years later, we started using it in the [piwheels][5] project to read in the Apache logs and insert rows into our Postgres database. In real time, as Raspberry Pi users download Python packages from [piwheels.org][6], we log the filename, timestamp, system architecture (Arm version), distro name/version, Python version, and so on. Since it's a relational database, we can join these results on other tables to get more contextual information about the file.

You can install lars with:

```
$ pip install lars
```

On some systems, the right route will be [`sudo`] `pip3 install lars`.

To get started, find a single web access log and make a copy of it. You'll want to download the log file onto your computer to play around with it. I'm using Apache logs in my examples, but with some small (and obvious) alterations, you can use Nginx or IIS. On a typical web server, you'll find Apache logs in `/var/log/apache2/`, usually named `access.log` or `ssl_access.log` (for HTTPS), or as gzipped rotated logfiles like `access-20200101.gz` or `ssl_access-20200101.gz`.

First of all, what does a log entry look like?

```
81.174.152.222 - - [30/Jun/2020:23:38:03 +0000] "GET / HTTP/1.1" 200 6763 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
```

This is a request showing the IP address of the origin of the request, the timestamp, the requested file path (in this case `/`, the homepage), the HTTP status code, the user agent (Firefox on Ubuntu), and so on.

Your log files will be full of entries like this—not just every single page hit, but every file and resource served: every CSS stylesheet, JavaScript file, and image, every 404, every redirect, every bot crawl. To get any sensible data out of your logs, you need to parse, filter, and sort the entries. That's what lars is for. This example will open a single log file and print the contents of every row (with the `ApacheSource` import added here for completeness):

```
from lars.apache import ApacheSource

with open('ssl_access.log') as f:
    with ApacheSource(f) as source:
        for row in source:
            print(row)
```

Which will show results like this for every log entry:

```
Row(remote_host=IPv4Address('81.174.152.222'), ident=None, remote_user=None, time=DateTime(2020, 6, 30, 23, 38, 3), request=Request(method='GET', url=Url(scheme='', netloc='', path_str='/', params='', query_str='', fragment=''), protocol='HTTP/1.1'), status=200, size=6763)
```

It's parsed the log entry and put the data into a structured format. The entry has become a namedtuple with attributes relating to the entry data, so, for example, you can access the status code with `row.status` and the path with `row.request.url.path_str`:

```
with open('ssl_access.log') as f:
    with ApacheSource(f) as source:
        for row in source:
            print(f'hit {row.request.url.path_str} with status code {row.status}')
```

If you wanted to show only the 404s, you could do:

```
with open('ssl_access.log') as f:
    with ApacheSource(f) as source:
        for row in source:
            if row.status == 404:
                print(row.request.url.path_str)
```

You might want to de-duplicate these and print the number of unique pages with 404s:

```
s = set()
with open('ssl_access.log') as f:
    with ApacheSource(f) as source:
        for row in source:
            if row.status == 404:
                s.add(row.request.url.path_str)
print(len(s))
```
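If you don't have lars handy, a rough self-contained sketch (my own illustration, not from the article) shows the kind of structure lars extracts from a combined-format entry, along with a status tally via `collections.Counter`:

```python
# Illustrative only: approximate the fields lars parses for free from an
# Apache "combined" log line, then tally status codes with a Counter.
import re
from collections import Counter

LINE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

sample = ('81.174.152.222 - - [30/Jun/2020:23:38:03 +0000] '
          '"GET / HTTP/1.1" 200 6763 "-" "Mozilla/5.0 (X11; Ubuntu) ..."')

statuses = Counter()
match = LINE.match(sample)
if match:
    statuses[int(match.group('status'))] += 1
```

For real logs, lars handles the many format variations (and Nginx and IIS) that a quick regex like this will miss.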
Dave and I have been working on expanding piwheels' logger to include web-page hits, package searches, and more, and it's been a piece of cake, thanks to lars. It's not going to tell us any answers about our users—we still have to do the data analysis—but it's taken an awkward file format and put it into our database in a way we can make use of.

Check out lars' documentation to see how to read Apache, Nginx, and IIS logs, and learn what else you can do with it. Thanks, yet again, to Dave for another great tool!

* * *

This originally appeared on Ben Nuttall's Tooling Blog and is republished with permission.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/python-lars

作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://lars.readthedocs.io/en/latest/
[3]: https://opensource.com/resources/python
[4]: https://twitter.com/waveform80/
[5]: https://opensource.com/article/18/10/piwheels-python-raspberrypi
[6]: http://piwheels.org
91
sources/tech/20200727 What does it mean for code to -work.md
Normal file
91
sources/tech/20200727 What does it mean for code to -work.md
Normal file
@ -0,0 +1,91 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What does it mean for code to "work"?)
|
||||
[#]: via: (https://opensource.com/article/20/7/code-tdd)
|
||||
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
|
||||
|
||||
What does it mean for code to "work"?
|
||||
======
|
||||
Test driven development (TDD) separates computing results from actions
|
||||
to ensure your code does what you expect.
|
||||
![Searching for code][1]
|
||||
|
||||
Extreme Programming co-founder [Ron Jeffries][2] famously wrote: "The trick is never to let the code not be working."
|
||||
|
||||
Jeffries' quote points at the fact that software engineering is a very sophisticated activity with plenty of uncertainty. Software engineering also deals with issues of incompleteness—at the outset, you never seem to have all information you need to formulate an optimal approach. There always comes a time, later in the project, when you learn something that shows that several things in your initial understanding were incorrect. In addition, even when you amass sufficient useful information to orient yourself properly, that information tends to contain ambiguity—communication is seldom, if ever, clear. Ambiguity seems to prevail in both verbal and written communication.
|
||||
|
||||
Because of these constraints, the only safe way to proceed when developing software is to rely on the working code. The running, working code is the final oracle. It will tell you if you're doing the right thing and going in the right direction.
|
||||
|
||||
### How do you know if the code is working?
|
||||
|
||||
It is impossible to know if your code is working as designed if the code isn't running. To have code that compiles without errors and always runs without exceptions doesn't mean that the code is working. It could be doing all these activities while doing nothing of use. Therefore, the definition of working code is that it runs without any problems while doing something you expect it to do.
|
||||
|
||||
The only way to find out if your code is working according to expectations is to focus on observable behavior. Merely reading and analyzing the source code is not sufficiently convincing; you need to see the code executing to judge whether its execution meets your expectations.
|
||||
|
||||
There are two ways to measure observable behavior:
|
||||
|
||||
1. Watch the computer that is running your code perform some actions
|
||||
2. Test the running code for the computed values
|
||||
|
||||
|
||||
|
||||
The first type of observable behavior (observing some actions being performed) is not the best way to ensure that your code works according to expectations. For example, you may observe that your code performed an action, such as sending an email. But that alone is not sufficient to confirm the code works according to expectations. What if the email the code is sending contains incorrect information?
|
||||
|
||||
The only way to confirm that your code is working according to expectations is to observe the computed values. And that process (i.e., observing the computed values to see if they match the expected values) is where test-driven development (TDD) shines.

### How does it feel to write software if the code is not working?

Before I discovered TDD, I was spending long stretches of time writing code without worrying about whether the code was working or not. Every now and then, when I felt I had reached a milestone in coding, I would run the application I was working on. I'd log in as a fictitious user and manually trigger some actions to see if the program did what I told it to do.

This approach is similar to measuring an iceberg by just gauging its tip—the part of the iceberg visible above the water. Although my manual testing provided a clean bill of health for the app, naturally, once it went into production, all kinds of bugs and defects started showing up (caused by the "below-the-waterline" part of the iceberg).

Looking back, it is obvious that writing code without making it work all the time is similar to flying a kite. Flying a kite in a strong wind is exciting, even exhilarating. But the kite almost never touches the ground, and it is very challenging to control its direction in the strong wind. Also, it is almost impossible to land the kite at the exact spot you aim for.

### How does it feel to write software when doing TDD?

TDD is based on the idea that the way the code behaves should be independent from the way the code is structured. You are aiming at a desired behavior. While you're writing code, the desired behavior is not there (that's why it is called "desired"). You implement the desired behavior by first writing a test that describes it. Then you run that test, and it fails because the expected behavior is not implemented yet. The failure prompts you to fix it, which forces you to run the code again. If the changes you make to the code satisfy the expectations described in the test, you conclude that the code works according to your expectations.

If the changes you make to the code do not satisfy the expectations described in the test, the code does not work, and you need to make more changes to the code until it works as expected.
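
That red-green loop can be sketched with a standard test framework. This is a minimal illustration; the `add` function and its expected values are hypothetical, not from the article:

```python
import unittest

# Hypothetical code under test. In strict TDD this function would not
# exist yet: the test below is written first, fails ("red"), and this
# minimal implementation is then added to make it pass ("green").
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_computes_expected_value(self):
        # The expected value is stated before the behavior exists,
        # so running the test is the "reality check" described above.
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```

Running the suite after every small change keeps the time between "ready" states short.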

This process, when done consistently, feels like riding a galloping horse. Every now and then, a galloping horse touches the ground, which is equivalent to the moment when all tests pass in TDD. While the horse is "in flight," it is charging in a straight line. The horse is advancing, but there is no way to change its course. Only when the horse touches the ground does the horse rider get a chance to change the direction they're heading.

Similarly, when writing code, you are "in flight." While you're coding, you have no way of verifying if you're going in the right direction. It is only when you stop coding, save your changes, and run the code that you can observe if your code is doing what you expect it to do. You are touching the ground each time you run your code.

TDD is the discipline that guides you toward performing this reality check as often as possible. This minimizes your risk of shipping incorrect code.

### You are the first customer of your application code

Since you're writing a test that automates your expectations, you are the first customer of the application code. Actually, it's better to say that _your test_ is the first customer of your code. The test could be viewed as a customer who walks into a restaurant and, being hungry, orders a meal. When the customer orders, she has a specific meal in mind. The kitchen staff's job is to turn that customer's desire into reality.

If the waiter delivers the meal and, after tasting it, the customer disagrees that the meal meets her expectations, she returns it to the kitchen. The staff modifies the meal until the customer is satisfied. But the kitchen staff will never know if the meal is good unless they collect feedback from the customer.

In a similar fashion, the only way to know if the code you're writing satisfies expectations is by collecting feedback from the test.

### Conclusion

The only way to know if the changes you are making to the code are advancing your app in the right direction is by making the code work. Mere review of the code—the act of reading it—is never sufficient to reach a solid conclusion that the code is correct.

Most developers prefer writing code in long stretches. But spending hours writing code means you're wasting hours not making the code work (recall Ron Jeffries' advice: "The trick is never to let the code not be working"). You're letting the code not be working if you indulge in long code-writing sessions.

TDD helps control that by focusing your attention on observable behavior. In TDD, you define expected values and continue interrogating your code to see if it computes those expected values. To confirm or refute the expectations, you must make the code work. The more often you do these "reality checks," the higher the probability that your efforts are going in the right direction.

The transition from a stable, steady state (being ready) to the next stable, steady state (being ready again) should be as short as possible. TDD strives to help you always travel in a ready-to-ready fashion.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/code-tdd

作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
[2]: https://en.wikipedia.org/wiki/Ron_Jeffries
@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (BigBlueButton: Open Source Software for Online Teaching)
[#]: via: (https://itsfoss.com/bigbluebutton/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

BigBlueButton: Open Source Software for Online Teaching
======

_**Brief: BigBlueButton is an open-source tool for video conferencing tailored for online teaching. Let’s take a look at what it offers.**_

In the year 2020, remote working from home is kind of the new normal. Of course, you cannot do everything remotely — but online teaching is something that's possible.

Even though a lot of teachers and school organizations aren't familiar with all the amazing tools available out there, some of the [best open-source video conferencing tools][1] are filling in the requirements to some extent.

Among the ones I mentioned for video calls, [BigBlueButton][2] caught my attention. Here, I'll give you an overview of what it offers.

### BigBlueButton: An Open Source Web Conferencing System for Online Teaching

![][3]

BigBlueButton is an open-source web conferencing solution that aims to make online learning easy.

It is completely free to use, but it requires you to set it up on your own server to use it as a full-fledged online learning solution.

BigBlueButton offers a really good set of features. You can easily try the [demo instance][4] and then set it up on your server for your school.

Before you get started, take a look at the features:

### Features of BigBlueButton

BigBlueButton provides a bunch of useful features tailored for teachers and schools running online classes. Here's what you get:

* Live whiteboard
* Public and private messaging options
* Webcam support
* Session recording support
* Emoji support
* Ability to group users for team collaboration
* Polling options
* Screen sharing
* Multi-user whiteboard support
* Ability to self-host it
* An API for easy integration with web applications

In addition to these features, you get an easy-to-use front-end interface, [Greenlight][5], to set up when you configure BigBlueButton on your server.

You can try the demo instance for casual use, such as teaching your students for free. However, considering the 60-minute limit of the [demo instance][4], I'd suggest hosting BigBlueButton on your own server to explore all the functionality that it offers.

To get more clarity on how the features work, you might want to take a look at one of their official tutorials:

### Installing BigBlueButton On Your Server

They offer [detailed documentation][6], which should come in handy for every developer. The easiest and quickest way of setting it up is the [bbb-install script][7], but you can also explore other options if that does not work out for you.
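
As a sketch of that quickest path — the flags and values below are assumptions drawn from the bbb-install README (substitute your own hostname and email, and review the script before piping it to bash):

```shell
# Hypothetical bbb-install invocation for BigBlueButton 2.2 on Ubuntu 16.04:
# -s sets the server hostname, -e the email for a Let's Encrypt certificate,
# -g installs the Greenlight front end. Verify against the script's README.
wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh | \
  bash -s -- -v xenial-220 -s bbb.example.com -e admin@example.com -g
```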

For starters, you need a server running at least Ubuntu 16.04 LTS. You should take a look at the [minimum requirements][8] before deploying a server for BigBlueButton.

You can explore more about the project on their [GitHub page][9].

[Try BigBlueButton][2]

If you're someone who's looking to set up a solution for online teaching, BigBlueButton is a great choice to explore.

It may not offer native smartphone apps, but you can surely access it using the web browser on your mobile device. Of course, it's better to find a laptop or desktop to access an online teaching platform, but it works on mobile too.

What do you think about BigBlueButton for online teaching? Is there a better open-source project as an alternative to this? Let me know in the comments below!

--------------------------------------------------------------------------------

via: https://itsfoss.com/bigbluebutton/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/open-source-video-conferencing-tools/
[2]: https://bigbluebutton.org/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/big-blue-button.png?ssl=1
[4]: http://demo.bigbluebutton.org/
[5]: https://bigbluebutton.org/2018/07/09/greenlight-2-0/
[6]: https://docs.bigbluebutton.org/
[7]: https://github.com/bigbluebutton/bbb-install
[8]: https://docs.bigbluebutton.org/2.2/install.html#minimum-server-requirements
[9]: https://github.com/bigbluebutton
235
sources/tech/20200728 Digging for DNS answers on Linux.md
Normal file
@ -0,0 +1,235 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Digging for DNS answers on Linux)
[#]: via: (https://www.networkworld.com/article/3568488/digging-for-dns-answers-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Digging for DNS answers on Linux
======
Dig is a powerful and flexible tool for interrogating domain name system (DNS) servers. In this post, we'll take a deep dive into how it works and what it can tell you.
[Laurie Avocado][1] [(CC BY 2.0)][2]

Dig is a powerful and flexible tool for interrogating DNS name servers. It performs DNS lookups and displays the answers that are returned from the name servers that were involved in the process along with details related to the search. System and [DNS][3] administrators often use **dig** to help troubleshoot DNS problems. In this post, we’ll take a deep dive into how it works and see what it can tell us.

To get started, it's helpful to have a good mental image of how DNS, the domain name system, works. It's a critical part of the global Internet because it provides a way to look up and, thereby, connect with servers around the world. You can think of it as the Internet's address book, and any system that is properly connected to the Internet should be able to use it to look up the IP address of any properly registered server.

### Getting started with dig

The **dig** tool is generally installed on Linux systems by default. Here's an example of a **dig** command with a little annotation:

```
$ dig www.networkworld.com

; <<>> DiG 9.16.1-Ubuntu <<>> www.networkworld.com  <== version of dig you’re using
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6034
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:                                <== details on your query
;www.networkworld.com.		IN	A

;; ANSWER SECTION:                                  <== results

www.networkworld.com.	3568	IN	CNAME	idg.map.fastly.net.
idg.map.fastly.net.	30	IN	A	151.101.250.165

;; Query time: 36 msec                              <== query time
;; SERVER: 127.0.0.53#53(127.0.0.53)                <== local caching resolver
;; WHEN: Fri Jul 24 19:11:42 EDT 2020               <== date and time of inquiry
;; MSG SIZE  rcvd: 97                               <== bytes returned
```

If you get a response like this, is it good news? The short answer is “yes”. You got a reply in a timely manner. The status field (status: NOERROR) shows there were no problems. You’re connecting to a name server that is able to supply the requested information and getting a reply that tells you some important details about the system you’re inquiring about. In short, you’ve verified that your system and the domain name system are getting along just fine.

Other possible status indicators include:

**SERVFAIL** – The name that was queried exists, but no data is available or the available data is invalid.

**NXDOMAIN** – The name in question does not exist.

**REFUSED** – The zone does not exist at the requested authority, and the infrastructure is not set up to provide responses when this is the case.

Here's an example of what you'd see if you were looking up a domain that doesn't exist:

```
$ dig cannotbe.org

; <<>> DiG 9.16.1-Ubuntu <<>> cannotbe.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 35348
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
```
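
In a script, you can pull that status field out of dig's report rather than eyeballing it. A small sketch follows; the here-document stands in for live dig output so the logic works without network access:

```shell
# Extract the status value (NOERROR, NXDOMAIN, SERVFAIL, ...) from a dig
# report header. With a live query you would pipe:  dig cannotbe.org | awk ...
awk -F'status: ' '/status:/ {split($2, a, ","); print a[1]}' <<'EOF'
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 35348
EOF
```

Fed the header line above, this prints `NXDOMAIN`.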

In general, **dig** provides more details than **ping**, though **ping** will respond with "Name or service not known" if the domain doesn't exist. When you ask about a legitimate system, you get to see what the domain name system knows about the system, how those records are configured, and how long it takes to retrieve that data.

In fact, sometimes **dig** can respond with information when **ping** cannot respond at all, and that kind of information can be very helpful when you're trying to nail down a connection problem.

### DNS record types and flags

### DNS record types and flags

One thing we can see in the first query above is the presence of both **CNAME** and **A** records. The **CNAME** (canonical name) is like an alias that refers one domain name to another. Most systems that you dig for won't have a **CNAME** record, but only an **A** record. If you run a "dig localhost" command, you will see an **A** record that simply refers to 127.0.0.1 – the "loopback" address that every system uses. An **A** record maps a name to an IP address.

The DNS record types include:

* A or AAAA – IPv4 and IPv6 addresses
* CNAME – alias
* MX – mail exchanger
* NS – name server
* PTR – a reverse entry that lets you find a system name when providing the IP address
* SOA – start of authority record
* TXT – some related text
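
Each of these types can be requested explicitly by naming it after the domain, and `+short` trims the report to just the answer data. A few sketches (these assume network access and a working resolver; example.com is a placeholder):

```shell
dig +short A example.com     # IPv4 address records
dig +short MX example.com    # mail exchangers
dig +short NS example.com    # authoritative name servers
dig +short -x 8.8.8.8        # PTR: reverse lookup of an IP address
```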

We also see a series of "flags" on the fifth line of output. These are defined in [RFC 1035][4], which defines the flags included in the header of DNS messages and even shows the format of headers:

```
                                1  1  1  1  1  1
  0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                      ID                       |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|QR|   Opcode  |AA|TC|RD|RA|   Z    |   RCODE   |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    QDCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    ANCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    NSCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    ARCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
```

The flags shown in the fifth line in the initial query above are:

* **qr** = query
* **rd** = recursion desired
* **ra** = recursion available

Other flags described in the RFC include:

* **aa** = authoritative answer
* **cd** = checking disabled
* **ad** = authentic data
* **opcode** = a 4-bit field
* **tc** = truncation
* **z** (unused)

### Adding the +trace option

You will get a LOT more output from **dig** if you add **+trace** as an option. It will add information that shows how your DNS query routed through the hierarchy of name servers to locate the answer you're looking for.

All the **NS** records shown below reflect name servers – and this is just the first section of data you will see as the query runs through the hierarchy of name servers to track down what you're looking for.

```
$ dig +trace networkworld.com

; <<>> DiG 9.16.1-Ubuntu <<>> +trace networkworld.com
;; global options: +cmd
.			84895	IN	NS	k.root-servers.net.
.			84895	IN	NS	e.root-servers.net.
.			84895	IN	NS	m.root-servers.net.
.			84895	IN	NS	h.root-servers.net.
.			84895	IN	NS	c.root-servers.net.
.			84895	IN	NS	f.root-servers.net.
.			84895	IN	NS	a.root-servers.net.
.			84895	IN	NS	g.root-servers.net.
.			84895	IN	NS	l.root-servers.net.
.			84895	IN	NS	d.root-servers.net.
.			84895	IN	NS	b.root-servers.net.
.			84895	IN	NS	i.root-servers.net.
.			84895	IN	NS	j.root-servers.net.
;; Received 262 bytes from 127.0.0.53#53(127.0.0.53) in 28 ms
...
```

Eventually, you'll get information tied directly to your request.

```
networkworld.com.	300	IN	A	151.101.2.165
networkworld.com.	300	IN	A	151.101.66.165
networkworld.com.	300	IN	A	151.101.130.165
networkworld.com.	300	IN	A	151.101.194.165
networkworld.com.	14400	IN	NS	ns-d.pnap.net.
networkworld.com.	14400	IN	NS	ns-a.pnap.net.
networkworld.com.	14400	IN	NS	ns0.pcworld.com.
networkworld.com.	14400	IN	NS	ns1.pcworld.com.
networkworld.com.	14400	IN	NS	ns-b.pnap.net.
networkworld.com.	14400	IN	NS	ns-c.pnap.net.
;; Received 269 bytes from 70.42.185.30#53(ns0.pcworld.com) in 116 ms
```

### Picking your responder

You can use the **@** sign to specify a particular name server that you want to handle your query. Here we're asking the primary name server for Google to respond to our query:

```
$ dig @8.8.8.8 networkworld.com

; <<>> DiG 9.16.1-Ubuntu <<>> @8.8.8.8 networkworld.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43640
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;networkworld.com.		IN	A

;; ANSWER SECTION:
networkworld.com.	299	IN	A	151.101.66.165
networkworld.com.	299	IN	A	151.101.194.165
networkworld.com.	299	IN	A	151.101.130.165
networkworld.com.	299	IN	A	151.101.2.165

;; Query time: 48 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sat Jul 25 11:21:19 EDT 2020
;; MSG SIZE  rcvd: 109
```

The command shown below does a reverse lookup of the 8.8.8.8 IP address to show that it belongs to Google's DNS server.

```
$ nslookup 8.8.8.8
8.8.8.8.in-addr.arpa	name = dns.google.
```

#### Wrap-Up

The dig command is an essential tool for both grasping how DNS works and troubleshooting connection problems when they arise.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3568488/digging-for-dns-answers-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/auntylaurie/15997799384
[2]: https://creativecommons.org/licenses/by/2.0/legalcode
[3]: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html
[4]: https://tools.ietf.org/html/rfc1035
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,69 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (silentdawn-zz )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 Python tools for getting started with astronomy)
[#]: via: (https://opensource.com/article/19/10/python-astronomy-open-data)
[#]: author: (Gina Helfrich, Ph.D. https://opensource.com/users/ginahelfrich)

开启天文之路的 4 个 Python 工具
======
使用 NumPy、SciPy、Scikit-Image 和 Astropy 探索宇宙
![Person looking up at the stars][1]

NumFOCUS 是个非盈利组织,维护着一套科学计算与数据科学方面的杰出开源工具集。作为联系 Opensource.com 读者和 NumFOCUS 社区工作的一部分,我们对我们的 [博客][2] 中一些大家喜闻乐见的文章正在进行再版。如果想了解我们的任务及代码,可以访问 [numfocus.org][3]。如果你有兴趣以个人身份加入 NumFOCUS 社区,可以关注你所在地区的 [PyData 活动][4]。

* * *

### 天文学与 Python

对科学界而言,尤其是对天文学界来说,Python 是一种伟大的语言工具。包括但不限于 [NumPy][5]、[SciPy][6]、[Scikit-Image][7] 和 [Astropy][8] 的很多工具包,都是 Python 非常适用天文学界的有力工具,而且有大量的成功案例( NumPy、Astropy 和 SciPy 是 NumFOCUS 提供资金支持的项目;Scikit-Image 是个隶属项目)。 我在十几年前脱离天文研究领域,成为了软件开发者之后,对前述工具包的演进一直很感兴趣。我的很多前天文界同事在他们的研究中,使用着前面提到的大部分甚至是全部工具包。以我为例,我也曾为位于智利的超大口径望远镜( VLT )上的仪器编写过专业天文软件工具包。

最近令我吃惊的是,Python 工具包竟然演进到如此好用,任何人都可以轻松编写 [数据缩减][9] 脚本,并产生高质量的数据产品。天文数据易于获取,而且大部分是可以公开使用的,你要做的只是去寻找相关数据。

比如,负责 VLT 运行的 ESO,直接在他们的网站上提供数据下载服务,只要访问 [www.eso.org/UserPortal][10] 并在首页创建用户就可以享有数据下载服务。如果你需要 SPHERE 数据,可以下载附近任意包含系外行星或者原恒星盘的恒星的全部数据集。对任何 Python 高手而言,通过缩减数据发现深藏于噪声中的行星或者原恒星盘,实在是件令人兴奋的事。

我很乐于看到你下载 ESO 或其它天文影像数据,开启你的探索历程。这里提供几条建议:

1. 起步于高质量的数据。看一些有关包含系外行星或者原恒星盘的较近恒星的论文,然后在 <http://archive.eso.org/wdb/wdb/eso/sphere/query> 之类的网站检索数据。需要注意的是,前述网站上的数据有的标注为红色,有的标注为绿色,标注为红色的数据是尚未公开的,在相应的"发布日期"处会注明数据将来公开的时间。
2. 了解一些用于获取你所用数据的仪器的信息。尽量对数据的获取有一个基本的理解,对标准的数据缩减之后应该是什么样子做到心中有数。所有的望远镜和仪器都有这方面的文档供公开获取。
3. 必须考虑天文数据的标准问题,并予以校正:
( 1 )数据以 FITS 格式文件保存。需要使用 **pyfits** 或者 **astropy** (包含 pyfits )读取数据为 **NumPy** 数组。有些情况下,数据是三维的,需要沿 z 轴使用 **numpy.median** 将数据转换为二维数组。有些 SPHERE 数据在同一幅影像中包含了同一片天空的两份拷贝(各自使用了不同的滤波器),这时候需要使用 **索引** 和 **切片** 将它们分离出来。
( 2 )全黑图和坏点图。所有仪器都有快门全关(完全无光)状态拍摄的特殊图片,使用 **NumPy 掩膜数组** 从中分离出坏点图。坏点图非常重要,你在合成最终的清晰图像过程中,需要持续跟踪坏点。有些情况下,这还有助于你从原始科学数据中扣除暗背景的操作。
( 3 )一般情况下,天文仪器还要拍标准响应图。这是对均匀的单色标准光源拍摄的一张或者一组图片。你需要将所有的原始数据除以标准响应之后再做后续处理(同样,使用 Numpy 掩膜数组实现的话,这仅仅是一个简单的除法运算)。
( 4 )对行星影像,为了使行星在明亮恒星背景下变得可见,需要仰仗日冕仪和角差分成像技术。这一步需要识别影像的光学中心,这是比较棘手的环节之一,过程中要使用 **skimage.feature.blob_dog** 从原始影像中寻找一些人工辅助影像作为帮助。
4. 要有耐心。理解数据格式并弄清如何操作需要一些时间,绘出像素数据曲线图或者统计图有助于你的理解。贵在坚持,必有收获!你会从中学到很多关于图像数据及其处理的知识。

综合应用 NumPy、SciPy、Astropy、scikit-image 及其它工具,结合耐心和恒心,通过分析大量可用天文数据分析实现重大的发现是非常有可能的。说不定,你会成为某个系外行星的第一发现者呢。祝你好运!

_本文基于 Pivigo CTO [Ole Moeller-Nilsson][12] 的一次 [谈话][11],最初发布于 NumFOCUS 的博客,蒙允再次发布。如果你有意支持 NumFOCUS,可以 [捐赠][13],也可以参与遍布全球的 [PyData 活动][4] 中你身边的那些。_

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/python-astronomy-open-data

作者:[Gina Helfrich, Ph.D.][a]
选题:[lujun9972][b]
译者:[silentdawn-zz](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ginahelfrich
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/space_stars_cosmos_person.jpg?itok=XUtz_LyY (Person looking up at the stars)
[2]: https://numfocus.org/blog
[3]: https://numfocus.org
[4]: https://pydata.org/
[5]: http://numpy.scipy.org/
[6]: http://www.scipy.org/
[7]: http://scikit-image.org/
[8]: http://www.astropy.org/
[9]: https://en.wikipedia.org/wiki/Data_reduction
[10]: http://www.eso.org/UserPortal
[11]: https://www.slideshare.net/OleMoellerNilsson/pydata-lonon-finding-planets-with-python
[12]: https://twitter.com/olly_mn
[13]: https://numfocus.org/donate
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (silentdawn-zz)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -7,64 +7,63 @@
|
||||
[#]: via: (https://opensource.com/article/20/3/zeromq-c-python)
|
||||
[#]: author: (Cristiano L. Fontana https://opensource.com/users/cristianofontana)
|
||||
|
||||
Share data between C and Python with this messaging library
|
||||
使用 ZeroMQ 消息库在 C 和 Python 间共享数据
|
||||
======
|
||||
ZeroMQ makes for a fast and resilient messaging library to gather data
|
||||
and share between multiple languages.
|
||||
ZeroMQ 是一个快速灵活的消息库,用于数据收集和不同语言间的数据共享。
|
||||
![Chat via email][1]
|
||||
|
||||
I've had moments as a software engineer when I'm asked to do a task that sends shivers down my spine. One such moment was when I had to write an interface between some new hardware infrastructure that requires C and a cloud infrastructure, which is primarily Python.
|
||||
作为软件工程师,我有多次在要求完成指定任务时感到毛骨悚然的经历。其中一次是要写个接口,用于控制一些新的底层硬件间的对接,接口实现形式分别是 C 和 云底层组件,而后者主要是 Python。
|
||||
|
||||
One strategy could be to [write an extension in C][2], which Python supports by design. A quick glance at the documentation shows this would mean writing a good amount of C. That can be good in some cases, but it's not what I prefer to do. Another strategy is to put the two tasks in separate processes and exchange messages between the two with the [ZeroMQ messaging library][3].
|
||||
实现的方式之一是 [用 C 写扩展模块][2],Python 支持 C 扩展的调用。快速浏览文档后发现,这需要编写大量的 C 代码。这样做的话,在有些情况下效果还不错,但不是我想要的方式。另一种方式就是将两个任务放在不同的进程中,以使用 [ZeroMQ 消息库][3] 在进程间交换信息的方式实现数据的共享。
|
||||
|
||||
When I experienced this type of scenario before discovering ZeroMQ, I went through the extension-writing path. It was not that bad, but it is very time-consuming and convoluted. Nowadays, to avoid that, I subdivide a system into independent processes that exchange information through messages sent over [communication sockets][4]. With this approach, several programming languages can coexist, and each process is simpler and thus easier to debug.
|
||||
在引入 ZeroMQ 之前,我选择了编写扩展的方式,试图解决这个场景的需求。这种方式不算太差,但非常复杂和费时。现在为了避免那些问题,我细分出一个系统作为独立的进程运行,专门用于交换通过 [通信套接字][4] 发送的消息所承载的数据。这样,不同的编程语言可以共存,每个进程也变简单了,同时也容易调试。
|
||||
|
||||
ZeroMQ provides an even easier process:
|
||||
ZeroMQ 提供了更简单的进程:
|
||||
|
||||
1. Write a small shim in C that reads data from the hardware and sends whatever it finds as a message.
|
||||
2. Write a Python interface between the new and existing infrastructure.
|
||||
1. 编写一小组 C 代码,从硬件读取数据,发送到它能作用到的消息中。
|
||||
2. 使用 Python 编写接口,实现新旧基础组件的对接。
|
||||
|
||||
|
||||
|
||||
One of ZeroMQ's project's founders is [Pieter Hintjens][5], a remarkable person with [interesting views and writings][6].
|
||||
[Pieter Hintjens][5] 是 ZeroMQ 项目发起者之一,他是个拥有 [有趣视角和作品][6] 的非凡人物。
|
||||
|
||||
### Prerequisites
|
||||
### 准备
|
||||
|
||||
For this tutorial, you will need:
|
||||
本教程中,需要:
|
||||
|
||||
* A C compiler (e.g., [GCC][7] or [Clang][8])
|
||||
* The [**libzmq** library][9]
|
||||
* 一个 C 编译器(例如 [GCC][7] 或 [Clang][8])
|
||||
* [**libzmq** 库][9]
|
||||
* [Python 3][10]
|
||||
* [ZeroMQ bindings][11] for python
|
||||
* [ZeroMQ 的 Python 封装][11]
|
||||
|
||||
|
||||
|
||||
Install them on Fedora with:
|
||||
Fedora 系统上的安装方法:
|
||||
|
||||
|
||||
```
|
||||
`$ dnf install clang zeromq zeromq-devel python3 python3-zmq`
|
||||
$ dnf install clang zeromq zeromq-devel python3 python3-zmq
|
||||
```
|
||||
|
||||
For Debian or Ubuntu:
|
||||
Debian 和 Ubuntu 系统上的安装方法:
|
||||
|
||||
|
||||
```
|
||||
`$ apt-get install clang libzmq5 libzmq3-dev python3 python3-zmq`
|
||||
$ apt-get install clang libzmq5 libzmq3-dev python3 python3-zmq
|
||||
```
|
||||
|
||||
If you run into any issues, refer to each project's installation instructions (which are linked above).
|
||||
如果有问题,参考对应项目的安装指南(上面附有链接)。
|
||||
|
||||
### Writing the hardware-interfacing library
|
||||
### 编写硬件接口库
|
||||
|
||||
Since this is a hypothetical scenario, this tutorial will write a fictitious library with two functions:
|
||||
因为这里针对的是个设想的场景,本教程虚构了包含两个函数的操作库:
|
||||
|
||||
* **fancyhw_init()** to initiate the (hypothetical) hardware
|
||||
* **fancyhw_read_val()** to return a value read from the hardware
|
||||
* **fancyhw_init()** 用来初始化(设想的)硬件
|
||||
* **fancyhw_read_val()** 用于返回从硬件读取的数据
|
||||
|
||||
|
||||
|
||||
Save the library's full source code to a file named **libfancyhw.h**:
|
||||
将库的完整代码保存到文件 **libfancyhw.h** 中:
|
||||
|
||||
|
||||
```
|
||||
@ -89,15 +88,15 @@ int16_t fancyhw_read_val(void)
|
||||
#endif
|
||||
```
|
||||
|
||||
This library can simulate the data you want to pass between languages, thanks to the random number generator.
|
||||
这个库可以模拟你要在不同语言实现的组件间交换的数据,中间有劳随机数发生器。
|
||||
|
||||
### Designing a C interface
|
||||
### 设计 C 接口
|
||||
|
||||
The following will go step-by-step through writing the C interface—from including the libraries to managing the data transfer.
|
||||
下面从包含管理数据传输的库开始,逐步实现 C 接口。
|
||||
|
||||
#### Libraries
|
||||
#### 需要的库
|
||||
|
||||
Begin by loading the necessary libraries (the purpose of each library is in a comment in the code):
|
||||
开始先加载必要的库(每个库的作用见代码注释):
|
||||
|
||||
|
||||
```
|
||||
@ -115,9 +114,9 @@ Begin by loading the necessary libraries (the purpose of each library is in a co
|
||||
#include "libfancyhw.h"
|
||||
```
|
||||
|
||||
#### Significant parameters
|
||||
#### 必要的参数
|
||||
|
||||
Define the **main** function and the significant parameters needed for the rest of the program:
|
||||
定义 **main** 函数和后续过程中必要的参数:
|
||||
|
||||
|
||||
```
|
||||
@ -131,16 +130,16 @@ int main(void)
|
||||
...
|
||||
```
|
||||
|
||||
#### Initialization
|
||||
#### 初始化
|
||||
|
||||
Both libraries need some initialization. The fictitious one needs just one parameter:
|
||||
所有的库都需要初始化。虚构的那个只需要一个参数:
|
||||
|
||||
|
||||
```
|
||||
`fancyhw_init(INIT_PARAM);`
|
||||
fancyhw_init(INIT_PARAM);
|
||||
```
|
||||
|
||||
The ZeroMQ library needs some real initialization. First, define a **context**—an object that manages all the sockets:
|
||||
ZeroMQ 库需要实打实的初始化。首先,定义对象 **context**,它是用来管理全部的套接字的:
|
||||
|
||||
|
||||
```
|
||||
@ -154,14 +153,14 @@ if (!context)
|
||||
}
|
||||
```
|
||||
|
||||
Then define the socket used to deliver data. ZeroMQ supports several types of sockets, each with its application. Use a **publish** socket (also known as **PUB** socket), which can deliver copies of a message to multiple receivers. This approach enables you to attach several receivers that will all get the same messages. If there are no receivers, the messages will be discarded (i.e., they will not be queued). Do this with:
|
||||
之后定义用来发送数据的套接字。ZeroMQ 支持若干种套接字,各有其用。使用 **publish** 套接字(也叫 **PUB** 套接字),可以复制消息并分发到多个接收端。这使得你可以让多个接收端接收同一个消息。没有接收者的消息将被丢弃(即不会入消息队列)。用法如下:
|
||||
|
||||
|
||||
```
|
||||
`void *data_socket = zmq_socket(context, ZMQ_PUB);`
|
||||
void *data_socket = zmq_socket(context, ZMQ_PUB);
|
||||
```
|
||||
|
||||
The socket must be bound to an address so that the clients know where to connect. In this case, use the [TCP transport layer][15] (there are [other options][16], but TCP is a good default choice):
|
||||
套接字需要绑定到一个具体的地址,这样客户端就知道要连接哪里了。本例中,使用了 [TCP 传输层][15](当然也有 [其它选项][16],但 TCP 是不错的默认选择):
```
|
||||
@ -175,7 +174,7 @@ if (rb != 0)
|
||||
}
|
||||
```
|
||||
|
||||
Next, calculate some useful values that you will need later. Note **TOPIC** in the code below; **PUB** sockets need a topic to be associated with the messages they send. Topics can be used by the receivers to filter messages:
|
||||
下一步,计算一些后续要用到的值。注意下面代码中的 **TOPIC**:**PUB** 套接字发送的消息需要关联一个主题,接收者可以根据主题过滤消息:
```
|
||||
@ -185,9 +184,9 @@ const size_t envelope_size = topic_size + 1 + PACKET_SIZE * sizeof(int16_t);
|
||||
[printf][14]("Topic: %s; topic size: %zu; Envelope size: %zu\n", TOPIC, topic_size, envelope_size);
|
||||
```
|
||||
|
||||
#### Sending messages
|
||||
#### 发送消息
|
||||
|
||||
Start a loop that sends **REPETITIONS** messages:
|
||||
启动一个循环,发送 **REPETITIONS** 条消息:
```
|
||||
@ -196,7 +195,7 @@ for (unsigned int i = 0; i < REPETITIONS; i++)
|
||||
...
|
||||
```
|
||||
|
||||
Before sending a message, fill a buffer of **PACKET_SIZE** values. The library provides signed integers of 16 bits. Since the dimension of an **int** in C is not defined, use an **int** with a specific width:
|
||||
发送消息前,先填充一个长度为 **PACKET_SIZE** 的缓冲区。本库提供的是 16 位有符号整数。因为 C 语言中 **int** 类型的大小与平台相关,并不确定,所以要使用指定宽度的 **int** 类型:
```
|
||||
@ -210,7 +209,7 @@ for (unsigned int j = 0; j < PACKET_SIZE; j++)
|
||||
[printf][14]("Read %u data values\n", PACKET_SIZE);
|
||||
```
|
||||
|
||||
The first step in message preparation and delivery is creating a ZeroMQ message and allocating the memory necessary for your message. This empty message is an envelope to store the data you will ship:
|
||||
消息的准备和发送中,第一步是创建 ZeroMQ 消息,为消息分配必要的内存空间。空白的消息是用于封装要发送的数据的:
```
|
||||
@ -227,7 +226,7 @@ if (rmi != 0)
|
||||
}
|
||||
```
|
||||
|
||||
Now that the memory is allocated, store the data in the ZeroMQ message "envelope." The **zmq_msg_data()** function returns a pointer to the beginning of the buffer in the envelope. The first part is the topic, followed by a space, then the binary data. Add whitespace as a separator between the topic and the data. To move along the buffer, you have to play with casts and [pointer arithmetic][18]. (Thank you, C, for making things straightforward.) Do this with:
|
||||
现在内存空间已分配,可以把数据存入 ZeroMQ 消息这个“信封”了。函数 **zmq_msg_data()** 返回一个指向信封缓冲区起始位置的指针。缓冲区的第一部分是主题,接着是一个空格字符作为分隔符,最后是二进制数据。要在缓冲区中移动,需要借助类型转换和 [指针运算][18]。(感谢 C 语言,让一切如此“直截了当”。)做法如下:
```
|
||||
@ -236,7 +235,7 @@ Now that the memory is allocated, store the data in the ZeroMQ message "envelope
|
||||
[memcpy][19]((void*)((char*)zmq_msg_data(&envelope) + 1 + topic_size), buffer, PACKET_SIZE * sizeof(int16_t));
|
||||
```
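上面的 C 代码构造的信封布局是“主题 + 空格 + 二进制数据”。下面用一段 Python 小示例演示这种布局(仅作示意,主题与数值都是假设的):

```python
import struct

# 假设的主题与数据,仅用于演示信封的内存布局
topic = b"fancyhw_data"
values = (1, -2, 3)

# 信封 = 主题 + 空格分隔符 + 二进制数据(16 位有符号整数,对应 C 端的 int16_t)
envelope = topic + b" " + struct.pack("%dh" % len(values), *values)

# 接收端按第一个空格拆分,即可还原出主题与数据
recv_topic, payload = envelope.split(b" ", 1)
print(recv_topic, struct.unpack("%dh" % (len(payload) // 2), payload))
```

这也正是后文 Python 接收端用 `split(b' ', 1)` 拆分消息的原因。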
|
||||
|
||||
Send the message through the **data_socket**:
|
||||
通过 **data_socket** 发送消息:
```
|
||||
@ -251,7 +250,7 @@ if (rs != envelope_size)
|
||||
}
|
||||
```
|
||||
|
||||
Make sure to dispose of the envelope after you use it:
|
||||
信封使用完毕后,务必将其销毁:
```
|
||||
@ -260,9 +259,9 @@ zmq_msg_close(&envelope);
|
||||
[printf][14]("Message sent; i: %u, topic: %s\n", i, TOPIC);
|
||||
```
|
||||
|
||||
#### Clean it up
|
||||
#### 清场
|
||||
|
||||
Because C does not provide [garbage collection][20], you have to tidy up. After you are done sending your messages, close the program with the clean-up needed to release the used memory:
|
||||
C 语言不提供 [垃圾收集][20] 功能,用完之后记得要自己扫尾。发送消息之后结束程序之前,需要运行扫尾代码,释放分配的内存:
```
|
||||
@ -287,9 +286,9 @@ if (rd != 0)
|
||||
return EXIT_SUCCESS;
|
||||
```
|
||||
|
||||
#### The entire C program
|
||||
#### 完整 C 代码
|
||||
|
||||
Save the full interface library below to a local file called **hw_interface.c**:
|
||||
保存下面完整的接口代码到本地名为 **hw_interface.c** 的文件:
```
|
||||
@ -408,16 +407,16 @@ int main(void)
|
||||
}
|
||||
```
|
||||
|
||||
Compile using the command:
|
||||
用如下命令编译:
```
|
||||
$ clang -std=c99 -I. hw_interface.c -lzmq -o hw_interface
|
||||
```
|
||||
|
||||
If there are no compilation errors, you can run the interface. What's great is that ZeroMQ **PUB** sockets can run without any applications sending or retrieving data. That reduces complexity because there is no obligation in terms of which process needs to start first.
|
||||
如果没有编译错误,你就可以运行这个接口了。贴心的是,ZeroMQ **PUB** 套接字可以在没有任何应用发送或接收数据的情况下运行,这降低了复杂度,因为不必关心哪个进程需要先启动。
|
||||
|
||||
Run the interface:
|
||||
运行该接口:
```
|
||||
@ -432,24 +431,24 @@ Read 16 data values
|
||||
...
|
||||
```
|
||||
|
||||
The output shows the data being sent through ZeroMQ. Now you need an application to read the data.
|
||||
输出显示数据已经通过 ZeroMQ 完成发送,现在要做的是让一个程序去读数据。
|
||||
|
||||
### Write a Python data processor
|
||||
### 编写 Python 数据处理器
|
||||
|
||||
You are now ready to pass the data from C to a Python application.
|
||||
现在已经准备好从 C 程序向 Python 应用发送数据了。
|
||||
|
||||
#### Libraries
|
||||
#### 库
|
||||
|
||||
You need two libraries to help transfer data. First, you need ZeroMQ bindings in Python:
|
||||
需要两个库帮助实现数据传输。首先是 ZeroMQ 的 Python 封装:
```
|
||||
$ python3 -m pip install zmq
|
||||
```
|
||||
|
||||
The other is the [**struct** library][21], which decodes binary data. It's commonly available with the Python standard library, so there's no need to **pip install** it.
|
||||
另一个就是 [**struct** 库][21],用于解码二进制数据。这个库是 Python 标准库的一部分,所以不需要使用 **pip** 命令安装。
|
||||
|
||||
The first part of the Python program imports both of these libraries:
|
||||
Python 程序的第一部分是导入这些库:
```
|
||||
@ -457,9 +456,9 @@ import zmq
|
||||
import struct
|
||||
```
|
||||
|
||||
#### Significant parameters
|
||||
#### 重要参数
|
||||
|
||||
To use ZeroMQ, you must subscribe to the same topic used in the constant **TOPIC** above:
|
||||
要使用 ZeroMQ,必须订阅与前面常量 **TOPIC** 中所用的相同的主题:
```
|
||||
@ -468,9 +467,9 @@ topic = "fancyhw_data".encode('ascii')
|
||||
print("Reading messages with topic: {}".format(topic))
|
||||
```
|
||||
|
||||
#### Initialization
|
||||
#### 初始化
|
||||
|
||||
Next, initialize the context and the socket. Use a **subscribe** socket (also known as a **SUB** socket), which is the natural partner of the **PUB** socket. The socket also needs to subscribe to the right topic:
|
||||
下一步,初始化上下文和套接字。使用 **subscribe** 套接字(也称为 **SUB** 套接字),它是 **PUB** 套接字的天生搭档。这个套接字还需要订阅正确的主题:
```
|
||||
@ -485,9 +484,9 @@ with zmq.Context() as context:
|
||||
...
|
||||
```
|
||||
|
||||
#### Receiving messages
|
||||
#### 接收消息
|
||||
|
||||
Start an infinite loop that waits for new messages to be delivered to the SUB socket. The loop will be closed if you press **Ctrl+C** or if an error occurs:
|
||||
启动一个无限循环,等待接收发送到 SUB 套接字的新消息。这个循环会在你按下 **Ctrl+C** 组合键或者内部发生错误时终止:
```
|
||||
@ -503,16 +502,16 @@ Start an infinite loop that waits for new messages to be delivered to the SUB so
|
||||
socket.close()
|
||||
```
|
||||
|
||||
The loop waits for new messages to arrive with the **recv()** method. Then it splits whatever is received at the first space to separate the topic from the content:
|
||||
这个循环等待 **recv()** 方法获取的新消息,然后将接收到的内容从第一个空格字符处分割开,从而得到 topic:
```
|
||||
binary_topic, data_buffer = socket.recv().split(b' ', 1)
|
||||
```
|
||||
|
||||
#### Decoding messages
|
||||
#### 解码消息
|
||||
|
||||
Python does yet not know that the topic is a string, so decode it using the standard ASCII encoding:
|
||||
Python 此时尚不知道 topic 是个字符串,使用标准 ASCII 编解码器进行解码:
```
|
||||
@ -522,7 +521,7 @@ print("Message {:d}:".format(i))
|
||||
print("\ttopic: '{}'".format(topic))
|
||||
```
|
||||
|
||||
The next step is to read the binary data using the **struct** library, which can convert shapeless binary blobs to significant values. First, calculate the number of values stored in the packet. This example uses 16-bit signed integers that correspond to an "h" in the **struct** [format][22]:
|
||||
下一步就是使用 **struct** 库读取二进制数据,它可以将无结构的二进制数据块转换为有意义的数值。首先,计算数据包中数值的个数。本例中使用的 16 位有符号整数对应 **struct** [格式字符][22] 中的 “h”:
```
|
||||
@ -531,14 +530,14 @@ packet_size = len(data_buffer) // struct.calcsize("h")
|
||||
print("\tpacket size: {:d}".format(packet_size))
|
||||
```
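这个推算过程可以用一个独立的小片段验证(缓冲区内容是假设的演示数据):

```python
import struct

# 构造一个假设的缓冲区:16 个 16 位有符号整数
data_buffer = struct.pack("16h", *range(16))

# "h" 对应 int16,struct.calcsize("h") == 2,据此推算数据包中数值的个数
packet_size = len(data_buffer) // struct.calcsize("h")
struct_format = "{:d}h".format(packet_size)

data = struct.unpack(struct_format, data_buffer)
print(packet_size, data)
```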
|
||||
|
||||
By knowing how many values are in the packet, you can define the format by preparing a string with the number of values and their types (e.g., "**16h**"):
|
||||
知道数据包中有多少组数据后,就可以通过构建一个包含数据组数和数据类型的字符串,来定义格式了(比如“**16h**”):
```
|
||||
struct_format = "{:d}h".format(packet_size)
|
||||
```
|
||||
|
||||
Convert that binary blob to a series of numbers that you can immediately print:
|
||||
将二进制数据串转换为可直接打印的一系列数字:
```
|
||||
@ -547,9 +546,9 @@ data = struct.unpack(struct_format, data_buffer)
|
||||
print("\tdata: {}".format(data))
|
||||
```
|
||||
|
||||
#### The full Python program
|
||||
#### 完整 Python 代码
|
||||
|
||||
Here is the complete data receiver in Python:
|
||||
下面是 Python 实现的完整的接收器:
```
|
||||
@ -598,9 +597,9 @@ with zmq.Context() as context:
|
||||
socket.close()
|
||||
```
|
||||
|
||||
Save it to a file called **online_analysis.py**. Python does not need to be compiled, so you can run the program immediately.
|
||||
将上面的内容保存到名为 **online_analysis.py** 的文件。Python 代码不需要编译,你可以直接运行它。
|
||||
|
||||
Here is the output:
|
||||
运行输出如下:
```
|
||||
@ -618,13 +617,13 @@ Message 1:
|
||||
...
|
||||
```
|
||||
|
||||
### Conclusion
|
||||
### 小结
|
||||
|
||||
This tutorial describes an alternative way of gathering data from C-based hardware interfaces and providing it to Python-based infrastructures. You can take this data and analyze it or pass it off in any number of directions. It employs a messaging library to deliver data between a "gatherer" and an "analyzer" instead of having a monolithic piece of software that does everything.
|
||||
本教程介绍了另一种从基于 C 的硬件接口收集数据、并将其提供给基于 Python 的基础设施的方式。你可以拿到这些数据进行分析,也可以将其转送到任意地方。本方案采用一个消息库在“收集者”和“分析者”之间传递数据,而不是用一个包办一切的单体软件。
|
||||
|
||||
This tutorial also increases what I call "software granularity." In other words, it subdivides the software into smaller units. One of the benefits of this strategy is the possibility of using different programming languages at the same time with minimal interfaces acting as shims between them.
|
||||
本教程还引出了我称之为“软件粒度”的概念,换言之,就是将软件细分为更小的单元。这种策略的优点之一是,可以同时使用不同的编程语言,各部分之间只需极简的接口来衔接。
|
||||
|
||||
In practice, this design allows software engineers to work both more collaboratively and independently. Different teams may work on different steps of the analysis, choosing the tool they prefer. Another benefit is the parallelism that comes for free since all the processes can run in parallel. The [ZeroMQ messaging library][3] is a remarkable piece of software that makes all of this much easier.
|
||||
实践中,这种设计使得软件工程师能以更独立、合作更高效的方式做事。不同的团队可以专注于数据分析的不同方面,可以选择自己中意的实现工具。这种做法的另一个优点是实现了零代价的并行,因为所有的进程都可以并行运行。[ZeroMQ 消息库][3] 是个令人赞叹的软件,使用它可以让工作大大简化。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -632,7 +631,7 @@ via: https://opensource.com/article/20/3/zeromq-c-python
|
||||
|
||||
作者:[Cristiano L. Fontana][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[silentdawn-zz](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,261 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (silentdawn-zz)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Never forget your password with this Python encryption algorithm)
|
||||
[#]: via: (https://opensource.com/article/20/6/python-passwords)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
|
||||
|
||||
有了这个 Python 加密算法,你再也不用担心忘记密码了
|
||||
======
|
||||
本密码保护算法使用 Python 实现,基于 Shamir 秘密共享算法,可以有效避免黑客窃取和自己不经意忘记引发的风险和不便。
|
||||
![Searching for code][1]
|
||||
|
||||
很多人使用密码管理器来保密存储自己在用的各种密码。密码管理器的关键环节之一是主密码,主密码保护着所有其它密码。这种情况下,主密码本身就是风险所在。任何知道你的主密码的人,都可以视你的密码保护若无物,畅行无阻。自然而然,为了保证主密码的安全性,你会选用很难想到的密码,把它牢记在脑子里,甚至还有很多其它你能想到的 [各种方法][2]。
|
||||
|
||||
但是万一主密码泄露了或者忘记了,后果是什么?可能你要去个心仪的没有现代技术覆盖的岛上旅行上个把月,在开心戏水之后享用美味菠萝的时刻,突然记不清自己的密码是什么了。是“山尖一寺一壶酒”?还是“一去二三里,烟村四五家”?反正当时选密码的时候感觉浑身都是机灵,现在则后悔当初何必作茧自缚。
|
||||
|
||||
![XKCD comic on password strength][3]
|
||||
|
||||
([XKCD][4], [CC BY-NC 2.5][5])
|
||||
|
||||
当然,你不会把自己的主密码告诉其它任何人,因为这是密码管理的首要原则。有没有其它变通的办法,免除这种难以承受的密码之重?
|
||||
|
||||
试试 **[Shamir 秘密共享算法][6]**,一种可以将保密内容进行分块保存,且只能将片段拼合才能恢复保密内容的算法。
|
||||
|
||||
先分别通过一个古代的和一个现代的故事,看看 Shamir 秘密共享算法究竟是怎么回事吧。
|
||||
|
||||
这些故事的隐含前提是你对密码学有起码的了解,必要的话,你可以先温习一下 [密码学与公钥基础设施引论][7].
|
||||
|
||||
### 一个古代关于加解密的故事
|
||||
|
||||
古代某国,王有个大秘密,很大很大的秘密:
```
|
||||
def int_from_bytes(s):
|
||||
acc = 0
|
||||
for b in s:
|
||||
acc = acc * 256
|
||||
acc += b
|
||||
return acc
|
||||
|
||||
secret = int_from_bytes("terrible secret".encode("utf-8"))
|
||||
```
|
||||
|
||||
大到他自己的孩子都不能轻易信任。他有五个王子,但前程危机重重。他的孩子需要在他百年之后用这个秘密来保卫国家,而国王又不能忍受自己的孩子在他们还记得自己的时候就知道这些秘密,尤其是这种状态可能要持续几十年。
|
||||
|
||||
所以,国王动用大力魔术,将这个秘密分为了五个部分。他知道,可能有一两个孩子不会遵从他的遗嘱,但绝对不会同时有三个或三个以上这样:
```
|
||||
from mod import Mod
|
||||
from os import urandom
|
||||
```
|
||||
|
||||
国王精通 [有限域][8] 和 _随机性_,当然,对他来说,使用 Python 分割这个秘密也是小菜一碟。
|
||||
|
||||
第一步是选择一个大质数——第 13 个 [梅森质数][9] (`2**521 - 1`),他让人把这个数誊写到纸上,封之金匮,藏之后殿:
```
|
||||
P = 2**521 - 1
|
||||
```
|
||||
|
||||
但这不是要保守的秘密:这只是 _公钥_。
|
||||
|
||||
国王知道,如果 `P` 是一个质数,用 `P` 对数字取模,就构成了一个数学上的 [域][10]:在域中可以自由进行加、减、乘、除运算。当然,做除法时,除数不能为 0。
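这一性质可以用纯 Python 直接验证:在模 `P` 的域中,“除法”就是乘以模逆元(下面的数字是随意取的示例;求逆用到了 Python 3.8+ 的三参数 `pow()`):

```python
# 文中使用的第 13 个梅森素数
P = 2**521 - 1

# 示例:在模 P 的域中计算 a / b,即 a 乘以 b 的模逆元
a, b = 1234, 5678
quotient = (a * pow(b, -1, P)) % P

# 验证:商再乘回除数,应还原出被除数
print((quotient * b) % P == a)
```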
|
||||
|
||||
国王日理万机,方便起见,他在做模运算时使用了 PyPI 中的 [`mod`][11] 模块,这个模块实现了各种模运算算法。
|
||||
|
||||
他确认过,自己的秘密比 `P` 要短:
```
|
||||
secret < P  # True
|
||||
```
|
||||
|
||||
将秘密转换为 `P` 的模,`mod P`:
```
|
||||
secret = mod.Mod(secret, P)
|
||||
```
|
||||
|
||||
为了使任意三个孩子掌握的片段就可以重建这个秘密,他还得生成另外两个部分,并混杂到一起:
```
|
||||
polynomial = [secret]
|
||||
for i in range(2):
|
||||
polynomial.append(Mod(int_from_bytes(urandom(16)), P))
|
||||
len(polynomial)  # 3
|
||||
```
|
||||
|
||||
下一步就是在随机选择的点上计算某 [多项式][12] 的值,即计算 `polynomial[0] + polynomial[1]*x + polynomial[2]*x**2 ...`
|
||||
|
||||
虽然有第三方模块可以计算多项式的值,但那并不是针对有限域内的运算的,所以,国王还得亲自操刀,写出计算多项式的代码:
```
|
||||
def evaluate(coefficients, x):
|
||||
acc = 0
|
||||
power = 1
|
||||
for c in coefficients:
|
||||
acc += c * power
|
||||
power *= x
|
||||
return acc
|
||||
```
|
||||
|
||||
再下一步,国王选择五个不同的点,计算多项式的值,并分别交给五个孩子,让他们各自保存一份:
```
|
||||
shards = {}
|
||||
for i in range(5):
|
||||
x = Mod(int_from_bytes(urandom(16)), P)
|
||||
y = evaluate(polynomial, x)
|
||||
shards[i] = (x, y)
|
||||
```
|
||||
|
||||
正如国王所虑,不是每个孩子都正直守信。其中有两个孩子,在他尸骨未寒的时候,就想从自己掌握的秘密片段中窥出些什么,但穷极所能,终无所获。另外三个孩子听说了这个事,合力将这两人永远驱逐:
```
|
||||
del shards[2]
|
||||
del shards[3]
|
||||
```
|
||||
|
||||
二十年弹指一挥间,奉先王遗命,三个孩子将合力恢复出先王的大秘密。他们将各自的秘密片段拼合在一起:
```
|
||||
retrieved = list(shards.values())
|
||||
```
|
||||
|
||||
然后是 40 天没日没夜的苦干。这是个大工程,他们虽然都懂些 Python,但都不如前国王精通。
|
||||
|
||||
最终,揭示秘密的时刻到了。
|
||||
|
||||
用于恢复秘密的代码基于 [拉格朗日插值][13],它利用多项式在 `n` 个非 0 位置上的值,来计算其在 `0` 处的值,这里的 `n` 指多项式的阶数。其原理是:可以显式构造出一组多项式,使每个多项式在 `t[0]` 处的值是 `1`,而在其它 `t[i]`(`i` 不为 `0`)处的值是 `0`;由于求多项式的值是线性运算,只需分别求出 _这些_ 多项式的值,再用它们进行插值:
```
|
||||
from functools import reduce
|
||||
from operator import mul
|
||||
|
||||
def retrieve_original(secrets):
|
||||
x_s = [s[0] for s in secrets]
|
||||
acc = Mod(0, P)
|
||||
for i in range(len(secrets)):
|
||||
others = list(x_s)
|
||||
cur = others.pop(i)
|
||||
factor = Mod(1, P)
|
||||
for el in others:
|
||||
factor *= el * (el - cur).inverse()
|
||||
acc += factor * secrets[i][1]
|
||||
return acc
|
||||
```
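这段恢复代码可以用一个独立的小例子来验证(仅作示意:用小素数 257 代替正文中的梅森素数,多项式系数也是随意假设的),任意三个片段都能恢复出秘密:

```python
P = 257  # 演示用的小素数,正文中用的是 2**521 - 1

def evaluate(coefficients, x):
    # 在模 P 的域中计算多项式的值
    acc, power = 0, 1
    for c in coefficients:
        acc = (acc + c * power) % P
        power = (power * x) % P
    return acc

def retrieve_original(shards):
    # 拉格朗日插值:由若干 (x, y) 点求多项式在 0 处的值
    x_s = [x for x, _ in shards]
    acc = 0
    for cur, y in shards:
        factor = 1
        for el in x_s:
            if el != cur:
                factor = factor * el * pow(el - cur, -1, P) % P
        acc = (acc + factor * y) % P
    return acc

secret = 42
polynomial = [secret, 166, 94]  # 二次多项式:任意 3 份片段即可恢复秘密
shards = [(x, evaluate(polynomial, x)) for x in (1, 2, 3, 4, 5)]

print(retrieve_original(shards[:3]), retrieve_original(shards[2:]))
```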
|
||||
|
||||
这代码实在是太复杂了,40 天能算出结果已经够快了。雪上加霜的是,他们只能利用五个秘密片段中的三个来完成这个运算,这让他们万分紧张:
```
|
||||
retrieved_secret = retrieve_original(retrieved)
|
||||
```
|
||||
|
||||
后事如何?
```
|
||||
retrieved_secret == secret  # True
|
||||
```
|
||||
|
||||
数学这个魔术的优美之处就在于它每一次都是那么靠谱,无一例外。国王的孩子们,曾经的孩童,而今已是壮年,足以理解先王的初衷,并以先王的锦囊妙计保卫了国家,并继之以繁荣昌盛!
|
||||
|
||||
### 关于 Shamir 秘密共享算法的现代故事
|
||||
|
||||
现代,很多人也被类似的大秘密困扰着:密码管理器的主密码!几乎没有人能找到一个可以完全托付自己最深秘密的人。好消息是,找到一个其中至少三人不会串通作恶的五人组,并不算太困难。
|
||||
|
||||
同样是在现代,比较幸运的是,我们不必再像国王那样自己动手分割要守护的秘密。拜现代 _开源_ 技术所赐,这都可以使用现成的软件完成。
|
||||
|
||||
假设你有五个不敢完全信任,但还可以有点信任的人:金、木、水、火、土。
|
||||
|
||||
安装并运行 `ssss` 分割密钥:
```
|
||||
$ echo 'long legs travel fast' | ssss-split -t 3 -n 5
|
||||
Generating shares using a (3,5) scheme with dynamic security level.
|
||||
Enter the secret, at most 128 ASCII characters: Using a 168 bit security level.
|
||||
1-797842b76d80771f04972feb31c66f3927e7183609
|
||||
2-947925f2fbc23dc9bca950ef613da7a4e42dc1c296
|
||||
3-14647bdfc4e6596e0dbb0aa6ab839b195c9d15906d
|
||||
4-97c77a805cd3d3a30bff7841f3158ea841cd41a611
|
||||
5-17da24ad63f7b704baed220839abb215f97d95f4f8
|
||||
```
|
||||
|
||||
这确实是个非常牛的主密码:`long legs travel fast`,绝不能把它完整的托付给任何人!那就把五个片段分别交给还比较可靠的伙伴,金、木、水、火、土:
|
||||
|
||||
* 把 `1` 给金。
|
||||
* 把 `2` 给木。
|
||||
* 把 `3` 给水。
|
||||
* 把 `4` 给火。
|
||||
* 把 `5` 给土。
|
||||
|
||||
|
||||
|
||||
然后,你开启你的惬意之旅,整整一个月,流连于海边温暖的沙滩,整整一个月,未接触任何电子设备。没用多久,把自己的主密码忘到了九霄云外。
|
||||
|
||||
木和水也在旅行中,你托付给他们保管的密钥片段保存的好好的,在他们各自的密码管理器中,但不幸的是,他们和你一样,也忘了自己的 _主密码_。
|
||||
|
||||
没关系。
|
||||
|
||||
联系金,他保管的密钥片段是 `1-797842b76d80771f04972feb31c66f3927e7183609`;火,一直替你的班,很高兴你能尽快重返岗位,把自己掌握的片段给了你,`4-97c77a805cd3d3a30bff7841f3158ea841cd41a611`;土,收到你给的跑腿费才将自己保管的片段翻出来发给你,`5-17da24ad63f7b704baed220839abb215f97d95f4f8`。
|
||||
|
||||
有了这三个密钥片段,运行:
```
|
||||
$ ssss-combine -t 3
|
||||
Enter 3 shares separated by newlines:
|
||||
Share [1/3]: 1-797842b76d80771f04972feb31c66f3927e7183609
|
||||
Share [2/3]: 4-97c77a805cd3d3a30bff7841f3158ea841cd41a611
|
||||
Share [3/3]: 5-17da24ad63f7b704baed220839abb215f97d95f4f8
|
||||
Resulting secret: long legs travel fast
|
||||
```
|
||||
|
||||
就这么简单,有了 _开源_ 技术加持,你也可以活的像国王一样滋润!
|
||||
|
||||
### 自己的安全不是自己一个人的事
|
||||
|
||||
密码管理是当今网络生活必备技能,当然要选择复杂的密码,来保证安全性,但这不是全部。来用 Shamir 秘密共享算法,和他人共同安全的存储你的密码吧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/6/python-passwords
|
||||
|
||||
作者:[Moshe Zadka][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[silentdawn-zz](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/moshez
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
|
||||
[2]: https://monitor.firefox.com/security-tips
|
||||
[3]: https://opensource.com/sites/default/files/uploads/password_strength-xkcd.png (XKCD comic on password strength)
|
||||
[4]: https://imgs.xkcd.com/comics/password_strength.png
|
||||
[5]: https://creativecommons.org/licenses/by-nc/2.5/
|
||||
[6]: https://en.wikipedia.org/wiki/Secret_sharing#Shamir's_scheme
|
||||
[7]: https://opensource.com/article/18/5/cryptography-pki
|
||||
[8]: https://en.wikipedia.org/wiki/Finite_field
|
||||
[9]: https://en.wikipedia.org/wiki/Mersenne_prime
|
||||
[10]: https://en.wikipedia.org/wiki/Field_(mathematics)
|
||||
[11]: https://pypi.org/project/mod/
|
||||
[12]: https://en.wikipedia.org/wiki/Polynomial
|
||||
[13]: https://www.math.usm.edu/lambers/mat772/fall10/lecture5.pdf
|
@ -1,88 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What you need to know about automation testing in CI/CD)
|
||||
[#]: via: (https://opensource.com/article/20/7/automation-testing-cicd)
|
||||
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)
|
||||
|
||||
你需要了解有关 CI/CD 中的自动化测试的知识
|
||||
======
|
||||
持续集成和持续交付是由测试提供支持。以下是如何做的。
|
||||
![Net catching 1s and 0s or data in the clouds][1]
|
||||
|
||||
>“如果一切似乎都在控制之中,那么你就不会足够快。” —Mario Andretti
|
||||
|
||||
测试自动化意味着在软件开发过程中持续专注于尽早发现缺陷、错误和 bug。这要借助那些把质量视为最高价值的工具来完成,它们旨在 _确保_ 质量,而不仅仅是追求质量。
|
||||
|
||||
持续集成/持续交付(CI / CD)解决方案(也称为 DevOps 管道)最引人注目的功能之一是可以更频繁地进行测试,而又不会给开发人员或操作人员增加更多的手动工作。让我们谈谈为什么这很重要。
|
||||
|
||||
### 为什么要在 CI/CD 中自动化测试?
|
||||
|
||||
敏捷团队迭代更快,以更高的速度交付软件、提升客户满意度,而这些压力可能会损害质量。全球化竞争造成了对缺陷的 _低容忍度_,同时也加大了敏捷团队 _更快迭代_ 交付软件的压力。减轻这种压力的行业解决方案是什么?是 [DevOps][2]。
|
||||
|
||||
DevOps 是一个有很多定义的大创意,但是对 DevOps 成功至关重要的一项技术是 CI/CD。通过软件开发流程设计一个连续的改进周期可以带来新的测试机会。
|
||||
|
||||
### 这对测试人员意味着什么?
|
||||
|
||||
对于测试人员,这通常意味着他们必须:
|
||||
|
||||
* 更早且更频繁地进行测试(使用自动化)
|
||||
* 持续测试“真实世界”的工作流(自动和手动)
|
||||
|
||||
|
||||
|
||||
更具体地说,任何形式的测试(无论是由编写代码的开发人员运行还是由质量保证工程师团队设计)的作用是利用 CI/CD 基础架构在快速推进的同时提高质量。
|
||||
|
||||
### 测试人员还需要做什么?
|
||||
|
||||
具体点说,测试人员负责:
|
||||
|
||||
* 测试新的和现有的软件应用
|
||||
* 通过根据系统要求评估软件来验证功能
|
||||
* 利用自动化测试工具来开发和维护可重复使用的自动化测试
|
||||
* 与 scrum 团队的所有成员合作,了解正在开发的功能以及实施的技术设计,以设计和开发准确、高质量的自动化测试
|
||||
* 分析记录的用户需求,并创建或协助设计针对中度到高度复杂的软件或 IT 系统的测试计划
|
||||
* 开发自动化测试,并与职能团队一起审查和评估测试方案
|
||||
* 与技术团队合作,确定在开发环境中自动化测试的正确方法
|
||||
* 与团队合作,通过自动化测试来理解和解决软件问题,并回应有关修改或增强的建议
|
||||
* 参与需求梳理,估算和其他敏捷 scrum 仪式
|
||||
* 协助定义标准和流程,以支持测试活动和材料(例如脚本、配置、程序、工具、计划和结果)
|
||||
|
||||
|
||||
|
||||
测试是一项艰巨的工作,但这是有效构建软件的重要组成部分。
|
||||
|
||||
### 哪些持续测试很重要?
|
||||
|
||||
你可以使用多种测试。不同的类型并不是学科之间的界限。相反,它们是表示测试的不同方式。比较测试类型不太重要,而覆盖每种测试类型更重要。
|
||||
|
||||
* **功能测试:**确保软件具有其要求的功能
|
||||
* **单元测试:**独立测试软件的较小单元/组件以检查其功能
|
||||
* **负载测试:**在重负载或使用期间测试软件的性能
|
||||
* **压力测试:**确定承受压力(最大负载)时软件的断点
|
||||
* **集成测试:**测试组合或集成的一组组件的输出
|
||||
* **回归测试:**当修改任意组件(无论多么小),测试整个应用的功能
|
||||
|
||||
|
||||
|
||||
### 总结
|
||||
|
||||
任何包含持续测试的软件开发过程都将朝着建立关键反馈环路的方向发展,以快速发展并构建有效的软件。最重要的是,该实践将质量内置到 CI/CD 管道中,并意味着了解在软件开发生命周期中提高速度同时减少风险和浪费之间的联系。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/automation-testing-cicd
|
||||
|
||||
作者:[Taz Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/heronthecli
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
|
||||
[2]: https://opensource.com/resources/devops
|
@ -7,17 +7,16 @@
|
||||
[#]: via: (https://opensource.com/article/20/7/d-programming)
|
||||
[#]: author: (Lawrence Aberba https://opensource.com/users/aberba)
|
||||
|
||||
The feature that makes D my favorite programming language
|
||||
让 D 语言成为我最喜欢的编程语言的功能
|
||||
======
|
||||
UFCS gives you the power to compose reusable code that has a natural
|
||||
flow without sacrificing convenience.
|
||||
UFCS 让你能够编写自然流畅的可重用代码,而不牺牲便利性。
|
||||
![Coding on a computer][1]
|
||||
|
||||
Back in 2017, I wrote about why the [D programming language is a great choice for development][2]. But there is one outstanding feature in D I didn't expand enough on: the [Universal Function Call Syntax][3] (UFCS). UFCS is a [syntactic sugar][4] in D that enables chaining any regular function on a type (string, number, boolean, etc.) like its member function of that type.
|
||||
早在 2017 年,我就写了为什么 [D 语言是开发的绝佳选择][2]的文章。但是 D 语言中有一个杰出功能,我没有充分扩展:[通用函数调用语法][3](UFCS)。UFCS 是 D 语言中的[语法糖][4],它可以在类型(字符串、数字、布尔值等)上链接任何常规函数,例如该类型的成员函数。
|
||||
|
||||
If you don't already have D installed, [install a D compiler][5] so you can [run the D code][6] in this article yourself.
|
||||
如果你尚未安装 D 语言,请[安装 D 语言编译器][5],以便你可以自己[运行 D 代码][6]。
|
||||
|
||||
Consider this example code:
|
||||
考虑以下示例代码:
|
|
||||
@ -41,7 +40,7 @@ void main()
|
||||
}
|
||||
```
|
||||
|
||||
Compile this with your favorite D compiler to see what this simple example application does:
|
||||
使用你喜欢的 D 语言编译器进行编译,查看这个简单示例应用做了什么:
```
|
||||
@ -50,7 +49,7 @@ $ ./ufcs_demo
|
||||
[2, 4]
|
||||
```
|
||||
|
||||
But with UFCS as a built-in feature of D, you can also write your code in a natural way:
|
||||
但是,借助 D 语言内置的 UFCS,你还可以用更自然的方式编写代码:
```
|
||||
@ -59,7 +58,7 @@ writeln([1, 2, 3, 4].evenNumbers());
|
||||
...
|
||||
```
|
||||
|
||||
or completely remove the now-redundant parenthesis to make it feel like `evenNumbers` is a property:
|
||||
或者干脆删除此时已多余的括号,让 `evenNumbers` 用起来像是一个属性:
```
|
||||
@ -68,7 +67,7 @@ writeln([1, 2, 3, 4].evenNumbers); // prints 2, 4
|
||||
...
|
||||
```
|
||||
|
||||
So the complete code now becomes:
|
||||
因此,完整的代码现在变为:
```
|
||||
@ -92,7 +91,7 @@ void main()
|
||||
}
|
||||
```
|
||||
|
||||
Compile it with your favorite D compiler and try it out. As expected, it produces the same output:
|
||||
使用你最喜欢的 D 语言编译器进行编译,然后尝试一下。如预期的那样,它产生相同的输出:
```
|
||||
@ -101,9 +100,9 @@ $ ./ufcs_demo
|
||||
[2, 4]
|
||||
```
|
||||
|
||||
During compilation, the compiler _automatically_ places the array as the first argument to the function. This is a regular pattern that makes using D such a joy, so it very much feels the same as you naturally think about your code. The result is functional-style programming.
|
||||
在编译过程中,编译器会 _自动地_ 把数组作为第一个参数传给函数。这个常规模式与你对代码的自然思路十分吻合,因而让使用 D 语言成为一种乐趣。其结果就是函数式风格的编程。
|
||||
|
||||
You can probably guess what this prints:
|
||||
你大概能猜到这段代码会打印什么:
```
|
||||
@ -118,7 +117,7 @@ void main()
|
||||
}
|
||||
```
|
||||
|
||||
But just to confirm:
|
||||
确认一下:
```
|
||||
@ -127,11 +126,11 @@ $ ./cool
|
||||
D is cool
|
||||
```
|
||||
|
||||
Combined with [other D features][7], UFCS gives you the power to compose reusable code that has a natural flow to it without sacrificing convenience.
|
||||
结合 [D 语言的其他功能][7],UFCS 让你能够编写自然流畅的可重用代码,而不牺牲便利性。
|
||||
|
||||
### Time to try D
|
||||
### 是时候尝试 D 语言了
|
||||
|
||||
As I've written before, D is a great language for development. It's easy to install from [the D download page][8], so download the compiler, take a look at the examples, and experience D for yourself.
|
||||
就像我之前写的那样,D 语言是一种很棒的开发语言。从 [D 语言的下载页面][8]可以很容易地进行安装,因此请下载编译器,查看示例,并亲自体验 D 语言。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -139,7 +138,7 @@ via: https://opensource.com/article/20/7/d-programming
|
||||
|
||||
作者:[Lawrence Aberba][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
Loading…
Reference in New Issue
Block a user