Merge branch 'master' of https://github.com/LCTT/TranslateProject into translating
commit 86a908757c
@@ -1,58 +1,55 @@
[#]: collector: (lujun9972)
[#]: translator: (runningwater)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11086-1.html)
[#]: subject: (How to set up virtual environments for Python on MacOS)
[#]: via: (https://opensource.com/article/19/6/virtual-environments-python-macos)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg/users/moshez/users/mbbroberg/users/moshez)

How to set up virtual environments for Python on MacOS
======

> Use pyenv and virtualenvwrapper to manage your virtual environments and save yourself a lot of confusion.

![][1]

As a Python developer and MacOS user, the first thing to do with a new machine is to set up a Python development environment. Here is the best practice (although we have already written about [other ways to manage Python on MacOS][2]).

### Preparation

First, open a terminal and type `xcode-select --install` into its cold, unwelcoming window. After you click to confirm, the basic development environment will be in place. This step is required on MacOS to set up the local development utilities, which, according to [OS X Daily][3], include "many commonly used tools, utilities, and compilers, such as make, GCC, clang, perl, svn, git, size, strip, strings, libtool, cpp, what, and many other useful commands that come installed by default on Linux."

Next, install [Homebrew][4] by running the following Ruby script:

```
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```

If, like me, you are wary of casually running scripts downloaded from the internet, click on the script above and take a closer look at exactly what it does.

Once the installation is done, congratulations, you now have an excellent package management tool. Naturally, you might think the next step is `brew install python` or some such command. No, ha! Homebrew does give us a managed version of Python, but letting that tool manage our Python environment quickly gets out of control. We want [pyenv][5], a simple Python version management tool that can be installed and run on [many operating systems][6]. Run the following:

```
$ brew install pyenv
```

To have `pyenv` run every time you open a command prompt, add the following to your configuration file (`.bash_profile` by default on MacOS, located in your home directory):

```
$ cd ~/
$ echo 'eval "$(pyenv init -)"' >> .bash_profile
```

After adding this line, every terminal starts `pyenv` to manage its `PATH` environment variable and inserts the version of Python you want to run (rather than whatever initial version happens to appear in the environment; for more details, read "[How to set your PATH variable in Linux][7]"). Open a new terminal for the updated `.bash_profile` to take effect.

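As a quick sanity check before going further (a small addition to the original walkthrough, assuming the Homebrew installation above succeeded), you can confirm that the new terminal really is picking up `pyenv`:

```
# Confirm the pyenv binary is on the PATH and reports its version
$ pyenv --version
# List the Python versions pyenv knows about; only "system" appears until you install one
$ pyenv versions
```

If `pyenv versions` shows only `system` at this point, that is expected; versions added later with `pyenv install` will show up here as well.
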
Before installing your preferred version of Python, you first need a couple of helpful tools:

```
$ brew install zlib sqlite
```

The [zlib][8] compression algorithm and the [SQLite][9] database are dependencies of `pyenv` and often [cause build problems][10] when they are not configured correctly. Add these exports to your current terminal window to make sure the installation completes:

```
$ export LDFLAGS="-L/usr/local/opt/zlib/lib -L/usr/local/opt/sqlite/lib"
$ export CPPFLAGS="-I/usr/local/opt/zlib/include -I/usr/local/opt/sqlite/include"
```

Now that the groundwork is done, it is time to install a Python version fit for a modern human being:

```
$ pyenv install 3.7.3
```

### Add a virtual environment

Once that completes, you can start enjoying virtual environments. Without the next steps, you would share a single Python development environment across all of your projects. Using virtual environments to isolate dependency management on a per-project basis is clearer and more reproducible than what Python offers out of the box. For these reasons, install `virtualenvwrapper` into the Python environment:

```
$ pyenv global 3.7.3
$ $(pyenv which python3) -m pip install virtualenvwrapper
```

Open your `.bash_profile` again and add the following, so that it takes effect every time you open a new terminal:

```
# We want to regularly go to our virtual environment directory
$ echo 'mkdir -p $WORKON_HOME' >> .bash_profile
$ echo '. ~/.pyenv/versions/3.7.3/bin/virtualenvwrapper.sh' >> .bash_profile
```

Close the terminal and reopen it (or run `exec /bin/bash -l` to refresh the current terminal session), and you will see `virtualenvwrapper` initializing its configuration:

```
$ exec /bin/bash -l
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/get_env_detail
```

From this point on, all of your work happens in a virtual environment, which lets you develop safely in temporary environments. With this toolchain, you can set up multiple projects and switch between them, depending on what you are working on:

```
$ mkvirtualenv test1
Using base prefix '/Users/moshe/.pyenv/versions/3.7.3'
postmkproject premkproject
(test1)$
```

Here, the `deactivate` command exits the current environment.

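If you lose track of which environments exist, `virtualenvwrapper` can also list them and re-activate one by name. This is a supplementary sketch rather than part of the original article; it relies only on standard `virtualenvwrapper` commands and the `test1` environment created above:

```
# With no arguments, workon lists every environment under $WORKON_HOME
$ workon
# Re-activate an environment by name; the prompt changes to show it is active
$ workon test1
(test1)$
```
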
### Recommended practices

You may already keep your longer-term projects in a directory like `~/src`. When you start a new project, go into that directory, add a subfolder for the project, and then use the power of the Bash interpreter to name the virtual environment after the directory automatically. For example, for a project named "pyfun":

```
$ mkdir -p ~/src/pyfun && cd ~/src/pyfun
$
```

When you need to work on the project again, just go into that directory and reconnect to the virtual environment by typing:

```
$ cd ~/src/pyfun
(pyfun)$ workon .
```

Initializing a virtual environment means taking a point-in-time copy of your Python version and the modules that are loaded. Because dependencies can change significantly, you will occasionally need to refresh a project's virtual environment. You can do that safely by deleting the virtual environment while leaving the source code untouched, as shown here:

```
$ cd ~/src/pyfun
$ rmvirtualenv $(basename $(pwd))
$ mkvirtualenv $(basename $(pwd))
```

This way of managing virtual environments with `pyenv` and `virtualenvwrapper` spares you the headache of mismatched Python versions between your development and runtime environments. It is the simplest way to avoid confusion, especially when you work on a large team.

If you are a beginner just preparing to configure a Python environment, read [Using Python 3 on MacOS][2]. Do you have other beginner or intermediate Python questions? Leave a comment and we will consider them for the next article.

via: https://opensource.com/article/19/6/virtual-environments-python-macos

Author: [Matthew Broberg][a]
Selected by: [lujun9972][b]
Translator: [runningwater](https://github.com/runningwater)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

@@ -0,0 +1,72 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cisco goes deeper into photonic, optical technology with $2.6B Acacia buy)
|
||||
[#]: via: (https://www.networkworld.com/article/3407706/cisco-goes-deeper-into-photonic-optical-technology-with-2-6b-acacia-buy.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Cisco goes deeper into photonic, optical technology with $2.6B Acacia buy
|
||||
======
|
||||
Cisco: Optical-interconnect technologies are becoming increasingly strategic for data centers, service providers
|
||||
![KTSimage / Getty Images][1]
|
||||
|
||||
Looking to bulk-up its optical systems portfolio, Cisco says it intends to buy Acacia Communications for approximately $2.6 billion. The deal is Cisco’s largest since it [laid out $3.7B for AppDynamics][2] in 2017.
|
||||
|
||||
Acacia develops, manufactures and sells high-speed [coherent optical][3] interconnect products that are designed to transform networks linking data centers, cloud and service providers. Cisco is familiar with Acacia as it has been a “significant” customer of the optical firm for about five years, Cisco said.
|
||||
|
||||
**[ Also see [How to plan a software-defined data-center network][4] and [Efficient container use requires data-center software networking][5].]**
|
||||
|
||||
Acacia’s other customers include Nokia Oyj, Huawei and ZTE. Cisco accounts for about 18% of its revenue, [according to Bloomberg’s supply-chain analysis][6].
|
||||
|
||||
"With the explosion of bandwidth in the multi-cloud era, optical interconnect technologies are becoming increasingly strategic,” said David Goeckeler, executive vice president and general manager of Cisco's networking and security business in a statement. “The acquisition of Acacia will allow us to build on the strength of our switching, routing and optical networking portfolio to address our customers' most demanding requirements."
|
||||
|
||||
For Cisco, one of the key drivers for making this deal was Acacia’s coherent technology – “a fancy term that means the ability to send optical signals over long distances,” said Bill Gartner, senior vice president of Cisco’s Optical Systems and Optics business. “That technology today is typically delivered via a line card on a big chassis in a separate optical layer, but with Acacia’s digital signal processing, ASIC and other technology we are looking to move that from a line card to a pluggable module that increases network capacity, but also reduces complexity and costs.”
|
||||
|
||||
In addition, Acacia uses silicon photonics as the platform for integration of multiple photonic functions for coherent optics, Gartner wrote in a [blog][7] about the acquisition. “Leveraging the advances in silicon photonics, each new generation of coherent optics products has enabled higher data transmission rates, lower power and higher performance than the one before.”
|
||||
|
||||
Recent research from [IHS Markit][8] shows that data center interconnections are the fastest growing segment for coherent transceivers.
|
||||
|
||||
“Acacia’s digital signal processing and small form-factor long-distance communications technology is strong and will be very valuable to Cisco in the long and short term,” said Jimmy Yu, vice president of the Dell'Oro Group.
|
||||
|
||||
The question many analysts have is the impact the sale will have on other Acacia customers, Yu said. “If it wasn’t for Acacia selling to others, [such as Huawei, ZTE and Infinera] I don’t think those vendors would have done as well as they have, and when Cisco owns Acacia it could be a different story,” Yu said.
|
||||
|
||||
The Acacia buy will significantly boost Cisco’s optical portfolio for application outside the data center. In February [Cisco closed a deal to buy optical-semiconductor firm Luxtera][9] for $660 million, bringing it the advanced optical technology customers will need for speed and throughput for future data center and webscale networks.
|
||||
|
||||
The combination of Cisco’s and Luxtera’s capabilities in 100GbE/400GbE optics, silicon and process technology will help customers build future-proof networks optimized for performance, reliability and cost, Cisco stated.
|
||||
|
||||
The reason Cisco snatched up Luxtera was its silicon photonics technology that moves data among computer chips optically, which is far quicker than today's electrical transfer, Cisco said. Photonics will be the underpinning of future switches and other networking devices.
|
||||
|
||||
"It seems that Cisco is going all in on being a supplier of optical components and optical pluggable: Luxtera (client side optical components and pluggable) and Acacia (line side optical components and pluggable)," Yu said.
|
||||
|
||||
"Unless Cisco captures more of the optical systems market share and coherent shipment volume, I think Cisco will need to continue selling Acacia products to the broader market and other system vendors due to the high cost of product development," Yu said.
|
||||
|
||||
The acquisition is expected to close during the second half of Cisco's FY2020, and upon close, Acacia employees will join Cisco's Optical Systems and Optics business within its networking and security business under Goeckeler.
|
||||
|
||||
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3407706/cisco-goes-deeper-into-photonic-optical-technology-with-2-6b-acacia-buy.html
|
||||
|
||||
Author: [Michael Cooney][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/money_currency_printing_press_us_100-dollar_bills_by_ktsimage_gettyimages-1015664778_2400x1600-100788423-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3184027/cisco-closes-appdynamics-deal-increases-software-weight.html
|
||||
[3]: https://www.ciena.com/insights/what-is/What-Is-Coherent-Optics.html
|
||||
[4]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
|
||||
[5]: https://www.networkworld.com/article/3297379/data-center/efficient-container-use-requires-data-center-software-networking.html
|
||||
[6]: https://www.bloomberg.com/news/articles/2019-07-09/cisco-to-acquire-acacia-communications-for-2-6-billion-jxvs6rva?utm_source=twitter&utm_medium=social&cmpid=socialflow-twitter-business&utm_content=business&utm_campaign=socialflow-organic
|
||||
[7]: https://blogs.cisco.com/news/cisco-news-announcement-07191234
|
||||
[8]: https://technology.ihs.com/
|
||||
[9]: https://www.networkworld.com/article/3339360/cisco-pushes-silicon-photonics-for-enterprise-webscale-networking.html
|
||||
[10]: https://www.facebook.com/NetworkWorld/
|
||||
[11]: https://www.linkedin.com/company/network-world
|
@@ -0,0 +1,79 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Improving IT Operations – Key to Business Success in Digital Transformation)
|
||||
[#]: via: (https://www.networkworld.com/article/3407698/improving-it-operations-key-to-business-success-in-digital-transformation.html)
|
||||
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
|
||||
|
||||
Improving IT Operations – Key to Business Success in Digital Transformation
|
||||
======
|
||||
|
||||
![Artem Peretiatko][1]
|
||||
|
||||
Forty-seven percent of CEOs say they are being “challenged” by their board of directors to show progress in shifting toward a digital business model, according to the [Gartner 2018 CIO][2] Agenda Industry Insights Report. By improving IT operations, organizations can progress and even accelerate their digital transformation initiatives efficiently and successfully. The biggest barrier to success is that IT currently spends around 78 percent of its budget and 80 percent of its time just maintaining IT operations, leaving little time or resources for innovation, according to ZK Research[*][3].
|
||||
|
||||
### **Do you cut the operations budget or invest more in transforming operations?**
|
||||
|
||||
The Cisco IT Operations Readiness Index 2018 predicted a dramatic change in IT operations as CIOs embrace analytics and automation. The study reported that 88 percent of respondents identify investing in IT operations as key to driving preemptive practices and enhancing customer experience.
|
||||
|
||||
### What does this have to do with the wide area network?
|
||||
|
||||
According to the IT Operations Readiness Index, 73 percent of respondents will collect WAN operational or performance data and 70 percent will analyze WAN data and leverage the results to further automate network operations. However, security is the most data-driven infrastructure today compared to other IT infrastructure functions (i.e. IoT, IP telephony, network infrastructure, data center infrastructure, WAN connectivity, etc.). The big questions are:
|
||||
|
||||
* How do you collect operations data and what data should you collect?
|
||||
* How do you analyze it?
|
||||
* How do you then automate IT operations based on the results?
|
||||
|
||||
|
||||
|
||||
By no means is this a simple task. IT departments use a combination of data collected internally and by outside vendors to aggregate information used to transform operations and make better business decisions.
|
||||
|
||||
In a recent [survey][4] by Frost & Sullivan, 94 percent of respondents indicated they will deploy a Software-defined Wide Area Network ([SD-WAN][5]) in the next 24 months. SD-WAN addresses the gap that router-centric WAN architectures were not designed to fill. A business-driven SD-WAN, designed from the ground up to support a cloud-first business model, provides significantly more network and application performance visibility, significantly assisting enterprises to realize the transformational promise of a digital business model. In fact, Gartner indicates that 90 percent of WAN edge decisions will be based on SD-WAN by 2023.
|
||||
|
||||
### How an SD-WAN can improve IT operations leading to successful digital transformation
|
||||
|
||||
All SD-WAN solutions are not created alike. One of the key components that organizations need to consider and evaluate is having complete observability across the network and applications through a single pane of glass. Without visibility, IT risks running inefficient operations that will stifle digital transformation initiatives. This real-time visibility must provide:
|
||||
|
||||
* Operational metrics enabling IT/CIO’s to shift from a reactive toward a predictive practice
|
||||
* A centralized dashboard that allows IT to monitor, in real-time, all aspects of network operations – a dashboard that has flexible knobs to adjust and collect metrics from all WAN edge appliances to accelerate problem resolution
|
||||
|
||||
|
||||
|
||||
The Silver Peak Unity [EdgeConnect™][6] SD-WAN edge platform provides granular visibility into network and application performance. The EdgeConnect platform ensures the highest quality of experience for both end users and IT. End users enjoy always-consistent, always-available application performance including the highest quality of voice and video, even over broadband. Utilizing the [Unity Orchestrator™][7] comprehensive management dashboard as shown below, IT gains complete observability into the performance attributes of the network and applications in real-time. Customizable widgets provide a wealth of operational data including a health heatmap for every SD-WAN appliance deployed, flow counts, active tunnels, logical topologies, top talkers, alarms, bandwidth consumed by each application and location, application latency and jitter and much more. Furthermore, the platform maintains a week’s worth of data with context allowing IT to playback and see what has transpired at a specific time and location, analogous to a DVR.
|
||||
|
||||
By providing complete observability of the entire WAN, IT spends less time troubleshooting network and application bottlenecks and fielding support/help desk calls day and night, and more time focused on strategic business initiatives.
|
||||
|
||||
![][8]
|
||||
|
||||
This solution brief, “[Simplify SD-WAN Operations with Greater Visibility][9]”, provides additional detail on the capabilities offered in the business-driven EdgeConnect SD-WAN edge platform that enables businesses to accelerate their shift toward a digital business model.
|
||||
|
||||
![][10]
|
||||
|
||||
* ZK Research quote from [Cisco IT Operations Readiness Index 2018][11]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3407698/improving-it-operations-key-to-business-success-in-digital-transformation.html
|
||||
|
||||
Author: [Rami Rammaha][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
|
||||
|
||||
[a]: https://www.networkworld.com/author/Rami-Rammaha/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/07/istock-1096811078_1200x800-100801264-large.jpg
|
||||
[2]: https://www.gartner.com/smarterwithgartner/is-digital-a-priority-for-your-industry/
|
||||
[3]: https://blog.silver-peak.com/improving-it-operations-key-to-business-success-in-digital-transformation#footnote
|
||||
[4]: https://www.silver-peak.com/sd-wan-edge-survey
|
||||
[5]: https://www.silver-peak.com/sd-wan
|
||||
[6]: https://www.silver-peak.com/products/unity-edge-connect
|
||||
[7]: https://www.silver-peak.com/products/unity-orchestrator
|
||||
[8]: https://images.idgesg.net/images/article/2019/07/silver-peak-unity-edgeconnect-sdwan-100801265-large.jpg
|
||||
[9]: https://www.silver-peak.com/resource-center/simplify-sd-wan-operations-greater-visibility
|
||||
[10]: https://images.idgesg.net/images/article/2019/07/simplify-sd-wan-operations-with-greater-visibility-100801266-large.jpg
|
||||
[11]: https://s3-us-west-1.amazonaws.com/connectedfutures-prod/wp-content/uploads/2018/11/CF_transforming_IT_operations_report_3-2.pdf
|
@ -0,0 +1,112 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux a key player in the edge computing revolution)
|
||||
[#]: via: (https://www.networkworld.com/article/3407702/linux-a-key-player-in-the-edge-computing-revolution.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
Linux a key player in the edge computing revolution
|
||||
======
|
||||
Edge computing is augmenting the role that Linux plays in our day-to-day lives. A conversation with Jaromir Coufal from Red Hat helps to define what the edge has become.
|
||||
![Dominic Smith \(CC BY 2.0\)][1]
|
||||
|
||||
In the past few years, [edge computing][2] has been revolutionizing how some very familiar services are provided to individuals like you and me, as well as how services are managed within major industries. Try to get your arms around what edge computing is today, and you might just discover that your arms aren’t nearly as long or as flexible as you’d imagined. And Linux is playing a major role in this ever-expanding edge.
|
||||
|
||||
One reason why edge computing defies easy definition is that it takes many different forms. As Jaromir Coufal, principal product manager at Red Hat, recently pointed out to me, there is no single edge. Instead, there are lots of edges – depending on what compute features are needed. He suggests that we can think of the edge as something of a continuum of capabilities with the problem being resolved determining where along that particular continuum any edge solution will rest.
|
||||
|
||||
**[ Also read: [What is edge computing?][3] and [How edge networking and IoT will reshape data centers][4] ]**
|
||||
|
||||
Some forms of edge computing include consumer electronics that are used and installed in millions of homes, others that help tens of thousands of small businesses operate their facilities, and still others that tie large companies to their remote sites. Key to this elusive definition is the idea that edge computing always involves distributing the workload in such a way that the bulk of the computing work is done remotely from the central core of the business and close to the business problem being addressed.
|
||||
|
||||
Done properly, edge computing can provide services that are both faster and more reliable. Applications running on the edge can be more resilient and run considerably faster because their required data resources are local. In addition, data can be processed or analyzed locally, often requiring only periodic transfer of results to central sites.
|
||||
|
||||
While physical security might be lower at the edge, edge devices often implement security features that allow them to detect 1) manipulation of the device, 2) malicious software, and 3) a physical breach and wipe data.
|
||||
|
||||
### Benefits of edge computing
|
||||
|
||||
Some of the benefits of edge computing include:
|
||||
|
||||
* A quick response to intrusion detection, including the ability for a remote device to detach or self-destruct
|
||||
* The ability to instantly stop communication when needed
|
||||
* Constrained functionality and fewer generic entry points
|
||||
* Rugged and reliable problem resistance
|
||||
* Making the overall computing system harder to attack because computing is distributed
|
||||
* Less data-in-transit exposure
|
||||
|
||||
|
||||
|
||||
Some examples of edge computing devices include those that provide:
|
||||
|
||||
* Video surveillance – watching for activity, reporting only if seen
|
||||
* Controlling autonomous vehicles
|
||||
* Production monitoring and control
|
||||
|
||||
|
||||
|
||||
### Edge computing success story: Chick-fil-A
|
||||
|
||||
One impressive example of highly successful edge computing caught me by surprise. It turns out Chick-fil-A uses edge computing devices to help manage its food preparation services. At Chick-fil-A, edge devices:
|
||||
|
||||
1. Analyze a fryer’s cleaning and cooking
|
||||
2. Aggregate data as a failsafe in case internet connectivity is lost
|
||||
3. Help with decision-making about cooking – how much and how long to cook
|
||||
4. Enhance business operations
|
||||
5. Help automate the complex food cooking and holding decisions so that even newbies get things right
|
||||
6. Function even when the connection with the central site is down
|
||||
|
||||
|
||||
|
||||
As Coufal pointed out, Chick-fil-A runs [Kubernetes][5] at the edge in every one of its restaurants. Their key motivators are low-latency, scale of operations, and continuous business. And it seems to be working extremely well.
|
||||
|
||||
[Chick-fil-A’s hypothesis][6] captures it all: By making smarter kitchen equipment, we can collect more data. By applying data to our restaurant, we can build more intelligent systems. By building more intelligent systems, we can better scale our business.
|
||||
|
||||
### Are you edge-ready?
|
||||
|
||||
There’s no quick answer as to whether your organization is “edge ready.” Many factors determine what kind of services can be deployed on the edge and whether and when those services need to communicate with more central devices. Some of these include:
|
||||
|
||||
* Whether your workload can be functionally distributed
|
||||
* If it’s OK for devices to have infrequent contact with the central services
|
||||
* If devices can work properly when cut off from their connection back to central services
|
||||
* Whether the devices can be secured (e.g., trusted not to provide an entry point)
|
||||
|
||||
|
||||
|
||||
Implementing an edge computing network will likely take a long time from initial planning to implementation. Still, this kind of technology is taking hold and offers some strong advantages. While edge computing initially took hold 15 or more years ago, the last few years have seen renewed interest thanks to tech advances that have enabled new uses.
|
||||
|
||||
Coufal noted that it's been 15 or more years since edge computing concepts and technologies were first introduced, but renewed interest has come about due to tech advances enabling new uses that require this technology.
|
||||
|
||||
**More about edge computing:**
|
||||
|
||||
* [How edge networking and IoT will reshape data centers][4]
|
||||
* [Edge computing best practices][7]
|
||||
* [How edge computing can help secure the IoT][8]
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3407702/linux-a-key-player-in-the-edge-computing-revolution.html
|
||||
|
||||
Author: [Sandra Henry-Stocker][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/07/telecom-100801330-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
|
||||
[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
|
||||
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
|
||||
[5]: https://www.infoworld.com/article/3268073/what-is-kubernetes-container-orchestration-explained.html
|
||||
[6]: https://medium.com/@cfatechblog/edge-computing-at-chick-fil-a-7d67242675e2
|
||||
[7]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
|
||||
[8]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
|
||||
[9]: https://www.facebook.com/NetworkWorld/
|
||||
[10]: https://www.linkedin.com/company/network-world
|
@@ -0,0 +1,73 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The Titan supercomputer is being decommissioned: a costly, time-consuming project)
|
||||
[#]: via: (https://www.networkworld.com/article/3408176/the-titan-supercomputer-is-being-decommissioned-a-costly-time-consuming-project.html)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
The Titan supercomputer is being decommissioned: a costly, time-consuming project
|
||||
======
|
||||
The old gives way to new at Oak Ridge National Labs. The Titan supercomputer is being replaced by Frontier, and it's a super-sized task.
|
||||
![Oak Ridge National Laboratory][1]
|
||||
|
||||
A supercomputer deployed in 2012 is going into retirement after seven years of hard work, but the task of decommissioning it is not trivial.
|
||||
|
||||
The Cray XK7 “Titan” supercomputer at the Department of Energy’s (DOE) Oak Ridge National Laboratory (ORNL) is scheduled to be decommissioned on August 1 and disassembled for recycling.
|
||||
|
||||
At 27 petaflops, or 27 quadrillion calculations per second, Titan was at one point the fastest supercomputer in the world at its debut in 2012 and remained in the top 10 worldwide until June 2019.
|
||||
|
||||
**[ Also read: [10 of the world's fastest supercomputers][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
|
||||
|
||||
But time marches on. This beast is positively ancient by computing standards. It uses 16-core AMD Opteron CPUs and Nvidia Kepler generation processors. You can buy a gaming PC with better than that today.
|
||||
|
||||
“Titan has run its course,” Operations Manager Stephen McNally at ORNL said in an [article][4] published by ORNL. “The reality is, in electronic years, Titan is ancient. Think of what a cell phone was like seven years ago compared to the cell phones available today. Technology advances rapidly, including supercomputers.”
|
||||
|
||||
In its seven years, Titan generated more than 26 billion core hours of computing time for hundreds of research teams around the world, not just the DOE. It was one of the first to use GPUs, a groundbreaking move at the time but now commonplace.
|
||||
|
||||
The Oak Ridge Leadership Computing Facility (OLCF) actually houses Titan in a 60,000-sq.-ft. facility, 20,000 sq. ft. of which is occupied by Titan, the Eos cluster that supports Titan, and the Atlas file system that holds 32 petabytes of data.
|
||||
|
||||
**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][5] ]**
|
||||
|
||||
June 30 was the last day users could submit jobs to Titan or Eos, another supercomputer, which is also 7 years old.
|
||||
|
||||
### Decommissioning a supercomputer is a super-sized task
|
||||
|
||||
Decommissioning a computer the size of Titan is more than turning off a switch. ORNL didn’t have a dollar estimate of the cost involved, but it did discuss the scale, which should give some idea of how costly this will be.
|
||||
|
||||
The decommissioning of Titan will include about 41 people, including staff from ORNL, Cray, and external subcontractors. OLCF staff are supporting users who need to complete runs, save data, or transition their projects to other resources.
|
||||
|
||||
Electricians will safely shut down the 9 megawatt-capacity system, and Cray staff will disassemble and recycle Titan’s electronics and its metal components and cabinets. A separate crew will handle the cooling system. All told, 350 tons of equipment and 10,800 pounds of refrigerant are being removed from the site.
|
||||
|
||||
What becomes of the old gear is unclear. Even ORNL has no idea what Cray will do with it. McNally said there is no value in Titan’s parts: “It’s simply not worth the cost to a data center or university of powering and cooling even fragments of Titan. Titan’s value lies in the system as a whole.”
|
||||
|
||||
The 20,000-sq.-ft. data center that is currently home to Titan will be gutted and expanded in preparation for [Frontier][6], an exascale system scheduled for delivery in 2021 running AMD Epyc processors and Nvidia GPUs.
|
||||
|
||||
A power, cooling, and data center upgrade is already underway ahead of the Titan decommissioning to prepare for Frontier. The whole removal process will take about a month but has been in the works for several months to ensure a smooth transition for people still using the old machine.
|
||||
|
||||
**[ Now read this: [10 of the world's fastest supercomputers][2] ]**
|
||||
|
||||
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3408176/the-titan-supercomputer-is-being-decommissioned-a-costly-time-consuming-project.html
|
||||
|
||||
Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/06/titan_supercomputer_at_ornl_oak_ridge_national_laboratory_1200x800-100762120-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
|
||||
[3]: https://www.networkworld.com/newsletters/signup.html
|
||||
[4]: https://www.olcf.ornl.gov/2019/06/28/farewell-titan/
|
||||
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
|
||||
[6]: https://www.olcf.ornl.gov/frontier/
|
||||
[7]: https://www.facebook.com/NetworkWorld/
|
||||
[8]: https://www.linkedin.com/company/network-world
|
@@ -0,0 +1,66 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Will IBM’s acquisition be the end of Red Hat?)
|
||||
[#]: via: (https://www.networkworld.com/article/3407746/will-ibms-acquisition-be-the-end-of-red-hat.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
Will IBM’s acquisition be the end of Red Hat?
|
||||
======
|
||||
IBM's acquisition of Red Hat is a big deal -- a 34 billion dollar big deal -- and many Linux professionals are wondering how it's going to change Red Hat's role in the Linux world. Here are some thoughts.
|
||||
![Stephen Lawson/IDG][1]
|
||||
|
||||
[IBM's acquisition of Red Hat for $34 billion][2] is now a done deal, and statements from the leadership of both companies sound extremely promising. But some in the Linux community have expressed concern.
|
||||
|
||||
Questions being asked by some Linux professionals and devotees include:
|
||||
|
||||
* Will Red Hat lose customer confidence now that it’s part of IBM and not an independent company?
|
||||
* Will IBM continue putting funds into open source after paying such a huge price for Red Hat? Will they curtail what Red Hat is able to invest?
|
||||
* Both companies’ leaders are saying all the right things now, but can they predict how their business partners and customers will react as they move forward? Will their good intentions be derailed?
|
||||
|
||||
|
||||
|
||||
Part of the worry simply comes from the size of this deal. Thirty-four billion dollars is a _lot_ of money. This is probably the largest cloud computing acquisition to date. What kind of strain will that price tag put on how the new IBM functions going forward? Other worries come from the character of the acquisition – whether Red Hat will be able to continue operating independently and what will change if they cannot. In addition, a few Linux devotees hark back to Oracle’s acquisition of Sun Microsystems in 2010 and Sun’s slow death in its aftermath.
|
||||
|
||||
**[ Also read: [The IBM-Red Hat deal: What it means for enterprises][3] | Get daily insights: [Sign up for Network World newsletters][4] ]**
|
||||
|
||||
The good news is that this merger of IBM and Red Hat appears to offer each of the companies some significant benefits. IBM makes a strong move into cloud computing, and Red Hat gains a broader international footing.
|
||||
|
||||
The other good news relates to the pace at which this acquisition occurred. Initially announced on October 28, 2018, it is now more than eight months later. It’s clear that the leadership of each company has not rushed headlong into this new relationship. Both parties to the acquisition appear to be moving ahead with trust and optimism. IBM promises to ensure Red Hat's independence and will allow it to continue to be "Red Hat" both in name and business activity.
|
||||
|
||||
### The end of Red Hat highly unlikely
|
||||
|
||||
Will this acquisition be the end of Red Hat? That outcome is not impossible, but it seems extremely unlikely. For one thing, both companies stand to gain significantly from the other’s strong points. IBM is likely to be revitalized in ways that allow it to be more successful, and Red Hat is starting from a very strong position. While it’s a huge gamble by some measurements, I think most of us Linux enthusiasts are cautiously optimistic at worst.
|
||||
|
||||
IBM seems intent on allowing Red Hat to work independently and seems to be taking the time required to work out the kinks in their plans.
|
||||
|
||||
As for the eventual demise of Sun Microsystems, the circumstances were very different. As this [coverage in Network World in 2017][5] suggests, Sun was in an altogether different position when it was acquired. The future for IBM and Red Hat appears to be considerably brighter – even to a former (decades earlier) member of the Sun User Group Board of Directors.
|
||||
|
||||
The answer to the question posed by the title of this post is “probably not.” Only time will tell, but leadership seems committed to doing things the right way – preserving Red Hat's role in the Linux world and making the arrangement pay off for both organizations. And I, for one, expect good things to come from the merger – for IBM, for Red Hat and likely even for Linux enthusiasts like myself.
|
||||
|
||||
**[ Now read this: [The IBM-Red Hat deal: What it means for enterprises][3] ]**
|
||||
|
||||
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3407746/will-ibms-acquisition-be-the-end-of-red-hat.html
|
||||
|
||||
Author: [Sandra Henry-Stocker][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.techhive.com/images/article/2015/10/20151027-red-hat-logo-100625237-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3316960/ibm-closes-34b-red-hat-deal-vaults-into-multi-cloud.html
|
||||
[3]: https://www.networkworld.com/article/3317517/the-ibm-red-hat-deal-what-it-means-for-enterprises.html
|
||||
[4]: https://www.networkworld.com/newsletters/signup.html
|
||||
[5]: https://www.networkworld.com/article/3222707/the-sun-sets-on-solaris-and-sparc.html
|
||||
[6]: https://www.facebook.com/NetworkWorld/
|
||||
[7]: https://www.linkedin.com/company/network-world
|
@@ -0,0 +1,189 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (32-bit life support: Cross-compiling with GCC)
|
||||
[#]: via: (https://opensource.com/article/19/7/cross-compiling-gcc)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
32-bit life support: Cross-compiling with GCC
|
||||
======
|
||||
Use GCC to cross-compile binaries for different architectures from a
|
||||
single build machine.
|
||||
![Wratchet set tools][1]
|
||||
|
||||
If you're a developer creating binary packages, like an RPM, DEB, Flatpak, or Snap, you have to compile code for a variety of different target platforms. Typical targets include 32-bit and 64-bit x86 and ARM. You could do your builds on different physical or virtual machines, but that means maintaining several systems. Instead, you can use the GNU Compiler Collection ([GCC][2]) to cross-compile, producing binaries for several different architectures from a single build machine.
|
||||
|
||||
Assume you have a simple dice-rolling game that you want to cross-compile. Something written in C is relatively easy on most systems, so to add complexity for the sake of realism, I wrote this example in C++, so the program depends on something not present in C (**iostream**, specifically).
|
||||
|
||||
|
||||
```
|
||||
#include <iostream>
|
||||
#include <cstdlib>
|
||||
|
||||
using namespace std;
|
||||
|
||||
void lose (int c);
|
||||
void win (int c);
|
||||
void draw ();
|
||||
|
||||
int main() {
|
||||
int i;
|
||||
do {
|
||||
cout << "Pick a number between 1 and 20: \n";
|
||||
cin >> i;
|
||||
int c = rand ( ) % 21;
|
||||
if (i > 20) lose (c);
|
||||
else if (i < c ) lose (c);
|
||||
else if (i > c ) win (c);
|
||||
else draw ();
|
||||
}
|
||||
while (1==1);
|
||||
}
|
||||
|
||||
void lose (int c )
|
||||
{
|
||||
cout << "You lose! Computer rolled " << c << "\n";
|
||||
}
|
||||
|
||||
void win (int c )
|
||||
{
|
||||
cout << "You win!! Computer rolled " << c << "\n";
|
||||
}
|
||||
|
||||
void draw ( )
|
||||
{
|
||||
cout << "What are the chances. You tied. Try again, I dare you! \n";
|
||||
}
|
||||
```
|
||||
|
||||
Compile it on your system using the **g++** command:
|
||||
|
||||
|
||||
```
|
||||
`$ g++ dice.cpp -o dice`
|
||||
```
|
||||
|
||||
Then run it to confirm that it works:
|
||||
|
||||
|
||||
```
|
||||
$ ./dice
|
||||
Pick a number between 1 and 20:
|
||||
[...]
|
||||
```
|
||||
|
||||
You can see what kind of binary you just produced with the **file** command:
|
||||
|
||||
|
||||
```
|
||||
$ file ./dice
|
||||
dice: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically
|
||||
linked (uses shared libs), for GNU/Linux 5.1.15, not stripped
|
||||
```
|
||||
|
||||
And just as important, what libraries it links to with **ldd**:
|
||||
|
||||
|
||||
```
|
||||
$ ldd dice
|
||||
linux-vdso.so.1 => (0x00007ffe0d1dc000)
|
||||
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6
|
||||
(0x00007fce8410e000)
|
||||
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
|
||||
(0x00007fce83d4f000)
|
||||
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6
|
||||
(0x00007fce83a52000)
|
||||
/lib64/ld-linux-x86-64.so.2 (0x00007fce84449000)
|
||||
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1
|
||||
(0x00007fce8383c000)
|
||||
```
|
||||
|
||||
You have confirmed two things from these tests: The binary you just ran is 64-bit, and it is linked to 64-bit libraries.
|
||||
|
||||
That means that, in order to cross-compile for 32-bit, you must tell **g++** to:
|
||||
|
||||
1. Produce a 32-bit binary
|
||||
2. Link to 32-bit libraries instead of the default 64-bit libraries
|
||||
|
||||
|
||||
|
||||
### Setting up your dev environment
|
||||
|
||||
To compile to 32-bit, you need 32-bit libraries and headers installed on your system. If you run a pure 64-bit system, then you have no 32-bit libraries or headers and need to install a base set. At the very least, you need the C and C++ libraries (**glibc** and **libstdc++**) along with 32-bit version of GCC libraries (**libgcc**). The names of these packages may vary from distribution to distribution. On Slackware, a pure 64-bit distribution with 32-bit compatibility is available from the **multilib** packages provided by [Alien BOB][3]. On Fedora, CentOS, and RHEL:
|
||||
|
||||
|
||||
```
|
||||
$ yum install libstdc++-*.i686
|
||||
$ yum install glibc-*.i686
|
||||
$ yum install libgcc.i686
|
||||
```
|
||||
|
||||
Regardless of the system you're using, you also must install any 32-bit libraries your project uses. For instance, if you include **yaml-cpp** in your project, then you must install the 32-bit version of **yaml-cpp** or, on many systems, the development package for **yaml-cpp** (for instance, **yaml-cpp-devel** on Fedora) before compiling it.
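For example, on Fedora the same arch-suffix syntax used above for **glibc** and **libstdc++** also applies to project dependencies. This is only a sketch, and it assumes the dependency is packaged for i686 as **yaml-cpp-devel**:

```
# Install the 32-bit development package for a dependency such as yaml-cpp
$ yum install yaml-cpp-devel.i686
```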
|
||||
|
||||
Once that's taken care of, the compilation is fairly simple:
|
||||
|
||||
|
||||
```
|
||||
`$ g++ -m32 dice.cpp -o dice32 -L /usr/lib -march=i686`
|
||||
```
|
||||
|
||||
The **-m32** flag tells GCC to compile in 32-bit mode. The **-march=i686** option further defines what kind of optimizations to use (refer to **info gcc** for a list of options). The **-L** flag sets the path to the libraries you want GCC to link to. This is usually **/usr/lib** for 32-bit, although, depending on how your system is set up, it could be **/usr/lib32** or even **/opt/usr/lib** or any place you know you keep your 32-bit libraries.
|
||||
|
||||
After the code compiles, see proof of your build:
|
||||
|
||||
|
||||
```
|
||||
$ file ./dice32
|
||||
dice: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
|
||||
dynamically linked (uses shared libs) [...]
|
||||
```
|
||||
|
||||
And, of course, **ldd ./dice32** points to your 32-bit libraries.
|
||||
|
||||
### Different architectures
|
||||
|
||||
Compiling 32-bit on 64-bit for the same processor family allows GCC to make many assumptions about how to compile the code. If you need to compile for an entirely different processor, you must install the appropriate cross-build GCC utilities. Which utility you install depends on what you are compiling. This process is a little more complex than compiling for the same CPU family.
|
||||
|
||||
When you're cross-compiling for the same family, you can expect to find the same set of 32-bit libraries as 64-bit libraries, because your Linux distribution is maintaining both. When compiling for an entirely different architecture, you may have to hunt down libraries required by your code. The versions you need may not be in your distribution's repositories because your distribution may not provide packages for your target system, or it may not mirror all packages in a convenient location. If the code you're compiling is yours, then you probably have a good idea of what its dependencies are and possibly where to find them. If the code is something you have downloaded and need to compile, then you probably aren't as familiar with its requirements. In that case, investigate what the code requires to build correctly (they're usually listed in the README or INSTALL files, and certainly in the source code itself), then go gather the components.
|
||||
|
||||
For example, if you need to compile C code for ARM, you must first install **gcc-arm-linux-gnu** (32-bit) or **gcc-aarch64-linux-gnu** (64-bit) on Fedora or RHEL, or **arm-linux-gnueabi-gcc** and **binutils-arm-linux-gnueabi** on Ubuntu. This provides the commands and libraries you need to build (at least) a simple C program. Additionally, you need whatever libraries your code uses. You can place header files in the usual location (**/usr/include** on most systems), or you can place them in a directory of your choice and point GCC to it with the **-I** option.
|
||||
|
||||
When compiling, don't use the standard **gcc** or **g++** command. Instead, use the GCC utility you installed. For example:
|
||||
|
||||
|
||||
```
|
||||
$ arm-linux-gnu-g++ dice.cpp \
|
||||
-I/home/seth/src/crossbuild/arm/cpp \
|
||||
-o armdice.bin
|
||||
```
|
||||
|
||||
Verify what you've built:
|
||||
|
||||
|
||||
```
|
||||
$ file armdice.bin
|
||||
armdice.bin: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV) [...]
|
||||
```
|
||||
|
||||
### Libraries and deliverables
|
||||
|
||||
This was a simple example of how to use cross-compiling. In real life, your source code may produce more than just a single binary. While you can manage this manually, there's probably no good reason to do that. In my next article, I'll demonstrate GNU Autotools, which does most of the work required to make your code portable.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/cross-compiling-gcc
|
||||
|
||||
Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4 (Wratchet set tools)
|
||||
[2]: https://gcc.gnu.org/
|
||||
[3]: http://www.slackware.com/~alien/multilib/
|
@@ -0,0 +1,102 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Fedora job opening: Fedora Community Action and Impact Coordinator (FCAIC))
|
||||
[#]: via: (https://fedoramagazine.org/fedora-job-opening-fedora-community-action-and-impact-coordinator-fcaic/)
|
||||
[#]: author: (Brian Exelbierd https://fedoramagazine.org/author/bex/)
|
||||
|
||||
Fedora job opening: Fedora Community Action and Impact Coordinator (FCAIC)
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
I’ve decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC). This was not an easy decision to make. I am proud of the work I have done in Fedora over the last three years and I think I have helped the community move past many challenges. I could NEVER have done all of this without the support and assistance of the community!
|
||||
|
||||
As some of you know, I have been covering for some other roles in Red Hat for almost the last year. Some of these tasks have led to some opportunities to take my career in a different direction. I am going to remain at Red Hat and on the same team with the same manager, but with a slightly expanded scope of duties. I will no longer be day-to-day on Fedora and will instead be in a consultative role as a Community Architect at Large. This is a fancy way of saying that I will be tackling helping lots of projects with various issues while also working on some specific strategic objectives.
|
||||
|
||||
I think this is a great opportunity for the Fedora community. The Fedora I became FCAIC in three years ago is a very different place from the Fedora of today. While I could easily continue to help shape and grow this community, I think that I can do more by letting some new ideas come in. The new person will hopefully be able to approach challenges differently. I’ll also be here to offer my advice and feedback as others who have moved on in the past have done. Additionally, I will work with Matthew Miller and Red Hat to help hire and onboard the new Fedora Community and Impact Coordinator. During this time I will continue as FCAIC.
|
||||
|
||||
This means that we are looking for a new FCAIC. Love Fedora? Want to work with Fedora full-time to help support and grow the Fedora community? This is the core of what the FCAIC does. The job description (also below) has a list of some of the primary job responsibilities and required skills – but that’s just a sample of the duties required and of the day-to-day life of working full-time with the Fedora community.
|
||||
|
||||
Day to day work includes working with Mindshare, managing the Fedora Budget, and being part of many other teams, including the Fedora Council. You should be ready to write frequently about Fedora’s achievements, policies and decisions, and to draft and generate ideas and strategies. And, of course, planning Flock and Fedora’s presence at other events. It’s hard work, but also a great deal of fun.
|
||||
|
||||
Are you good at setting long-term priorities and hacking away at problems with the big picture in mind? Do you enjoy working with people all around the world, with a variety of skills and interests, to build not just a successful Linux distribution, but a healthy project? Can you set priorities, follow through, and know when to say “no” in order to focus on the most important tasks for success? Is Fedora’s mission deeply important to you?
|
||||
|
||||
If you said “yes” to those questions, you might be a great candidate for the FCAIC role. If you think you’re a great fit [apply online][2], or contact [Matthew Miller][3], [Brian Exelbierd][4], or [Stormy Peters][5].
|
||||
|
||||
* * *
|
||||
|
||||
### Fedora Community Manager
|
||||
|
||||
Location: CZ-Remote – prefer Europe but can be North America
|
||||
|
||||
#### Company Description
|
||||
|
||||
At Red Hat, we connect an innovative community of customers, partners, and contributors to deliver an open source stack of trusted, high-performing solutions. We offer cloud, Linux, middleware, storage, and virtualization technologies, together with award-winning global customer support, consulting, and implementation services. Red Hat is a rapidly growing company supporting more than 90% of Fortune 500 companies.
|
||||
|
||||
#### Job summary
|
||||
|
||||
Red Hat’s Open Source Programs Office (OSPO) team is looking for the next Fedora Community Action and Impact Lead. In this role, you will join the Fedora Council and guide initiatives to grow the Fedora user and developer communities, as well as make Red Hat and Fedora interactions even more transparent and positive. The Council is responsible for stewardship of the Fedora Project as a whole, and supports the health and growth of the Fedora community.
|
||||
|
||||
As the Fedora Community Action and Impact Lead, you’ll facilitate decision making on how to best focus the Fedora community budget to meet our collective objectives, work with other council members to identify the short, medium, and long-term goals of the Fedora community, and organize and enable the project.
|
||||
|
||||
You will also help make decisions about trademark use, project structure, community disputes or complaints, and other issues. You’ll hold a full council membership, not an auxiliary or advisory role.
|
||||
|
||||
#### Primary job responsibilities

* Identify opportunities to engage new contributors and community members; align the project around supporting those opportunities.
* Improve onboarding materials and processes for new contributors.
* Participate in user and developer discussions and identify barriers to success for contributors and users.
* Use metrics to evaluate the success of open source initiatives.
* Regularly report on community metrics and developments, both internally and externally.
* Represent Red Hat’s stake in the Fedora community’s success.
* Work with internal stakeholders to understand their goals and develop strategies for working effectively with the community.
* Improve onboarding materials and presentation of Fedora to new hires; develop standardized materials on Fedora that can be used globally at Red Hat.
* Work with the Fedora Council to determine the annual Fedora budget.
* Assist in planning and organizing Fedora’s flagship events each year.
* Create and carry out community promotion strategies; create media content like blog posts, podcasts, and videos, and facilitate the creation of media by other members of the community.
#### Required skills

* Extensive experience with the Fedora Project or a comparable open source community.
* Exceptional writing and speaking skills.
* Experience with software development and open source developer communities; understanding of development processes.
* Outstanding organizational skills; ability to prioritize tasks matching short- and long-term goals and to focus on the highest-priority tasks.
* Ability to manage a project budget.
* Ability to lead teams and participate in multiple cross-organizational teams that span the globe.
* Experience motivating volunteers and staff across departments and companies.

Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

* * *

*Photo by [Deva Williamson][6] on [Unsplash][7].*

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/fedora-job-opening-fedora-community-action-and-impact-coordinator-fcaic/

作者:[Brian Exelbierd][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/bex/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/fcaic-job-2019-816x345.jpg
[2]: https://global-redhat.icims.com/jobs/70362/open-source-community-manager/job?hub=7&mobile=false&width=1193&height=500&bga=true&needsRedirect=false&jan1offset=-420&jun1offset=-360
[3]: mailto:mattdm@redhat.com
[4]: mailto:bexelbie@redhat.com
[5]: mailto:stpeters@redhat.com
[6]: https://unsplash.com/@biglaughkitchen?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[7]: https://unsplash.com/search/photos/cake?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to teach software engineering students about the enterprise)
[#]: via: (https://opensource.com/article/19/7/enterprise-technology)
[#]: author: (Tomas Cerny https://opensource.com/users/tomcze)

How to teach software engineering students about the enterprise
======
Start with a solid foundation of polymorphism, object-oriented programming, collections, lambda, and design patterns.
![Tall building with windows][1]

In this opinion article, you will find a set of suggestions for including enterprise technology in software engineering courses. This piece goes through the difficulties that students face and proposes simplifications successfully used in the past. The continual advancement of enterprise technologies keeps making it simpler to include them in education.

In the coming years, one can expect that industry demand for experts who know the technology used in enterprise development processes and production systems will increase. Academic institutions are here to prepare experts and leaders for industry, and thus they should know the technologies being used.

It has been ten years since I taught my first software engineering course. Since then, I have taught this course every year. Many software engineering courses put emphasis on analysis and design from the abstract perspective, involving UML models and notations, and letting students develop software projects on their own.

However, in my course, I chose a harder path rooted in theory and practice. This path includes lectures and labs on enterprise Java technology. When we pinpoint where actual software engineering skills are needed, we can point to large and complex systems. How could you become a software engineer without being involved in the development of such a system?

For large systems, standard development technologies are no longer sufficient, since their constructs are too low-level to address the typical problems and situations in enterprise development. Moreover, why would someone build large systems from plain objects when we could use components that are built for specific purposes? With objects alone, we are trying to reinvent the wheel of existing enterprise practice.

While the benefits are many, frankly speaking, including enterprise technology in your coursework can turn a rather simple software engineering course into quite a difficult one, especially for the first few iterations.
### Unfamiliar territory

As long as we understand enterprise technology and standards, and have developed a larger system on our own, we are ready to include enterprise technologies in lectures. If we have not developed a large system before, we can still try to include the technology, but we must be ready to run through multiple examples and demos beforehand, and especially to put it all together.

However, where do we start? I remember the times when early versions of enterprise Java were released with initial demos on hotel booking that seemed ideal for learning the technology, at least from an initial examination. The difficulties quickly became clear when students got stuck. The available tutorials and documentation are not written for novice students and beginners; they are made for users who have already used similar technologies before.
### More than just Java

The first issues novices run into are related to running enterprise Java itself. Enterprise Java no longer needs only a Java virtual machine to run. Now, it needs a container and a web server compliant with the technology in order to use its many components.

Students of software engineering must suddenly assume the role of a system administrator to install a complex environment on their machines that requires further configuration. For many students who have never opened a terminal before, it becomes a tedious task just to prepare the needed environment.

Operating systems don’t always make it easy for novices, as not all terminals are as friendly as others. In the ideal case, students would need to reinstall their operating system with Linux, but that takes a software engineering course to a completely different level. Those who manage to install and configure the server are suddenly told that the server runs on a certain port. Perhaps the most breathtaking question to come from a student is: what is a port?

In such a case, the intent to deploy our first example takes another detour to explain networking, since we must connect to the enterprise system over the network. When we finally get past operating systems and networking, student motivation is almost gone, and suddenly we face another challenge: where to store data?

Enterprise systems are all about big data, and one could barely imagine them without a database. However, this assumes that students not only know databases but also know how to configure them to accept new connections. Our initial intention of giving students a quick demo almost failed, as our software engineering efforts took multiple detours before we got to our first demo.

When we finally got to a running demo, students wanted to update the demo and rerun it. They soon realized that the changes do not propagate to the running demo, and that they need to develop the code blindly and then redeploy it. This process takes up to a minute in some cases on school workstations.

At that point, students have lost most of their motivation and initial drive. And all of these efforts were just to run the initial demo, not to learn the technology itself.

In most cases, one has to further advise students about version control and [Maven][2], but we still have not gotten to the point of learning the various components needed to develop such systems. The initial great idea to expose students to enterprise technology thus changes into teaching them all the needed supporting materials.

This is simply too much work for a software engineering course to demonstrate component-based development. By the time students have spent hours on configuration and setup, not much space is left in the semester for the actual fruit of the effort.
### Fixes for the problem

With recent technology example projects such as [kitchen-sink][3], we can find every piece of important technology applied in a single demo, which is great. From there, it is pretty straightforward to cover topics on object-relational mapping and persistence, session beans to handle business logic, and contexts and dependency injection, which correspond very nicely to the components and UML component diagrams commonly taught in software engineering courses.
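As a rough sketch of how those three pieces fit together (the `Member` classes below are placeholders of my own, not code taken from the kitchen-sink project, and each public class would live in its own file):

```
import java.util.List;
import javax.ejb.Stateless;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

@Entity
public class Member {                        // object-relational mapping and persistence
    @Id
    @GeneratedValue
    private Long id;
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

@Stateless
public class MemberService {                 // session bean holding the business logic
    @PersistenceContext
    private EntityManager em;

    public List<Member> findAll() {
        return em.createQuery("SELECT m FROM Member m", Member.class)
                 .getResultList();
    }
}

@RequestScoped
public class MemberListController {          // CDI injects the service into a client component
    @Inject
    private MemberService service;

    public List<Member> members() {
        return service.findAll();
    }
}
```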
While Java still promotes server-side user interface development, in many cases students reject that choice and prefer to go with the [React][4] or [Angular][5] frameworks, which creates the need to cover XML binding and JSON transformation.

Over the years of promoting enterprise technologies in my courses, I have experimented with many supportive instruments to reduce the initial effort needed to learn and cover the introductory material and deploy the first demo. Here’s what I found.

#### Back to the basics

Primarily, I must highlight that it is not possible to teach enterprise technology to immature students. It is much more important to teach such students polymorphism, object-oriented programming, collections, lambda, and design patterns so that they understand the primary design.

Why? Because otherwise, our students cannot become who we want them to be. In enterprise Java, great solutions are component-based, but in the background they are full of polymorphism, patterns, and collections, and lacking a full understanding of these will yield significant issues later when designing real systems. Thus, it is better to exclude enterprise Java when students lack the basics, and instead focus on the core skills, possibly postponing the topic to a later course if the curriculum allows.

#### Primer courses

At my previous university, one of the new undergraduate programs perfected the curriculum with well-formed preceding courses. Students taking software engineering had covered all of the prerequisites, such as networks, operating systems, databases, and object-oriented programming, before starting the course. However, that curriculum was meant to prepare bachelors for industry needs, and thus theoretical coursework was not as emphasized.
#### Multimedia

One significant time reduction for the initial demo setup can be achieved through step-by-step video tutorials detailing each stage of how to run, debug, and redeploy it. This seems great, but often students want to install the demo on their personal machines, and it is simply hard to make a perfect tutorial that covers all the conditions found across multiple operating systems. Still, students found videos very helpful, as they can go through the learning process whenever they choose. Avoiding all the errors faced with the initial demo helps students keep their motivation and drive.

#### Virtual machines

Another significant improvement is to prepare a virtual image of the operating system, with the demo and environment already set up for the students. In the simplest case, students only need to start their integrated development environment (IDE) and click a button to see the demo running. Later, they can install the demo in their own environment, but only after they have a running example in place and hands-on experience.
#### The right technology

Last semester, while teaching a course on a slightly different topic, I came across a significant improvement, and perhaps something that changes enterprise development forever. Enterprise microservice architecture recently emerged as the answer to cloud-based demands.

Eclipse [MicroProfile][6] is perhaps the right ingredient for teaching enterprise development. It allows developers to include only the technologies that are needed for the particular application. The idea is to run an enterprise application from a JAR file that contains only the needed libraries. This practice allows running the application outside of a container. One could see it as a configurable microcontainer that includes the minimum setup for your JAR and runs as a server. This is exactly what we need to simplify our coursework.
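As a rough illustration, a first service can be as small as the sketch below; the class names and the `/api/hello` path are placeholders of mine, and the exact packaging command depends on which MicroProfile runtime (Open Liberty, Payara Micro, Thorntail, and so on) you choose:

```
// Each public class goes in its own file; they are shown together here for brevity.
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Application;

// Activates JAX-RS under /api; the MicroProfile runtime discovers the resource below.
@ApplicationPath("/api")
public class DemoApplication extends Application {
}

// The whole "server" a student needs to see: one resource class, one GET method.
// The runtime's build plugin packages it as a runnable JAR (for example,
// `java -jar demo.jar`), so no separately installed application server is needed.
@Path("/hello")
@ApplicationScoped
public class HelloResource {

    @GET
    public String hello() {
        return "Hello from a MicroProfile microservice";
    }
}
```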
We no longer need to explain all of the technologies provided by enterprise containers, and can instead focus our attention on a much smaller set. This can bring us quickly to the point we want to make in academia. We can focus on teaching our students components, and skip the necessity of container knowledge and complicated redeploys.

While everyone loves standards, it seems that the [Spring][7] framework—a strong competitor to enterprise Java—predated the idea of running applications outside of a container by almost a decade. Thus, to get to the point quickly in an academic environment, it might be the right way to go (on the other hand, such a choice sacrifices the standardized technology agreed upon by the main industry players).

### Next steps

What should we do for our next course? First, know who your audience is and whether they are mature enough to learn enterprise technology. A simple evaluation test can tell you more. With a large class, you should consider including video tutorials; otherwise, labs could turn into underprovisioned debugging sessions. With video tutorials used as homework, you can use the time for lectures and labs more effectively and cover other important topics.

If you are expecting trouble with operating systems, consider making a virtual machine image, or prepare [Docker][8] images for particular pieces such as the database. Most importantly, keep innovating, since technologies come and go; for example, the [JRebel][9] academic license (for hot-deploying changes) is no longer available.

Fortunately, recent advancements in enterprise technologies bring simplifications, and learning the technology will be easier for the next generations. In the end, we will be able to focus on the intended topics and take students in our intended direction. Nevertheless, starting with enterprise technology too early would be counterproductive, and no advancement can change that.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/enterprise-technology

作者:[Tomas Cerny][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/tomcze
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
[2]: https://maven.apache.org/index.html
[3]: https://developers.redhat.com/quickstarts/eap/kitchensink/
[4]: https://reactjs.org/
[5]: https://angular.io/
[6]: https://microprofile.io/
[7]: https://spring.io/
[8]: https://www.docker.com/
[9]: https://jrebel.com/
@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (You can tinker with this conference badge)
[#]: via: (https://opensource.com/article/19/7/conference-badge-hardware)
[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg/users/mweinberg/users/mweinberg/users/mweinberg/users/mweinberg)

You can tinker with this conference badge
======
Check out these unique conference badges that attendees can take home and play with.
![All Things Open check-in at registration booth][1]

In the beginning, there were conferences. Over time, as those conferences grew, attendees needed ways to identify each other. This need gave us the conference badge.

No one knows what the first conference badges looked like, but we can be confident that they were relatively simple affairs. Over time, paper was put into plastic sleeves, which eventually became information printed on hard plastic. Sometimes that plastic wasn’t even rectangular.

At some point, batteries were introduced, and then things started to get a bit crazy. Today, many conference badges are intensely elaborate affairs. Some of the best of them are also certified open source hardware.

### The SMD Challenge

![The SMD Challenge conference badge.][2]

The SMD Challenge conference badge by MakersBox.

The [SMD Challenge][3] was born from an insight into the human condition, as its creators [explain][4]:

"Making LEDs blink is what people think make Makers happy, but they are wrong. Makers want to be miserable. They like to make mistakes and to have to try things over and over again. _That which does not kill us, makes us stronger_. This project will make you strong!"

The SMD Challenge is a badge you make yourself. The project starts with a relatively easy-to-solder resistor and LED. It then moves on to increasingly tiny resistors and LEDs. Coming in both a "Regular Edition" and a "Misery Edition," the SMD Challenge is designed to challenge—and break—all but the most determined solderers.

If you do manage to make it all the way to the end (and can document your success), you can enter the prestigious [0201 Club][5]. If you prefer to experience the misery (and success) secondhand, the club also features links showing many of the successful attempts.
### The Fri3D Camp 2018 badge

![The Fri3d Camp 2018 Badge.][6]

The Fri3d Camp 2018 Badge, also known as Ph0xx.

The [badge][7] (nicknamed Ph0xx) for Fri3D Camp 2018 also has the distinction of being the first piece of certified open source hardware from Belgium. Ph0xx is more than a simple conference badge. It features WiFi and Bluetooth (thanks to an ESP32), native USB connectivity, four buttons, two 5x7 LED matrices, a [piezo speaker][8], and an accelerometer.

Since this badge is open source, you can immediately start taking advantage of all of these features. Fri3D even created an [app][9] to help you get started by animating the eyes.
### The Open Hardware Summit 2018 badge

![Open Hardware Summit 2018 badge][10]

The Open Hardware Summit 2018 badge.

Naturally, the [Open Hardware Summit][11] has a [badge][12] that is both awesome and certified open source hardware. In addition to WiFi connectivity, the badge features a programmable e-paper display.

At the start of the Summit, that display loaded just the wearer’s name and contact information, but since the badge is open source, it quickly displayed much more. Drew Fustini even provided an overview of the badges at the Summit to help people start hacking.

### Demand the best: Open source conference badges

In a world full of good conference badges, open source makes them great. If you are a conference organizer, open sourcing your badges helps make sure that they live on well after the event. If you are a conference attendee, open source badges become a platform for all sorts of future hacking.

Let us know about other great badges in the comments. If you are putting together a badge for an upcoming event, don’t forget to [certify it as open source][13]!

Explore the entire list of OSHWA certified hardware projects.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/conference-badge-hardware

作者:[Michael Weinberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mweinberg/users/mweinberg/users/mweinberg/users/mweinberg/users/mweinberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ato2016_checkin_conference.jpg?itok=DJtoSS6t (All Things Open check-in at registration booth)
[2]: https://opensource.com/sites/default/files/uploads/smdchallenge.png (The SMD Challenge conference badge.)
[3]: https://certification.oshwa.org/us000073.html
[4]: https://hackaday.io/project/25265-an-unfortunate-smd-project
[5]: https://hackaday.io/project/25265-an-unfortunate-smd-project/log/71954-0201-club
[6]: https://opensource.com/sites/default/files/uploads/fri3d_badge_2018.png (The Fri3d Camp 2018 Badge.)
[7]: https://certification.oshwa.org/be000001.html
[8]: https://en.wikipedia.org/wiki/Loudspeaker#Piezoelectric_speakers
[9]: https://sebastiaanjansen.be/fri3d-eyes/
[10]: https://opensource.com/sites/default/files/uploads/open_hardware_summit_badge_2018.png (Open Hardware Summit 2018 badge)
[11]: http://2018.oshwa.org/
[12]: https://certification.oshwa.org/us000133.html
[13]: https://certification.oshwa.org/