Merge remote-tracking branch 'LCTT/master'
commit 0c1e7ff23a
@ -0,0 +1,75 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (chenmu-kk)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12817-1.html)
|
||||
[#]: subject: (When Wi-Fi is mission-critical, a mixed-channel architecture is the best option)
|
||||
[#]: via: (https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all)
|
||||
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
|
||||
|
||||
When Wi-Fi is mission-critical, a mixed-channel architecture is the best option
|
||||
======
|
||||
|
||||
> A mixed-channel architecture is often the best option, but it isn't always the right one. When Wi-Fi needs to be reliable, single-channel and hybrid APs offer compelling alternatives.
|
||||
|
||||
![Getty Images][1]
|
||||
|
||||
I have worked with many companies on digital initiatives that ended up failing. The idea was right, the execution was sound, and the market opportunity was there. So where was the weak link? The Wi-Fi network.
|
||||
|
||||
For example, a large hospital wanted to improve clinicians' response times to patient alerts by sending telemetry information to mobile devices. Without such a system, the only way a nurse learns about a patient alert is through an audible alarm, and with so much background noise it is often hard to tell where an alarm is coming from. The problem was that the hospital's Wi-Fi network had not been upgraded in years, which caused severe delays in message delivery (often four to five minutes). The long delays eroded confidence in the system, so many clinicians stopped using it and went back to manual alerts. In the end, the project was considered a failure.
|
||||
|
||||
I have seen similar cases in manufacturing, K-12 education, entertainment, and other industries. Businesses now compete on customer experience, and that experience is increasingly powered by an ever-expanding, ubiquitous wireless edge. Good Wi-Fi does not guarantee market leadership, but bad Wi-Fi will have a negative impact on customers and employees alike, and in today's highly competitive climate that is a recipe for disaster.
|
||||
|
||||
### Wi-Fi performance has historically been inconsistent
|
||||
|
||||
The problem with Wi-Fi is that it is inherently fragile. I am sure everyone reading this has experienced failed downloads, dropped connections, inconsistent performance, and long waits to connect to public hotspots.
|
||||
|
||||
Imagine you are at a conference: before the keynote, you can tweet, send email, browse the web, and do just about anything else. Then the keynote speaker takes the stage, the entire audience starts taking photos, uploading, and streaming, and the network collapses. I have found this to be the norm rather than the exception, which underscores the need for [no-compromise Wi-Fi][3].
|
||||
|
||||
The question for network professionals is how to make a location's Wi-Fi stay up all of the time. Some say that simply beefing up the existing network will do it, and sometimes it will, but in certain cases the type of Wi-Fi deployed may simply not be the right fit.
|
||||
|
||||
The most common type of Wi-Fi deployment is multi-channel, also known as micro-cell, in which each client connects to an access point (AP) over a wireless channel. A high-quality experience depends on two things: good signal strength and minimal interference. Several factors can cause interference, such as APs placed too close together, layout issues, or interference from other devices. To minimize it, organizations invest significant time and money in [site surveys to plan the optimal channel map][2], but even when that is done well, Wi-Fi glitches can still occur.
|
||||
|
||||
### Multi-channel Wi-Fi isn't always the best choice
|
||||
|
||||
Multi-channel Wi-Fi may be reliable enough for many carpeted offices, but in some environments external factors degrade performance. A good example is a multi-tenant building where several Wi-Fi networks transmit on the same channels and interfere with one another. Another is a hospital, where many workers roam among many APs. The client keeps trying to connect to the best AP, so it constantly disconnects and reconnects, interrupting sessions. And in environments such as schools, airports, and conference facilities, there are so many transient devices that multi-channel struggles to keep up.
|
||||
|
||||
### Single-channel Wi-Fi offers better reliability, but performance takes a hit
|
||||
|
||||
So what is a network manager to do? Is inconsistent Wi-Fi simply a fact of life? Multi-channel is the standard, but it was never designed for dynamic physical environments or for settings where reliable connectivity is a must.
|
||||
|
||||
An alternative architecture designed to address these problems was introduced a few years ago. As the name implies, "single-channel" Wi-Fi uses a single radio channel for all APs on the network. Think of it as one Wi-Fi fabric running on one channel. In this architecture, AP placement does not matter much, because they all use the same channel and therefore do not interfere with one another. This has an obvious simplification benefit: for example, if coverage is thin, there is no need to run another expensive site survey. Instead, just place APs where they are needed.
|
||||
|
||||
One downside of single-channel is that total network throughput is lower than with multi-channel, because only one channel can be used. That can be fine in environments where reliability matters more than performance, but many organizations want both.
|
||||
|
||||
### Hybrid access points offer the best of both worlds
|
||||
|
||||
Makers of single-channel systems have recently innovated by mixing the two channel architectures, creating a "best of both worlds" deployment that offers the throughput of multi-channel with the reliability of single-channel. For example, Allied Telesis offers hybrid access points that can run in multi-channel and single-channel modes at the same time. That means some clients can be assigned to multi-channel for maximum throughput, while others use single-channel for a seamless roaming experience.
|
||||
|
||||
A practical use case for this kind of hybrid might be a logistics facility, where office workers use multi-channel while forklift operators, moving throughout the warehouse, stay continuously connected over single-channel.
|
||||
|
||||
Wi-Fi used to be a network of convenience, but today it is arguably the most mission-critical of all networks. A traditional multi-channel architecture may work, but do the due diligence to see how it holds up under heavy load. IT leaders need to understand how important Wi-Fi is to digital transformation initiatives, test it properly to make sure it is not the weak link in the infrastructure chain, and choose the best technology for today's environment.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html
|
||||
|
||||
Author: [Zeus Kerravala][a]
|
||||
Curated by: [lujun9972][b]
|
||||
Translated by: [chenmu-kk](https://github.com/chenmu-kk)
|
||||
Proofread by: [wxy](https://github.com/wxy)
|
||||
|
||||
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
|
||||
|
||||
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/09/tablet_graph_wifi_analytics-100771638-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3315269/wi-fi-site-survey-tips-how-to-avoid-interference-dead-spots.html
|
||||
[3]: https://www.alliedtelesis.com/blog/no-compromise-wi-fi
|
||||
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
|
||||
[5]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb
|
||||
[6]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html
|
||||
[7]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb
|
||||
[8]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html
|
||||
[9]: https://www.facebook.com/NetworkWorld/
|
||||
[10]: https://www.linkedin.com/company/network-world
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12816-1.html)
|
||||
[#]: subject: (5 surprising ways I use Jupyter to improve my life)
|
||||
[#]: via: (https://opensource.com/article/20/11/surprising-jupyter)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
|
||||
@ -12,7 +12,7 @@
|
||||
|
||||
> Jupyter is more than just a data analysis tool. Let's look at some of the most creative ways you can use this Python-based software.
|
||||
|
||||
!["太空中的电脑笔记本"[1]
|
||||
![](https://img.linux.net.cn/data/attachment/album/202011/12/224138d99jlp3q5qjqv5v7.jpg)
|
||||
|
||||
The [Jupyter][2] project provides interactive ways to write software, with tools such as JupyterLab and Jupyter Notebook. It is commonly used for data analysis, but you might not realize (and the Jupyter community may not have imagined) just how many things you can do with it.
|
||||
|
@ -0,0 +1,104 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Open environments are where innovative ideas thrive)
|
||||
[#]: via: (https://opensource.com/open-organization/20/11/environments-for-innovation)
|
||||
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland)
|
||||
|
||||
Open environments are where innovative ideas thrive
|
||||
======
|
||||
Certain environments are more conducive to innovation. A better understanding of innovation's true nature could help us build them.
|
||||
![A sprout in a forest][1]
|
||||
|
||||
In the [first article of this series][2], I examined the nature of the innovation process in great detail. I also discussed some sources of resistance to it. In this second part of my review of [Matt Ridley's][3] book [_How Innovation Works_][4], I will explain the ideal environment in which discoveries are born, protected, and progress into useful products and services, considering certain essential conditions for innovations to flourish. And I will argue that [open organization principles][5] are the keys to establishing those conditions.
|
||||
|
||||
### Better when shared
|
||||
|
||||
Unfortunately, Ridley explains, history has shown that economic instability, downturns, and recessionary periods spur people toward more self-sufficiency and protectionism at the expense of productivity. That is, our attitude toward cooperation and interaction changes; people separate themselves from outsiders and tend to restrict interaction. In this environment, people, communities, and even entire countries fear dependence on others. Collaboration breaks down, feeding costly and counterproductive recessions (it's almost like reverting back to [subsistence-living societies][6], where everyone only consumes and benefits from what they produce by themselves).
|
||||
|
||||
This attitude stifles innovation, as innovation occurs primarily when people are able to work _with_ each other and utilize the principles of community, collaboration, inclusion, transparency, and adaptability.
|
||||
|
||||
Periods of economic stability and expansion, on the other hand, tend to create strong feelings of safety, which encourages mutual interdependence between people. As Ridley writes, "By concentrating on serving other people's needs for forty hours a week—which we call a job—you can spend the other seventy-two hours (not counting fifty-six hours in bed) drawing upon the services provided to you by other people." We do what we are good at and what we enjoy most, and then depend on others to do the things we do not do well and perhaps even dislike. When interdependence and cooperation are commonplace in a society, there tends to be an increase in innovative economic breakthroughs, specifically because of work providing specialized services. Simply put, working together to come up with new things leads to increased human productivity for all.
|
||||
|
||||
In this environment, open organizations come alive.
|
||||
|
||||
So let's examine the qualities that make those environments so fertile for innovation.
|
||||
|
||||
### Where innovation thrives
|
||||
|
||||
Several factors are common throughout the innovation process, Ridley notes in _How Innovation Works_. Notice that open organization principles are involved in each of these factors.
|
||||
|
||||
#### 1\. Innovation is gradual.
|
||||
|
||||
If you look closely, you'll see that innovation does not often involve single breakthroughs. Instead, it's often [a series of discoveries over months and even years][2]. Innovations are gradual, incremental, and _collective_—over time. Unfortunately, we still tend to view innovations as the result of singular actors. This is due to several factors, Ridley explains:
|
||||
|
||||
1. Human nature and a love of heroes: People love telling stories starring single heroes with key events as turning points in a linear series (what [Jim Whitehurst][7] calls "[the innovation delusion][8]"). These stories are more exciting and inspiring, perhaps, but not necessarily historically accurate (years of testing and making trial-and-error mistakes are not as interesting!).
|
||||
2. Intellectual property laws: Contemporary intellectual property systems attempt to ascribe all credit to single inventors and magnify that person's importance, overshadowing collaborators, competitors, and predecessors who contributed "stepping stone discoveries."
|
||||
|
||||
|
||||
|
||||
In truth, _innovations are the result of gradual progress_.
|
||||
|
||||
> If you look closely, you'll see that innovation does not often involve single breakthroughs.
|
||||
|
||||
#### 2\. Innovation is serendipitous.
|
||||
|
||||
More often than not, innovative ideas come when one is looking for something else or trying to solve a completely different problem. They are accidental discoveries.
|
||||
|
||||
For example, [Roy Plunkett][9] was trying to improve a refrigerant fluid when he discovered Teflon. Other times, certain historical events turn researchers in directions they had not considered. This is a good reason why broad inclusivity and openness to the unexpected (adaptability) while working on a project are so important.
|
||||
|
||||
Sometimes, broad peripheral vision is far more important than what you're _directly_ looking at. As serendipity plays a big part in innovation, societies that embrace a more free-roving, experimental nature do so well. This approach increases the chance of a lucky idea appearing. Innovation happens when people are free to think, experiment, and speculate. It happens when people can trade things, concepts, and ideas with each other (in other words, they collaborate—see below). This explains why innovative ideas seem to emerge more from cities than from rural areas, Ridley argues.
|
||||
|
||||
#### 3\. Innovation is a process of combining existing components.
|
||||
|
||||
Innovation is often the result of combining pre-existing products in new ways, or applying well-known components to unanticipated problems. Again, this occurs in places where people can meet and exchange goods, services, and thoughts (where they can collaborate). This explains why innovation happens in places where trade and exchange are frequent, and not in isolated or under-populated places. Innovation is born out of communities.
|
||||
|
||||
#### 4\. Innovation is iterative.
|
||||
|
||||
To innovate successfully, one must be willing to experiment and develop a high tolerance for error. One must not be shocked by set-backs; instead, we need to be overjoyed by learning from them. Most great discoveries come from data gleaned and lessons learned during failed experiments. Therefore, while collaborating, there should be an environment of _playfulness_. By just playing around, we help ideas come to the surface. This is why I prefer approaching problems with others by using the expression "let's just kick this idea around a bit." It keeps everyone open to new ideas and does not allow bias or strong narrowing positions to creep in, avoiding the tendency to shut down discussion too quickly.
|
||||
|
||||
The more and faster you make errors, the better. As Ridley states:
|
||||
|
||||
> "Amazon is a good example of failure on the road to success, as Jeff Bezos has often proudly insisted. 'Our success at Amazon is a function of how many experiments we do per year, per month, per week. Being wrong might hurt you a bit, but being slow will kill you.' Bezos once said: 'If you can increase the number of experiments you try from a hundred to a thousand, you dramatically increase the number of innovations you produce.'"
|
||||
|
||||
This is where open organization principles are important. If you have a wide community experimenting and feeding their results back to you, imagine how quickly you will find a solution (or new opportunities, for that matter) while looking at a failure with many eyes and perspectives.
|
||||
|
||||
#### 5\. Innovation is collaborative.
|
||||
|
||||
No one innovates in isolation. Others always play a role—whether large or small—in your innovations. The best ideas are stored between two or more minds, not in one single mind.
|
||||
|
||||
Therefore, _transparency_ to what is in everyone's minds and _collaboration_ on that content are vital. Quite often, one person comes up with a product idea, then another person finds ways to manufacture it, then a third person looks at that manufacturing process and comes up with a way to execute it in a far less expensive way. All these phases are part of the overall innovation—but we focus only on one in the stories we tell about it. More often than recognized, it was a _team_, a collective effort, that led to success.
|
||||
|
||||
Moreover, Matt Ridley argues that most innovations _do not_ come from producers. More frequently, they come from _consumers_ that want to _improve_ on what they are using. So when we explore open organization communities, we must remain focused on the ways these communities incorporate feedback they receive from _outside_ themselves, perhaps from _users_ of related products, and find someone among them that wants to improve on them.
|
||||
|
||||
### Free to innovate
|
||||
|
||||
"The main ingredient in the secret sauce that leads to innovation is freedom," Ridley writes. "Freedom to exchange, experiment, imagine, invest and fail; freedom from expropriation or restriction; freedom on the part of consumers to reward the innovations they like and reject the ones they do not." This freedom will have to be supported by sensible regulations that are permissive, encouraging, and quick to give decisions (Ridley is critical of current intellectual property regulations, citing a negative impact on the innovation process). We have to take a good look at the hurdles blocking innovation, as this exploration process should be promoted.
|
||||
|
||||
In the next part of this series, I'll provide an overview of actual cases of various innovations throughout history and explain how open organization principles played a role in shaping them.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/20/11/environments-for-innovation
|
||||
|
||||
作者:[Ron McFarland][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ron-mcfarland
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_redwoods2.png?itok=H-iasPEv (A sprout in a forest)
|
||||
[2]: https://opensource.com/open-organization/20/10/best-ideas-recognition-innovation
|
||||
[3]: https://en.wikipedia.org/wiki/Matt_Ridley
|
||||
[4]: https://www.goodreads.com/en/book/show/45449488-how-innovation-works
|
||||
[5]: https://theopenorganization.org/definition/
|
||||
[6]: https://opensource.com/open-organization/20/8/global-history-collaboration
|
||||
[7]: https://opensource.com/users/jwhitehurst
|
||||
[8]: https://opensource.com/open-organization/19/6/innovation-delusion
|
||||
[9]: https://en.wikipedia.org/wiki/Roy_J._Plunkett
|
@ -0,0 +1,79 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (I'm doing another Recurse Center batch!)
|
||||
[#]: via: (https://jvns.ca/blog/2020/11/05/i-m-doing-another-recurse-center-batch-/)
|
||||
[#]: author: (Julia Evans https://jvns.ca/)
|
||||
|
||||
I'm doing another Recurse Center batch!
|
||||
======
|
||||
|
||||
Hello! I’m going to do a batch (virtually) at the [Recurse Center][1], starting on Monday. I’ll probably be blogging about what I learn there, so I want to talk a bit about my plans!
|
||||
|
||||
I’m excited about this because:
|
||||
|
||||
* I love RC, it’s my favourite programming community, and I’ve always been able to do fun programming projects there.
|
||||
* they’ve put a lot of care into building a great virtual experience (including building some very fun [custom software][2])
|
||||
* there’s a pandemic, it’s going to be cold outside, and I think having a little bit more structure in my life is going to make me a lot happier this winter :)
|
||||
|
||||
|
||||
|
||||
### what’s the Recurse Center?
|
||||
|
||||
The Recurse Center (which I’ll abbreviate to “RC”) is a self-directed programming retreat. It’s free to attend.
|
||||
|
||||
A “batch” is 1 or 6 or 12 weeks, and the idea is that during that time, you:
|
||||
|
||||
1. choose what things you want to learn
|
||||
2. come up with a plan to learn the things (often the plan is “do some kind of fun project”, like [“write a TCP stack”][3])
|
||||
3. learn the things
|
||||
|
||||
|
||||
|
||||
Also there are a bunch of other delightful people learning things, so there’s lots of inspiration for what to learn and people to collaborate with. There are always people who are early in their programming life and looking to get their first programming job, as well as people who have been programming for a long time.
|
||||
|
||||
Their business model is recruiting – they [partner with][4] a bunch of companies, and if you want a job at the end of your batch, then they’ll match you with companies, and if you accept a job with one of those companies then the company pays them a fee.
|
||||
|
||||
I won’t say too much more about it because I’ve written 50+ other posts about how much I love RC on this blog that probably cover it :)
|
||||
|
||||
### some ideas for what I’ll do at RC
|
||||
|
||||
Last time I did RC I had a bunch of plans going in and did not do any of them. I think this time it’ll probably be the same but I’ll list my ideas anyway: here are some possible things I might do.
|
||||
|
||||
* learn Rails! I’ve been making small websites for a very long time but I haven’t really worked as a professional web developer, and I think it might be fun to have the superpower of being able to build websites quickly. I have an idea for a silly website that I think would be a fun starter rails project.
|
||||
* experiment with generative neural networks (I’ve been curious about this for years but haven’t made any progress yet)
|
||||
* maybe finish up this “incidents as a service” system that I started a year and a half ago to help people learn by practicing responding to weird computer situations
|
||||
* deal with some of the [rbspy][5] issues that I’ve been ignoring for months/years
|
||||
* maybe build a game! (there’s a games theme week during the batch!)
|
||||
* maybe learn about graphics or shaders?
|
||||
|
||||
|
||||
|
||||
In my first batch I spent a lot of time on systems / networking stuff because that felt like the hardest thing I could do. This time I feel pretty comfortable with my ability to learn about systems stuff, so I think I’ll focus on different topics!
|
||||
|
||||
### so that’s what I’ll be writing about for a while!
|
||||
|
||||
I’m hoping to blog more while I’m there, maybe not “every day” exactly (it turns out that blogging every day is a lot of work!), but I think it might be fun to write a bunch of tiny blog posts about things I’m learning.
|
||||
|
||||
I’m also planning to release a couple of zines this month – I finished a zine about CSS, and also wrote another entire zine about bash while I was procrastinating on finishing the CSS zine in a self-imposed “write an entire zine in October” challenge, so you should see those here soon too. I’m pretty excited about both of them.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2020/11/05/i-m-doing-another-recurse-center-batch-/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://jvns.ca/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.recurse.com/
|
||||
[2]: https://www.recurse.com/virtual-rc
|
||||
[3]: https://jvns.ca/blog/2014/08/12/what-happens-if-you-write-a-tcp-stack-in-python/
|
||||
[4]: https://www.recurse.com/hire
|
||||
[5]: https://github.com/rbspy/rbspy
|
@ -0,0 +1,99 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Day 1: a confusing Rails error message)
|
||||
[#]: via: (https://jvns.ca/blog/2020/11/09/day-1--a-little-rails-/)
|
||||
[#]: author: (Julia Evans https://jvns.ca/)
|
||||
|
||||
Day 1: a confusing Rails error message
|
||||
======
|
||||
|
||||
Today I started a Recurse Center batch! I got to meet a few people, and started on a tiny fun Rails project. I think I won't talk too much about what the project actually is today, but here are some quick notes on a day with Rails:
|
||||
|
||||
### some notes on getting started
|
||||
|
||||
The main thing I learned about setting up a Rails project is that
|
||||
|
||||
1. it uses sqlite by default, you have to tell it to use Postgres
|
||||
2. there are a ton of things that Rails includes by default that you can disable.
|
||||
|
||||
|
||||
|
||||
I installed and `rm -rf`’d Rails maybe 7 times before I was satisfied with it and ended up with this incantation:
|
||||
|
||||
```
|
||||
rails new . -d postgresql --skip-sprockets --skip-javascript
|
||||
```
|
||||
|
||||
Basically because I definitely wanted to use Postgres and not sqlite, and skipping sprockets and javascript seemed to make installing Rails faster, and I figured I could install them later if I decided I wanted them.
|
||||
|
||||
### the official Rails guide is really good
|
||||
|
||||
I used 2 main resources for creating my starter Rails app:
|
||||
|
||||
* DHH’s original Rails talk from 2005 <https://www.youtube.com/watch?v=Gzj723LkRJY> (which I didn’t watch this time, but I watched the last time I spent a day with Rails, and I found it pretty inspiring and helpful)
|
||||
* The official Rails “getting started” guide, which seems pretty short and clear <https://guides.rubyonrails.org/v5.0/getting_started.html>
|
||||
|
||||
|
||||
|
||||
### a mysterious error message: `undefined method 'user'`
|
||||
|
||||
I love bugs, so here’s a weird Rails error I ran into today! I had some code that looked like this:
|
||||
|
||||
```
|
||||
@user = User.new(user_params)
|
||||
@user.save
|
||||
```
|
||||
|
||||
Pretty simple, right? But when that code ran, I got this baffling error message:
|
||||
|
||||
```
|
||||
undefined method `user' for #<User:0x00007fb6f4012ab8> Did you mean? super
|
||||
```
|
||||
|
||||
I was EXTREMELY confused about what was going on here because I hadn’t _called_ a method called `user`. I’d called `.save`. What???? I stayed confused and frustrated about this for maybe 20 minutes, and then finally I looked at my `User` model and found this code:
|
||||
|
||||
```
|
||||
class User < ApplicationRecord
|
||||
has_secure_password
|
||||
|
||||
validates :user, presence: true, uniqueness: true
|
||||
end
|
||||
```
|
||||
|
||||
`validates :user...` was _supposed_ to be some Rails magic validating that every `User` had a `username`, and that usernames had to be unique. But I’d made a typo, and I’d written `user` and not `username`. I fixed this and then everything worked! hooray!
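For reference, the corrected model ends up looking something like this. It's just a minimal sketch assuming, as in my app, that the column really is called `username`; the only thing that matters is the one-word fix.

```
class User < ApplicationRecord
  has_secure_password

  # validate the username column itself, not a nonexistent `user` method
  validates :username, presence: true, uniqueness: true
end
```

With that change, `@user.save` runs the validation against a real column and the confusing error goes away.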
|
||||
|
||||
I still don’t understand how I was supposed to debug this though: the stack trace told me the problem was with the `@user.save` line, and never mentioned that `validates :user` thing at all. I feel like there must be a way to debug this but I don’t know what it is.
|
||||
|
||||
The whole point of me playing with Rails is to see how the Rails magic plays out in practice so this was a fun bug to hit early on.
|
||||
|
||||
### a simple user management system
|
||||
|
||||
I decided I wanted users in my toy app. Some Googling showed me that there’s an extremely popular gem called [devise][1] that handles users. I found the README a little overwhelming and I knew that I wanted a very minimal user management system in my toy app, so instead I followed this guide called [Authentication from Scratch with Rails 5.2][2] which seems to be working out so far. Rails seems to already have a bunch of built in stuff for managing users – I was really surprised by how short that guide was and how little code I needed to write.
|
||||
|
||||
I learned while implementing users that Rails has a built-in magical session management system (see [How Rails Sessions Work][3]). By default, all the session data seems to be stored in a cookie on the user's computer, though I guess you can also store the session data in a database if it gets too big for a cookie.
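To make that concrete, here's roughly what the session part of the from-scratch approach looks like. This is a generic sketch of the pattern rather than the exact code from my app, and it assumes a `SessionsController` plus the `username` / `password` fields from the model above:

```
class SessionsController < ApplicationController
  def create
    user = User.find_by(username: params[:username])
    if user&.authenticate(params[:password])  # `authenticate` comes from has_secure_password
      session[:user_id] = user.id             # Rails stores this in the signed session cookie
      redirect_to root_path
    else
      render :new
    end
  end

  def destroy
    session.delete(:user_id)  # "logging out" is just clearing the cookie-backed session
    redirect_to root_path
  end
end
```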
|
||||
|
||||
It’s definitely kind of strange to already have a session management system and cookies and users without quite knowing what’s going on exactly, but it’s also kind of fun! We’ll see how it goes.
|
||||
|
||||
### tomorrow: more rails!
|
||||
|
||||
Maybe tomorrow I can actually make some progress on implementing my fun rails app idea!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2020/11/09/day-1--a-little-rails-/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://jvns.ca/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/heartcombo/devise
|
||||
[2]: https://medium.com/@wintermeyer/authentication-from-scratch-with-rails-5-2-92d8676f6836
|
||||
[3]: https://www.justinweiss.com/articles/how-rails-sessions-work/
|
sources/tech/20201109 Getting started with Stratis encryption.md
@ -0,0 +1,201 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with Stratis encryption)
|
||||
[#]: via: (https://fedoramagazine.org/getting-started-with-stratis-encryption/)
|
||||
[#]: author: (briansmith https://fedoramagazine.org/author/briansmith/)
|
||||
|
||||
Getting started with Stratis encryption
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Stratis is described on its [official website][2] as an “_easy to use local storage management for Linux_.” See this [short video][3] for a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system. The concepts shown in the video also apply to Stratis in Fedora.
|
||||
|
||||
Stratis version 2.1 introduces support for encryption. Continue reading to learn how to get started with encryption in Stratis.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
Encryption requires Stratis version 2.1 or greater. The examples in this post use a pre-release of Fedora 33. Stratis 2.1 will be available in the final release of Fedora 33.
|
||||
|
||||
You’ll also need at least one available block device to create an encrypted pool. The examples shown below were done on a KVM virtual machine with a 5 GB virtual disk drive (_/dev/vdb_).
|
||||
|
||||
### Create a key in the kernel keyring
|
||||
|
||||
The Linux kernel keyring is used to store the encryption key. For more information on the kernel keyring, refer to the _keyrings_ manual page (_man keyrings_).
|
||||
|
||||
Use the _stratis key set_ command to set up the key within the kernel keyring. You must specify where the key should be read from. To read the key from standard input, use the _--capture-key_ option. To retrieve the key from a file, use the _--keyfile-path <file>_ option. The last parameter is a key description. It will be used later when you create the encrypted Stratis pool.
|
||||
|
||||
For example, to create a key with the description _pool1key_, and to read the key from standard input, you would enter:
|
||||
|
||||
```
|
||||
# stratis key set --capture-key pool1key
|
||||
Enter desired key data followed by the return key:
|
||||
```
|
||||
|
||||
The command prompts us to type the key data / passphrase, and the key is then created within the kernel keyring.
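If you would rather not type the passphrase interactively, the _--keyfile-path_ option mentioned above reads it from a file instead. Assuming the passphrase is stored in a file such as _/root/pool1key_ (the same approach used with the systemd unit later in this article), the command would look like this:

```
# stratis key set --keyfile-path /root/pool1key pool1key
```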
|
||||
|
||||
To verify that the key was created, run _stratis key list_:
|
||||
|
||||
```
|
||||
# stratis key list
|
||||
Key Description
|
||||
pool1key
|
||||
```
|
||||
|
||||
This verifies that the _pool1key_ was created. Note that these keys are not persistent. If the host is rebooted, the key will need to be provided again before the encrypted Stratis pool can be accessed (this process is covered later).
|
||||
|
||||
If you have multiple encrypted pools, they can each have a separate key, or they can share the same key.
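For example, a second pool could reuse the key created above simply by passing the same key description. The pool name and device below are placeholders for illustration, not devices from the original setup:

```
# stratis pool create --key-desc pool1key pool2 /dev/vdc
```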
|
||||
|
||||
The keys can also be viewed using the following _keyctl_ commands:
|
||||
|
||||
```
|
||||
# keyctl get_persistent @s
|
||||
318044983
|
||||
# keyctl show
|
||||
Session Keyring
|
||||
701701270 --alswrv 0 0 keyring: _ses
|
||||
649111286 --alswrv 0 65534 \_ keyring: _uid.0
|
||||
318044983 ---lswrv 0 65534 \_ keyring: _persistent.0
|
||||
1051260141 --alswrv 0 0 \_ user: stratis-1-key-pool1key
|
||||
```
|
||||
|
||||
### Create the encrypted Stratis pool
|
||||
|
||||
Now that a key has been created for Stratis, the next step is to create the encrypted Stratis pool. Encrypting a pool can only be done at pool creation. It isn’t currently possible to encrypt an existing pool.
|
||||
|
||||
Use the _stratis pool create_ command to create a pool. Add _--key-desc_ and the key description that you provided in the previous step (_pool1key_). This signals to Stratis that the pool should be encrypted using the provided key. The example below creates the Stratis pool on _/dev/vdb_ and names it _pool1_. Be sure to specify an empty/available device on your system.
|
||||
|
||||
```
|
||||
# stratis pool create --key-desc pool1key pool1 /dev/vdb
|
||||
```
|
||||
|
||||
You can verify that the pool has been created with the _stratis pool list_ command:
|
||||
|
||||
```
|
||||
# stratis pool list
|
||||
Name Total Physical Properties
|
||||
pool1 4.98 GiB / 37.63 MiB / 4.95 GiB ~Ca, Cr
|
||||
```
|
||||
|
||||
In the sample output shown above, _~Ca_ indicates that caching is disabled (the tilde negates the property). _Cr_ indicates that encryption is enabled. Note that caching and encryption are mutually exclusive. Both features cannot be simultaneously enabled.
|
||||
|
||||
Next, create a filesystem. The example below demonstrates creating a filesystem named _filesystem1_, mounting it at the _/filesystem1_ mountpoint, and creating a test file in the new filesystem:
|
||||
|
||||
```
|
||||
# stratis filesystem create pool1 filesystem1
|
||||
# mkdir /filesystem1
|
||||
# mount /stratis/pool1/filesystem1 /filesystem1
|
||||
# cd /filesystem1
|
||||
# echo "this is a test file" > testfile
|
||||
```
|
||||
|
||||
### Access the encrypted pool after a reboot
|
||||
|
||||
When you reboot you’ll notice that Stratis no longer shows your encrypted pool or its block device:
|
||||
|
||||
```
|
||||
# stratis pool list
|
||||
Name Total Physical Properties
|
||||
```
|
||||
|
||||
```
|
||||
# stratis blockdev list
|
||||
Pool Name Device Node Physical Size Tier
|
||||
```
|
||||
|
||||
To access the encrypted pool, first re-create the key with the same key description and key data / passphrase that you used previously:
|
||||
|
||||
```
|
||||
# stratis key set --capture-key pool1key
|
||||
Enter desired key data followed by the return key:
|
||||
```
|
||||
|
||||
Next, run the _stratis pool unlock_ command, and verify that you can now see the pool and its block device:
|
||||
|
||||
```
|
||||
# stratis pool unlock
|
||||
# stratis pool list
|
||||
Name Total Physical Properties
|
||||
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr
|
||||
# stratis blockdev list
|
||||
Pool Name Device Node Physical Size Tier
|
||||
pool1 /dev/dm-2 4.98 GiB Data
|
||||
```
|
||||
|
||||
Next, mount the filesystem and verify that you can access the test file you created previously:
|
||||
|
||||
```
|
||||
# mount /stratis/pool1/filesystem1 /filesystem1/
|
||||
# cat /filesystem1/testfile
|
||||
this is a test file
|
||||
```
|
||||
|
||||
### Use a systemd unit file to automatically unlock a Stratis pool at boot
|
||||
|
||||
It is possible to automatically unlock your Stratis pool at boot without manual intervention. However, a file containing the key must be available. Storing the key in a file might be a security concern in some environments.
|
||||
|
||||
The systemd unit file shown below provides a simple method to unlock a Stratis pool at boot and mount the filesystem. Feedback on better or alternative methods is welcome; you can provide suggestions in the comment section at the end of this article.
|
||||
|
||||
Start by creating your key file with the following command. Be sure to substitute _passphrase_ with the same key data / passphrase you entered previously.
|
||||
|
||||
```
|
||||
# echo -n passphrase > /root/pool1key
|
||||
```
|
||||
|
||||
Make sure that the file is only readable by root:
|
||||
|
||||
```
|
||||
# chmod 400 /root/pool1key
|
||||
# chown root:root /root/pool1key
|
||||
```
|
||||
|
||||
Create a systemd unit file at _/etc/systemd/system/stratis-filesystem1.service_ with the following content:
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description = stratis mount pool1 filesystem1 file system
|
||||
After = stratisd.service
|
||||
|
||||
[Service]
|
||||
ExecStartPre=sleep 2
|
||||
ExecStartPre=stratis key set --keyfile-path /root/pool1key pool1key
|
||||
ExecStartPre=stratis pool unlock
|
||||
ExecStartPre=sleep 3
|
||||
ExecStart=mount /stratis/pool1/filesystem1 /filesystem1
|
||||
RemainAfterExit=yes
|
||||
|
||||
[Install]
|
||||
WantedBy = multi-user.target
|
||||
```
|
||||
|
||||
Next, enable the service so that it will run at boot:
|
||||
|
||||
```
|
||||
# systemctl enable stratis-filesystem1.service
|
||||
```
|
||||
|
||||
Now reboot and verify that the Stratis pool has been automatically unlocked and that its filesystem is mounted.
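A quick way to confirm this after logging back in is to check the service and the mount. These verification commands are a suggestion rather than part of the original walkthrough:

```
# systemctl status stratis-filesystem1.service
# stratis pool list
# df -h /filesystem1
```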
|
||||
|
||||
### Summary and conclusion
|
||||
|
||||
In today’s environment, encryption is a must for many people and organizations. This post demonstrated how to enable encryption in Stratis 2.1.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/getting-started-with-stratis-encryption/
|
||||
|
||||
作者:[briansmith][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/briansmith/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2020/11/stratis-encryption-2-816x345.jpg
|
||||
[2]: https://stratis-storage.github.io/
|
||||
[3]: https://www.youtube.com/watch?v=CJu3kmY-f5o
|
@ -0,0 +1,144 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Program your microcontroller with MicroBlocks)
|
||||
[#]: via: (https://opensource.com/article/20/11/microblocks)
|
||||
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
|
||||
|
||||
Program your microcontroller with MicroBlocks
|
||||
======
|
||||
MicroBlocks brings a Scratch-like interface to programming the Micro:bit, Circuit Playground Express, and other microcontroller boards.
|
||||
![Computer hardware components, generic][1]
|
||||
|
||||
If you like to tinker with technology, you may be familiar with programmable microcontroller boards, such as AdaFruit's [Circuit Playground Express][2] and the [BBC Micro:bit][3]. Now there's a new programming option for you to try: [MicroBlocks][4]. It's a simple Scratch-like programming interface that works well with several microcontrollers, including those two.
|
||||
|
||||
I own both the Circuit Playground Express and the BBC Micro:bit and was eager to try MicroBlocks after discovering it on [Twitter][5].
|
||||
|
||||
### Install MicroBlocks
|
||||
|
||||
To set up MicroBlocks on a Debian-based Linux distribution, [download][6] and install the .deb file. If you use an RPM-based Linux distribution, you can download the Linux [64-bit][7] or [32-bit][8] standalone executable. MicroBlocks also offers installers for [Windows][9], [macOS][10], and [Raspberry Pi][11].
|
||||
|
||||
MicroBlocks can also run in a Chrome, Chromium, or Edge [browser][12] using its experimental web platform, which enables special web serial connections. The Chrome web store also has a [browser extension][13] for MicroBlocks.
|
||||
|
||||
### Connect your microcontroller
|
||||
|
||||
Before you can access your microcontroller on Linux, you must add yourself to your computer's dialout group. Linux uses this group to communicate with serial devices, and if your user account isn't in that group, you won't be able to control your device.
|
||||
|
||||
Run the following in a terminal to add yourself to the dialout group:
|
||||
|
||||
|
||||
```
|
||||
$ sudo usermod -G dialout -a `whoami`
|
||||
```
|
||||
|
||||
Log out of your desktop and then log back in (or just reboot). Then connect your BBC Micro:bit, Circuit Playground Express, or other microcontroller board to an available USB port on your computer. My [Intel NUC][14] recognized my microcontroller without issue.
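If your board does not show up, it can help to confirm that the group change took effect and that Linux actually created a serial device for it. These checks are a generic troubleshooting suggestion (the device name varies by board), not a step from the MicroBlocks documentation:

```
$ groups | grep dialout
$ ls /dev/ttyACM* /dev/ttyUSB*
$ dmesg | tail      # may need sudo on distributions that restrict dmesg
```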
|
||||
|
||||
After connecting your microcontroller, you may be asked to update the device's firmware. It's safe to do so.
|
||||
|
||||
![Update firmware option][15]
|
||||
|
||||
(Don Watkins, [CC BY-SA 4.0][16])
|
||||
|
||||
Once that's done, you're all ready to go.
|
||||
|
||||
### Start programming
|
||||
|
||||
Use the programming interface to set what language you want to use when interacting with MicroBlocks.
|
||||
|
||||
![MicroBlocks language options][17]
|
||||
|
||||
(Don Watkins, [CC BY-SA 4.0][16])
|
||||
|
||||
You can verify that your microcontroller is connected by checking the Connect icon on the menu.
|
||||
|
||||
![MicroBlocks Connection icon][18]
|
||||
|
||||
(Don Watkins, [CC BY-SA 4.0][16])
|
||||
|
||||
Now you're ready to start exploring. One of my favorite ways to learn is to tinker with a user interface's different options. What makes MicroBlocks special is that it's a live coding environment, so you get to see changes you make right away.
|
||||
|
||||
Try this: Go to the Display category (in the left-hand column) and drag the display array into the scripting area. Use the menu to change A to B in one of them.
|
||||
|
||||
Click on a programming block, and your code, simple though it may be, runs immediately on the board.
|
||||
|
||||
### Use programming blocks
|
||||
|
||||
If you are familiar with [Scratch][19], you are likely to find MicroBlocks extremely easy to use. Students love it because of the instant feedback from the board and the program.
|
||||
|
||||
My first program was very simple. I wanted to make a simple "smiley face" on my Micro:bit.
|
||||
|
||||
First, I clicked on the Control block and selected: "When button 'a' is pressed."
|
||||
|
||||
Then I selected a smiley face from the LED Display library and connected that to the Control block.
|
||||
|
||||
Finally, I pressed Button A on my Micro:bit. Feedback is instantaneous.
|
||||
|
||||
![Smiley face displayed on Micro:bit][20]
|
||||
|
||||
(Don Watkins, [CC BY-SA 4.0][16])
|
||||
|
||||
### Save your code
|
||||
|
||||
Saving your program is easy. On the top menu bar, click on the third icon from the left (the document icon). Choose the Save option from the drop-down menu.
|
||||
|
||||
![Save file in MicroBlocks][21]
|
||||
|
||||
(Don Watkins, [CC BY-SA 4.0][16])
|
||||
|
||||
Try experimenting with the interface to program your board however you want. For my second program, I used the Control and LED Display blocks to spell out "Bills," which is my favorite NFL team. But there are lots of other functions available, so try designing something that interests you.
|
||||
|
||||
!["Bills" on Micro:Bit][22]
|
||||
|
||||
(Don Watkins, [CC BY-SA 4.0][16])
|
||||
|
||||
### Do more with MicroBlocks
|
||||
|
||||
Be sure to check out the [quickstart][23] guide on the MicroBlocks website for more information. The site also contains [activity guides][24] with easy-to-follow code examples for students and teachers. These will help anyone get started programming the Micro:bit or the Circuit Playground Express with MicroBlocks.
|
||||
|
||||
MicroBlocks is fully [open source][25] and released under the [Mozilla Public License 2.0][26].
|
||||
|
||||
MicroBlocks is still under active development by the core team, and they're not currently soliciting code contributions or pull requests. However, they are interested in any MicroBlocks tutorials, lesson plans, or examples from the community, so please [contact them][27] if you have something to share.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/11/microblocks
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hardware_disk_components.png?itok=W1fhbwYp (Computer hardware components, generic)
|
||||
[2]: https://opensource.com/article/19/7/circuit-playground-express
|
||||
[3]: https://opensource.com/article/19/8/getting-started-bbc-microbit
|
||||
[4]: https://microblocks.fun/
|
||||
[5]: https://twitter.com/microblocksfun
|
||||
[6]: https://microblocks.fun/download
|
||||
[7]: https://microblocks.fun/downloads/latest/standalone/ublocks-linux64bit.zip
|
||||
[8]: https://microblocks.fun/downloads/latest/standalone/ublocks-linux32bit.zip
|
||||
[9]: https://microblocks.fun/downloads/latest/packages/microBlocks%20setup.exe
|
||||
[10]: https://microblocks.fun/downloads/latest/packages/MicroBlocks.app.zip
|
||||
[11]: https://microblocks.fun/downloads/latest/packages/ublocks-armhf.deb
|
||||
[12]: https://microblocks.fun/run/microblocks.html
|
||||
[13]: https://chrome.google.com/webstore/detail/microblocks/cbmcbhgijipgdmlnieolilhghfmnngbb?hl=en
|
||||
[14]: https://opensource.com/article/20/9/linux-intel-nuc
|
||||
[15]: https://opensource.com/sites/default/files/uploads/microblocks_update-firmware.png (Update firmware option)
|
||||
[16]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[17]: https://opensource.com/sites/default/files/uploads/microblocks_set-language.png (MicroBlocks language options)
|
||||
[18]: https://opensource.com/sites/default/files/uploads/microblocks_connected.png (MicroBlocks Connection icon)
|
||||
[19]: https://scratch.mit.edu/
|
||||
[20]: https://opensource.com/sites/default/files/uploads/smileyface.jpg (Smiley face displayed on Micro:bit)
|
||||
[21]: https://opensource.com/sites/default/files/uploads/microblocks_save.png (Save file in MicroBlocks)
|
||||
[22]: https://opensource.com/sites/default/files/uploads/microblocks_bills.png ("Bills" on Micro:Bit)
|
||||
[23]: https://microblocks.fun/learn#getstarted
|
||||
[24]: https://microblocks.fun/learn#activity_cards
|
||||
[25]: https://bitbucket.org/john_maloney/smallvm/src/master/
|
||||
[26]: https://www.mozilla.org/en-US/MPL/2.0/
|
||||
[27]: https://microblocks.fun/info#contact
|
@ -0,0 +1,311 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Set up Minishift and run Jenkins on Linux)
|
||||
[#]: via: (https://opensource.com/article/20/11/minishift-linux)
|
||||
[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)
|
||||
|
||||
Set up Minishift and run Jenkins on Linux
|
||||
======
|
||||
Install, configure, and use Minishift to create your first pipeline.
|
||||
![cubes coming together to create a larger cube][1]
|
||||
|
||||
[Minishift][2] is a tool that helps you run [OKD][3] (Red Hat's open source OpenShift container platform) locally by launching a single-node OKD cluster inside a virtual machine. It is powered by [Kubernetes][4], which is one of my favorite things to talk about.
|
||||
|
||||
In this article, I will demonstrate how to get started with Minishift on Linux. This was written for Ubuntu 18.04, and you'll need [sudo access][5] on your Linux machine to run some commands.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
Before starting the installation, your Linux machine must have either [KVM][6] for Linux or [VirtualBox][7], which runs on every platform. This demo uses KVM, which you can install along with all the required dependencies:
|
||||
|
||||
|
||||
```
|
||||
$ sudo apt install qemu-kvm \
|
||||
libvirt-clients libvirt-daemon-system \
|
||||
bridge-utils virt-manager
|
||||
```
|
||||
|
||||
After installing KVM, you must make some modifications to allow your user to use it. Specifically, you must add your user name to the `libvirt` group:
|
||||
|
||||
|
||||
```
|
||||
$ sudo usermod --append --groups libvirt $(whoami)
|
||||
$ newgrp libvirt
|
||||
```
|
||||
|
||||
Next, install the Docker KVM driver, which is needed to run containers on Minishift. I downloaded the Docker machine driver directly to `/usr/local/bin`. You don't have to save it to `/usr/local/bin`, but you must ensure that its location is in your [PATH][8]:
|
||||
|
||||
|
||||
```
|
||||
$ sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-ubuntu16.04 \
|
||||
-o /usr/local/bin/docker-machine-driver-kvm
|
||||
|
||||
$ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm
|
||||
```
|
||||
|
||||
### Install Minishift
|
||||
|
||||
Now that the prerequisites are in place, visit the Minishift [releases page][9] and determine which version of Minishift you want to install. I used Minishift [v1.34.3][10].
|
||||
|
||||
[Download the Linux .tar file][11] to a directory you will be able to find easily. I used the `minishift` directory:
|
||||
|
||||
|
||||
```
|
||||
$ ls
|
||||
minishift-1.34.3-linux-amd64.tgz
|
||||
```
|
||||
|
||||
Next, untar your new file using [the `tar` command][12]:
|
||||
|
||||
|
||||
```
|
||||
$ tar zxvf minishift-1.34.3-linux-amd64.tgz
|
||||
minishift-1.34.3-linux-amd64/
|
||||
minishift-1.34.3-linux-amd64/LICENSE
|
||||
minishift-1.34.3-linux-amd64/README.adoc
|
||||
minishift-1.34.3-linux-amd64/minishift
|
||||
```
|
||||
|
||||
By using the `v` (for _verbose_) option in your command, you can see all the files and their locations in your directory structure.
|
||||
|
||||
Run the `ls` command to confirm that the new directory was created:
|
||||
|
||||
|
||||
```
|
||||
$ ls
|
||||
minishift-1.34.3-linux-amd64
|
||||
```
|
||||
|
||||
Next, change to the new directory and find the binary file you need; it is named `minishift`:
|
||||
|
||||
|
||||
```
|
||||
$ cd minishift-1.34.3-linux-amd64
|
||||
$ ls
|
||||
LICENSE minishift README.adoc
|
||||
$
|
||||
```
|
||||
|
||||
Move the `minishift` binary file to a directory in your PATH, which you can find by running the following and looking at the output:
|
||||
|
||||
|
||||
```
|
||||
$ echo $PATH
|
||||
/home/jess/.local/bin:/usr/local/sbin:/usr/local/bin
|
||||
```
|
||||
|
||||
I used `/usr/local/bin` as the `minishift` binary file's location:
|
||||
|
||||
|
||||
```
|
||||
$ sudo mv minishift /usr/local/bin
|
||||
[sudo] password for jess:
|
||||
$ ls /usr/local/bin
|
||||
minishift
|
||||
```
|
||||
|
||||
Run the `minishift` command and look at the output:
|
||||
|
||||
|
||||
```
|
||||
$ minishift
|
||||
Minishift is a command-line tool that provisions and manages single-node OpenShift clusters optimized for development workflows.
|
||||
|
||||
Usage:
|
||||
minishift [command]
|
||||
|
||||
Available Commands:
|
||||
addons Manages Minishift add-ons.
|
||||
completion Outputs minishift shell completion for the given shell
|
||||
config Modifies Minishift configuration properties.
|
||||
console Opens or displays the OpenShift Web Console URL.
|
||||
[...]
|
||||
|
||||
Use "minishift [command] --help" for more information about a command.
|
||||
```
|
||||
|
||||
### Log into Minishift's web console
|
||||
|
||||
Now that Minishift is installed, you can walk through it and play with some cool new software. Begin with `minishift start`. This, as you might guess, starts Minishift—specifically, it starts a one-node cluster on your computer:
|
||||
|
||||
|
||||
```
|
||||
$ minishift start
|
||||
Starting profile 'minishift'
|
||||
Check if deprecated options are used … OK
|
||||
Checking if https://github.com is reachable … OK
|
||||
[...]
|
||||
Minishift will be configured with…
|
||||
Memory: 4GB
|
||||
vCPUs : 2
|
||||
Disk size: 20 GB
|
||||
Starting Minishift VM ……….OK
|
||||
```
|
||||
|
||||
This process can take a long time, depending on your hardware, so be patient. When it ends, you'll get information about where to find your imaginary cluster on your virtualized network:
|
||||
|
||||
|
||||
```
|
||||
Server Information ...
|
||||
MiniShift server started.
|
||||
The server is accessible via web console at:
|
||||
https://192.168.42.66:8443/console
|
||||
```
|
||||
|
||||
Now, MiniShift is running, complete with a web console. You can log into the OKD console using **developer** as the user name and any password you want. I chose **developer** / **developer**.
|
||||
|
||||
![Minishift web console login][13]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
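If you lose track of that address later, Minishift can print it again from the command line. This is not shown in the output above, and the exact flag may vary between Minishift versions, so check `minishift console --help` if it does not work as-is:

```
$ minishift console --url
```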
|
||||
|
||||
The web console is an easy control panel you can use to administer your humble cluster. It's a place for you to create and load container images, add and monitor pods, and ensure your instance is healthy.
|
||||
|
||||
![Minishift web console][15]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
### Build a pipeline
|
||||
|
||||
To start building your first pipeline, click **Pipeline Build Example** on the console. Click **Next** to show the parameters available to create the pipeline project.
|
||||
|
||||
![Pipeline Build Example][16]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
A window appears with parameters to fill in if you want; you can use what's already there for this example. Walk through the rest of the screen choices to create a sample pipeline.
|
||||
|
||||
![Pipeline options][17]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
Click **Create**, and let Minishift create the project for you. It shows your success (or failure).
|
||||
|
||||
![Successful pipeline build][18]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
You can also click **Show Parameters** and scroll through the list of parameters configured for this project. Click **Close** and look for a confirmation message on the left.
|
||||
|
||||
![Show Parameters Minishift][19]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
![List of projects][20]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
When you click on **My Project**, you can see the details and pods created for the project to run.
|
||||
|
||||
![Project details][21]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
Open the `jenkins-ephemeral` link that was generated. Log in again with the **developer** credentials and allow access to run a pipeline in Jenkins.
|
||||
|
||||
![Authorize access interface][22]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
Now you can look through the Jenkins interface to get a feel for what it has to offer.
|
||||
|
||||
![Jenkins interface][23]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
Find your project.
|
||||
|
||||
![Jenkins projects][24]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
When you're ready, click **Build Now**.
|
||||
|
||||
![Jenkins "build now"][25]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
Then you can view the job's output in the console output.
|
||||
|
||||
![Jenkins console output][26]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
Once the job completes successfully, you will see a success message at the bottom of the console.
|
||||
|
||||
What did this pipeline do? It updated the deployment manually.
|
||||
|
||||
![Pipeline result][27]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
Congratulations, you successfully created an example automated deployment using Minishift!
|
||||
|
||||
### Clean it up
|
||||
|
||||
The last thing to do is to clean up everything by running two commands:
|
||||
|
||||
|
||||
```
|
||||
$ minishift stop
|
||||
$ minishift delete
|
||||
```
|
||||
|
||||
Why `stop` and then `delete`? Well, I like to make sure nothing is running before I run a delete command of any kind. This results in a cleaner delete without the possibility of having any leftover or hung processes. Here are the commands' output.
|
||||
|
||||
![minishift stop command][28]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
![minishift delete command][29]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][14])
|
||||
|
||||
### Final notes
|
||||
|
||||
Minishift is a great tool with great built-in automation. The user interface is comfortable to work with and easy on the eyes. I found it a fun new tool to play with at home, and if you want to dive in deeper, just look over the great [documentation][30] and many [online tutorials][3]. I recommend exploring this application in depth. Have a happy time Minishifting!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/11/minishift-linux
|
||||
|
||||
作者:[Jessica Cherry][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/cherrybomb
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
|
||||
[2]: https://www.okd.io/minishift/
|
||||
[3]: https://www.redhat.com/sysadmin/learn-openshift-minishift
|
||||
[4]: https://opensource.com/resources/what-is-kubernetes
|
||||
[5]: https://en.wikipedia.org/wiki/Sudo
|
||||
[6]: https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine
|
||||
[7]: https://www.virtualbox.org/wiki/Downloads
|
||||
[8]: https://opensource.com/article/17/6/set-path-linux
|
||||
[9]: https://github.com/minishift/minishift/releases
|
||||
[10]: https://github.com/minishift/minishift/releases/tag/v1.34.3
|
||||
[11]: https://github.com/minishift/minishift/releases/download/v1.34.3/minishift-1.34.3-linux-amd64.tgz
|
||||
[12]: https://opensource.com/article/17/7/how-unzip-targz-file
|
||||
[13]: https://opensource.com/sites/default/files/uploads/minishift_web-console-login.png (Minishift web console login)
|
||||
[14]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[15]: https://opensource.com/sites/default/files/uploads/minishift_web-console.png (Minishift web console)
|
||||
[16]: https://opensource.com/sites/default/files/uploads/minishift_pipeline-build-example.png (Pipeline Build Example)
|
||||
[17]: https://opensource.com/sites/default/files/uploads/minishift_pipeline-build-config.png (Pipeline options)
|
||||
[18]: https://opensource.com/sites/default/files/uploads/minishift_pipeline-build-success.png (Successful pipeline build)
|
||||
[19]: https://opensource.com/sites/default/files/pictures/params-minishift.jpg (Show Parameters Minishift)
|
||||
[20]: https://opensource.com/sites/default/files/uploads/minishift_myprojects.png (List of projects)
|
||||
[21]: https://opensource.com/sites/default/files/uploads/minishift_project-details.png (Project details)
|
||||
[22]: https://opensource.com/sites/default/files/uploads/minishift_authorize-access.png (Authorize access interface)
|
||||
[23]: https://opensource.com/sites/default/files/uploads/jenkins-interface.png (Jenkins interface)
|
||||
[24]: https://opensource.com/sites/default/files/uploads/jenkins-project.png (Jenkins projects)
|
||||
[25]: https://opensource.com/sites/default/files/uploads/jenkins_build-now.png (Jenkins "build now")
|
||||
[26]: https://opensource.com/sites/default/files/uploads/jenkins_console-output.png (Jenkins console output)
|
||||
[27]: https://opensource.com/sites/default/files/uploads/pipelineresult.png (Pipeline result)
|
||||
[28]: https://opensource.com/sites/default/files/uploads/minishift-stop.png (minishift stop command)
|
||||
[29]: https://opensource.com/sites/default/files/uploads/minishift-delete.png (minishift delete command)
|
||||
[30]: https://docs.okd.io/3.11/minishift/using/index.html
|
@ -0,0 +1,78 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What's the difference between orchestration and automation?)
|
||||
[#]: via: (https://opensource.com/article/20/11/orchestration-vs-automation)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
What's the difference between orchestration and automation?
|
||||
======
|
||||
Both terms imply that things happen without your direct intervention.
|
||||
But the way you get to those results, and the tools you use to make them
|
||||
happen, differ.
|
||||
![doodles of arrows moving in different directions][1]
|
||||
|
||||
For the longest time, it seemed the only thing any sysadmin cared about was automation. Recently, though, the mantra seems to have changed from automation to orchestration, leading many puzzled admins to wonder: "What's the difference?"
|
||||
|
||||
The difference between automation and orchestration is primarily in intent and tooling. Technically, automation can be considered a subset of orchestration. While orchestration suggests many moving parts, automation usually refers to a singular task or a small number of strongly related tasks. Orchestration works at a higher level and is expected to make decisions based on changing conditions and requirements.
|
||||
|
||||
However, this view shouldn't be taken too literally because both terms—_automation_ and _orchestration_—do have implications when they're used. The results of both are functionally the same: things happen without your direct intervention. But the way you get to those results, and the tools you use to make them happen, are different, or at least the terms are used differently depending on what tools you've used.
|
||||
|
||||
For instance, automation usually involves scripting, often in Bash or Python or similar, and it often suggests scheduling something to happen at either a precise time or upon a specific event. However, orchestration often begins with an application that's purpose-built for a set of tasks that may happen irregularly, on demand, or as a result of any number of trigger events, and the exact results may even depend on a variety of conditions.
|
||||
|
||||
### Decision-making and IT orchestration
|
||||
|
||||
Automation suggests that a sysadmin has invented a system to cause a computer to do something that would normally have to be done manually. In automation, the sysadmin has already made most of the decisions on what needs to be done, and all the computer must do is execute a "recipe" of tasks.
|
||||
|
||||
Orchestration suggests that a sysadmin has set up a system to do something on its own based on a set of rules, parameters, and observations. In orchestration, the sysadmin knows the desired end result but leaves it up to the computer to decide what to do.
|
||||
|
||||
Consider Ansible and Bash. Bash is a popular shell and scripting language used by sysadmins to accomplish practically everything they do during a given workday. Automating with Bash is straightforward: Instead of typing commands into an interactive session, you type them into a text document and save the file as a shell script. Bash runs the shell script, executing each command in succession. There's room for some conditional decision-making, but usually, it's no more complex than simple if-then statements, each of which must be coded into the script.
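
For example, a cron-driven Bash script like the following captures the spirit of automation (a sketch only; the paths and retention period are assumptions, not something from the article):

```
#!/usr/bin/env bash
# A minimal sketch of automation: archive a directory on a schedule (e.g., from cron).
# The paths and the 30-day retention are hypothetical; adjust for your environment.
SRC="/var/www/html"
DEST="/backups/www-$(date +%F).tar.gz"

tar -czf "$DEST" "$SRC"

# Simple if-then decision-making, as described above:
if [ -f "$DEST" ]; then
    echo "Backup written to $DEST"
else
    echo "Backup failed" >&2
    exit 1
fi

# Prune archives older than 30 days.
find /backups -name 'www-*.tar.gz' -mtime +30 -delete
```
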
|
||||
|
||||
Ansible, on the other hand, uses playbooks in which a sysadmin describes the desired state of the computer. It lists requirements that must be met before Ansible can consider the job done. When Ansible runs, it takes action based on the current state of the computer compared to the desired state, based on the computer's operating system, and so on. A playbook doesn't contain specific commands, instead leaving those decisions up to Ansible itself.
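
By contrast, a minimal playbook only declares the end state and lets Ansible work out what commands to run. This sketch assumes Ansible is installed and targets the local machine; the `httpd` package is just an example:

```
# Describe the desired state in a small playbook, then hand it to Ansible.
cat > webserver.yml <<'EOF'
---
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Ensure the web server package is present
      package:
        name: httpd
        state: present
    - name: Ensure the web server is running and enabled at boot
      service:
        name: httpd
        state: started
        enabled: true
EOF

ansible-playbook webserver.yml
```
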
|
||||
|
||||
Of course, it's particularly revealing that Ansible is referred to as an automation—not an orchestration—tool. The difference can be subtle, and the terms definitely overlap.
|
||||
|
||||
### Orchestration and the cloud
|
||||
|
||||
Say you need to convert a file type that's regularly uploaded to your server by your users.
|
||||
|
||||
The manual solution would be to check a directory for uploaded content every morning, open the file, and then save it in a different format. This solution is slow, inefficient, and probably could happen only once every 24 hours because you're a busy person.
|
||||
|
||||
**[Read next: [How to explain orchestration][2]]**
|
||||
|
||||
You could automate the task. Were you to do that, you might write a PHP or a Node.js script to detect when a file has been uploaded. The script would perform the conversion and send an alert or make a log entry to confirm the conversion was successful. You could improve the script over time to allow users to interact with the upload and conversion process.
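
The article suggests PHP or Node.js; the same idea sketched in shell would look something like the following (assuming the `inotify-tools` and ImageMagick packages are installed, and using made-up directory names):

```
#!/usr/bin/env bash
# Watch an upload directory and convert each new PNG to JPEG as it arrives.
# Directory names are hypothetical; inotifywait comes from inotify-tools,
# and convert comes from ImageMagick.
WATCH_DIR="/srv/uploads"
OUT_DIR="/srv/converted"

inotifywait -m -e close_write --format '%f' "$WATCH_DIR" | while read -r FILE; do
    case "$FILE" in
        *.png)
            convert "$WATCH_DIR/$FILE" "$OUT_DIR/${FILE%.png}.jpg" &&
                echo "$(date): converted $FILE" >> /var/log/convert.log
            ;;
    esac
done
```
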
|
||||
|
||||
Were you to orchestrate the process, you might instead start with an application. Your custom app would be designed to accept and convert files. You might run the application in a container on your cloud, and using OpenShift, you could launch additional instances of your app when the traffic or workload increases beyond a certain threshold.
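
On the orchestration side, the scaling rule itself can be expressed in a single command; for instance, with the `oc` client (the deployment name `file-converter` is purely an assumption for illustration):

```
# Let OpenShift add or remove pods based on observed CPU load.
oc autoscale deployment/file-converter --min=1 --max=10 --cpu-percent=75
```
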
|
||||
|
||||
### Learning automation and orchestration
|
||||
|
||||
There isn't just one discipline for automation or orchestration. These are broad practices that are applied to many different tasks across many different industries. The first step to learning, though, is to become proficient with the technology you're meant to orchestrate and automate. It's difficult to orchestrate (safely) the scaling of a series of web servers if you don't understand how a web server works, or what ports need to be open or closed, or what a port is. In practice, you may not be the person opening ports or configuring the server; you could be tasked with administering OpenShift without really knowing or caring what's inside a container. But basic concepts are important because they broadly apply to usability, troubleshooting, and security.
|
||||
|
||||
You also need to get familiar with the most common tools of the orchestration and automation world. Learn some [Bash][3], start using [Git][4] and design some [Git hooks][5], learn some Python, get comfortable with [YAML][6] and [Ansible][7], and try out Minikube, [OKD][8], and [OpenShift][9].
|
||||
|
||||
Orchestration and automation are important skills, both for making your own work more efficient and as something to bring to your team. Invest in them today, and get twice as much done tomorrow.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/11/orchestration-vs-automation
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arrows_operation_direction_system_orchestrate.jpg?itok=NUgoZYY1 (doodles of arrows moving in different directions)
|
||||
[2]: https://enterprisersproject.com/article/2020/8/orchestration-explained-plain-english
|
||||
[3]: https://www.redhat.com/sysadmin/using-bash-automation
|
||||
[4]: https://opensource.com/life/16/7/stumbling-git
|
||||
[5]: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6
|
||||
[6]: https://www.redhat.com/sysadmin/understanding-yaml-ansible
|
||||
[7]: https://opensource.com/downloads/ansible-k8s-cheat-sheet
|
||||
[8]: https://www.redhat.com/sysadmin/learn-openshift-minishift
|
||||
[9]: http://openshift.io
|
@ -0,0 +1,249 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Load balance network traffic with HAProxy)
|
||||
[#]: via: (https://opensource.com/article/20/11/load-balancing-haproxy)
|
||||
[#]: author: (Jim O'Connell https://opensource.com/users/jimoconnell)
|
||||
|
||||
Load balance network traffic with HAProxy
|
||||
======
|
||||
Install, configure, and run HAProxy to distribute network traffic across
|
||||
several web or application servers.
|
||||
![eight stones balancing][1]
|
||||
|
||||
You don't have to work at a huge company to justify using a load balancer. You might be a hobbyist, self-hosting a website from a couple of Raspberry Pi computers. Perhaps you're the server administrator for a small business; maybe you _do_ work for a huge company. Whatever your situation, you can benefit from using the [HAProxy][2] load balancer to manage your traffic.
|
||||
|
||||
HAProxy is known as "the world's fastest and most widely used software load balancer." It packs in many features that can make your applications more secure and reliable, including built-in rate limiting, anomaly detection, connection queuing, health checks, and detailed logs and metrics. Learning the basic skills and concepts covered in this tutorial will help you use HAProxy to build a more robust, far more powerful infrastructure.
|
||||
|
||||
### Why would you need a load balancer?
|
||||
|
||||
A load balancer is a way to easily distribute connections across several web or application servers. In fact, HAProxy can balance any type of Transmission Control Protocol ([TCP][3]) traffic, including RDP, FTP, WebSockets, or database connections. The ability to distribute load means you don't need to purchase a massive web server with zillions of gigs of RAM just because your website gets more traffic than Google.
|
||||
|
||||
A load balancer also gives you flexibility. Perhaps your existing web server isn't robust enough to meet peak demand during busy times of the year and you'd like to add another, but only temporarily. Maybe you want to add some redundancy in case one server fails. With HAProxy, you can add more servers to the backend pool when you need them and remove them when you don't.
|
||||
|
||||
You can also route requests to different servers depending on the context. For example, you might want to handle your static content with a couple of cache servers, such as [Varnish][4], but route anything that requires dynamic content, such as an API endpoint, to a more powerful machine.
|
||||
|
||||
In this article, I will walk through setting up a very basic HAProxy installation to use HTTPS to listen on secure port 443 and utilize a couple of backend web servers. It will even send all traffic that comes to a predefined URL (like `/api/`) to a different server or pool of servers.
|
||||
|
||||
### Install HAProxy
|
||||
|
||||
To get started, spin up a new CentOS 8 server or instance and bring the system up to date:
|
||||
|
||||
|
||||
```
|
||||
sudo yum update -y
|
||||
```
|
||||
|
||||
This typically runs for a while. Grab yourself a coffee while you wait.
|
||||
|
||||
This installation has two parts: first, install the yum-packaged version of HAProxy; then compile the latest release from source and overwrite that binary. Installing with yum does a lot of the heavy lifting, such as generating the systemd startup scripts, so run the `yum install` first and then replace the HAProxy binary with the newer version you compile from source code:
|
||||
|
||||
|
||||
```
|
||||
sudo yum install -y haproxy
|
||||
```
|
||||
|
||||
Enable the HAProxy service:
|
||||
|
||||
|
||||
```
|
||||
sudo systemctl enable haproxy
|
||||
```
|
||||
|
||||
To upgrade to the latest version ([version 2.2][5], as of this writing), compile the source code. Many people assume that compiling and installing a program from its source code requires a high degree of technical ability, but it's a pretty straightforward process. Start by using `yum` to install a few packages that provide the tools for compiling code:
|
||||
|
||||
|
||||
```
|
||||
sudo yum install dnf-plugins-core
|
||||
sudo yum config-manager --set-enabled PowerTools
|
||||
# (Multiline command next 3 lines. Copy and paste together:)
|
||||
|
||||
sudo yum install -y git ca-certificates gcc glibc-devel \
|
||||
lua-devel pcre-devel openssl-devel systemd-devel \
|
||||
make curl zlib-devel
|
||||
```
|
||||
|
||||
Use `git` to get the latest source code and change to the `haproxy` directory:
|
||||
|
||||
|
||||
```
|
||||
git clone http://git.haproxy.org/git/ haproxy
|
||||
cd haproxy
|
||||
```
|
||||
|
||||
Run the following three commands to build and install HAProxy with integrated Prometheus support:
|
||||
|
||||
|
||||
```
|
||||
# (Multiline command next 3 lines. Copy and paste together:)
make TARGET=linux-glibc USE_LUA=1 USE_OPENSSL=1 USE_PCRE=1 \
    PCREDIR= USE_ZLIB=1 USE_SYSTEMD=1 \
    EXTRA_OBJS="contrib/prometheus-exporter/service-prometheus.o"

sudo make PREFIX=/usr install   # Install to /usr/sbin/haproxy
|
||||
```
|
||||
|
||||
Test it by querying the version:
|
||||
|
||||
|
||||
```
|
||||
haproxy -v
|
||||
```
|
||||
|
||||
You should get the following output:
|
||||
|
||||
|
||||
```
|
||||
HA-Proxy version 2.2.4-b16390-23 2020/10/09 - https://haproxy.org/
|
||||
```
|
||||
|
||||
### Create the backend server
|
||||
|
||||
HAProxy doesn't serve any traffic directly—this is the job of backend servers, which are typically web or application servers. For this exercise, I'm using a tool called [Ncat][6], the "Swiss Army knife" of networking, to create some exceedingly simple servers. Install it:
|
||||
|
||||
|
||||
```
|
||||
sudo yum install nc -y
|
||||
```
|
||||
|
||||
If your system has [SELinux][7] enabled, you'll need to enable port 8404, the port used for accessing the HAProxy Stats page (explained below), and the ports for your backend servers:
|
||||
|
||||
|
||||
```
|
||||
sudo dnf install policycoreutils-python-utils
|
||||
sudo semanage port -a -t http_port_t -p tcp 8404
|
||||
sudo semanage port -a -t http_port_t -p tcp 10080;
|
||||
sudo semanage port -a -t http_port_t -p tcp 10081;
|
||||
sudo semanage port -a -t http_port_t -p tcp 10082;
|
||||
```
|
||||
|
||||
Create two Ncat web servers and an API server:
|
||||
|
||||
|
||||
```
|
||||
while true ;
|
||||
do
|
||||
nc -l -p 10080 -c 'echo -e "HTTP/1.1 200 OK\n\n This is Server ONE"' ;
|
||||
done &
|
||||
|
||||
while true ;
|
||||
do
|
||||
nc -l -p 10081 -c 'echo -e "HTTP/1.1 200 OK\n\n This is Server TWO"' ;
|
||||
done &
|
||||
|
||||
while true ;
|
||||
do
|
||||
nc -l -p 10082 -c 'echo -e "HTTP/1.1 200 OK\nContent-Type: application/json\n\n { \"Message\" :\"Hello, World!\" }"' ;
|
||||
done &
|
||||
```
|
||||
|
||||
These simple servers print out a message (such as "This is Server ONE") and run until the server is stopped. In a real-world setup, you would use actual web and app servers.
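
Before wiring up HAProxy, you can hit the test servers directly to be sure they answer. The output should look roughly like this:

```
$ curl http://localhost:10080/
 This is Server ONE
$ curl http://localhost:10082/
 { "Message" :"Hello, World!" }
```
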
|
||||
|
||||
### Modify the HAProxy config file
|
||||
|
||||
HAProxy's configuration file is `/etc/haproxy/haproxy.cfg`. This is where you make the changes to define your load balancer. This [basic configuration][8] will get you started with a working server:
|
||||
|
||||
|
||||
```
|
||||
global
|
||||
log 127.0.0.1 local2
|
||||
user haproxy
|
||||
group haproxy
|
||||
|
||||
defaults
|
||||
mode http
|
||||
log global
|
||||
option httplog
|
||||
|
||||
frontend main
|
||||
bind *:80
|
||||
|
||||
default_backend web
|
||||
use_backend api if { path_beg -i /api/ }
|
||||
|
||||
#-------------------------
|
||||
# SSL termination - HAProxy handles the encryption.
|
||||
# To use it, put your PEM file in /etc/haproxy/certs
|
||||
# then edit this file and uncomment the bind and redirect lines below
|
||||
#-------------------------
|
||||
# bind *:443 ssl crt /etc/haproxy/certs/haproxy.pem ssl-min-ver TLSv1.2
|
||||
# redirect scheme https if !{ ssl_fc }
|
||||
|
||||
#-----------------------------
|
||||
# Enable stats at http://test.local:8404/stats
|
||||
#-----------------------------
|
||||
|
||||
frontend stats
|
||||
bind *:8404
|
||||
stats enable
|
||||
stats uri /stats
|
||||
#-----------------------------
|
||||
# round robin balancing between the various backends
|
||||
#-----------------------------
|
||||
|
||||
backend web
|
||||
server web1 127.0.0.1:10080 check
|
||||
server web2 127.0.0.1:10081 check
|
||||
|
||||
#-----------------------------
|
||||
|
||||
# API backend for serving up API content
|
||||
#-----------------------------
|
||||
backend api
|
||||
server api1 127.0.0.1:10082 check
|
||||
```
|
||||
|
||||
### Restart and reload HAProxy
|
||||
|
||||
HAProxy is probably not running yet, so issue the command `sudo systemctl restart haproxy` to start (or restart) it. The `restart` method is fine for non-production situations, but once you are up and running, you'll want to get in the habit of using `sudo systemctl reload haproxy` to avoid service interruptions, even if you have an error in your config.
|
||||
|
||||
For example, after you make changes to `/etc/haproxy/haproxy.cfg`, you need to reload the daemon with `sudo systemctl reload haproxy` to effect the changes. If there is an error, it will let you know but continue running with the previous configuration. Check your HAProxy status with `sudo systemctl status haproxy`.
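
You can also ask HAProxy to validate a configuration file without touching the running service, which is a useful habit to pair with `reload` (a quick sketch):

```
# Check the syntax of the config file; nothing is started or reloaded.
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```
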
|
||||
|
||||
If it doesn't report any errors, you have a running server. Test it with curl on the server, by typing `curl http://localhost/` on the command line. If you see "_This is Server ONE_," then it all worked! Run `curl` a few times and watch it cycle through your backend pool, then see what happens when you type `curl http://localhost/api/`. Adding `/api/` to the end of the URL will send all of that traffic to the third server in your pool. At this point, you should have a functioning load balancer!
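
A quick loop makes the round-robin behavior easy to see; the exact order may vary, but the output should look something like this:

```
$ for i in 1 2 3 4; do curl -s http://localhost/ ; done
 This is Server ONE
 This is Server TWO
 This is Server ONE
 This is Server TWO
$ curl -s http://localhost/api/
 { "Message" :"Hello, World!" }
```
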
|
||||
|
||||
### Check your stats
|
||||
|
||||
You may have noted that the configuration defined a frontend called `stats` that is listening on port 8404:
|
||||
|
||||
|
||||
```
|
||||
frontend stats
|
||||
bind *:8404
|
||||
stats uri /stats
|
||||
stats enable
|
||||
```
|
||||
|
||||
In your browser, load up `http://localhost:8404/stats`. Read HAProxy's blog "[Exploring the HAProxy Stats page][9]" to find out what you can do here.
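
If you would rather scrape the numbers than read the web page, the same statistics are available as CSV by appending `;csv` to the stats URL:

```
$ curl -s "http://localhost:8404/stats;csv" | head -5
```
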
|
||||
|
||||
### A powerful load balancer
|
||||
|
||||
Although I covered just a few of HAProxy's features, you now have a server that listens on port 80 (and, once you uncomment the SSL termination lines, on port 443 with HTTP redirected to HTTPS), balances traffic between several backend servers, and even sends traffic matching a specific URL pattern to a different backend. You also unlocked the very powerful HAProxy Stats page, which gives you a great overview of your systems.
|
||||
|
||||
This exercise might seem simple, but make no mistake about it: you have just built and configured a very powerful load balancer capable of handling a significant amount of traffic.
|
||||
|
||||
For your convenience, I put all the commands in this article in a [GitHub Gist][10].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/11/load-balancing-haproxy
|
||||
|
||||
作者:[Jim O'Connell][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jimoconnell
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/water-stone-balance-eight-8.png?itok=1aht_V5V (eight stones balancing)
|
||||
[2]: https://www.haproxy.org/
|
||||
[3]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol
|
||||
[4]: https://varnish-cache.org/
|
||||
[5]: https://www.haproxy.com/blog/announcing-haproxy-2-2/
|
||||
[6]: https://nmap.org/ncat
|
||||
[7]: https://www.redhat.com/en/topics/linux/what-is-selinux
|
||||
[8]: https://gist.github.com/haproxytechblog/38ef4b7d42f16cfe5c30f28ee3304dce
|
||||
[9]: https://www.haproxy.com/blog/exploring-the-haproxy-stats-page/
|
||||
[10]: https://gist.github.com/haproxytechblog/d656422754f1b5eb1f7bbeb1452d261e
|
@ -0,0 +1,88 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What I love about the newest GNOME desktop)
|
||||
[#]: via: (https://opensource.com/article/20/11/new-gnome)
|
||||
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
|
||||
|
||||
What I love about the newest GNOME desktop
|
||||
======
|
||||
Check out the top new features in the GNOME 3.38 desktop.
|
||||
![Digital images of a computer desktop][1]
|
||||
|
||||
Fedora 33 [just came out][2], and I installed it right away. Among the many features in this new version of the Linux distribution is the latest GNOME desktop. GNOME 3.38 was released in September 2020, and I'm loving it.
|
||||
|
||||
### Why I love GNOME 3.38
|
||||
|
||||
The [GNOME 3.38 release notes][3] list some great new features in this update. Among other things, the Welcome Tour for new users received a major facelift and is now much easier to use and provides more useful information if you're new to GNOME.
|
||||
|
||||
![The new "Welcome GNOME"][4]
|
||||
|
||||
([GNOME][5], [CC BY-SA 4.0][6])
|
||||
|
||||
I also love that I can drag to reorder application icons in the GNOME Application Overview. This makes it a breeze to organize the applications I use all the time under GNOME. You can even drag and drop icons together to automatically put them into folders.
|
||||
|
||||
![GNOME 3.38 Application Overview][7]
|
||||
|
||||
([GNOME][5], [CC BY-SA 4.0][6])
|
||||
|
||||
I have family in different time zones, and the updated GNOME Clocks makes it much easier to add new world clocks, so I don't have to figure out what time it is when I call a family member. Are they an hour ahead or an hour behind? I just check the GNOME clock, and I can see everyone's local times at a glance. And while I don't use the alarms feature very often, I like that I can set my own ring duration and default "snooze" time on each alarm.
|
||||
|
||||
![Adding a new world clock in GNOME Clocks][8]
|
||||
|
||||
([GNOME][5], [CC BY-SA 4.0][6])
|
||||
|
||||
Aside from all the feature updates, the biggest improvement in GNOME 3.38 is performance. As GNOME developer Emmanuele Bassi [explained earlier this year][9], there's been "lots of work by everyone in GNOME to make things faster, even for people running on more limited systems like the Raspberry Pi. There's been a lot of work to get GNOME to perform better … because people really care about it." And that shows in the new release! The GNOME desktop feels much more responsive.
|
||||
|
||||
![Applications running on GNOME 3.38][10]
|
||||
|
||||
([GNOME][5], [CC BY-SA 4.0][6])
|
||||
|
||||
As part of my consulting and training business, I regularly flip between several open applications, including LibreOffice, GIMP, Inkscape, a web browser, and others. Starting a new application or switching between open applications just feels faster in GNOME 3.38.
|
||||
|
||||
### Except one thing
|
||||
|
||||
If there's one thing I'm not fond of in the new version of GNOME, it's the redesigned Screenshot tool. I use this all the time to grab a portion of what's on the screen and insert it into my presentations and training documents.
|
||||
|
||||
![GNOME Screenshot tool][11]
|
||||
|
||||
(Jim Hall, [CC BY-SA 4.0][6])
|
||||
|
||||
When I read a user interface or computer screen, I naturally navigate as I would a book or magazine: left to right and top to bottom. When I make screenshots with the new Screenshot tool, I start in the upper-left and make my selections as I go. Most of the time, I only need to change the capture area for a selection, so I click that button and then look for the button that will take a screenshot. But it always takes me a moment to find the **Take Screenshot** button in the upper-left corner. It's not at the bottom of the window, where I expect to find it.
|
||||
|
||||
![GNOME Screenshot tool][12]
|
||||
|
||||
(Jim Hall, [CC BY-SA 4.0][6])
|
||||
|
||||
So far, that seems to be my only annoyance in GNOME 3.38. Overall, I'm very excited for the new GNOME. And I hope you are, too!
|
||||
|
||||
To learn more about GNOME 3.38, visit the [GNOME website][13] or read the [GNOME 3.38 announcement][5].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/11/new-gnome
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jim-hall
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_web_desktop.png?itok=Bw8ykZMA (Digital images of a computer desktop)
|
||||
[2]: https://fedoramagazine.org/announcing-fedora-33/
|
||||
[3]: https://help.gnome.org/misc/release-notes/3.38/
|
||||
[4]: https://opensource.com/sites/default/files/uploads/welcome-tour.png (The new "Welcome GNOME" )
|
||||
[5]: https://www.gnome.org/news/2020/09/gnome-3-38-released/
|
||||
[6]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[7]: https://opensource.com/sites/default/files/uploads/app-overview.png (GNOME 3.38 Application Overview)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/world-clocks.png (Adding a new world clock in GNOME Clocks)
|
||||
[9]: https://opensource.com/article/20/7/new-gnome-features
|
||||
[10]: https://opensource.com/sites/default/files/uploads/desktop-busy.png (Applications running on GNOME 3.38)
|
||||
[11]: https://opensource.com/sites/default/files/uploads/gnome-screenshot-tool.png (GNOME Screenshot tool)
|
||||
[12]: https://opensource.com/sites/default/files/uploads/screenshot-tool-path.png (GNOME Screenshot tool)
|
||||
[13]: https://www.gnome.org/
|
@ -1,90 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (chenmu-kk)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (When Wi-Fi is mission-critical, a mixed-channel architecture is the best option)
|
||||
[#]: via: (https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all)
|
||||
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
|
||||
|
||||
当Wi-Fi 成为关键业务时,混合信道架构是最好的多信道选择
|
||||
======
|
||||
|
||||
### 当Wi-Fi 成为关键业务时,对于如今的它来说,混合信道架构是最好的多信道选择,但它并不总是最佳的选择。当需要可靠的Wi-Fi时,单信道和混合Aps提供了令人信服的替代方案。
|
||||
|
||||
![Getty Images][1]
|
||||
|
||||
我曾与许多实施数字项目的公司合作,结果却发现它们失败了。正确的想法,健全地施行,现存的市场机遇。哪里是薄弱的环节?是Wi-Fi网络。
|
||||
|
||||
例如,一家大型医院希望通过将遥测信息发送到移动设备,来提高临床医生对患者警报的响应时间。如果没有这个系统,护士了解病人警报的唯一途径就是通过声音警报。在所有嘈杂的背景音中,通常很难分辨噪音来自哪里。问题是这家医院中的Wi-Fi网络已经很多年未升级了,这导致信息传递严重延迟(通常需要4~5分钟)。过长的信息传递导致人们对该系统失去信心,因此许多临床医生停止使用该系统,转而使用手动警报。最终,人们认为这个项目是失败的。
|
||||
|
||||
我曾在制造业、K-12教育、娱乐和其他行业中见过类似的案例。企业竞争的基础是客户体验,而竞争的动力来自不断扩展又无处不在的无线优势。好的 Wi-Fi并不意味着市场领导地位,但是劣质的Wi-Fi将会对客户和员工产生负面影响。而在当今竞争激烈的环境下,这是灾难的根源。
|
||||
|
||||
**[ Read also:[Wi-Fi site-survey tips: How to avoid interference, dead spots][2] ]**
|
||||
|
||||
## Wi-Fi性能历来不一致
|
||||
|
||||
Wi-Fi的问题在于它本身就很脆弱。我相信每个阅读这篇文章的人都经历过下载失败、连接中断、性能不一致以及连接公用热点的漫长等待时间等缺陷。
|
||||
|
||||
想象一下,你坐在一个会议上,在一个主题演讲之前,你可以随意地发推特、发电子邮件、浏览网页以及做其他事情。然后主讲人上台,所有观众开始拍照,上传并流传信息——然后网络崩溃了。我发现这不仅仅是一个例外,更是一种常态,强调了对[不妥协Wi-Fi][3]的需求。
|
||||
|
||||
对于网络技术人员的问题是如何到达一个Wi-Fi一直不间断保持100%的地方。有人说只要加强现存的网络可以做到,这也许可以,但在某些情况下,Wi-Fi的类型可能并不合适。
|
||||
|
||||
最常见的Wi-Fi部署类型是多信道,也称为微蜂窝,每个客户端通过无线信道连接到接入点(AP)。高质量的通话体验基于两点:良好的信号强度和最小的干扰。有几个因素会导致干扰,例如接入点太靠近,布局问题或者来自其他设备的干扰。为了最大程度地减少干扰,企业需要投入大量的时间和资金在 [现场调查中规划最佳的信道地图][2],但即使这些做得很好,Wi-Fi故障仍然可能发生。
|
||||
|
||||
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][4] ]**
|
||||
|
||||
## 多通道Wi-Fi并非总是最佳选择
|
||||
|
||||
对于许多铺着地毯的办公室来说,多通道Wi-Fi可能是可靠的,但在某些环境中,外部环境会影响性能。一个很好的例子是多租户建筑,其中有多个Wi-Fi网络在同一信道上传输并相互干扰。另一个例子是医院,这里有许多工作人员在多个接入点间流动。有并联Wi-Fi网络传输在相同通道并相互干扰。客户端将试图连接到最佳接入点,导致客户端不断断开连接并重新连接,从而导致会话中断。还有一些环境,例如学校、机场和会议设施,那里存在大量的瞬态设备,而多通道则难以跟上。
|
||||
|
||||
## 单通道Wi-Fi提供更好的可靠性但与此同时性能会受到影响
|
||||
|
||||
网络管理器要做什么?不一致的Wi-Fi只是一个既定事实吗?多信道是一种标准,但它并非是为动态物理环境或那些需要可靠的连接环境而设计的。
|
||||
|
||||
几年前提出了一项解决这些问题的替代架构。顾名思义,“单信道”Wi-Fi在网络中为所有接入点使用单一的无线频道。可以把它想象成在一个信道上运行的单个Wi-Fi结构。这种架构中,接入点的位置无关紧要,因为他们都利用相同的通道,因此不会互相干扰。这有一个显而易见的简化优势,比如,如果覆盖率很低,那就没有理由再做一次昂贵的现场调查。相反,只需在需要的地方布置接入点就可以了。
|
||||
|
||||
单通道的缺点之一是总网络吞吐量低于多通道,因为只能使用一个通道。在可靠性高于性能的环境中,这可能会很好,但许多组织希望二者兼而有之。
|
||||
|
||||
## 混合接入点提供了两全其美的优势
|
||||
|
||||
单信道系统制造商最近进行了创新,将信道架构混合在一起,创造了一种“两全其美”的部署,可提供多信道的吞吐量和单信道的可靠性。举个例子,安奈特提供了混合接入点,可以同时在多信道和单信道模式下运行。这意味着可以分配一些web客户端到多信道以获得最大的吞吐量,而其他的web客户端则可使用单信道来获得无缝漫游体验。
|
||||
|
||||
这种混合的实际用例可能是物流设施,办公室工作人员使用多通道,但叉车操作员在整个仓库移动时使用单一通道持续连接。
|
||||
|
||||
Wi-Fi曾是一个便利的网络,但如今他或许是所有网络中最关键的任务。传统的多信道体系也许可以工作,但应该做一些尽职调查来看看它在重负下如何运转。IT领导者需要了解Wi-Fi对数字转型计划的重要性,并进行适当的测试,以确保它不是基础设施链中的薄弱环节,并为当今环境选择最佳技术。
|
||||
|
||||
**综述:4个免费的开源网络监控工具**
|
||||
|
||||
* [Icinga:可扩展的企业级开源网络监控][5]
|
||||
* [Nagios Core: 包含大量插件、陡峭学习曲线的网络监控软件][6]
|
||||
* [Observium 开源网络监控工具: 无法在Windows系统上运行,但有着出色的用户界面][7]
|
||||
* [Zabbix 提供有效的网络监控][8]
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all
|
||||
|
||||
作者:[Zeus Kerravala][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[chenmu-kk](https://github.com/chenmu-kk)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/09/tablet_graph_wifi_analytics-100771638-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3315269/wi-fi-site-survey-tips-how-to-avoid-interference-dead-spots.html
|
||||
[3]: https://www.alliedtelesis.com/blog/no-compromise-wi-fi
|
||||
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
|
||||
[5]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb
|
||||
[6]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html
|
||||
[7]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb
|
||||
[8]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html
|
||||
[9]: https://www.facebook.com/NetworkWorld/
|
||||
[10]: https://www.linkedin.com/company/network-world
|