[#]: via: (https://opensource.com/article/19/4/bash-vs-python)
[#]: author: (Archit Modi Red Hat https://opensource.com/users/architmodi/users/greg-p/users/oz123)

Bash vs Python: Which should you use?
======

> Both programming languages have their pros and cons, and each beats the other at certain tasks.

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Gov’t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software)
[#]: via: (https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Gov’t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software
======

VPN packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies.

![Getty Images][1]

The Department of Homeland Security has issued a warning that some [VPN][2] packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies, giving nefarious actors an opening to invade and take control of an end user’s system.

The DHS’s Cybersecurity and Infrastructure Security Agency (CISA) [warning][3] comes on the heels of a notice from Carnegie Mellon's CERT that multiple VPN applications store authentication and/or session cookies insecurely in memory and/or log files.

**[Also see: [What to consider when deploying a next generation firewall][4]. Get regularly scheduled insights by [signing up for Network World newsletters][5]]**

“If an attacker has persistent access to a VPN user's endpoint or exfiltrates the cookie using other methods, they can replay the session and bypass other authentication methods,” [CERT wrote][6]. “An attacker would then have access to the same applications that the user does through their VPN session.”

According to the CERT warning, the following products and versions store the cookie insecurely in log files:

  * Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS ([CVE-2019-1573][7])
  * Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.

The following products and versions store the cookie insecurely in memory:

  * Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS.
  * Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.
  * Cisco AnyConnect 4.7.x and prior.

CERT says that Palo Alto Networks GlobalProtect version 4.1.1 [patches][8] this vulnerability.

In the CERT warning, F5 stated that it has been aware of the insecure memory storage since 2013 and that it has not yet been patched. More information can be found [here][9]. F5 also stated that it has been aware of the insecure log storage since 2017 and fixed it in versions 12.1.3 and 13.1.0 and onwards. More information can be found [here][10].

**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][11]]**

CERT said that, at the time of publishing, it was unaware of any patches for Cisco AnyConnect and Pulse Secure Connect Secure.

CERT credited the [National Defense ISAC Remote Access Working Group][12] for reporting the vulnerability.

Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all

Author: [Michael Cooney][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/broken-chain_metal_link_breach_security-100777433-large.jpg
[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
[3]: https://www.us-cert.gov/ncas/current-activity/2019/04/12/Vulnerability-Multiple-VPN-Applications
[4]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.kb.cert.org/vuls/id/192371/
[7]: https://nvd.nist.gov/vuln/detail/CVE-2019-1573
[8]: https://securityadvisories.paloaltonetworks.com/Home/Detail/146
[9]: https://support.f5.com/csp/article/K14969
[10]: https://support.f5.com/csp/article/K45432295
[11]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[12]: https://ndisac.org/workinggroups/
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Designing posters with Krita, Scribus, and Inkscape)
[#]: via: (https://opensource.com/article/19/4/design-posters)
[#]: author: (Raghavendra Kamath https://opensource.com/users/raghukamath/users/seilarashel/users/raghukamath/users/raghukamath/users/greg-p/users/raghukamath)

Designing posters with Krita, Scribus, and Inkscape
======

Graphic designers can do professional work with free and open source tools.

![Hand drawing out the word "code"][1]

A few months ago, I was asked to design some posters for a local [Free Software Foundation][2] (FSF) event. Richard M. Stallman was [visiting][3] our country, and my friend [Abhas Abhinav][4] wanted to put up some posters and banners to promote his visit. I designed two posters for RMS's talk in Bangalore.

I create my artwork with F/LOSS (free/libre open source software) tools. Although many artists successfully use free software to create artwork, I repeatedly encounter comments in discussion forums claiming that free software is not made for creative work. This article is my effort to detail the process I typically use to create my artwork and to spread awareness that one can do professional work with the help of F/LOSS tools.

### Sketching some concepts

After understanding Abhas' initial requirements, I sat down to visualize some concepts. I am not that great of a copywriter, so I started reading the FSF website to get some copy material. I needed to finish the project in two days' time, while simultaneously working on other projects. I started sketching some rough layouts. From five layouts, I liked three. I scanned them using [Skanlite][5]; although these sketches were very rough and would need proper layout and design, they were a good base for me to work from.

![Skanlite][6]

![Poster sketches][7]

![Poster sketch][8]

I had three concepts:

  * On the [FSF's website][2], I read about taking free software to new frontiers, which made me think about the idea of "conquering a summit." Free software work is also filled with adventures, in my opinion, and sometimes a task may seem like scaling a summit. So, I thought showing some mountaineers would resonate well.
  * I also wanted to ask people to donate to the FSF, so I sketched a hand giving a heart. I didn't feel any excitement in executing this idea; nevertheless, I kept it as a backup in case I fell short of time.
  * The FSF website has a hashtag for a donation program called #thankGNU, so I thought about using this as the basis of my design. Repurposing my hand visual, I replaced the heart with a bouquet of flowers that has a heart-shaped card saying #thankGNU!

I know these are somewhat quick and safe concepts, but given the little time I had for the project, I went ahead with them.

My design process mostly depends on the kind of look I need in the final image. I choose my software and process according to my needs. I may use one application from start to finish or combine various software packages to accomplish what I need. For this project, I used [Krita][9] and [Scribus][10], with some minimal use of [Inkscape][11].

### Krita: Making the illustrations

I imported my sketches into [Krita][12] and started adding more defined lines and shapes.

For the first image, which has some mountaineers climbing, I used [vector layers][13] in Krita to add basic shapes and then used [Alpha Inheritance][14], which is similar to what is called Clipping Masks in Photoshop, to add texture and gradients inside the shapes. This let me change the underlying base shape (in this case, the shape of the mountain in the first poster) at any time during the process. Krita also has a nice feature called the Reference Image tool, which lets you pin some references around your canvas (this helps a lot and saves many Alt+Tabs). Once I got the mountain the way I wanted, according to the layout, I started painting the mountaineers and added more details for the ice and other features. I like grungy brushes and brushes that have a texture akin to chalks and sponges. Krita has a wide range of brushes as well as a brush engine, which makes replicating a traditional medium easier. After about 3.5 hours of painting, this image was ready for further processing.

I wanted the second poster to have the feel of an old-style book illustration. So, I created the illustration with inked lines, somewhat similar to what we see in textbooks or novels. Inking in Krita is a real time saver; since it has stabilizer options, your wavy, hand-drawn lines will be smooth and crisp. I added a textured background and some minimal colors beneath the lines. It took me about three hours to do this illustration as well.

![Poster][15]

![Poster][16]

### Scribus: Adding layout and typography

Once my illustrations were ready, it was time to move on to the next part: adding text and other elements to the layout. For this, I used Scribus. Both Scribus and Krita have CMYK support. In both applications, you can soft-proof your artwork and make changes according to the color profile you get from the printer. I mostly do my work in RGB and then, if required, convert it to CMYK. Since most printers nowadays will do the color conversion, I don't think CMYK support is required; however, it's good to be able to work in CMYK with free software tools.

I use open source fonts for my design work unless a client has licensed a closed font for use. A good way to browse for suitable fonts is the [Google Fonts repository][17]. (I have the entire repository cloned.) Occasionally, I also browse fonts on [Font Library][18], as it also has a nice collection. I decided to use Montserrat by Julieta Ulanovsky for the posters. Placing text was very quick in Scribus; once you create a style, you can apply it to any number of paragraphs or titles. This helped me place text in both designs quickly, since I didn't have to re-create the text properties.

![Poster in Scribus][19]

I keep two layers in Scribus. One is for the illustrations, which are linked to the original files, so if I change an illustration, it will update in Scribus. The other is for text, and it's layered on top of the illustration layer.

### Inkscape: QR codes

I used Inkscape to generate a QR code that points to the Membership page on the FSF's website. To generate a QR code, go to **Extensions > Render > Barcode > QR Code** in Inkscape's menu. The logos are also vector; because Scribus supports vector images, you can directly paste things from Inkscape into Scribus. In a way, this helps in designing CMYK-based vector graphics.

![Final poster design][20]

![Final poster design][21]

With the designs ready, I exported them to layered PDF and sent them to Abhas for feedback. He asked me to add FSF India's logo, which I did and sent him a new PDF.

### Printing the posters

From here, Abhas took over the printing part of the process. His local printer in Bangalore printed the posters in A2 size. He was kind enough to send me some pictures of them. The prints came out well, considering I didn't even convert them to CMYK or do any color corrections or soft proofing, as I usually do when I get the color profile from my printer. My opinion is that 100% accurate CMYK printing is just a myth; there are too many factors to consider. If I really want perfect color reproduction, I leave this job to the printer, as they know their printer well and can do the conversion.

![Final poster design][22]

![Final poster design][23]

### Accessing the source files

When we discussed the requirements for these posters, Abhas told me to release the artwork under a Creative Commons license so others can re-use, modify, and share it. I am really glad he mentioned it. Anyone who wants to poke at the files can [download them from my Nextcloud drive][24]. If you have any improvements to make, please go ahead, and do remember to share your work with everybody.

Let me know what you think about this article by [emailing me][25].

* * *

_[This article][26] originally appeared on [Raghukamath.com][27] and is republished with the author's permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/design-posters

Author: [Raghavendra Kamath][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://opensource.com/users/raghukamath/users/seilarashel/users/raghukamath/users/raghukamath/users/greg-p/users/raghukamath
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code")
[2]: https://www.fsf.org/
[3]: https://rms-tour.gnu.org.in/
[4]: https://abhas.io/
[5]: https://kde.org/applications/graphics/skanlite/
[6]: https://opensource.com/sites/default/files/uploads/skanlite.png (Skanlite)
[7]: https://opensource.com/sites/default/files/uploads/sketch-01.png (Poster sketches)
[8]: https://opensource.com/sites/default/files/uploads/sketch-02.png (Poster sketch)
[9]: https://krita.org/
[10]: https://www.scribus.net/
[11]: https://inkscape.org/
[12]: /life/16/4/nick-hamilton-linuxfest-northwest-2016-krita
[13]: https://docs.krita.org/en/user_manual/vector_graphics.html#vector-graphics
[14]: https://docs.krita.org/en/tutorials/clipping_masks_and_alpha_inheritance.html
[15]: https://opensource.com/sites/default/files/uploads/poster-illo-01.jpg (Poster)
[16]: https://opensource.com/sites/default/files/uploads/poster-illo-02.jpg (Poster)
[17]: https://fonts.google.com/
[18]: https://fontlibrary.org/
[19]: https://opensource.com/sites/default/files/uploads/poster-in-scribus.png (Poster in Scribus)
[20]: https://opensource.com/sites/default/files/uploads/final-01.png (Final poster design)
[21]: https://opensource.com/sites/default/files/uploads/final-02.png (Final poster design)
[22]: https://opensource.com/sites/default/files/uploads/posters-in-action-01.jpg (Final poster design)
[23]: https://opensource.com/sites/default/files/uploads/posters-in-action-02.jpg (Final poster design)
[24]: https://box.raghukamath.com/cloud/index.php/s/97KPnTBP4QL4iCx
[25]: mailto:raghu@raghukamath.com?Subject=designing-posters-with-free-software
[26]: https://raghukamath.com/journal/designing-posters-with-free-software/
[27]: https://raghukamath.com/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How libraries are adopting open source)
[#]: via: (https://opensource.com/article/19/4/software-libraries)
[#]: author: (Don Watkins (Community Moderator) https://opensource.com/users/don-watkins)

How libraries are adopting open source
======

Over the past decade, ByWater Solutions has expanded its business by advocating for open source software.

![][1]

Four years ago, I [interviewed Nathan Currulla][2], co-founder of ByWater Solutions, a major services and solutions provider for [Koha][3], a popular open source integrated library system (ILS). Since then, I've benefitted directly from his company's work, as my local [Chautauqua–Cattaraugus Library System][4] in western New York migrated from a proprietary software system to a [ByWater Systems][5] Koha implementation.

When I learned that ByWater is celebrating its 10th anniversary in 2019, I decided to reach out to Nathan to learn how the company has grown over the last decade. (Our remarks have been edited slightly for grammar and clarity.)

**Don Watkins**: How has ByWater grown in the last 10 years?

**Nathan Currulla**: Over the last 10 years, ByWater has grown by leaps and bounds. By the end of 2009, we supported five libraries with five contracts. That number shot up to 117 libraries across 46 contracts by the end of 2010. We now support over 1,500 libraries and 450+ contracts. We also went from having two team members to 25 in the past 10 years. The service-focused processes we have developed for migrating new libraries have been adopted by other library companies, and we have become a real market disruptor, putting pressure on other companies to provide better support and lower software subscription fees for libraries using their products. This was our goal from the outset: to change the way libraries work with the technology companies that support them, whoever they may be.

Since the beginning, we have been rooted in the future, while legacy systems are still rooted in the past. Ten years ago, it was a real struggle for us to overcome the barriers presented by the fear of change in libraries and the outdated perceptions of open source in general. Now, although we still have to deal with change aversion, there are enough users to disprove any misinformation that exists regarding Koha and open source. The conversation is easier now than it ever was. That said, despite the fact that the ideals and morals held by open source are directly aligned with those of libraries, we still have a long way to go until open source technologies are the norm in this marketplace.

**DW**: What kinds of libraries do you support?

**NC**: Our partners are made up of a diverse set of library types. About 35% of our partners are public libraries, 35% are academic, and the remaining 30% are made up of museum, corporate, law, school, and other special library types. Because of Koha's flexibility and diverse feature set, we can successfully provide services to a variety of library types despite the current trend of consolidation in the library technology marketplace.

**DW**: How does ByWater work with and help the Koha community?

**NC**: We are working with the rest of the Koha community to streamline workflows and further improve the process of submitting and accepting new features into Koha. The vast majority of the community is made up of volunteers; by providing paid positions within the community, we can dedicate more time to the quality assurance and sign-off processes needed to stay competitive with other systems, both open source and proprietary. The number of new features submitted to the Koha community for each release is staggering. The more resources we have to get those features out to our users, the faster Koha can evolve and further shape the library-technology marketplace.

**DW**: When we talked in 2015, ByWater had recently partnered with library solutions provider [EBSCO][6]. What initiatives are you working on now with EBSCO?

**NC**: Originally, Catalyst IT of New Zealand worked with EBSCO to create the EBSCO Discovery Service (EDS) plugin that is used by many of our customers. Unlike most discovery systems, which sit on top of a library's online public access catalog (OPAC), Koha's integration with EDS uses the Koha OPAC as the frontend, with EDS feeding data into the Koha interface. This allows libraries to choose which interface they prefer (EDS or Koha as the frontend) and provides a unified library service platform (LSP). EBSCO has always been a great partner and has always shown a strong willingness to contribute to the open source initiative. They understand the importance of having fewer barriers between the ILS and the libraries' other content to provide a seamless interface to the end user.

Outside of Koha, ByWater is working closely with EBSCO to provide implementation, training, and support services for its [Folio LSP][7]. Folio is an open source LSP for academic libraries with the intent to provide even more seamless integration with other content providers using an extensible, open app marketplace. ByWater is developing a separate department for the implementation and ongoing support of Folio, with EBSCO providing hosting services to our mutual customers. The fact that EBSCO is investing millions in the creation of an open source platform lends further credence to the importance and validity of open source technologies in the library market.

**DW**: What other projects are you supporting? How do they complement Koha?

**NC**: ByWater also supports Libki, an open source, web-based kiosk and print management solution; Coral, an open source electronic resource management (ERM) solution; and Folio. Libki and Coral seamlessly integrate with Koha to provide a unified LSP. Folio may work in cooperation with Koha on some functionality, but it is too early to tell what that will specifically look like.

ByWater also offers Koha Klassmates, a program that provides free installations of Koha to over 40 library schools in the US to familiarize the next generation of librarians with open source and the tools they will use daily in the workforce. We are also rolling out a program called Koha University, which will mentor computer science students in writing and submitting code to Koha, one of the largest open source projects in the world. This will give them experience working in such an environment and provide the opportunity for their names to be listed as official Koha contributors.

**DW**: What is ByWater's strategic focus over the next five years?

**NC**: ByWater will continue offering top-rated support to our ever-growing customer base while leveraging new open source opportunities to disprove misinformation surrounding the use of open source solutions in libraries. We will focus on making open source the norm and educating libraries that could be taking advantage of these technologies but do not because of outdated information and perceptions.

Additionally, our research and development efforts will be focused on analyzing machine learning for advanced education and support services. We also want to work closely with our partners on advancing the marketing efforts (through software) for small and large libraries to help cement their roles as community centers by marketing inventory, programs, and library events. We want to be community builders on different levels, both for our partner libraries and for the open source communities that we are involved in.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/software-libraries

Author: [Don Watkins (Community Moderator)][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_opencardcatalog.png?itok=f9PyJEe-
[2]: https://opensource.com/business/15/5/bywater-solutions-empowering-library-tech
[3]: http://www.koha.org/
[4]: https://catalog.cclsny.org/
[5]: https://bywatersolutions.com/
[6]: https://www.ebsco.com/
[7]: https://www.ebsco.com/products/ebsco-folio-library-services

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Joe Doss: How Do You Fedora?)
[#]: via: (https://fedoramagazine.org/joe-doss-how-do-you-fedora/)
[#]: author: (Charles Profitt https://fedoramagazine.org/author/cprofitt/)

Joe Doss: How Do You Fedora?
======

![Joe Doss][1]

We recently interviewed Joe Doss on how he uses Fedora. This is part of a [series][2] on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the [feedback form][3] to express your interest in becoming an interviewee.

### Who is Joe Doss?

Joe Doss lives in Chicago, Illinois, USA, and his favorite food is pizza. He is the Director of Engineering Operations at Kenna Security, Inc. Doss describes his employer this way: “Kenna uses data science to help enterprises combine their infrastructure and application vulnerability data with exploit intelligence to measure risk, predict attacks and prioritize remediation.”

His first Linux distribution was Red Hat Linux 5. A friend of his showed him a computer that wasn’t running Windows. Doss thought it was just a program to install on Windows when his friend gave him a Red Hat Linux 5 install disk. “I proceeded to install this Linux ‘program’ on my Father’s PC,” he says. Luckily for Doss, his father supported his interest in computers. “I ended up totally wiping out the Windows 95 install as a result and this was how I got my first computer.”

At Kenna, Doss’ group makes use of Fedora and [Ansible][4]: “We run Fedora Cloud in multiple VPC deployments in AWS and Google Compute with over 200 virtual machines. We use Ansible to automate everything we do with Fedora.”

Doss brews beer at home and contributes to open source in his free time. He also has a cat named Tibby. “I rescued Tibby off the street in the Hyde Park neighborhood of Chicago when she was 7 months old. She is not very smart, but she makes up for that with cuteness.” His favorite place to visit is his childhood home of Michigan, but Doss says, “anywhere with a warm beach, a cool drink, and the ocean is pretty nice too.”

![Tibby the cute cat!][5]

### The Fedora community

Doss became involved with Fedora and the Fedora community through his job at Kenna Security. When he first joined the company, it was using Ubuntu and Chef in production. There was a desire to make the infrastructure more reproducible and reliable, and he says, “I was able to greenfield our deployments with Fedora Cloud and Ansible.” This project got him involved in the Fedora Cloud release.

When asked about his first impression of the Fedora community, Doss said, “Overwhelming to be honest. There is so much going on and it is hard to figure out who are the stakeholders of each part of Fedora.” Once he figured out who he needed to talk to, he found the community very welcoming and super supportive.

One of his ideas for improving the community was to unite the various projects and teams under one bug-tracking tool and community resource. “Pagure, Bugzilla, Github, Fedora Forums, Discourse Forums, Mailing lists… it is all over the place and hard to navigate at first.” Despite the initial complexity of becoming familiar with the Fedora Project, Doss feels it is amazingly rewarding to be involved. “It feels awesome to be a part of a Linux distro that impacts so many people in very positive ways. You can make a difference.”

Doss called out Dusty Mabe at Red Hat for helping him become involved, saying Dusty “has been an amazing mentor and resource for enabling me to contribute back to Fedora.”

Doss has an interesting way of explaining to non-technical friends what he does. “Imagine changing the tires on a very large bus while it is going down the highway at 70 MPH and sometimes you need to get involved with the tire manufacturer to help make this process work well.” This metaphor helps people understand what replacing 200-plus VMs across more than five production VPCs in AWS and Google Compute with every Fedora release is like.

Doss drew my attention to one specific incident with Fedora 29 and Vagrant. “Recently we encountered an issue where Vagrant wouldn’t set the hostname on a fresh Fedora 29 Beta VM. This was due to Fedora 29 Cloud no longer shipping the network service stub in favor of NetworkManager. This led to me working with a colleague at Kenna Security to send a patch upstream to the Vagrant project to help their developers produce a fix for Fedora 29. Vagrant usage with Fedora is a very large part of our development cycle at Kenna, and having this broken before the Fedora 29 release would have impacted us a lot.” As Doss said, “Sometimes you need to help make the tires before they go on the bus.”

Doss is the [COPR][6] Fedora, RHEL, and CentOS package maintainer for [WireGuard VPN][7]. “The CentOS repo just went over 60 thousand downloads last month which is pretty awesome.”

### What Hardware?

Doss uses Fedora 29 Cloud in over five VPC deployments in AWS and Google Compute. At home he has a SuperMicro SYS-5019A-FTN4 1U server that runs Fedora 29 Server with OpenShift OKD installed on it. His laptops are all Lenovo. “For laptops I use a ThinkPad T460s for work and a ThinkPad 25 at home. Both have Fedora 29 installed. ThinkPads are the best with Fedora.”

### What Software?

Doss uses GNOME 3 as his preferred desktop on Fedora Workstation. “I use Sublime Text 3 for my text editor on the desktop or vim on servers.” For development and testing he uses Vagrant. “Ansible is what I use for any kind of automation with Fedora. I maintain an [Ansible playbook][8] for setting up my workstation.”

### Ansible

I asked Doss if he had advice for people trying to learn Ansible.

“Start small. Automate the stuff that makes your life easier, but don’t overcomplicate it. [Ansible Galaxy][9] is a great resource to get things done quickly, but if you truly want to learn how to use Ansible, writing your own roles and playbooks is the path I would take.

“I have helped a lot of my coworkers who have joined my Operations team at Kenna get up to speed on using Ansible by buying them a copy of [Ansible for DevOps][10] by Jeff Geerling. This book will give anyone new to Ansible the foundation they need to start using it every day. #ansible on Freenode is a great resource as well, along with the [official Ansible docs][11].”

Doss also said, “Knowing what to automate is most likely the most difficult thing to master without overcomplicating things. Debugging complex playbooks and roles is a close second.”

### Home lab

He recommended setting up a home lab. “At Kenna and at home I use [Vagrant][12] with the [Vagrant-libvirt plugin][13] for developing Ansible roles and playbooks. You can iterate quickly to build your roles and playbooks on your laptop with your favorite editor and run _vagrant provision_ to run your playbook. The quick feedback loop and the ability to burn down your Vagrant VM and start over quickly make for an amazing workflow. Below is a sample Vagrantfile that I keep handy to spin up a Fedora VM to test my playbooks.”

```
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.provision "shell", inline: "dnf install nfs-utils rpcbind @development-tools @ansible-node redhat-rpm-config gcc-c++ -y"
  config.ssh.forward_agent = true

  config.vm.define "f29", autostart: false do |f29|
    f29.vm.box = "fedora/29-cloud-base"
    f29.vm.hostname = "f29.example.com"
    f29.vm.provider "libvirt" do |vm|
      vm.memory = 2048
      vm.cpus = 2
      vm.driver = "kvm"
      vm.nic_model_type = "e1000"
    end
    config.vm.synced_folder '.', '/vagrant', disabled: true

    config.vm.provision "ansible" do |ansible|
      ansible.groups = {
      }
      ansible.playbook = "playbooks/main.yml"
      ansible.inventory_path = "inventory/development"
      ansible.extra_vars = {
        ansible_python_interpreter: "/usr/bin/python3"
      }
      # ansible.verbose = 'vvv'
    end
  end
end
```
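
As a rough sketch of the feedback loop described above (my illustration, not from the original article), assuming the Vagrantfile sits in the current directory and the vagrant-libvirt plugin is installed, the day-to-day cycle looks something like this:

```
# Create the Fedora 29 VM defined above; autostart is false, so name it explicitly
vagrant up f29 --provider=libvirt

# Re-run the Ansible provisioner against the running VM while iterating on a role
vagrant provision f29

# Burn the VM down to start over from a clean slate
vagrant destroy -f f29
```
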
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/joe-doss-how-do-you-fedora/

Author: [Charles Profitt][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://fedoramagazine.org/author/cprofitt/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/IMG_20181029_121944-816x345.jpg
[2]: https://fedoramagazine.org/tag/how-do-you-fedora/
[3]: https://fedoramagazine.org/submit-an-idea-or-tip/
[4]: https://ansible.com
[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/IMG_20181231_110920_fixed.jpg
[6]: https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/
[7]: https://www.wireguard.com/install/
[8]: https://github.com/jdoss/fedora-workstation
[9]: https://galaxy.ansible.com/
[10]: https://www.ansiblefordevops.com/
[11]: https://docs.ansible.com/ansible/latest/index.html
[12]: http://www.vagrantup.com/
[13]: https://github.com/vagrant-libvirt/vagrant-libvirt

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 2)
[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-2)
[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)

Linux Server Hardening Using Idempotency with Ansible: Part 2
======

![][1]

[Creative Commons Zero][2]

In the first part of this series, we introduced something called idempotency, which can provide ongoing improvements to your server estate’s security posture. In this article, we’ll get a little more hands-on, with a look at some specific Ansible examples.

### Shopping List

You will need some Ansible experience before being able to make use of the information that follows. Rather than running through the installation and operation of Ansible, let’s instead look at some of the idempotency playbook’s content.

As mentioned earlier, there might be hundreds of individual system tweaks to make on just one type of host, so we’ll only explore a few suggested Ansible tasks and how I like to structure the Ansible role responsible for compliance and hardening. You have hopefully picked up on the fact that the devil is in the detail, and you should absolutely, unequivocally understand, to as high a level of detail as possible, the permutations of making changes to your server OS.

Be aware that I will mix and match between OSes in the Ansible examples that follow. Many examples are OS agnostic, but as ever you should pay close attention to the detail. Obvious changes, like “apt” to “yum” for the package manager, are a given.

Inside a “tasks” file under our Ansible “hardening” role, or whatever you decide to name it, the following named tasks represent the areas of a system, with some example code to offer food for thought. In other words, each section that follows will probably be a single YAML file, such as “accounts.yml”, and each will vary in length and complexity.

Let’s look at some examples, with ideas about what should go into each file, to get you started. The contents of each file that follow are just the very beginning of a checklist, and the suggestions are far from exhaustive.

#### SSH Server

This is the application that almost all engineers immediately look to harden when asked to secure a server. It makes sense, as SSH (the OpenSSH package in many cases) is usually one of only a few ports intentionally prised open and of course allows direct access to the command line. The level of hardening that you should adopt is debatable. I believe in tightening the daemon as much as possible without disruption and would usually make around fifteen changes to the standard OpenSSH server config file, “sshd_config”. These changes would include pulling in a MOTD (Message Of The Day) banner for legal compliance (warning of unauthorised access and prosecution), enforcing the permissions on the main SSHD files (so they can’t be tampered with by lesser-privileged users), ensuring the “root” user can’t log in directly, setting an idle session timeout, and so on.
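
As a quick, hedged illustration (mine, not the author’s playbook), one way to confirm such changes have taken effect is to ask the daemon itself for its effective settings; the keywords below are standard sshd_config options mentioned above:

```
# Dump sshd's effective configuration (requires root) and spot-check a few
# of the settings discussed above; keys are printed in lowercase
sudo sshd -T | grep -Ei 'permitrootlogin|banner|clientaliveinterval|clientalivecountmax'
```
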
Here’s a very simple Ansible example that you can repeat within other YAML files later on, focusing on enforcing file permissions on our main, critical OpenSSH server config file. Note that you should carefully check every single file whose permissions you hard-reset before doing so, because there are horrifyingly subtle differences between Linux distributions. Believe me when I say that it’s worth checking first.

```
- name: Hard reset permissions on sshd server file
  file: owner=root group=root mode=0600 path=/etc/ssh/sshd_config
```

To check existing file permissions, I prefer this natty little command for the job:

```
$ stat -c "%a %n" /etc/ssh/sshd_config

644 /etc/ssh/sshd_config
```

As our “stat” command shows, our Ansible snippet would be an improvement on the current permissions, because 0600 means only the “root” user can read and write that file. Other users and groups can’t even read the file, which is a benefit: if we’ve made any mistakes in securing SSH’s config, they can’t be discovered as easily by less-privileged users.

#### System Accounts

At a simple level, this file might define how many users should be on a standard server. Usually, a number of admin users have home directories with public keys copied into them. However, this file might also include simple checks that the root user is the only system user with the all-powerful superuser UID 0, in case an attacker has altered user accounts on the system, for example.
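
For instance, a minimal shell sketch of that UID 0 check (my illustration, not the author’s role) might look like:

```
# List every account with UID 0; on a healthy system this prints only "root"
awk -F: '$3 == 0 {print $1}' /etc/passwd

# Quick census of accounts with home directories under /home
awk -F: '$6 ~ /^\/home\// {print $1, $6}' /etc/passwd
```
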

#### Kernel

Here’s a file that can grow arms and legs. Typically, I might make between fifteen and twenty sysctl changes on an OS, changes which I’m satisfied won’t be disruptive to current and, all going well, any future uses of a system. These changes are again at your discretion, and, at my last count (there are between five hundred and a thousand configurable kernel options using sysctl on a Debian/Ubuntu box), you might opt to split these many changes up into different categories.

Such categories might include network stack tuning, stopping core dumps from filling up disk space, disabling IPv6 entirely, and so on. Here’s an Ansible example of logging network packets that shouldn’t be routed out onto the Internet, namely packets using spoofed private IP addresses, called “martians”.

```
- name: Keep track of traffic that shouldn’t be routed onto the Internet
  lineinfile: dest="/etc/sysctl.conf" line="{{ item.network }}" state=present
  with_items:
    - { network: 'net.ipv4.conf.all.log_martians = 1' }
    - { network: 'net.ipv4.conf.default.log_martians = 1' }
```

Pay close attention: you probably don’t want to use the file “/etc/sysctl.conf” itself, but rather create a custom file under the directory “/etc/sysctl.d/” or similar. Again, check your OS’s preference, usually in the comments of the pertinent files. If you’ve not had martian-packet logging enabled before, then type “dmesg” (sometimes only as the “root” user) to view kernel messages; after a week or two of logging being in place, you’ll probably see some traffic polluting your logs. It’s much better to know how attackers are probing your servers than not. A few log entries for reference can only be of value. When it comes to looking after servers, ignorance is certainly not bliss.
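
Hand-run, that drop-in advice might look like the following sketch (the “80-hardening.conf” filename is just an example of mine, not a standard):

```
# Keep custom settings in a drop-in file rather than editing /etc/sysctl.conf
printf '%s\n' \
  'net.ipv4.conf.all.log_martians = 1' \
  'net.ipv4.conf.default.log_martians = 1' | sudo tee /etc/sysctl.d/80-hardening.conf

# Apply all configured sysctl files without a reboot, then confirm the value
sudo sysctl --system
sysctl net.ipv4.conf.all.log_martians
```
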

#### Network

As mentioned, you might want to include hardening of the network stack within your kernel.yml file, depending on how many entries there are, or simply for greater clarity. For your network.yml file, think about stopping old-school broadcast attacks from flooding your LAN and, in addition, ICMP oddities from changing your routing.

#### Services

Usually, I would stop or start miscellaneous system services (and potentially applications) within this Ansible file. If there weren’t many services, then rather than also using a “cron.yml” file specifically for “cron” hardening, I’d include those tasks here too.

There’s a bundle of changes you can make around cron’s file permissions and the like. If you haven’t come across it, on some OSes there’s a “cron.deny” file, for example, which blacklists certain users from accessing the “crontab” command. Additionally, you also have a multitude of cron directories under the “/etc” directory which need their permissions enforced and improved, along with the file “/etc/crontab” itself. Once again, check your OS’s current settings before altering these, or “bad things”™ might happen to your uptime.
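
Before enforcing anything via Ansible, a quick manual survey along these lines can help (an illustrative sketch of mine; paths vary by distribution):

```
# Review current permissions on the main cron entry points before hard-resetting them
stat -c "%a %n" /etc/crontab /etc/cron.d /etc/cron.daily /etc/cron.hourly

# On systems that honour cron.allow, listing only root restricts the
# "crontab" command to the root user
echo root | sudo tee /etc/cron.allow
```
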

In terms of miscellaneous services being purposefully stopped, and certain services, such as system logging, which are imperative to a healthy and secure system, have a quick look at the Ansible below, which I might put in place for syslog as an example.

```
- name: Insist syslog is definitely installed (so we can receive upstream logs)
  apt: name=rsyslog state=present

- name: Make sure that syslog starts after a reboot
  service: name=rsyslog state=started enabled=yes
```

#### IPtables

The venerable Netfilter, which from within the Linux kernel offers the IPtables software firewall the ability to filter network packets in an exceptionally sophisticated manner, is a must if you can enable it sensibly. If you’re confident that each of your varying flavours of servers (whether webserver, database server, and so on) can use the same IPtables config, then copy a file onto the filesystem via Ansible and make sure it’s always loaded up using this YAML file.
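
A minimal sketch of that “copy and always load” idea, assuming a ruleset file already shipped by Ansible (the /etc/iptables/rules.v4 path is a common Debian-family convention, not a universal one):

```
# Load the previously shipped ruleset into the running kernel
sudo iptables-restore < /etc/iptables/rules.v4

# After any manual changes, write the running ruleset back to the file
sudo iptables-save | sudo tee /etc/iptables/rules.v4
```
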

Next time, we’ll wrap up our look at specific system suggestions and talk a little more about how the playbook might be used.

Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers, and Linux security on his website: [https://www.devsecops.cc][3]

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-2

Author: [Chris Binnie][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://www.linux.com/users/chrisbinnie
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/artificial-intelligence-3382507_1280.jpg?itok=PHazitpd
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.devsecops.cc/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's your primary backup strategy for the /home directory in Linux?)
[#]: via: (https://opensource.com/poll/19/4/backup-strategy-home-directory-linux)
[#]: author: ( https://opensource.com/users/dboth/users/don-watkins/users/greg-p)

What's your primary backup strategy for the /home directory in Linux?
======

![Linux keys on the keyboard for a desktop computer][1]

I frequently upgrade to newer releases of Fedora, which is my primary distribution. I also upgrade other distros, but much less frequently. I have also had many crashes of various types over the years, including a large portion of self-inflicted ones. Past experience with data loss has made me very aware of the need for good backups.

I back up many parts of my Linux hosts, but my **/home** directory is especially important. Losing any of the data in **/home** on my primary workstation due to a crash or an upgrade could be disastrous.

My backup strategy for **/home** is to back up everything every day. There are other things to back up on every Linux system, but **/home** is the center of everything I do on my workstation. I keep my documents and financial records there, as well as offline emails, address books for different apps, calendar and task data, and, most importantly for me these days, the working copies of my next two Linux books.

I can think of a number of approaches to backing up and restoring **/home** that would allow easy and complete recovery after a data loss, ranging from a single file to the entire directory. Which approach do you take? Which tools do you use?

--------------------------------------------------------------------------------

via: https://opensource.com/poll/19/4/backup-strategy-home-directory-linux

Author: [][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://opensource.com/users/dboth/users/don-watkins/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Zip Files and Folders in Linux [Beginner Tip])
[#]: via: (https://itsfoss.com/linux-zip-folder/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

How to Zip Files and Folders in Linux [Beginner Tip]
======

_**Brief: This quick tip shows you how to create a zip folder in Ubuntu and other Linux distributions. Both terminal and GUI methods are discussed.**_

Zip is one of the most popular archive file formats out there. With zip, you can compress multiple files into one file. This not only saves disk space, it also saves network bandwidth. This is why you’ll encounter zip files almost all the time.

As a normal user, you’ll mostly unzip folders in Linux. But how do you zip a folder in Linux? This article helps you answer that question.

**Prerequisite: Verify that zip is installed**

Normally, [zip][1] support is installed, but there’s no harm in verifying. You can run the command below to install zip and unzip support. If it’s not installed already, it will be installed now.

```
sudo apt install zip unzip
```

Now that you know your system has zip support, read on to learn how to zip a directory in Linux.

![][2]

### Zip a folder in Linux Command Line

The syntax for using the zip command is pretty straightforward.

```
zip [option] output_file_name input1 input2
```

While there are several options, I don’t want to confuse you with them. If your only aim is to create a zip file from a bunch of files and directories, use the command like this:

```
zip -r output_file.zip file1 folder1
```

The -r option will recurse into directories and compress their contents as well. The .zip extension in the output file name is optional, as .zip is added by default.

You should see the files being added to the compressed folder during the zip operation.

```
zip -r myzip abhi-1.txt abhi-2.txt sample_directory
adding: abhi-1.txt (stored 0%)
adding: abhi-2.txt (stored 0%)
adding: sample_directory/ (stored 0%)
adding: sample_directory/newfile.txt (stored 0%)
adding: sample_directory/agatha.txt (deflated 41%)
```
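
To double-check what went into the archive, you can list its contents without extracting anything:

```
# List the files inside the archive created above
unzip -l myzip.zip
```
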

You can use the -e option to [create a password-protected zip folder in Linux][3].
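
For example, the following prompts for a password and encrypts the entries as they are added (the file and folder names are placeholders):

```
# -r recurses into folders, -e asks for a password and encrypts the archive
zip -re secure.zip file1 folder1
```
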

You are not always restricted to the terminal for creating zip archive files. You can do that graphically as well. Here’s how!

### Zip a folder in Ubuntu Linux Using GUI

_Though I have used Ubuntu here, the method should be pretty much the same in other distributions using GNOME or another desktop environment._

If you want to compress a file or folder in desktop Linux, it’s just a matter of a few clicks.

Go to the folder that holds the files (and folders) you want to compress into one zip folder.

Here, select the files and folders. Now right-click and select Compress. You can do the same for a single file as well.

![Select the files, right click and click compress][4]

Now you can create a compressed archive file in zip, tar xz, or 7z format. In case you are wondering, all three are different compression formats you can use for compressing your files.

Give it the name you desire and click Create.

![Create archive file][5]

It shouldn’t take long, and you should see an archive file in the same directory.

![][6]

Well, that’s it. You have successfully created a zip folder in Linux.

I hope this quick little tip helped you with zip files. Please feel free to share your suggestions.

--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-zip-folder/

Author: [Abhishek Prakash][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Zip_(file_format)
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-folder-linux.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/password-protect-zip-file/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-file-ubuntu.jpg?resize=800%2C428&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-folder-ubuntu-1.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-file-created-in-ubuntu.png?resize=800%2C277&ssl=1

[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The Fargate Illusion)
|
||||
[#]: via: (https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html)
|
||||
[#]: author: (Lee Briggs https://leebriggs.co.uk/)
|
||||
|
||||
The Fargate Illusion
|
||||
======
|
||||
|
||||
I’ve been building a Kubernetes based platform at $work now for almost a year, and I’ve become a bit of a Kubernetes apologist. It’s true, I think the technology is fantastic. I am however under no illusions about how difficult it is to operate and maintain. I read posts like [this][1] one earlier in the year and found myself nodding along to certain aspects of the opinion. If I was in a smaller company, with 10/15 engineers, I’d be horrified if someone suggested managing and maintaining a fleet of Kubernetes clusters. The operational overhead is just too high.
|
||||
|
||||
Despite my love for all things Kubernetes at this point, I do remain curious about the notion that “serverless” computing will kill the ops engineer. The main source of intrigue here is the desire to stay gainfully employed in the future - if we aren’t going to need OPS engineers in our glorious future, I’d like to see what all the fuss is about. I’ve done some experimentation in Lamdba and Google Cloud Functions and been impressed by what I saw, but I still firmly believe that serverless solutions only solve a percentage of the problem.
|
||||
|
||||
I’ve had my eye on [AWS Fargate][2] for some time now and it’s something that developers at $work have been gleefully pointed at as “serverless computing” - mainly because with Fargate, you can run your Docker container without having to manage the underlying nodes. I wanted to see what that actually meant - so I set about trying to get an app running on Fargate from scratch. I defined the succes criteria here as something close-ish to a “production ready” application, so I wanted to have the following:
|
||||
|
||||
  * A running container on Fargate
  * With configuration pushed down in the form of environment variables
  * “Secrets” should not be in plaintext
  * Behind a loadbalancer
  * TLS enabled with a valid SSL certificate

I approached this whole task from an infrastructure as code mentality, and instead of following the default AWS console wizards, I used terraform to define the infrastructure. It’s very possible this overcomplicated things, but I wanted to make sure any deployment was repeatable and discoverable to anyone else wanting to follow along.

All of the above criteria are generally achievable with a Kubernetes based platform using a few external add-ons and plugins, so I’m admittedly approaching this whole task with a comparative mentality - because I’m comparing it with my common workflow. My main goal was to see how easy this was with Fargate, especially when compared with Kubernetes. I was pretty surprised with the outcome.

### AWS has overhead

I had a clean AWS account and was determined to go from zero to a deployed webapp. Like any other infrastructure in AWS, I had to get the baseline infrastructure working - so I first had to define a VPC.

I wanted to follow the best practices, so I carved the VPC up into subnets across availability zones, with a public and a private subnet. It occurred to me at this point that as long as this need was always there, I’d probably be able to find a job of some description. The notion that AWS is operationally “free” is something that has irked me for quite some time now. Many people in the developer community take for granted how much work and effort there is in setting up and defining a well designed AWS account and infrastructure. This is _before_ we even start talking about a multi-account architecture - I’m still in a single account here and I’m already having to define infrastructure and traditional network items.

It’s also worth remembering here, I’ve done this quite a few times now, so I _knew_ exactly what to do. I could have used the default VPC in my account, and the pre-provided subnets, which I expect many people who are getting started might do. This took me about half an hour to get running, but I couldn’t help but think here that even if I want to run lambda functions, I still need some kind of connectivity and networking. Defining NAT gateways and routing in a VPC doesn’t feel very serverless at all, but it has to be done to get things moving.

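To give a flavour of what that baseline looks like, here’s a minimal sketch of the VPC definition using the community VPC terraform module - the module version, CIDR ranges and availability zones here are placeholders of mine, not values from the original setup:

```
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "v1.60.0"

  name = "${var.name}"
  cidr = "10.0.0.0/16"

  # One public and one private subnet per availability zone
  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # NAT gateways so containers in the private subnets can reach out
  enable_nat_gateway = true
}
```
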
### Run my damn container

Once I had the base infrastructure up and running, I now wanted to get my docker container running. I started examining the Fargate docs and browsed through the [Getting Started][3] docs and something immediately popped out at me:

> [Image: the Getting Started steps][4]

Hold on a minute, there’s at least THREE steps here just to get my container up and running? This isn’t quite how this whole thing was sold to me, but let’s get started.

#### Task Definitions

A task definition defines the actual container you want to run. The problem I ran into immediately here is that this thing is insanely complicated. Lots of the options here are very straightforward, like specifying the docker image and memory limits, but I also had to define a networking model and a variety of other options that I wasn’t really familiar with. Really? If I had come into this process with absolutely no AWS knowledge I’d be incredibly overwhelmed at this stage. A full list of the [parameters][5] can be found on the AWS page, and the list is long. I knew my container needed to have some environment variables, and it needed to expose a port. So I defined that first, with the help of a fantastic [terraform module][6] which really made this easier. If I didn’t have this, I’d be hand-writing JSON to define my container definition.

First, I defined some environment variables:

```
container_environment_variables = [
  {
    name  = "USER"
    value = "${var.user}"
  },
  {
    name  = "PASSWORD"
    value = "${var.password}"
  }
]
```

Then I compiled the task definition using the module I mentioned above:

```
module "container_definition_app" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "v0.7.0"

  container_name  = "${var.name}"
  container_image = "${var.image}"

  container_cpu                = "${var.ecs_task_cpu}"
  container_memory             = "${var.ecs_task_memory}"
  container_memory_reservation = "${var.container_memory_reservation}"

  port_mappings = [
    {
      containerPort = "${var.app_port}"
      hostPort      = "${var.app_port}"
      protocol      = "tcp"
    },
  ]

  environment = "${local.container_environment_variables}"
}
```

I was pretty confused at this point - I need to define a lot of configuration here to get this running and I’ve barely even started, but it made a little sense - anything running a docker container needs to have _some_ idea of the configuration values of the docker container. I’ve [previously written][7] about the problems with Kubernetes and configuration management and the same problem seemed to be rearing its ugly head again here.

Next, I defined the task definition from the module above (which thankfully abstracted the required JSON away from me - if I had to hand-write JSON at this point I’d have probably given up).

I realised immediately I was missing something as I was defining the module parameters. I need an IAM role as well! Okay, let me define that:

```
resource "aws_iam_role" "ecs_task_execution" {
  name = "${var.name}-ecs_task_execution"

  assume_role_policy = <<EOF
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  count = "${length(var.policies_arn)}"

  role       = "${aws_iam_role.ecs_task_execution.id}"
  policy_arn = "${element(var.policies_arn, count.index)}"
}
```

That makes sense, I’d need to define an RBAC policy in Kubernetes, so I’m still not exactly losing or gaining anything here. I am starting to think at this point that this feels very familiar from a Kubernetes perspective.

```
resource "aws_ecs_task_definition" "app" {
  family                   = "${var.name}"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "${var.ecs_task_cpu}"
  memory                   = "${var.ecs_task_memory}"
  execution_role_arn       = "${aws_iam_role.ecs_task_execution.arn}"
  task_role_arn            = "${aws_iam_role.ecs_task_execution.arn}"

  container_definitions = "${module.container_definition_app.json}"
}
```

At this point, I’ve written quite a few lines of code to get this running, read a lot of ECS documentation, and all I’ve done is define a task definition. I still haven’t got this thing running yet. I’m really confused at this point about what the value add is here over a Kubernetes based platform, but I continued onwards.

#### Services

A service is partly how you expose the container to the world, and partly how you define how many replicas it has. My first thought was “Ah! This is like a Kubernetes service!” and I set about mapping the ports and the like. Here was my first run at the terraform:

```
resource "aws_ecs_service" "app" {
  name            = "${var.name}"
  cluster         = "${module.ecs.this_ecs_cluster_id}"
  task_definition = "${data.aws_ecs_task_definition.app.family}:${max(aws_ecs_task_definition.app.revision, data.aws_ecs_task_definition.app.revision)}"
  desired_count   = "${var.ecs_service_desired_count}"
  launch_type     = "FARGATE"

  deployment_maximum_percent         = "${var.ecs_service_deployment_maximum_percent}"
  deployment_minimum_healthy_percent = "${var.ecs_service_deployment_minimum_healthy_percent}"

  network_configuration {
    subnets         = ["${values(local.private_subnets)}"]
    security_groups = ["${module.app.this_security_group_id}"]
  }
}
```

I again got frustrated when I had to define the security group for this that allowed access to the ports needed, but I did so and plugged that into the network configuration. Then I got a smack in the face.

I need to define my own loadbalancer?

What?

Surely not?

##### LoadBalancers Never Go Away

I was honestly kind of floored by this, I’m not even sure why. I’ve gotten so used to Kubernetes services and ingress objects that I completely took for granted how easy it is to get my application on the web with Kubernetes. Of course, we’ve spent months building a platform to make this easier at $work. I’m a heavy user of [external-dns][8] and [cert-manager][9] to automate populating DNS entries on ingress objects and automating TLS certificates and I am very aware of the work needed to get these set up, but I honestly thought it would be easier to do this on Fargate. I recognise that Fargate isn’t claiming to be the be-all and end-all of how to run applications - it’s just abstracting away the node management - but I have been consistently told this is _easier_ than Kubernetes. I really was surprised. Defining a LoadBalancer (even if you don’t want to use Ingresses and Ingress controllers) is part and parcel of deploying a service to Kubernetes, and I had to do the same thing again here. It just all felt so familiar.

I now realised I needed:

  * A loadbalancer
  * A TLS certificate
  * A DNS entry

So I set about making those. I made use of some popular terraform modules, and came up with this:

```
# Define a wildcard cert for my app
module "acm" {
  source  = "terraform-aws-modules/acm/aws"
  version = "v1.1.0"

  create_certificate = true

  domain_name = "${var.route53_zone_name}"
  zone_id     = "${data.aws_route53_zone.this.id}"

  subject_alternative_names = [
    "*.${var.route53_zone_name}",
  ]

  tags = "${local.tags}"
}

# Define my loadbalancer
resource "aws_lb" "main" {
  name            = "${var.name}"
  subnets         = ["${values(local.public_subnets)}"]
  security_groups = ["${module.alb_https_sg.this_security_group_id}", "${module.alb_http_sg.this_security_group_id}"]
}

resource "aws_lb_target_group" "main" {
  name        = "${var.name}"
  port        = "${var.app_port}"
  protocol    = "HTTP"
  vpc_id      = "${local.vpc_id}"
  target_type = "ip"
  depends_on  = ["aws_lb.main"]
}

# Forward all traffic from the ALB to the target group
resource "aws_lb_listener" "main" {
  load_balancer_arn = "${aws_lb.main.id}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    target_group_arn = "${aws_lb_target_group.main.id}"
    type             = "forward"
  }
}

resource "aws_lb_listener" "main-tls" {
  load_balancer_arn = "${aws_lb.main.id}"
  port              = "443"
  protocol          = "HTTPS"
  certificate_arn   = "${module.acm.this_acm_certificate_arn}"

  default_action {
    target_group_arn = "${aws_lb_target_group.main.id}"
    type             = "forward"
  }
}
```

I’ll be completely honest here - I screwed this up several times. I had to fish around in the AWS console to figure out what I’d done wrong. It certainly wasn’t an “easy” process - and I’ve done this before - many times. Honestly, at this point, Kubernetes looked positively _enticing_ to me, but I realised it was because I was very familiar with it. If I was lucky enough to be using a managed Kubernetes platform (with external-dns and cert-manager preinstalled) I’d really wonder what value add I was missing from Fargate. It just really didn’t feel that easy.

After a bit of back and forth, I now had a working ECS service. The final definition, including the service, looked a bit like this:

```
data "aws_ecs_task_definition" "app" {
  task_definition = "${var.name}"
  depends_on      = ["aws_ecs_task_definition.app"]
}

resource "aws_ecs_service" "app" {
  name            = "${var.name}"
  cluster         = "${module.ecs.this_ecs_cluster_id}"
  task_definition = "${data.aws_ecs_task_definition.app.family}:${max(aws_ecs_task_definition.app.revision, data.aws_ecs_task_definition.app.revision)}"
  desired_count   = "${var.ecs_service_desired_count}"
  launch_type     = "FARGATE"

  deployment_maximum_percent         = "${var.ecs_service_deployment_maximum_percent}"
  deployment_minimum_healthy_percent = "${var.ecs_service_deployment_minimum_healthy_percent}"

  network_configuration {
    subnets         = ["${values(local.private_subnets)}"]
    security_groups = ["${module.app_sg.this_security_group_id}"]
  }

  load_balancer {
    target_group_arn = "${aws_lb_target_group.main.id}"
    container_name   = "app"
    container_port   = "${var.app_port}"
  }

  depends_on = [
    "aws_lb_listener.main",
  ]
}
```

I felt like it was close at this point, but then I remembered I’d only done 2 of the required 3 steps from the original “Getting Started” document - I still needed to define the ECS cluster.

#### Clusters

Thanks to a very well-defined [module][10], defining the cluster to run all this on was actually very easy.

```
module "ecs" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "v1.1.0"

  name = "${var.name}"
}
```

What surprised me the _most_ here is why I had to define a cluster at all. As someone reasonably familiar with ECS it makes some sense you’d need a cluster, but I tried to consider this from the point of view of someone having to go through this process as a complete newcomer - it seems surprising to me that Fargate is billed as “serverless” but you still need to define a cluster. It’s a small detail, but it really stuck in my mind.

### Tell me your secrets

At this stage of the process, I was fairly happy I managed to get something running. There was however something missing from my original criteria. If we go all the way back to the task definition, you’ll remember my app has an environment variable for the password:

```
container_environment_variables = [
  {
    name  = "USER"
    value = "${var.user}"
  },
  {
    name  = "PASSWORD"
    value = "${var.password}"
  }
]
```

If I looked at my task definition in the AWS console, my password was there, staring at me in plaintext. I wanted this to end, so I set about trying to move this into something else, similar to [Kubernetes secrets][11].

#### AWS SSM

The way Fargate/ECS does the secret management portion is to use [AWS SSM][12] (the full name for this service is AWS Systems Manager Parameter Store, but I refuse to use that name because quite frankly it’s stupid).

The AWS documentation [covers this][13] fairly well, so I set about converting this to terraform.

##### Specifying the Secret

First, you have to define a parameter and give it a name. In terraform, it looks like this:

```
resource "aws_ssm_parameter" "app_password" {
  name  = "${var.app_password_param_name}" # The name of the value in AWS SSM
  type  = "SecureString"
  value = "${var.app_password}" # The actual value of the password, like correct-horse-battery-staple
}
```

Obviously the key component here is the “SecureString” type. This uses the default AWS KMS key to encrypt the data, something that was not immediately obvious to me. This has a huge advantage over Kubernetes secrets, which aren’t encrypted in etcd by default.

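As a sanity check, the stored value can be read back with the AWS CLI - a hedged example, where the parameter name is just a placeholder for whatever `var.app_password_param_name` resolved to:

```
# Without --with-decryption, the SecureString value comes back encrypted
aws ssm get-parameter --name "/myapp/password" --with-decryption
```
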
Then I specified another local value map for ECS, and passed that as a secret parameter:

```
container_secrets = [
  {
    name      = "PASSWORD"
    valueFrom = "${var.app_password_param_name}"
  },
]

module "container_definition_app" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "v0.7.0"

  container_name  = "${var.name}"
  container_image = "${var.image}"

  container_cpu                = "${var.ecs_task_cpu}"
  container_memory             = "${var.ecs_task_memory}"
  container_memory_reservation = "${var.container_memory_reservation}"

  port_mappings = [
    {
      containerPort = "${var.app_port}"
      hostPort      = "${var.app_port}"
      protocol      = "tcp"
    },
  ]

  environment = "${local.container_environment_variables}"
  secrets     = "${local.container_secrets}"
}
```

##### A problem arises

At this point, I redeployed my task definition, and was very confused. Why isn’t the task rolling out properly? I kept seeing in the console that the running app was still using the previous task definition (version 7) when the new task definition (version 8) was available. This took me way longer than it should have to figure out, but in the events screen on the console, I noticed an IAM error. I had missed a step, and the container couldn’t read the secret from AWS SSM, because it didn’t have the correct IAM permissions. This was the first time I got genuinely frustrated with this whole thing. The feedback here was _terrible_ from a user experience perspective. If I hadn’t known any better, I would have figured everything was fine, because there was still a task running, and my app was still available via the correct URL - I was just getting the old config.

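The post doesn’t spell out the exact fix, but the missing piece is a policy on the task execution role that allows reading the parameter - something along these lines, a sketch reusing the resource names from the earlier snippets:

```
resource "aws_iam_role_policy" "ecs_task_read_ssm" {
  name = "${var.name}-read-ssm"
  role = "${aws_iam_role.ecs_task_execution.id}"

  # Allow the execution role to fetch the SSM parameter at task start
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameters"],
      "Resource": ["${aws_ssm_parameter.app_password.arn}"]
    }
  ]
}
EOF
}
```
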
In a Kubernetes world, I would have clearly seen an error in the pod definition. It’s absolutely fantastic that Fargate makes sure my app doesn’t go down, but as an operator I need some actual feedback as to what’s happening. This really wasn’t good enough. I genuinely hope someone from the Fargate team reads this and tries to improve this experience.

### That’s a wrap?

This was the end of the road - my app was running and I’d met all my criteria. I did realise that I had some improvements to make, which included:

  * Defining a cloudwatch log group, so I could write logs correctly (see the sketch after this list)
  * Adding a route53 hosted zone to make the whole thing a little easier to automate from a DNS perspective
  * Fixing and rescoping the IAM permissions, which were very broad at this point

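For the first item, the log group itself is a one-liner - a sketch under my own assumptions (the name and retention period are my choices, not from the post), with the container definition then pointing at it via the `awslogs` log driver:

```
resource "aws_cloudwatch_log_group" "app" {
  # The container definition's logConfiguration would reference this
  # group using the "awslogs" log driver.
  name              = "/ecs/${var.name}"
  retention_in_days = 30
}
```
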
But honestly at this point, I wanted to reflect on the experience. I threw out a [twitter thread][14] about my experience and then spent the rest of the time thinking about what I really felt here.

### Table Stakes

What I realised, after an evening of reflection, was that this process is largely the same whether you’re using Fargate or Kubernetes. What surprised me the most was that despite the regular claims I’ve heard that Fargate is “easier” I really just couldn’t see any benefits over a Kubernetes based platform. Now, if you’re in a world where you’re building Kubernetes clusters I can absolutely see the value here - managing nodes and the control plane is just overhead you don’t really need. The problem is - most consumers of a Kubernetes based platform don’t _have_ to do this. If you’re lucky enough to be using GKE, you barely even need to think about the management of the cluster, you can run a cluster with a single gcloud command nowadays. I regularly use Digital Ocean’s managed Kubernetes service and I can safely say that it was as easy as spinning up a Fargate cluster - in fact in some ways it was easier.

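For context, that single command is roughly this - the cluster name, zone and node count are placeholders of mine:

```
# Creates a managed GKE cluster; the control plane is handled for you
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 3
```
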
Having to define some infrastructure to run your container is table stakes at this point. Google may have just changed the game this week with their [Google Cloud Run][15] product, but they’re massively ahead of everyone else in this field.

What I think can be safely said from this whole experience though is this: _Running containers at scale is still hard_. It requires thought, it requires domain knowledge, it requires collaboration between Operations and Developers. It also requires a foundation to build on - any AWS based operation is going to need to have some fundamental infrastructure defined and running. I’m very intrigued by the “NoOps” concept that some companies seem to aspire for. I guess if you’re running a stateless application, and you can put it all inside a lambda function and an API gateway you’re probably in a good position, but are we really close to this in any kind of enterprise environment? I really don’t think so.

#### Fair Comparisons

Another realisation that struck me is that the comparisons between technology A and technology B often aren’t really fair, and I see this very often with AWS. The reality of the situation is often very different from the Jeff Barr blogpost. If you’re a small enough company that you can deploy your application in AWS using the AWS console and select all of the defaults, this absolutely is easier. However, I didn’t want to use the defaults, because the defaults are almost always not production ready. Once you start to peel back the layers of cloud provider services, you begin to realise that at the end of the day - you’re still running software. It still needs to be designed well, deployed well and operated well. I believe that the value add of AWS and Kubernetes and all the other cloud providers is that they make it much, much easier to run, design and operate things well, but it is definitely not free.

#### Arguing for Kubernetes

My final takeaway here is this: if you view Kubernetes purely as a container orchestration tool, you’re probably going to love Fargate. However, as I’ve become more familiar with Kubernetes, I’ve come to appreciate just how important it is as a technology - not just because it’s a great container orchestration tool but also because of its design patterns - it’s a declarative, API driven platform. A simple thought that occurred to me during _all_ of this Fargate process was that if I deleted any of this stuff, Fargate isn’t necessarily going to recreate it for me. Autoscaling is nice, not having to manage servers and patching and OS updates is awesome, but I felt I’d lost so much by not being able to use Kubernetes’ self-healing and API driven model. Sure, Kubernetes has a learning curve - but from this experience, so does Fargate.

### Summary

Despite my confusion during some of this process, I really did enjoy the experience. I still believe Fargate is a fantastic technology, and what the AWS team has done with ECS/Fargate really is nothing short of remarkable. My perspective however is that this is definitely not “easier” than Kubernetes, it’s just… different.

The problems that arise when running containers in production are largely the same. If you take anything away from this post it should be this: _whichever way you choose is going to have operational overhead_. Don’t fall into the trap of believing that you can just pick something and your world is going to be easier. My personal opinion is this: if you have an operations team and your company is going to be deploying containers across multiple app teams - pick a technology and build processes and tooling around it to make it easier.

I’m certainly going to take claims that a certain technology is easier with a grain of salt from now on. At this stage, when it comes to Fargate, this sums up my feelings:

> [Image][16]

--------------------------------------------------------------------------------

via: https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html

作者:[Lee Briggs][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://leebriggs.co.uk/
[b]: https://github.com/lujun9972
[1]: https://matthias-endler.de/2019/maybe-you-dont-need-kubernetes/
[2]: https://aws.amazon.com/fargate/
[3]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html
[4]: https://imgur.com/FpU0lds
[5]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
[6]: https://github.com/cloudposse/terraform-aws-ecs-container-definition
[7]: https://leebriggs.co.uk/blog/2018/05/08/kubernetes-config-mgmt.html
[8]: https://github.com/kubernetes-incubator/external-dns
[9]: https://github.com/jetstack/cert-manager
[10]: https://github.com/terraform-aws-modules/terraform-aws-ecs
[11]: https://kubernetes.io/docs/concepts/configuration/secret/
[12]: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
[13]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
[14]: https://twitter.com/briggsl/status/1116870900719030272
[15]: https://cloud.google.com/run/
[16]: https://imgur.com/QfFg225

@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Working with Microsoft Exchange from your Linux Desktop)
[#]: via: (https://itsfoss.com/microsoft-exchange-linux-desktop/)
[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)

Working with Microsoft Exchange from your Linux Desktop
======

Recently I had to do some research (and even some magic) to be able to work with my current employer’s Exchange mail server on my Ubuntu desktop. I am going to share my experience with you.

### Microsoft Exchange on Linux desktop

I guess many readers might feel confused. I mean, it shouldn’t be that hard if you simply use [Thunderbird][1] or any other [Linux email client][2] with your Office365 Exchange account, right? Well, for better or for worse, it was not that simple in my case.

Here’s my ordeal and what I did to make Microsoft Exchange work on my Linux desktop.

![][3]

#### The initial problem, no Office365

The first problem in my situation was that we don’t currently use Office365, like probably the majority of people do for hosting their Exchange accounts. We currently use an on-premises Exchange server, and a very old version at that.

So, this means I didn’t have the luxury of using the automatic configuration that comes with the majority of email clients to simply connect to Office365.

#### Webmail is always an option… right?

The short answer is yes. However, as I mentioned, we are using Exchange 2010, so the webmail interface is not only outdated, it won’t even allow you to have a decent email signature, as the webmail configuration has a character limit. So I needed to use an email client if I really wanted to use email the way I needed.

#### Another problem: I am picky about my email client

I am a regular Google user. I have been using GMail for the past 14 years as my personal email, so I really like how it looks and works. I actually use the webmail as I don’t like to be tied to my email client or even my computer device; if something happens and I need to switch to a newer device I don’t want to have to copy things over, I just want things to be there waiting for me to use them.

This led me to not liking the Thunderbird, K-9 or Evolution mail clients. All of these are capable of being connected to Exchange servers (one way or the other) but again, they don’t meet the standard of a clean, easy and modern GUI I wanted, plus they couldn’t even manage my Exchange calendar well (which was a real deal breaker for me).

#### Found some options as email clients!

After some more research I found there were a couple of email clients that I could use and that would actually work the way I expected.

These were: [Hiri][4], which had a very modern and innovative user interface and had Exchange Server capabilities, and [Mailspring][5], which is a fork of an old foe ([Nylas Mail][6]) and which was my real favorite.

However, Mailspring couldn’t connect directly to an Exchange server (using Exchange’s protocol) unless you use Office365; it required [IMAP][7] (another luxury!) and the IT department at my office was reluctant to activate IMAP for “security reasons”.

Hiri is a good option but it’s not free.

#### No IMAP, no Office365, game over? Not yet!

I have to confess, I was really ready to give up and simply use the old webmail and learn to live with it. However, I gave my research capabilities one last shot and found a possible solution: what if I had a way to put a “man in the middle”? What if I was able to make IMAP run locally on my computer while the computer itself pulled the emails via the Exchange protocol? It was a long shot, but it could work…

So I started looking here and there and found [DavMail][8], which works as a gateway to “talk” to an Exchange server and then locally provide you whatever you need in order to use it. Basically it acts as a “translator” between my computer and the Exchange server, providing me with whatever service I need.

![DavMail Settings][9]

So basically I only had to give DavMail my Exchange server’s URL (even the OWA URL) and set whatever ports I wanted on my local computer to be the new ports where my email client could connect.

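For reference, this kind of setup lives in DavMail’s `davmail.properties` file. Here is a minimal sketch with placeholder values - the OWA URL and port numbers below are examples of mine, not my office’s actual settings:

```
# davmail.properties - gateway between local clients and Exchange
davmail.url=https://owa.example.com/owa
# Local ports the email client connects to instead of Exchange
davmail.imapPort=1143
davmail.smtpPort=1025
davmail.caldavPort=1080
davmail.ldapPort=1389
```
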
This way I was free to use basically ANY client I wanted; at least any client capable of using the IMAP protocol would work, as long as I configured it with the same ports I set up as my local ports.

![Mailspring working with my office’s on-premises Exchange. Information has been blurred due to a non-disclosure agreement at my office.][10]

And that was it! I was able to use Mailspring (my preferred email client) under my unfavorable conditions.

#### Bonus point: this is a multi-platform solution!

What’s best is that this solution will work on any platform! So if you have the same problem while using Windows or macOS, DavMail has a version for all tastes!

![avatar][11]

### Helder Martins

Systems Engineer, technology evangelist, Ubuntu user, Linux enthusiast, father and husband.

--------------------------------------------------------------------------------

via: https://itsfoss.com/microsoft-exchange-linux-desktop/

作者:[It's FOSS Community][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://www.thunderbird.net/en-US/
[2]: https://itsfoss.com/best-email-clients-linux/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/microsoft-exchange-linux-desktop.png?resize=800%2C450&ssl=1
[4]: https://www.hiri.com/
[5]: https://getmailspring.com/
[6]: https://itsfoss.com/n1-open-source-email-client/
[7]: https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol
[8]: http://davmail.sourceforge.net/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/davmail-exchange-settings.png?resize=800%2C597&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/davmail-exchange-settings-1.jpg?ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/helder-martins-1.jpeg?ssl=1

@ -0,0 +1,292 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux?)
[#]: via: (https://www.2daygeek.com/enable-disable-up-down-nic-network-interface-port-linux-using-ifconfig-ifdown-ifup-ip-nmcli-nmtui/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux?
======

You may need to run these commands based on your requirements, and I can give you a few examples of where you would need them.

When you add a new network interface, or when you create a virtual network interface from an original physical interface, you may need to run one of these commands to bring the new interface up.

Also, if you made any changes to an interface, or if it’s down, then you need to run one of the below commands to bring it up.

This can be done in many ways; we have picked the best five methods for this article, listed below.

  * **`ifconfig Command:`** The ifconfig command is used to configure a network interface. It provides a lot of information about the NIC.
  * **`ifdown/up Command:`** The ifdown command takes a network interface down and the ifup command brings a network interface up.
  * **`ip Command:`** The ip command is used to manage the NIC. It’s a replacement for the old and deprecated ifconfig command. It’s similar to the ifconfig command but has many powerful features which aren’t available in the ifconfig command.
  * **`nmcli Command:`** nmcli is a command-line tool for controlling NetworkManager and reporting network status.
  * **`nmtui Command:`** nmtui is a curses-based TUI application for interacting with NetworkManager.

The below output shows the available network interface card (NIC) information on my Linux system.

```
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
       valid_lft 86049sec preferred_lft 86049sec
    inet6 fe80::3899:270f:ae38:b433/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:30:5d:52 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.3/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s8
       valid_lft 86049sec preferred_lft 86049sec
    inet6 fe80::32b7:8727:bdf2:2f3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
```

### 1) How To Bring UP And Bring Down A Network Interface In Linux Using ifconfig Command?

The ifconfig command is used to configure a network interface.

It is used at boot time to set up interfaces as necessary. It provides a lot of information about the NIC. We can use the ifconfig command when we need to make any changes on the NIC.

Common Syntax for ifconfig:

```
# ifconfig [NIC_NAME] down/up
```

Run the following command to bring down the `enp0s3` interface in Linux. Make a note, you have to input your interface name instead of ours.

```
# ifconfig enp0s3 down
```

Yes, the given interface is down now as per the following output.

```
# ip a | grep -A 1 "enp0s3:"
2: enp0s3: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
```

Run the following command to bring up the `enp0s3` interface in Linux.

```
# ifconfig enp0s3 up
```

Yes, the given interface is up now as per the following output.

```
# ip a | grep -A 5 "enp0s3:"
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
       valid_lft 86294sec preferred_lft 86294sec
    inet6 fe80::3899:270f:ae38:b433/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
```

### 2) How To Enable And Disable A Network Interface In Linux Using ifdown/up Command?

The ifdown command takes a network interface down and the ifup command brings a network interface up.

**Note:** It doesn’t work with new interface device names like `enpXXX`.

Common Syntax for ifdown/ifup:

```
# ifdown [NIC_NAME]

# ifup [NIC_NAME]
```

Run the following command to bring down the `eth1` interface in Linux.

```
# ifdown eth1
```

Yes, the given interface is down now as per the following output.

```
# ip a | grep -A 3 "eth1:"
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 08:00:27:d5:a0:18 brd ff:ff:ff:ff:ff:ff
```

Run the following command to bring up the `eth1` interface in Linux.

```
# ifup eth1
```

Yes, the given interface is up now as per the following output.

```
# ip a | grep -A 5 "eth1:"
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:d5:a0:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.7/24 brd 192.168.1.255 scope global eth1
    inet6 fe80::a00:27ff:fed5:a018/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
```

ifup and ifdown don’t support the latest interface device names of the form `enpXXX`. I got the below message when I ran the command.

```
# ifdown enp0s8
Unknown interface enp0s8
```

### 3) How To Bring UP/Bring Down A Network Interface In Linux Using ip Command?

The ip command is used to manage the Network Interface Card (NIC). It’s a replacement for the old and deprecated ifconfig command on modern Linux systems.

It’s similar to the ifconfig command but has many powerful features which aren’t available in the ifconfig command.

Common Syntax for ip:

```
# ip link set [NIC_NAME] down/up
```

Run the following command to bring down the `enp0s3` interface in Linux.

```
# ip link set enp0s3 down
```

Yes, the given interface is down now as per the following output.

```
# ip a | grep -A 1 "enp0s3:"
2: enp0s3: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
```

Run the following command to bring up the `enp0s3` interface in Linux.

```
# ip link set enp0s3 up
```

Yes, the given interface is up now as per the following output.

```
# ip a | grep -A 5 "enp0s3:"
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
       valid_lft 86294sec preferred_lft 86294sec
    inet6 fe80::3899:270f:ae38:b433/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
```

### 4) How To Enable And Disable A Network Interface In Linux Using nmcli Command?

nmcli is a command-line tool for controlling NetworkManager and reporting network status.

It can be utilized as a replacement for nm-applet or other graphical clients. nmcli is used to create, display, edit, delete, activate, and deactivate network connections, as well as control and display network device status.

Run the following command to identify the interface name, because the nmcli command performs most of its tasks using the `profile name` instead of the `device name`.

```
# nmcli con show
NAME                UUID                                  TYPE      DEVICE
Wired connection 1  3d5afa0a-419a-3d1a-93e6-889ce9c6a18c  ethernet  enp0s3
Wired connection 2  a22154b7-4cc4-3756-9d8d-da5a4318e146  ethernet  enp0s8
```

Common Syntax for nmcli:

```
# nmcli con down/up [PROFILE_NAME]
```

Run the following command to bring down the `enp0s3` interface in Linux. You have to give the `profile name` instead of the `device name` to bring it down.

```
# nmcli con down 'Wired connection 1'
Connection 'Wired connection 1' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
```

Yes, the given interface is down now as per the following output.

```
# nmcli dev status
DEVICE  TYPE      STATE         CONNECTION
enp0s8  ethernet  connected     Wired connection 2
enp0s3  ethernet  disconnected  --
lo      loopback  unmanaged     --
```

Run the following command to bring up the `enp0s3` interface in Linux. You have to give the `profile name` instead of the `device name` to bring it up.

```
# nmcli con up 'Wired connection 1'
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
```

Yes, the given interface is up now as per the following output.

```
# nmcli dev status
DEVICE  TYPE      STATE      CONNECTION
enp0s8  ethernet  connected  Wired connection 2
enp0s3  ethernet  connected  Wired connection 1
lo      loopback  unmanaged  --
```

### 5) How To Bring UP/Bring Down A Network Interface In Linux Using nmtui Command?

nmtui is a curses-based TUI application for interacting with NetworkManager.

When starting nmtui, the user is prompted to choose the activity to perform unless it was specified as the first argument.

Run the following command to launch the nmtui interface. Select “Activate a connection” and hit “OK”.

```
# nmtui
```

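As noted above, the activity can also be passed as the first argument, which skips the menu. For example, the following (a usage sketch) jumps straight to the connection activation screen:

```
# nmtui connect
```
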
[![][1]![][1]][2]

Select the interface which you want to bring down, then hit the “Deactivate” button.

[![][1]![][1]][3]

For activation, follow the same procedure as above.

[![][1]![][1]][4]

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/enable-disable-up-down-nic-network-interface-port-linux-using-ifconfig-ifdown-ifup-ip-nmcli-nmtui/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.2daygeek.com/wp-content/uploads/2019/04/enable-disable-up-down-nic-network-interface-port-linux-nmtui-1.png
[3]: https://www.2daygeek.com/wp-content/uploads/2019/04/enable-disable-up-down-nic-network-interface-port-linux-nmtui-2.png
[4]: https://www.2daygeek.com/wp-content/uploads/2019/04/enable-disable-up-down-nic-network-interface-port-linux-nmtui-3.png