mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-02-03 23:40:14 +08:00

Merge remote-tracking branch 'LCTT/master'

commit 0b0a9a27b8

@@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cray to license Fujitsu Arm processor for supercomputers)
[#]: via: (https://www.networkworld.com/article/3453341/cray-to-license-fujitsu-arm-processor-for-supercomputers.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Cray to license Fujitsu Arm processor for supercomputers
======

HPE's Cray will co-develop Fujitsu's A64FX CPU to meet the requirements of likely customers such as universities and national research laboratories.

Riken Advanced Institute for Computational Science

Cray says it will be the first supercomputer vendor to license Fujitsu's A64FX Arm-based processor with high-bandwidth memory (HBM) for exascale computing.

Under the agreement, Cray – now a part of HPE – is developing the first commercial supercomputer powered by the A64FX processor, with initial customers being the usual suspects in HPC: Los Alamos National Laboratory, Oak Ridge National Laboratory, RIKEN, Stony Brook University, and the University of Bristol.

As part of this new partnership, Cray and Fujitsu will explore engineering collaboration, co-development, and a joint go-to-market strategy to meet customer demand in the supercomputing space. Cray will also bring its Cray Programming Environment (CPE) for Arm processors over to the A64FX so that applications can be optimized to take full advantage of SVE and HBM2.

The A64FX was announced last year as the processor for Fujitsu's next supercomputer, known as [Post-K][3]. The K supercomputer is a massive system at Japan's RIKEN Center for Computational Science, based on the Sparc architecture. Fujitsu held a Sparc license from Sun Microsystems and made its own chips for the Japanese market.

A64FX is the first CPU to adopt the [Scalable Vector Extension][4] (SVE), an extension of the Armv8-A instruction set architecture for supercomputers. SVE focuses on parallel processing to run applications faster.

The A64FX also uses HBM2, which delivers much greater memory performance than DDR4, the prevailing memory standard in servers. The A64FX has a maximum theoretical memory bandwidth greater than 1 terabyte per second (TB/s).

Fujitsu claims the A64FX will offer peak double-precision (64-bit) floating-point performance of over 2.7 teraflops. That pales in comparison to the 100 TFlops of an Nvidia Tesla V100, but the A64FX draws 160 watts vs. 300 watts for the Tesla.

However, there is more going on. The 32GB of on-chip HBM2 and high-speed interconnects make for a much faster chip internally, and in early tests Fujitsu is claiming a 2.5-times performance improvement over the Sparc XIIfx chips used in the K computer.

The Cray supercomputer powered by the Fujitsu A64FX will be available through Cray to customers in mid-2020.

**Now see** [**10 of the world's fastest supercomputers**][5]

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3453341/cray-to-license-fujitsu-arm-processor-for-supercomputers.html

Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar (Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html
[4]: http://www.datacenterdynamics.com/content-tracks/servers-storage/arm-boosts-supercomputing-potential-with-long-vector-support/96823.fullarticle
[5]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,73 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hiring a technical writer in the age of DevOps)
[#]: via: (https://opensource.com/article/19/11/hiring-technical-writers-devops)
[#]: author: (Will Kelly https://opensource.com/users/willkelly)

Hiring a technical writer in the age of DevOps
======

As organizations mature their DevOps practices, it's time to make the technical writer a bigger part of the team.

![Women talking][1]

It's common for enterprises to leave the technical writer's role out of the DevOps discussion. Even the marketing department [joins the discussion][2] in some DevOps-first organizations—so why not the writers?

Our industry doesn't ask enough of its technical writers. Documentation is an afterthought. Companies farm out technical writing to contractors at the end of the project lifecycle. Corners get cut. Likewise, technical writers don't ask enough of their industry. Expectations for the role vary from company to company. Both circumstances lead to technical writers being left out of the DevOps discussion.

As your organization matures its DevOps practices, it's time to revisit the role of your technical writer.

### Recast your technical writer for DevOps

I remember one of the first agile projects I ever worked on, back when I was still writing technical documentation. One of the other writers on the team had a hard time grasping that we had to write about a product that wasn't 100% complete. Those days are gone. Thank you, DevOps and agile.

It's time for organizations to revisit how they hire technical writers. Throw out your waterfall technical writer job description. Right. Now. I've always divided technical writers into operations technical writers, who document infrastructure, and software development technical writers, who document software. Writers flit back and forth between the two, but finding a technical writer with a grounding in both software development and operations is especially helpful when staffing a technical writer position on a DevOps team.

DevOps means you may need to change your standard "corporate technical writer" job description. For instance, weigh software and operations documentation experience more heavily than before, because that flexibility will only help your team. The same goes for writers with experience creating modular online documentation using tools such as TWiki or Atlassian Confluence.

DevOps also requires technical writers who can be full participants. Writers can no longer add value if they expect features to be complete before they get involved. Look for writers with experience driving documentation efforts. Your goal should be to find a technical writer who can work with fewer dependencies and who you can plug into various parts of your delivery cycle. This can be easier said than done when it comes to hiring. The only advice I can give is to color outside the lines of the traditional technical writer role and be prepared to pay for it.

Another skill to seek in a technical writer for your DevOps team is collaboration platform management. Adding that duty to your technical writer job description takes a non-critical task off of a developer's to-do list.

The DevOps technical writer should take the same onboarding path as the developers and other project team members you bring on board. Give them access to the systems they need to document, in a sandbox running the same builds as everybody else.

Measuring the technical writer's success in a DevOps world takes on some new shades of meaning. You can tie your online documentation to analytics to track readership. You also need to track the technical writer's work the same way you track developers' work. Documentation has bugs, too.

### Retool your documentation process

Technical documentation has to take a more velocity-driven, toolchain-oriented approach to keep pace with DevOps. There have been some stops and starts in rethinking documentation publishing for a high-velocity DevOps world.

A movement called [DocOps][3], out of CA (now part of Broadcom), brought together technical documentation and DevOps practices, but the original team behind the concept appears to have moved on. The effort fizzled out, but I still recommend researching it online. You will find the [CA.com][4] documentation subsite in your search results. It runs on Atlassian Confluence with CA branding and other customizations, and it has been migrated from the more traditional documentation formats of WebHelp and PDF in order to decouple the documentation from the applications it supports. While the development team and documentation team still have to be in sync for releases, they aren't as dependent on each other for maintenance and updates.

[Content-as-Code][5] also holds value for your move to a more DevOps-friendly documentation strategy. It applies software engineering practices to support content reuse, breaking away from the traditional content management system (CMS) model: Git handles content versioning, and technical writers and other content authors write in Markdown. As a development model, Content-as-Code supports static site generators such as [Jekyll][6] and [Hugo][7], with interoperability with CMSes.

Whatever direction you choose for publishing your documentation, it's important to do the upfront work. Start with a small proof of concept. Experiment with tools and workflows. Involve your development team in the initial process to get their feedback on the new publishing model you are building. Make sure to document your publishing tools and workflow, just as you've done with your DevOps toolchain.

### The DevOps technical writer's time is now

The cultural and technology transformation DevOps brings to organizations means there could be more work for an experienced and well-placed technical writer. Just as you brought your developers and system administrators into the DevOps age, do the same with your technical writers.

How is your organization adjusting the technical writer role for DevOps? Please share in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/11/hiring-technical-writers-devops

Author: [Will Kelly][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/willkelly
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/conversation-interview-mentor.png?itok=HjoOPcrB (Women talking)
[2]: https://martechseries.com/mts-insights/guest-authors/marketing-team-can-learn-devops/
[3]: https://contentmarketinginstitute.com/2015/04/intelligent-content-application-economy/
[4]: http://CA.com
[5]: https://iilab.github.io/contentascode/
[6]: https://jekyllrb.com/
[7]: https://gohugo.io/
@@ -0,0 +1,84 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Edit images on Fedora easily with GIMP)
[#]: via: (https://fedoramagazine.org/edit-images-on-fedora-easily-with-gimp/)
[#]: author: (Mehdi Haghgoo https://fedoramagazine.org/author/powergame/)

Edit images on Fedora easily with GIMP
======

![][1]

GIMP (short for GNU Image Manipulation Program) is free and open-source image manipulation software. With capabilities ranging from simple image editing to complex filters, scripting, and even animation, it is a good alternative to popular commercial options.

Read on to learn how to install and use GIMP on Fedora. This article covers basic daily image editing.

### Installing GIMP

GIMP is available in the official Fedora repository. To install it, run:

```
sudo dnf install gimp
```

### Single window mode

When you open the application, it shows a dark-themed window with the toolbox and the main editing area. Note that GIMP has two window modes that you can switch between via _Windows_ -> _Single Window Mode_. With this option checked, all components of the UI are displayed in a single window; otherwise, they appear in separate windows.

### Loading an image

![][2]

To load an image, go to _File_ -> _Open_ and choose your image file.

### Resizing an image

When resizing an image, you can scale it based on a couple of parameters, including pixels and percentage — the two parameters that are most often handy when editing images.

Let's say we need to scale down the Fedora 30 background image to 75% of its current size. To do that, select _Image_ -> _Scale_ and then, in the scale dialog, select percent in the unit drop-down. Next, enter _75_ as the width or height and press the **Tab** key. By default, the other dimension automatically resizes in correspondence with the changed dimension to preserve the aspect ratio. For now, leave the other options unchanged and press **Scale**.

![][3]

The image scales to 75 percent of its original size.
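The proportional arithmetic GIMP performs here can be sketched in a few lines of Python. The 1920x1080 starting size below is only an assumed example, not a value from the article:

```python
def scale_dimensions(width, height, percent):
    """Return (width, height) scaled by a percentage, preserving aspect ratio."""
    factor = percent / 100
    return round(width * factor), round(height * factor)

# Assumed example: a 1920x1080 background scaled to 75%
print(scale_dimensions(1920, 1080, 75))  # (1440, 810)
```

Because both dimensions are multiplied by the same factor, the aspect ratio is preserved, which is exactly what the linked width/height fields in the scale dialog do for you.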
### Rotating images

Rotating is a transform operation, so you will find it under _Image_ -> _Transform_ in the main menu, where there are options to rotate the image by 90 or 180 degrees. The options for flipping the image vertically or horizontally are under the same menu.

Let's say we need to rotate the image 90 degrees. After applying a 90-degree clockwise rotation and a horizontal flip, our image will look like this:

![Transforming an image with GIMP][4]

### Adding text

Adding text is very easy. Just select the A icon from the toolbox and click on the point in your image where you want to add text. If the toolbox is not visible, open it from _Windows_ -> _New Toolbox_.

As you edit the text, you will notice that the text dialog has font customization options, including font family, font size, etc.

![Adding text to image in GIMP][5]

### Saving and exporting

You can save your work as a GIMP project with the _xcf_ extension from _File_ -> _Save_ or by pressing **Ctrl+S**. Alternatively, you can export your image in formats such as PNG or JPEG. To export, go to _File_ -> _Export As_ or hit **Ctrl+Shift+E**, and you will be presented with a dialog where you can select the output format and file name.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/edit-images-on-fedora-easily-with-gimp/

Author: [Mehdi Haghgoo][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://fedoramagazine.org/author/powergame/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/gimp-magazine-816x346.jpg
[2]: https://fedoramagazine.org/wp-content/uploads/2019/10/Screenshot-from-2019-10-25-11-00-44-300x165.png
[3]: https://fedoramagazine.org/wp-content/uploads/2019/10/Screenshot-from-2019-10-25-11-17-33-300x262.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/10/Screenshot-from-2019-10-25-11-41-28-300x243.png
[5]: https://fedoramagazine.org/wp-content/uploads/2019/10/Screenshot-from-2019-10-25-11-47-54-300x237.png
98 sources/tech/20191114 Cleaning up with apt-get.md Normal file

@@ -0,0 +1,98 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cleaning up with apt-get)
[#]: via: (https://www.networkworld.com/article/3453032/cleaning-up-with-apt-get.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Cleaning up with apt-get
======

Most of us with Debian-based systems use apt-get routinely to install packages and upgrades, but how often do we pull out the cleaning tools? Let's check out some of the tool's options for cleaning up after itself.

[Félix Prado, modified by IDG Comm.][1] [(CC0)][2]

Running **apt-get** commands on a Debian-based system is routine. Packages are updated fairly frequently, and commands like **apt-get update** and **apt-get upgrade** make the process quite easy. On the other hand, how often do you use **apt-get clean**, **apt-get autoclean**, or **apt-get autoremove**?

These commands clean up after apt-get's installation operations and remove files that are still on your system but are no longer needed – often because the application that required them is no longer installed.

### apt-get clean

The **apt-get clean** command clears the local repository of retrieved package files that are left in **/var/cache**. The directories it cleans out are **/var/cache/apt/archives/** and **/var/cache/apt/archives/partial/**. The only files it leaves in **/var/cache/apt/archives** are the **lock** file and the **partial** subdirectory.

You might have a number of files in the directory prior to running the clean operation:

```
/var/cache/apt/archives/db5.3-util_5.3.28+dfsg1-0.6ubuntu1_amd64.deb
/var/cache/apt/archives/db-util_1%3a5.3.21~exp1ubuntu2_all.deb
/var/cache/apt/archives/lock
/var/cache/apt/archives/postfix_3.4.5-1ubuntu1_amd64.deb
/var/cache/apt/archives/sasl2-bin_2.1.27+dfsg-1build3_amd64.deb
```

You should only have these afterwards:

```
$ sudo ls -lR /var/cache/apt/archives
/var/cache/apt/archives:
total 4
-rw-r----- 1 root root    0 Jan  5  2018 lock
drwx------ 2 _apt root 4096 Nov 12 07:24 partial

/var/cache/apt/archives/partial:
total 0    <== empty
```

The **apt-get clean** command is generally used to clear disk space as needed, often as part of regularly scheduled maintenance.

### apt-get autoclean

The **apt-get autoclean** option, like **apt-get clean**, clears the local repository of retrieved package files, but it only removes files that can no longer be downloaded and are virtually useless. It helps to keep your cache from growing too large.

### apt-get autoremove

The **autoremove** option removes packages that were automatically installed because some other package required them but that, with those other packages removed, are no longer needed. Sometimes an upgrade will suggest that you run this command:

```
The following packages were automatically installed and are no longer required:
  g++-8 gir1.2-mutter-4 libapache2-mod-php7.2 libcrystalhd3
  libdouble-conversion1 libgnome-desktop-3-17 libigdgmm5 libisl19 libllvm8
  liblouisutdml8 libmutter-4-0 libmysqlclient20 libpoppler85 libstdc++-8-dev
  libtagc0 libvpx5 libx265-165 php7.2 php7.2-cli php7.2-common php7.2-json
  php7.2-opcache php7.2-readline
Use 'sudo apt autoremove' to remove them. <==
```

The packages to be removed are often called "unused dependencies". In fact, a good practice to follow is to use **autoremove** after uninstalling a package to be sure that no unneeded files are left behind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3453032/cleaning-up-with-apt-get.html

Author: [Sandra Henry-Stocker][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/nbKaLT4cmRM
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar (Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
212 sources/tech/20191115 How to port an awk script to Python.md Normal file

@@ -0,0 +1,212 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to port an awk script to Python)
[#]: via: (https://opensource.com/article/19/11/awk-to-python)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

How to port an awk script to Python
======

Porting an awk script to Python is more about code style than transliteration.

![Woman sitting in front of her laptop][1]

Scripts are potent ways to solve a problem repeatedly, and awk is an excellent language for writing them. It excels at easy text processing in particular, and it can carry you through some complicated rewriting of config files or renaming of files in a directory.

### When to move from awk to Python

At some point, however, awk's limitations start to show. It has no real concept of breaking files into modules, it lacks quality error reporting, and it's missing other things that are now considered fundamental to how a language works. When these rich features of a programming language are helpful for maintaining a critical script, porting becomes a good option.

My favorite modern programming language, and one that is perfect for porting awk, is Python.

Before porting an awk script to Python, it is often worthwhile to consider its original context. For example, because of awk's limitations, the awk code is commonly called from a Bash script and includes some calls to other command-line favorites like sed, sort, and the gang. It's best to convert all of it into one coherent Python program. Other times, the script makes overly broad assumptions; for example, the code might allow for any number of files, even though it's run with only one in practice.

After carefully considering the context and determining the thing to substitute with Python, it is time to write code.

### Standard awk to Python functionality

The following Python functionality is useful to remember:

```
with open(some_file_name) as fpin:
    for line in fpin:
        pass # do something with line
```

This code loops through a file line by line and processes the lines.

If you want to access a line number (equivalent to awk's **NR**), you can use the following code:

```
with open(some_file_name) as fpin:
    for nr, line in enumerate(fpin):
        pass # do something with line
```

### awk-like behavior over multiple files in Python

If you need to be able to iterate through any number of files while keeping a persistent count of the number of lines (like awk's **FNR**), this loop can do it:

```
def awk_like_lines(list_of_file_names):
    def _all_lines():
        for filename in list_of_file_names:
            with open(filename) as fpin:
                yield from fpin
    yield from enumerate(_all_lines())
```

This syntax uses Python's _generators_ and **yield from** to build an _iterator_ that loops through all lines and keeps a persistent count.

If you need the equivalent of both **FNR** and **NR**, here is a more sophisticated loop:

```
def awk_like_lines(list_of_file_names):
    def _all_lines():
        for filename in list_of_file_names:
            with open(filename) as fpin:
                yield from enumerate(fpin)
    for nr, (fnr, line) in enumerate(_all_lines()):
        yield nr, fnr, line
```
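Here is a quick, self-contained check of the FNR/NR behavior this loop is after, using two throwaway files created purely for the demonstration:

```python
import os
import tempfile

def awk_like_lines(list_of_file_names):
    # NR-style global count plus FNR-style per-file count
    def _all_lines():
        for filename in list_of_file_names:
            with open(filename) as fpin:
                yield from enumerate(fpin)
    for nr, (fnr, line) in enumerate(_all_lines()):
        yield nr, fnr, line

# Create two small temporary files just for this demonstration
names = []
for text in ("a\nb\n", "c\n"):
    handle = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
    handle.write(text)
    handle.close()
    names.append(handle.name)

rows = list(awk_like_lines(names))
print(rows)  # [(0, 0, 'a\n'), (1, 1, 'b\n'), (2, 0, 'c\n')]

for name in names:
    os.remove(name)
```

Note that the global count keeps running across the file boundary while the per-file count restarts, and that both counters here are 0-based, unlike awk's 1-based **NR** and **FNR**.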
### More complex awk functionality with FNR, NR, and line

The question remains whether you need all three: **FNR**, **NR**, and **line**. If you really do, using a three-tuple where two of the items are numbers can lead to confusion. Named attributes can make this code easier to read, so it's better to use a **dataclass**:

```
import dataclasses


@dataclasses.dataclass(frozen=True)
class AwkLikeLine:
    content: str
    fnr: int
    nr: int


def awk_like_lines(list_of_file_names):
    def _all_lines():
        for filename in list_of_file_names:
            with open(filename) as fpin:
                yield from enumerate(fpin)
    for nr, (fnr, line) in enumerate(_all_lines()):
        yield AwkLikeLine(content=line, fnr=fnr, nr=nr)
```

You might wonder, why not start with this approach? The reason to start elsewhere is that this is almost always too complicated. If your goal is to make a generic library that makes porting awk to Python easier, then consider doing so. But writing a loop that gets you exactly what you need for a specific case is usually easier to do and easier to understand (and thus maintain).

### Understanding awk fields

Once you have a string that corresponds to a line, if you are converting an awk program, you often want to break it up into _fields_. Python has several ways of doing that. This will return a list of strings, splitting the line on any number of consecutive whitespace characters:

```
line.split()
```

If another field separator is needed, something like this will split the line by **:**; the **rstrip** method is needed to remove the last newline:

```
line.rstrip("\n").split(":")
```

After doing the following, the list **parts** will have the broken-up string:

```
parts = line.rstrip("\n").split(":")
```

This split is good for choosing what to do with the parameters, but it puts us in an [off-by-one error][2] scenario. Now **parts[0]** will correspond to awk's **$1**, **parts[1]** will correspond to awk's **$2**, etc. This off-by-one exists because awk starts counting the "fields" from 1, while Python counts from 0. In awk, **$0** is the whole line, equivalent to **line.rstrip("\n")**, and awk's **NF** (number of fields) is more easily retrieved as **len(parts)**.
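To make the correspondence concrete, here is a small sketch using a made-up `/etc/passwd`-style line (the data is purely illustrative):

```python
# A made-up passwd-style line, purely for illustration
line = "alice:x:1000:1000:Alice:/home/alice:/bin/bash\n"

parts = line.rstrip("\n").split(":")

dollar_0 = line.rstrip("\n")  # awk's $0: the whole line, without the newline
dollar_1 = parts[0]           # awk's $1 is parts[0] -- the off-by-one
dollar_2 = parts[1]           # awk's $2 is parts[1]
nf = len(parts)               # awk's NF

print(dollar_1, dollar_2, nf)  # alice x 7
```

Keeping this mapping in mind ($n becomes parts[n - 1]) avoids the most common bug when translating field references.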
### Porting awk fields in Python
|
||||
|
||||
As an example, let's convert the one-liner from "[How to remove duplicate lines from files with awk][3]" to Python.
|
||||
|
||||
The original in awk is:
|
||||
|
||||
|
||||
```
|
||||
`awk '!visited[$0]++' your_file > deduplicated_file`
|
||||
```
|
||||
|
||||
An "authentic" Python conversion would be:
|
||||
|
||||
|
||||
```
|
||||
import collections
|
||||
import sys
|
||||
|
||||
visited = collections.defaultdict(int)
|
||||
for line in open("your_file"):
|
||||
did_visit = visited[line]
|
||||
visited[line] += 1
|
||||
if not did_visit:
|
||||
sys.stdout.write(line)
|
||||
```
|
||||
|
||||
However, Python has more data structures than awk. Instead of _counting_ visits (which we do not use, except to know whether we saw a line), why not record the visited lines?
|
||||
|
||||
|
||||
```
|
||||
import sys
|
||||
|
||||
visited = set()
|
||||
for line in open("your_file"):
|
||||
if line in visited:
|
||||
continue
|
||||
visited.add(line)
|
||||
sys.stdout.write(line)
|
||||
```
|
||||
|
||||
### Making Pythonic awk code
|
||||
|
||||
The Python community advocates for writing Pythonic code, which means it follows a commonly agreed-upon code style. An even more Pythonic approach will separate the concerns of _uniqueness_ and _input/output_. This change would make it easier to unit test your code:
|
||||
|
||||
|
||||
```
|
||||
def unique_generator(things):
|
||||
visited = set()
|
||||
for thing in things:
|
||||
if thing in visited:
|
||||
continue
|
||||
visited.add(things)
|
||||
yield thing
|
||||
|
||||
import sys
|
||||
|
||||
for line in unique_generator(open("your_file")):
|
||||
sys.stdout.write(line)
|
||||
```
|
||||
|
||||
Putting all logic away from the input/output code leads to better separation of concerns and more usability and testability of code.
|
||||
|
||||
### Conclusion: Python can be a good choice
|
||||
|
||||
Porting an awk script to Python is often more a matter of reimplementing the core requirements while thinking about proper Pythonic code style than a slavish transliteration of condition/action by condition/action. Take the original context into account and produce a quality Python solution. While there are times when a Bash one-liner with awk can get the job done, Python coding is a path toward more easily maintainable code.
|
||||
|
||||
Also, if you're writing awk scripts, I am confident you can learn Python as well! Let me know if you have any questions in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/11/awk-to-python

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_4.png?itok=VGZO8CxT (Woman sitting in front of her laptop)
[2]: https://en.wikipedia.org/wiki/Off-by-one_error
[3]: https://opensource.com/article/19/10/remove-duplicate-lines-files-awk
@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (PyRadio: An open source alternative for internet radio)
[#]: via: (https://opensource.com/article/19/11/pyradio)
[#]: author: (Lee Tusman https://opensource.com/users/leeto)

PyRadio: An open source alternative for internet radio
======
Play your favorite internet radio stations—while keeping your personal
data private—with PyRadio.
![Stereo radio with dials][1]
[PyRadio][2] is a convenient, open source, command-line application for playing any radio station that has a streaming link. And in 2019, almost every radio station (certainly, every one that has a web presence) has a way to listen online. Using the free PyRadio program, you can add, edit, play, and switch between your own selected list of streaming radio stations. It is a command-line tool for Linux that can run on many computers, including Macintosh and tiny computers like the Raspberry Pi. To some, a command-line client for playing music might sound needlessly complicated, but it's actually a simple alternative and one that serves as an instant text-based dashboard to easily select music to listen to.

A little background about myself: I spend a lot of time browsing for and listening to new music on [Bandcamp][3], on various blogs, and even Spotify. I don't spend time casually listening to app *radio* stations, which are really algorithmically-generated continuous streams of similarly tagged music. Rather, I prefer listening to non-profit, college, and locally-produced independent radio stations that are run by a community and don't rely on advertisements to sustain themselves.

I have always been a huge fan of community radio, from Drexel University's great reggae weekends on WKDU, to the uncanny experimental WFMU from Orange, N.J., to WNYC's eclectic schedule, including New Sounds. In my college days, I was a DJ on Brandeis' WBRS 100.1FM, playing experimental electronic music on the show Frequency. And as recently as 2018, I helped manage the station managers and schedule for [KCHUNG Radio][4], an artist-run internet and low-power AM station run out of Chinatown, Los Angeles.

![The PyRadio interface][5]

Just as a car radio (in days of yore) had buttons with presets for the owner's favorite radio stations, PyRadio lets me create a very simple list of radio stations that I can easily turn on and switch between. Since I spend most days working, researching, or writing to music, it's become my go-to software for listening. In an era where many people are used to commercial streaming services like curated Spotify mood playlists or Pandora "stations," it's nice to be able to set my own radio stations from a variety of sources outside of a commercial app and sans additional advertising.

Importantly, by not using commercial clients in the cloud, nothing is sending my user data or preferences to a company for whatever purposes they see fit. Nothing is collecting my preferences to build a profile to sell me more things.

PyRadio just works, and it's easy to use. Like some other Linux software, the hardest part of using PyRadio is installing it. This tutorial will help you install and run PyRadio for the first time. It assumes some basic knowledge of the command line. If you have no experience working in the terminal, I recommend reading a beginner-friendly [introduction to the command line][6] first.

### Installing PyRadio
In the past, I've used the Python package installer [pip][7] to install PyRadio, but the latest version is not yet installable from pip, and I couldn't find a package on Homebrew for my Mac. On my laptop running Ubuntu, I really wanted the latest version of PyRadio for its excellent new features, but I couldn't find an installation on Apt.

**[[Download our pip cheat sheet][8]]**

To get the current version on these computers, I built it from source. You can download the latest release from [github.com/coderholic/pyradio/releases][9], and then unzip or [untar][10] it. Change directory into the PyRadio source folder, and you're ready to begin.

Install the dependencies using your distribution's package manager (such as **dnf** on Fedora or **apt** on Ubuntu):

* python3-setuptools
* git
* MPV, MPlayer, or VLC

On a Mac, install [Git][11], [sed][12], and [MPlayer][13] dependencies using Homebrew:

```
brew install git
brew install gnu-sed --default-names
brew install mplayer
```

Once all dependencies are resolved, run the installer script, using the argument **3** to indicate that you want PyRadio to build for Python3:

```
$ sh devel/build_install_pyradio 3
```

The installation process takes about a minute.
### Using and tweaking the station list

To launch the application, just run **pyradio**. You can navigate up and down the default station list with the arrow or [Vim][14] keys and select a station with Enter. The artist name and track title currently streaming from the station should be displayed, if they are available. Typing **?** brings up a help text box that lists available commands. You can change the interface color themes with **t** or modify your configuration with **c**.

Out of the box, PyRadio comes with an extensive list of internet streaming stations. But I wanted to add my favorite public radio and college radio stations to the list, as well as some online music playlists. You can find streaming URLs on your favorite radio stations' websites or by browsing online station directories such as [Shoutcast][15]. In particular, I recommend the variety of excellent stations from [Soma FM][16]. You'll need to input the station's streaming playlist file, a URL that ends in **.pls**. You can also enter direct links to streaming audio files, such as MP3s.

The easiest way to add a station is to type **a**. PyRadio will ask you for the name of the station and its streaming URL, and you can press Enter to add it to your **stations** file. To delete any station, navigate to it and press **x**. You'll be prompted to confirm. The default station list is stored in **~/.config/pyradio/stations.csv**. The station list is a two-column CSV file with the station names and the stream URLs.

![Adding a station to PyRadio][17]
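Since the station list is a plain two-column CSV file, it is also easy to inspect or generate programmatically. Here is a minimal sketch using Python's standard `csv` module (the station entries below are made-up examples, not ones shipped with PyRadio):

```python
import csv
import io

# A stations file holds one "name,stream URL" pair per line,
# just like ~/.config/pyradio/stations.csv.
sample = io.StringIO(
    "Groove Salad,http://somafm.com/groovesalad.pls\n"
    "My College Station,http://example.org/stream.mp3\n"
)
stations = list(csv.reader(sample))
for name, url in stations:
    print(f"{name} -> {url}")  # prints Groove Salad -> http://somafm.com/groovesalad.pls, then the next pair
```

To work on your real list, you would pass `open("~/.config/pyradio/stations.csv")` (expanded with `os.path.expanduser`) instead of the in-memory sample.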
Those are the basics of PyRadio. You can find additional information in its [GitHub repo][18]. I hope you have many hours of audio enjoyment ahead of you. If you have any other PyRadio tips or suggestions for stations, please leave a comment below.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/11/pyradio

作者:[Lee Tusman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/leeto
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-stereo-radio-music.png?itok=st66SdwS (Stereo radio with dials)
[2]: http://www.coderholic.com/pyradio/
[3]: http://bandcamp.com
[4]: https://kchungradio.org/
[5]: https://opensource.com/sites/default/files/interface_0.png (The PyRadio interface)
[6]: https://www.redhat.com/sysadmin/navigating-filesystem-linux-terminal
[7]: https://pypi.org/project/pip/
[8]: https://opensource.com/article/19/11/introducing-our-python-pip-cheat-sheet
[9]: https://github.com/coderholic/pyradio/releases
[10]: https://opensource.com/article/17/7/how-unzip-targz-file
[11]: https://git-scm.com/
[12]: https://www.gnu.org/software/sed/manual/sed.html
[13]: http://www.mplayerhq.hu/design7/news.html
[14]: https://www.vim.org/
[15]: https://directory.shoutcast.com/
[16]: https://somafm.com/
[17]: https://opensource.com/sites/default/files/pyradio-add.png (Adding a station to PyRadio)
[18]: https://github.com/coderholic/pyradio
@ -0,0 +1,457 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 Methods to Quickly Check if a Website is up or down from the Linux Terminal)
[#]: via: (https://www.2daygeek.com/linux-command-check-website-is-up-down-alive/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

6 Methods to Quickly Check if a Website is up or down from the Linux Terminal
======
This tutorial shows you how to quickly check whether a given website is up (alive) or down from a Linux terminal.

You may already know some of the commands for this, namely ping, curl, and wget, but we have added several other commands in this tutorial as well.

We have also added various options to check this information for a single host and for multiple hosts.

This article will help you check whether a website is up or down. But if you maintain websites and want real-time alerts when a site goes down, I recommend using a real-time website monitoring tool. There are many tools for this; some are free, and most are paid. Choose your preferred one based on your needs. We will cover this topic in an upcoming article.
### Method-1: How to Check if a Website is up or down Using the fping Command

**[fping command][1]** is a program like ping, which uses Internet Control Message Protocol (ICMP) echo requests to determine whether a target host is responding.

fping differs from ping in that it allows users to ping any number of hosts in parallel. Hosts can also be read from a text file.

fping sends an ICMP echo request and moves on to the next target in a round-robin fashion, without waiting for the current target host to respond.

If a target host replies, it is noted as active and removed from the list of targets to check; if a target does not respond within a certain time limit and/or retry limit, it is designated as unreachable.

```
# fping 2daygeek.com linuxtechnews.com magesh.co.in

2daygeek.com is alive
linuxtechnews.com is alive
magesh.co.in is alive
```
### Method-2: How to Quickly Check Whether a Website is up or down Using the http Command

HTTPie (pronounced aitch-tee-tee-pie) is a command-line HTTP client.

The **[httpie tool][2]** is a modern command-line HTTP client that makes CLI interaction with web services as easy as possible.

It provides a simple http command that allows for sending arbitrary HTTP requests using a simple and natural syntax, and displays colorized output.

HTTPie can be used for testing, debugging, and generally interacting with HTTP servers.

```
# http 2daygeek.com

HTTP/1.1 301 Moved Permanently
CF-RAY: 535b66722ab6e5fc-LHR
Cache-Control: max-age=3600
Connection: keep-alive
Date: Thu, 14 Nov 2019 19:30:28 GMT
Expires: Thu, 14 Nov 2019 20:30:28 GMT
Location: https://2daygeek.com/
Server: cloudflare
Transfer-Encoding: chunked
Vary: Accept-Encoding
```
### Method-3: How to Check if a Website is up or down Using the curl Command

**[curl command][3]** is a tool to transfer data to or from a server, using one of the supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET, and TFTP).

The command is designed to work without user interaction.

curl also supports proxies, user authentication, FTP upload, HTTP POST, SSL connections, cookies, file transfer resume, Metalink, and more.

curl is powered by libcurl for all transfer-related features.

```
# curl -I https://www.magesh.co.in

HTTP/2 200
date: Thu, 14 Nov 2019 19:39:47 GMT
content-type: text/html
set-cookie: __cfduid=db16c3aee6a75c46a504c15131ead3e7f1573760386; expires=Fri, 13-Nov-20 19:39:46 GMT; path=/; domain=.magesh.co.in; HttpOnly
vary: Accept-Encoding
last-modified: Sun, 14 Jun 2015 11:52:38 GMT
x-cache: HIT from Backend
cf-cache-status: DYNAMIC
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 535b74123ca4dbf3-LHR
```

Use the following curl command if you want to see only the HTTP status code instead of the entire output.

```
# curl -I "www.magesh.co.in" 2>&1 | awk '/HTTP\// {print $2}'
200
```
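If you would rather not shell out at all, the same status check can be written with nothing but Python's standard library. This is a hedged sketch (the `site_status` helper name is mine, not a standard tool):

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def site_status(url, timeout=5):
    """Return the HTTP status code for url, or None if it is unreachable."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status
    except HTTPError as err:
        # The server answered, but with an error code (404, 500, ...).
        return err.code
    except (URLError, OSError):
        # DNS failure, refused connection, timeout, and so on.
        return None
```

A site is "up" in the same sense as the shell scripts below when `site_status(url)` returns 200 (redirects such as 301 are followed automatically by urlopen).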
If you want to see if a given website is up or down, use the following Bash script.

```
# vi curl-url-check.sh

#!/bin/bash
if curl -I "https://www.magesh.co.in" 2>&1 | grep -w "200\|301" ; then
    echo "magesh.co.in is up"
else
    echo "magesh.co.in is down"
fi
```

Once you have added the above script to a file, run the file to see the output.

```
# sh curl-url-check.sh

HTTP/2 200
magesh.co.in is up
```
Use the following shell script if you want to see the status of multiple websites.

```
# vi curl-url-check-1.sh

#!/bin/bash
for site in www.google.com google.co.in www.xyzzz.com
do
    if curl -I "$site" 2>&1 | grep -w "200\|301" ; then
        echo "$site is up"
    else
        echo "$site is down"
    fi
    echo "----------------------------------"
done
```

Once you have added the above script to a file, run the file to see the output.

```
# sh curl-url-check-1.sh

HTTP/1.1 200 OK
www.google.com is up
----------------------------------
HTTP/1.1 301 Moved Permanently
google.co.in is up
----------------------------------
www.xyzzz.com is down
----------------------------------
```
### Method-4: How to Quickly Check Whether a Website is up or down Using the wget Command

**[wget command][4]** (formerly known as Geturl) is a free, open source, command-line download tool that retrieves files using HTTP, HTTPS, and FTP, the most widely used Internet protocols.

It is a non-interactive command-line tool, and its name is derived from World Wide Web and get.

wget handles downloads quite well compared with other tools; its features include working in the background, recursive downloads, multiple file downloads, resuming downloads, non-interactive downloads, and large file downloads.
```
# wget -S --spider https://www.magesh.co.in

Spider mode enabled. Check if remote file exists.
--2019-11-15 01:22:00-- https://www.magesh.co.in/
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving www.magesh.co.in (www.magesh.co.in)… 104.18.35.52, 104.18.34.52, 2606:4700:30::6812:2334, …
Connecting to www.magesh.co.in (www.magesh.co.in)|104.18.35.52|:443… connected.
HTTP request sent, awaiting response…
HTTP/1.1 200 OK
Date: Thu, 14 Nov 2019 19:52:01 GMT
Content-Type: text/html
Connection: keep-alive
Set-Cookie: __cfduid=db73306a2f1c72c1318ad4709ef49a3a01573761121; expires=Fri, 13-Nov-20 19:52:01 GMT; path=/; domain=.magesh.co.in; HttpOnly
Vary: Accept-Encoding
Last-Modified: Sun, 14 Jun 2015 11:52:38 GMT
X-Cache: HIT from Backend
CF-Cache-Status: DYNAMIC
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Server: cloudflare
CF-RAY: 535b85fe381ee684-LHR
Length: unspecified [text/html]
Remote file exists and could contain further links,
but recursion is disabled -- not retrieving.
```

Use the following wget command if you want to see only the HTTP status code instead of the entire output.

```
# wget --spider -S "www.magesh.co.in" 2>&1 | awk '/HTTP\// {print $2}'
200
```
If you want to see if a given website is up or down, use the following Bash script.

```
# vi wget-url-check.sh

#!/bin/bash
if wget --spider -S "https://www.google.com" 2>&1 | grep -w "200\|301" ; then
    echo "Google.com is up"
else
    echo "Google.com is down"
fi
```

Once you have added the above script to a file, run the file to see the output.

```
# sh wget-url-check.sh

HTTP/1.1 200 OK
Google.com is up
```
Use the following shell script if you want to see the status of multiple websites.

```
# vi wget-url-check-1.sh

#!/bin/bash
for site in www.google.com google.co.in www.xyzzz.com
do
    if wget --spider -S "$site" 2>&1 | grep -w "200\|301" ; then
        echo "$site is up"
    else
        echo "$site is down"
    fi
    echo "----------------------------------"
done
```

Once you have added the above script to a file, run the file to see the output.

```
# sh wget-url-check-1.sh

HTTP/1.1 200 OK
www.google.com is up
----------------------------------
HTTP/1.1 301 Moved Permanently
google.co.in is up
----------------------------------
www.xyzzz.com is down
----------------------------------
```
### Method-5: How to Quickly Check Whether a Website is up or down Using the lynx Command

**[lynx][5]** is a highly configurable text-based web browser for use on cursor-addressable character cell terminals. It’s the oldest web browser and it’s still in active development.
```
# lynx -head -dump http://www.magesh.co.in

HTTP/1.1 200 OK
Date: Fri, 15 Nov 2019 08:14:23 GMT
Content-Type: text/html
Connection: close
Set-Cookie: __cfduid=df3cb624024b81df7362f42ede71300951573805662; expires=Sat, 14-Nov-20 08:14:22 GMT; path=/; domain=.magesh.co.in; HttpOnly
Vary: Accept-Encoding
Last-Modified: Sun, 14 Jun 2015 11:52:38 GMT
X-Cache: HIT from Backend
CF-Cache-Status: DYNAMIC
Server: cloudflare
CF-RAY: 535fc5704a43e694-LHR
```

Use the following lynx command if you want to see only the HTTP status code instead of the entire output.

```
# lynx -head -dump https://www.magesh.co.in 2>&1 | awk '/HTTP\// {print $2}'
200
```
If you want to see if a given website is up or down, use the following Bash script.

```
# vi lynx-url-check.sh

#!/bin/bash
if lynx -head -dump http://www.magesh.co.in 2>&1 | grep -w "200\|301" ; then
    echo "magesh.co.in is up"
else
    echo "magesh.co.in is down"
fi
```

Once you have added the above script to a file, run the file to see the output.

```
# sh lynx-url-check.sh

HTTP/1.1 200 OK
magesh.co.in is up
```
Use the following shell script if you want to see the status of multiple websites.

```
# vi lynx-url-check-1.sh

#!/bin/bash
for site in http://www.google.com https://google.co.in http://www.xyzzz.com
do
    if lynx -head -dump "$site" 2>&1 | grep -w "200\|301" ; then
        echo "$site is up"
    else
        echo "$site is down"
    fi
    echo "----------------------------------"
done
```

Once you have added the above script to a file, run the file to see the output.

```
# sh lynx-url-check-1.sh

HTTP/1.0 200 OK
http://www.google.com is up
----------------------------------
HTTP/1.0 301 Moved Permanently
https://google.co.in is up
----------------------------------
www.xyzzz.com is down
----------------------------------
```
### Method-6: How to Check if a Website is up or down Using the ping Command

**[ping][1]** (Packet Internet Groper) is a networking utility used to test a host's availability and connectivity on an Internet Protocol (IP) network.

It verifies a host's availability by sending Internet Control Message Protocol (ICMP) echo request packets to the target host and waiting for an ICMP echo reply.

It summarizes statistical results based on the packets transmitted, packets received, and packet loss, typically including the min/avg/max round-trip times.

```
# ping -c 5 2daygeek.com

PING 2daygeek.com (104.27.157.177) 56(84) bytes of data.
64 bytes from 104.27.157.177 (104.27.157.177): icmp_seq=1 ttl=58 time=228 ms
64 bytes from 104.27.157.177 (104.27.157.177): icmp_seq=2 ttl=58 time=227 ms
64 bytes from 104.27.157.177 (104.27.157.177): icmp_seq=3 ttl=58 time=250 ms
64 bytes from 104.27.157.177 (104.27.157.177): icmp_seq=4 ttl=58 time=171 ms
64 bytes from 104.27.157.177 (104.27.157.177): icmp_seq=5 ttl=58 time=193 ms

--- 2daygeek.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 13244ms
rtt min/avg/max/mdev = 170.668/213.824/250.295/28.320 ms
```
### Method-7: How to Quickly Check Whether a Website is up or down Using the telnet Command

The telnet command is a client for an old network protocol used to communicate with another host over a TCP/IP network using the TELNET protocol.

It uses port 23 by default to connect to other devices, such as computers and network equipment.

Telnet is not a secure protocol and is no longer recommended, because the data it sends is not encrypted and can be intercepted by hackers.

Most people use the SSH protocol instead of telnet, because it is encrypted and very secure.

```
# telnet google.com 80

Trying 216.58.194.46…
Connected to google.com.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
```
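The same TCP reachability check that telnet performs interactively can also be scripted. As one sketch, Python's socket module can attempt a connection and report success or failure (the `port_open` helper name is mine, not a standard tool):

```python
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, DNS failure, and so on.
        return False
```

For example, `port_open("google.com", 80)` corresponds to the interactive telnet session shown above.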
### Method-8: How to Check if a Website is up or down Using a Bash Script

In simple words, a **[shell script][6]** is a file that contains a series of commands. The shell reads this file and executes the commands one by one, as if they were entered directly on the command line.

To make this more useful, we can add some conditions, which reduces the work for a Linux admin.

If you want to see the status of multiple websites using the wget command, use the following shell script.
```
# vi wget-url-check-2.sh

#!/bin/bash
for site in www.google.com google.co.in www.xyzzz.com
do
    if wget --spider -S "$site" 2>&1 | grep -w "200\|301" > /dev/null ; then
        echo "$site is up"
    else
        echo "$site is down"
    fi
done
```

Once you have added the above script to a file, run the file to see the output.

```
# sh wget-url-check-2.sh

www.google.com is up
google.co.in is up
www.xyzzz.com is down
```
If you want to see the status of multiple websites using the curl command, use the following **[bash script][7]**.

```
# vi curl-url-check-2.sh

#!/bin/bash
for site in www.google.com google.co.in www.xyzzz.com
do
    if curl -I "$site" 2>&1 | grep -w "200\|301" > /dev/null ; then
        echo "$site is up"
    else
        echo "$site is down"
    fi
done
```

Once you have added the above script to a file, run the file to see the output.

```
# sh curl-url-check-2.sh

www.google.com is up
google.co.in is up
www.xyzzz.com is down
```
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/linux-command-check-website-is-up-down-alive/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/how-to-use-ping-fping-gping-in-linux/
[2]: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
[3]: https://www.2daygeek.com/curl-linux-command-line-download-manager/
[4]: https://www.2daygeek.com/wget-linux-command-line-download-utility-tool/
[5]: https://www.2daygeek.com/best-text-mode-based-command-line-web-browser-for-linux/
[6]: https://www.2daygeek.com/category/shell-script/
[7]: https://www.2daygeek.com/category/bash-script/
@ -0,0 +1,166 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Troubleshooting “E: Unable to locate package” Error on Ubuntu [Beginner’s Tutorial])
[#]: via: (https://itsfoss.com/unable-to-locate-package-error-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Troubleshooting “E: Unable to locate package” Error on Ubuntu [Beginner’s Tutorial]
======
_**This beginner tutorial shows how to go about fixing the E: Unable to locate package error on Ubuntu Linux.**_

One of the [many ways of installing software in Ubuntu][1] is to use the [apt-get][2] or the [apt command][3]. You open a terminal and use the program name to install it like this:

```
sudo apt install package_name
```

Sometimes, you may encounter an error while trying to install an application in this manner. The error reads:
```
sudo apt-get install package_name
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package package_name
```

The error is self-explanatory. Your Linux system cannot find the package that you are trying to install. But why is it so? Why can it not find the package? Let’s see some of the actions you can take to fix this issue.

### Fixing the ‘Unable to locate package’ error on Ubuntu

![][4]

Let’s see how to troubleshoot this issue one step at a time.
#### 1\. Check the package name (no, seriously)

This should be the first thing to check. Did you make a typo in the package name? I mean, if you are trying to [install vlc][5] and you typed vcl, it will surely fail. Typos are common so make sure that you have not made any mistakes in typing the name of the package.
#### 2\. Update the repository cache

If this is the first time you are using your system after installing, you should run the update command:

```
sudo apt update
```

This command won’t [update Ubuntu][6] straightaway. I recommend reading through the [concept of Ubuntu repositories][7]. Basically, the ‘apt update’ command builds a local cache of available packages.

When you use the install command, the apt package manager searches the cache to get the package and version information, and then downloads the package from its repositories over the network. If the package is not in this cache, your system won’t be able to install it.

When you have a freshly installed Ubuntu system, the cache is empty. This is why you should run the apt update command right after installing Ubuntu or any other distribution based on Ubuntu (like Linux Mint).

Even if it’s not a fresh install, your apt cache might be outdated. It’s always a good idea to update it.
#### 3\. Check if the package is available for your Ubuntu version

Alright! You checked the name of the package and it is correct. You ran the update command to rebuild the cache and yet you see the unable to locate package error.

It is possible that the package is really not available. But you are following the instructions mentioned on some website, and everyone else seems to be able to install it like that. What could be the issue?

I can see two things here. Either the package is available in the Universe repository and your system hasn’t enabled it, or the package is not available for your Ubuntu version at all. Don’t get confused. I’ll explain it for you.

First step, [check the Ubuntu version you are running][8]. Open a terminal and use the following command:

```
lsb_release -a
```

You’ll get the Ubuntu version number and the codename in the output. The codename is what’s important here:
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic
```

![Ubuntu Version Check][9]
As you can see here, I am using Ubuntu 18.04 and its codename is _bionic_. You may have something else but you get the gist of what you need to note here.
|
||||
|
||||
Once you have the version number and the codename, head over to the Ubuntu packages website:
|
||||
|
||||
[Ubuntu Packages][10]
|
||||
|
||||
Scroll down a bit on this page and go to the Search part. You’ll see a keyword field. Enter the package name (which cannot be found by your system) and then set the correct distribution codename. The section should be ‘any’. When you have set these three details, hit the search button.
![Ubuntu Package Search][11]
This will show if the package is available for your Ubuntu version and if yes, which repository it belongs to. In my case, I searched for [Shutter screenshot tool][12] and this is what it showed me for Ubuntu 18.04 Bionic version:
![Package Search Result][13]
In my case, the package name is an exact match. This means the package shutter is available for Ubuntu 18.04 Bionic, but in the 'Universe repository'. If you are wondering what the heck the Universe repository is, please [refer to the Ubuntu repository article I had mentioned earlier][7].
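You can also cross-check this from the terminal. Assuming you are on an apt-based system, `apt-cache policy` reports which repository (if any) a package would come from, based on your current cache:

```shell
# Ask apt which repository provides the package and which versions are
# available; 'shutter' is just the running example from this article.
apt-cache policy shutter
```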
If the intended package is available for your Ubuntu version but in a repository like Universe or Multiverse, you should enable these additional repositories:
```
sudo add-apt-repository universe multiverse
```
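If you want to check whether Universe was already enabled before running the command above, you can grep apt's source lists. A minimal sketch, using the standard apt configuration paths:

```shell
# Look for a 'universe' component in the configured apt sources.
# Matches both the classic 'deb ...' format and the newer deb822
# 'Components:' format; '-s' silences errors for paths that don't exist.
if grep -rqsE '^(deb|Components:).*universe' /etc/apt/sources.list /etc/apt/sources.list.d/; then
    echo "universe repository is enabled"
else
    echo "universe repository is not enabled"
fi
```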
You must also update the cache so that your system is aware of the new packages available through these repositories:
```
sudo apt update
```
Now if you try to install the package, things should be fine.
#### Nothing works, what now?
If the Ubuntu Packages website also shows that the package is not available for your specific version, then you'll have to find some other way to install the package.
Take Shutter for example. It's an [excellent screenshot tool for Linux][14], but it hasn't been updated in years and thus Ubuntu has dropped it from Ubuntu 18.10 and newer versions. How do you install it now? Thankfully, a third-party developer created a Personal Package Archive (PPA) and you can install it using that. Please read this detailed guide to [understand PPA in Ubuntu][15]. You can search for packages and their PPAs on Ubuntu's Launchpad website.
Do keep in mind that you shouldn’t add random (unofficial) PPAs to your repositories list. I advise sticking with what your distribution provides.
If there are no PPAs, check the official website of the project and see if they provide some alternative ways of installing the application. Some projects provide [DEB files][16] or [AppImage][17] files. Some projects have switched to [Snap packages][18].
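The AppImage route needs no installation at all: you mark the downloaded file as executable and run it. A minimal sketch, with a hypothetical download path:

```shell
# Hypothetical path; substitute the AppImage file you actually downloaded.
APPIMAGE="$HOME/Downloads/Shutter.AppImage"

if [ -f "$APPIMAGE" ]; then
    chmod +x "$APPIMAGE"   # AppImages are self-contained executables
    "$APPIMAGE"            # run it directly, no installation step
else
    echo "No AppImage found at $APPIMAGE"
fi
```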
In other words, check the official website of the project and check if they have changed their installation method.
If nothing works, perhaps the project itself is discontinued. If that's the case, you should look for an alternative application.
**In the end…**
If you are new to Ubuntu or Linux, things could be overwhelming. This is why I am covering some basic topics like this so that you get a better understanding of how things work in your system.
I hope this tutorial helps you handle the package error in Ubuntu. If you have questions or suggestions, please feel free to ask in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/unable-to-locate-package-error-ubuntu/
Author: [Abhishek Prakash][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国 (Linux China)](https://linux.cn/)
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/remove-install-software-ubuntu/
[2]: https://itsfoss.com/apt-get-linux-guide/
[3]: https://itsfoss.com/apt-command-guide/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/unable_to_locate_package_error_ubuntu.png?ssl=1
[5]: https://itsfoss.com/install-latest-vlc/
[6]: https://itsfoss.com/update-ubuntu/
[7]: https://itsfoss.com/ubuntu-repositories/
[8]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/ubuntu_version_check.jpg?ssl=1
[10]: https://packages.ubuntu.com/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/ubuntu_package_search.png?ssl=1
[12]: https://itsfoss.com/install-shutter-ubuntu/
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/package_search_result.png?resize=800%2C311&ssl=1
[14]: https://itsfoss.com/take-screenshot-linux/
[15]: https://itsfoss.com/ppa-guide/
[16]: https://itsfoss.com/install-deb-files-ubuntu/
[17]: https://itsfoss.com/use-appimage-linux/
[18]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/