Mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-03-21 02:10:11 +08:00
Merge remote-tracking branch 'LCTT/master' (commit e03b9f5714)

published/20190911 4 open source cloud security tools.md (new file, 88 lines)

[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11432-1.html)
[#]: subject: (4 open source cloud security tools)
[#]: via: (https://opensource.com/article/19/9/open-source-cloud-security)
[#]: author: (Alison Naylor https://opensource.com/users/asnaylor,Anderson Silva https://opensource.com/users/ansilva)
4 open source cloud security tools
======

> Find and eliminate vulnerabilities in the data you store in AWS and GitHub.

![Tools in a cloud][1]

If your daily work as a developer, sysadmin, full-stack engineer, or site reliability engineer (SRE) involves pushing, committing, and pulling with Git on GitHub and deploying to Amazon Web Services (AWS), security is a constant concern. Fortunately, open source tools can help your team avoid common mistakes that could cost your organization thousands of dollars.

This article introduces four open source tools that can help improve your project's security when you develop on GitHub and AWS. In the spirit of open source, I worked on it together with three security experts: [Travis McPeak][2], senior cloud security engineer at Netflix; [Rich Monk][3], senior principal information security analyst at Red Hat; and [Alison Naylor][4], principal information security analyst at Red Hat.

We have broken the tools down by scenario, but they are not mutually exclusive.

### 1. Find sensitive data with Gitrob
You need to find any sensitive information that has made its way into your team's Git repositories so you can remove it. It can make sense to pair tools focused on attacking applications or operating systems with a red/blue team model, in which an information security team is divided into an offensive team (the red team) and a defensive team (the blue team). Having a red team try to penetrate your systems and applications is far better than waiting for an adversary to do it for real. Your red team might try [Gitrob][5], a tool that can clone and crawl your Git repositories looking for credentials and sensitive files.

Even though a tool like Gitrob could be used for harm, the intent here is for your infosec team to use it to find inadvertently disclosed sensitive data that belongs to your organization (such as AWS key pairs or other credentials that were committed by mistake). That way, you can fix your repositories and scrub the sensitive data, hopefully before an attacker finds it. Remember to not only fix the affected files but also [remove them from history][6].
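As a rough illustration of what such a scan does, here is a minimal Python sketch that greps text for common credential patterns. The patterns below are illustrative only; they are not Gitrob's actual rule set.

```python
import re

# A couple of well-known credential shapes (illustrative, not Gitrob's rules):
# AWS access key IDs are 20 uppercase alphanumeric characters starting with AKIA,
# and PEM private keys start with a recognizable header line.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text):
    """Return (pattern_name, matched_text) pairs for every hit in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

if __name__ == "__main__":
    # AKIAIOSFODNN7EXAMPLE is AWS's documented example (non-functional) key ID.
    sample = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE\n"
    print(find_secrets(sample))
```

A real scanner would walk every blob in every commit, not just the working tree, which is exactly why history rewriting is needed once a leak is found.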
### 2. Avoid merging sensitive data with git-secrets

While it's important to find and remove sensitive information from your Git repos, wouldn't it be better to avoid merging it in the first place? Even if you commit sensitive information by mistake, [git-secrets][7] can save you from public embarrassment. This tool helps you set up hooks that scan your commits, commit messages, and merges for common patterns of sensitive data. Make sure the patterns you choose match the credentials your team uses, such as AWS access keys and secret keys. If a match is found, your commit is rejected and a potential crisis is averted.

It's simple to set up git-secrets for your existing repos, and you can apply a global configuration to protect every repository you create or clone in the future. You can also use git-secrets to scan your repos (and all of their previous revisions) before making them public.
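A typical setup, sketched from the commands the git-secrets project documents (git-secrets must be installed for any of this to run), might look like:

```shell
# Inside an existing repository: install the commit, commit-msg, and merge hooks
git secrets --install

# Teach the hooks the AWS credential patterns
git secrets --register-aws

# Scan the working tree, then the full history, before publishing the repo
git secrets --scan
git secrets --scan-history

# Optionally protect all future repos via a git template directory
git secrets --install ~/.git-templates/git-secrets
git config --global init.templateDir ~/.git-templates/git-secrets
```

With the template directory configured, every `git init` or `git clone` picks up the hooks automatically.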
### 3. Create temporary credentials with Key Conjurer

A little extra insurance against accidentally disclosing stored secrets is great, but we can do even better by not storing credentials at all. Keeping track of credentials (who has access to them, where they are stored, when they were last rotated) is a hassle. Programmatically generated temporary credentials, however, avoid a lot of those problems, neatly sidestepping the issue of storing secrets in Git repos altogether. Enter [Key Conjurer][8], which was created to meet exactly this need. For more on why Riot Games created Key Conjurer and how they developed it, read [Key Conjurer: Our Policy of Least Privilege][9].
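The underlying mechanism (short-lived credentials minted through AWS STS) can be sketched with boto3. The helper names here are hypothetical, not Key Conjurer's API, and the STS call needs valid AWS configuration to actually run:

```python
def mint_temporary_credentials(duration_seconds=900):
    """Ask AWS STS for short-lived credentials (needs boto3 and AWS config)."""
    import boto3  # imported here so the formatting helper below has no AWS dependency
    sts = boto3.client("sts")
    return sts.get_session_token(DurationSeconds=duration_seconds)["Credentials"]

def as_env_exports(creds):
    """Render an STS credentials dict as shell export lines for a subshell."""
    return "\n".join(
        f"export {var}={creds[key]}"
        for var, key in [
            ("AWS_ACCESS_KEY_ID", "AccessKeyId"),
            ("AWS_SECRET_ACCESS_KEY", "SecretAccessKey"),
            ("AWS_SESSION_TOKEN", "SessionToken"),
        ]
    )
```

Because the credentials expire on their own, there is nothing long-lived to leak into a repository in the first place.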
### 4. Apply least privilege automatically with Repokid
Anyone who has taken a basic security course knows that least privilege is the best practice for role-based access control. Sadly, outside the classroom, applying a least-privilege policy by hand turns out to be hard. An application's access needs change over time, and developers are too busy to trim their permissions manually. [Repokid][10] uses the identity and access management (IAM) data that AWS provides to automatically right-size access policies. Repokid can apply least privilege automatically in AWS even for very large organizations.
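For context, a least-privilege IAM policy is one that grants only the specific actions and resources a role actually uses. A hypothetical example (the bucket name is made up) for a role that only ever reads objects from one bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyExampleBucket",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Repokid's job is to observe which permissions a role never exercises and shrink broad policies down toward something like this.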
### Tools, not silver bullets

These tools are no silver bullets; they are just tools! So, before you try these or any other controls, work with the rest of your organization to make sure you understand the use cases and usage patterns of your cloud services.

Take your cloud and code repository services seriously, and familiarize yourself with their best practices. The following articles will help you do that.

**For AWS:**

  * [Best practices for managing AWS access keys][11]
  * [AWS security audit guidelines][12]

**For GitHub:**

  * [Introducing new ways to keep your code secure][13]
  * [GitHub Enterprise security best practices][14]

Just as importantly, stay in touch with your security team; they should be able to provide ideas, recommendations, and guidelines for your team's success. Always remember: security is everyone's responsibility, not just theirs.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/open-source-cloud-security

Author: [Alison Naylor][a1], [Anderson Silva][a2]
Topic selection: [lujun9972][b]
Translator: [hopefully2333](https://github.com/hopefully2333)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a1]: https://opensource.com/users/asnaylor
[a2]: https://opensource.com/users/ansilva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud)
[2]: https://twitter.com/travismcpeak?lang=en
[3]: https://github.com/rmonk
[4]: https://www.linkedin.com/in/alperkins/
[5]: https://github.com/michenriksen/gitrob
[6]: https://help.github.com/en/articles/removing-sensitive-data-from-a-repository
[7]: https://github.com/awslabs/git-secrets
[8]: https://github.com/RiotGames/key-conjurer
[9]: https://technology.riotgames.com/news/key-conjurer-our-policy-least-privilege
[10]: https://github.com/Netflix/repokid
[11]: https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
[12]: https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html
[13]: https://github.blog/2019-05-23-introducing-new-ways-to-keep-your-code-secure/
[14]: https://github.blog/2015-10-09-github-enterprise-security-best-practices/
@@ -0,0 +1,57 @@ (new file)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloud Native Computing: The Hidden Force behind Swift App Development)
[#]: via: (https://opensourceforu.com/2019/10/cloud-native-computing-the-hidden-force-behind-swift-app-development/)
[#]: author: (Robert Shimp https://opensourceforu.com/author/robert-shimp/)
Cloud Native Computing: The Hidden Force behind Swift App Development
======

[![][1]][2]

_Cloud native computing can bolster the development of advanced applications powered by artificial intelligence, machine learning and the Internet of Things._
Modern enterprises are constantly adapting their business strategies and processes as they respond to evolving market conditions. This is especially true for enterprises serving fast-growing economies in the Asia Pacific, such as India and Australia. For these businesses, cloud computing is an invaluable means to accelerate change. From quickly deploying new applications to rapidly scaling infrastructure, enterprises are using cloud computing to create new value, build better solutions and expand business.

Now cloud providers are introducing new 'cloud native computing' services that enable even more dynamic application development. This new technology will make cloud application developers more agile and efficient, even as it reduces deployment costs and increases cloud vendor independence.

Many enterprises are intrigued but also feel overwhelmed by the rapidly changing cloud native technology landscape, and hence aren't sure how to proceed. While cloud native computing has demonstrated success among early adopters, harnessing this technology has posed a challenge for many mainstream businesses.

**Choosing the right cloud native open source projects**

There are several ways that an enterprise can bring cloud native computing on board. One option is to build its own cloud native environment using open source software. This comes at the price of carefully evaluating many different open source projects before choosing which software to use. Once the software is selected, the IT department will need to staff and train hard-to-find talent to provide in-house support. All in all, this can be an expensive and risky way to adopt new technology.

A second option is to contract with a software vendor to provide a complete cloud native solution. But this compromises the organisation's freedom to choose the best open source technologies in exchange for better vendor support, not to mention the added perils of a closed contract.

This dilemma can be resolved by using a technology provider that offers the best of both worlds: delivering standards-based, off-the-shelf software from the open source projects designated by the Cloud Native Computing Foundation (CNCF), while also providing integration, testing and enterprise-class support for the entire software stack.

CNCF uses experts from across the industry to evaluate the maturity, quality and security of cloud native open source projects, and to give guidance on which ones are ready for enterprise use. Selected cloud native technologies cover the entire scope of containers, microservices, continuous integration, serverless functions, analytics and much more.

Once CNCF declares these cloud native open source projects to have 'graduated', they can confidently be incorporated into an enterprise's cloud native strategy, with the knowledge that these are high quality, mainstream technologies that will get industry-wide support.

**Finding a single vendor who offers multiple benefits**

But adopting CNCF's rich cloud native technology framework is only half the battle. You also must choose a technology provider who will package these CNCF-endorsed technologies without proprietary extensions that lock you in, and who will provide the necessary integration, testing, support, documentation, training and more.

A well-designed software stack built on CNCF guidelines and offered by a single vendor has many benefits. First, it reduces the risks associated with technology adoption. Second, it provides a single point of contact to rapidly get support and resolve issues, which means faster time to market and higher customer satisfaction. Third, it helps make cloud native applications portable to any popular cloud. This flexibility can help enterprises improve their operating margins by reducing expenses and unlocking future revenue growth opportunities.

Cloud native computing is becoming an everyday part of mainstream cloud application development. It can also bolster the development of advanced applications powered by artificial intelligence (AI), machine learning (ML) and the Internet of Things (IoT), among others.

Leading users of cloud native technologies include R&D laboratories; high tech, manufacturing and logistics companies; critical service providers and many others.
--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/cloud-native-computing-the-hidden-force-behind-swift-app-development/

Author: [Robert Shimp][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensourceforu.com/author/robert-shimp/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Cloud-Native_Cloud-Computing.jpg?resize=696%2C459&ssl=1 (Cloud Native_Cloud Computing)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Cloud-Native_Cloud-Computing.jpg?fit=900%2C593&ssl=1
sources/talk/20191007 DevOps is Eating the World.md (new file, 71 lines)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevOps is Eating the World)
[#]: via: (https://opensourceforu.com/2019/10/devops-is-eating-the-world/)
[#]: author: (Jens Eckels https://opensourceforu.com/author/jens-eckels/)
DevOps is Eating the World
======

[![][1]][2]

_Ten years ago, DevOps wasn't a thing. Now, if you're not adopting DevOps practices, you're in danger of being left behind the competition. Over the last decade, JFrog's Liquid Software vision has driven a commitment to helping companies around the world adopt, mature and evolve their CI/CD pipelines. Why? DevOps powers the software that powers the world. Most companies today are turning into software companies, with more and more applications to build and update. They have to manage fast, secure releases with distributed development teams and a growing amount of data._

**Our Mission**

JFrog is on a mission to enable continuous updates through liquid software, empowering developers to code high-quality applications that securely flow to end-users with zero downtime. We are the creators of [_Artifactory_][3], the heart of the end-to-end Universal DevOps platform for automating, managing, securing, distributing, and monitoring [_all types of technologies_][4]. As the leading universal, highly available enterprise DevOps solution, the [_JFrog platform_][5] empowers customers with trusted and expedited software releases from code to production. Trusted by more than 5,500 customers, the world's top brands, such as Amazon, Facebook, Google, Netflix, Uber, VMware, and Spotify, depend on JFrog to manage binaries for their mission-critical applications.

**"Liquid Software"**

In its truest form, Liquid Software updates software continuously from code to the edge, seamlessly, securely and with no downtime. No versions. No big buttons. Just flowing updates that frictionlessly power all the devices and applications around you. Why? To an edge device, browser or end-user, versions don't really have to matter. What version of Facebook is on your phone? You don't care, until it's time to update it and you get annoyed. What is the current version of the operating system on your laptop? You might know, but again you don't really care as long as it's up to date. How about your version of Microsoft products? The version of your favorite website? You don't care. You want it to work, and the more transparently it works, the better. In fact, most times you'd prefer it if software would just update without you even needing to click a button. JFrog is powering that change.

**A fully Automated CI/CD Pipeline**

The idea of automating everything in the CI/CD pipeline is exciting and groundbreaking. Imagine a single platform where you could automate every step from code into production. It's not a pipe dream (or a dream for your pipeline). It's the Liquid Software vision: a world without versions. We're excited about it, and eager to share the possibilities with you.

**The Frog gives back!**

JFrog's roots are in the many **open source** communities that are mainstays today. In addition to its many community contributions through global standards organizations, JFrog is proud to give enterprise-grade tools away to open source committers, as well as provide free versions of products for specific package types. There are "developer-first" companies that like to talk about their target market. JFrog is a developer company built by and for developers. We're happy to support you.

**JFrog is all over the world!**

JFrog has nine-and-counting global offices, including one in India, where we have a rapidly growing team with R&D and support functions. **And, we're hiring fast!** ([_see open positions_][6]). Join us and the Liquid Software revolution!

**We are a proud sponsor of Open Source India**

As the sponsor of the DevOps track, we want to be sure that you see and have access to all the cool tools and methods available. So, we have a couple of amazing experiences you can enjoy:

  1. Stop by the booth where we will be demonstrating the latest versions of the JFrog Platform, enabling Liquid Software. We're excited to show you what's possible.
  2. Join **Stephen Chin**, world-renowned speaker and night-hacker, who will be giving a talk on the future of Liquid Software. Stephen spent many years at Sun and Oracle running teams of developer advocates.

He's a developer and advocate for your communities, and he's excited to join you.

**The bottom line:** JFrog is proud to be a developer company, serving your needs and the needs of the OSS communities around the globe with DevOps, DevSecOps and pipeline automation solutions that are changing how the world does business. We're happy to help and eager to serve.

JFrog products are available as [_open-source_][7], [_on-premise_][8], and [_on the cloud_][9] on [_AWS_][10], [_Microsoft Azure_][11], and [_Google Cloud_][12]. JFrog is privately held with offices across North America, Europe, and Asia. **Learn more at** [_jfrog.com_][13].
--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/devops-is-eating-the-world/

Author: [Jens Eckels][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensourceforu.com/author/jens-eckels/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/07/DevOps-Statup-rising.jpg?resize=696%2C498&ssl=1 (DevOps Statup rising)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/07/DevOps-Statup-rising.jpg?fit=1460%2C1045&ssl=1
[3]: https://jfrog.com/artifactory/
[4]: https://jfrog.com/integration/
[5]: https://jfrog.com/enterprise-plus-platform/
[6]: https://join.jfrog.com/
[7]: https://jfrog.com/open-source/
[8]: https://jfrog.com/artifactory/free-trial/
[9]: https://jfrog.com/artifactory/free-trial/#saas
[10]: https://jfrog.com/artifactory/cloud-native-aws/
[11]: https://jfrog.com/artifactory/cloud-native-azure/
[12]: https://jfrog.com/artifactory/cloud-native-gcp/
[13]: https://jfrog.com/
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (amwps290)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )
@@ -1,103 +0,0 @@ (file removed)

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (You Can Now Use OneDrive in Linux Natively Thanks to Insync)
[#]: via: (https://itsfoss.com/use-onedrive-on-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
You Can Now Use OneDrive in Linux Natively Thanks to Insync
======

[OneDrive][1] is a cloud storage service from Microsoft that provides 5 GB of free storage to every user. It is integrated with your Microsoft account, and if you use Windows, you have OneDrive preinstalled there.

OneDrive is not available as a desktop application on Linux. You can access your stored files via the web interface, but you won't get the native feel of using cloud storage in your file manager.

The good news is that you can now use an unofficial tool that lets you use OneDrive in Ubuntu or other Linux distributions.

[Insync][2] is quite a popular premium third-party sync tool when it comes to Google Drive cloud storage management on Linux. We already have a detailed review of [Insync with Google Drive][3] support, for that matter.

However, [Insync 3 was recently released][4] with OneDrive support. So, in this article, we take a quick look at how OneDrive can be used with it and what's new in Insync 3.

Non-FOSS Alert

_Few developers take the pains of bringing their software to Linux. As a portal focusing on desktop Linux, we cover such software here even if it is not FOSS._

_Insync 3 is neither open source software nor free to use. You only get a 15-day trial to test it out. If you like it, you can purchase it for a lifetime fee of $29.99 per account._

_And no, we are not being paid to promote them (in case you were wondering). We don't do that here._
### Get A Native OneDrive Experience in Linux With Insync

![][5]

Even though it is a premium tool, users who rely on OneDrive may want it for a seamless experience syncing OneDrive on their Linux system.

To get started, download the suitable package for your Linux distribution from the [official download page][6].

[Download Insync][7]

You can also choose to add the repository and install it from there. You will find the instructions on Insync's [official website][7].

Once you have it installed, just launch it and choose the OneDrive option.

![][8]

It is worth noting that you need a separate license for each OneDrive or Google Drive account you add.

Now, after authorizing the OneDrive account, you have to select a base folder where you want to sync everything, which is a new feature in Insync 3.

![Insync 3 Base Folder][9]

In addition to this, once you have set it up, you can also selectively sync files/folders locally or from the cloud.

![Insync Selective Sync][10]

You can also customize the sync preferences by adding your own rules for which folders and files to ignore or sync; this is totally optional.

![Insync Customize Sync Preferences][11]

Finally, you have it ready:

![Insync 3][12]

You can now start syncing files/folders using OneDrive across multiple platforms, including your Linux desktop, with Insync. In addition to all the new features/changes mentioned above, you also get a faster and smoother experience with Insync.

Also, with Insync 3, you can now take a look at the progress of your sync:

![][13]

**Wrapping Up**

Overall, Insync 3 is an impressive upgrade for those looking to sync OneDrive on their Linux system. In case you do not want to pay, you can try other [free cloud services for Linux][14].

What do you think about Insync? If you're already using it, how's the experience so far? Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------

via: https://itsfoss.com/use-onedrive-on-linux/

Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://onedrive.live.com
[2]: https://www.insynchq.com
[3]: https://itsfoss.com/insync-linux-review/
[4]: https://www.insynchq.com/blog/insync-3/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/onedrive-linux.png?ssl=1
[6]: https://www.insynchq.com/downloads?start=true
[7]: https://www.insynchq.com/downloads
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3one-drive-sync.png?ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3-base-folder-1.png?ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-selective-syncs.png?ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-customize-sync.png?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-homescreen.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3-progress-bar.png?ssl=1
[14]: https://itsfoss.com/cloud-services-linux/
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (geekpi)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )
@@ -1,242 +0,0 @@ (file removed)

[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (All That You Can Do with Google Analytics, and More)
[#]: via: (https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytics-and-more/)
[#]: author: (Ashwin Sathian https://opensourceforu.com/author/ashwin-sathian/)
All That You Can Do with Google Analytics, and More
======

[![][1]][2]

_We have all heard about or used Google Analytics (GA), the most popular tool to track user activity such as, but not limited to, page visits. Its utility and popularity mean that everybody wishes to use it. This article focuses on how to use it correctly in a world where single page Angular and React applications are becoming more popular by the day._

A pertinent question is: how does one track page visits in applications that have just one page? As always, there are ways around this, and we will look at one such option in this article. While we will do the implementation in an Angular application, the usage and concepts aren't very different if the app is in React. So, let's get started!

**Getting the application ready**

Getting a tracking ID: Before we write actual code, we need to get a tracking ID, the unique identifier that tells Google Analytics that data like a click or a page view is coming from a particular application.

To get this, we do the following:

  1. Go to _<https://analytics.google.com>_.
  2. Sign up by entering the required details. Make sure the registration is for the Web; ours is a Web application, after all.
  3. Agree to the _Terms and Conditions_, and generate your tracking ID.
  4. Copy the ID, which will look something like 'UA-ID-Y'.

Now that the ID is ready, let's write some code.
**Adding the _analytics.js_ script**

While the team at Google has done all the hard work to get the Google Analytics tools ready for us to use, this is where we do our part: make them available to the application. This is simple; all that's to be done is to add the following script to your application's _index.html_ (note that the quotes must be plain straight quotes, or the script will not parse):

```
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
</script>
```

With that out of the way, let's see how we can initialise Google Analytics in the application.
**Creating the tracker**

Let's now set up the tracker for the application. For this, open _app.component.ts_ and perform the following steps:

  1. Declare a global variable named ga, of type _any_ (remember, this is TypeScript and you need to mention types).
  2. Insert the following `ga('create', ...)` call into the _ngOnInit()_ of your component.

```
declare let ga: any; // step 1: at module scope, declares the global set up by analytics.js

ga('create', YOUR_TRACKING_ID, 'auto'); // step 2: inside ngOnInit()
```

Congratulations! You have now successfully initiated a Google Analytics tracker in your application. Since the tracker initiation is made inside the _OnInit_ function, the tracker will get activated every time the application starts up.
**Recording page visits in the single page application**

Now comes the tricky bit: tracking the parts of your application visited by users. Functionally, we know that routes are stand-ins for traditional pages in modern single page Web applications, which means we need to record route visits. This is not easy, but we can still achieve it. In the _ngOnInit()_ function inside _app.component.ts_, add the following code snippet:

```
import { Router, NavigationEnd } from '@angular/router';
...
constructor(public router: Router) {}
...
this.router.events.subscribe(event => {
  if (event instanceof NavigationEnd) {
    ga('set', 'page', event.urlAfterRedirects);
    ga('send', {
      hitType: 'pageview',
      hitCallback: () => { this.pageViewSent = true; }
    });
  }
});
```

Believe it or not, those few lines of code have taken care of the pageview recording issue in the Angular application.
Despite being only a few lines of code, a fair bit is happening here:

  1. Import Router and NavigationEnd from the Angular Router.
  2. Add the Router to the component through its constructor.
  3. Subscribe to router events, i.e., all events emitted by the Angular Router.
  4. Whenever a NavigationEnd event occurs (emitted whenever the application navigates to a route), set that destination/route as the page and send a pageview.

Now, every time a routing occurs, a pageview is sent to Google Analytics. You can view this in the online Google Analytics dashboard.

Like pageviews, we can record a lot of other activities, such as screenviews, timing, etc., using the same syntax. We can make the program react in any way we want to all such send activities through the _hitCallback()_, as shown in the code snippet. Here we are setting a variable value to true, but any piece of code can be executed in _hitCallback_.
**Tracking user interactions**

After pageviews, the most commonly tracked activities on Google Analytics are user interactions such as, but not limited to, button clicks. 'How many times was the _Submit_ button clicked?' 'How often is the product brochure viewed?' These are questions often asked in product review meetings for Web applications. In this section, we'll look at implementing this with Google Analytics in the application.

**Button clicks:** Consider a case in which you wish to track the number of times a certain button or link in the application is clicked, a metric most associated with sign-ups, call-to-action buttons, etc. We'll look at an example.

For this purpose, assume that you have a 'Show Interest' button in your application for an upcoming event. The organisers wish to keep track of how many people are interested in the event by counting clicks on the button. The following code snippet facilitates this:
```
…
params = {
  eventCategory: 'Button',
  eventAction: 'Click',
  eventLabel: 'Show interest',
  eventValue: 1
};

showInterest() {
  ga('send', 'event', this.params);
}
…
```
|
||||
|
||||
Let’s look at what is being done here. Google Analytics, as already discussed, records activities when we send the data to it. It is the parameters that we pass to this ‘send’ method that distinguish between various events like tracking clicks on two separate buttons.
|
||||
1\. First, we define a ‘params’ object that should have the following fields:
|
||||
|
||||
1. _eventCategory_ – An object with which interaction happens; in this case, a button.
|
||||
2. _eventAction_ – The type of interaction; in our case, a click.
|
||||
3. _eventLabel_ – An identifier for the interaction. In this case, we could call it Show Interest.
|
||||
4. _eventValue_ – Basically, the value you wish to associate with each instance of this event.
|
||||
|
||||
|
||||
|
||||
Since this example is measuring the number of people showing interest, we can set this value to 1.
|
||||
|
||||
2\. After constructing this object, the next part is pretty simple and one of the most commonly used methods as far as Google Analytics tracking goes – sending the event, with the ‘params’ object as the payload. We do this by using event binding on the button and attaching the _showInterest()_ method to it.

That’s it! Google Analytics (GA) will now track data of people expressing interest by clicking the button.
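As a minimal, framework-free sketch of this pattern (note: in a real application `ga` is provided by the analytics.js snippet; here it is stubbed with a queue purely so the example is self-contained, and the button selector is hypothetical):

```javascript
// Stub `ga` with a queue instead of loading analytics.js (illustration only).
const gaQueue = [];
function ga(...args) { gaQueue.push(args); }

// The same event payload as in the snippet above.
const params = {
  eventCategory: 'Button',
  eventAction: 'Click',
  eventLabel: 'Show interest',
  eventValue: 1
};

// The handler that reports one click as one event hit.
function showInterest() {
  ga('send', 'event', params);
}

// In the browser you would wire this up with event binding, e.g.:
// document.querySelector('#interest-btn').addEventListener('click', showInterest);
showInterest(); // simulate one click
```

Each simulated click appends one `['send', 'event', params]` entry to the queue, which is exactly the call analytics.js would dispatch to Google Analytics.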
**Tracking social engagements:** Google Analytics also lets you track people’s interactions on social media through the application. One such case would be a Facebook-type ‘Like’ button for our brand’s page that we place in the application. Let’s look at how we can do this tracking using GA.

```
…
fbLikeParams = {
  socialNetwork: 'Facebook',
  socialAction: 'Like',
  socialTarget: 'https://facebook.com/mypage'
};
…
fbLike() {
  ga('send', 'social', this.fbLikeParams);
}
```
If that code looks familiar, it’s because it is very similar to the method by which we track button clicks. Let’s look at the steps:
1\. Construct the payload for sending data. This payload should have the following fields:
1. _socialNetwork_ – The network the interaction happens on, e.g., Facebook, Twitter, etc.
2. _socialAction_ – The sort of interaction taking place, e.g., Like, Tweet, Share, etc.
3. _socialTarget_ – The URL targeted by the interaction; this could be the URL of the social media profile/page.

2\. The next step is, of course, to add a function to report this activity. Unlike a button click, the hit type we send here is _social_ rather than _event_. After this function is written, we bind it to the Like button we have in place.

There’s more that can be done with GA as far as tracking user interactions goes, one of the top items being exception tracking, which allows us to track errors and exceptions occurring in the application. We won’t delve deeper into it in this article; however, the reader is encouraged to explore it.

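To hint at what exception tracking looks like, here is a hedged sketch of sending an exception hit. The field names `exDescription` and `exFatal` are the ones analytics.js documents for exception hits; the global `ga` function is again stubbed with a queue so the snippet is self-contained.

```javascript
// Stub `ga` with a queue instead of loading analytics.js (illustration only).
const gaQueue = [];
function ga(...args) { gaQueue.push(args); }

// Report a caught error to GA as an 'exception' hit.
function reportError(err, isFatal) {
  ga('send', 'exception', {
    exDescription: err.message,
    exFatal: Boolean(isFatal)
  });
}

try {
  JSON.parse('{not valid json'); // deliberately malformed input
} catch (err) {
  reportError(err, false); // non-fatal: the app can recover
}
```

The pattern mirrors event tracking exactly: only the hit type and its field names change.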
**User identity**
**Privacy is a right, not a luxury:** While Google Analytics can record a lot of activities and user interactions, there is one comparatively lesser-known aspect, which we will look at in this section: the degree of control we have over tracking (and not tracking) user identity.

**Cookies:** GA uses cookies as a means to track a user’s activity. But we can define what these cookies are named and a couple of other aspects about them, as shown in the code snippet below:

```
trackingID = 'UA-139883813-1';
cookieParams = {
  cookieName: 'myGACookie',
  cookieDomain: window.location.hostname,
  cookieExpires: 604800
};
…
ngOnInit() {
  ga('create', this.trackingID, this.cookieParams);
  ...
}
```
Here, we are setting the GA cookie’s name, domain and expiration time (in seconds – 604800 is one week), which allows us to distinguish cookies set by our GA tracker from those set by GA trackers on other websites/Web applications. Rather than a cryptic auto-generated identity, we’ll be able to set custom identities for our application’s GA tracker cookies.

**IP anonymisation:** There may be cases when we do not want to know where the traffic to our application is coming from. For instance, consider a button click activity tracker – we do not necessarily need to know the geographical source of that interaction, so long as the number of hits is tracked. In such situations, GA allows us to track users’ activity without them having to reveal their IP address.

```
ipParams = {
  anonymizeIp: true
};
…
ngOnInit() {
  …
  ga('set', this.ipParams);
  ...
}
```
Here, we are setting the parameters of the GA tracker so that IP anonymisation is set to _true_. Thus, our users’ IP addresses will not be tracked by Google Analytics, which gives users a sense of privacy.

**Opt-out:** At times, users may not want their browsing data to be tracked. GA allows for this too, and hence has the option to enable users to completely opt out of GA tracking.

```
...
optOut() {
  window['ga-disable-UA-139883813-1'] = true;
}
...
```
_optOut()_ is a custom function that disables GA tracking for the window. We can employ this function using event binding on a button/checkbox, which allows users to opt out of GA tracking.

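A minimal sketch of wiring this up (plain DOM, outside any framework; the `window['ga-disable-<TRACKING_ID>']` property is the one GA documents for opting out, and a plain object stands in for the browser `window` so the snippet is self-contained):

```javascript
// A plain object stands in for the browser's `window` (illustration only).
const win = {};

// Assumption: 'UA-139883813-1' is the tracking ID used earlier in the article.
function setOptOut(optedOut) {
  win['ga-disable-UA-139883813-1'] = optedOut;
}

// In the browser this would be driven by a checkbox, e.g.:
// checkbox.addEventListener('change', e => setOptOut(e.target.checked));
setOptOut(true); // user ticks the opt-out box
```

Making the flag toggleable like this lets users opt back in as well as out.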
We have looked at what makes integrating Google Analytics into single page applications tricky and explored a way to work around this. We also saw how we can track ‘page’ views in single page applications as well as touched upon tracking users’ interactions with the application, such as button clicks, social media engagements, etc.
Finally, we examined opportunities offered by GA to ensure user privacy, especially when their identity isn’t critical to our application’s analytics, to the extent of allowing users to entirely opt out of Google Analytics tracking. Since there is much more that can be done, you’re encouraged to keep exploring and keep playing around with the methods offered by GA.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytics-and-more/
Author: [Ashwin Sathian][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensourceforu.com/author/ashwin-sathian/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Analytics-illustration.jpg?resize=696%2C396&ssl=1 (Analytics illustration)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Analytics-illustration.jpg?fit=900%2C512&ssl=1
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install and Configure VNC Server on Centos 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

How to Install and Configure VNC Server on CentOS 8 / RHEL 8
======

A **VNC** (Virtual Network Computing) server is a GUI-based desktop sharing platform that allows you to access remote desktop machines. On **CentOS 8** and **RHEL 8** systems, VNC servers are not installed by default and need to be installed manually. In this article, we’ll look at how to install a VNC server on CentOS 8 / RHEL 8 with a simple step-by-step guide.

### Prerequisites to Install VNC Server on CentOS 8 / RHEL 8

To install a VNC server on your system, make sure you have the following requirements readily available:

  * CentOS 8 / RHEL 8
  * GNOME desktop environment
  * Root access
  * DNF / YUM package repositories

### Step-by-Step Guide to Install VNC Server on CentOS 8 / RHEL 8

### Step 1) Install the GNOME desktop environment

Before installing VNC Server on CentOS 8 / RHEL 8, make sure you have a desktop environment (DE) installed. If the GNOME desktop is already installed, or you installed your server with the GUI option, you can skip this step.

In CentOS 8 / RHEL 8, GNOME is the default desktop environment. If you don’t have it on your system, install it using one of the following commands:

```
[root@linuxtechi ~]# dnf groupinstall "workstation"
Or
[root@linuxtechi ~]# dnf groupinstall "Server with GUI"
```

Once the above packages are installed successfully, run the following command to enable graphical mode:

```
[root@linuxtechi ~]# systemctl set-default graphical
```

Now reboot the system so that we get the GNOME login screen:

```
[root@linuxtechi ~]# reboot
```

Once the system has rebooted successfully, uncomment the line “**WaylandEnable=false**” in the file “**/etc/gdm/custom.conf**” so that remote desktop session requests via VNC are handled by the Xorg server of the GNOME desktop instead of Wayland.

**Note:** Wayland is the default display server in GNOME, and it is not configured to handle remote rendering APIs the way X.org is.

### Step 2) Install VNC Server (tigervnc-server)

Next, we’ll install a VNC server. There are a lot of VNC servers available; for this installation, we’ll use **TigerVNC Server**. It is one of the most popular VNC servers: a high-performance, platform-independent implementation that allows users to interact with remote machines easily.

Now install TigerVNC Server using the following command:

```
[root@linuxtechi ~]# dnf install tigervnc-server tigervnc-server-module -y
```

### Step 3) Set the VNC Password for a Local User

Let’s assume we want the user ‘pkumar’ to use VNC for remote desktop sessions. Switch to that user and set its VNC password using the vncpasswd command:

```
[root@linuxtechi ~]# su - pkumar
[root@linuxtechi ~]$ vncpasswd
Password:
Verify:
Would you like to enter a view-only password (y/n)? n
A view-only password is not used
[root@linuxtechi ~]$
[root@linuxtechi ~]$ exit
logout
[root@linuxtechi ~]#
```

### Step 4) Set Up the VNC Server Configuration File

The next step is to configure the VNC server configuration file. Create a file “**/etc/systemd/system/[root@linuxtechi][1]**” with the following content so that the tigervnc-server service is started for the local user “pkumar”.

```
[root@linuxtechi ~]# vim /etc/systemd/system/root@linuxtechi
[Unit]
Description=Remote Desktop VNC Service
After=syslog.target network.target

[Service]
Type=forking
WorkingDirectory=/home/pkumar
User=pkumar
Group=pkumar

ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/bin/vncserver -autokill %i
ExecStop=/usr/bin/vncserver -kill %i

[Install]
WantedBy=multi-user.target
```

Save and exit the file.

**Note:** Replace the user name in the above file to suit your setup.

By default, the VNC server listens on TCP port 5900+n, where n is the display number; if the display number is “1”, then the VNC server will listen for requests on TCP port 5901.

### Step 5) Start the VNC Service and Allow the Port in the Firewall

I am using display number 1, so use the following commands to start and enable the VNC service on display number “1”:

```
[root@linuxtechi ~]# systemctl daemon-reload
[root@linuxtechi ~]# systemctl start root@linuxtechi:1.service
[root@linuxtechi ~]# systemctl enable root@linuxtechi:1.service
Created symlink /etc/systemd/system/multi-user.target.wants/root@linuxtechi:1.service → /etc/systemd/system/root@linuxtechi
[root@linuxtechi ~]#
```

Use the **netstat** or **ss** command below to verify that the VNC server has started listening for requests on 5901:

```
[root@linuxtechi ~]# netstat -tunlp | grep 5901
tcp 0 0 0.0.0.0:5901 0.0.0.0:* LISTEN 8169/Xvnc
tcp6 0 0 :::5901 :::* LISTEN 8169/Xvnc
[root@linuxtechi ~]# ss -tunlp | grep -i 5901
tcp LISTEN 0 5 0.0.0.0:5901 0.0.0.0:* users:(("Xvnc",pid=8169,fd=6))
tcp LISTEN 0 5 [::]:5901 [::]:* users:(("Xvnc",pid=8169,fd=7))
[root@linuxtechi ~]#
```

Use the systemctl command below to verify the status of the VNC server:

```
[root@linuxtechi ~]# systemctl status root@linuxtechi:1.service
```

![vncserver-status-centos8-rhel8][2]

The above command’s output confirms that VNC started successfully on TCP port 5901. Use the following commands to allow the VNC server port “5901” through the OS firewall:

```
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5901/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```

### Step 6) Connect to a Remote Desktop Session

Now we are all set to see if the remote desktop connection works. To access the remote desktop, start VNC Viewer from your Windows / Linux workstation, enter your **VNC server IP address** and **port number**, and then hit Enter.

[![VNC-Viewer-Windows10][2]][3]

Next, it will ask for your VNC password. Enter the password that you created earlier for your local user and click OK to continue.

[![VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server][2]][4]

Now you can see the remote desktop:

[![VNC-Desktop-Screen-CentOS8][2]][5]

That’s it; you’ve successfully installed VNC Server on CentOS 8 / RHEL 8.

**Conclusion**

I hope this step-by-step guide to installing a VNC server on CentOS 8 / RHEL 8 has given you all the information you need to easily set up a VNC server and access remote desktops. Please leave your comments and suggestions in the feedback section below. See you in the next article… until then, a big THANK YOU and BYE for now!!!

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/

Author: [Pradeep Kumar][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/cdn-cgi/l/email-protection
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Windows10.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Desktop-Screen-CentOS8.jpg
222
sources/tech/20191007 7 Java tips for new developers.md
Normal file

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Java tips for new developers)
[#]: via: (https://opensource.com/article/19/10/java-basics)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

7 Java tips for new developers
======
If you're just getting started with Java programming, here are seven
basics you need to know.
![Coffee and laptop][1]

Java is a versatile programming language used, in some way, in nearly every industry that touches a computer. Java's greatest power is that it runs in a Java Virtual Machine (JVM), a layer that translates Java code into bytecode compatible with your operating system. As long as a JVM exists for your operating system, whether that OS is on a server (or [serverless][2], for that matter), desktop, laptop, mobile device, or embedded device, then a Java application can run on it.

This makes Java a popular language for both programmers and users. Programmers know that they only have to write one version of their software to end up with an application that runs on any platform, and users know that an application will run on their computer regardless of what operating system they use.

Many languages and frameworks are cross-platform, but none deliver the same level of abstraction. With Java, you target the JVM, not the OS. For programmers, that's the path of least resistance when faced with several programming challenges, but it's only useful if you know how to program Java. If you're just getting started with Java programming, here are seven basic tips you need to know.

But first, if you're not sure whether you have Java installed, you can find out in a terminal (such as [Bash][3] or [PowerShell][4]) by running:

```
$ java --version
openjdk 12.0.2 2019-07-16
OpenJDK Runtime Environment 19.3 (build 12.0.2+9)
OpenJDK 64-Bit Server VM 19.3 (build 12.0.2+9, mixed mode, sharing)
```

If you get an error or nothing in return, then you should install the [Java Development Kit][5] (JDK) to get started with Java development. Or install a Java Runtime Environment (JRE) if you just need to run Java applications.

### 1\. Java packages

In Java, related classes are grouped into a _package_. The basic Java libraries you get when you download the JDK are grouped into packages starting with **java** or **javax**. Packages serve a similar function as folders on your computer: they provide structure and definition for related elements (in programming terminology, a _namespace_). Additional packages can be obtained from independent coders, open source projects, and commercial vendors, just as libraries can be obtained for any programming language.

When you write a Java program, you should declare a package name at the top of your code. If you're just writing a simple application to get started with Java, your package name can be as simple as the name of your project. If you're using a Java integrated development environment (IDE), like [Eclipse][6], it generates a sane package name for you when you start a new project.

```
package helloworld;

/**
 * @author seth
 * An application written in Java.
 */
```

Otherwise, you can determine the name of your package by looking at its path in relation to the broad definition of your project. For instance, if you're writing a set of classes to assist in game development and the collection is called **jgamer**, then you might have several unique classes within it.

```
package jgamer.avatar;

/**
 * @author seth
 * An imaginary game library.
 */
```

The top level of your package is **jgamer**, and each package inside it is a descendant, such as **jgamer.avatar** and **jgamer.score** and so on. In your filesystem, the structure reflects this, with **jgamer** being the top directory containing the files **avatar.java** and **score.java**.

### 2\. Java imports

The most fun you'll ever have as a polyglot programmer is trying to keep track of whether you **include**, **import**, **use**, **require**, or **some other term** a library in whatever programming language you're writing in. Java, for the record, uses the **import** keyword when importing libraries needed for your code.

```
package helloworld;

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

/**
 * @author seth
 * A GUI hello world.
 */
```

Imports work based on an environment's Java path. If Java doesn't know where Java libraries are stored on a system, then an import cannot be successful. As long as a library is stored in a system's Java path, then an import can succeed, and a library can be used to build and run a Java application.

If a library is not expected to be in the Java path (because, for instance, you are writing the library yourself), then the library can be bundled with your application (license permitting) so that the import works as expected.

### 3\. Java classes

A Java class is declared with the keywords **public class** along with a unique class name mirroring its file name. For example, in a file **Hello.java** in project **helloworld**:

```
package helloworld;

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

/**
 * @author seth
 * A GUI hello world.
 */

public class Hello {
  // this is an empty class
}
```

You can declare variables and functions inside a class. In Java, variables within a class are called _fields_.

### 4\. Java methods

Java methods are, essentially, functions within an object. They are defined as **public** (meaning they can be accessed by any other class) or **private** (limiting their use), and they declare the type of data they return, such as **void**, **int**, **float**, and so on.

```
public void helloPrompt(ActionEvent event) {
  String salutation = "Hello %s";
  String helloMessage = "World";
  String message = String.format(salutation, helloMessage);
  JOptionPane.showMessageDialog(this, message);
}

private int someNumber(int x) {
  return x * 2;
}
```

When calling a method directly, it is referenced by its class and method name. For instance, **Hello.someNumber** refers to the **someNumber** method in the **Hello** class.

### 5\. Static

The **static** keyword in Java makes a member in your code accessible independently of the object that contains it.

In object-oriented programming, you write code that serves as a template for "objects" that get spawned as the application runs. You don't code a specific window, for instance, but an _instance_ of a window based upon a window class in Java (and modified by your code). Since nothing you are coding "exists" until the application generates an instance of it, most methods and variables (and even nested classes) cannot be used until the object they depend upon has been created.

However, sometimes you need to access or use data in an object before it is created by the application (for example, an application can't generate a red ball without first knowing that the ball is meant to be red). For those cases, there's the **static** keyword.

### 6\. Try and catch

Java is excellent at catching errors, but it can only recover gracefully if you tell it what to do. The cascading hierarchy of attempting to perform an action in Java starts with **try**, falls back to **catch**, and ends with **finally**. Should the **try** clause fail, then **catch** is invoked, and in the end, there's always **finally** to perform some sensible action regardless of the results. Here's an example:

```
try {
  cmd = parser.parse(opt, args);

  if(cmd.hasOption("help")) {
    HelpFormatter helper = new HelpFormatter();
    helper.printHelp("Hello <options>", opt);
    System.exit(0);
  }
  else {
    if(cmd.hasOption("shell") || cmd.hasOption("s")) {
      String target = cmd.getOptionValue("tgt");
    } // fi
  } // else
} catch (ParseException err) {
  System.out.println(err);
  System.exit(1);
} // catch
finally {
  new Hello().helloWorld(opt);
} // finally
```

It's a robust system that attempts to avoid irrecoverable errors or, at least, to provide you with the option to give useful feedback to the user. Use it often, and your users will thank you!

### 7\. Running a Java application

Java files, usually ending in **.java**, theoretically can be run with the **java** command. If an application is complex, however, whether running a single file results in anything meaningful is another question.

To run a **.java** file directly:

```
$ java ./Hello.java
```

Usually, Java applications are distributed as Java Archive (JAR) files, ending in **.jar**. A JAR file contains a manifest file specifying the main class, some metadata about the project structure, and all the parts of your code required to run the application.

To run a JAR file, you may be able to double-click its icon (depending on how you have your OS set up), or you can launch it from a terminal:

```
$ java -jar ./Hello.jar
```

### Java for everyone

Java is a powerful language, and thanks to the [OpenJDK][12] project and other initiatives, it's an open specification that allows projects like [IcedTea][13], [Dalvik][14], and [Kotlin][15] to thrive. Learning Java is a great way to prepare to work in a wide variety of industries, and what's more, there are plenty of [great reasons to use it][16].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/java-basics

Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://www.redhat.com/en/resources/building-microservices-eap-7-reference-architecture
[3]: https://www.gnu.org/software/bash/
[4]: https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell?view=powershell-6
[5]: http://openjdk.java.net/
[6]: http://www.eclipse.org/
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+actionevent
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+joptionpane
[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+parseexception
[12]: https://openjdk.java.net/
[13]: https://icedtea.classpath.org/wiki/Main_Page
[14]: https://source.android.com/devices/tech/dalvik/
[15]: https://kotlinlang.org/
[16]: https://opensource.com/article/19/9/why-i-use-java
76
sources/tech/20191007 IceWM - A really cool desktop.md
Normal file

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IceWM – A really cool desktop)
[#]: via: (https://fedoramagazine.org/icewm-a-really-cool-desktop/)
[#]: author: (tdawson https://fedoramagazine.org/author/tdawson/)

IceWM – A really cool desktop
======

![][1]

IceWM is a very lightweight desktop. It’s been around for over 20 years, and its goals today are still the same as back then: speed, simplicity, and getting out of the user’s way.

I used to add IceWM to Scientific Linux as a lightweight desktop. At the time, it was only a 0.5 MB RPM. When running, it used only 5 MB of memory. Over the years, IceWM has grown a little: the RPM package is now 1 MB, and IceWM uses 10 MB of memory when running. Even though it has literally doubled in size in the past 10 years, it is still extremely small.

What do you get in such a small package? Exactly what it says: a window manager. Not much else. You have a toolbar with a menu or icons to launch programs. You have speed. And finally, you have themes and options. Besides the few goodies in the toolbar, that’s about it.

![][2]
### Installation

Because IceWM is so small, you just install the main package.

```
$ sudo dnf install icewm
```

If you want to save disk space, many of the dependencies are soft options; IceWM works just fine without them.

```
$ sudo dnf install icewm --setopt install_weak_deps=false
```

### Options

The defaults for IceWM are set so that the average Windows user feels comfortable. This is a good thing, because options are set manually, through configuration files.

I hope I didn’t lose you there, because it’s not as bad as it sounds. There are only 8 configuration files, and most people use only a couple. The main three config files are keys (key bindings), preferences (overall preferences), and toolbar (what is shown on the toolbar). The default config files are found in /usr/share/icewm/.

To make a change, you copy the default config to your home IceWM directory (~/.icewm), edit the file, and then restart IceWM. The first time you do this might be a little scary because “Restart IceWM” is found under the “Logout” menu entry. But when you restart IceWM, you just see a single flicker, and your changes are there. Any open programs are unaffected and stay as they were.

### Themes

![IceWM in the NanoBlue theme][3]

If you install the icewm-themes package, you get quite a few themes. Unlike regular options, you do not need to restart IceWM to change to a new theme. Usually I wouldn’t talk much about themes, but since there are so few other features, I figured I’d mention them.

### Toolbar

The toolbar is the one place where a few extra features have been added to IceWM. You will see that you can switch between workspaces (sometimes called virtual desktops). Click on a workspace to move to it. Right-clicking on a window’s taskbar entry allows you to move it between workspaces. If you like workspaces, this has all the functionality you will like. If you don’t like workspaces, it’s an option and can be turned off.

The toolbar also has network/memory/CPU monitoring graphs. Hover your mouse over a graph to get details, or click on it to get a window with full monitoring. These little graphs used to be on every window manager, but as those desktops matured, they have all taken the graphs out. I’m very glad that IceWM has left this nice feature alone.

## Summary

If you want something lightweight but functional, IceWM is the desktop for you. It is set up so that new Linux users can use it out of the box, and it is flexible enough that Unix users can tweak it to their liking. Most importantly, IceWM lets your programs run without getting in the way.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/icewm-a-really-cool-desktop/

Author: [tdawson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://fedoramagazine.org/author/tdawson/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/icewm-1-816x346.png
[2]: https://fedoramagazine.org/wp-content/uploads/2019/09/icewm.2-1024x768.png
[3]: https://fedoramagazine.org/wp-content/uploads/2019/09/icewm.3-1024x771.png
|
@ -0,0 +1,202 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Introduction to open source observability on Kubernetes)
|
||||
[#]: via: (https://opensource.com/article/19/10/open-source-observability-kubernetes)
|
||||
[#]: author: (Yuri Grinshteyn https://opensource.com/users/yuri-grinshteyn)
|
||||
|
||||
Introduction to open source observability on Kubernetes
|
||||
======
|
||||
In the first article in this series, learn the signals, mechanisms, tools, and platforms you can use to observe services running on Kubernetes.
|
||||
![Looking back with binoculars][1]
|
||||
|
||||
With the advent of DevOps, engineering teams are taking on more and more ownership of the reliability of their services. While some chafe at the increased operational burden, others welcome the opportunity to treat service reliability as a key feature, invest in the necessary capabilities to measure and improve reliability, and deliver the best possible customer experiences.
|
||||
|
||||
This change is measured explicitly in the [2019 Accelerate State of DevOps Report][2]. One of its most interesting conclusions (as written in the summary) is:
|
||||
|
||||
> "Delivering software quickly, **reliably** _[emphasis mine]_, and safely is at the heart of technology transformation and organizational performance. We see continued evidence that software speed, stability, and **availability** _[emphasis mine]_ contribute to organizational performance (including profitability, productivity, and customer satisfaction). Our highest performers are twice as likely to meet or exceed their organizational performance goals."
|
||||
|
||||
The full [report][3] says:
|
||||
|
||||
> "**Low performers use more proprietary software than high and elite performers**: The cost to maintain and support proprietary software can be prohibitive, prompting high and elite performers to use open source solutions. This is in line with results from previous reports. In fact, the 2018 Accelerate State of DevOps Report found that elite performers were 1.75 times more likely to make extensive use of open source components, libraries, and platforms."
|
||||
|
||||
This is a strong testament to the value of open source as a general accelerator of performance. Combining these two conclusions leads to the rather obvious thesis for this series:
|
||||
|
||||
> Reliability is a critical feature, observability is a necessary component of reliability, and open source tooling is at least _A_ right approach, if not _THE_ right approach.
|
||||
|
||||
This article, the first in a series, will introduce the types of signals engineers typically rely on and the mechanisms, tools, and platforms that you can use to instrument services running on Kubernetes to emit these signals, ingest and store them, and use and interpret them.
|
||||
|
||||
From there, the series will continue with hands-on tutorials, where I will walk through getting started with each of the tools and technologies. By the end, you should be well-equipped to start improving the observability of your own systems!
|
||||
|
||||
### What is observability?
|
||||
|
||||
While observability as a general [concept in control theory][4] has been around since at least 1960, its applicability to digital systems and services is rather new and in some ways an evolution of how these systems have been monitored for the last two decades. You are likely familiar with the necessity of monitoring services to ensure you know about issues before your users are impacted. You are also likely familiar with the idea of using metric data to better understand the health and state of a system, especially in the context of troubleshooting during an incident or debugging.
|
||||
|
||||
The key differentiation between monitoring and observability is that observability is an inherent property of a system or service, rather than something someone does to the system, which is what monitoring fundamentally is. [Cindy Sridharan][5], author of a free [e-book][6] on observability in distributed systems, does a great job of explaining the difference in an excellent [Medium article][7].
|
||||
|
||||
It is important to distinguish between these two terms because observability, as a property of the service you build, is your responsibility. As a service developer and owner, you have full control over the signals your system emits, how and where those signals are ingested and stored, and how they're utilized. This is in contrast to "monitoring," which may be done by others (and by you) to measure the availability and performance of your service and generate alerts to let you know that service reliability has degraded.
|
||||
|
||||
### Signals
|
||||
|
||||
Now that you understand the idea of observability as a property of a system that you control and that is explicitly manifested as the signals you instruct your system to emit, it's important to understand and describe the kinds of signals generally considered in this context.
|
||||
|
||||
#### What are metrics?
|
||||
|
||||
A metric is a fundamental type of signal that can be emitted by a service or the infrastructure it's running on. At its most basic, it is the combination of:
|
||||
|
||||
1. Some identifier, hopefully descriptive, that indicates what the metric represents
|
||||
2. A series of data points, each of which contains two elements:
|
||||
a. The timestamp at which the data point was generated (or ingested)
|
||||
b. A numeric value representing the state of the thing you're measuring at that time
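A minimal sketch of that structure in Java makes the two parts concrete. The class and field names here are purely illustrative, not taken from any monitoring library:

```
import java.util.ArrayList;
import java.util.List;

// A time-series metric: a descriptive identifier plus a series of
// timestamped numeric data points (an illustrative sketch).
class MetricPoint {
    final long timestampMillis; // when the data point was generated
    final double value;         // state of the measured thing at that time

    MetricPoint(long timestampMillis, double value) {
        this.timestampMillis = timestampMillis;
        this.value = value;
    }
}

class Metric {
    final String name; // hopefully descriptive identifier
    private final List<MetricPoint> points = new ArrayList<>();

    Metric(String name) {
        this.name = name;
    }

    // Record the current state of whatever this metric measures.
    void record(double value) {
        points.add(new MetricPoint(System.currentTimeMillis(), value));
    }

    List<MetricPoint> points() {
        return points;
    }
}
```

Real metric backends add labels, retention, and aggregation on top, but the identifier-plus-timestamped-values core is the same.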
|
||||
|
||||
|
||||
|
||||
Time-series metrics have been and remain the key data structure used in monitoring and observability practice and are the primary way that the state and health of a system are represented over time. They are also the primary mechanism for alerting, but that practice and others (like incident management, on-call, and postmortems) are outside the scope here. For now, the focus is on how to instrument systems to emit metrics, how to store them, and how to use them for charts and dashboards to help you visualize the current and historical state of your system.
|
||||
|
||||
Metrics are used for two primary purposes: health and insight.
|
||||
|
||||
Understanding the health and state of your infrastructure, platform, and service is essential to keeping them available to users. Generally, these are emitted by the various components chosen to build services, and it's just a matter of setting up the right collection and storage infrastructure to be able to use them. Metrics from the simple (node CPU utilization) to the esoteric (garbage collection statistics) fall into this category.
|
||||
|
||||
Metrics are also essential to understanding what is happening in the system to avoid interruptions to your services. From this perspective, a service can emit custom telemetry that precisely describes specific aspects of how the service is functioning and performing. This will require you to instrument the code itself, usually by including specific libraries, and specify an export destination.
|
||||
|
||||
#### What are logs?
|
||||
|
||||
Unlike metrics that represent numeric values that change over time, logs represent discrete events. Log entries contain both the log payload—the message emitted by a component of the service or the code—and often metadata, such as the timestamp, label, tag, or other identifiers. Therefore, this is by far the largest volume of data you need to store, and you should carefully consider your log ingestion and storage strategies as you look to take on increasing user traffic.
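For example, a structured log entry pairs the payload with its metadata along these lines (a hypothetical entry; the exact field names vary by logging library and platform):

```
{
  "timestamp": "2019-10-07T14:03:11Z",
  "severity": "ERROR",
  "labels": { "app": "checkout", "pod": "checkout-5d4f7" },
  "message": "payment backend timed out after 2s"
}
```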
|
||||
|
||||
#### What are traces?
|
||||
|
||||
Distributed tracing is a relatively new addition to the observability toolkit and is specifically relevant to microservice architectures to allow you to understand latency and how various backend service calls contribute to it. Ted Young published an [excellent article on the concept][8] that includes its origins with Google's [Dapper paper][9] and subsequent evolution. This series will be specifically concerned with the various implementations available.
|
||||
|
||||
### Instrumentation
|
||||
|
||||
Once you identify the signals you want to emit, store, and analyze, you need to instruct your system to create the signals and build a mechanism to store and analyze them. Instrumentation refers to those parts of your code that are used to generate metrics, logs, and traces. In this series, we'll discuss open source instrumentation options and introduce the basics of their use through hands-on tutorials.
|
||||
|
||||
### Observability on Kubernetes
|
||||
|
||||
Kubernetes is the dominant platform today for deploying and maintaining containers. As it rose to the top of the industry's consciousness, so did new technologies to provide effective observability tooling around it. Here is a short list of these essential technologies; they will be covered in greater detail in future articles in this series.
|
||||
|
||||
#### Metrics
|
||||
|
||||
Once you select your preferred approach for instrumenting your service with metrics, the next decision is where to store those metrics and what set of services will support your effort to monitor your environment.
|
||||
|
||||
##### Prometheus
|
||||
|
||||
[Prometheus][10] is the best place to start when looking to monitor both your Kubernetes infrastructure and the services running in the cluster. It provides everything you'll need, including client instrumentation libraries, the [storage backend][11], a visualization UI, and an alerting framework. Running Prometheus also provides a wealth of infrastructure metrics right out of the box. It further provides [integrations][12] with third-party providers for storage, although the data exchange is not bi-directional in every case, so be sure to read the documentation if you want to store metric data in multiple locations.
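To give a sense of what an instrumented service exposes to Prometheus, here is a small sample in its text exposition format; the metric name and label values are made up for illustration:

```
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="get",code="500"} 3
```

The Prometheus server periodically scrapes an endpoint serving this format (conventionally /metrics) and stores each sample as a timestamped data point.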
|
||||
|
||||
Later in this series, I will walk through setting up Prometheus in a cluster for basic infrastructure monitoring and adding custom telemetry to an application using the Prometheus client libraries.
|
||||
|
||||
##### Graphite
|
||||
|
||||
[Graphite][13] grew out of an in-house development effort at Orbitz and is now positioned as an enterprise-ready monitoring tool. It provides metrics storage and retrieval mechanisms, but no instrumentation capabilities. Therefore, you will still need to implement Prometheus or OpenCensus instrumentation to collect metrics. Later in this series, I will walk through setting up Graphite and sending metrics to it.
|
||||
|
||||
##### InfluxDB
|
||||
|
||||
[InfluxDB][14] is another open source database purpose-built for storing and retrieving time-series metrics. Unlike Graphite, InfluxDB is supported by a company called InfluxData, which provides both the InfluxDB software and a cloud-hosted version called InfluxDB Cloud. Later in this series, I will walk through setting up InfluxDB in a cluster and sending metrics to it.
|
||||
|
||||
##### OpenTSDB
|
||||
|
||||
[OpenTSDB][15] is also an open source purpose-built time-series database. One of its advantages is the ability to use [HBase][16] as the storage layer, which allows integration with a cloud managed service like Google's Cloud Bigtable. Google has published a [reference guide][17] on setting up OpenTSDB to monitor your Kubernetes cluster (assuming it's running in Google Kubernetes Engine, or GKE). Since it's a great introduction, I recommend following Google's tutorial if you're interested in learning more about OpenTSDB.
|
||||
|
||||
##### OpenCensus
|
||||
|
||||
[OpenCensus][18] is the open source version of the [Census library][19] developed at Google. It provides both metric and tracing instrumentation capabilities and supports a number of backends to [export][20] the metrics to—including Prometheus! Note that OpenCensus does not monitor your infrastructure, and you will still need to determine the best approach if you choose to use OpenCensus for custom metric telemetry.
|
||||
|
||||
We'll revisit this library later in this series, and I will walk through creating metrics in a service and exporting them to a backend.
|
||||
|
||||
#### Logging for observability
|
||||
|
||||
If metrics provide "what" is happening, logging tells part of the story of "why." Here are some common options for consistently gathering and analyzing logs.
|
||||
|
||||
##### Collecting with fluentd
|
||||
|
||||
In the Kubernetes ecosystem, [fluentd][21] is the de-facto open source standard for collecting logs emitted in the cluster and forwarding them to a specified backend. You can use config maps to modify fluentd's behavior, and later in the series, I'll walk through deploying it in a cluster and modifying the associated config map to parse unstructured logs and convert them to structured for better and easier analysis. In the meantime, you can read my post "[Customizing Kubernetes logging (Part 1)][22]" on how to do that on GKE.
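As a taste of what that config-map change can look like, a filter block along these lines tells fluentd to parse a JSON payload out of each container log line. The tag pattern and key name are assumptions that depend on how your cluster's logging is deployed:

```
<filter kubernetes.**>
  @type parser
  key_name log
  <parse>
    @type json
  </parse>
</filter>
```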
|
||||
|
||||
##### Storing and analyzing with ELK
|
||||
|
||||
The most common storage mechanism for logs is provided by [Elastic][23] in the form of the "ELK" stack. As Elastic says:
|
||||
|
||||
> "'ELK' is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a 'stash' like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch."
|
||||
|
||||
Later in the series, I'll walk through setting up Elasticsearch, Kibana, and Logstash in a cluster to store and analyze logs being collected by fluentd.
|
||||
|
||||
#### Distributed traces and observability
|
||||
|
||||
When asking "why" in analyzing service issues, logs can only provide the information that applications are designed to share with it. The way to go even deeper is to gather traces. As the [OpenTracing initiative][24] says:
|
||||
|
||||
> "Distributed tracing, also called distributed request tracing, is a method used to profile and monitor applications, especially those built using a microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance."
|
||||
|
||||
##### Istio
|
||||
|
||||
The [Istio][25] open source service mesh provides multiple benefits for microservice architectures, including traffic control, security, and observability capabilities. It does not combine multiple spans into a single trace to assemble a full picture of what happens when a user call traverses a distributed system, but it can nevertheless be useful as an easy first step toward distributed tracing. It also provides other observability benefits—it's the easiest way to get ["golden signal"][26] metrics for each service, and it also adds logging for each request, which can be very useful for calculating error rates. You can read my post on [using it with Google's Stackdriver][27]. I'll revisit it in this series and show how to install it in a cluster and configure it to export observability data to a backend.
|
||||
|
||||
##### OpenCensus
|
||||
|
||||
I described [OpenCensus][28] in the Metrics section above, and that's one of the main reasons for choosing it for distributed tracing: Using a single library for both metrics and traces is a great option to reduce your instrumentation work—with the caveat that you must be working in a language that supports both the traces and stats exporters. I'll come back to OpenCensus and show how to get started instrumenting code for distributed tracing. Note that OpenCensus provides only the instrumentation library, and you'll still need to use a storage and visualization layer like Zipkin, Jaeger, Stackdriver (on GCP), or X-Ray (on AWS).
|
||||
|
||||
##### Zipkin
|
||||
|
||||
[Zipkin][29] is a full, distributed tracing solution that includes instrumentation, storage, and visualization. It's a tried and true set of tools that's been around for years and has a strong user and developer community. It can also be used as a backend for other instrumentation options like OpenCensus. In a future tutorial, I'll show how to set up the Zipkin server and instrument your code.
|
||||
|
||||
##### Jaeger
|
||||
|
||||
[Jaeger][30] is another open source tracing solution that includes all the components you'll need. It's a newer project that's being incubated at the Cloud Native Computing Foundation (CNCF). Whether you choose to use Zipkin or Jaeger may ultimately depend on your experience with them and their support for the language you're writing your service in. In this series, I'll walk through setting up Jaeger and instrumenting code for tracing.
|
||||
|
||||
### Visualizing observability data
|
||||
|
||||
The final piece of the toolkit for using metrics is the visualization layer. There are basically two options here: the "native" visualization that your persistence layers enable (e.g., the Prometheus UI or Flux with InfluxDB) or a purpose-built visualization tool.
|
||||
|
||||
[Grafana][31] is currently the de facto standard for open source visualization. I'll walk through setting it up and using it to visualize data from various backends later in this series.
|
||||
|
||||
### Looking ahead
|
||||
|
||||
Observability on Kubernetes has many parts and many options for each type of need. Metric, logging, and tracing instrumentation provide the bedrock of information needed to make decisions about services. Instrumenting, storing, and visualizing data are also essential. Future articles in this series will dive into all of these options with hands-on tutorials for each.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/open-source-observability-kubernetes
|
||||
|
||||
作者:[Yuri Grinshteyn][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/yuri-grinshteyn
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/look-binoculars-sight-see-review.png?itok=NOw2cm39 (Looking back with binoculars)
|
||||
[2]: https://cloud.google.com/blog/products/devops-sre/the-2019-accelerate-state-of-devops-elite-performance-productivity-and-scaling
|
||||
[3]: https://services.google.com/fh/files/misc/state-of-devops-2019.pdf
|
||||
[4]: https://en.wikipedia.org/wiki/Observability
|
||||
[5]: https://twitter.com/copyconstruct
|
||||
[6]: https://t.co/0gOgZp88Jn?amp=1
|
||||
[7]: https://medium.com/@copyconstruct/monitoring-and-observability-8417d1952e1c
|
||||
[8]: https://opensource.com/article/18/5/distributed-tracing
|
||||
[9]: https://research.google.com/pubs/pub36356.html
|
||||
[10]: https://prometheus.io/
|
||||
[11]: https://prometheus.io/docs/prometheus/latest/storage/
|
||||
[12]: https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage
|
||||
[13]: https://graphiteapp.org/
|
||||
[14]: https://www.influxdata.com/get-influxdb/
|
||||
[15]: http://opentsdb.net/
|
||||
[16]: https://hbase.apache.org/
|
||||
[17]: https://cloud.google.com/solutions/opentsdb-cloud-platform
|
||||
[18]: https://opencensus.io/
|
||||
[19]: https://opensource.googleblog.com/2018/03/how-google-uses-opencensus-internally.html
|
||||
[20]: https://opencensus.io/exporters/#exporters
|
||||
[21]: https://www.fluentd.org/
|
||||
[22]: https://medium.com/google-cloud/customizing-kubernetes-logging-part-1-a1e5791dcda8
|
||||
[23]: https://www.elastic.co/
|
||||
[24]: https://opentracing.io/docs/overview/what-is-tracing
|
||||
[25]: http://istio.io/
|
||||
[26]: https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/
|
||||
[27]: https://medium.com/google-cloud/istio-and-stackdriver-59d157282258
|
||||
[28]: http://opencensus.io/
|
||||
[29]: https://zipkin.io/
|
||||
[30]: https://www.jaegertracing.io/
|
||||
[31]: https://grafana.com/
|
66
sources/tech/20191007 Understanding Joins in Hadoop.md
Normal file
@ -0,0 +1,66 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Understanding Joins in Hadoop)
|
||||
[#]: via: (https://opensourceforu.com/2019/10/understanding-joins-in-hadoop/)
|
||||
[#]: author: (Bhaskar Narayan Das https://opensourceforu.com/author/bhaskar-narayan/)
|
||||
|
||||
Understanding Joins in Hadoop
|
||||
======
|
||||
|
||||
[![Hadoop big data career opportunities][1]][2]
|
||||
|
||||
_Those who have just begun the study of Hadoop might have come across different types of joins. This article briefly discusses normal joins, map side joins and reduce side joins. The differences between map side joins and reduce side joins, as well as their pros and cons, are also discussed._
|
||||
|
||||
Normally, the term join refers to combining the record sets of two tables. Thus, when we run a query, the tables are joined and we get data from both tables in joined form, as is the case in SQL joins. Joins are used heavily in Hadoop processing. They should be used when large data sets are encountered and there is no urgency to generate the outcome. In a Hadoop common join, Hadoop distributes all the rows across all the nodes based on the join key. Once this is done, all the keys with the same value end up on the same node, and the join finally happens at the reducer. This scenario is perfect when both tables are huge, but when one table is small and the other is quite big, common joins become inefficient and take more time to distribute the rows.
|
||||
|
||||
While processing data with Hadoop, we generally do it over a map phase and a reduce phase; mappers and reducers do the work for each phase. We use MapReduce joins when we encounter data sets that are too big for data-sharing techniques.
|
||||
|
||||
**Map side joins**
|
||||
Map side join is the term used when the record sets of two tables are joined within the mapper. In this case, the reduce phase is not involved. In a map side join, the record sets of the tables are loaded into memory, ensuring a faster join operation. Map side joins are convenient for small tables and not recommended for large ones. In situations where you have queries running frequently with small table joins, you can see a very significant reduction in query computation time.
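The mechanics can be sketched outside Hadoop in a few lines of plain Java (this is an illustration of the idea, not Hadoop API code): the small table is loaded into an in-memory map, and each record of the large table is joined against it during the map phase, with no reducer involved.

```
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of a map side join: the small table lives in memory as a hash
// map; each big-table row is joined via an in-memory lookup.
class MapSideJoin {
    // bigTable rows are {joinKey, payload}; smallTable maps joinKey -> payload.
    static List<String> join(Map<String, String> smallTable, List<String[]> bigTable) {
        List<String> joined = new ArrayList<>();
        for (String[] row : bigTable) {
            String match = smallTable.get(row[0]); // in-memory lookup, no shuffle
            if (match != null) {
                joined.add(row[0] + "," + row[1] + "," + match);
            }
        }
        return joined;
    }
}
```

In real Hadoop, the small table would be shipped to every mapper (for example, via the distributed cache) so each mapper can build this in-memory map.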
|
||||
|
||||
**Reduce side joins**
|
||||
Reduce side joins happen on the reduce side of Hadoop processing. They are also known as repartitioned sort-merge joins or, simply, repartitioned joins, distributed joins, or common joins. They are the most widely used joins. Reduce side joins are used when both tables are so big that they cannot fit into memory. The process flow of a reduce side join is as follows:
|
||||
|
||||
1. The input data is read by the mapper, which needs to be combined on the basis of the join key or common column.
|
||||
2. Once the input data is processed by the mapper, it adds a tag to the processed input data in order to distinguish the input origin sources.
|
||||
3. The mapper returns the intermediate key-value pair, where the key is also the join key.
|
||||
4. For the reducer, a key and a list of values is generated once the sorting and shuffling phase is complete.
|
||||
5. The reducer joins the values that are present in the generated list along with the key to produce the final outcome.
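The five steps above can be sketched in plain Java (again an illustration of the flow, not Hadoop API code): the mappers tag each record with its origin, the shuffle groups tagged values by join key, and the reducer combines values from the two sources.

```
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of a reduce side join: tag records by origin, group by join
// key (the "sort and shuffle" phase), then combine in the reducer.
class ReduceSideJoin {
    // Each input row is {joinKey, payload}. Tags "L:"/"R:" mark the origin.
    static Map<String, List<String>> shuffle(List<String[]> left, List<String[]> right) {
        Map<String, List<String>> grouped = new TreeMap<>();
        for (String[] r : left) {
            grouped.computeIfAbsent(r[0], k -> new ArrayList<>()).add("L:" + r[1]);
        }
        for (String[] r : right) {
            grouped.computeIfAbsent(r[0], k -> new ArrayList<>()).add("R:" + r[1]);
        }
        return grouped;
    }

    // The reducer pairs every left-tagged value with every right-tagged
    // value that shares the same join key.
    static List<String> reduce(Map<String, List<String>> grouped) {
        List<String> joined = new ArrayList<>();
        for (Map.Entry<String, List<String>> entry : grouped.entrySet()) {
            for (String l : entry.getValue()) {
                if (!l.startsWith("L:")) continue;
                for (String r : entry.getValue()) {
                    if (!r.startsWith("R:")) continue;
                    joined.add(entry.getKey() + "," + l.substring(2) + "," + r.substring(2));
                }
            }
        }
        return joined;
    }
}
```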
|
||||
|
||||
|
||||
|
||||
The join at the reduce side combines the output of two mappers based on a common key. This scenario is quite similar to SQL joins, where the data sets of two tables are joined based on a primary key. In this case, we have to decide which field is the primary key.
|
||||
There are a few terms associated with reduce side joins:
|
||||
1\. _Data source:_ This is nothing but the input files.
|
||||
2\. _Tag:_ This is basically used to distinguish each input data on the basis of its origin.
|
||||
3\. _Group key:_ This refers to the common column that is used as a join key to combine the output of two mappers.
|
||||
|
||||
**Difference between map side joins and reduce side joins**
|
||||
|
||||
1. A map side join, as explained earlier, happens on the map side whereas a reduce side join happens on the reduce side.
|
||||
2. A map side join happens in memory, whereas a reduce side join happens outside memory (the shuffled data is spilled to disk).
|
||||
3. Map side joins are effective when one data set is big while the other is small, whereas reduce side joins work effectively for big size data sets.
|
||||
4. Map side joins are expensive in terms of memory, whereas reduce side joins are comparatively cheap in that respect.
|
||||
|
||||
|
||||
|
||||
Opt for map side joins when one table is small enough to fit in memory and you need the job completed in a short span of time. Use reduce side joins when dealing with large data sets that cannot fit into memory. Reduce side joins are easy to implement and benefit from Hadoop's built-in sorting and shuffling. Besides this, there is no requirement to strictly follow any formatting rule for input in the case of reduce side joins, which can also be performed on unstructured data sets.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensourceforu.com/2019/10/understanding-joins-in-hadoop/
|
||||
|
||||
作者:[Bhaskar Narayan Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensourceforu.com/author/bhaskar-narayan/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/06/Hadoop-big-data.jpg?resize=696%2C441&ssl=1 (Hadoop big data career opportunities)
|
||||
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/06/Hadoop-big-data.jpg?fit=750%2C475&ssl=1
|
273
sources/tech/20191007 Using the Java Persistence API.md
Normal file
@ -0,0 +1,273 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Using the Java Persistence API)
|
||||
[#]: via: (https://opensource.com/article/19/10/using-java-persistence-api)
|
||||
[#]: author: (Stephon Brown https://opensource.com/users/stephb)
|
||||
|
||||
Using the Java Persistence API
|
||||
======
|
||||
Learn how to use the JPA by building an example app for a bike store.
|
||||
![Coffee beans][1]
|
||||
|
||||
The Java Persistence API (JPA) is an important piece of Java functionality for application developers to understand. It defines how Java developers turn method calls on objects into operations that access, persist, and manage data stored in NoSQL and relational databases.
|
||||
|
||||
This article examines the JPA in detail through a tutorial example of building a bicycle loaning service. This example will create a create, read, update, and delete (CRUD) layer for a larger application using the Spring Boot framework, the MongoDB database (which is [no longer open source][2]), and the Maven package manager. I also use NetBeans 11 as my IDE of choice.
|
||||
|
||||
This tutorial focuses on the open source angle of the Java Persistence API, rather than the tools, to show how it works. This is all about learning the pattern of programming applications, but it's still smart to understand the software. You can access the full code in my [GitHub repository][3].
|
||||
|
||||
### Java: More than 'beans'
|
||||
|
||||
Java is an object-oriented language that has gone through many changes since the Java Development Kit (JDK) was released in 1996. Understanding the language's various pathways and its virtual machine is a history lesson in itself; in brief, the language has forked in many directions, similar to the Linux kernel, since its release. There are standard editions that are free to the community, enterprise editions for business, and open source alternatives contributed to by multiple vendors. Major versions are released at six-month intervals; since there are often major differences in features, you may want to do some research before choosing a version.
|
||||
|
||||
All in all, Java is steeped in history. This tutorial focuses on [JDK 11][4], the open source implementation of Java 11, because it is one of the long-term support versions that is still active.
|
||||
|
||||
  * **Spring Boot:** Spring Boot is a module from the larger Spring framework developed by Pivotal. Spring is a very popular framework for working with Java. It allows for a variety of architectures and configurations. Spring also offers support for web applications and security. Spring Boot offers basic configurations for bootstrapping various types of Java projects quickly. This tutorial uses Spring Boot to quickly write a console application and test functionality against the database.
|
||||
* **Maven:** Maven is a project/package manager developed by Apache. Maven allows for the management of packages and various dependencies within its POM.xml file. If you have used NPM, you may be familiar with how package managers function. Maven also manages build and reporting functionality.
|
||||
* **Lombok:** Lombok is a library that allows the creation of object getters/setters through annotation within the object file. This is already present in languages like C#, and Lombok introduces this functionality into Java.
|
||||
  * **NetBeans:** NetBeans is a popular open source IDE that focuses specifically on Java development. Many of its tools provide an implementation for the latest Java SE and EE updates.
|
||||
|
||||
|
||||
|
||||
This group of tools will be used to create a simple application for a fictional bike store. It will implement functionality for inserting collections for "Customer" and "Bike" objects.
|
||||
|
||||
### Brewed to perfection
|
||||
|
||||
Navigate to the [Spring Initializr][5]. This website lets you generate the basic scaffolding for a Spring Boot project along with the dependencies you will need. Select the following options:
|
||||
|
||||
1. **Project:** Maven Project
|
||||
2. **Language:** Java
|
||||
3. **Spring Boot:** 2.1.8 (or the most stable release)
|
||||
4. **Project Metadata:** Whatever your naming conventions are (e.g., **com.stephb**)
|
||||
* You can keep Artifact as "Demo"
|
||||
5. **Dependencies:** Add:
|
||||
* Spring Data MongoDB
|
||||
* Lombok
|
||||
|
||||
|
||||
|
||||
Click **Download** and open the new project in your chosen IDE (e.g., NetBeans).
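If you chose those two dependencies, the generated **pom.xml** should contain entries roughly like the following (a hypothetical excerpt; the Initializr manages the exact versions for you):

```
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-mongodb</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>
```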
|
||||
|
||||
#### Model outline

The models represent information collected about specific objects in the program that will be persisted in your database. Focus on two objects: **Customer** and **Bike**. First, create a **dto** folder within the **src** folder. Then, create the two Java class objects named **Customer.java** and **Bike.java**. They will be structured in the program as follows:

**Customer.java**

```
package com.stephb.JavaMongo.dto;

import lombok.Getter;
import lombok.Setter;
import org.springframework.data.annotation.Id;

/**
 *
 * @author stephon
 */
@Getter @Setter
public class Customer {

    private @Id String id;
    private String emailAddress;
    private String firstName;
    private String lastName;
    private String address;

}
```

**Bike.java**

```
package com.stephb.JavaMongo.dto;

import lombok.Getter;
import lombok.Setter;
import org.springframework.data.annotation.Id;

/**
 *
 * @author stephon
 */
@Getter @Setter
public class Bike {
    private @Id String id;
    private String modelNumber;
    private String color;
    private String description;

    @Override
    public String toString() {
        return "This bike model is " + this.modelNumber + ", is the color " + this.color + ", and is " + description;
    }
}
```

As you can see, Lombok annotations are used within the objects to generate the getters/setters for the properties/attributes. Individual properties can receive the annotations instead if you do not want all of the attributes in a class to have getters/setters. These two classes will form the container carrying your data to wherever you want to display information.

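If you have not used Lombok before, it may help to see what those annotations expand to. The following is a plain-Java sketch (no Lombok or Spring required) of the accessors Lombok generates for a single field; the class name **CustomerDemo** is used here only for illustration:

```java
// Hand-written equivalent of what Lombok's @Getter/@Setter generate
// for one field of the Customer class above.
public class CustomerDemo {
    private String emailAddress;

    // @Getter produces a method like this for each field.
    public String getEmailAddress() {
        return emailAddress;
    }

    // @Setter produces a method like this for each field.
    public void setEmailAddress(String emailAddress) {
        this.emailAddress = emailAddress;
    }

    public static void main(String[] args) {
        CustomerDemo customer = new CustomerDemo();
        customer.setEmailAddress("jane@example.com");
        System.out.println(customer.getEmailAddress());
    }
}
```

Lombok simply writes this boilerplate for you at compile time, so the DTO files stay short.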
#### Set up a database

I used a [Mongo Docker][7] container for testing. If you have MongoDB installed on your system, you do not have to run an instance in Docker. Otherwise, you can install MongoDB from its website by selecting your system information and following the installation instructions.

After installing, you can interact with your new MongoDB server through the command line, a GUI such as MongoDB Compass, or IDE drivers for connecting to data sources. Now you can define your data layer to pull, transform, and persist your data. To set your database access properties, navigate to the **application.properties** file in your application and provide the following:

```
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=BikeStore
```

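If your MongoDB instance runs on another host or requires authentication, Spring Boot also accepts a single connection URI in place of the separate host/port/database properties. The username, password, and host below are placeholders, not values from this tutorial:

```
spring.data.mongodb.uri=mongodb://username:password@localhost:27017/BikeStore
```

Use one style or the other; the URI property takes precedence if both are present.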
#### Define the data access object/data access layer

The data access objects (DAO) in the data access layer (DAL) define how you will interact with data in the database. The awesome thing about using a **spring-boot-starter** is that most of the work for querying the database is already done.

Start with the **Customer** DAO. Create a new **dao** folder within the **src** folder, then create a Java interface named **CustomerRepository.java**. The class should look like this:

```
package com.stephb.JavaMongo.dao;

import com.stephb.JavaMongo.dto.Customer;
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;

/**
 *
 * @author stephon
 */
public interface CustomerRepository extends MongoRepository<Customer, String>{
    @Override
    public List<Customer> findAll();
    public List<Customer> findByFirstName(String firstName);
    public List<Customer> findByLastName(String lastName);
}
```

This interface extends **MongoRepository**, parameterized with your DTO (**Customer.java**) and the type of its ID field (a string), and declares custom functions used for querying. Because you inherit from **MongoRepository**, you have access to many functions that persist and query your object without having to implement or reference your own. For example, after you instantiate a **CustomerRepository**, you can use the **save** function immediately. You can also override these functions if you need more extended functionality. I created a few custom queries to search my collection, given specific elements of my object.
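For reference, Spring Data derives each query from the method name itself, so no implementation is needed. Roughly speaking (assuming the default collection naming, which lower-cases the class name), the custom finders above correspond to MongoDB filters like these:

```
// findByFirstName("Bob")  ->  db.customer.find({ "firstName": "Bob" })
// findByLastName("Smith") ->  db.customer.find({ "lastName": "Smith" })
```

The property name after `findBy` must match a field on the DTO exactly.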

The **Bike** object also has a repository for interacting with the database. Implement it very similarly to the **CustomerRepository**. It should look like this:

```
package com.stephb.JavaMongo.dao;

import com.stephb.JavaMongo.dto.Bike;
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;

/**
 *
 * @author stephon
 */
public interface BikeRepository extends MongoRepository<Bike,String>{
    public Bike findByModelNumber(String modelNumber);
    @Override
    public List<Bike> findAll();
    public List<Bike> findByColor(String color);
}
```

#### Run your program

Now that you have a way to structure your data and a way to pull, transform, and persist it, run your program!

Navigate to your **Application.java** file (it may have a different name, depending on what you named your application, but it should include "Application"). Where the class is defined, add **implements CommandLineRunner** afterward. This will allow you to implement a **run** method to create a command-line application. Override the **run** method provided by the **CommandLineRunner** interface and include the following to test the **BikeRepository**:

```
package com.stephb.JavaMongo;

import com.stephb.JavaMongo.dao.BikeRepository;
import com.stephb.JavaMongo.dao.CustomerRepository;
import com.stephb.JavaMongo.dto.Bike;
import java.util.Scanner;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;


@SpringBootApplication
public class JavaMongoApplication implements CommandLineRunner {
    @Autowired
    private BikeRepository bikeRepo;
    @Autowired
    private CustomerRepository custRepo;

    public static void main(String[] args) {
        SpringApplication.run(JavaMongoApplication.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        Scanner scan = new Scanner(System.in);
        String response = "";
        boolean running = true;
        while(running){
            System.out.println("What would you like to create? \n C: The Customer \n B: Bike? \n X:Close");
            response = scan.nextLine();
            if ("B".equals(response.toUpperCase())) {
                String[] bikeInformation = new String[3];
                System.out.println("Enter the information for the Bike");
                System.out.println("Model Number");
                bikeInformation[0] = scan.nextLine();
                System.out.println("Color");
                bikeInformation[1] = scan.nextLine();
                System.out.println("Description");
                bikeInformation[2] = scan.nextLine();

                Bike bike = new Bike();
                bike.setModelNumber(bikeInformation[0]);
                bike.setColor(bikeInformation[1]);
                bike.setDescription(bikeInformation[2]);

                bike = bikeRepo.save(bike);
                System.out.println(bike.toString());

            } else if ("X".equals(response.toUpperCase())) {
                System.out.println("Bye");
                running = false;
            } else {
                System.out.println("Sorry nothing else works right now!");
            }
        }
    }
}
```

The **@Autowired** annotation allows automatic dependency injection of the **BikeRepository** and **CustomerRepository** beans. You will use these classes to persist and gather data from the database.

There you have it! You have created a command-line application that connects to a database and is able to perform CRUD operations with minimal code on your part.

### Conclusion

Translating programming-language concepts like objects and classes into calls that store, retrieve, or change data in a database is essential to building an application. The Java Persistence API (JPA) is an important tool in the Java developer's toolkit for solving that challenge. What databases are you exploring in Java? Please share in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/using-java-persistence-api

作者:[Stephon Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/stephb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-beans.jpg?itok=3hkjX5We (Coffee beans)
[2]: https://www.techrepublic.com/article/mongodb-ceo-tells-hard-truths-about-commercial-open-source/
[3]: https://github.com/StephonBrown/SpringMongoJava
[4]: https://openjdk.java.net/projects/jdk/11/
[5]: https://start.spring.io/
[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[7]: https://hub.docker.com/_/mongo
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
@ -0,0 +1,201 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Best Password Managers For Linux Desktop)
[#]: via: (https://itsfoss.com/password-managers-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

5 Best Password Managers For Linux Desktop
======

_**A password manager is a useful tool for creating unique passwords and storing them securely so that you don’t have to remember them. Check out the best password managers available for Linux desktop.**_

Passwords are everywhere. Websites, forums, web apps, and more: you need to create accounts and passwords for all of them. The trouble comes with managing those passwords. Keeping the same password for various accounts poses a security risk because [if one of the websites is compromised, hackers try the same email-password combination on other websites][1] as well.

But keeping unique passwords for all your accounts means you have to remember all of them, and that’s not possible for normal humans. This is where password managers come to your aid.

Password-managing apps suggest/create strong passwords for you and store them in an encrypted database. You just need to remember the master password for the password manager itself.

Mainstream modern web browsers like Mozilla Firefox and Google Chrome have a built-in password manager. This helps, but you are restricted to using it in that web browser only.

There are third-party, dedicated password managers, and some of them also provide native desktop applications for Linux. In this article, we filter out the best password managers available for Linux.

Before you see that, I would also advise going through the list of [free password generators for Linux][2] to generate strong, unique passwords.

### Password Managers for Linux

Possible non-FOSS alert!

We’ve given priority to the ones which are open source (with some proprietary options, don’t hate me!) and which also offer a standalone desktop app (GUI) for Linux. The proprietary options have been highlighted.

#### 1\. Bitwarden

![][3]

Key Highlights:

* Open Source
* Free for personal use (paid options available for upgrades)
* End-to-end encryption for cloud servers
* Cross-platform
* Browser extensions available
* Command-line tools

Bitwarden is one of the most impressive password managers for Linux. I’ll be honest: I didn’t know about this until now, and I’m already making the switch from [LastPass][4]. I was able to easily import the data from LastPass without any issues and had no trouble whatsoever.

The premium version costs just $10/year, which seems to be worth it (I’ve upgraded for my personal usage).

It is an open source solution, so there’s nothing shady about it. You can even host it on your own server and create a password solution for your organization.

In addition to that, you get all the necessary features like 2FA for login, import/export options for your credentials, a fingerprint phrase (a unique key), a password generator, and more.

You can upgrade your account to an organization account for free to share your information with 2 users in total. However, if you want additional encrypted vault storage and the ability to share passwords with 5 users, premium upgrades are available starting from as low as $1 per month. I think it’s definitely worth a shot!

[Bitwarden][5]

#### 2\. Buttercup

![][6]

Key Highlights:

* Open Source
* Free, with no premium options
* Cross-platform
* Browser extensions available

Yet another open-source password manager for Linux. Buttercup may not be a very popular solution, but if you are looking for a simpler alternative to store your credentials, this would be a good start.

Unlike some others, you do not have to be skeptical about its cloud servers, because it sticks to offline usage by default and supports connecting cloud sources like [Dropbox][7], [OwnCloud][8], [Nextcloud][9], and [WebDAV][10].

So, you can opt for a cloud source if you need to sync the data. The choice is yours.

[Buttercup][11]

#### 3\. KeePassXC

![][12]

Key Highlights:

* Open Source
* Simple password manager
* Cross-platform
* No mobile support

KeePassXC is a community fork of [KeePassX][13], which was originally a Linux port of [KeePass][14] for Windows.

In case you’re not aware, KeePassX hasn’t been maintained for years, so KeePassXC is a good alternative if you are looking for a dead-simple password manager. KeePassXC may not be the prettiest or fanciest password manager, but it does the job.

It is secure and open source as well. I think that makes it worth a shot, what say?

[KeePassXC][15]

#### 4\. Enpass (not open source)

![][16]

Key Highlights:

* Proprietary
* A lot of features, including ‘wearable’ device support
* Completely free for Linux (with premium features)

Enpass is a quite popular password manager across multiple platforms. Even though it’s not an open source solution, a lot of people rely on it, so you can be sure that it works, at least.

It offers a great deal of features, and if you have a wearable device, it will support that too, which is rare.

It’s great to see that Enpass actively maintains its package for Linux distros. Also, note that it works for 64-bit systems only. You can find the [official instructions for installation][17] on their website. It will require utilizing the terminal, but I followed the steps to test it out and it worked like a charm.

[Enpass][18]

#### 5\. myki (not open source)

![][19]

Key Highlights:

* Proprietary
* Avoids cloud servers for storing passwords
* Focuses on local peer-to-peer syncing
* Ability to replace passwords with fingerprint IDs on mobile

This may not be a popular recommendation, but I found it very interesting. It is a proprietary password manager that lets you avoid cloud servers and relies on peer-to-peer sync instead.

So, if you do not want to utilize any cloud servers to store your information, this is for you. It is also interesting to note that the app available for Android and iOS helps you replace passwords with your fingerprint ID. If you want convenience on your mobile phone along with basic functionality in a desktop password manager, this looks like a good option.

However, if you are opting for a premium upgrade, judge the pricing plans for yourself; they are definitely not cheap.

Do try it out and let us know how it goes!

[myki][20]

### Some Other Password Managers Worth Pointing Out

Even without offering a standalone app for Linux, there are some password managers that deserve a mention.

If you need to utilize browser-based (extension) password managers, we would recommend trying out [LastPass][21], [Dashlane][22], and [1Password][23]. LastPass even offers a [Linux client (and a command-line tool)][24].

If you are looking for CLI password managers, you should check out [Pass][25].

[Password Safe][26] is also an option, but its Linux client is in beta. I wouldn’t recommend relying on “beta” applications for storing passwords. [Universal Password Manager][27] exists but is no longer maintained. You may have also heard about [Password Gorilla][28], but it isn’t actively maintained either.

**Wrapping Up**

Bitwarden seems to be my personal favorite for now. However, there are several options to choose from on Linux. You can either opt for something that offers a native app or just a browser extension; the choice is yours.

If we missed listing a password manager worth trying out, let us know about it in the comments below. As always, we’ll extend our list with your suggestions.

--------------------------------------------------------------------------------

via: https://itsfoss.com/password-managers-linux/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://medium.com/@computerphonedude/one-of-my-old-passwords-was-hacked-on-6-different-sites-and-i-had-no-clue-heres-how-to-quickly-ced23edf3b62
[2]: https://itsfoss.com/password-generators-linux/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/bitward.png?ssl=1
[4]: https://www.lastpass.com/
[5]: https://bitwarden.com/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/buttercup.png?ssl=1
[7]: https://www.dropbox.com/
[8]: https://owncloud.com/
[9]: https://nextcloud.com/
[10]: https://en.wikipedia.org/wiki/WebDAV
[11]: https://buttercup.pw/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/KeePassXC.png?ssl=1
[13]: https://www.keepassx.org/
[14]: https://keepass.info/
[15]: https://keepassxc.org
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/enpass.png?ssl=1
[17]: https://www.enpass.io/support/kb/general/how-to-install-enpass-on-linux/
[18]: https://www.enpass.io/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/myki.png?ssl=1
[20]: https://myki.com/
[21]: https://lastpass.com/
[22]: https://www.dashlane.com/
[23]: https://1password.com/
[24]: https://lastpass.com/misc_download2.php
[25]: https://www.passwordstore.org/
[26]: https://pwsafe.org/
[27]: http://upm.sourceforge.net/
[28]: https://github.com/zdia/gorilla/wiki
@ -1,89 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 open source cloud security tools)
[#]: via: (https://opensource.com/article/19/9/open-source-cloud-security)
[#]: author: (Alison Naylor https://opensource.com/users/asnaylor,Anderson Silva https://opensource.com/users/ansilva)

4 种开源云安全工具
======

> 查找并排除你存储在 AWS 和 GitHub 中的数据里的漏洞。

![Tools in a cloud][1]

如果你的日常工作是开发者、系统管理员、全栈工程师或者是网站可靠性工程师(SRE),工作内容包括使用 Git 从 GitHub 上推送、提交和拉取,并部署到亚马逊 Web 服务上(AWS),安全性就是一个需要持续考虑的一个点。幸运的是,开源工具能帮助你的团队避免犯常见错误,这些常见错误会导致你的组织损失数千美元。

本文介绍了四种开源工具,当你在 GitHub 和 AWS 上进行开发时,这些工具能帮助你提升项目的安全性。同样的,本着开源的精神,我会与三位安全专家——[Travis McPeak][2],奈飞高级云安全工程师;[Rich Monk][3],红帽首席高级信息安全分析师;以及 [Alison Naylor][4],红帽首席信息安全分析师——共同为本文做出贡献。

我们已经按场景对每个工具都做了区分,但是它们并不是相互排斥的。

### 1、使用 gitrob 发现敏感数据

你需要发现任何出现于你们团队的 Git 仓库中的敏感信息,以便你能将其删除。借助专注于攻击应用程序或者操作系统的工具以使用红/蓝队模型,这样可能会更有意义,在这个模型中,一个信息安全团队会划分为两块,一个是攻击团队(又名红队),以及一个防守团队(又名蓝队)。有一个红队来尝试渗透你的系统和应用要远远好于等待一个攻击者来实际攻击你。你的红队可能会尝试使用 [Gitrob][5],该工具可以克隆和爬取你的 Git 仓库,以此来寻找凭证和敏感信息。

即使像 Gitrob 这样的工具可以被用来造成破坏,但这里的目的是让你的信息安全团队使用它来发现无意间泄露的属于你的组织的敏感信息(比如 AWS 的密钥对或者是其他被失误提交上去的凭证)。这样,你可以修整你的仓库并清除敏感数据——希望能赶在攻击者发现它们之前。记住不光要修改受影响的文件,还要[删除它们的历史记录][6]。

### 2、使用 git-secrets 来避免合并敏感数据

虽然在你的 Git 仓库里发现并移除敏感信息很重要,但在一开始就避免合并这些敏感信息岂不是更好?即使错误地提交了敏感信息,使用 [git-secrets][7] 可以避免你陷入公开的困境。这款工具可以帮助你设置钩子,以此来扫描你的提交、提交信息和合并信息,寻找常见的敏感信息模式。注意你选择的模式要匹配你的团队使用的凭证,比如 AWS 访问密钥和秘密密钥。如果发现了一个匹配项,你的提交就会被拒绝,一个潜在的危机就此得到避免。

为你已有的仓库设置 git-secrets 是很简单的,而且你可以使用一个全局设置来保护所有你以后要创建或克隆的仓库。你同样可以在公开你的仓库之前,使用 git-secrets 来扫描它们(包括之前所有的历史版本)。

### 3、使用 Key Conjurer 创建临时凭证

有一点额外的保险来防止无意间公开了存储的敏感信息,这是很好的事,但我们还可以做得更好,就完全不存储任何凭证。追踪凭证:谁访问了它,存储到了哪里,上次被访问是什么时候——太麻烦了。然而,以编程的方式生成的临时凭证就可以避免大量的此类问题,从而巧妙地避开了在 Git 仓库里存储敏感信息这一问题。使用 [Key Conjurer][8],它就是为解决这一需求而被创建出来的。要了解更多关于 Riot Games 为什么创建 Key Conjurer,以及 Riot Games 如何开发它的信息,请阅读《[Key Conjurer:我们最低权限的策略][9]》。

### 4、使用 Repokid 自动化地提供最小权限

任何一个参加过安全 101 课程的人都知道,设置最小权限是基于角色的访问控制的最佳实践。难过的是,在学校之外,手动运用最小权限策略会变得非常艰难。一个应用的访问需求会随着时间的流逝而变化,开发人员又太忙了,没时间去手动调整它们的权限。[Repokid][10] 使用 AWS 提供的有关身份和访问管理(IAM)的数据来自动化地调整访问策略。Repokid 甚至可以在 AWS 中为超大型组织提供自动化的最小权限设置。

### 工具而已,又不是大招

这些工具并不是什么灵丹妙药,它们只是工具!所以,在尝试使用这些工具或其他的控制手段之前,请和你组织里的其他人一起工作,确保你们已经理解了你的云服务的使用情况和用法模式。

应该严肃对待你的云服务和代码仓库服务,并熟悉最佳实践。下面的文章将帮助你做到这一点。

**对于 AWS:**

* [管理 AWS 访问密钥的最佳实践][11]
* [AWS 安全审计指南][12]

**对于 GitHub:**

* [介绍一种新方法来让你的代码保持安全][13]
* [GitHub 企业版的最佳安全实践][14]

最后但并非最不重要的一点,和你的安全团队保持联系;他们应该可以为你团队的成功提供想法、建议和指南。永远记住:安全是每个人的责任,而不仅仅是他们的。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/open-source-cloud-security

作者:[Alison Naylor][a],Anderson Silva
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/asnaylor
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud)
[2]: https://twitter.com/travismcpeak?lang=en
[3]: https://github.com/rmonk
[4]: https://www.linkedin.com/in/alperkins/
[5]: https://github.com/michenriksen/gitrob
[6]: https://help.github.com/en/articles/removing-sensitive-data-from-a-repository
[7]: https://github.com/awslabs/git-secrets
[8]: https://github.com/RiotGames/key-conjurer
[9]: https://technology.riotgames.com/news/key-conjurer-our-policy-least-privilege
[10]: https://github.com/Netflix/repokid
[11]: https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
[12]: https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html
[13]: https://github.blog/2019-05-23-introducing-new-ways-to-keep-your-code-secure/
[14]: https://github.blog/2015-10-09-github-enterprise-security-best-practices/

@ -7,35 +7,35 @@
[#]: via: (https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to Execute Commands on Remote Linux System over SSH
如何通过 SSH 在远程 Linux 系统上运行命令
======

We may need to perform some commands on the remote machine.
我们有时可能需要在远程机器上运行一些命令。

To do so, log in to a remote system and do it, if it’s once in a while.
如果只是偶尔进行的操作,要实现这个目的,可以登录到远程系统上直接执行命令。

But every time you do this, it can irritate you
但是每次都这么做的话,就有点烦人了。

If so, what is the better way to get out of it.
既然如此,有没有摆脱这种麻烦操作的更佳方案?

Yes, you can do this from your local system instead of logging in to the remote system.
是的,你可以从你本地系统上执行这些操作,而不用登录到远程系统上。

Will it benefit me? Yeah definitely. This will save you good time.
这有什么好处吗?毫无疑问。这会为你节省大量时间。

How’s that happening? Yes, SSH allows you to run a command on a remote machine without logging in to a computer.
这是怎么实现的?SSH 允许你无需登录到远程计算机就可以在它上面运行命令。

**The general syntax is as follows:**
**通用语法如下所示:**

```
$ ssh [User_Name]@[Remote_Host_Name or IP] [Command or Script]
$ ssh [用户名]@[远程主机名或 IP] [命令或脚本]
```

### 1) How to Run the Command on a Remote Linux System Over SSH
### 1) 如何通过 SSH 在远程 Linux 系统上运行命令

The following example allows users to run the **[df command][1]** via ssh on a remote Linux machine
下面的例子允许用户通过 ssh 在远程 Linux 机器上运行 **[df 命令][1]**。

```
$ ssh [email protected] df -h
$ ssh [邮件地址隐去] df -h

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 27G 4.4G 23G 17% /
@ -48,14 +48,14 @@ $ ssh [email protected] df -h
tmpfs 184M 0 184M 0% /run/user/1000
```

### 2) How to Run Multiple Commands on a Remote Linux System Over SSH
### 2) 如何通过 SSH 在远程 Linux 系统上运行多条命令

The following example allows users to run multiple commands at once over ssh on the remote Linux system.
下面的例子允许用户通过 ssh 在远程 Linux 机器上一次运行多条命令。

It’s running uptime command and free command on the remote Linux system simultaneously.
下面的命令同时在远程 Linux 系统上运行 uptime 命令和 free 命令。

```
$ ssh [email protected] "uptime && free -m"
$ ssh [邮件地址隐去] "uptime && free -m"

23:05:10 up 10 min, 0 users, load average: 0.00, 0.03, 0.03

@ -65,15 +65,15 @@ $ ssh [email protected] "uptime && free -m"
Swap: 3071 0 3071
```

### 3) How to Run the Command with sudo Privilege on a Remote Linux System Over SSH
### 3) 如何通过 SSH 在远程 Linux 系统上运行带 sudo 权限的命令

The following example allows users to run the **fdisk** command with **[sudo privilege][2]** on the remote Linux system via ssh.
下面的例子允许用户通过 ssh 在远程 Linux 机器上运行带有 **[sudo 权限][2]** 的 **fdisk** 命令。

Normal users are not allowed to execute commands available under the system binary **(/usr/sbin/)** directory. Users need root privileges to run it.
普通用户不允许执行系统二进制目录 **(/usr/sbin/)** 下提供的命令。用户需要 root 权限来运行它。

So to run the **[fdisk command][3]** on a Linux system, you need root privileges.
所以,要在 Linux 系统上运行 **[fdisk 命令][3]**,你需要 root 权限。

The which command returns the full path of the executable of the given command.
which 命令返回给定命令的可执行文件的完整路径。

```
$ which fdisk
@ -81,7 +81,7 @@ $ which fdisk
```

```
$ ssh -t [email protected] "sudo fdisk -l"
$ ssh -t [邮件地址隐去] "sudo fdisk -l"
[sudo] password for daygeek:

Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
@ -113,23 +113,23 @@ $ ssh -t [email protected] "sudo fdisk -l"
Connection to centos7.2daygeek.com closed.
```

### 4) How to Run the Service Command with sudo Privilege on a Remote Linux System Over SSH
### 4) 如何通过 SSH 在远程 Linux 系统上运行带 sudo 权限的服务控制命令

The following example allows users to run the service command with sudo privilege on the remote Linux system via ssh.
下面的例子允许用户通过 ssh 在远程 Linux 机器上运行带有 sudo 权限的服务控制命令。

```
$ ssh -t [email protected] "sudo systemctl restart httpd"
$ ssh -t [邮件地址隐去] "sudo systemctl restart httpd"

[sudo] password for daygeek:
Connection to centos7.2daygeek.com closed.
```

### 5) How to Run the Command on a Remote Linux System Over SSH With Non-Standard Port
### 5) 如何通过 SSH 在使用非标准端口的远程 Linux 系统上运行命令

The following example allows users to run the **[hostnamectl command][4]** via ssh on a remote Linux machine with non-standard port.
下面的例子允许用户通过 ssh 在使用了非标准端口的远程 Linux 机器上运行 **[hostnamectl 命令][4]**。

```
$ ssh -p 2200 [email protected] hostnamectl
$ ssh -p 2200 [邮件地址隐去] hostnamectl

Static hostname: Ubuntu18.2daygeek.com
Icon name: computer-vm
@ -142,12 +142,12 @@ $ ssh -p 2200 [email protected] hostnamectl
Architecture: x86-64
```

### 6) How to Save Output from Remote System to Local System
### 6) 如何将远程系统的输出保存到本地系统

The following example allows users to remotely execute the **[top command][5]** on a Linux system via ssh and save the output to the local system.
下面的例子允许用户通过 ssh 在远程 Linux 机器上运行 **[top 命令][5]**,并将输出保存到本地系统。

```
$ ssh [email protected] "top -bc | head -n 35" > /tmp/top-output.txt
$ ssh [邮件地址隐去] "top -bc | head -n 35" > /tmp/top-output.txt
```
|
||||
|
||||
```
|
||||
@ -180,17 +180,17 @@ cat /tmp/top-output.txt
|
||||
20 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset]
|
||||
```

Alternatively, you can use the following format to run multiple commands on a remote system.

```
$ ssh daygeek@centos7.2daygeek.com << EOF
hostnamectl
free -m
grep daygeek /etc/passwd
EOF
```

Output of the above command:

```
Pseudo-terminal will not be allocated because stdin is not a terminal.

daygeek:x:1000:1000:2daygeek:/home/daygeek:/bin/bash
```
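One detail worth knowing about the here-document form: whether `EOF` is quoted decides which side expands variables and command substitutions. A minimal local sketch (no remote host needed, since `bash -s` reads a script from stdin the same way the remote shell in the ssh example does):

```shell
# Unquoted EOF: $VAR and $(...) are expanded by the LOCAL shell before
# the text is sent down the pipe; quote it ('EOF') to defer expansion
# to the receiving shell instead.
bash -s <<EOF
echo "expanded before sending: $((6 * 7))"
EOF
```

With `<<'EOF'` instead, the literal string `$((6 * 7))` would travel unexpanded and be evaluated by the receiving shell — which matters when the remote side should see its own environment variables.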

### 7) How to Execute a Local Bash Script on a Remote System

The following example allows users to run the local **[bash script][6]** “remote-test.sh” via ssh on a remote Linux machine.

Create a shell script and execute it.

```
$ vi /tmp/remote-test.sh

hostnamectl
```

Output for the above command:

```
$ ssh daygeek@centos7.2daygeek.com 'bash -s' < /tmp/remote-test.sh

01:17:09 up 22 min, 1 user, load average: 0.00, 0.02, 0.08

Architecture: x86-64
```

Alternatively, a pipe can be used. If you think the output is not elegant enough, make a few changes to improve it.

```
$ vi /tmp/remote-test-1.sh

echo "------------------------------------------------------------------"
```

Output for the above script:

```
$ cat /tmp/remote-test.sh | ssh daygeek@centos7.2daygeek.com
Pseudo-terminal will not be allocated because stdin is not a terminal.
---------System Uptime--------------------------------------------
03:14:09 up 2:19, 1 user, load average: 0.00, 0.01, 0.05

Architecture: x86-64
```
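A handy extension of the `'bash -s'` pattern is passing arguments to the script. Since `bash -s` reads the script from stdin either way, the mechanism can be verified locally before pointing it at a real host — a small sketch (the script path and argument name are only illustrative):

```shell
# Write a script that uses a positional argument, then feed it to
# 'bash -s' on stdin. Over ssh the equivalent line would be:
#   ssh user@host 'bash -s' myarg < /tmp/remote-args.sh
cat > /tmp/remote-args.sh <<'EOF'
echo "kernel: $(uname -s), arg: $1"
EOF
bash -s myarg < /tmp/remote-args.sh
```

Anything after `-s` becomes `$1`, `$2`, … inside the script, which lets one local script be reused against many hosts with different parameters.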

### 8) How to Run Multiple Commands on Multiple Remote Systems Simultaneously

The following bash script allows users to run multiple commands on multiple remote systems simultaneously, using a simple for loop.

For this purpose, you can also try the **[PSSH command][7]**, the **[ClusterShell command][8]**, or the **[DSH command][9]**.

```
$ vi /tmp/multiple-host.sh

for host in CentOS7.2daygeek.com CentOS6.2daygeek.com
do
  ssh daygeek@${host} "uname -a;uptime;date;w"
done
```

Output for the above script:

```
$ sh multiple-host.sh

Wed Sep 25 01:33:57 CDT 2019

01:33:57 up 39 min, 1 user, load average: 0.07, 0.06, 0.06
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
daygeek  pts/0    192.168.1.6      01:08   23:25   0.06s  0.06s -bash

Linux CentOS6.2daygeek.com 2.6.32-754.el6.x86_64 #1 SMP Tue Jun 19 21:26:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Tue Sep 24 23:33:58 MST 2019

23:33:58 up 39 min, 0 users, load average: 0.00, 0.00, 0.00
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
```
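The loop structure itself can be dry-run locally by substituting `bash -c` for `ssh`, which is a quick way to check quoting before touching real hosts — a sketch with placeholder host names:

```shell
# 'bash -c' stands in for 'ssh daygeek@${host}' here; the quoted
# command string is parsed the same way in both cases, so quoting
# mistakes show up locally instead of on the remote machines.
for host in hostA hostB
do
  bash -c "echo ${host}: \$(uname -s)"
done
```

Note the escaped `\$(uname -s)`: the host name is expanded by the local loop, while the command substitution is left for the receiving shell, exactly as it would be over ssh.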

### 9) How to Add a Password Using the sshpass Command

If you find it troublesome to enter your password each time, I advise you to go with one of the methods below, as per your requirement.

If you are going to perform this type of activity frequently, I advise you to set up **[password-less authentication][10]**, since it is the standard and permanent solution.

If you only do these tasks a few times a month, I recommend the **“sshpass”** utility. Just provide the password as an argument using the **“-p”** option.

```
$ sshpass -p 'Your_Password_Here' ssh -p 2200 daygeek@ubuntu18.2daygeek.com ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/

Author: [Magesh Maruthamuthu][a]
Selected by: [lujun9972][b]
Translated by: [alim0x](https://github.com/alim0x)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (You Can Now Use OneDrive in Linux Natively Thanks to Insync)
[#]: via: (https://itsfoss.com/use-onedrive-on-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

You Can Now Use OneDrive in Linux Natively Thanks to Insync
======

[OneDrive][1] is a cloud storage service from Microsoft that offers 5 GB of free storage to every user. It is integrated with Microsoft accounts, and if you use Windows, OneDrive comes preinstalled on it.

OneDrive is not available as a desktop application on Linux. You can access your stored files through the web interface, but you cannot use the cloud storage from a file manager the way you normally would.

The good news is that you can now use an unofficial tool that lets you use OneDrive on Ubuntu or other Linux distributions.

[Insync][2] became a hugely popular premium third-party sync tool on Linux when it added Google Drive support. We have a detailed review of [Insync's Google Drive support][3].

The recently [released Insync 3][4], however, supports OneDrive. So in this article, we will look at how to use OneDrive in Insync and at its new features.

Non-FOSS alert

_A few developers take it painfully when non-FOSS software is introduced to Linux. As a portal focused on desktop Linux, we cover such software here even when it is not FOSS._

_Insync 3 is neither open source nor free to use. You only get a 15-day trial to test it. If you like it, you can buy it for a one-time fee of $29.99 per account._

_We are not taking money to promote it (in case you were wondering). We don't do that here._

### Get a native OneDrive experience on Linux with Insync

![][5]

Even though it is a premium tool, users who rely on OneDrive may want a seamless experience syncing OneDrive on their Linux systems.

To get started, you need to download the package that suits your Linux distribution from the [official download page][6].

[Download Insync][7]

You can also choose to add the repository and install it from there. You will find the instructions on Insync's [official website][7].

Once installed, simply launch it and choose the OneDrive option.

![][8]

Also, note that every OneDrive or Google Drive account you add requires a separate license.

Now, after authorizing your OneDrive account, you have to choose a base folder into which everything will be synced — a new feature of Insync 3.

![Insync 3 Base Folder][9]

In addition, once the setup is done, you can selectively sync files/folders locally or from the cloud.

![Insync Selective Sync][10]

You can also customize the sync by adding your own rules for the folders and files you want to ignore/sync — this is entirely optional.

![Insync Customize Sync Preferences][11]

And with that, the setup is complete.

![Insync 3][12]

You can now start syncing files/folders with OneDrive across multiple platforms, including a Linux desktop with Insync. Beyond all the new features/changes above, you also get a faster/smoother experience with Insync.

In addition, with Insync 3 you can see the progress of your sync:

![][13]

**Wrapping Up**

Overall, Insync 3 is an impressive upgrade for users looking to sync OneDrive on their Linux systems. If you do not want to pay, you can try other [free cloud services for Linux][14].

What do you think of Insync? If you are already using it, how has your experience been so far? Let us know your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/use-onedrive-on-linux/

Author: [Ankush Das][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://onedrive.live.com
[2]: https://www.insynchq.com
[3]: https://itsfoss.com/insync-linux-review/
[4]: https://www.insynchq.com/blog/insync-3/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/onedrive-linux.png?ssl=1
[6]: https://www.insynchq.com/downloads?start=true
[7]: https://www.insynchq.com/downloads
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3one-drive-sync.png?ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3-base-folder-1.png?ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-selective-syncs.png?ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-customize-sync.png?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-homescreen.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3-progress-bar.png?ssl=1
[14]: https://itsfoss.com/cloud-services-linux/
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (All That You Can Do with Google Analytics, and More)
[#]: via: (https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytics-and-more/)
[#]: author: (Ashwin Sathian https://opensourceforu.com/author/ashwin-sathian/)

||||
All That You Can Do with Google Analytics, and More
======

[![][1]][2]

We have all heard of, and most of us have used, Google Analytics — the most popular user-activity tracking tool. Its uses, however, are not limited to tracking page visits. It is a practical and popular tool, so in this article we will look at how Google Analytics can be used in Angular and React single-page applications.

This article grew out of a question: how do you track page visits in a single-page application?

There are many ways to solve this; here we discuss one of them. I will use Angular for the implementation below; if you use React, the related usage and concepts are not very different. Let's get started.

**Getting the application ready**

First, you need a tracking ID. Before writing any business code, get a tracking ID ready — this unique identifier is how Google Analytics recognizes which application a given click or page visit comes from.

Follow these steps:

  1. Visit _<https://analytics.google.com>_;
  2. Sign up by submitting the required information, making sure the property is set up for the Web, since what we are building is a web application;
  3. Agree to the relevant terms and generate a tracking ID;
  4. Save the tracking ID.

With the tracking ID in hand, we can start writing the business code.

**Adding the `analytics.js` script**

Google has done all the work needed ahead of the integration; the rest is our job. It is simple, though — just add the following script to the application's `index.html`:

```
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
</script>
```

Now let's see how Google Analytics is initialized in the application.

**Creating a tracker**

First, create a tracker for the application. In `app.component.ts`, perform these two steps:

  1. Declare a global variable named `ga` of type `any` (remember, this is TypeScript, so variable types must be specified);
  2. Add the following line to `ngOnInit()`.

```
ga('create', <your tracking ID>, 'auto');
```

With that, a Google Analytics tracker has been successfully initialized in the application. Since the tracker is initialized inside `ngOnInit()`, it starts whenever the application starts.
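Because `ga` is essentially a command queue, its behaviour is easy to exercise without contacting Google's servers. A minimal sketch — the recording stub below is our own test helper, not part of analytics.js, though analytics.js queues commands in much the same way before the script loads:

```typescript
// Stand-in for the analytics.js command queue: every call is recorded
// so the commands a component issues can be inspected in a unit test.
type GaCommand = unknown[];
const calls: GaCommand[] = [];
const ga = (...args: unknown[]): void => {
  calls.push(args); // record the command for later inspection
};

// The same calls the component would make:
ga('create', 'UA-139883813-1', 'auto');
ga('set', 'page', '/home');
ga('send', { hitType: 'pageview' });

console.log(calls.length); // 3
```

Swapping a stub like this in during tests keeps tracking logic verifiable while leaving network traffic out of the test run.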

**Recording page visits in a single-page application**

What we need to record are route visits.

Recording which parts of an application users visit is the tricky part. Functionally, routes in a single-page application correspond to navigation between pages in a traditional multi-page application, so it is route visits that we need to record. This is not trivial, but it is achievable. Add the following snippet to `ngOnInit()` in `app.component.ts`:

```
import { Router, NavigationEnd } from '@angular/router';
...
constructor(public router: Router) {}
...
this.router.events.subscribe(
  event => {
    if (event instanceof NavigationEnd) {
      ga('set', 'page', event.urlAfterRedirects);
      ga('send', {
        hitType: 'pageview',
        hitCallback: () => { this.pageViewSent = true; }
      });
    }
  });
```

Remarkably, these few lines are all it takes to record page visits in an Angular application.

What this code actually does:

  1. Imports `Router` and `NavigationEnd` from the Angular Router;
  2. Adds the `Router` to the component through its constructor;
  3. Subscribes to router events, that is, all events emitted by the Angular Router;
  4. Whenever a `NavigationEnd` event instance is emitted, records the route and its destination as a page visit.

Now every route that is navigated to is sent to Google Analytics as a page visit, and these records can be seen in the Google Analytics online console.

Similarly, we can record activities other than page visits, such as how often or how long a particular view is seen, in the same way. Use `hitCallback()` as in the code above to have the application react whenever there is data to collect; here we merely set a variable to `true`, but any code can run inside `hitCallback()`.

**Tracking user interactions**

Besides page visits, Google Analytics is often used to track user interactions, such as clicks on a particular button. "How many times was the submit button clicked?" "How often is the product brochure viewed?" — these are common questions in product review meetings for web applications. This section shows how to collect such statistics.

**Button clicks:** Imagine a scenario in which you need to count how many times a button or link in the application is clicked — a metric closely tied to key actions such as sign-ups. Here is an example:

Suppose the application has a "Show interest" button for an upcoming event, and you want to estimate interest by counting how many times it is clicked. We can implement that with the following code:

```
...
params = {
  eventCategory: 'Button',
  eventAction: 'Click',
  eventLabel: 'Show interest',
  eventValue: 1
};

showInterest() {
  ga('send', 'event', this.params);
}
...
```

Now let's look at what this code actually does. As mentioned earlier, whatever we send to Google Analytics gets recorded, so we can pass different parameters to `send()` to distinguish events, such as clicks on two different buttons.

1\. First we define a `params` object with the following fields:

  1. `eventCategory` – the object interacted with; here, the button
  2. `eventAction` – the type of interaction; here, a click
  3. `eventLabel` – an identifier for the interaction; here, the content of the button, "Show interest"
  4. `eventValue` – a value associated with each instance of the event

Since this example counts the users who clicked the "Show interest" button, we set `eventValue` to 1.

2\. Once the object is built, the next step is to send the `params` object to Google Analytics as the request payload, which is done by binding `showInterest()` to the button through an event binding. This is the most common way of sending events in Google Analytics tracking.

With that, Google Analytics can count interested users by recording the button's clicks.
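If you want a guard against malformed payloads before they reach `send()`, the expected field shape can be captured in an interface. A small sketch — the `isValidEvent` helper is our own illustration, not a Google Analytics API; the required/optional split follows the event fields used above, where category and action are the mandatory ones for an event hit:

```typescript
// Field names mirror the analytics.js event fields used above.
interface EventParams {
  eventCategory: string;           // required for an event hit
  eventAction: string;             // required for an event hit
  eventLabel?: string;
  eventValue?: number;
}

function isValidEvent(p: EventParams): boolean {
  // reject empty category/action and negative values
  return p.eventCategory.length > 0 &&
         p.eventAction.length > 0 &&
         (p.eventValue === undefined || p.eventValue >= 0);
}

console.log(isValidEvent({
  eventCategory: 'Button',
  eventAction: 'Click',
  eventLabel: 'Show interest',
  eventValue: 1
})); // true
```

Running such a check in `showInterest()` before calling `ga('send', ...)` keeps bad payloads out of your reports at the cost of one extra function call.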

**Tracking social interactions:** Google Analytics can also track users' social-media interactions through the application. One scenario is placing a Facebook-style Like button in the app; let's see how to track it with Google Analytics.

```
...
fbLikeParams = {
  socialNetwork: 'Facebook',
  socialAction: 'Like',
  socialTarget: 'https://facebook.com/mypage'
};
...
fbLike() {
  ga('send', 'social', this.fbLikeParams);
}
```

If this code looks familiar, that is because it closely resembles the code that counted the "Show interest" button clicks. Let's walk through each step:

1\. Build the payload to send, with the following fields:

  1. `socialNetwork` – the network on which the interaction happens, e.g., Facebook, Twitter, and so on
  2. `socialAction` – the type of interaction, e.g., a like, a tweet, a share
  3. `socialTarget` – the target URL of the interaction, usually the social-media account's page

2\. The next step is to add a function that sends the whole interaction record. Note that `send()` is used differently here than when counting button clicks. We also need to bind this function to the existing Like button.

Google Analytics can do even more for tracking user interactions, the most important being exception tracking, which lets us track errors and exceptions occurring in the application. We won't go into it in this article, but we encourage readers to explore it on their own.

**User identification**

**Privacy is a right, not a luxury:** Beyond recording a great deal of user actions and interactions, Google Analytics also offers a less commonly known capability, covered in this section: control over whether a user's identity is tracked.

**Cookies:** Google Analytics tracks user activity based on cookies, and we can customize the cookie name along with a few other details, as the following code shows:

```
trackingID = 'UA-139883813-1';
cookieParams = {
  cookieName: 'myGACookie',
  cookieDomain: window.location.hostname,
  cookieExpires: 604800
};
...
ngOnInit() {
  ga('create', this.trackingID, this.cookieParams);
  ...
}
```

In the code above, we set the Google Analytics cookie's name, domain, and expiry time, which lets us distinguish the cookies of different websites or web applications. Here we give our own application's Google Analytics tracker cookie a custom identifier instead of an automatically generated random one.

**IP anonymization:** In some scenarios, we may not need to know where the application's traffic comes from. For example, for a button-click tracker, we only care about the click count, not the clicker's geographic location. For such scenarios, Google Analytics lets us track users' behaviour without capturing their IP addresses.

```
ipParams = {
  anonymizeIp: true
};
...
ngOnInit() {
  ...
  ga('set', this.ipParams);
  ...
}
```

In the code above, we set the Google Analytics tracker's `anonymizeIp` parameter to `true`. That way users' IP addresses are not tracked by Google Analytics, which lets users know their privacy is being protected.

**Opting out:** Sometimes users may not want their behaviour to be tracked at all, and Google Analytics allows for that too, so there is an option to exclude users from tracking.

```
...
optOut() {
  window['ga-disable-UA-139883813-1'] = true;
}
...
```

`optOut()` is a custom function that disables Google Analytics tracking on the page. We can wire it up with an event binding on a button or checkbox, so that users can choose whether or not they are tracked by Google Analytics.

In this article, we discussed the difficulties of integrating Google Analytics into single-page applications and explored a way of solving them. We also learned how to track page visits and user interactions, such as button clicks and social-media activity, in a single-page application.

Finally, we saw that Google Analytics provides privacy-protecting options for users, especially when some of their private data does not need to be part of the statistics, and that users can also opt out of Google Analytics tracking entirely. Google Analytics can do a lot more beyond all this — that is for us to keep exploring.

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytics-and-more/

Author: [Ashwin Sathian][a]
Selected by: [lujun9972][b]
Translated by: [HankChow](https://github.com/HankChow)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensourceforu.com/author/ashwin-sathian/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Analytics-illustration.jpg?resize=696%2C396&ssl=1 (Analytics illustration)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Analytics-illustration.jpg?fit=900%2C512&ssl=1