Merge pull request #5 from LCTT/master

同步
This commit is contained in:
MareDevi 2022-10-24 16:22:46 +08:00 committed by GitHub
commit 433d7d5530
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
114 changed files with 13032 additions and 4838 deletions


@ -0,0 +1,191 @@
[#]: collector: (lujun9972)
[#]: translator: (aREversez)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-15139-1.html)
[#]: subject: (The Real Novelty of the ARPANET)
[#]: via: (https://twobithistory.org/2021/02/07/arpanet.html)
[#]: author: (Two-Bit History https://twobithistory.org)
ARPANET 的真正创新之处
======
![](https://img.linux.net.cn/data/attachment/album/202210/14/180115j5hae51hv1a1ohp5.jpg)
如果你在搜索引擎中输入“ARPANET”搜索相关图片你会看到许多地图的图片上面是这个上世纪六十年代末七十年代初 [美国政府创建的研究网络][1],该网络不断延伸扩展,横跨了整个美国。我猜很多人第一次了解到 ARPANET 的时候都看过这种地图。
可以说,这些地图很有意思,毕竟我们很难想象过去连接网络的计算机是那么少,就连如此低保真的图片都可以表示出美国全部机器的所在位置(这里的<ruby>低保真<rt>lo-fi</rt></ruby>指的是高射投影仪成像技术,而不是大家熟知的 lo-fi 氛围音乐。不过这些地图是有问题的。地图上用加粗的线条连接着大陆各地强化了人们的一种观念ARPANET 最大的贡献就是首次将横跨美国东西两地的电脑连接了起来。
今天,即便是在病毒肆虐、人们困居家中的情况下,网络也能把我们联系起来,可谓是我们的生命线。所以,如果认为 ARPANET 是最早的互联网那么在那之前世界必然相互隔绝毕竟那时还没有今天的互联网对吧ARPANET 首次通过计算机将人们连接起来,一定是一件惊天动地的大事。
但是,这一观点却与历史事实不符,而且它也没有进一步解释 ARPANET 的重要性。
### 初露锋芒
华盛顿希尔顿酒店坐落于<ruby>国家广场<rt>National Mall</rt></ruby>东北方向约 2.4 千米处的一座小山丘山顶附近。酒店左右两侧白色的现代化立面分别向外延展出半个圆形活像一只飞鸟的双翼。1965 年,酒店竣工之后,《纽约时报》报道称这座建筑物就像“一只栖息在山顶巢穴上的海鸥” [^1]。
不过,这家酒店最有名的特点却深藏在地下。在车道交汇处下方,有着一个巨大的蛋形活动场地,这就是人们熟知的<ruby>国际宴会厅<rt>International Ballroom</rt></ruby>多年来一直是华盛顿特区最大的无柱宴会厅。1967 年大门乐队在此举办了一场音乐会。1968 年,“吉他之神”吉米·亨德里克斯也在此举办了一场音乐会。到了 1972 年,国际宴会厅隐去了以往的喧嚣,举办了首届<ruby>国际计算机通信会议<rt>International Conference on Computer Communication</rt></ruby>ICCC。在这场大会上研究项目 ARPANET 首次公开亮相。
这场会议举办时间为 10 月 24-26 日,与会人数约八百人 [^2]。在这场大会上,计算机网络这一新兴领域的领袖人物齐聚一堂。<ruby>因特网<rt>internet</rt></ruby>的先驱<ruby>鲍勃·卡恩<rt>Bob Kahn</rt></ruby>称,“如果有人在华盛顿希尔顿酒店上方丢了一颗炸弹,那么美国的整个网络研究领域将会毁于一旦” [^3]。
当然,不是所有的与会人员都是计算机科学家。根据当时的宣传广告,这场大会将“以用户为中心”,面向“律师、医务人员、经济学家、政府工作者、工程师以及通信员等从业人员”[^4]。虽然大会的部分议题非常专业,比如《数据网络设计问题(一)》与《数据网络设计问题(二)》,但是正如宣传广告所承诺的,大部分会议的主要关注点还是计算机网络给经济社会带来的潜在影响。其中甚至有一场会议以惊人的先见之明探讨了如何积极利用法律制度“保护计算机数据库中的隐私权益” [^5]。
ARPANET 的展示本身,是为与会者准备的一个附带节目。在国际宴会厅或酒店更下一层的其他地方举行的会议间歇,与会者可以自由进入<ruby>乔治敦宴会厅<rt>Georgetown Ballroom</rt></ruby>(在国际宴会厅走廊尽头的一个较小的宴会厅,也可以说是会议室)[^6],那里放置着用以访问 ARPANET 的 40 台由不同制造商生产的终端 [^7]。这些终端属于<ruby>哑终端<rt>dumb terminal</rt></ruby>,也就是说,只能用来输入命令、输出结果,本身无法进行计算。事实上,在 1972 年,所有这些终端可能都是<ruby>硬拷贝终端<rt>hardcopy terminal</rt></ruby>,即<ruby>电传打字机<rt>teletype machine</rt></ruby>。哑终端与一台被称为“<ruby>终端接口信息处理机<rt>Terminal Interface Message Processor</rt></ruby>TIP的计算机相连接后者放置在宴会厅中间的一个高台上。TIP 是早期的一种路由器,哑终端可通过 TIP 连接到 ARPANET。有了终端和 TIPICCC 与会者可以尝试登录和访问组成 ARPANET 的 29 个主机站的计算机 [^8]。
为了展示网络的性能,美国全国各主机站的研究员们通力合作,准备了 19 个简易的“情景”,供用户测试使用。他们还出了 [一份小册子][10],将这些情景收录其中。如果与会人员打算进入这个满是电线与哑终端的房间,就会得到这样一本小册子 [^9]。通过这些情景,研究员不仅要证明网络这项新技术的可行性,还要证明其实用性,因为 ARPANET 那时还只是“一条没有汽车驶过的公路”。此外,来自国防部的投资者们也希望,公开展示 ARPANET 可以进一步激发人们对网络的兴趣 [^10]。
因此,这些情景充分展示了在 ARPANET 网络上可以使用的软件的丰富性有程序语言解释器其中一个用于麻省理工学院MIT的 Lisp 语言,另一个用于加州大学洛杉矶分校的数值计算环境 Speakeasy还有一些游戏包括国际象棋和 <ruby>康威生命游戏<rt>Conway's Game of Life</rt></ruby>;以及几个也许最受与会者欢迎的人工智能聊天程序,包括由 MIT 的计算机科学家<ruby>约瑟夫·魏泽堡<rt>Joseph Weizenbaum</rt></ruby>开发的著名聊天程序<ruby>伊莉莎<rt>ELIZA</rt></ruby>
设置这些情景的研究人员小心翼翼地列出了他们想让用户在终端机上输入的每一条命令。这点很重要,因为用于连接 ARPANET 主机的命令序列可能会因为主机的不同而发生变化。比如,为了能在 MIT 人工智能实验室的 PDP-10 微型电脑上测试人工智能国际象棋程序,与会者需要按照指示输入以下命令:
> 在下方代码块中,`[LF]`、`[SP]` 以及 `[CR]` 分别代表换行、空格以及回车键。我在每行的 `//` 符号后面都解释了当前一行命令的含义,不过当时的小册子本来是没有使用这一符号的。
```
@r [LF] // 重置 TIP
@e [SP] r [LF] // “远程回显”设置, 主机回显字符TIP 不回显
@L [SP] 134 [LF] // 连接 134 号主机
:login [SP] iccXXX [CR] // 登录 MIT 人工智能实验室的系统“XXX”代表用户名首字母缩写
:chess [CR] // 启动国际象棋程序
```
如果与会者输入了上述命令,那么他就可以体验当时最先进的国际象棋程序,其棋盘布局如下:
```
BR BN BB BQ BK BB BN BR
BP BP BP BP ** BP BP BP
-- ** -- ** -- ** -- **
** -- ** -- BP -- ** --
-- ** -- ** WP ** -- **
** -- ** -- ** -- ** --
WP WP WP WP -- WP WP WP
WR WN WB WQ WK WB WN WR
```
与之不同的是,如果要连接加州大学洛杉矶分校的 IBM System/360 机器,运行 Speakeasy 数值计算环境,与会者需要输入以下命令:
```
@r [LF] // 重置 TIP
@t [SP] o [SP] L [LF] // “传递换行”设置
@i [SP] L [LF] // “插入换行”设置,即回车时发送换行符。
@L [SP] 65 [LF] // 连接 65 号主机
tso // 连接 IBM 分时可选软件系统
logon [SP] icX [CR] // 输入用户名进行登录“X”可为任意数字
iccc [CR] // 输入密码(够安全!)
speakez [CR] // 启动 Speakeasy
```
输入上述命令后,与会者可以在终端中对矩阵进行乘法、转置以及其他运算,如下所示:
```
:+! a=m*transpose(m);a [CR]
:+! eigenvals(a) [CR]
```
当时,这场演示给许多人都留下了深刻的印象,但原因并不是我们所想的那样,毕竟我们有的只是后见之明。今天的人们总是记不住,在 1972 年,即便身处两个不同的城市,远程登录使用计算机也已经不是一件新鲜事儿了。在那之前的数十年,电传打字机就已经用于与相隔很远的计算机传递信息了。在 ICCC 第一届大会之前,差不多整整五年前,在西雅图的一所高中,<ruby>比尔·盖茨<rt>Bill Gates</rt></ruby>使用电传打字机,在该市其他地方的<ruby>通用电气<rt>General Electric</rt></ruby>GE计算机上运行了他的第一个 BASIC 程序。在当时,登录远程计算机,运行几行命令或者玩一些文字游戏,只不过是家常便饭。因此,虽说上文提到的软件的确很不错,但是即便没有 ARPANET我刚刚介绍的两个情景勉强也是可以实现的。
当然ARPANET 一定带来了新的东西。参加本次大会的律师、政治家与经济学家可能被国际象棋游戏与聊天机器人所吸引,但是网络专家们可能对另外两个情景更感兴趣,因为它们将 ARPANET 的作用更好地展示了出来。
在其中一个情景下MIT <ruby>非兼容分时系统<rt>Incompatible Timesharing System</rt></ruby>ITS上运行了一个名为 `NETWRK` 的程序。`NETWRK` 命令下有若干个子命令,输入这些子命令就能得到 ARPANET 各方面的运行状态。`SURVEY` 子命令可以列出 ARPANET 上哪些主机正在运行和可用(它们都在一个列表中);`SUMMARY.OF.SURVEY` 子命令汇总了 `SURVEY` 子命令过去的运行结果,得出每台主机的“正常运行比率”,以及每台主机响应消息的平均时间。`SUMMARY.OF.SURVEY` 子命令以表格的形式输出结果,如下所示:
```
--HOST-- -#- -%-UP- -RESP-
UCLA-NMC 001 097% 00.80
SRI-ARC 002 068% 01.23
UCSB-75 003 059% 00.63
...
```
可以看到,主机编号的占位不超过三个数字(哈!)。其他 `NETWRK` 子命令能够查看较长时间内查询结果的概要,或者检查单个主机查询结果的日志。
第二个情景用到了斯坦福大学开发的一款软件 —— SRI-ARC 联机系统。这款软件功能齐全,非常优秀。美国发明家<ruby>道格拉斯·恩格尔巴特<rt>Douglas Engelbart</rt></ruby>在 “<ruby>所有演示之母<rt>Mother of All Demos</rt></ruby>” 上演示的正是 SRI-ARC 联机系统。这款软件可以在加州大学圣芭芭拉分校的主机上运行本质上属于文件托管的服务。使用华盛顿希尔顿酒店的终端,用户可以将斯坦福大学主机上创建的文件复制到加州大学圣芭芭拉分校的主机上。操作也很简单,只需执行 `copy` 命令,然后回答计算机的下列问题:
> 在下方的代码块中,`[ESC]`、`[SP]` 与 `[CR]` 分别代表退出、空格与回车键;圆括号中的文字是计算机打印出的提示信息;第三行中的退出键用于自动补全文件名。此处复制的文件是 `<system>sample.txt;1`,其中文件名末尾的数字 1 代表文件的版本号,`<system>` 表示文件路径。这种文件名是 TENEX 操作系统上面的惯用写法。[^11]
```
@copy
(TO/FROM UCSB) to
(FILE) <system>sample [ESC] .TXT;1 [CR]
(CREATE/REPLACE) create
```
这两个情景看起来好像和最初提及的两个情景没有太大区别,但是此二者却意义非凡。因为它们证明了,在 ARPANET 上面,不仅人们可以与计算机进行交流,计算机与计算机也可以 _相互_ 交流。MIT 主机上的 `SURVEY` 命令的结果并非由人类定期登录并检查每台机器的运行状态收集而来,而是由一款能在网络上与其他机器进行交流的软件收集得到的。同样的道理,在斯坦福大学与加州大学圣芭芭拉分校之间传输文件的情景下,也没有人守在两所大学的终端旁边,华盛顿特区的终端用户仅仅使用了一款软件,就能让其他两地的计算机相互对话。更重要的是,这一点无关乎你使用的是宴会厅里的哪一台电脑,因为只要输入同样的命令序列,就能在任意一台电脑上浏览 MIT 的网络监视数据,或者在加州大学圣芭芭拉分校的计算机上储存文件。
这才是 ARPANET 的全新之处。本次国际计算机通信会议演示的不仅仅是人与远程电脑之间的交互,也不仅仅是远程输入输出的操作,更是一个软件与其他软件之间的远程通讯,这一点才是史无前例的。
为什么这一点才是最重要的,而不是地图上画着的那些贯穿整个美国、实际连接起来的电线呢(这些线是租赁的电话线,而且它们以前就在那了!)?要知道,早在 1966 年 ARPANET 项目启动之前美国国防部的高级研究计划署ARPA打造了一间终端室里面有三台终端。三台终端分别连接着位于 MIT、加州大学伯克利分校以及圣塔莫尼卡三地的计算机 [^12]。对于 ARPA 的工作人员来说,即便他们身处华盛顿特区,使用这三台计算机也非常方便。不过,这其中也有不便之处:工作人员必须购买和维护来自三家不同制造商的终端,牢记三种不同的登录步骤,熟悉三种不同的计算环境。虽然这三台终端机可能就放在一起,但是它们只是电线另一端主机系统的延伸,而且操作也和那些计算机一样各不相同。所以说,在 ARPANET 项目诞生之前,远程连接计算机进行通讯就已经实现了,但问题是不同的计算系统阻碍了通讯朝着更加先进复杂的方向发展。
### 集合起来,就在此刻
因此我想说的是说法一ARPANET 首次通过计算机将不同地方的人们连接了起来与说法二ARPANET 首次将多个计算机系统彼此连接了起来)之间有着云泥之别。听起来似乎有些吹毛求疵,咬文嚼字,但是相较于说法二,说法一忽略了一些重要的历史发展阶段。
首先,历史学家<ruby>乔伊·利西·兰金<rt>Joy Lisi Rankin</rt></ruby>指出,早在 ARPANET 诞生之前,人们就已经在网络空间中进行交流了。在《<ruby>美国计算机的人民历史<rt>A People's History of Computing in the United States</rt></ruby>》一书中,兰金介绍了几个覆盖全美的数字社区,这些社区运行在早于 ARPANET 的<ruby>分时网络<rt>time-sharing network</rt></ruby>上面。从技术层面讲,分时网络并不是计算机网络,因为它仅仅由一台大型主机构成。这种计算机放置在地下室中,为多台哑终端提供计算,颇像一只又黑又胖的奇怪生物,触手向外伸展着,遍及整个美国。不过,在分时网络时代,被后社交媒体时代称为“网络”的大部分社会行为应有尽有。例如Kiewit 网络是<ruby>达特茅斯分时系统<rt>Dartmouth Time-Sharing System</rt></ruby>的延伸应用,服务于美国东北部的各个大学和高中。在 Kiewit 网络上,高中生们共同维护着一个“<ruby>八卦档案<rt>gossip file</rt></ruby>”,用来记录其他学校发生的趣闻趣事,“在康涅狄格州和缅因州之间建立起了社交联系” [^13]。同时,曼荷莲女子学院的女生通过网络与达特茅斯学院的男生进行交流,或者是安排约会,或者是与男朋友保持联系 [^14]。这些事实都发生在上世纪六十年代。兰金认为,如果忽视了早期的分时网络,我们对美国过去 50 年数字文化发展的认识必然是贫瘠的:我们眼里可能只有所谓的“<ruby>硅谷神话<rt>Silicon Valley mythology</rt></ruby>”,认为计算机领域的所有发展都要归功于少数的几位天才,或者说互联网科技巨头的创始人。
回到 ARPANET如果我们能意识到真正的困难是计算机 _系统_ 的联通,而非机器本身的物理连接,那么在探讨 ARPANET 的创新点时我们就会更加倾向于第二种说法。ARPANET 是有史以来第一个<ruby>分组交换网络<rt>packet-switched network</rt></ruby>涉及到许多重要的技术应用。但是如果仅仅因为这项优势就说它是一项突破我觉得这种说法本身就是错的。ARPANET 旨在促进全美计算机科学家之间的合作目的是要弄明白不同的操作系统、不同语言编写的软件如何配合使用而非如何在麻省和加州之间实现高效的数据传输。因此ARPANET 不仅是第一个分组交换网络,它还是一项非常成功且优秀的标准。在我看来,后者更有意思,毕竟我在博客上曾经写过许多基本上以失败告终的标准:[语义网][17]、[RSS][18] 与 [FOAF][19]。
ARPANET 项目初期没有考虑到网络协议,协议的制定是后来的事情了。因此,这项工作自然落到了主要由研究生组成的组织 —— <ruby>网络工作组<rt>Network Working Group</rt></ruby>NWG身上。该组织的首次会议于 1968 年在加州大学圣芭芭拉分校举办 [^15]。当时只有 12 人参会,大部分都是来自上述四所大学的代表 [^16]。来自加州大学洛杉矶分校的研究生<ruby>史蒂夫·克罗克<rt>Steve Crocker</rt></ruby>参加了这场会议。他告诉我,工作组首次会议的参会者清一色都是年轻人,最年长的可能要数会议主席<ruby>埃尔默·夏皮罗<rt>Elmer Shapiro</rt></ruby>了,他当年 38 岁左右。ARPA 没有派人负责研究计算机连接之后如何进行通信,但是很明显它需要提供一定的协助。随着工作组会议的陆续开展,克罗克一直期望着更有经验与威望的“法定成年人”从东海岸飞过来接手这项工作,但是期望终究还是落空了。在 ARPA 的默许之下,工作组举办了多场会议,其中包括很多长途旅行,差旅费由 ARPA 报销,这些就是它给与工作组的全部协助了 [^17]。
当时,网络工作组面临着巨大的挑战。从没有人有过使用通用方式连接计算机系统的经验,而且这本来就与上世纪六十年代末计算机领域盛行的全部观点相悖:
> 那个时候典型的主机表现得就像是它是全宇宙唯一的计算机。即便是最简短的交流会话,两台主机也无法轻易做到。并不是说机器没办法相互连接,只是连接之后,两台计算机又能做些什么呢?当时,计算机和与其相连的其他设备之间的通讯,就像帝王与群臣之间的对话一般。连接到主机的设备各自执行着自己的任务,每台外围设备都保持着常备不懈的状态,等待着上司的命令。当时的计算机就是严格按照这类互动需求设计出来的;它们向读卡器、终端与磁带机等下属设备发号施令,发起所有会话。但是,如果一台计算机拍了拍另一台计算机的肩膀,说道,“你好,我也是一台计算机”,那么另一台计算机可就傻眼了,什么也回答不上来 [^18]。
于是,工作组的最初进展很缓慢 [^19]。直到 1970 年 6 月,也就是首次会议将近两年之后,工作组才为网络协议选定了一套“正式”规范 [^20]。
不过,到了 1972 年,在国际计算机通信会议上展示 ARPANET 的时候,所有的协议已经准备就绪了。会议期间,这些协议运用到了国际象棋等情景之中。用户运行 `@e r` 命令(`@echo remote` 命令的缩写形式),可以指示 TIP 使用新 TELNET 虚拟终端协议提供的服务,通知远程主机它应该回显用户输入的内容。接着,用户运行 `@L 134` 命令(`@login 134` 命令的缩写形式),让 TIP 在 134 号主机上调用<ruby>初始连接协议<rt>Initial Connection Protocol</rt></ruby>,该协议指示远程主机分配出连接所需的全部必要资源,并将用户带入 TELNET 会话中。上述文件传输的情景也许用到了 <ruby>文件传输协议<rt>File Transfer Protocol</rt></ruby>FTP而该协议恰好是在大会举办前夕才刚刚完成的 [^21]。所有这些协议都是“三层”协议其下的第二层是主机到主机的协议定义了主机之间可以相互发送和接收的信息的基本格式第一层是主机到接口通信处理机IMP的协议定义了主机如何与连接的远程设备进行通信。令人感到不可思议的是这些协议都能正常运行。
在我看来,网络工作组之所以能够在大会举办之前做好万全的准备,顺利且出色地完成任务,在于他们采用了开放且非正式的标准化方法,其中一个典型的例子就是著名的 <ruby>征求意见<rt>Request for Comments</rt></ruby>RFC系列文档。RFC 文档最初通过<ruby>传统信件<rt>snail mail</rt></ruby>在工作组成员之间进行传阅让成员们在没有举办会议的时候也能保持联系同时收集成员反馈汇集各方智慧。RFC 框架是克罗克提出的,他写出了第一篇 RFC 文档,并在早期负责管理 RFC 的邮寄列表。他这样做是为了强调工作组开放协作的活动本质。有了这套框架以及触手可及的文档ARPANET 的协议设计过程成了一个大熔炉每个人都可以贡献出自己的力量步步推进精益求精让最棒的想法脱颖而出而没有人失去面子。总而言之RFC 获得了巨大成功,并且直至今天,长达半个世纪之后,它依旧是网络标准的“说明书”。
因此,说起 ARPANET 的影响力,我认为不得不强调的一点正是工作组留下的这一成果。今天,互联网可以把世界各地的人们连接起来,这也是它最神奇的属性之一。不过如果说这项技术到了上世纪才开始使用,那可就有些滑稽可笑了。要知道,在 ARPANET 出现之前,人们就已经通过电报打破了现实距离的限制。而 ARPANET 打破的应该是各个主机站因使用不同的操作系统、字符编码、程序语言以及组织策略而在逻辑层面产生的差异限制。当然,将第一个分组交换网络投入使用在技术方面绝对是一大壮举,这肯定值得一提,不过,制定统一的标准并用以连接原本无法相互协作的计算机,是建立 ARPANET 网络过程中遇到的这两大难题中更为复杂的一个。而这一难题的解决方案,也成了 ARPANET 整个建立与发展历史中最为神奇的一个章节。
1981 年,高级研究计划署发表了一份“完工报告”,回顾了 ARPANET 项目的第一个十年。在《付出收获了回报的技术方面以及付出未能实现最初设想的技术方面》这一冗长的小标题下,作者们写道:
> 或许,在 ARPANET 的开发过程中,最艰难的一项任务就是,尽管主机制造商各不相同,或者同一制造商下操作系统各不相同,我们仍需在众多的独立主机系统之间实现通讯交流。好在这项任务后来取得了成功 [^22]。
你可以从美国联邦政府获得相关信息。
_如果你喜欢这篇文章欢迎关注推特 [@TwoBitHistory][28],也可通过 [RSS 馈送][29] 订阅获取最新文章。_
[^1]: “Hilton Hotel Opens in Capital Today.” _The New York Times_, 20 March 1965, <https://www.nytimes.com/1965/03/20/archives/hilton-hotel-opens-in-capital-today.html?searchResultPosition=1>. Accessed 7 Feb. 2021.
[^2]: James Pelkey. _Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968-1988,_ Chapter 4, Section 12, 2007, <http://www.historyofcomputercommunications.info/Book/4/4.12-ICCC%20Demonstration71-72.html>. Accessed 7 Feb. 2021.
[^3]: Katie Hafner and Matthew Lyon. _Where Wizards Stay Up Late: The Origins of the Internet_. New York, Simon &amp; Schuster, 1996, p. 178.
[^4]: “International Conference on Computer Communication.” _Computer_, vol. 5, no. 4, 1972, p. c2, <https://www.computer.org/csdl/magazine/co/1972/04/01641562/13rRUxNmPIA>. Accessed 7 Feb. 2021.
[^5]: “Program for the International Conference on Computer Communication.” _The Papers of Clay T. Whitehead_, Box 42, <https://d3so5znv45ku4h.cloudfront.net/Box+042/013_Speech-International+Conference+on+Computer+Communications,+Washington,+DC,+October+24,+1972.pdf>. Accessed 7 Feb. 2021.
[^6]: 我其实并不清楚 ARPANET 是在哪个房间展示的。很多地方都提到了“宴会厅”,但是华盛顿希尔顿酒店更习惯于叫它“乔治敦”,而不是把它当成一间会议室。因此,或许这场展示是在国际宴会厅举办的。但是 RFC 372 号文件又提到了预定“乔治敦”作为展示场地一事。华盛顿希尔顿酒店的楼层平面图可以点击 [此处][36] 查看。
[^7]: Hafner, p. 179.
[^8]: ibid., p. 178.
[^9]: Bob Metcalfe. “Scenarios for Using the ARPANET.” _Collections-Computer History Museum_, <https://www.computerhistory.org/collections/catalog/102784024>. Accessed 7 Feb. 2021.
[^10]: Hafner, p. 176.
[^11]: Robert H. Thomas. “Planning for ACCAT Remote Site Operations.” BBN Report No. 3677, October 1977, <https://apps.dtic.mil/sti/pdfs/ADA046366.pdf>. Accessed 7 Feb. 2021.
[^12]: Hafner, p. 12.
[^13]: Joy Lisi Rankin. _A People's History of Computing in the United States_. Cambridge, MA, Harvard University Press, 2018, p. 84.
[^14]: Rankin, p. 93.
[^15]: Steve Crocker. Personal interview. 17 Dec. 2020.
[^16]: 克罗克将会议记录文件发给了我,文件列出了所有的参会者。
[^17]: Steve Crocker. Personal interview.
[^18]: Hafner, p. 146.
[^19]: “Completion Report / A History of the ARPANET: The First Decade.” BBN Report No. 4799, April 1981, <https://walden-family.com/bbn/arpanet-completion-report.pdf>, p. II-13.
[^20]: 这里我指的是 RFC 54 号文件中的“正式协议”。
[^21]: Hafner, p. 175.
[^22]: “Completion Report / A History of the ARPANET: The First Decade,” p. II-29.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2021/02/07/arpanet.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[aREversez](https://github.com/aREversez)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/ARPANET
[10]: https://archive.computerhistory.org/resources/access/text/2019/07/102784024-05-001-acc.pdf
[17]: https://twobithistory.org/2018/05/27/semantic-web.html
[18]: https://twobithistory.org/2018/12/18/rss.html
[19]: https://twobithistory.org/2020/01/05/foaf.html
[28]: https://twitter.com/TwoBitHistory
[29]: https://twobithistory.org/feed.xml
[30]: https://twitter.com/TwoBitHistory/status/1277259930555363329?ref_src=twsrc%5Etfw
[36]: https://www3.hilton.com/resources/media/hi/DCAWHHH/en_US/pdf/DCAWH.Floorplans.Apr25.pdf


@ -0,0 +1,125 @@
[#]: subject: (New ways to learn about open organizations)
[#]: via: (https://opensource.com/open-organization/21/6/celebrate-sixth-anniversary)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)
[#]: collector: (lujun9972)
[#]: translator: (MareDevi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-15133-1.html)
了解开放组织的新途径
======
> 通过参与两个令人兴奋的新项目来庆祝开放组织社区的六周年。
![](https://img.linux.net.cn/data/attachment/album/202210/12/143419tx8nrr51v8x6r515.jpg)
2021 年 6 月 2 日,<ruby>开放组织<rt>Open Organization</rt></ruby>社区庆祝其成立六周年。这是六年来([上百篇的][2])文章、([一系列的][3])书籍、([具有启发性的][4])对话、(我们所 [喜欢的][5])教学和学习。我们非常自豪地成为一个充满活力的开放专家和领导者的社区,致力于将 [开放原则][6] 带到大大小小的组织。事实上,许多 <ruby>[开放组织大使][7]<rt>Open Organization Ambassadors</rt></ruby> 以帮助他人变得更加开放为职业,我们的社区仍然致力于帮助各行业的领导者以开放的心态和行为融入他们的社区和环境中。
[去年][8] 是开放组织项目的一个 [成长][9] 和 [发展][10] 时期。今年,我们将在这一势头的基础上继续努力。今天,我们很自豪地介绍两项新的倡议——当然,也邀请你的参加。
### 开启,调整,开放
首先,我们很高兴地宣布:我们社区的工作有了一个全新的场所。[OpenOrgTV][11]。这不仅仅是一个新的平台。它也是另一种媒介的实验:视频。
在我们的频道上,我们将举办各种对话 —— 从深层次的书评到社区圆桌会议。首先,请查看“<ruby>[开放领导力对话][12]<rt>Open Leadership Conversations</rt></ruby>”系列,其中包括对一些富有洞察力的领导者的采访,分享他们对于“依照开放原则进行领导意味着什么”的看法。或者观看我们的 Q&A 式问答节目 “<ruby>[问大使][13]<rt>Ask the Ambassadors</rt></ruby>”,由社区专家回答你关于组织文化和设计的问题。也想参与这个节目吗?在我们的 [新的专门的论坛][14] 中向社区成员提交你的问题。
整个月,我们都会介绍 <ruby>[开放组织大使][15]<rt>Open Organization Ambassadors</rt></ruby>,让你终于可以看到他们的面孔,并听到你多年来阅读的故事、案例研究和采访背后的声音。
### 定义开放领导力
自从我们几年前发布它以来,<ruby>[开放组织定义][16]<rt>Open Organization Definition</rt></ruby> 已成为更好地理解开放组织文化和设计本质的组织指导框架(并且我们已经做了很多工作来 [教导其他人][17])。随着时间的推移,我们甚至开发了 [一个成熟度模型][18] 来操作该定义,因此组织可以评估自己的开放程度并制定具体计划以变得 _更加_ 开放。
现在,我们认为是时候将这项工作更进一步了。
但是,开放组织社区不仅仅是平台、工具或项目的任何组合。它是所有人都热情地一起工作,以帮助传播开放原则和实践。
受我们自己经验、[红帽][19] 和 [Mozilla][20] 等开放组织已有的框架、多年研究和采访该领域的开放领袖的启发,以及我们对更好地理解开放领导力如何 _真正_ 发挥作用的渴望,我们很高兴公布一份全新文件的早期草案:<ruby>开放领导力定义<rt>Open Leadership Definition</rt></ruby>
本文档概述了各类领导者所特有的心态和行为,正是这些领导者建立了开放型组织,并使其成为思想开放的人能够成长和茁壮发展的地方。它建立在<ruby>开放组织定义<rt>Open Organization Definition</rt></ruby>的基础上,解释了开放型领导者如何体现和倡导开放型组织的特征,如透明度、包容性、适应性、协作性和社区性。
而且我们渴望与世界分享。
从今天开始(在接下来的两周内),我们将收集你对我们文件草案的见解和意见。我们渴望听到你的想法,并将接受你的意见的 _整体_ 或片段。你可以对文件的个别部分或整个文件提出意见。请查看下面的链接。我们期待着听到你的意见。
![Open Leadership Definition word cloud][21]
*Laura Hilliger 提供的开放领导力定义词云 (CC BY-SA)*
#### 开放领导力定义
- [开放领导力定义:简介][22]
- [开放领导力定义:透明度][23]
- [开放领导力定义:包容性][24]
- [开放领导力定义:适应性][25]
- [开放领导力定义:协作][26]
- [开放领导力定义:社区][27]
在我们的共享文件夹中 [阅读全文][28]。
### 联系我们
当然,你仍然可以在所有常见的地方找到我们的社区,如:
* [我们的项目网站][29],你通往整个开放组织项目和社区的门户。
* [我们的对话中心][4],在这里你可以与社区成员互动,提出问题,了解新项目,寻找资源,并帮助他人。
* [我们的 GitHub 组织][30],我们一直在公开研究新项目,并邀请你加入我们
* [我们在 Opensource.com 的发表频道][2],我们在这里为各地区和各行业的从业人员发布最新的分析、案例研究、访谈和资源。
* 我们的 [Twitter][31] 和 [LinkedIn][32] 平台,我们将在这里分享我们的最新进展,并促进新的对话。
但开放组织社区不仅仅是平台、工具或项目的任何组合,而是 _人_:所有人都热情地一起工作,以帮助传播开放的原则和实践。正是这些人使我们的社区如此伟大。
六年来一直如此,并将永远保持下去。
### 从数字上看
![][33]
*Jen Kelchner 提供的信息图*
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/21/6/celebrate-sixth-anniversary
作者:[Laura Hilliger][a]
选题:[lujun9972][b]
译者:[MareDevi](https://github.com/MareDevi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laurahilliger
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openorg_sixth_anniversary.png?itok=3RWyEk5S
[2]: https://opensource.com/open-organization
[3]: https://theopenorganization.org/books
[4]: https://www.theopenorganization.community/
[5]: https://www.youtube.com/watch?v=Snf6vICDbzw&list=PLLIYDJHuxOkaPEH76mIJe-HHplsiSAVej
[6]: https://theopenorganization.org/definition
[7]: https://theopenorganization.org/about
[8]: https://opensource.com/open-organization/20/6/scaling-energetic-community
[9]: https://opensource.com/open-organization/20/7/evolving-project-governance
[10]: https://opensource.com/open-organization/20/8/open-community-rebrands
[11]: http://theopenorganization.tv
[12]: https://www.youtube.com/watch?v=07YBs0ss9rU&list=PLLIYDJHuxOkYDTLbKRjcd9THTFtpnK8lh
[13]: https://www.youtube.com/watch?v=ukkZMYqRuUQ&list=PLLIYDJHuxOkY1gDbOFLDxGxwwmxeOATrI
[14]: https://www.theopenorganization.community/c/ask-community/19
[15]: http://theopenorganization.org/roster/
[16]: https://theopenorganization.org/definition/
[17]: https://youtu.be/NYngFYGgxro
[18]: https://github.com/open-organization/open-org-maturity-model
[19]: https://github.com/red-hat-people-team/red-hat-multiplier
[20]: https://mozilla.github.io/open-leadership-framework/framework/#the-open-leadership-framework
[21]: https://opensource.com/sites/default/files/images/open-org/open_leadership_word_cloud.png (Open Leadership Definition word cloud)
[22]: https://docs.google.com/document/d/1blmf94ED_p4BHGv0luU_XrU26aF7tCzV6WTmh_v-PDY/edit?usp=sharing
[23]: https://docs.google.com/document/d/14ssBBL0h2vxU0WZoMnWs6eo_8oRfJhnAr5yr-fAiLGU/edit?usp=sharing
[24]: https://docs.google.com/document/d/1lRutADes5E0mcwtc6GR_Qw06PuJLc9-wUK5W1Gcf_BA/edit?usp=sharing
[25]: https://docs.google.com/document/d/1RcwWTpkT42bgkf6EPiECt8LyAJ1XZjNGhzk0cQuBB7c/edit?usp=sharing
[26]: https://docs.google.com/document/d/1hTvnpqQkOc76-0UJbV6tAvRxOE--bdt96mqGmAKGqiI/edit?usp=sharing
[27]: https://docs.google.com/document/d/1Zl1smi-4jDZNNWd0oNY8qRH-GDi9q5VfvgyZ7YLkvm4/edit?usp=sharing
[28]: https://drive.google.com/drive/folders/1e1N_0p5lJEwAo_s6hQ3OK0KaJIfc7fgF?usp=sharing
[29]: http://theopenorganization.org/
[30]: https://github.com/open-organization
[31]: https://twitter.com/openorgproject
[32]: https://www.linkedin.com/company/the-open-organization/
[33]: https://opensource.com/sites/default/files/images/open-org/openorgproject_6_anniversary_stats.png


@ -0,0 +1,184 @@
[#]: subject: (Write your first web component)
[#]: via: (https://opensource.com/article/21/7/web-components)
[#]: author: (Ramakrishna Pattnaik https://opensource.com/users/rkpattnaik780)
[#]: collector: (lujun9972)
[#]: translator: (cool-summer-021)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-15148-1.html)
开发你的第一个 Web 组件
======
> 不要做重复的工作;基于浏览器开发 Web App 时,需要制作一些可重用的模块。
![](https://img.linux.net.cn/data/attachment/album/202210/17/101134uzsiis8xsu9wqibi.jpg)
Web 组件是一系列开源技术(例如 JavaScript 和 HTML的集合你可以用它们创建一些 Web App 中可重用的自定义元素。你创建的组件是独立于其他代码的,所以这些组件可以方便地在多个项目中重用。
首先,它是一个平台标准,所有主流的浏览器都支持它。
### Web 组件中包含什么?
* **定制元素**JavaScript API 支持定义 HTML 元素的新类别。
* **影子 DOM**JavaScript API 提供了一种将一个隐藏的、独立的 [文档对象模型][2]DOM附加到一个元素的方法。它通过保留从页面的其他代码分离出来的样式、标记结构和行为特征对 Web 组件进行了封装。它会确保 Web 组件内样式不会被外部样式覆盖反之亦然Web 组件内样式也不会“泄露”到页面的其他部分。
* **HTML 模板**:该元素支持定义可重用的 DOM 元素。可重用 DOM 元素和它的内容不会呈现在 DOM 内,但仍然可以通过 JavaScript 被引用。
### 开发你的第一个 Web 组件
你可以借助你最喜欢的文本编辑器和 JavaScript 写一个简单的 Web 组件。本指南使用 Bootstrap 生成简单的样式,并创建一个简易的卡片式的 Web 组件,给定了位置信息,该组件就能显示该位置的温度。该组件使用了 [Open Weather API][3],你需要先注册,然后创建 APPID/APIKey才能正常使用。
调用该组件,需要给出位置的经度和纬度:
```
<weather-card longitude='85.8245' latitude='20.296' />
```
创建一个名为 `weather-card.js` 的文件,这个文件包含 Web 组件的所有代码。首先,需要定义你的组件,创建一个模板元素,并在其中加入一些简单的 HTML 标签:
```
const template = document.createElement('template');
template.innerHTML = `
  <div class="card">
    <div class="card-body"></div>
  </div>
`
```
定义 Web 组件的类及其构造函数:
```
class WeatherCard extends HTMLElement {
  constructor() {
    super();
    this._shadowRoot = this.attachShadow({ 'mode': 'open' });
    this._shadowRoot.appendChild(template.content.cloneNode(true));
  }
  ......
}
```
构造函数中,附加了 `shadowRoot` 属性,并将它设置为开启模式。然后,把模板内容的一份克隆添加到了 shadowRoot 中。
接着,编写获取属性的函数。对于经度和纬度,你需要向 Open Weather API 发送 GET 请求。这些功能需要在 `connectedCallback` 函数中完成。你可以使用 `getAttribute` 方法访问相应的属性,或定义读取属性的方法,把它们绑定到本对象中。
```
get longitude() {
  return this.getAttribute('longitude');
}
get latitude() {
  return this.getAttribute('latitude');
}
```
现在定义 `connectedCallBack` 方法,它的功能是在需要时获取天气数据:
```
connectedCallback() {
  var xmlHttp = new XMLHttpRequest();
  const url = `http://api.openweathermap.org/data/2.5/weather?lat=${this.latitude}&lon=${this.longitude}&appid=API_KEY`
  xmlHttp.open("GET", url, false);
  xmlHttp.send(null);
  this.$card = this._shadowRoot.querySelector('.card-body');
  let responseObj = JSON.parse(xmlHttp.responseText);
  let $townName = document.createElement('p');
  $townName.innerHTML = `Town: ${responseObj.name}`;
  this._shadowRoot.appendChild($townName);
  let $temperature = document.createElement('p');
  $temperature.innerHTML = `${parseInt(responseObj.main.temp - 273)} &deg;C`
  this._shadowRoot.appendChild($temperature);
}
```
一旦获取到天气数据,新创建的 HTML 元素就会被添加到影子 DOM 中。至此,完成了类的定义。
最后,使用 `window.customElements.define` 方法定义并注册一个新的自定义元素:
```
window.customElements.define('weather-card', WeatherCard);
```
其中,第一个参数是自定义元素的名称,第二个参数是所定义的类。这里是 [整个组件代码的链接][5]。
你的第一个 Web 组件的代码已完成!现在应该把它放入 DOM。为了把它放入 DOM你需要在 HTML 文件(`index.html`)中载入指向 Web 组件的 JavaScript 脚本。
```
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
</head>
<body>
<weather-card longitude='85.8245' latitude='20.296'/>
  <script src='./weather-card.js'></script>
</body>
</html>
```
这就是显示在浏览器中的 Web 组件:
![Web component displayed in a browser][6]
由于 Web 组件中只包含 HTML、CSS 和 JavaScript它们本来就是浏览器所支持的并且可以无瑕疵地跟前端框架例如 React 和 Vue一同使用。下面这段简单的代码展现的是它跟一个由 [Create React App][8] 引导的一个简单的 React App 的整合方法。如果你需要,可以引入前面定义的 `weather-card.js`,把它作为一个组件使用:
```
import './App.css';
import './weather-card';
function App() {
  return (
  <weather-card longitude='85.8245' latitude='20.296'></weather-card>
  );
}
export default App;
```
### Web 组件的生命周期
一切组件都遵循从初始化到移除的生命周期法则。每个生命周期事件都有相应的方法你可以借助这些方法令组件更好地工作。Web 组件的生命周期事件包括:
* `Constructor`Web 组件的构造函数在它被挂载前调用,意味着在元素附加到文档对象前被创建。它用于初始化本地状态、绑定事件处理器以及创建影子 DOM。在构造函数中必须调用 `super()`,执行父类的构造函数。
* `ConnectedCallBack`:当一个元素被挂载(即,插入 DOM 树)时调用。该函数处理创建 DOM 节点的初始化过程中的相关事宜大多数情况下用于类似于网络请求的操作。React 开发者可以将它与 `componentDidMount` 相关联。
* `attributeChangedCallback`:这个方法接收三个参数:`name`, `oldValue``newValue`。组件的任一属性发生变化,就会执行这个方法。属性由静态 `observedAttributes` 方法声明:
```
static get observedAttributes() {
  return ['name', '_id'];
}
```
一旦 `name` 或 `_id` 属性发生改变,就会调用 `attributeChangedCallback` 方法。
* `DisconnectedCallBack`:当一个元素从 DOM 树移除,会执行这个方法。它相当于 React 中的 `componentWillUnmount`。它可以用于释放不能由垃圾回收机制自动清除的资源,比如 DOM 事件的取消订阅、停用计时器或取消所有已注册的回调方法。
* `AdoptedCallback`:每次自定义元素移动到一个新文档时调用。只有在处理 IFrame 时会发生这种情况。
### 模块化开源
Web 组件对于开发 Web App 很有用。无论你是熟练使用 JavaScript 的老手,还是初学者,无论你的目标客户使用哪种浏览器,借助这种开源标准创建可重用的代码都是一件可以轻松完成的事。
*插图Ramakrishna Pattnaik, [CC BY-SA 4.0][7]*
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/web-components
作者:[Ramakrishna Pattnaik][a]
选题:[lujun9972][b]
译者:[cool-summer-021](https://github.com/cool-summer-021)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rkpattnaik780
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://en.wikipedia.org/wiki/Document_Object_Model
[3]: https://openweathermap.org/api
[4]: http://api.openweathermap.org/data/2.5/weather?lat=${this.latitude}\&lon=${this.longitude}\&appid=API\_KEY\`
[5]: https://gist.github.com/rkpattnaik780/acc683d3796102c26c1abb03369e31f8
[6]: https://opensource.com/sites/default/files/uploads/webcomponent.png (Web component displayed in a browser)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://create-react-app.dev/docs/getting-started/


@ -0,0 +1,160 @@
[#]: subject: "Troubleshooting “Bash: Command Not Found” Error in Linux"
[#]: via: "https://itsfoss.com/bash-command-not-found/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "chai001125"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15164-1.html"
解决 Linux 中的 “Bash: Command Not Found” 报错
======
> 本新手教程展示了在 Debian、Ubuntu 和其他的 Linux 发行版上如何解决 “Bash: command not found” 这一报错。
当你在 Linux 中使用命令时,你希望得到终端输出的结果。但有时候,你会遇到终端显示“<ruby>命令未找到<rt>command not found</rt></ruby>”这一报错。
![][1]
对于这个问题,并没有直截了当且单一的解决方案。你必须自己做一些故障排除来解决这个报错。
老实说,要解决它并不难。该报错信息已经给出了一些提示:“命令未找到”,这说明你的 shell或者 Linux 系统)找不到你输入的那条命令。
shell或 Linux 系统)找不到命令,有三个可能的原因:
* 你将命令的名称拼错了
* 该命令还没有安装
* 该命令是一个可执行脚本,但其位置未知
接下来,我们会详细介绍“命令未找到”这一报错的每一个原因。
### 解决“命令未找到”报错
![][2]
#### 方法 1再次检查命令名称有没有写错
每个人都会犯错误,尤其是在打字的时候。你输入的命令可能存在错别字(也就是你写错啦)。
你应该特别注意:
* 是否拼对了正确的命令名称
* 是否在命令与其选项之间加上了空格
* 是否在拼写中混淆了 1数字 1、I大写的 i和 l小写的 L
* 是否正确使用了大写字母或者小写字母
看看下面的示例,因为我写错了 `ls` 命令所以会导致“command not found”报错。
![][3]
所以,请再次仔细确认你输入得对不对。
#### 方法 2确保命令已安装在你的系统上
这是“命令未找到”错误的另一个常见原因。如果命令尚未安装,则无法运行该命令。
虽然在默认情况下,你的 Linux 发行版自带安装了大量命令,但是不会在系统中预装 _所有的_ 命令行工具。如果你尝试运行的命令不是一个流行的常用命令,那么你需要先安装它。
你可以使用发行版的软件包管理器来安装命令。
![You may have to install the missing command][4]
有时候,某一常用命令可能也不再能使用了,甚至你也不能够安装这个命令了。这种情况下,你需要找到一个替代的命令,来得到结果。
以现已弃用的 `ifconfig` 命令为例。网络上的旧教程依旧会让你使用 `ifconfig` 命令,来 [获取本机的 IP 地址][5] 和网络接口信息,但是,在较新的 Linux 版本中,你已经无法使用 `ifconfig` 了。`ifconfig` 命令已被 `ip` 命令所取代。
![Some popular commands get discontinued over the time][1]
有时候,你的系统可能甚至找不到一些非常常见的命令。当你在 Docker 容器中运行 Linux 发行版时就通常如此。Docker 容器为了缩小操作系统镜像的大小,容器中通常不包含那些常见的 Linux 命令。
这就是为什么使用 Docker 的用户会碰到 [ping 命令未找到][6] 等报错的原因。
![Docker containers often have only a few commands installed][7]
因此,这种情况下的解决方案是安装缺失的命令,或者是找到一个与缺失命令有同等功能的工具。
#### 方法 3确保命令是真实的而不是一个别名
我希望你知道 Linux 中的别名概念。你可以配置你自己的较短的命令来代替一个较长命令的输入。
一些发行版,如 Ubuntu会自动提供 `ll``ls -l` 的别名)、`la``ls -a` 的别名)等命令。
![][13]
想象一下,你习惯于在你的个人系统上输入 `ll``la`,而你登录到另一个 Linux 系统,发现 `ll` 命令并不存在。你甚至不能安装 `ll` 命令,因为它不是一个真正的命令。
所以,如果你找不到一个命令,甚至不能安装,你应该尝试在互联网上搜索该命令是否存在。如果不存在,可能是其他系统上的一个别名。
#### 方法 4检查命令是否是一个路径正确的可执行脚本
这是 Linux 新手在 [运行 shell 脚本][8] 时常犯的错误。
即使你在同一目录下,仅用可执行脚本的名称,来运行可执行脚本,也会显示错误。
```
root@host:~/scripts# sample
-bash: sample: command not found
```
因为你需要显式指定 shell 解释器或可执行脚本的路径!
![][9]
如果你在其他目录下,在未提供文件正确路径的情况下,运行 shell 脚本,则会有“<ruby>找不到文件<rt>no such file or directory</rt></ruby>”的报错。
![][10]
> **把可执行文件的路径加到 PATH 变量中**
>
> 有时候你下载了一个软件的压缩文件tar 格式),解压这个 tar 文件,然后找到一个可执行文件和其他程序文件。你需要运行可执行文件,来运行那个软件。
>
> 但是,你需要在可执行文件的同一目录下或指定可执行文件的整个路径,才能运行那个可执行文件。这很令人烦扰。
>
> 你可以使用 `PATH` 变量来解决这个问题。`PATH` 变量包含了有各种 Linux 命令的二进制(可执行)文件的目录集合。当你运行一个命令时,你的 Linux 系统会检查 `PATH` 变量中的上述目录,以查找该命令的可执行文件。
>
> 你可以使用 `which` 命令,来检查某一命令的二进制文件的位置:
>
> ![][11]
>
> 如果你想从系统上的任何地方都能运行可执行文件或脚本,你需要将可执行文件的位置添加到 `PATH` 变量中。
>
> ![][12]
>
> 然后,需要把修改后的 `PATH` 变量写入 shell 的 rc 配置文件中,这样对 `PATH` 变量的更改才是永久性的。
>
> 这里的要点是:你的 Linux 系统必须了解可执行脚本的位置。要么在运行时给出可执行文件的整个路径,要么将其位置添加到 `PATH` 变量中。下面的小脚本演示了这种按 `PATH` 查找命令的过程。
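>
> 作为补充,下面用一小段 Python 脚本(仅为示意)模拟 shell 按照 `PATH` 查找命令的过程,可以帮助理解“命令未找到”到底是怎么发生的:
>
> ```
> import os
>
> def find_command(cmd):
>     # 依次在 PATH 变量列出的每个目录中查找同名且可执行的文件
>     for directory in os.environ.get("PATH", "").split(os.pathsep):
>         candidate = os.path.join(directory, cmd)
>         if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
>             return candidate
>     return None  # 找不到时shell 就会报 “command not found”
>
> print(find_command("ls"))      # 通常会输出类似 /usr/bin/ls 的路径
> print(find_command("sample"))  # 输出 None除非把脚本所在目录加入 PATH
> ```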
### 以上的内容有帮到你吗?
我懂得,当你是 Linux 新手时,很多事情可能会让你不知所措。但是,当你了解问题的根本原因时,你的知识会逐渐增加。
对于“未找到命令”报错来说,没有简单的解决方案。我提供给你了一些提示和要点,我希望这对你的故障排除有帮助。
如果你仍然有疑问或需要帮助,请在评论区告诉我吧。
--------------------------------------------------------------------------------
via: https://itsfoss.com/bash-command-not-found/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[chai001125](https://github.com/chai001125)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-command-not-found-error.png?resize=741%2C291&ssl=1
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-command-not-found-error-1.png?resize=800%2C450&ssl=1
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/command-not-found-error.png?resize=723%2C234&ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/command-not-found-debian.png?resize=741%2C348&ssl=1
[5]: https://itsfoss.com/check-ip-address-ubuntu/
[6]: https://linuxhandbook.com/ping-command-ubuntu/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ping-command-not-found-ubuntu.png?resize=786%2C367&ssl=1
[8]: https://itsfoss.com/run-shell-script-linux/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-script-command-not-found-error-800x331.png?resize=800%2C331&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/script-file-not-found-error-800x259.png?resize=800%2C259&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/path-location.png?resize=800%2C241&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/adding-executable-to-PATH-variable-linux.png?resize=800%2C313&ssl=1
[13]: https://itsfoss.com/wp-content/uploads/2022/01/alias-in-ubuntu.png


@ -0,0 +1,182 @@
[#]: subject: "Reasons for servers to support IPv6"
[#]: via: "https://jvns.ca/blog/2022/01/29/reasons-for-servers-to-support-ipv6/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972"
[#]: translator: "chai001125"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15142-1.html"
服务器支持 IPv6 的原因
======
![](https://img.linux.net.cn/data/attachment/album/202210/15/155046v94vbmo5imykfkxz.jpg)
我一直在努力学习关于 IPv6 的相关知识。一方面IPv6 的基础概念是很简单的(没有足够的 IPv4 地址可以满足互联网上的所有设备,所以人们发明了 IPv6每个人都能有足够的 IPv6 地址!)
但是当我试图进一步理解它时,我遇到了很多问题。其中一个问题是:为什么 twitter.com 不支持 IPv6。假设网站不支持 IPv6 并不会造成很多困难,那么为什么网站需要支持 IPv6 呢?
我在 Twitter 上询问了很多人 [为什么他们的服务器支持 IPv6][1],我得到了很多很好的答案,我将在这里总结一下。事先说明一下,因为我对 IPv6 基本上毫无经验,所以下面所总结的理由中可能会有写得不准确的地方,请大家多多包涵。
首先,我想解释一下为什么 twitter.com 可以不支持 IPv6因为这是最先让我困惑的地方。
### 怎么知道 twitter.com 不支持 IPv6 呢?
你可以使用 `dig` 命令以 `AAAA` 的选项查询某一个域名的 IPv6 地址记录,如果没有记录,则表明该域名不支持 IPv6。除了 twitter.com还有一些大型网站如 github.com 和 stripe.com 也不支持 IPv6。
```
$ dig AAAA twitter.com
(empty response)
$ dig AAAA github.com
(empty response)
$ dig AAAA stripe.com
(empty response)
```
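如果不想依赖 `dig`,也可以用几行 Python 来做同样的检查(这只是一个示意脚本,用的是标准库的 `socket.getaddrinfo`,结果会受到本机解析器配置的影响):

```
import socket

def has_ipv6(hostname):
    # 只请求 IPv6AAAA地址查不到时 getaddrinfo 会抛出 gaierror
    try:
        return len(socket.getaddrinfo(hostname, 443, family=socket.AF_INET6)) > 0
    except socket.gaierror:
        return False

for host in ("twitter.com", "github.com", "stripe.com"):
    print(host, "IPv6:", has_ipv6(host))
```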
### 为什么 twitter.com 仍然适用于 IPv6 用户?
我发现这真的很令人困惑。我一直听说因为 IPv4 地址已经用完了,从而很多互联网用户被迫要使用 IPv6 地址。但如果这是真的twitter.com 怎么能继续为那些没有 IPv6 支持的人提供服务呢?以下内容是我昨天从 Twitter 会话中学习到的。
互联网服务提供商ISP有两种
1. 拥有足够的 IPv4 地址、能给每个用户都分配一个的 ISP
2. 没有足够的 IPv4 地址分给所有用户的 ISP
我的互联网服务提供商属于第 1 类,因此我的计算机有自己的 IPv4 地址,实际上我的互联网服务提供商甚至根本不支持 IPv6。
但是很多互联网服务提供商(尤其是北美以外的)都属于第 2 类:他们没有足够的 IPv4 地址供所有用户使用。这些互联网服务提供商通过以下方式处理问题:
* 为所有用户提供唯一的 IPv6 地址,以便他们可以直接访问 IPv6 网站
* 让用户 _共享_ IPv4 地址,这可以使用 CGNAT<ruby>[运营商级 NAT][2]<rt>carrier-grade NAT</rt></ruby>或者“464XLAT”或其他方式。
所有互联网服务提供商都需要 _一些_ IPv4 地址,否则他们的用户将无法访问 twitter.com 等只能使用 IPv4 的网站。
### 为什么网站要支持 IPv6
现在,我们已经解释了为什么可以 _不支持_ IPv6。那为什么要支持 IPv6 呢?有下面这些原因。
#### 原因一CGNAT 是一个性能瓶颈
对我而言,支持 IPv6 最有说服力的论点是CGNAT 是一个瓶颈,它会导致性能问题,并且随着对 IPv4 地址的访问变得越来越受限,它的性能会变得更糟。
有人也提到:因为 CGNAT 是一个性能瓶颈因此它成为了一个有吸引力的拒绝服务攻击DDoS的目标因为你可以通过攻击一台服务器影响其他用户对该服务器的网站的可用性。
支持 IPv6 的服务器减少了对 CGNAT 的需求IPv6 用户可以直接连接!),这使得互联网对每个人的响应速度都更快了。
我认为这个论点很有趣,因为它需要各方的努力——仅仅你的网站支持 IPv6并不会让你的网站更好地运行而更重要的是如果 _几乎每个网站_ 都支持 IPv6那么它将使每个人的互联网体验更好尤其对于那些无法轻松访问 IPv4 地址的国家/地区。
实际上,我不知道这在实践中会有多大的关系。
不过,使用 IPv6 还有很多更自私的论点,所以让我们继续探讨吧。
#### 原因二:只能使用 IPv6 的服务器也能够访问你的网站
我之前说过,大多数 IPv6 用户仍然可以通过 NAT 方式访问 IPv4 的网站。但是有些 IPv6 用户是不能访问 IPv4 网站的,因为他们发现他们运行的服务器只有 IPv6 地址,并且不能使用 NAT。因此这些服务器完全无法访问只能使用 IPv4 的网站。
我想这些服务器并没有连接很多主机,也许它们只需要连接到一些支持 IPv6 的主机。
但对我来说,即使没有 IPv4 地址,一台主机也应该能够访问我的站点。
#### 原因三:更好的性能
对于同时使用 IPv4 和 IPv6即具有专用 IPv6 地址和共享 IPv4 地址的用户IPv6 通常更快,因为它不需要经过额外的 NAT 地址转换。
因此,有时支持 IPv6 的网站可以为用户提供更快的响应。
在实际应用中,客户端使用一种称为 “Happy Eyeballs” 的算法,该算法会同时尝试 IPv4 和 IPv6并为用户选用最快建立起来的那个连接。
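下面是一个高度简化的 Python 示意片段(假设性的例子,真实的 Happy Eyeballs 算法定义在 RFC 8305 中,还会给 IPv6 一点优先的“抢跑”时间,这里只演示“谁先连上就用谁”的思路):

```
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def try_connect(family, host, port):
    # 按指定的地址族解析并尝试建立 TCP 连接
    addr = socket.getaddrinfo(host, port, family=family, type=socket.SOCK_STREAM)[0][4]
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        s.connect(addr)
    return "IPv6" if family == socket.AF_INET6 else "IPv4"

def happy_eyeballs(host, port=443):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(try_connect, f, host, port)
                   for f in (socket.AF_INET6, socket.AF_INET)]
        for fut in as_completed(futures):
            try:
                return fut.result()  # 哪个先成功就用哪个
            except OSError:
                continue
    return None

print(happy_eyeballs("www.google.com"))
```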
以下是网站支持 IPv6 的一些其他性能优势:
* 使用 IPv6 可以提高搜索引擎优化SEO因为 IPv6 具有更好的性能。
* 使用 IPv6 可能会使你的数据包通过更好(更快)的网络硬件,因为相较于 IPv4IPv6 是一个更新的协议。
#### 原因四:能够恢复 IPv4 互联网中断
有人说他碰到过由于意外的 BGP 中毒,而导致仅影响 IPv4 流量的互联网中断问题。
因此,支持 IPv6 的网站意味着在中断期间,网站仍然可以保持部分在线。
#### 原因五:避免家庭服务器的 NAT 问题
将 IPv6 与家庭服务器一起使用,会变得简单很多,因为数据包不必通过路由器进行端口转发,因此只需为每台服务器分配一个唯一的 IPv6 地址,然后直接访问服务器的 IPv6 地址即可。
当然,要实现这一点,客户端需要支持 IPv6但如今越来越多的客户端也能支持 IPv6 了。
#### 原因六:为了拥有自己的 IP 地址
你也可以自己购买 IPv6 地址,并将它们用于家庭网络的服务器上。如果你更换了互联网服务提供商,可以继续使用相同的 IP 地址。
我不太明白这是如何工作的:互联网上的计算机是如何知道要把这些 IP 地址路由转发给你的我猜测你需要运行自己的自治系统AS或者用到其他什么东西。
#### 原因七:为了学习 IPv6
有人说他们在安全领域中工作,为保证信息安全,了解互联网协议的工作原理非常重要(攻击者正在使用互联网协议进行攻击!)。因此,运行 IPv6 服务器有助于他们了解其工作原理。
#### 原因八:为了推进 IPv6
有人说因为 IPv6 是当前的标准,因此他们希望通过支持 IPv6 来为 IPv6 的成功做出贡献。
很多人还说他们的服务器支持 IPv6是因为他们认为只能使用 IPv4 的网站已经太“落后”了。
#### 原因九IPv6 很简单
我还得到了一堆“使用 IPv6 很容易,为什么不用呢”的答案。在所有情况下添加 IPv6 支持并不容易,但在某些情况下添加 IPv6 支持会是很容易的,有以下的几个原因:
* 你可以从托管公司自动地获得 IPv6 地址,因此你只需要做的就是添加指向该地址的 `AAAA` 记录
* 你的网站是基于支持 IPv6 的内容分发网络CDN因此你无需做任何额外的事情
#### 原因十:为了实施更安全的网络实验
因为 IPv6 的地址空间很大,所以如果你想在网络中尝试某些东西的时候,你可以使用 IPv6 子网进行实验,基本上你之后不会再用到这个子网了。
#### 原因十一为了运行自己的自治系统AS
也有人说他们为了运行自己的自治系统(我在这篇 [BGP 帖子][3] 中谈到了什么是 AS因此在服务器中提供 IPv6。IPv4 地址太贵了,所以他们为运行自治系统而购买了 IPv6 地址。
#### 原因十二IPv6 更加安全
如果你的服务器 _只_ 有公共的 IPv6 地址,那么攻击者扫描整个网络,也不能轻易地找出你的服务器地址,这是因为 IPv6 地址空间太大了以至于不能扫描出来!
这显然不能是你仅有的安全策略,但确实是安全上的一个不小的福利。每次我运行 IPv4 服务器时,我都会惊讶于它被扫描得有多频繁,扫描者总是在寻找诸如老版本 WordPress 博客之类的漏洞。
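可以粗略估算一下这种差距(只是一个示意性的计算,假设攻击者每秒能探测一百万个地址):

```
ipv4_space = 2 ** 32      # 整个 IPv4 地址空间
ipv6_subnet = 2 ** 64     # 仅仅一个 IPv6 的 /64 子网
rate = 1_000_000          # 假设每秒探测一百万个地址

print(ipv4_space / rate / 3600)               # 约 1.2 小时就能扫完整个 IPv4
print(ipv6_subnet / rate / 3600 / 24 / 365)   # 约 58 万年才能扫完一个 /64 子网
```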
#### 一个很傻的理由:你可以在你的 IPv6 地址中放个小彩蛋
IPv6 地址中有很多额外的位你可以用它们做一些不重要的事情。例如Facebook 的 IPv6 地址之一是“2a03:2880:f10e:83:face:b00c:0:25de”其中包含 `face:b00c`)。
### 理由还有很多
这就是到目前为止我所了解的“为什么支持 IPv6”的理由。
在我理解这些原因后,相较于以前,我在我的(非常小的)服务器上支持 IPv6 更有动力了。但那是因为我觉得支持 IPv6对我来说只需要很少的努力。现在我使用的是支持 IPv6 的 CDN所以我基本上不用做什么额外的事情
我仍然对 IPv6 知之甚少,但是在我的印象中,支持 IPv6 并不是不需要花费精力的,实际上可能需要大量工作。例如,我不知道 Twitter 在其边缘服务器上添加 IPv6 支持需要做多少繁杂的工作。
### 其它关于 IPv6 的问题
这里还有一些关于 IPv6 的问题,也许我之后再会探讨:
* 支持 IPv6 的缺点是什么?什么会出错呢?
* 对于拥有了足够 IPv4 地址的 ISP 来说,有什么让他们提供 IPv6 的激励措施?(另一种问法是:我的 ISP 是否有可能在未来几年内转为支持 IPv6或者他们可能不会支持 IPv6
* [Digital Ocean][4] LCTT 译注一家建立于美国的云基础架构提供商面向软件开发人员提供虚拟专用服务器VPS只提供 IPv4 的浮动地址,不提供 IPv6 的浮动地址。为什么不提供呢?有更多 IPv6 地址,那提供 IPv6 的浮动地址不是变得更 _便捷_ 吗?
* 当我尝试 ping IPv6 地址时(例如 example.com 的 IP 地址`2606:2800:220:1:248:1893:25c8:1946`),我得到一个报错信息 `ping: connect: Network is unreachable`。这是为什么呢?(回答:因为我的 ISP 不支持 IPv6所以我的电脑没有公共 IPv6 地址)
这篇 [来自 Tailscale 的 IPv4 与 IPv6 文章][5] 非常有意思,并回答了上述的一些问题。
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2022/01/29/reasons-for-servers-to-support-ipv6/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[chai001125](https://github.com/chai001125)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/b0rk/status/1487156306884636672
[2]: https://en.wikipedia.org/wiki/Carrier-grade_NAT
[3]: https://jvns.ca/blog/2021/10/05/tools-to-look-at-bgp-routes/
[4]: https://docs.digitalocean.com/products/networking/floating-ips/
[5]: https://tailscale.com/kb/1134/ipv6-faq/


@ -0,0 +1,197 @@
[#]: subject: "7 summer book recommendations from open source enthusiasts"
[#]: via: "https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list"
[#]: author: "Joshua Allen Holm https://opensource.com/users/holmja"
[#]: collector: "lkxed"
[#]: translator: "chai001125"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15157-1.html"
来自开源爱好者的 7 本读物推荐
======
> 社区的成员们推荐这些书籍,涵盖了从有趣的悬疑小说到发人深省的非小说作品的各种类型,你一定能从中找到一本你想看的书!
![](https://img.linux.net.cn/data/attachment/album/202210/20/115515jsppwzz8s1ssle7p.jpg)
很高兴能为大家介绍 Opensource.com 的 2022 年暑期阅读清单。今年的榜单包含来自 Opensource.com 社区成员的 7 本精彩的读物推荐。你可以发现各种各样的书籍,涵盖从有趣舒适的谜团到探索发人深省主题的非小说类作品。我希望你能在这个榜单中找到感兴趣的书本。
希望你喜欢!
### 《每个 Java 程序员都应该知道的 97 件事:专家的集体智慧》
![Book title 97 Things Every Java Programmer Should Know][4]
> **<ruby>[每个 Java 程序员都应该知道的 97 件事:专家的集体智慧][5]<rt>97 Things Every Java Programmer Should Know: Collective Wisdom from the Experts</rt></ruby>》**
编辑Kevlin Henney 和 Trisha Gee
*[由 Seth Kenlon 推荐][6]*
这本书是由 73 位在软件行业工作的不同作者共同撰写。它的优秀之处在于它不仅仅适用于 Java 编程。当然,有些章节会涉及 Java但是也还有一些其他话题例如了解你的容器环境、如何更快更好地交付软件、以及不要隐藏你的开发工具这些适用于任何语言的开发。
更好的是,有些章节同样适用于生活中的问题。将问题和任务分成小的部分,是解决任何问题的好建议;建立多样化的团队对所有合作者都很重要;而从一块块散乱的拼图到拼好的成品,这种拼图玩家式的思考方式,也适用于许多不同的工作角色。
每章只有几页,总共有 97 个章节,你可以轻松跳过不适用于你自己的章节。无论你是一直在写 Java 代码、或者只是学过一点 Java亦或是尚未开始学习 Java对于对代码和软件开发过程感兴趣的极客来说这都会是一本好书。
### 《城市不是计算机:其他的城市智能》
![Book title A City is Not a Computer][7]
> **<ruby>[城市不是计算机:其他的城市智能][8]<rt>A City is Not a Computer: Other Urban Intelligences</rt></ruby>》**
作者Shannon Mattern
*[由 Scott Nesbitt 推荐][9]*
如今,让一切变得智能已经成为一种 *时尚*:我们的手机、家用电器、手表、汽车,甚至是城市都变得智能化了。
对于城市的智能化,这意味着传感器变得无处不在,在我们开展业务时收集数据,并根据这些数据向我们推送信息(无论数据有用与否)。
这就引出了一个问题,将所有高科技技术嵌入到城市中是否会使得城市智能化呢?在《城市不是计算机》这本书中,作者 Shannon Mattern 认为并不是这样的。
城市智能化的目标之一是为市民提供服务和更好的城市参与感。Mattern 指出,但是实际上,智慧城市“希望将技术专家的管理想法与公共服务相融合,从而将公民重新设置为‘消费者’和‘用户’”,然而,这并不是在鼓励公民积极参与城市的生活和治理。
第二个问题是关于智慧城市收集的数据。我们不知道收集了什么数据,以及收集了多少数据。我们也不知道这些数据使用在什么地方,以及是谁使用的。收集的数据太多了,以至于处理数据的市政工作人员会不堪重负。他们无法处理所有数据,因此他们专注于短期容易实现的任务,而忽略了更深层次和更紧迫的问题。这绝对达不到在推广智慧城市时所承诺的目标:智慧城市将成为解决城市困境的良药。
《城市不是计算机》是一本短小精悍、经过深入研究的、反对拥抱智慧城市的论证。这本书让我们思考智慧城市的真正目的:要让百姓真正受益于城市智能化,并引发我们的思考:发展智慧城市是否必要呢。
### 《git sync 谋杀案》
![Book title git sync murder][10]
> **<ruby>[git sync 谋杀案][11]<rt>git sync murder</rt></ruby>》**
作者Michael Warren Lucas
*[由 Joshua Allen Holm 推荐][12]*
Dale Whitehead 宁愿呆在家里通过他的电脑终端与世界连接尤其是在他参加的最后一次会议上发生的事情之后。在那次会议上Dale 扮演了一个业余侦探的角色,解决了一桩谋杀案。你可以在该系列的第一本书《<ruby>git commit 谋杀案<rt>git commit murder</rt></ruby>》中读到那个案件。
现在Dale 回到家,参加另一个会议,他再次发现自己成为了侦探。在《<ruby>git sync 谋杀案<rt>git sync murder</rt></ruby>》中Dale 参加了一个当地科技会议/科幻大会会议上发现一具尸体。这是谋杀还是只是一场意外现在Dale 是这些问题的“专家”他发现自己被卷入了这件事并要亲自去弄清楚到底发生了什么。再多说的话就剧透了所以我能说《git sync 谋杀案》这本书十分引人入胜而且读起来很有趣。不必先阅读《git commit 谋杀案》才能阅读《git sync 谋杀案》,但我强烈推荐一起阅读该系列中的这两本书。
作者 Michael Warren Lucas 的《git 谋杀案》系列非常适合喜欢悬疑小说的科技迷。Lucas 写过很多复杂的技术题材的书这本书也延续了他的技术题材《git sync 谋杀案》这本书中的人物在会议活动上谈论技术话题。如果你因为新冠疫情最近没有参加过会议怀念参会体验的话Lucas 将带你参加一个技术会议其中还有一个谋杀之谜以待解决。Dale Whitehead 是一个有趣的业余侦探,我相信大多数读者会喜欢和 Dale 一起参加技术会议,并充当侦探破解谜案的。
### 《像女孩一样踢球》
![Book title Kick Like a Girl][13]
> **<ruby>[像女孩一样踢球][14]<rt>Kick Like a Girl</rt></ruby>》**
作者Melissa Di Donato Roos
*[由 Joshua Allen Holm 推荐][15]*
没有人喜欢被孤立,当女孩 Francesca 想在公园里踢足球时,她也是这样。男孩们不会和她一起玩,因为她是女孩,所以她不高兴地回家了。她的母亲安慰她,讲述了有重要影响力的著名女性的故事。《像女孩一样踢球》中详述的历史人物包括历史中来自许多不同领域的女性。读者将了解 Frida Kahlo、Madeleine Albright、<ruby>阿达·洛芙莱斯<rt>Ada Lovelace</rt></ruby>、Rosa Parks、Amelia Earhart、<ruby>玛丽·居里<rt>Marie Curie</rt></ruby>居里夫人、Valentina Tereshkova、<ruby>弗洛伦斯·南丁格尔<rt>Florence Nightingale</rt></ruby> 和 Malala Yousafzai 的故事。听完这些鼓舞人心的人物故事后Francesca 回到公园,向男孩们发起了一场足球挑战。
《像女孩一样踢球》这本书的特色是作者 Melissa Di Donato RoosSUSE 的 CEOLCTT 译注SUSE 是一家总部位于德国的软件公司,创立于 1992 年,以提供企业级 Linux 为主要业务)引人入胜的写作和 Ange Allen 的出色插图。这本书非常适合年轻读者他们会喜欢押韵的文字和书中的彩色插图。Melissa Di Donato Roos 还写了另外两本童书,《<ruby>美人鱼如何便便<rt>How Do Mermaids Poo?</rt></ruby>》和《<ruby>魔盒<rt>The Magic Box</rt></ruby>》,这两本书也都值得一读。
### 《这是我的!:所有权的潜规则如何控制着我们的生活》
![Book title Mine!][16]
> **<ruby>[这是我的!:所有权的潜规则如何控制着我们的生活][17]<rt>Mine!: How the Hidden Rules of Ownership Control Our Lives</rt></ruby>》**
作者Michael Heller 和 James Salzman
*[由 Bryan Behrenshausen 推荐][18]*
作者 Michael Heller 和 James Salzman 在文章《这是我的》中写道“你对所有权的很多了解都是错误的”。这是一种被吸引到开源领域的人不得不接受所有权规则的对抗性邀请。这本书肯定是为开源爱好者而写的他们对代码、思想、各种知识产权的所有权的看法往往与主流观点和普遍接受的认知不同。在本书中Heller 和 Salzman 列出了“所有权的隐藏规则”,这些规则管理着谁能控制对什么事物的访问。这些所有权规则是微妙的、强大的、有着深刻的历史惯例。这些所有权规则已经变得如此普遍,以至于看起来无可争议,这是因为“先到先得”或“种瓜得瓜,种豆得豆”的规则已经成为陈词滥调。然而,我们看到它们无处不在:在飞机上,为宝贵的腿部空间而战;在街道上,邻居们为铲好雪的停车位发生争执;在法庭上,陪审团决定谁能控制你的遗产和你的 DNA。在当下的数字时代所有权的替代理论能否为重新思考基本权利创造空间作者们认为这是可以的。如果这是正确的我们可能会回应在未来开源软件能否成为所有权运作的模型呢
### 《并非所有童话故事都有幸福的结局:雪乐山公司的兴衰》
![Book Title Not All Fairy Tales Have Happy Endings][19]
> **<ruby>[并非所有童话故事都有幸福的结局:雪乐山公司的兴衰][20]<rt>Not All Fairy Tales Have Happy Endings: The Rise and Fall of Sierra On-Line</rt></ruby>》**
作者Ken Williams
*[由 Joshua Allen Holm 推荐][21]*
在 1980 年代和 1990 年代,<ruby>雪乐山公司<rt>Sierra On-Line</rt></ruby>是计算机软件行业的巨头。这家由 Ken 和 Roberta Williams 夫妻创立的公司,出身并不起眼,但却发布了许多标志性的电脑游戏。《<ruby>国王密使<rt>King's Quest</rt></ruby>》、《<ruby>宇宙传奇<rt>Space Quest</rt></ruby>》、《<ruby>荣耀任务<rt>Quest for Glory</rt></ruby>》、《Leisure Suit Larry》 和 《<ruby>狩魔猎人<rt>Gabriel Knight</rt></ruby>》 只是该公司几个最大的专属系列中的很小一部分。
《并非所有童话故事都有幸福的结局》这本书,涵盖了从雪乐山公司发布第一款游戏 《<ruby>[神秘屋][22]<rt>Mystery House</rt></ruby>》,到该公司不幸地被 CUC 国际公司收购以及后续的所有内容。雪乐山品牌在被收购后仍存活了一段时间,但 Williams 创立的雪乐山已不复存在。Ken Williams 以一种只有他才能做到的方式,讲述了雪乐山公司的整个历史。雪乐山的历史叙述穿插了一些 Williams 提出的管理和计算机编程建议的章节。虽然 Ken Williams 在写这本书时,已经离开这个行业很多年了,但他的建议仍然非常重要。
虽然雪乐山公司已不复存在,但该公司对计算机游戏行业产生了持久的影响。对于任何对计算机软件历史感兴趣的人来说,《并非所有童话故事都有美好的结局》都是值得一读的。雪乐山公司在其鼎盛时期处于游戏开发的最前沿,从带领公司走过那个激动人心的岁月的 Ken Williams 身上,我们可以学到许多宝贵的经验。
### 《新机器的灵魂》
![Book title The Soul of a New Machine][23]
> **<ruby>[新机器的灵魂][24]<rt>The Soul of a New Machine</rt></ruby>》**
作者Tracy Kidder
*[由 Guarav Kamathe 推荐][25]*
我是计算机历史的狂热读者。知道这些人们如此依赖(并且经常被认为是理所当然)的计算机是如何形成的,真是令人着迷!我是在 [Bryan Cantrill][27] 的博客文章中,第一次听说 《[新机器的灵魂][26]》这本书的。这是一本由 [Tracy Kidder][29] 编著的非虚构书籍,于 1981 年出版,作者 Tracy Kidder也因此获得了 [普利策奖][30]。故事发生在 1970 年代,想象一下你是负责设计 [下一代计算机][31] 工程团队中的一员。故事的背景是在<ruby>通用数据公司<rt>Data General Corporation</rt></ruby>,该公司当时是一家小型计算机供应商,正在与美国<ruby>数字设备公司<rt>Digital Equipment Corporation</rt></ruby>DEC的 32 位 VAX 计算机相竞争。该书概述了通用数据公司内部两个相互竞争的团队,都想在设计新机器上一展身手,结果导致了一场争斗。接下来,细致地描绘了随之展开的事件。这本书深入地讲述了相关工程师的思想、他们的工作环境、他们在此过程中面临的技术挑战、他们是如何克服这些困难的、以及压力如何影响到了他们的个人生活等等。任何想知道计算机是怎么制造出来的人都应该阅读这本书。
以上就是 2022 年的推荐阅读书目。它提供了很多非常棒的选择,我相信读者们能得到数小时发人深省的阅读时光。想获取更多书籍推荐,请查看我们历年的阅读书目。
* [2021 年 Opensource.com 推荐阅读书目][32]
* [2020 年 Opensource.com 推荐阅读书目][33]
* [2019 年 Opensource.com 推荐阅读书目][34]
* [2018 年 Open Organization 推荐阅读书目][35]
* [2016 年 Opensource.com 推荐阅读书目][36]
* [2015 年 Opensource.com 推荐阅读书目][37]
* [2014 年 Opensource.com 推荐阅读书目][38]
* [2013 年 Opensource.com 推荐阅读书目][39]
* [2012 年 Opensource.com 推荐阅读书目][40]
* [2011 年 Opensource.com 推荐阅读书目][41]
* [2010 年 Opensource.com 推荐阅读书目][42]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list
作者:[Joshua Allen Holm][a]
选题:[lkxed][b]
译者:[chai001125](https://github.com/chai001125)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/tea-cup-mug-flowers-book-window.jpg
[2]: https://unsplash.com/@sixteenmilesout?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/tea?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://opensource.com/sites/default/files/2022-06/97_Things_Every_Java_Programmer_Should_Know_1.jpg
[5]: https://www.oreilly.com/library/view/97-things-every/9781491952689/
[6]: https://opensource.com/users/seth
[7]: https://opensource.com/sites/default/files/2022-06/A_City_is_Not_a_Computer_0.jpg
[8]: https://press.princeton.edu/books/paperback/9780691208053/a-city-is-not-a-computer
[9]: https://opensource.com/users/scottnesbitt
[10]: https://opensource.com/sites/default/files/2022-06/git_sync_murder_0.jpg
[11]: https://mwl.io/fiction/crime#gsm
[12]: https://opensource.com/users/holmja
[13]: https://opensource.com/sites/default/files/2022-06/Kick_Like_a_Girl.jpg
[14]: https://innerwings.org/books/kick-like-a-girl
[15]: https://opensource.com/users/holmja
[16]: https://opensource.com/sites/default/files/2022-06/Mine.jpg
[17]: https://www.minethebook.com/
[18]: https://opensource.com/users/bbehrens
[19]: https://opensource.com/sites/default/files/2022-06/Not_All_Fairy_Tales.jpg
[20]: https://kensbook.com/
[21]: https://opensource.com/users/holmja
[22]: https://en.wikipedia.org/wiki/Mystery_House
[23]: https://opensource.com/sites/default/files/2022-06/The_Soul_of_a_New_Machine.jpg
[24]: https://www.hachettebookgroup.com/titles/tracy-kidder/the-soul-of-a-new-machine/9780316204552/
[25]: https://opensource.com/users/gkamathe
[26]: https://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine
[27]: https://en.wikipedia.org/wiki/Bryan_Cantrill
[28]: http://dtrace.org/blogs/bmc/2019/02/10/reflecting-on-the-soul-of-a-new-machine/
[29]: https://en.wikipedia.org/wiki/Tracy_Kidder
[30]: https://www.pulitzer.org/winners/tracy-kidder
[31]: https://en.wikipedia.org/wiki/Data_General_Eclipse_MV/8000
[32]: https://opensource.com/article/21/6/2021-opensourcecom-summer-reading-list
[33]: https://opensource.com/article/20/6/summer-reading-list
[34]: https://opensource.com/article/19/6/summer-reading-list
[35]: https://opensource.com/open-organization/18/6/summer-reading-2018
[36]: https://opensource.com/life/16/6/2016-summer-reading-list
[37]: https://opensource.com/life/15/6/2015-summer-reading-list
[38]: https://opensource.com/life/14/6/annual-reading-list-2014
[39]: https://opensource.com/life/13/6/summer-reading-list-2013
[40]: https://opensource.com/life/12/7/your-2012-open-source-summer-reading
[41]: https://opensource.com/life/11/7/summer-reading-list
[42]: https://opensource.com/life/10/8/open-books-opensourcecom-summer-reading-list


@ -0,0 +1,260 @@
[#]: subject: "Advantages and Disadvantages of Using Linux"
[#]: via: "https://itsfoss.com/advantages-linux/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "chai001125"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15136-1.html"
使用 Linux 的优势和劣势
======
![](https://img.linux.net.cn/data/attachment/album/202210/13/000526wn58kyntpp0ynt0z.jpg)
Linux 是一个流行词,你到处都能听到与 Linux 相关的内容人们在技术论坛上讨论它它是技术课程的一部分你最喜欢的 YouTube 技术主播兴奋地展示着如何构建他们的 Linux 内核;你在 Twitter 上关注的 <ruby>10 倍效率开发者<rt>10x developers</rt></ruby>都是 Linux 粉丝。
基本上Linux 无处不在,每个人都在谈论它,因此你可能会不由自主地产生一种没去“学习 Linux”就错失了什么的不安。
所以,你想知道 Linux 的优势是什么,以及它是否值得去学习。
在这篇文章中,我总结了很多 Linux 的优势和劣势。
如果你在选择 Linux 还是你喜欢的操作系统上犹豫不决,我们愿意为你提供一些帮助。
> 在开始之前我们要指出的是“Linux” 本身并不是一个操作系统,它的操作系统被称为 [Linux 发行版][1],而且 Linux 的发行版有数百种。为简单起见,我将其称为 Linux 操作系统,而不是某个特定的 Linux 发行版。可以参考 [这篇文章][2],来更好地理解这些东西。
### 使用 Linux 的优势
如果你想使用 Linux 替代现在的操作系统,那么只有当你了解 Linux 的优势,才会有意义。
如果 Linux 在你想要它做的事情上表现出色,你将永远都不会后悔你的决定。
#### 不用购买许可证
![open source proprietary illustration][3]
你需要拥有苹果公司的设备,才能使用 macOS 作为日常使用;你需要拥有 Windows 许可证,才能使用微软的 Windows。
因此,你需要对这些东西进行一定的投资。但是,对于 Linux 呢?它是完全免费的!
与 Windows 和 macOS 相比不仅仅是操作系统上的不同Linux 上还有许多免费的软件包。
你无需支付许可证费用,就可以使用所有主流的 Linux 发行版。当然,你可以选择捐赠来支持该项目,但这完全取决于你自己的意愿。
**此外**Linux 是完全开源的,这意味着所有人都能检查源代码的透明度。
#### 能以最小的系统资源运行
![linux mint 21 resource usage][4]
通常,用户考虑尝试另一个操作系统,是因为他们对现有系统的性能感到沮丧。
这也是我的个人经历。我受朋友的委托,使用 Linux 来更新他们的旧笔记本电脑或经常滞后的系统。
而且Linux 发行版能够在普通的硬件配置上运行,你不需要拥有最新最好的硬件。此外,还有专门的 [轻量级 Linux 发行版][5] 可以在较旧的硬件上运行而不会出现问题。
因此,如果你立即使用 Linux你有更多的机会恢复你的旧系统或在短时间内获得一个快速的计算机。
#### 更少地受到恶意软件的威胁
![malware illustration][6]
没有操作系统可以免受恶意文件或脚本的侵害。如果你从未知来源下载并运行某些内容,则无法保证其安全性。
然而,对于 Linux情况会更好一些。诚然研究人员已经发现了针对 Linux 物联网设备的攻击者。但是,对于桌面 Linux还无须担心。
恶意攻击者攻击的目标是更受家庭欢迎的平台,而 Linux 在桌面领域并没有很大的市场份额来吸引到这种关注。在某种程度上,这可能是一件好事。
你要做的就是坚持使用官方软件包,并在执行任何操作之前阅读指导说明。
另外,在 Linux 上,你也不用安装防病毒程序,来保护本机免受恶意软件的威胁。
#### 可个性化定制
![Pop!_OS 22.04 LTS][7]
有了开源的代码,你就可以根据需要自由定制你的 Linux 体验。
当然,你需要具备一些专业知识,才能充分地定制你的 Linux。但是与 macOS 和 Windows 相比,即使你没有任何经验,也可以在 Linux 操作系统中获得更多自定义功能。
![Customized Linux experience | Reddit user: u/ZB652][8]
如果你想要个性化你的体验,并愿意付出额外的努力,那么 Linux 就非常适合你。例如,你可以参考 [KDE 定制指南][10] 和 [停靠区选项][11] 以获得基本的自定义方法。
#### 适用于所有人
使用 macOS 或 Windows你只能在微软或苹果最终确定的设计/偏好选择中,做出你的选择。
但是,对于 Linux你能发现专注于各种事情的不同的 Linux 发行版。
例如,你可以选择能始终获取最新功能的 Linux 发行版,或者你也可以选择只为你提供安全/维护更新的 Linux 发行版。
你可以使用有开箱即用、外观好看的 Linux 发行版,或提供最大程度的自定义选项的 Linux 发行版。Linux 发行版的选择是多种多样的。
我建议你从 [能提供最佳用户体验的选项][12] 开始。
#### 完整的开发环境
如果你是软件开发人员或学习编程的学生Linux 绝对是有优势的。许多构建工具都能在 Linux 上使用,并能够集成到 Linux 中。使用容器Docker你可以轻松创建专门的测试环境。
微软知道这个价值,因此它创建了 WSL让开发人员可以在 Windows 内访问 Linux 环境。尽管如此WSL 并没有接近真正的 Linux 体验,在 Windows 上使用 Docker 也同样如此。
但是这并不适用于网页设计,因为极为好用的 Adobe 工具并不能在 Linux 上使用。但是,如果你的工作不需要 AdobeLinux 会是一个不错的选择。
#### Linux 是一项必须学习的技能!
使用 Linux 有一个学习曲线,刚开始时掌握的速度最快,之后则逐渐变得平缓,但是它给你提供了对各种事物的洞察力。
你可以通过探索和自由定制 Linux或者仅仅是通过使用它来了解操作系统中的事物是如何工作的。
不是每个人都知道如何使用 Linux。
因此,通过学习 Linux 来获得和扩展你对软件和计算机的知识会是一项很棒的技能。
#### Linux 是一个必要的工作技能
![job illustration][13]
正如我之前提及的,学习 Linux 是一个很好的技能,这不仅仅能增长你的知识,它在职业方面也很有用。
通过学习 Linux 的基础知识,你可以成为 Linux 系统管理员或安全专家,并且能胜任很多其他的工作。
因此,学习 Linux 开辟了一系列机会!
#### 保护隐私
如果你没有微软账号,那么你就不能使用 Windows。当你启动 Windows 时,你会发现它会在很多的服务和应用中记录你的数据。
![privacy windows][14]
虽然你可以找到此类设置并禁用它们但很明显Windows 的默认配置不会考虑你的隐私。
而在 Linux 中,并非如此。虽然某些应用程序/发行版会有一个可选功能让你可以与他们分享有用的东西但这并不是什么大问题。Linux 上的大多数东西都是经过定制的,默认情况下可以为你提供最大的隐私,从而无需配置任何东西。
但是,苹果和微软会采用巧妙的策略从你的计算机收集匿名的使用数据。偶尔,他们会记录你在他们的应用商店的活动,以及当你通过你的账户登录时的信息。
#### 自定义项目和自托管
你是一个喜欢捣鼓小发明的人吗如果你喜欢制作电子或软件项目Linux 会是你的发明天堂。
你可以在 [诸如树莓派这样的单板机][15] 上使用 Linux开发出一些很酷的东西例如复古游戏机、家庭自动化系统等等。
你也能在你自己的服务器上部署开源的软件,并维护他们。这称为自托管,它有以下的优点:
* 减少托管费用
* 掌控你的数据
* 对于你的每个需求,定制应用/服务
你能直接使用 Linux 或者使用基于 Linux 的工具,来做这所有的事情。
### 使用 Linux 的劣势
Linux 并不是一个没有缺点的选择。任何事都具有两面性Linux 也有一些不好的地方,包括:
#### 不容易快速上手
![too much learn illustration][16]
学习的目的通常不在于掌握一项新技能,更重要的是尽可能快地适应。
如果用户使用某一个东西,却无法完成任务,那么它并不适合他们。对于每个操作系统也是如此。例如,使用 Windows/macOS 的用户可能不会很快适应 Linux。
你可以阅读我们的比较文章以了解 [macOS 和 Linux 之间的区别][17]。
我同意一些人会比其他人学习速度更快。但是,总体而言,当你踏入 Linux 世界时,你需要付出一点努力,去学习那些不明显的东西。
#### 多样性
虽然我们建议使用 [为初学者量身定制的最佳 Linux 发行版][18],但一开始就选择你喜欢的版本,可能会让人不知所措。
你可能会想尝试其中多个版本,以查看最适合你的 Linux 发行版,但是这既耗时又令人十分困惑。
最好选择其中一种 Linux 发行版。但是,如果你仍然感到困惑,你可以仍旧使用 Windows/macOS。
#### 在桌面领域的市场份额相对较低
![linux desktop market share][19]
Linux 不是流行的桌面操作系统。
这不应该是用户关心的问题。但是,如果没有大的市场占有率,就不能指望应用程序开发人员为 Linux 开发/维护工具。
当然,现在 Linux 有很多重要且流行的工具,比以往任何时候都多。但是,这仍然是一个因素,意味着并非所有好的工具/服务都可以在 Linux 上运行。
请参阅我们定期更新的关于 [Linux 的市场份额][20] 的文章,了解相关内容。
#### 缺少专有软件
正如我上面提到的,并不是开发者都对将他们的工具/应用程序引入 Linux 感兴趣。
因此,你可能在 Linux 上找不到适用于 Windows/macOS 的所有优质专有产品。诚然,你可以使用兼容层在 Linux 上运行 Windows/macOS 程序。
但这并不总是有效。例如,你没有支持 Linux 的官方微软 365 和像 Wallpaper Engine 这样的工具。
#### 不是游戏优先的操作系统
![gaming illustration][21]
如果你想在电脑上玩游戏Windows 仍然是支持最新硬件和技术的最佳选择。
谈到 Linux有很多 “如果和但是” 需要一个明确的答案。
请注意,你可以在 Linux 上玩很多现代游戏,但在各种不同的硬件上可能不会有一致的体验。正如我们的一位读者在评论中建议的那样,你可以使用 Steam Play 在 Linux 上尝试许多 Windows 独占的游戏,而不会出现潜在的障碍。
Steam Deck 正在鼓励更多的游戏开发者使他们的游戏在 Linux 上运行得更好。而且,这在不久的将来只会得到改善。因此,如果你能花点功夫在 Linux 上尝试你最喜欢的游戏,可能不会让人失望。
话虽如此,在 Linux 上玩游戏并不方便。如果你有兴趣,可以参考我们的 [Linux 游戏指南][22] 以了解更多信息。
#### 缺少专业的技术支持
我知道不是每个人都需要技术支持。但是,一些技术支持选项能够在他们的笔记本电脑或计算机上远程指导用户/修复问题。
使用 Linux你可以向社区寻求帮助但它可能不像某些专业技术支持服务那样好用。
你仍然需要自己完成大部分努力,并自己尝试一些东西,并不是每个人都喜欢这样做的。
### 总结
我主要是 Linux 用户,但我在玩游戏时使用 Windows。虽然我偏好 Linux但我尽力在这篇文章中对 Linux 保持中立态度,并给你足够的指导,以便你可以决定 Linux 是否适合你。
如果你打算使用 Linux并且从未使用过它请迈出你的第一步吧可以参考 [在虚拟机中使用 Linux 的第一步][23]。如果你有 Windows 11你也可以使用 WSL2。
我非常乐意收到你的评价和建议。
--------------------------------------------------------------------------------
via: https://itsfoss.com/advantages-linux/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[chai001125](https://github.com/chai001125)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/what-is-linux/
[2]: https://itsfoss.com/what-is-linux/
[3]: https://itsfoss.com/wp-content/uploads/2022/08/open-source-proprietary-illustration.jpg
[4]: https://itsfoss.com/wp-content/uploads/2022/08/linux-mint-21-resource-usage.jpg
[5]: https://itsfoss.com/lightweight-linux-beginners/
[6]: https://itsfoss.com/wp-content/uploads/2022/09/malware-illustration.jpg
[7]: https://itsfoss.com/wp-content/uploads/2022/08/pop-os-screenshot-2022.png
[8]: https://itsfoss.com/wp-content/uploads/2022/09/customization-reddit-unixporn.jpg
[9]: https://www.reddit.com/r/unixporn/comments/wzu5nl/plasma_cscx2n/
[10]: https://itsfoss.com/kde-customization/
[11]: https://itsfoss.com/best-linux-docks/
[12]: https://itsfoss.com/beautiful-linux-distributions/
[13]: https://itsfoss.com/wp-content/uploads/2022/09/job-illustration.jpg
[14]: https://itsfoss.com/wp-content/uploads/2022/09/privacy-windows.webp
[15]: https://itsfoss.com/raspberry-pi-alternatives/
[16]: https://itsfoss.com/wp-content/uploads/2022/09/too-much-learn-illustration.jpg
[17]: https://itsfoss.com/mac-linux-difference/
[18]: https://itsfoss.com/best-linux-beginners/
[19]: https://itsfoss.com/wp-content/uploads/2017/09/linux-desktop-market-share.jpg
[20]: https://itsfoss.com/linux-market-share/
[21]: https://itsfoss.com/wp-content/uploads/2022/08/gaming-illustration.jpg
[22]: https://itsfoss.com/linux-gaming-guide/
[23]: https://itsfoss.com/why-linux-virtual-machine/


@ -0,0 +1,527 @@
[#]: subject: "Python Microservices Using Flask on Kubernetes"
[#]: via: "https://www.opensourceforu.com/2022/09/python-microservices-using-flask-on-kubernetes/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15154-1.html"
在 Kubernetes 上使用 Flask 搭建 Python 微服务
======
![](https://img.linux.net.cn/data/attachment/album/202210/19/124429nmw0xmfz3x3mrrf2.jpg)
*微服务遵循领域驱动设计DDD与开发平台无关。Python 微服务也不例外。Python3 的面向对象特性使得按照 DDD 对服务进行建模变得更加容易。本系列的第 10 部分演示了如何将用户管理系统的查找服务作为 Python 微服务部署在 Kubernetes 上。*
微服务架构的强大之处在于它的多语言性。企业将其功能分解为一组微服务,每个团队自由选择一个平台。
我们的用户管理系统已经分解为四个微服务,分别是添加、查找、搜索和日志服务。添加服务在 Java 平台上开发并部署在 Kubernetes 集群上,以实现弹性和可扩展性。这并不意味着其余的服务也要使用 Java 开发,我们可以自由选择适合个人服务的平台。
让我们选择 Python 作为开发查找服务的平台。查找服务的模型已经设计好了(参考 2022 年 3 月份的文章),我们只需要将这个模型转换为代码和配置。
### Pythonic 方法
Python 是一种通用编程语言,已经存在了大约 30 年。早期,它是自动化脚本的首选。然而,随着 Django 和 Flask 等框架的出现它的受欢迎程度越来越高现在各种领域中都在应用它如企业应用程序开发。数据科学和机器学习进一步推动了它的发展Python 现在是三大编程语言之一。
许多人将 Python 的成功归功于它容易编码。这只是一部分原因。只要你的目标是开发小型脚本Python 就像一个玩具,你会非常喜欢它。然而,当你进入严肃的大规模应用程序开发领域时,你将不得不处理大量的 `if``else`Python 变得与任何其他平台一样好或一样坏。例如,采用一种面向对象的方法!许多 Python 开发人员甚至可能没意识到 Python 支持类、继承等功能。Python 确实支持成熟的面向对象开发,但是有它自己的方式 -- Pythonic让我们探索一下
### 领域模型
`AddService` 通过将数据保存到一个 MySQL 数据库中,来将用户添加到系统中。`FindService` 的目标是提供一个 REST API按用户名查找用户。域模型如图 1 所示。它主要由 `Name`、`PhoneNumber` 这样的值对象,以及 `User` 实体和 `UserRepository` 组成。
![图 1: 查找服务的域模型][1]
让我们从 `Name` 开始。由于它是一个值对象,因此必须在创建时进行验证,并且必须保持不可变。基本结构如所示:
```
class Name:
    value: str

    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
如你所见,`Name` 包含一个字符串类型的值。作为后期初始化的一部分,我们会验证它。
Python 3.7 提供了 `@dataclass` 装饰器,它提供了许多开箱即用的数据承载类的功能,如构造函数、比较运算符等。如下是装饰后的 `Name` 类:
```
from dataclasses import dataclass

@dataclass
class Name:
    value: str

    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
以下代码可以创建一个 `Name` 对象:
```
name = Name("Krishna")
```
`value` 属性可以按照如下方式读取或写入:
```
name.value = "Mohan"
print(name.value)
```
可以很容易地与另一个 `Name` 对象比较,如下所示:
```
other = Name("Mohan")
if name == other:
    print("same")
```
如你所见,对象比较的是值而不是引用。这一切都是开箱即用的。我们还可以通过冻结对象使对象不可变。这是 `Name` 值对象的最终版本:
```
from dataclasses import dataclass

@dataclass(frozen=True)
class Name:
    value: str

    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
`PhoneNumber` 也遵循类似的方法,因为它也是一个值对象:
```
@dataclass(frozen=True)
class PhoneNumber:
    value: int

    def __post_init__(self):
        if self.value < 9000000000:
            raise ValueError("Invalid Phone Number")
```
`User` 类是一个实体,不是一个值对象。换句话说,`User` 是可变的。以下是结构:
```
from dataclasses import dataclass
import datetime

@dataclass
class User:
    _name: Name
    _phone: PhoneNumber
    _since: datetime.datetime

    def __post_init__(self):
        if self._name is None or self._phone is None:
            raise ValueError("Invalid user")
        if self._since is None:
            self._since = datetime.datetime.now()
```
你能观察到 `User` 并没有冻结,因为我们希望它是可变的。但是,我们不希望所有属性都是可变的。标识字段如 `_name``_since` 是希望不会修改的。那么,这如何做到呢?
Python3 提供了所谓的描述符协议,它会帮助我们正确定义 getter 和 setter。让我们使用 `@property` 装饰器将 getter 添加到 `User` 的所有三个字段中。
```
@property
def name(self) -> Name:
    return self._name

@property
def phone(self) -> PhoneNumber:
    return self._phone

@property
def since(self) -> datetime.datetime:
    return self._since
```
`phone` 字段的 setter 可以使用 `@<字段>.setter` 来装饰:
```
@phone.setter
def phone(self, phone: PhoneNumber) -> None:
    if phone is None:
        raise ValueError("Invalid phone")
    self._phone = phone
```
通过重写 `__str__()` 函数,也可以为 `User` 提供一个简单的打印方法:
```
def __str__(self):
    return self.name.value + " [" + str(self.phone.value) + "] since " + str(self.since)
```
这样,域模型的实体和值对象就准备好了。创建异常类如下所示:
```
class UserNotFoundException(Exception):
    pass
```
域模型现在只剩下 `UserRepository` 了。Python 提供了一个名为 `abc` 的有用模块来创建抽象方法和抽象类。因为 `UserRepository` 只是一个接口,所以我们可以使用 `abc` 模块。
任何继承自 `abc.ABC` 的类都将变为抽象类,任何带有 `@abc.abstractmethod` 装饰器的函数都会变为一个抽象函数。下面是 `UserRepository` 的结构:
```
from abc import ABC, abstractmethod

class UserRepository(ABC):
    @abstractmethod
    def fetch(self, name: Name) -> User:
        pass
```
`UserRepository` 遵循仓储模式。换句话说,它在 `User` 实体上提供适当的 CRUD 操作,而不会暴露底层数据存储语义。在本例中,我们只需要 `fetch()` 操作,因为 `FindService` 只查找用户。
因为 `UserRepository` 是一个抽象类,我们不能从抽象类创建实例对象。创建对象必须依赖于一个具体类实现这个抽象类。数据层 `UserRepositoryImpl` 提供了 `UserRepository` 的具体实现:
```
class UserRepositoryImpl(UserRepository):
    def fetch(self, name: Name) -> User:
        pass
```
由于 `AddService` 将用户数据存储在一个 MySQL 数据库中,因此 `UserRepositoryImpl` 也必须连接到相同的数据库去检索数据。下面是连接到数据库的代码。注意,我们正在使用 MySQL 的连接库。
```
from mysql.connector import connect, Error

class UserRepositoryImpl(UserRepository):
    def fetch(self, name: Name) -> User:
        try:
            with connect(
                host="mysqldb",
                user="root",
                password="admin",
                database="glarimy",
            ) as connection:
                with connection.cursor() as cursor:
                    cursor.execute("SELECT * FROM ums_users where name=%s", (name.value,))
                    row = cursor.fetchone()
                    if cursor.rowcount == -1:
                        raise UserNotFoundException()
                    else:
                        return User(Name(row[0]), PhoneNumber(row[1]), row[2])
        except Error as e:
            raise e
```
在上面的片段中,我们使用用户 `root` / 密码 `admin` 连接到一个名为 `mysqldb` 的数据库服务器,使用名为 `glarimy` 的数据库(模式)。在演示代码中是可以包含这些信息的,但在生产中不建议这么做,因为这会暴露敏感信息。
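LCTT 译注作为补充原文没有这段代码生产环境中更常见的做法是通过环境变量来传入这些连接信息下面是一个最小的示例其中的环境变量名`DB_HOST``DB_USER``DB_PASSWORD``DB_NAME`只是假设的名字请按自己的部署方式命名

```
import os
from mysql.connector import connect

# 假设的环境变量名,仅作演示;默认值与上文示例保持一致
db_config = {
    "host": os.environ.get("DB_HOST", "mysqldb"),
    "user": os.environ.get("DB_USER", "root"),
    "password": os.environ["DB_PASSWORD"],  # 不设默认值,缺失时直接报错,避免把密码写进代码
    "database": os.environ.get("DB_NAME", "glarimy"),
}

connection = connect(**db_config)
```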
`fetch()` 操作的逻辑非常直观,它对 `ums_users` 表执行 SELECT 查询。回想一下,`AddService` 正在将用户数据写入同一个表中。如果 SELECT 查询没有返回记录,`fetch()` 函数将抛出 `UserNotFoundException` 异常。否则,它会从记录中构造 `User` 实体并将其返回给调用者。这没有什么特殊的。
### 应用层
最终,我们需要创建应用层。此模型如图 2 所示。它只包含两个类:控制器和一个 DTO。
![图 2: 查找服务的应用层][2]
众所周知,一个 DTO 只是一个没有任何业务逻辑的数据容器,它主要用于在 `FindService` 和外部之间传输数据。我们只为它提供了一个方法,以便在 REST 层中将 `UserRecord` 转换为字典,用于 JSON 传输:
```
class UserRecord:
    def toJSON(self):
        return {
            "name": self.name,
            "phone": self.phone,
            "since": self.since
        }
```
控制器的工作是将 DTO 转换为用于域服务的域对象,反之亦然。可以从 `find()` 操作中观察到这一点。
```
class UserController:
    def __init__(self):
        self._repo = UserRepositoryImpl()

    def find(self, name: str):
        try:
            user: User = self._repo.fetch(Name(name))
            record: UserRecord = UserRecord()
            record.name = user.name.value
            record.phone = user.phone.value
            record.since = user.since
            return record
        except UserNotFoundException as e:
            return None
```
`find()` 操作接收一个字符串作为用户名,然后将其转换为 `Name` 对象,并调用 `UserRepository` 获取相应的 `User` 对象。如果找到了,则使用检索到的 `User` 对象创建 `UserRecord`。回想一下,将域对象转换为 DTO 是很有必要的,这样可以对外部服务隐藏域模型。
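LCTT 译注作为补充原文没有这段代码下面的用法示例假设数据库中已经存在名为 `KrishnaMohan` 的用户展示了控制器如何把域对象转换为可序列化的 DTO

```
# 补充示例:直接调用控制器(假设数据库中已有名为 KrishnaMohan 的用户)
controller = UserController()
record = controller.find("KrishnaMohan")
if record is None:
    print("user not found")
else:
    print(record.toJSON())
```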
`UserController` 不需要有多个实例,它也可以是单例的。通过重写 `__new__`,可以将其建模为一个单例。
```
class UserController:
    def __new__(cls):
        if not hasattr(cls, 'instance'):
            cls.instance = super().__new__(cls)
        return cls.instance

    def __init__(self):
        self._repo = UserRepositoryImpl()

    def find(self, name: str):
        try:
            user: User = self._repo.fetch(Name(name))
            record: UserRecord = UserRecord()
            record.name = user.name.value
            record.phone = user.phone.value
            record.since = user.since
            return record
        except UserNotFoundException as e:
            return None
```
我们已经完全实现了 `FindService` 的模型,剩下的唯一任务是将其作为 REST 服务公开。
### REST API
`FindService` 只提供一个 API那就是通过用户名查找用户。显然 URI 如下所示:
```
GET /user/{name}
```
此 API 希望根据提供的用户名查找用户,并以 JSON 格式返回用户的电话号码等详细信息。如果没有找到用户API 将返回一个 404 状态码。
我们可以使用 Flask 框架来构建 REST API它最初的目的是使用 Python 开发 Web 应用程序。除了 HTML 视图,它还进一步扩展到支持 REST 视图。我们选择这个框架是因为它足够简单。
创建一个 Flask 应用程序:
```
from flask import Flask
app = Flask(__name__)
```
然后为 Flask 应用程序定义路由,就像函数一样简单:
```
@app.route('/user/<name>')
def get(name):
    pass
```
注意 `@app.route` 映射到了 API `/user/<name>`,与之对应的函数是 `get()`。
如你所见,每次用户访问 API 如 `http://server:port/user/Krishna` 时,都将调用这个 `get()` 函数。Flask 足够智能,可以从 URL 中提取 `Krishna` 作为用户名,并将其传递给 `get()` 函数。
`get()` 函数很简单。它要求控制器找到该用户,并将其与通常的 HTTP 头一起打包为 JSON 格式后返回。如果控制器返回 `None`,则 `get()` 函数返回合适的 HTTP 状态码。
```
from flask import jsonify, abort

controller = UserController()
record = controller.find(name)
if record is None:
    abort(404)
else:
    resp = jsonify(record.toJSON())
    resp.status_code = 200
    return resp
```
最后,我们需要让 Flask 应用程序对外提供服务,这可以使用 `waitress` 来实现:
```
from waitress import serve
serve(app, host="0.0.0.0", port=8080)
```
在上面的片段中,应用程序在本地主机的 8080 端口上提供服务。最终代码如下所示:
```
from flask import Flask, jsonify, abort
from waitress import serve

app = Flask(__name__)

@app.route('/user/<name>')
def get(name):
    controller = UserController()
    record = controller.find(name)
    if record is None:
        abort(404)
    else:
        resp = jsonify(record.toJSON())
        resp.status_code = 200
        return resp

serve(app, host="0.0.0.0", port=8080)
```
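LCTT 译注作为补充原文没有这段代码在把服务容器化之前也可以先在本地用 Flask 自带的测试客户端快速验证路由假设 `app` 已按上文创建并注册了 `/user/<name>` 路由

```
# 补充示例:使用 Flask 测试客户端验证 /user/<name> 路由
with app.test_client() as client:
    resp = client.get("/user/KrishnaMohan")
    print(resp.status_code)  # 找到用户时为 200否则为 404
    if resp.status_code == 200:
        print(resp.get_json())
```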
### 部署
`FindService` 的代码已经准备完毕。除了 REST API 之外,它还有域模型、数据层和应用程序层。下一步是构建此服务,将其容器化,然后部署到 Kubernetes 上。此过程与部署其他服务没有任何区别,但有一些 Python 特有的步骤。
在继续前进之前,让我们来看下文件夹和文件结构:
```
+ ums-find-service
  + ums
    - domain.py
    - data.py
    - app.py
  - Dockerfile
  - requirements.txt
  - kube-find-deployment.yml
```
如你所见,整个工作文件夹都位于 `ums-find-service` 下,它包含了 `ums` 文件夹中的代码和一些配置文件,例如 `Dockerfile`、`requirements.txt` 和 `kube-find-deployment.yml`
`domain.py` 包含域模型,`data.py` 包含 `UserRepositoryImpl``app.py` 包含剩余代码。我们已经阅读过代码了,现在我们来看看配置文件。
第一个是 `requirements.txt`,它声明了 Python 系统需要下载和安装的外部依赖项。我们需要用查找服务中用到的每个外部 Python 模块来填充它。如你所见,我们使用了 MySQL 连接器、Flask 和 Waitress 模块。因此,下面是 `requirements.txt` 的内容。
```
Flask==2.1.1
Flask_RESTful
mysql-connector-python
waitress
```
第二步是在 `Dockerfile` 中声明 Docker 相关的清单,如下:
```
FROM python:3.8-slim-buster
WORKDIR /ums
ADD ums /ums
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 8080
ENTRYPOINT ["python"]
CMD ["/ums/app.py"]
```
总的来说,我们使用 Python 3.8 作为基线,除了移动 `requirements.txt` 之外,我们还将代码从 `ums` 文件夹移动到 Docker 容器中对应的文件夹中。然后,我们指示容器运行 `pip3 install` 命令安装对应模块。最后,我们向外暴露 8080 端口(因为 waitress 运行在此端口上)。
为了运行此服务,我们指示容器使用以下命令:
```
python /ums/app.py
```
一旦 `Dockerfile` 准备完成,在 `ums-find-service` 文件夹中运行以下命令,创建 Docker 镜像:
```
docker build -t glarimy/ums-find-service .
```
它会创建 Docker 镜像,可以使用以下命令查找镜像:
```
docker images
```
尝试将镜像推送到 Docker Hub为此你可能需要先登录 Docker
```
docker login
docker push glarimy/ums-find-service
```
最后一步是为 Kubernetes 部署构建清单。
在之前的文章中,我们已经介绍了如何建立 Kubernetes 集群、部署和使用服务的方法。我假设仍然使用之前文章中的清单文件来部署添加服务、MySQL、Kafka 和 Zookeeper。我们只需要将以下内容添加到 `kube-find-deployment.yml` 文件中:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ums-find-service
  labels:
    app: ums-find-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ums-find-service
  template:
    metadata:
      labels:
        app: ums-find-service
    spec:
      containers:
        - name: ums-find-service
          image: glarimy/ums-find-service
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ums-find-service
  labels:
    name: ums-find-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: ums-find-service
```
上面清单文件的第一部分声明了 `glarimy/ums-find-service` 镜像的 `FindService`,它包含三个副本。它还暴露 8080 端口。清单的后半部分声明了一个 Kubernetes 服务作为 `FindService` 部署的前端。请记住在之前文章中mysqldb 服务已经是上述清单的一部分了。
运行以下命令在 Kubernetes 集群上部署清单文件:
```
kubectl create -f kube-find-deployment.yml
```
部署完成后,可以使用以下命令验证容器组和服务:
```
kubectl get services
```
输出如图 3 所示:
![图 3: Kubernetes 服务][3]
它会列出集群上运行的所有服务。注意查找服务的外部 IP使用 `curl` 调用此服务:
```
curl http://10.98.45.187:8080/user/KrishnaMohan
```
注意10.98.45.187 对应查找服务,如图 3 所示。
如果我们使用 `AddService` 创建一个名为 `KrishnaMohan` 的用户,那么上面的 `curl` 命令看起来如图 4 所示:
![图 4: 查找服务][4]
用户管理系统UMS的体系结构包含 `AddService` 和 `FindService`,以及存储和消息传递所需的后端服务,如图 5 所示。可以看到,终端用户使用 `ums-add-service` 的 IP 地址添加新用户,使用 `ums-find-service` 的 IP 地址查找已有用户。每个 Kubernetes 服务背后都有三个对应的容器组Pod支持。还要注意同一个 mysqldb 服务被用于存储和检索用户数据。
![图 5: UMS 的添加服务和查找服务][5]
### 其他服务
UMS 系统还包含两个服务:`SearchService` 和 `JournalService`。在本系列的下一部分中,我们将在 Node 平台上设计这些服务,并将它们部署到同一个 Kubernetes 集群,以演示多语言微服务架构的真正魅力。最后,我们将观察一些与微服务相关的设计模式。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/python-microservices-using-flask-on-kubernetes/
作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-1-The-domain-model-of-FindService-1.png
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-2-The-application-layer-of-FindService.png
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-3-Kubernetes-services-1.png
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-4-FindService.png
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-5-UMS-with-AddService-and-FindService.png
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Python-Microservices-1-696x477.jpg

View File

@ -0,0 +1,323 @@
[#]: subject: "How To Find Default Gateway IP Address In Linux And Unix From Commandline"
[#]: via: "https://ostechnix.com/find-default-gateway-linux/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15158-1.html"
在 Linux 中如何从命令行查找默认网关的 IP 地址
======
![](https://img.linux.net.cn/data/attachment/album/202210/20/161605f5ispl5jslbpllss.jpg)
> Linux 下查找网关或路由器 IP 地址的 5 种方法。
**网关** 是一个节点或一个路由器,当连接到同一路由器时,它允许两个或多个 IP 地址不同的主机相互通信。如果没有网关,它们将无法相互通信。换句话说,网关充当接入点,将网络数据从本地网络传输到远程网络。在本指南中,我们将看到在 Linux 和 Unix 中从命令行找到默认网关的所有可能方法。
### 在 Linux 中查找默认网关
Linux 中有各种各样的命令行工具可用于查看网关 IP 地址,最常用的有 `ip`、`route`、`netstat` 和 `routel`。我们将通过示例了解如何使用每种工具查看默认网关。
#### 1、使用 ip 命令查找默认网关
`ip` 命令用于显示和操作 Linux 中的路由、网络设备、接口和隧道。
要查找默认网关或路由器 IP 地址,只需运行:
```
$ ip route
```
或者:
```
$ ip r
```
或者:
```
$ ip route show
```
示例输出:
```
default via 192.168.1.101 dev eth0 proto static metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.20 metric 100
```
你从输出中看到了 `default via 192.168.1.101` 这一行吗?它就是默认网关。我的默认网关是 `192.168.1.101`
你可以使用 `-4` 参数只**显示 IPv4 网关**
```
$ ip -4 route
```
或者,使用 `-6` 参数只**显示 IPv6 网关**
```
$ ip -6 route
```
如你所见IP 地址和子网详细信息也一并显示了。如果你想只显示默认网关,排除所有其他细节,可以使用 `ip route` 搭配 `awk` 命令,如下所示。
使用 `ip route``awk` 命令打印网关地址,执行命令:
```
$ ip route | awk '/^default/{print $3}'
```
LCTT 译注wsl1 上无输出结果,正常 Linux 发行版无问题)
或者:
```
$ ip route show default | awk '{print $3}'
```
这将只列出网关 IP
示例输出:
```
192.168.1.101
```
![使用 ip 命令列出默认网关][1]
你也可以使用 [grep][2] 命令配合 `ip route` 对默认网关进行过滤。
使用 `ip route``grep` 查找默认网关 IP 地址,执行命令:
```
$ ip route | grep default
default via 192.168.1.101 dev eth0 proto static metric 100
```
在最新的 Linux 发行版中,`ip route` 是查找默认网关 IP 地址的推荐命令。然而,你们中的一些人可能仍然在使用传统的工具,如 `route``netstat`。旧习难改,对吧?下面的部分将介绍如何在 Linux 中使用 `route``netstat` 命令确定网关。
#### 2、使用 route 命令显示默认网关 IP 地址
`route` 命令用于在较老的 Linux 发行版中显示和操作路由表,如 RHEL 6、CentOS 6 等。
如果你正在使用较老的 Linux 发行版,你可以使用 `route` 命令来显示默认网关。
请注意,在最新的 Linux 发行版中,`route` 工具已被弃用,`ip route` 命令取而代之。如果你因为某些原因仍然想使用 `route`,你需要安装它。
首先,我们需要检查哪个包提供了 `route` 命令。为此,在基于 RHEL 的系统上运行以下命令:
```
$ dnf provides route
```
示例输出:
```
net-tools-2.0-0.52.20160912git.el8.x86_64 : Basic networking tools
Repo : @System
Matched from:
Filename : /usr/sbin/route
net-tools-2.0-0.52.20160912git.el8.x86_64 : Basic networking tools
Repo : baseos
Matched from:
Filename : /usr/sbin/route
```
如你所见,`net-tools` 包提供了 `route` 命令。所以,让我们使用以下命令来安装它:
```
$ sudo dnf install net-tools
```
现在,运行带有 `-n` 参数的 `route` 命令来显示 Linux 系统中的网关或路由器 IP 地址:
```
$ route -n
```
示例输出:
```
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.101 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
```
![使用 route 命令显示默认网关 IP 地址][3]
如你所见,网关 IP 地址是 192.168.1.101。你还将在 Flags 下面看到两个字母 `UG`。字母 `U` 代表接口是 “Up”在运行`G` 表示 “Gateway”网关
#### 3、使用 netstat 命令查看网关 IP 地址
`netstat` 会输出 Linux 网络子系统的信息。使用 `netstat` 工具,我们可以在 Linux 和 Unix 系统中打印网络连接、路由表、接口统计信息、伪装连接和组播成员关系。
`netstat``net-tools` 包的一部分,所以确保你已经在 Linux 系统中安装了它。使用以下命令在基于 RHEL 的系统中安装它:
```
$ sudo dnf install net-tools
```
使用 netstat 命令打印默认网关 IP 地址:
```
$ netstat -rn
```
示例输出:
```
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.1.101 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
![使用 netstat 命令查看网关 IP 地址][4]
`netstat` 命令与 `route` 命令的输出信息相同。由上面的输出可知,网关的 IP 地址为 `192.168.1.101`。标志 `UG` 中,`U` 表示该接口在运行,`G` 表示网关。
请注意 `netstat` 也已弃用,建议使用 `ss` 命令代替 `netstat`
#### 4、使用 routel 命令打印默认网关或路由器 IP 地址
`routel` 是一个脚本,它以一种漂亮的格式输出路由。一些人认为 `routel` 脚本的输出比 `ip route` 的列表更直观。
`routel` 脚本也是 `net-tools` 包的一部分。
打印默认网关或路由器 IP 地址,不带任何参数运行 `routel` 脚本,如下所示:
```
$ routel
```
示例输出:
```
target gateway source proto scope dev tbl
default 192.168.1.101 static eth0
172.17.0.0/ 16 172.17.0.1 kernel linkdocker0
192.168.1.0/ 24 192.168.1.20 kernel link eth0
127.0.0.0/ 8 local 127.0.0.1 kernel host lo local
127.0.0.1 local 127.0.0.1 kernel host lo local
127.255.255.255 broadcast 127.0.0.1 kernel link lo local
172.17.0.1 local 172.17.0.1 kernel hostdocker0 local
172.17.255.255 broadcast 172.17.0.1 kernel linkdocker0 local
192.168.1.20 local 192.168.1.20 kernel host eth0 local
192.168.1.255 broadcast 192.168.1.20 kernel link eth0 local
::1 kernel lo
::/ 96 unreachable lo
::ffff:0.0.0.0/ 96 unreachable lo
2002:a00::/ 24 unreachable lo
2002:7f00::/ 24 unreachable lo
2002:a9fe::/ 32 unreachable lo
2002:ac10::/ 28 unreachable lo
2002:c0a8::/ 32 unreachable lo
2002:e000::/ 19 unreachable lo
3ffe:ffff::/ 32 unreachable lo
fe80::/ 64 kernel eth0
::1 local kernel lo local
fe80::d085:cff:fec7:c1c3 local kernel eth0 local
```
![使用 routel 命令打印默认网关或路由器 IP 地址][5]
只打印默认网关,和 `grep` 命令配合,如下所示:
```
$ routel | grep default
default 192.168.1.101 static eth0
```
#### 5、从以太网配置文件中查找网关
如果你在 [Linux 或 Unix 中配置了静态 IP 地址][6],你可以通过查看网络配置文件查看默认网关或路由器 IP 地址。
在基于 RPM 的系统上,如 Fedora、RHEL、CentOS、AlmaLinux 和 Rocky Linux 等,网络接口卡配置存储在 `/etc/sysconfig/network-scripts/` 目录下。
查找网卡的名称:
```
# ip link show
```
示例输出:
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether d2:85:0c:c7:c1:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
```
网卡名为 `eth0`。所以让我们打开这个网卡的配置文件:
```
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
```
示例输出:
```
DEVICE=eth0
ONBOOT=yes
UUID=eb6b6a7c-37f5-11ed-a59a-a0e70bdf3dfb
BOOTPROTO=none
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.1.101
DNS1=8.8.8.8
```
如你所见,网关 IP 为 `192.168.1.101`
在 Debian、Ubuntu 及其衍生版中,所有的网络配置文件都存储在 `/etc/network` 目录下。
```
$ cat /etc/network/interfaces
```
示例输出:
```
auto ens18
iface ens18 inet static
address 192.168.1.150
netmask 255.255.255.0
gateway 192.168.1.101
dns-nameservers 8.8.8.8
```
请注意,此方法仅在手动配置 IP 地址时有效。对于启用 DHCP 的网络,需要按照前面的 4 种方法操作。
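LCTT 译注作为补充原文没有这段代码如果想在脚本中以编程方式获取默认网关也可以直接解析 `/proc/net/route` 文件下面是一个 Python 示例仅适用于 Linux

```
import socket
import struct

def default_gateway():
    """从 /proc/net/route 中解析默认路由(目的地址为 0.0.0.0)的网关。"""
    with open("/proc/net/route") as f:
        next(f)  # 跳过表头
        for line in f:
            fields = line.split()
            # fields[1] 是目的地址fields[2] 是网关fields[3] 是标志位,均为十六进制
            if fields[1] != "00000000" or not int(fields[3], 16) & 2:
                continue  # 不是默认路由,或 RTF_GATEWAY 标志未置位
            # 网关字段是小端序的十六进制 IPv4 地址
            return socket.inet_ntoa(struct.pack("<L", int(fields[2], 16)))
    return None

print(default_gateway())
```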
### 总结
在本指南中,我们列出了在 Linux 和 Unix 系统中找到默认网关的 5 种不同方法,我们还在每种方法中包含了显示网关/路由器 IP 地址的示例命令。希望它对你有所帮助。
--------------------------------------------------------------------------------
via: https://ostechnix.com/find-default-gateway-linux/
作者:[sk][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/wp-content/uploads/2022/09/Find-Default-Gateway-Using-ip-Command.png
[2]: https://ostechnix.com/the-grep-command-tutorial-with-examples-for-beginners/
[3]: https://ostechnix.com/wp-content/uploads/2022/09/Display-Default-Gateway-IP-Address-Using-route-Command.png
[4]: https://ostechnix.com/wp-content/uploads/2022/09/View-Gateway-IP-Address-Using-netstat-Command.png
[5]: https://ostechnix.com/wp-content/uploads/2022/09/Print-Default-Gateway-IP-Address-Or-Router-IP-Address-Using-routel-Command.png
[6]: https://ostechnix.com/configure-static-ip-address-linux-unix/

View File

@ -0,0 +1,186 @@
[#]: subject: "PyLint: The good, the bad, and the ugly"
[#]: via: "https://opensource.com/article/22/9/pylint-good-bad-ugly"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15144-1.html"
PyLint 的优点、缺点和危险
======
![](https://img.linux.net.cn/data/attachment/album/202210/16/093840z9pnzfv9ykfccoq9.jpg)
> 充分利用 PyLint。
敲黑板PyLint 实际上很好!
“PyLint 可以拯救你的生命”这是一句夸张的描述但没有你想象的那么夸张。PyLint 可以让你远离非常难找到的和复杂的缺陷。最差的情况下,它只可以节省测试运行的时间。最好的情况下,它可以帮你避免生产环境中复杂的错误。
### 优点
我不好意思说这种情况是多么普遍。测试的命名总是*那么奇怪*:没有人关心这个名称,而且通常也找不到一个自然的名称。例如以下代码:
```
def test_add_small():
    # Math, am I right?
    assert 1 + 1 == 3
   
def test_add_large():
    assert 5 + 6 == 11
   
def test_add_small():
    assert 1 + 10 == 11
```
测试生效:
```
collected 2 items                                                                        
test.py ..
2 passed
```
但问题是:如果你覆盖了一个测试的名称,测试框架将愉快地跳过这个测试!
实际上,这些文件可能有数百行,而添加新测试的人可能并不知道所有的名称。除非有人仔细查看测试输出,否则一切看起来都很好。
最糟糕的是,*被覆盖测试的添加*、*被覆盖测试造成的破坏*,以及*连锁反应的问题*可能要几天、几月甚至几年才能发现。
### PyLint 会找到它
就像一个好朋友一样PyLint 可以帮助你。
```
test.py:8:0: E0102: function already defined line 1
     (function-redefined)
```
### 缺点
就像 90 年代的情景喜剧一样,你对 PyLint 了解的越多,问题就越多。以下是一个库存建模程序的常规代码:
```
"""Inventory abstractions"""
import attrs
@attrs.define
class Laptop:
    """A laptop"""
    ident: str
    cpu: str
```
但 PyLint 似乎有自己的观点(可能形成于 90 年代),并且不怕把它们作为事实陈述出来:
```
$ pylint laptop.py | sed -n '/^laptop/s/[^ ]*: //p'
R0903: Too few public methods (0/2) (too-few-public-methods)
```
### 危险
有没有想过在一个数百万人使用的工具中加入自己未证实的观点PyLint 每月有 1200 万次下载。
> “如果太挑剔,人们会取消检查” — 这是 PyLint GitHub 的 6987 号议题,于 2022 年 7 月 3 号提出
对于添加一个可能有许多误报的测试,它的态度是 ... “*嗯*”。
### 让它为你工作
PyLint 很好,但你需要小心地与它配合。为了让 PyLint 为你工作,以下是我推荐的三件事:
#### 1、固定版本
固定你使用的 PyLint 版本,避免任何惊喜!
在你的 `.toml` 文件中定义:
```
[project.optional-dependencies]
pylint = ["pylint"]
```
在代码中定义:
```
from unittest import mock
```
这与以下代码对应:
```
# noxfile.py
...
@nox.session(python=VERSIONS[-1])
def refresh_deps(session):
    """Refresh the requirements-*.txt files"""
    session.install("pip-tools")
    for deps in [..., "pylint"]:
        session.run(
            "pip-compile",
            "--extra",
            deps,
            "pyproject.toml",
            "--output-file",
            f"requirements-{deps}.txt",
        )
```
#### 2、默认禁止
禁用所有检查,然后只启用那些你认为“价值与误报之比”较高的检查器。(不只是看漏报与误报之比!)
```
# noxfile.py
...
@nox.session(python="3.10")
def lint(session):
    files = ["src/", "noxfile.py"]
    session.install("-r", "requirements-pylint.txt")
    session.install("-e", ".")
    session.run(
        "pylint",
        "--disable=all",
        *(f"--enable={checker}" for checker in checkers),
        "src",
    )
```
#### 3、检查器
以下是我喜欢的检查器。加强项目的一致性,避免一些明显的错误。
```
checkers = [
    "missing-class-docstring",
    "missing-function-docstring",
    "missing-module-docstring",
    "function-redefined",
]
```
### 使用 PyLint
你可以只使用 PyLint 好的部分。在 CI 中运行它以保持一致性,并使用常用检查器。
放弃不好的部分:默认禁止检查器。
避免危险的部分:固定版本以避免意外。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/pylint-good-bad-ugly
作者:[Moshe Zadka][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/python_programming_question.png

View File

@ -0,0 +1,172 @@
[#]: subject: "5 Git configurations I make on Linux"
[#]: via: "https://opensource.com/article/22/9/git-configuration-linux"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15130-1.html"
我在 Linux 中使用的 5 个 Git 配置
======
> 这份简要指南能够帮助你快速开始使用 Git以及配置一些选项。
![](https://img.linux.net.cn/data/attachment/album/202210/11/162338c314ls57bg51hd45.jpg)
在 Linux 中设置 Git 十分简单,但为了获得完美的配置,我做了以下五件事:
1. 创建全局配置
2. 设置默认名称
3. 设置默认邮箱地址
4. 设置默认分支名称
5. 设置默认编辑器
我使用 Git 管理我的代码、命令行脚本以及文档版本。这意味着每次我开始一项新的任务,首先我需要创建一个文件目录并将其添加到 Git 库中:
```
$ mkdir newproject
$ cd newproject
$ git init
```
有一些我一直想要的常规设置。不多,但可以避免我每次都进行配置。我喜欢利用 Git 的 *全局* 配置功能。
Git 提供了进行手动配置的 `git config` 命令,但这有一些注意事项。例如,通常你会设置邮箱地址。你可以通过运行 `git config user.email 你的邮件地址` 命令进行设置。然而,这只会在你当前所在的 Git 目录下起作用。
```
$ git config user.email alan@opensource.com
fatal: not in a git directory
```
此外,当这个命令在 Git 仓库中运行时,它只会配置特定的一个仓库。在新的仓库中,你不得不重复这个步骤。我可以通过全局配置来避免重复。选项 `--global` 会指示 Git 将邮箱地址写入全局配置 `~/.gitconfig` 文件中,甚至在必要时会创建它:
> 请记住,波浪线(`~`)代表你的主文件夹。在我的电脑中它是 `/home/alan`
```
$ git config --global user.email alan@opensource.com
$ cat ~/.gitconfig
[user]
        email = alan@opensource.com
```
这里的缺点是如果你有大量偏好设置需要输入很多命令这将花费大量时间并且很容易出错。Git 提供了更加快捷有效的方式,可以直接编辑你的全局配置文件——这是我列表中的第一项!
### 1、创建全局配置
如果你刚开始使用 Git或许你还没有该文件。不用担心让我们直接开始。只需要用 `--edit` 选项:
```
$ git config --global --edit
```
如果没有该文件Git 将会创建一个包含以下内容的新文件,并使用你终端的默认编辑器打开它:
```
# This is Git's per-user configuration file.
[user]
# Please adapt and uncomment the following lines:
#       name = Alan
#       email = alan@hopper
~
~
~
"~/.gitconfig" 5L, 155B                                     1,1           All
```
现在我们已经打开了编辑器,并且 Git 已经在后台创建了全局配置文件,我们可以继续接下来的设置。
### 2、设置默认名称
名字是该文件中的首要条目,让我们先从它开始。用命令行设置我的名称是 `git config --global user.name "Alan Formy-Duval"`。不用在命令行中运行该命令,只需要在配置文件中编辑 `name` 条目就行:
```
name = Alan Formy-Duval
```
### 3、设置默认邮箱地址
邮箱地址是第二个条目让我们添加它。默认情况下Git 使用你的系统提供的名称和邮箱地址。如果不正确或者你想要更改你可以在配置文件中具体说明。事实上如果你没有配置这些Git 在你第一次提交时会友好的提示你:
```
Committer: Alan <alan@hopper>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate....
```
在命令行中运行 `git config --global user.email "alan@opensource.com"` 会设置好我的邮箱。同样,我们在配置文件中编辑 `email` 条目,提供你的邮箱地址:
```
email = alan@opensource.com
```
我喜欢设置的最后两个设置是默认分支名称和默认编辑器。当你仍在编辑器中时,需要添加这些指令。
### 4、设置默认分支名称
目前有一种趋势,即不再使用 `master` 作为默认分支名称。事实上在新存储库初始化时Git 将通过友好的消息提示更改默认分支名称:
```
$ git init
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint:   git config --global init.defaultBranch <name>
```
这个名为 `defaultBranch` 的指令需要位于一个名为 `init` 的新部分中。现在普遍接受的是,许多程序员使用 `main` 这个词作为他们的默认分支。这是我喜欢使用的。将此部分后跟指令添加到配置中:
```
[init]
            defaultBranch = main
```
### 5、设置默认编辑器
第五个设置是设置默认的编辑器。这是指 Git 将使用的编辑器,用于在你每次将更改提交到存储库时输入你的提交消息。不论是 [nano][8]、[emacs][9]、[vi][10] 还是其他编辑器,每个人都有他喜欢的。我喜欢用 vi。添加 `core` 部分,并设置 `editor` 指令为你喜欢的编辑器。
```
[core]
            editor = vi
```
这是最后一项。退出编辑器。Git 在主目录下保存全局配置文件。如果你再次运行编辑命令,将会看到所有内容。注意配置文件是明文存储的文本文件,因此它可以很容易使用文本工具查看,如 [cat][11] 命令。这是我的配置文件内容:
```
$ cat ~/.gitconfig
[user]
        email = alan@opensource.com
        name = Alan Formy-Duval
[core]
        editor = vi
[init]
        defaultBranch = main
```
这是一个简单的指南,可以让你快速开始使用 Git 和它的一些配置选项。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/git-configuration-linux
作者:[Alan Formy-Duval][a]
选题:[lkxed][b]
译者:[Donkey-Hao](https://github.com/Donkey-Hao)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/linux_keyboard_desktop.png
[2]: https://opensource.com/article/22/9/git-configuration-linux#create-global-configuration
[3]: https://opensource.com/article/22/9/git-configuration-linux#set-default-name
[4]: https://opensource.com/article/22/9/git-configuration-linux#set-default-email-address
[5]: https://opensource.com/article/22/9/git-configuration-linux#set-default-branch-name
[6]: https://opensource.com/article/22/9/git-configuration-linux#set-default-editor
[7]: https://opensource.com/mailto:alan@opensource.com
[8]: https://opensource.com/article/20/12/gnu-nano
[9]: https://opensource.com/resources/what-emacs
[10]: https://opensource.com/article/19/3/getting-started-vim
[11]: https://opensource.com/article/19/2/getting-started-cat-command

View File

@ -3,25 +3,28 @@
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "chai001125"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15132-1.html"
使用 PostgreSQL 建立你的数据库
======
Postgres 是最灵活的数据库之一,并且它是开源的。
数据库是一种有组织性且灵活地存储信息的工具。电子表格在本质上就是一个数据库,但是图形化应用程序这一限制使得大多数的电子表格应用程序对程序员毫无用处。随着 [边缘计算][3] 和物联网设备成为重要的平台,开发者们需要更有效且轻量级的方法,来存储、处理、查询大量的数据。我最爱的一种结合是使用 [Lua 连接][4] PostgreSQL 数据库。无论你使用什么编程语言Postgres 一定是数据库的绝佳选择,但是在使用 Postgres 之前,首先你需要知道一些基本的东西。
![](https://img.linux.net.cn/data/attachment/album/202210/12/100311t4k1k8hfmh4df5hh.jpg)
### 安装 Postgres
> PostgreSQL 是最灵活的数据库之一,并且它是开源的。
在 linux 上安装 PostgreSQL要使用你的软件库。在 FedoraCentOSMegeia 等类似的 linux 版本上使用命令:
数据库是以一种有组织且灵活的方式存储信息的工具。电子表格在本质上就是一个数据库,但是图形化应用程序这一限制使得大多数的电子表格应用程序对程序员毫无用处。随着 [边缘计算][3] 和物联网设备成为重要的平台,开发者们需要更有效且轻量级的方法,来存储、处理、查询大量的数据。我最爱的一种组合是使用 [Lua 连接][4] PostgreSQL 数据库。无论你使用什么编程语言PostgreSQL 一定是数据库的绝佳选择,但是在使用 PostgreSQL 之前,首先你需要知道一些基本的东西。
### 安装 PostgreSQL
在 Linux 上安装 PostgreSQL要使用你的软件库。在 FedoraCentOSMegeia 等类似的 Linux 版本上使用命令:
```
$ sudo dnf install postgresql postgresql-server
```
在 Debian Linux Mint Elementary 等类似的 linux 版本上使用命令:
在 Debian Linux Mint Elementary 等类似的 Linux 版本上使用命令:
```
$ sudo apt install postgresql postgresql-contrib
@ -29,14 +32,13 @@ $ sudo apt install postgresql postgresql-contrib
在 macOs 和 Windows 上,可以从官网 [postgresql.org][5] 下载安装包。
### 配置 Postgres
### 配置 PostgreSQL
Most distributions install the Postgres database without *starting* it, but provide you with a script or [systemd service][6] to help it start reliably. However, before you start PostgreSQL, you must create a database cluster.
大多数发行版安装 Postgres 数据库时没有启动它,但是为你提供了一个脚本或 [systemd 服务][6],能够可靠地启动 Postgres。但是在启动 PostgreSQL 之前,必须创建一个数据库集群。
大多数发行版安装 PostgreSQL 数据库时没有启动它,但是为你提供了一个脚本或 [systemd 服务][6],能够可靠地启动 PostgreSQL。但是在启动 PostgreSQL 之前,必须创建一个数据库集群。
#### Fedora
在 FedoraCentOS 等类似的版本上Postgres 安装包中提供了一个 Postgres 配置脚本。运行这个脚本,可以进行简单地配置:
在 FedoraCentOS 等类似的版本上PostgreSQL 安装包中提供了一个 PostgreSQL 配置脚本。运行这个脚本,可以进行简单地配置:
```
$ sudo /usr/bin/postgresql-setup --initdb
@ -51,17 +53,17 @@ $ sudo /usr/bin/postgresql-setup --initdb
#### 其他版本
最后,如果你是在其他版本上运行的,那么你可以直接使用 Postgres 提供的一些工具。`initdb` 命令会创建一个数据库集群,但是这个命令必须在 `postgres` 用户下运行,你可以使用 `sudo` 来暂时地成为 `postgres` 用户:
最后,如果你是在其他版本上运行的,那么你可以直接使用 PostgreSQL 提供的一些工具。`initdb` 命令会创建一个数据库集群,但是这个命令必须在 `postgres` 用户下运行,你可以使用 `sudo` 来暂时地成为 `postgres` 用户:
```
$ sudo -u postgres \
"initdb -D /var/lib/pgsql/data \
--locale en_US.UTF-8 --auth md5 --pwprompt"
"initdb -D /var/lib/pgsql/data \
--locale en_US.UTF-8 --auth md5 --pwprompt"
```
### 运行 Postgres
### 运行 PostgreSQL
现在,数据库集群已经存在了,使用 `initdb` 的输出中提供给你的命令或者使用 systemd 启动 Postgres 服务器:
现在,数据库集群已经存在了,使用 `initdb` 的输出中提供给你的命令或者使用 systemd 启动 PostgreSQL 服务器:
```
$ sudo systemctl start postgresql
@ -99,15 +101,16 @@ Type "help" for help.
exampledb=>
```
### 创建一个表
#### 创建一个表
数据库包含很多表。这些表可以可视化为表格,有很多行(在数据库中称为 *记录*)和很多列。行和列的交集称为 *字段*
结构化查询语言SQL是以它提供的内容而命名的它能提供可预测且一致的语法来查询数据库内容从而收到有用的结果。
目前,你的数据库是空的,没有任何的表。你可以用 `CTEATE` 语句来创建一个表。结合使用 `IF NOT EXISTS` 是很有用的,它可以避免破坏现有的表。
目前,你的数据库是空的,没有任何的表。你可以用 `CREATE` 语句来创建一个表。结合使用 `IF NOT EXISTS` 是很有用的,它可以避免破坏现有的表。
在你创建一个表之前,想想看你希望这个表包含哪一种数据(在 SQL 术语中称为“数据类型”)。在这个例子中,我创建了一个表,包含两列,有唯一标识符的一列和最多九个字符的可变长的一列。
```
exampledb=> CREATE TABLE IF NOT EXISTS my_sample_table(
exampledb(> id SERIAL,
@ -117,7 +120,7 @@ exampledb(> wordlist VARCHAR(9) NOT NULL
关键字 `SERIAL` 并不是一个数据类型。`SERIAL` 是 [PostgreSQL 中的一个特殊的标记][7],它可以创建一个自动递增的整数字段。关键字 `VARCHAR` 是一个数据类型,表示限制内字符数的可变字符。在此例中,我指定了最多 9 个字符。PostgreSQL 中有很多数据类型,因此请参阅项目文档以获取选项列表。
### 插入数据
#### 插入数据
你可以使用 `INSERT` 语句来给你的新表插入一些样本数据:
@ -133,7 +136,7 @@ exampledb=> INSERT INTO my_sample_table (WORDLIST) VALUES ('Alexandria');
ERROR:  VALUE too long FOR TYPE CHARACTER VARYING(9)
```
### 改变表或者列
#### 改变表或者列
当你需要改变一个域的定义时,你可以使用 `ALTER` 这一 SQL 关键字。例如,如果你想改变 `wordlist` 域中最多只能有 9 个字符的限制,你可以重新设置这个数据类型。
@ -145,20 +148,21 @@ exampledb=> INSERT INTO my_sample_table (WORDLIST) VALUES ('Alexandria');
INSERT 0 1
```
### 查询表中的内容
#### 查询表中的内容
SQL 是一种查询语言,因此你可以通过查询来查看数据库的内容。查询可以是很简单的,也可以涉及连接多个不同表之间的复杂关系。要查看表中的所有内容,请使用 `SELECT` 关键字和 `*``*` 是通配符):
SQL 是一种查询语言,因此你可以通过查询来查看数据库的内容。查询可以是很简单的,也可以涉及连接多个不同表之间的复杂关系。要查看表中的所有内容,请使用 `*` 上的 `SELECT` 关键字(`*` 是通配符):
```
exampledb=> SELECT * FROM my_sample_table;
 id |  wordlist
\----+------------
----+------------
  1 | Alice
  2 | Bob
  3 | Alexandria
(3 ROWS)
```
### 更多信息
### 更多数据
PostgreSQL 可以处理很多数据,但是对于任何数据库来说,关键之处在于你是如何设计你的数据库的,以及数据存储下来之后你是怎么查询数据的。在 [OECD.org][8] 上可以找到一个相对较大的公共数据集,你可以使用它来尝试一些先进的数据库技术。
@ -166,7 +170,7 @@ PostgreSQL 可以处理很多数据,但是对于任何数据库来说,关键
在文本编辑器或电子表格应用程序中浏览数据,来了解有哪些列,以及每列包含哪些类型的数据。仔细查看数据,并留意错误情况。例如,`COU` 列指的是国家代码,例如 `AUS` 表示澳大利亚和 `GRC` 表示希腊,在奇怪的 `BRIICS` 之前,这一列的值通常是 3 个字符。
在你理解了这些数据项后,你就可以准备一个 Postgres 数据库了。
在你理解了这些数据项后,你就可以准备一个 PostgreSQL 数据库了。
```
$ createdb landcoverdb --owner bogus
@ -195,20 +199,20 @@ flag_codes varchar(1),
flag_names varchar(1));
```
### 引入数据
#### 引入数据
Postgres 可以使用特殊的元命令 `\copy` 来直接引入 CSV 数据:
```
landcoverdb=> \copy land_cover from '~/land-cover.csv' with csv header delimiter ','
COPY 22113
```
That's 22,113 records imported. Seems like a good start!
插入了 22113 条记录。这是一个很好的开始!
### 查询数据
#### 查询数据
`SELECT` 语句可以查询这 22113 条记录的所有列,此外 Postgres 将输出通过管道传输到屏幕上因此你可以轻松地滚动鼠标来查看输出的结果。更进一步你可以使用高级SQL来获得一些有用的视图。
`SELECT` 语句可以查询这 22113 条记录的所有列,此外 PostgreSQL 将输出通过管道传输到屏幕上,因此你可以轻松地滚动鼠标来查看输出的结果。更进一步,你可以使用高级 SQL 语句,来获得一些有用的视图。
```
landcoverdb=> SELECT
@ -240,11 +244,10 @@ ORDER BY country_name,
    year_value;
```
Here's some sample output:
下面是样例的一些输出:
```
\---------------+------------+------------
---------------+------------+------------
 Afghanistan    |       2019 |  743.48425
 Albania        |       2019 |  128.82532
 Algeria        |       2019 |  2417.3281
@ -260,7 +263,7 @@ SQL 是一种很丰富的语言,超出了本文的讨论范围。通读 SQL
### 拓展数据库
PostgreSQL 是伟大的开源数据库之一。有了它,你可以为结构化数据设计存储库,然后使用 SQL 以不同的方式查询它以便能够获得有关该数据的新视角。Postgres 也能与许多语言集成包括PythonLuaGroovyJava等,因此无论你使用什么工具集,你都可以充分利用好这个出色的数据库。
PostgreSQL 是伟大的开源数据库之一。有了它,你可以为结构化数据设计存储库,然后使用 SQL 以不同的方式查询它以便能够获得有关该数据的新视角。PostgreSQL 也能与许多语言集成,包括 Python、Lua、Groovy、Java 等,因此无论你使用什么工具集,你都可以充分利用好这个出色的数据库。
--------------------------------------------------------------------------------
@ -269,7 +272,7 @@ via: https://opensource.com/article/22/9/drop-your-database-for-postgresql
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[chai001125](https://github.com/chai001125)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,127 @@
[#]: subject: "How to Setup Internet in CentOS, RHEL, Rocky Linux Minimal Install"
[#]: via: "https://www.debugpoint.com/setup-internet-minimal-install-server/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15126-1.html"
如何在最小安装的 CentOS、RHEL、Rocky Linux 中设置互联网
======
![](https://img.linux.net.cn/data/attachment/album/202210/10/162428grhkhtayzt4cdh5k.jpg)
> 在最小安装的服务器中设置互联网或网络非常容易。本指南将解释如何在最小安装的 CentOS、RHEL 和 Rocky Linux 中设置互联网或网络。
当你安装了任何服务器发行版的最小安装环境,你将没有任何 GUI 或桌面环境来设置你的网络或互联网。因此当你只能访问终端时了解如何设置互联网非常重要。NetworkManager 提供了必要的工具,辅以 systemd 服务来完成这项工作。以下是方法。
### 在最小安装的 CentOS、RHEL、Rocky Linux 中设置互联网
在你完成了服务器的安装后,启动进入服务器终端。理想情况下,你会看到一个终端提示符。使用 root 或管理员账户登录。
首先,尝试使用 [nmcli][1] 检查网络接口的状态和详细信息。`nmcli` 是用于控制 NetworkManager 服务的命令行工具。使用以下命令进行检查。
```
nmcli device status
```
这将显示设备名称、状态等。
![nmcli device status][2]
运行工具 `nmtui` 来配置网络接口。
`nmtui` 是 NetworkManager 工具的一部分,它为你提供了一个友好的用户界面来配置网络。
这是 `NetworkManager-tui` 包的一部分,在你完成最小服务器安装后默认安装。
```
nmtui
```
单击 nmtui 窗口中的“<ruby>编辑连接<rt>Edit a connection</rt></ruby>”。
![nmtui - 选择选项][3]
选择接口名称
![选择要编辑的接口][4]
在“<ruby>编辑连接<rt>Edit Connection</rt></ruby>”窗口中,为 IPv4 和 IPv6 选择“<ruby>自动<rt>Automatic</rt></ruby>”选项。并选择“<ruby>自动连接<rt>Automatically Connect</rt></ruby>”。完成后按 “OK”。
![nmtui - 编辑连接][5]
使用以下命令通过 [systemd systemctl][6] 重启 NetworkManager 服务。
```
systemctl restart NetworkManager
```
如果一切顺利,你可以在最小安装的 CentOS、RHEL 和 Rocky Linux 服务器中连接到网络和互联网。前提是你的网络有互联网连接。你可以使用 `ping` 来验证它是否正常工作。
![设置最小化服务器互联网 - CentOS Rocky Linux RHEL][7]
### 附加技巧:在最小化服务器中设置静态 IP
当你将网络配置设置为自动时,接口会在你连接到互联网时动态分配 IP。在你设置局域网的某些情况下你可能希望将静态 IP 分配给你的网络接口。这非常容易。
打开你的网络配置脚本。将 `ens3` 改为为你自己的设备名。
```
vi /etc/sysconfig/network-scripts/ifcfg-ens3
```
在上面的文件中,使用 `IPADDR` 属性添加所需的 IP 地址。保存文件。
```
IPADDR=192.168.0.55
```
`/etc/sysconfig/network` 中为你的网络添加网关。
```
NETWORKING=yes
HOSTNAME=debugpoint
GATEWAY=10.1.1.1
```
`/etc/resolv.conf` 中添加任意公共 DNS 服务器。
```
nameserver 8.8.8.8
nameserver 8.8.4.4
```
然后重启网络服务。
```
systemctl restart NetworkManager
```
这就完成了静态 IP 的设置。你还可以使用 `ip addr` 命令检查 IP 详细信息。
### 总结
我希望本指南可以帮助你在最小化安装的服务器中设置网络、互联网和静态 IP。如果你有任何问题请在评论区告诉我。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/setup-internet-minimal-install-server/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://linux.die.net/man/1/nmcli
[2]: https://www.debugpoint.com/wp-content/uploads/2021/06/nmcli-device-status.jpg
[3]: https://www.debugpoint.com/wp-content/uploads/2021/06/nmtui-Select-options.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2021/06/Select-Interface-to-Edit.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2021/06/nmtui-Edit-Connection.jpg
[6]: https://www.debugpoint.com/2020/12/systemd-systemctl-service/
[7]: https://www.debugpoint.com/wp-content/uploads/2021/06/setup-internet-minimal-server-CentOS-Rocky-Linux-RHEL.jpg

View File

@ -0,0 +1,141 @@
[#]: subject: "The story behind Joplin, the open source note-taking app"
[#]: via: "https://opensource.com/article/22/9/joplin-interview"
[#]: author: "Richard Chambers https://opensource.com/users/20i"
[#]: collector: "lkxed"
[#]: translator: "MareDevi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15161-1.html"
开源笔记软件 Joplin 背后的故事
======
![](https://img.linux.net.cn/data/attachment/album/202210/21/112935tfapsvpac06h2sth.jpg)
> Laurent Cozic 与我坐下来,讨论了 Joplin 是如何开始的,以及这个开源笔记软件的下一步计划。
在这次采访中,我见到了笔记软件 Joplin 的创建者 Laurent Cozic。[Joplin][2] 是 [20i][3] 奖励的赢家,所以我想了解是什么让它如此成功,以及他如何实现的。
### 你能概述一下什么是 Joplin 吗?
[Joplin][4] 是一个开源的笔记软件。它可以让你捕获你的想法并从任何设备安全地访问它们。
### 显然,还有很多其他的笔记应用,那么除了免费使用之外,它还有什么不同呢?
对我们的许多用户来说,它是开源的这一事实是一个非常重要的方面,因为这意味着没有供应商对数据的封锁,而且数据可以很容易地被导出并以各种方式访问。
我们还关注用户的安全和数据隐私,特别是端到端加密同步功能,以及通过对应用的任何连接保持透明。我们还与安全研究人员合作,以保证软件更加安全。
最后Joplin 可以通过几种不同的方式进行定制 —— 通过插件(可以添加新的功能)和主题来定制应用程序的外观。我们还公开了一个数据 API它允许第三方应用程序访问 Joplin 的数据。
> **[相关阅读5 款 Linux 上的笔记应用][5]**
### 这是一个竞争非常激烈的市场,那么是什么激发了你创建它的想法?
这是有原因的。我从 2016 年开始研究它,因为我不喜欢现有的商业记事应用程序:笔记、附件或标签不能轻易被其他工具导出或操作。
这主要是由于供应商的封锁,另外还有供应商缺乏动力,因为他们没有动力帮助用户将他们的数据转移到其他应用程序。还有一个问题是,这些公司通常会以纯文本形式保存笔记,而这有可能造成数据隐私和安全方面的问题。
因此,我决定开始创建一个简单且具有同步功能的移动和终端应用程序,使我的笔记能够轻松地在我的设备上访问。之后又创建了桌面应用程序,项目从此开始发展。
![Chrome OS 上 Joplin 的图片][6]
### 编写 Joplin 花了多长时间呢?
自 2016 年以来,我一直在断断续续地开发,但并不是专门去维护。不过在过去的两年里,我更加专注于它。
### 对于准备创建自己的开源应用的人,你有什么建议?
挑选一个你自己使用的项目和你喜欢的技术来工作。
管理一个开源项目有时是很困难的,所以必须要有足够的兴趣去让它变得更有价值。那么我想 “早发布,多发布” 原则在这里也适用,这样你就可以衡量用户的兴趣,以及是否有必要花时间进一步开发这个项目。
### 有多少人参与了 Joplin 的开发?
有 3、4 人参与开发。目前,我们还有 6 名学生在 <ruby>谷歌编程之夏<rt>Google Summer of Code</rt></ruby> 中为这个项目工作。
### 许多人都在创建开源项目,但 Joplin 对你来说是一个巨大的成功。关于如何获得关注,你能否给开发者提供一些建议?
没有简单的公式,说实话,我不认为我可以在另一个项目中复制这种成功!你必须对你所做的事情充满热情,但同时也要严谨、有组织、稳步前进,确保代码质量保持高水平,并拥有大量的测试单元以防止回归。
同时,对于你收到的用户反馈保持开放的态度,并在此基础上改进项目。
一旦你掌握了这些,剩下的可能就全靠运气了 —— 如果你做的项目让很多人都感兴趣,事情可能会顺利进行!
### 一旦你得到关注,但如果你没有传统的营销预算,你如何保持这种势头?
我认为这在于倾听项目周围的社区。举个例子来说,我从未计划过建立一个论坛,但有人在 GitHub 上提出了这个建议,所以我创建了一个论坛,它成为了一个分享想法、讨论功能、提供支持等很好的方式。社区也普遍欢迎新人,这形成了一种良性循环。
除此以外,定期就项目进行沟通也很重要。
我们没有一个公开的路线图,因为大多数功能的 ETA 通常是 “我不知道”,但我会试图就即将到来的功能、新版本等进行沟通。我们也会就重要的事件进行沟通,特别是谷歌编程之夏,或者当我们有机会赢得像 20i FOSS 奖的时候。
最后,我们很快将在伦敦举行一次面对面的聚会,这是与社区和合作者保持联系的另一种方式。
### 用户的反馈是如何影响路线图的?
很明显,贡献者们经常仅仅因为他们需要某个特性而从事某些工作。但除此之外,我们还根据论坛和 GitHub 问题追踪器上的信息,追踪对用户来说似乎最重要的功能。
例如,移动应用程序现在具有很高的优先级,因为我们经常从用户那里听到,它的限制和缺陷是有效使用 Joplin 的一个问题。
![桌面使用Joplin的图片][8]
### 你是如何跟上最新的开发和编码的发展的?
主要是通过阅读 Hacker News
### 你有个人最喜欢的自由/开源软件可以推荐吗?
在不太知名的项目中,[SpeedCrunch][9] 作为一个计算器非常好。它有很多功能,而且很好的是它能保留以前所有计算的历史。
我还使用 [KeepassXC][10] 作为密码管理器。在过去的几年里,它一直在稳步改进。
最后,[Visual Studio Code][11] 作为一个跨平台的文本编辑器非常棒。
### 我原以为 Joplin 是以 Janis 的名字命名的,但维基百科告诉我它来自 Scott Joplin。你为什么选择这个名字
我起初想把它命名为 “jot-it”但我想这个名字已经被人占了。
由于我那时经常听 Scott Joplin 的 <ruby>拉格泰姆<rt>ragtime</rt></ruby> 音乐(我相当痴迷于此),我决定使用他的名字。
我认为产品名称的含义并不太重要,只要名称本身易于书写、发音、记忆,并与一些积极的东西(或至少没有消极的东西)有关。
我觉得 “Joplin” 符合所有条件。
### 关于 Joplin 的计划,你还有什么可以说的吗?也许是对一个新功能的独家预告?
如前所述,我们非常希望在用户体验设计和新功能方面对移动应用进行改进。
我们也在考虑创建一个 “插件商店”,以便更容易地浏览和安装插件。
感谢 Laurent — 祝 Joplin 的未来好运。
*图片来自: (Opensource.com, CC BY-SA 4.0)*
*[这篇访谈最初发表在 20i 博客上,已获得许可进行转载。][12]*
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/joplin-interview
作者:[Richard Chambers][a]
选题:[lkxed][b]
译者:[MareDevi](https://github.com/MareDevi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/20i
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/wfh_work_home_laptop_work.png
[2]: https://joplinapp.org/
[3]: https://www.20i.com/foss-awards/winners
[4]: https://opensource.com/article/19/1/productivity-tool-joplin
[5]: https://opensource.com/article/22/8/note-taking-apps-linux
[6]: https://opensource.com/sites/default/files/2022-09/joplin-chrome-os.png
[7]: https://opensource.com/article/21/10/google-summer-code
[8]: https://opensource.com/sites/default/files/2022-09/joplin-desktop.png
[9]: https://heldercorreia.bitbucket.io/speedcrunch/
[10]: https://opensource.com/article/18/12/keepassx-security-best-practices
[11]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[12]: https://www.20i.com/blog/joplin-creator-laurent-cozic/

View File

@ -0,0 +1,206 @@
[#]: subject: "GUI Apps for Package Management in Arch Linux"
[#]: via: "https://itsfoss.com/arch-linux-gui-package-managers/"
[#]: author: "Anuj Sharma https://itsfoss.com/author/anuj/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15149-1.html"
Arch Linux 中用于包管理的图形化应用
======
![](https://img.linux.net.cn/data/attachment/album/202210/17/110440isl629s0uqnl8b29.jpg)
[安装 Arch Linux][1] 有一些挑战性。这就是为什么 [有几个基于 Arch 的发行版][2] 通过提供图形化的安装程序使事情变得简单。
即使你设法安装了 Arch Linux你也会注意到它严重依赖命令行。如果你需要安装应用或更新系统那么必须打开终端。
是的Arch Linux 没有软件中心。我知道,这让很多人感到震惊。
如果你对使用命令行管理应用感到不舒服,你可以安装一个 GUI 工具。这有助于在舒适的图形化界面中搜索包以及安装和删除它们。
想知道你应该使用 [pacman 命令][3] 的哪个图形前端?我有一些建议可以帮助你。
**请注意,某些软件管理器是特定于桌面环境的。**
### 1、Apper
![使用 Apper 安装 Firefox][4]
Apper 是一个精简的 Qt5 应用,它使用 PackageKit 进行包管理,它还支持 AppStream 和自动更新。但是,**没有 AUR 支持**。
要从官方仓库安装它,请使用以下命令:
```
sudo pacman -Syu apper
```
> **[GitLab 上的 Apper][5]**
### 2、深度应用商店
![使用深度应用商店安装 Firefox][6]
深度应用商店是使用 DTKQT5构建的深度桌面环境的应用商店它使用 PackageKit 进行包管理,支持 AppStream同时提供系统更新通知。 **没有 AUR 支持**
要安装它,请使用以下命令:
```
sudo pacman -Syu deepin-store
```
> **[Github 上的深度商店][7]**
### 3、KDE 发现应用
![使用 Discover 安装 Firefox][8]
<ruby>发现<rt>Discover</rt></ruby> 应用不需要为 KDE Plasma 用户介绍。它是一个使用 PackageKit 的基于 Qt 的应用管理器,支持 AppStream、Flatpak 和固件更新。
要在发现应用中安装 Flatpak 和固件更新,需要分别安装 `flatpak``fwupd` 包。**它没有 AUR 支持。**
```
sudo pacman -Syu discover packagekit-qt5
```
> **[GitLab 上的 Discover][9]**
### 4、GNOME PackageKit
![使用 GNOME PackageKit 安装 Firefox][10]
GNOME PackageKit 是一个使用 PackageKit 技术的 GTK3 包管理器,支持 AppStream。不幸的是**没有 AUR 支持**。
要从官方仓库安装它,请使用以下命令:
```
sudo pacman -Syu gnome-packagekit
```
> **[freedesktop 上的 PackageKit][11]**
### 5、GNOME 软件应用
![使用 GNOME 软件应用安装 Firefox][12]
GNOME <ruby>软件<rt>Software</rt></ruby> 应用不需要向 GNOME 桌面用户介绍。它是使用 PackageKit 技术的 GTK4 应用管理器,支持 AppStream、Flatpak 和固件更新。
**它没有 AUR 支持。** 要安装来自 GNOME 软件应用的 Flatpak 和固件更新,需要分别安装 `flatpak``fwupd` 包。
安装它使用:
```
sudo pacman -Syu gnome-software-packagekit-plugin gnome-software
```
> **[GitLab 上的 GNOME 软件][13]**
### 6、tkPacman
![使用 tkPacman 安装 Firefox][14]
它是用 Tcl 编写的 Tk pacman 封装。界面类似于 [Synaptic 包管理器][15]。
由于没有 GTK/Qt 依赖,它非常轻量级,因为它使用 Tcl/Tk GUI 工具包。
**它不支持 AUR**,这很讽刺,因为你需要从 [AUR][16] 安装它。你需要事先安装一个 [AUR 助手][17],如 yay。
```
yay -Syu tkpacman
```
> **[Sourceforge 上的 tkPacman][18]**
### 7、Octopi
![使用 Octopi 安装 Firefox][19]
可以认为它是 tkPacman 的更好看的表亲。它使用 Qt5 和 Alpm还支持 Appstream 和 **AUR通过 yay**
你还可以获得桌面通知、仓库编辑器和缓存清理器。它的界面类似于 Synaptic 包管理器。
要从 AUR 安装它,请使用以下命令。
```
yay -Syu octopi
```
> **[GitHub 上的 Octopi][20]**
### 8、Pamac
![使用 Pamac 安装 Firefox][21]
Pamac 是 Manjaro Linux 的图形包管理器。它基于 GTK3 和 Alpm**支持 AUR、Appstream、Flatpak 和 Snap**。
Pamac 还支持自动下载更新和降级软件包。
它是 Arch Linux 衍生版中使用最广泛的应用,但也曾因 [对 AUR 网页造成 DDoS][22] 而臭名昭著。
[在 Arch Linux 上安装 Pamac][23] 有几种方法。最简单的方法是使用 AUR 助手。
```
yay -Syu pamac-aur
```
> **[GitLab 上的 Pamac][24]**
### 总结
要删除上面任何一个图形化包管理器及其依赖项和配置文件,请使用以下命令,并将 `packagename` 替换为要删除的包的名称。
```
sudo pacman -Rns packagename
```
这样一来,借助合适的工具,你也可以在不接触终端的情况下使用 Arch Linux。
还有一些其他应用程序也使用终端用户界面TUI。一些例子是 [pcurses][25]、[cylon][26]、[pacseek][27] 和 [yup][28]。但是,这篇文章只讨论那些有适当的 GUI 的软件。
**注意:** PackageKit 默认打开系统权限,因而 [不推荐][29] 用于一般用途。因为如果用户属于 `wheel` 组,更新或安装任何软件都不需要密码。
**你看到了在 Arch Linux 上使用图形化软件中心的几种选择。现在是时候决定使用其中一个了。你会选择哪一个Pamac 或 OctoPi 还是其他?现在就在下面留言吧**。
---
via: https://itsfoss.com/arch-linux-gui-package-managers/
作者:[Anuj Sharma][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/anuj/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/install-arch-linux/
[2]: https://itsfoss.com/arch-based-linux-distros/
[3]: https://itsfoss.com/pacman-command/
[4]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[5]: https://invent.kde.org/system/apper
[6]: https://itsfoss.com/wp-content/uploads/2022/09/dde-arch-install-firefox.png
[7]: https://github.com/dekzi/dde-store
[8]: https://itsfoss.com/wp-content/uploads/2022/09/discover-arch-install-firefox.png
[9]: https://invent.kde.org/plasma/discover
[10]: https://itsfoss.com/wp-content/uploads/2022/09/gnome-packagekit-arch-install-firefox.png
[11]: https://freedesktop.org/software/PackageKit/index.html
[12]: https://itsfoss.com/wp-content/uploads/2022/09/gnome-software-arch-install-firefox.png
[13]: https://gitlab.gnome.org/GNOME/gnome-software
[14]: https://itsfoss.com/wp-content/uploads/2022/09/tkpacman-arch-install-firefox.png
[15]: https://itsfoss.com/synaptic-package-manager/
[16]: https://itsfoss.com/aur-arch-linux/
[17]: https://itsfoss.com/best-aur-helpers/
[18]: https://sourceforge.net/projects/tkpacman
[19]: https://itsfoss.com/wp-content/uploads/2022/09/octopi-arch-install-firefox.png
[20]: https://github.com/aarnt/octopi
[21]: https://itsfoss.com/wp-content/uploads/2022/09/pamac-arch-install-firefox.png
[22]: https://gitlab.manjaro.org/applications/pamac/-/issues/1017
[23]: https://itsfoss.com/install-pamac-arch-linux/
[24]: https://gitlab.manjaro.org/applications/pamac
[25]: https://github.com/schuay/pcurses
[26]: https://github.com/gavinlyonsrepo/cylon
[27]: https://github.com/moson-mo/pacseek
[28]: https://github.com/ericm/yup
[29]: https://bugs.archlinux.org/task/50459

View File

@ -0,0 +1,103 @@
[#]: subject: "Get change alerts from any website with this open source tool"
[#]: via: "https://opensource.com/article/22/9/changedetection-io-open-source-website-changes"
[#]: author: "Leigh Morresi https://opensource.com/users/dgtlmoon"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15129-1.html"
用这个开源工具从任何网站获取变化提醒
======
> 使用 changedetection.io 在网站发生变化或更新时获得提醒。
![](https://img.linux.net.cn/data/attachment/album/202210/11/153605ikpi81s1mz8wak2z.jpg)
那一年是 2020 年,关于大流行病的消息迅速涌来,每个人都被大量雷同、更新程度不一的新闻文章所淹没。
但我需要知道的是,我们的官方准则何时改变。最后,这就是对我来说最重要的事情。
无论关注的是大流行病还是最新的科技新闻,提前了解网站内容的变化都至关重要。
[changedetection.io][2] 项目为网站变更检测和通知提供了一个简单但强大的开源解决方案。它很容易设置,而且可以通知 70 多个(还在不断增加)不同的通知系统,如 Matrix、Mattermost、[Nextcloud][3]、[Signal][4]、[Zulip][5]、[Home Assistant][6]、电子邮件等等。它还能通知专有应用,如 Discord、Office365、Reddit、Telegram 和许多其他应用。
但 [changedetection.io][7] 并不只是局限于观察网页内容。你也可以监视 XML 和 JSON 源,它将建立一个 RSS 馈送,记录变化的网站。
由于其内置的 JSON 简单存储系统,不需要设置复杂的数据库来接收和存储信息。你可以 [使用 Docker 镜像运行][8] 或用 `pip` 安装它。该项目有一个 [全面的维基帮助页][9],大多数常见的问题都有涵盖。
对于使用复杂 JavaScript 的网站,你可以用内置的 [Playwright 内容获取器][10] 将你的 changedetection.io 连接到 Chromium 或 Chrome 浏览器。
运行后,在你的浏览器(默认情况下是 `http://localhost:5000`)中访问该应用。如果你的电脑可以从外部网络访问,你可以在 <ruby>设置<rt>Settings</rt></ruby>中设置一个密码。
![change detection watch list][11]
提交你想监控的页面的 URL。有几个与如何过滤该网页有关的设置。例如你很可能不想知道一家公司在其网站页脚列出的股票价格何时发生变化但你可能想知道他们在其博客上发布的新闻文章。
### 监控一个网站
想象一下,你想添加你最喜欢的网站 Opensource.com 进行监控。你只想知道主要标注文章何时包含 “python” 一词,并且通过 Matrix 收到通知。
要做到这点,首先要使用“<ruby>视觉选择器<rt>Visual Filter Selector</rt></ruby>”工具。(这需要连接 **playwright** 浏览器界面)。
![Find an element to monitor][12]
该工具会自动计算出针对内容的最佳 Xpath 或 CSS 过滤器。否则,你会从每天的页面更新中得到大量的噪音。
接下来,访问“<ruby>过滤器和触发器<rt>Filters & Triggers</rt></ruby>”标签。
![Filters and triggers][13]
在 “<ruby>CSS/JSON/XPATH 过滤器<rt>CSS/JSON/XPATH Filter</rt></ruby>”区域(蓝色圆圈),你可以看到上一步自动生成的 CSS 过滤器。
有几个有用的过滤器,比如“<ruby>移除元素<rt>Remove elements</rt></ruby>”(适合移除嘈杂的元素)、“<ruby>忽略文本<rt>Ignore text</rt></ruby>”、“<ruby>触发/等待文本<rt>Trigger/wait for text</rt></ruby>”,和“<ruby>如果文本匹配则阻止变化检测<rt>Block change-detection if text matches</rt></ruby>”(用于等待一些文本消失,如“售罄”)。
在“<ruby>触发/等待文本<rt>Trigger/wait for text</rt></ruby>”(红色圆圈)中,输入你想监测的关键词。(在这个例子中是 “python”
最后一步是在“<ruby>通知<rt>Notifications</rt></ruby>”选项卡中,你要在那里配置你想收到的通知。下面我使用 Matrix API 添加了一个 Matrix 房间作为通知目标。
![Notifications tab][14]
通知的 URL 的格式是 `matrixs://username:password@matrix.org/#/room/#room-name:matrix.org`
然而,[t2Bot][15] 格式也支持。这里有更多的 [Matrix 通知选项][16]。
就是这些了! 现在只要内容有变化,你就会通过 Matrix 收到信息。
### 还有更多
changedetection.io 还有很多功能。比如,你可能更愿意调用一个自定义的 JSON API而不是使用通知 API此时可使用 `jsons://`)。你还可以创建自定义的 HTTP 请求POST 和 GET在检查前执行 JavaScript也许是为了预先填充登录用的用户名和密码字段等等更多有趣的功能还将陆续推出。
不要再浏览网站,而是开始监测网络吧!
*图片提供:(Leigh Morresi, CC BY-SA 4.0)*
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/changedetection-io-open-source-website-changes
作者:[Leigh Morresi][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dgtlmoon
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/browser_desktop_website_checklist_metrics.png
[2]: https://github.com/dgtlmoon/changedetection.io
[3]: https://opensource.com/tags/nextcloud
[4]: https://opensource.com/article/19/10/secure-private-messaging
[5]: https://opensource.com/article/22/3/open-source-chat-zulip
[6]: https://opensource.com/article/20/11/home-assistant
[7]: https://github.com/dgtlmoon/changedetection.io
[8]: https://github.com/dgtlmoon/changedetection.io#docker
[9]: https://github.com/dgtlmoon/changedetection.io/wiki
[10]: https://github.com/dgtlmoon/changedetection.io/wiki/Playwright-content-fetcher
[11]: https://opensource.com/sites/default/files/2022-09/screenshot.png
[12]: https://opensource.com/sites/default/files/2022-09/changedetect-osdc.png
[13]: https://opensource.com/sites/default/files/2022-09/changedetect-filters-triggers.webp
[14]: https://opensource.com/sites/default/files/2022-09/step3-notification-matrix.png
[15]: https://t2bot.io/
[16]: https://github.com/caronc/apprise/wiki/Notify_matrix

View File

@ -3,19 +3,22 @@
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15137-1.html"
如何在 Brave 浏览器中使用画中画模式
======
Brave 是一款出色的类似于 Chrome, 但可[替代 Chrome 的网络浏览器][1]。
![](https://img.linux.net.cn/data/attachment/album/202210/13/003034agqsflug771cl3uj.jpg)
> Brave 是一款出色的类似于 Chrome但可 [替代 Chrome 的网络浏览器][1]。
[Firefox 和 Brave][2] 是我喜欢在 Linux 系统上使用的两种浏览器。两者都有不同的优势。
Firefox 比 Brave 做得更好的一件事就是画中画 (PIP) 模式,它适用于 YouTube、Netflix 和大多数流媒体网站。
Firefox 比 Brave 做得更好的一件事就是画中画PIP模式,它适用于 YouTube、Netflix 和大多数流媒体网站。
Brave 也有画中画模式,但它是如此隐藏,以至于你觉得根本没有 PIP 支持。
Brave 也有画中画模式,但它是如此隐藏,以至于你觉得根本没有 PIP 支持。
内置画中画适用于某些网站(如 YouTube但可能不适用于其他网站如 Prime Video。不用担心你可以为此使用专用扩展。
@ -29,7 +32,7 @@ Brave 也有画中画模式,但它是如此隐藏,以至于你觉得根本
![第一次右键单击后将光标从上下文菜单稍微移开][3]
再次右键单击。它应该在视频上,但不在上一个上下文菜单上。就在视频的其他地方。
再次右键单击。它应该在视频上,但不在上一个上下文菜单上,而在视频的其他地方。
现在你应该看到另一个带有画中画选项的上下文菜单。
@ -43,7 +46,7 @@ Brave 也有画中画模式,但它是如此隐藏,以至于你觉得根本
![Brave 在画中画模式中播放影片][6]
在最近的 Brave 版本中,可以根据自己的喜好调整弹出窗口的大小。
在最近的 Brave 版本中,可以根据自己的喜好调整弹出窗口的大小。
我不明白为什么 Brave 把它隐藏成这样。为什么不突出它?
@ -51,11 +54,11 @@ Brave 也有画中画模式,但它是如此隐藏,以至于你觉得根本
### 方法 2使用画中画扩展
Google 提供了一个官方插件,可让你在 Google Chrome 中获得画中画功能。由于 Brave 基于 Chromium你可以在 Brave 中使用相同的扩展。
谷歌提供了一个官方插件,可让你在谷歌 Chrome 中获得画中画功能。由于 Brave 基于 Chromium你可以在 Brave 中使用相同的扩展。
[画中画扩展][7]
> **[画中画扩展][7]**
进入扩展页面并**点击 Add to Brave 按钮**
进入扩展页面并点击 <ruby>添加到 Brave<rt>Add to Brave</rt></ruby> 按钮。
![为 Brave 添加画中画扩展][8]
@ -67,7 +70,7 @@ Google 提供了一个官方插件,可让你在 Google Chrome 中获得画中
![使用画中画扩展][10]
播放视频时,单击扩展名,视频应该会弹出。
播放视频时,单击该扩展图标,视频应该会弹出。
### 你能在 Brave 中启用 PIP 模式吗?
@ -82,7 +85,7 @@ via: https://itsfoss.com/picture-in-picture-brave/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -90,7 +93,7 @@ via: https://itsfoss.com/picture-in-picture-brave/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/open-source-browsers-linux/
[2]: https://itsfoss.com/brave-vs-firefox/
[3]: https://itsfoss.com/wp-content/uploads/2022/09/getting-picture-in-picture-in-brave-1.png
[3]: https://itsfoss.com/wp-content/uploads/2022/09/getting-picture-in-picture-in-brave-1.webp
[4]: https://itsfoss.com/wp-content/uploads/2022/09/getting-picture-in-picture-in-brave-2.webp
[5]: https://itsfoss.com/wp-content/uploads/2022/09/brave-picture-in-picture-youtube.webp
[6]: https://itsfoss.com/wp-content/uploads/2022/09/brave-playing-picture-in-picture-mode-on-screen.webp

View File

@ -0,0 +1,206 @@
[#]: subject: "How to Enable RPM Fusion Repo in Fedora, CentOS, RHEL"
[#]: via: "https://www.debugpoint.com/enable-rpm-fusion-fedora-rhel-centos/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15145-1.html"
如何在 Fedora、CentOS、RHEL 中启用 RPM Fusion 仓库
======
> 本指南解释了在 Fedora Linux 发行版中启用第三方软件仓库 RPM Fusion 的步骤。
[RPM Fusion][1] 软件仓库是一个社区维护的软件仓库,它为 Fedora Linux 提供额外的软件包,这些软件包不是由 Fedora 官方团队分发,例如 DVD 播放、媒体播放、来自 GNOME 和 KDE 的软件等。这是因为许可证、其他法律原因和特定国家/地区的软件规范而导致的。
RPM Fusion 为 Red Hat Enterprise LinuxRHEL以及 Fedora 提供了 .rpm 包。
本指南介绍了在 Fedora Linux 中启用 RPM Fusion 仓库所需的步骤。本指南适用于所有 Fedora 发行版本。
这在所有当前支持的 Fedora 版本35、36 和 37中进行了测试。
![](https://img.linux.net.cn/data/attachment/album/202210/16/111338jjr0eh5cjgq017n5.jpg)
### 如何在 Fedora Linux、RHEL、CentOS 中启用 RPM Fusion 仓库
RPM Fusion 有两种版本的仓库:自由和非自由。
顾名思义,自由版包含软件包的自由版本,非自由版包含封闭源代码的编译软件包和“非商业”开源软件。
在继续之前,首先检查你是否安装了 RPM fusion。打开终端并运行以下命令。
```
dnf repolist | grep rpmfusion
```
如果已经安装了 RPM Fusion你应该会看到如下所示的消息此时就不需要执行下面的步骤了。如果未安装你可以继续执行以下步骤。
![RPM Fusion 已安装][3]
打开终端并根据你的操作系统版本运行以下命令。请注意,这些命令包含自由和非自由版本。如果你愿意,你可以在运行时省略下面的任何一个。
#### Fedora
自由版:
```
sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
```
非自由版:
```
sudo dnf install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
#### 在 Silverblue 上使用 rpm-ostree
自由版:
```
sudo rpm-ostree install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
```
非自由版:
```
sudo rpm-ostree install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
#### RHEL 8
先安装 EPEL
```
sudo dnf install --nogpgcheck https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
```
自由版:
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
```
非自由版:
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
```
开发相关软件包:
```
sudo subscription-manager repos --enable "codeready-builder-for-rhel-8-$(uname -m)-rpms"
```
#### CentOS 8
先安装 EPEL
```
sudo dnf install --nogpgcheck https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
```
自由版:
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
```
非自由版:
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
```
启用 PowerTools
```
sudo dnf config-manager --enable PowerTools
```
### 附加说明
RPM Fusion 还可以帮助用户安装来自 GNOME 软件或 KDE Discover 的软件包。要在 Fedora 中启用它,请运行以下命令:
```
sudo dnf groupupdate core
```
你还可以通过以下命令启用 RPM Fusion 来使用 gstreamer 和其他多媒体播放包来播放媒体文件。
```
sudo dnf groupupdate multimedia --setop="install_weak_deps=False" --exclude=PackageKit-gstreamer-plugin
```
```
sudo dnf groupupdate sound-and-video
```
启用 RPM Fusion 以使用 libdvdcss 播放 DVD。
```
sudo dnf install rpmfusion-free-release-tainted
sudo dnf install libdvdcss
```
通过以下命令启用 RPM Fusion 以启用非 FLOSS 硬件包。
```
sudo dnf install rpmfusion-nonfree-release-tainted
sudo dnf install *-firmware
```
运行命令后,如果你使用的是 Fedora 或 CentOS/RHEL请在重启前运行以下命令。
```
sudo dnf check-update
sudo dnf update
```
### 如何使用 dnf 删除仓库
如果要删除仓库,请按照以下步骤操作。
首先,使用以下命令查看添加到 Fedora 系统的仓库列表。
```
dnf repolist
```
![dnf 仓库列表][4]
如你所见,添加了 rpmfusion 自由和非自由仓库。要通过 dnf 删除它,你需要使用以下命令准确知道仓库文件名。
```
rpm -qa 'rpmfusion*'
```
这将列出仓库的确切名称。在示例中,它们是 “rpmfusion-free-release”。
![从 Fedora 中移除 rpmfusion][5]
现在你可以简单地运行以下命令来删除它。
```
sudo dnf remove rpmfusion-free-release
```
你可以重复上面的例子从 Fedora 中删除 rpmfusion也可以使用它从系统中删除任何其他仓库。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/enable-rpm-fusion-fedora-rhel-centos/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://rpmfusion.org/
[2]: https://www.debugpoint.com/wp-content/uploads/2020/07/rpmfusion.jpg
[3]: https://www.debugpoint.com/wp-content/uploads/2020/07/RPM-Fusion-Already-Installed-.png
[4]: https://www.debugpoint.com/wp-content/uploads/2020/07/dnf-repolist.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2020/07/remove-rpmfusion-from-fedora.jpg

View File

@ -0,0 +1,47 @@
[#]: subject: "Security Issues With Open Source In Todays World"
[#]: via: "https://www.opensourceforu.com/2022/10/security-issues-with-open-source-in-todays-world/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "KevinZonda"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15125-1.html"
当今世界的开源安全问题
======
![](https://img.linux.net.cn/data/attachment/album/202210/10/160918vmfxmsm4bnwi0nb4.jpg)
> 开源代码可能是当今大多数公司最可行的选择,但它也伴随着自己的问题。
许多人支持使用 <ruby>开源软件<rt>open source software</rt></ruby>OSS。毕竟我们为什么要不断地尝试构建代码来解决别人已经解决过的问题为什么不分享信息并逐步、迭代地增强当前的开源解决方案呢这些<ruby>平等主义价值观<rt>egalitarian values</rt></ruby>或许是整个文明的根本(更不用说软件了),但其中也蕴含着一个几千年来一直存在的冲突。
开源软件安全的问题在于,尽管任何人都可以查看源代码,但这并不意味着他们会这么做。有一些广泛使用的开源项目仅由数量有限的工程师维护。这些工程师无法完全自愿地提供时间和精力,因为他们也需要支付他们的账单。
即使对于更复杂的开源项目这也是一个问题。举个例子Linux 内核项目由 3000 多万行代码组成,包含数百个需要解决的缺陷,并有近 2000 名活跃的开发者。每个活跃的开发者都写了超过 15000 行的代码。
根据 Linux 基金会最近的一项研究,一个应用程序平均有 5.1 个重大漏洞仍未解决41% 的企业对其开源软件的安全性缺乏信心。而更糟糕的是,只有 49% 的企业拥有开源安全策略。
即使开源软件有安全漏洞,这也不能保证它能被修复。调查显示,目前修复一个漏洞平均需要 97.8 天,使使用该软件的企业在几个月内容易受到攻击。这就是开源软件安全有时被忽视的地方:就像好人可以寻找代码中的错误和漏洞来修复它们一样,坏人也可以寻找同样的漏洞来利用它们。
仅仅依靠志愿者社区来发现漏洞、报告漏洞和修复漏洞是一个漫长的过程。在你继续受益于开源的广泛优势的同时,花钱请人检查你的开源解决方案的安全性可以帮助弥补这个问题。
由于必须部署开源软件的更新和补丁以保证系统的安全,这一要求会带来独特的困难。如果你的解决方案依赖于某个软件版本,更新你的关键任务软件可能会导致功能损失和/或计划外的停机。当情况对业务至关重要时,聘请专家来回传补丁并维护一个时间更长的版本可能比让大型社区愿意去做更加优雅。
开源社区经常使用的一句话是:“这是开源的,去改变它吧!”它强调了一个关键点:当别人在项目中投入时间、精力或金钱的时候,期望白白得到良好的安全水平是不合理的,也是不可持续的。
要么按原定计划为开源做出贡献,改进代码并为他人发布,要么聘请专业人士管理开源代码并在必要时进行调试,这些都是选择。然而,这个行业无法承担完全不做贡献。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/10/security-issues-with-open-source-in-todays-world/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[KevinZonda](https://github.com/KevinZonda)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed

View File

@ -3,43 +3,44 @@
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: "Cubik65536"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15124-1.html"
一个全新的用于 NVIDIA 显卡的开源 Vulkan 驱动已经准备好测试了!
一个全新的用于英伟达显卡的开源 Vulkan 驱动已经准备好测试了!
======
为 NVIDIA 显卡开发的一个全新的开源驱动正在开发中!这里有一些好的进展……
![一个全新的用于 NVIDIA 显卡的开源 Vulkan 驱动已经准备好测试了!][1]
> 为英伟达显卡开发的一个全新的开源驱动正在开发中!这里有一些好的进展……
**NVK** 是一个全新的用于 NVIDIA 显卡的开源 Vulkan 驱动,它的目标是成为新的主流显卡驱动。
![一个全新的用于英伟达显卡的开源 Vulkan 驱动已经准备好测试了!][1]
这成为可能的部分原因是因为 Nvidia 开源了数据中心 GPU 和消费级 GPUGTX/RTX的 GPU 内核模块
**NVK** 是一个全新的用于英伟达显卡的开源 Vulkan 驱动,它的目标是成为新的主流显卡驱动
[NVIDIA 在改善其 GPU 在 Linux 上的体验方面迈出了重要的一步][2]
这成为可能的部分原因是因为英伟达开源了数据中心 GPU 和消费级 GPUGTX/RTX的 GPU 内核模块。
> **[英伟达在改善其 GPU 在 Linux 上的体验方面迈出了重要的一步][2]**
它使开发人员能够改进开源驱动程序并启用比以前更多的功能。
让我们来看看 NVK 可以提供什么。
### 适用于 NVIDIA GPU 的新 NVK 开源驱动程序
### 新的适用于英伟达 GPU 的 NVK 开源驱动程序
**Jason Ekstrand**Collabora 的工程师)和 Red Hat 的其他人已经在过去几个月里编写了 NVK 的代码。
他们可以利用 Turing 显卡提供的统一固件 BLOB然后在其上构建 Vulkan 支持。
他们可以利用 Turing 系列显卡提供的统一固件 BLOB然后在其上构建 Vulkan 支持。
**但是nouveau 开源驱动程序已经存在,对吗?**
**但是,不是已经有了 nouveau 开源驱动程序了吗?**
NVK 与其他的 nouveau 驱动非常不同,因为它是从头开始编写的。
noiveau一个重要的 Nvidia 显卡的开源驱动程序,已经年久失修了,试图在它的基础上构建是一个很多人都无法承担的任务。
nouveau 是一个主要的英伟达显卡的开源驱动程序,已经年久失修了,试图在它的基础上构建是一个很多人都无法承担的任务。
当然,它是由有很多才华的工程师开发的,但是缺乏公司的支持和贡献者的影响了它的发展。
**NVK 旨在克服这些问题,同时专注于对 Turing 系列及更高版本 GPU 的支持。**
由于内核的开发方式,对于 Kepler、Maxwell 和 Pascal 等较旧的 GPU 的支持可能不会很容易地加入 NVK。它可能会对新内核有一个很大的依赖,从而只支持较新的 GPU。
由于内核的开发方式,对于 Kepler、Maxwell 和 Pascal 等较旧的 GPU 的支持可能不会很容易地加入 NVK。它也许极大地依赖于新内核,从而只支持较新的 GPU。
同时nouveau 内核接口与 Vulkan 不兼容,阻碍了对较旧 GPU 的支持。
@ -49,13 +50,13 @@ noiveau一个重要的 Nvidia 显卡的开源驱动程序,已经年久失
### 如何尝试它?
NVK 目前处于非常初级的状态,有很多功能缺失,并且正在不断开发中。
NVK 目前处于非常初级的状态,有很多功能缺失,并且正在持续开发中。
**所以,它还不适合让所有类型的用户尝试。**
你还是可以通过拉取 freedesktop.org 上的 [nouveau/mesa 项目][4] 的 nvk/main 分支并构建它来尝试它。
如果你想的话,你也可以通过贡献到同一个 [nvk/main 分支][5] 来帮助 NVK 的开发。
如果你想的话,你也可以通过贡献到该项目下的 [nvk/main 分支][5] 来帮助 NVK 的开发。
对于更多的技术信息,你可以参考 [官方公告][6]。
@ -63,7 +64,7 @@ NVK 目前处于非常初级的状态,有很多功能缺失,并且正在不
NVK 有很多潜力,尤其是与老化的 [nouveau][7] 图形驱动套件相比。
这可以为 nouveau 带来一个合适的继承者,同时为 Linux 提供一个带有很多功能的,主流的开源 Nvidia 图形驱动套件。
这可以为 nouveau 带来一个合适的继承者,同时为 Linux 提供一个带有很多功能的、主流的开源英伟达图形驱动套件。
💬 *你对此有什么看法?你认为这最终能够实现 nouveau 驱动程序所未能实现的吗?*
@ -74,7 +75,7 @@ via: https://news.itsfoss.com/nvidia-nvk/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[Cubik65536](https://github.com/Cubik65536)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,23 +3,26 @@
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "KevinZonda"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15128-1.html"
谷歌 AI 推出新的阵列存储开源库
谷歌 AI 推出新的数组存储开源库
======
*TensorStore一个用于阵列存储的高性能开源库已被谷歌 AI 创造。*
谷歌开发的开源 C++ 和 Python 框架 TensorStore 旨在加速读写大型多维数组的设计。覆盖单个大型坐标系的多维数据集通常用于当代计算机科学和机器学习应用程序中。使用这些数据集具有挑战性,因为客户经常希望进行涉及多个工作站并行操作的调查,并且可能会以不可预测的间隔和不同的规模接收和输出数据。
![](https://www.opensourceforu.com/wp-content/uploads/2022/10/google3-1068x559.jpg)
谷歌研究院开发了 TensorStore这是一个为用户提供 API 访问权限的库,该 API 无需复杂的硬件即可管理庞大的数据集以解决数据存储和操作问题。该库支持许多存储系统包括本地和网络文件系统、Google Cloud Storage 等
> 谷歌 AI 引入了一个用于数组存储的高性能开源库 TensorStore
为了加载和处理大量数据TensorStore 提供了一个简单的 Python API。任何大型基础数据集都可以加载和更新而无需将完整的数据集存储在内存中因为在需要精确切片之前不会读取或保存实际数据。
谷歌开发的开源 C++ 和 Python 框架 TensorStore 旨在加速大型多维数组的读写设计。覆盖单一大型坐标系的多维数据集通常用于当代计算机科学和机器学习应用程序中。使用这些数据集具有挑战性,因为客户经常希望进行涉及多个工作站并行操作的调查,并且可能会以不可预测的间隔和不同的规模接收和输出数据。
这是通过索引和操作语法实现的,这与用于 NumPy 操作的语法非常相似。除了虚拟视图、广播、对齐和其他复杂的索引功能TensorStore 还支持,如数据类型转换、降低取样和随意创建的数组这些功能
谷歌研究院开发了 TensorStore该库为用户提供了一个可以管理巨大数据集的 API而无需复杂的硬件以解决数据存储和操作问题。该库支持许多存储系统包括本地和网络文件系统、谷歌云存储等
此外TensorStore 包含一个异步 API可以同时进行读取或写入操作。在执行其他工作时软件可以执行可配置的内存缓存从而减少在访问常用数据时处理较慢存储系统的需要。
为了加载和处理大量数据TensorStore 提供了一个简单的 Python API。任何任意大小的基础数据集都可以加载和更新而无需将数据集完整存储在内存中因为在需要精确切片之前不需要在内存中读取或保存实际数据。
这是通过索引和操作语法实现的,它与 NumPy 操作的语法非常相似。除了虚拟视图、广播、对齐和其他复杂的索引功能TensorStore 还支持如数据类型转换、降低取样和随意创建的数组这些功能。
此外TensorStore 包含一个异步 API可以并发进行读取或写入操作。在执行其他工作时软件可以进行内存缓存处理可配置从而减少在访问常用数据时处理较慢存储系统的需要。
大型数值数据集需要大量的处理能力来检查和分析。实现这一点的常用方法是在分散在许多设备上的大量 CPU 或加速器内核之间并行化任务。在保持出色速度的同时并行分析单个数据集的能力一直是 TensorStore 的关键目标。 PaLM、脑图和其他复杂的大规模机器学习模型是 TensorStore 应用案例的一些例子。
@ -29,8 +32,8 @@ via: https://www.opensourceforu.com/2022/10/google-ai-unveils-a-new-open-source-
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[KevinZonda](https://github.com/KevinZonda)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,152 @@
[#]: subject: "Upgrade Various Kinds of Packages in Linux at Once With Topgrade"
[#]: via: "https://itsfoss.com/topgrade/"
[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15134-1.html"
使用 Topgrade 一次升级 Linux 中的各种软件包
======
![](https://img.linux.net.cn/data/attachment/album/202210/12/152118mo5r6pohnn4o5h56.jpg)
更新 Linux 系统并没有那么复杂,不是吗?毕竟,要更新 Ubuntu 之类的发行版,你只需要使用 `apt update``apt upgrade` 就行。
如果所有的软件包都是通过同一个包管理器安装的,事情确实就是这么简单。
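也就是说,在只用 APT 的理想情况下,一条命令就能搞定(示例):
```
# 示例:一次性更新 APT 管理的所有软件包
sudo apt update && sudo apt upgrade
```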
但现在情况不再如此。你有经典的 apt/dnf/pacman还有 Snap、Flatpak、Appimages。不止于此你还可以使用 PIP用于 Python和 Cargo用于 Rust安装应用。
使用 Node NPM 包需要单独更新。Oh My Zsh需要单独更新。[Vim 中的插件][1]、Atom 等也可能不被 apt/dnf/pacman 覆盖。
你现在看到问题了吗?这就是名为 Topgrade 的新工具旨在解决的问题。
### Topgrade处理各种更新的单一程序
[Topgrade][2] 是一个 CLI 程序,它会检测你使用的工具,然后运行适当的命令来更新它们。
![Topgrade disable system][3]
除了通常的 Linux 包管理器,它还可以检测和更新 Brew、Cargo、PIP、Pihole、Vim 和 Emacs 插件、R 软件包等。你可以在 [维基页面][4] 上查看支持的包列表。
##### Topgrade 的主要特点:
* 能够更新来自不同的包管理器的软件包,**包括固件**
* 你可以控制如何更新软件包。
* 高度可定制。
* 甚至能够在更新包之前进行概览。
所以不要浪费任何时间,让我们跳到安装。
### 使用 Cargo 在 Linux 中安装 Topgrade
安装过程非常简单,因为我将使用 Cargo 包管理器。
我们已经有了 [详细指南,其中包含设置 Cargo 包管理器的多种方法][5]。所以我将在我的示例中使用 Ubuntu 来快速完成。
因此,让我们以最小化的方式安装依赖项以及 Cargo
```
sudo apt install cargo libssl-dev pkg-config
```
安装 Cargo 后,使用给定的命令安装 Topgrade
```
cargo install topgrade
```
它会抛出一个警告:
![cargo error][6]
你只需添加 `cargo` 路径即可运行二进制文件。这可以通过给定的命令来完成,你需要使用你的用户名替换 `sagar`
```
echo 'export PATH=$PATH:/home/sagar/.cargo/bin' >> /home/sagar/.bashrc
```
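顺便一提,一个不依赖具体用户名的写法是使用 `$HOME` 变量(示例,效果与上面的命令相同):
```
# 示例:用 $HOME 代替写死的用户名
echo 'export PATH=$PATH:$HOME/.cargo/bin' >> ~/.bashrc
```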
现在重启系统Topgrade 就可以使用了。但是等等,我们需要安装另一个包来更新 Cargo 以获取最新的包。
```
cargo install cargo-update
```
这样我们完成了安装。
### 使用 Topgrade
使用 Topgrade 非常简单。使用一个命令,就是这样:
```
topgrade
```
![][7]
但这除了系统包之外不会给你任何控制权。不过正如我所提到的,你可以将不想更新的仓库列入黑名单。
#### 从 Topgrade 中排除包管理器和仓库
假设我想排除 Snap 和从默认包管理器下载的包,所以我的命令是:
```
topgrade --disable snap system
```
![Topgrade disable snap system][8]
要使更改永久生效,你需要修改它的配置文件,该文件可以通过以下命令打开:
```
topgrade --edit-config
```
对于此示例,我排除了 Snap 和默认系统仓库:
![configuring Topgrade][9]
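作为参考,配置文件中对应的内容大致如下(这是一个假设的片段,具体键名请以 Topgrade 官方文档为准):
```
# 假设的 ~/.config/topgrade.toml 片段
disable = ["system", "snap"]
```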
#### 试运行 Topgrade
评估将要更新的过时软件包总是一个好主意,我发现这是 Topgrade 所有功能中最有用的选项。
你只需使用带有 `-n` 选项的 `topgrade` 命令,它就会生成过期软件包的摘要。
```
topgrade -n
```
![summery of Topgrade][10]
检查需要更新的软件包的一种简洁方法。
### 总结
在使用 Topgrade 几周后,它成为了我的 Linux 武器库中不可或缺的一部分。像大多数其他 Linux 用户一样,我只是通过我的默认包管理器更新包。Python 和 Rust 包被完全忽略了。感谢 Topgrade我的系统现在完全更新了。
我知道这不是每个人都想使用的工具。那你呢?愿意试一试吗?
--------------------------------------------------------------------------------
via: https://itsfoss.com/topgrade/
作者:[Sagar Sharma][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sagar/
[b]: https://github.com/lkxed
[1]: https://linuxhandbook.com/install-vim-plugins/
[2]: https://github.com/r-darwish/topgrade
[3]: https://itsfoss.com/wp-content/uploads/2022/09/topgrade-disable-system.png
[4]: https://github.com/r-darwish/topgrade/wiki/Step-list
[5]: https://itsfoss.com/install-rust-cargo-ubuntu-linux/
[6]: https://itsfoss.com/wp-content/uploads/2022/09/cargo-error.png
[7]: https://itsfoss.com/wp-content/uploads/2022/10/topgrade.mp4
[8]: https://itsfoss.com/wp-content/uploads/2022/09/topgrade-disable-snap-system.png
[9]: https://itsfoss.com/wp-content/uploads/2022/09/configuring-topgrade-1.png
[10]: https://itsfoss.com/wp-content/uploads/2022/09/summery-of-topgrade.png

View File

@ -0,0 +1,218 @@
[#]: subject: "How to Create LVM Partition Step-by-Step in Linux"
[#]: via: "https://www.linuxtechi.com/how-to-create-lvm-partition-in-linux/"
[#]: author: "James Kiarie https://www.linuxtechi.com/author/james/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15152-1.html"
在 Linux 中创建 LVM 分区的分步指南
======
![](https://img.linux.net.cn/data/attachment/album/202210/18/113615swffwazya3nyfve2.jpg)
> 在本指南中,我们将逐步介绍如何在 Linux 中创建 LVM 分区。
LVM 代表 “<ruby>逻辑卷管理<rt>Logical Volume Management</rt></ruby>”,它是专门为服务器管理 Linux 系统上的磁盘或存储的推荐方式。 LVM 分区的主要优点之一是我们可以实时扩展其大小而无需停机。 LVM 分区也可以缩小,但不推荐。
为了演示,我在我的 Ubuntu 22.04 系统上连接了 15GB 磁盘,我们将从命令行在该磁盘上创建 LVM 分区。
##### 准备
- 连接到 Linux 系统的原始磁盘
- 具有 sudo 权限的本地用户
- 预装 lvm2 包
事不宜迟,让我们深入了解这些步骤。
### 步骤 1、识别新连接的原始磁盘
登录到你的系统,打开终端并运行以下 `dmesg` 命令:
```
$ sudo dmesg | grep -i sd
```
在输出中,查找大小为 15GB 的新磁盘。
![dmesg-command-new-attached-disk-linux][1]
识别新连接的原始磁盘的另一种方法是通过 `fdisk` 命令:
```
$ sudo fdisk -l | grep -i /dev/sd
```
输出:
![fdisk-command-output-new-disk][2]
从上面的输出,可以确认新连接的磁盘是 `/dev/sdb`
### 步骤 2、创建 PV物理卷
在开始在磁盘 `/dev/sdb` 上创建<ruby>物理卷<rt>Physical Volume</rt></ruby>PV之前请确保已安装 `lvm2` 包。如果未安装,请运行以下命令:
```
$ sudo apt install lvm2 // On Ubuntu / Debian
$ sudo dnf install lvm2 // on RHEL / CentOS
```
运行以下 `pvcreate` 命令在磁盘 `/dev/sdb` 上创建 PV
```
$ sudo pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
$
```
要验证 PV 状态,运行:
```
$ sudo pvs /dev/sdb
或者
$ sudo pvdisplay /dev/sdb
```
![pvdisplay-command-output-linux][3]
### 步骤 3、创建 VG卷组
要创建<ruby>卷组<rt>Volume Group</rt></ruby>VG我们将使用 `vgcreate` 命令。创建 VG 意味着将 PV 添加到其中。
语法:
```
$ sudo vgcreate <vg_name> <pv>
```
在我们的例子中,命令是:
```
$ sudo vgcreate volgrp01 /dev/sdb
Volume group "volgrp01" successfully created
$
```
运行以下命令以验证 VG`volgrp01`)的状态:
```
$ sudo vgs volgrp01
或者
$ sudo vgdisplay volgrp01
```
上述命令的输出:
![vgs-command-output-linux][4]
以上输出确认大小为 15 GiB 的卷组 `volgrp01` 已成功创建,一个<ruby>物理扩展<rt>Physical Extent</rt></ruby>PE的大小为 4 MB。创建 VG 时可以更改 PE 大小。
### 步骤 4、创建 LV逻辑卷
`lvcreate` 命令用于从 VG 中创建<ruby>逻辑卷<rt>Logical Volume</rt></ruby> LV。 `lvcreate` 命令的语法如下所示:
```
$ sudo lvcreate -L <Size-of-LV> -n <LV-Name> <VG-Name>
```
在我们的例子中,以下命令将用于创建大小为 14 GB 的 LV
```
$ sudo lvcreate -L 14G -n lv01 volgrp01
Logical volume "lv01" created.
$
```
验证 LV 的状态,运行:
```
$ sudo lvs /dev/volgrp01/lv01
或者
$ sudo lvdisplay /dev/volgrp01/lv01
```
输出:
![lvs-command-output-linux][5]
上面的输出显示 LV`lv01`)已成功创建,大小为 14 GiB。
### 步骤 5、格式化 LVM 分区
使用 `mkfs` 命令格式化 LVM 分区。在我们的例子中LVM 分区是 `/dev/volgrp01/lv01`
注意:我们可以将分区格式化为 ext4 或 xfs因此请根据你的设置和要求选择文件系统类型。
运行以下命令将 LVM 分区格式化为 ext4 文件系统。
```
$ sudo mkfs.ext4 /dev/volgrp01/lv01
```
![mkfs-ext4-filesystem-lvm][6]
执行下面的命令,用 xfs 文件系统格式化 LVM 分区:
```
$ sudo mkfs.xfs /dev/volgrp01/lv01
```
要使用上述格式化分区,我们必须将其挂载到某个文件夹中。所以,让我们创建一个文件夹 `/mnt/data`
```
$ sudo mkdir /mnt/data
```
现在运行 `mount` 命令将其挂载到 `/mnt/data` 文件夹:
```
$ sudo mount /dev/volgrp01/lv01 /mnt/data/
$ df -Th /mnt/data/
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/volgrp01-lv01 ext4 14G 24K 13G 1% /mnt/data
$
```
试着创建一个测试文件,运行以下命令:
```
$ cd /mnt/data/
$ echo "testing lvm partition" | sudo tee dummy.txt
$ cat dummy.txt
testing lvm partition
$
$ sudo rm -f dummy.txt
```
完美,以上命令输出确认我们可以访问 LVM 分区。
要永久挂载上述 LVM 分区,请使用以下 `echo` 命令将其条目添加到 `fstab` 文件中:
```
$ echo '/dev/volgrp01/lv01 /mnt/data ext4 defaults 0 0' | sudo tee -a /etc/fstab
$ sudo mount -a
```
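顺便一提,前文提到 LVM 分区可以在线扩容。下面是一个假设的扩容示例(前提是卷组 `volgrp01` 中还有足够的空闲空间,`-r` 选项会同时扩展文件系统):
```
# 示例:给 lv01 在线扩容 500MiB并同步扩展文件系统
$ sudo lvextend -r -L +500M /dev/volgrp01/lv01
```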
以上就是本指南的全部内容,感谢阅读。请在下面的评论区发表你的问题和反馈。
---
via: https://www.linuxtechi.com/how-to-create-lvm-partition-in-linux/
作者:[James Kiarie][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/wp-content/uploads/2022/10/dmesg-command-new-attached-disk-linux.png
[2]: https://www.linuxtechi.com/wp-content/uploads/2022/10/fdisk-command-output-new-disk.png
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/10/pvdisplay-command-output-linux.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/10/vgs-command-output-linux.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2022/10/lvs-command-output-linux.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2022/10/mkfs-ext4-filesystem-lvm.png

View File

@ -0,0 +1,96 @@
[#]: subject: "How to Update Google Chrome on Ubuntu Linux"
[#]: via: "https://itsfoss.com/update-google-chrome-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15163-1.html"
如何在 Ubuntu Linux 上更新谷歌 Chrome
======
![](https://img.linux.net.cn/data/attachment/album/202210/22/085013gihsi4rtmpkmj4yb.png)
> 你设法在你的 Ubuntu 系统上安装了谷歌 Chrome 浏览器。现在你想知道如何让浏览器保持更新。
在 Windows 和 macOS 上,当 Chrome 上有可用更新时,你会在浏览器中收到通知,你可以从浏览器中点击更新选项。
Linux 中的情况有所不同。你不会从浏览器更新 Chrome。你要使用系统更新对其进行更新。
是的。当 Chrome 上有可用的新更新时Ubuntu 会通过系统更新工具通知你。
![当有新版本的 Chrome 可用时Ubuntu 会发送通知][1]
你只需单击“<ruby>立即安装<rt>Install Now</rt></ruby>”按钮,在被提示时输入你的帐户密码并将 Chrome 更新到新版本。
让我告诉你为什么会在系统级别看到更新,以及如何在命令行中更新谷歌 Chrome。
### 方法 1使用系统更新更新谷歌浏览器
你最初是如何安装 Chrome 的?你从 [Chrome 网站][2] 获得了 deb 安装程序文件,并使用它来 [在 Ubuntu 上安装 Chrome][3]。
当你这样做时,谷歌会在你系统的源列表中添加一个仓库条目。这样,你的系统就会信任来自谷歌仓库的包。
![谷歌 Chrome 存储库添加到 Ubuntu 系统][4]
对于添加到系统中的所有此类条目,包更新通过 Ubuntu 更新程序集中进行。
这就是为什么当 Google Chrome和其他已安装的应用有可用更新时你的 Ubuntu 系统会向你发送通知。
![Chrome 更新可通过系统更新与其他应用一起使用][5]
**单击“<ruby>立即安装<rt>Install Now</rt></ruby>”按钮并在要求时输入你的密码**。很快,系统将安装所有可升级的软件包。
根据更新偏好,通知可能不是立即的。如果需要,你可以手动运行更新程序工具并查看适用于你的 Ubuntu 系统的更新。
![运行软件更新程序以查看你的系统有哪些可用更新][6]
### 方法 2在 Ubuntu 命令行中更新 Chrome
如果你更喜欢终端而不是图形界面,你也可以使用命令更新 Chrome。
打开终端,并依次运行以下命令:
```
sudo apt update
sudo apt --only-upgrade install google-chrome-stable
```
第一条命令更新包缓存,以便你的系统知道可以升级哪些包。
第二条命令 [仅更新单个包][7],即谷歌 Chrome安装为 `google-chrome-stable`)。
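更新完成后,你可以用下面的命令确认 Chrome 的当前版本(示例):
```
# 示例:查看当前安装的 Chrome 版本
google-chrome --version
```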
### 总结
如你所见Ubuntu 比 Windows 更精简。你会随其他系统更新一起更新 Chrome。
顺便一提,如果你对它不满意,你可以了解 [从 Ubuntu 中删除 Google Chrome][8]。
Chrome 是一款不错的浏览器。你可以通过 [使用 Chrome 中的快捷方式][9] 来试验它,因为它使浏览体验更加流畅。
在 Ubuntu 上享受 Chrome
--------------------------------------------------------------------------------
via: https://itsfoss.com/update-google-chrome-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2021/06/chrome-edge-update-ubuntu.png
[2]: https://www.google.com/chrome/
[3]: https://itsfoss.com/install-chrome-ubuntu/
[4]: https://itsfoss.com/wp-content/uploads/2021/06/google-chrome-repo-ubuntu.png
[5]: https://itsfoss.com/wp-content/uploads/2021/06/chrome-edge-update-ubuntu.png
[6]: https://itsfoss.com/wp-content/uploads/2022/04/software-updater-ubuntu-22-04.jpg
[7]: https://itsfoss.com/apt-upgrade-single-package/
[8]: https://itsfoss.com/uninstall-chrome-from-ubuntu/
[9]: https://itsfoss.com/google-chrome-shortcuts/

View File

@ -0,0 +1,123 @@
[#]: subject: "Xubuntu 22.10: Top New Features"
[#]: via: "https://www.debugpoint.com/xubuntu-22-10-features/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15155-1.html"
Xubuntu 22.10:热门新功能
======
> 这是 Xubuntu 22.10 Kinetic Kudu 及其新功能的快速总结。
![Xubuntu 22.10 桌面][1]
质量需要时间来建立。它适用于生活的各个方面,包括软件。
自 Xfce 4.16 发布以来Xfce 4.17(开发版)已经被添加了许多新功能。这包括核心 Xfce、原生应用GNOME 43、MATE 1.26 和 libadwaita。由于 Xfce 也是 GNOME 和 MATE 的组合,正确地合并和测试这些更改需要时间。
在 Xubuntu 22.10 Kinetic Kudu 版本中,你将体验到自 2020 年 12 月以来所做的所有改进:将近两年的错误修复和增强。
让我们快速查看一下时间表。目前Xubuntu 22.10 beta 已经发布,并正在测试中(本文末尾提供了 ISO 链接)。最终版本预计于 2022 年 10 月 20 日发布。
### Xubuntu 22.10 新功能
#### 核心更新和 GNOME 框架
在其核心Xubuntu 22.10 同其基于的 Ubuntu 22.10 一样,采用 Linux 内核 5.19。另外Xfce 桌面版本是 Xfce 4.17。
4.17 版本是一个开发版,因为它是下一个大版本 Xfce 4.18 的基础,该版本 [计划在今年圣诞节发布][2]。
让我们谈谈 GNOME 和相关应用。 Xubuntu 22.10 中的 Xfce 4.17 首次获得了带有 GNOME 43 更新的 libadwaita。这意味着默认的 GNOME 应用程序可以在 Xfce 桌面下正确呈现。
这就是说GNOME <ruby>软件应用<rt>Software</rt></ruby> 43 在 Xubuntu 22.10 的 Xfce 桌面下看起来很棒。如果你将其与 Xfce 原生外观和带有 CSD/SSD例如 “<ruby>磁盘应用<rt>Disk</rt></ruby>”)的 GNOME 应用进行比较,它们看起来都很顺眼。
我对 GNOME 软件应用 43 在 Xfce 桌面下的 libadwaita/GTK4 渲染效果如此之好感到惊讶。
![在 Xubuntu 22.10 中一起使用三种不同的窗口][3]
#### Xfce 应用
Xfce 桌面带来了自己的原生应用集。在此版本中,所有应用都从 4.16 升级到 4.17 版本。
值得注意的变化包括Xfce 面板获得了对任务列表插件的中键单击支持和托盘时钟中的二进制时间模式。PulseAudio 插件引入了一个新的录音指示器,可以过滤掉多个按钮按下事件。
Thunar 文件管理器获得了大量的底层功能和错误修复。如果你将 Thunar 4.16 与 Thunar 4.17 进行比较,它是变化巨大的。更改包括更新的上下文菜单、路径栏、搜索、导航等。你可以在 [此处][4] 阅读 Thunar 的所有更改日志。
此外,截屏应用 ScreenShooter 默认支持 WebP。蓝牙管理器 Blueman 在系统托盘新增配置文件切换器,并更新了 Catfish 文件搜索工具。
这是 Xfce 应用版本的更新列表和指向其更改日志的链接(如果你想进一步挖掘)。
* Appfinder [4.17.0][5]
* Catfish [4.16.4][6]
* Mousepad [0.5.10][7]
* Panel [4.17.3][8]
* PulseAudio 插件 [0.4.4][9]
* Ristretto [0.12.3][10]
* Screenshooter [1.9.11][11]
* Task Manager [1.5.4][12]
* Terminal [1.0.4][13]
* Thunar [4.17.9][14]
#### 外观和感觉
默认的 elementary-xfce 图标集(浅色和深色)得到了更新,带有额外的精美图标,让你的 Xfce 桌面焕然一新。默认的 Greybird GTK 主题对窗口装饰进行了必要的改进,并添加了 Openbox 支持。
你可能会注意到的重要且可见的变化之一是 `ALT+TAB` 外观。图标更大一些,眼睛更舒适,可以在深色背景下更快地切换窗口。
![在 Xubuntu 22.10 的 elementary-xfce 中更新的图标集示例][15]
![ALT TAB 有更大的图标][16]
上述更改使默认应用与其所基于的 [Ubuntu 22.10 版本][17] 保持一致。以下是 Xubuntu 22.10 中的更改概括。
### 概括
* Linux 内核 5.19,基于 Ubuntu 22.10
* Xfce 桌面版 4.17
* 原生应用全部更新到 4.17
* 核心与 GNOME 43、libadwaita、GTK4 保持一致
* MATE 应用程序升级到 1.26
* Mozilla Firefox 网页浏览器 105.0
* Thunderbird 邮件客户端 102.3
* LibreOffice 7.4.4.2
### 总结
Xfce 桌面最关键的整体变化将在 4.18 版本中到来。例如,最初的 Wayland 支持、更新的 glib 和 GTK 包。如果一切顺利,你可以在明年 4 月发布的 Xubuntu 中期待这些最好的变化。
最后,如果你想试用,可以从 [这个页面][18] 下载 Beta 镜像。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/xubuntu-22-10-features/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/Xubuntu-22.10-Desktop-1024x563.jpg
[2]: https://debugpointnews.com/xfce-4-18-announcement/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/10/Three-different-window-decorations-together-in-Xubuntu-22.10.jpg
[4]: https://gitlab.xfce.org/xfce/thunar/-/blob/master/NEWS
[5]: https://gitlab.xfce.org/xfce/xfce4-appfinder/-/blob/master/NEWS
[6]: https://gitlab.xfce.org/apps/catfish/-/blob/master/NEWS
[7]: https://gitlab.xfce.org/apps/mousepad/-/blob/master/NEWS
[8]: https://gitlab.xfce.org/xfce/xfce4-panel/-/blob/master/NEWS
[9]: https://gitlab.xfce.org/panel-plugins/xfce4-pulseaudio-plugin/-/blob/master/NEWS
[10]: https://gitlab.xfce.org/apps/ristretto/-/blob/master/NEWS
[11]: https://gitlab.xfce.org/apps/xfce4-screenshooter/-/blob/master/NEWS
[12]: https://gitlab.xfce.org/apps/xfce4-taskmanager/-/blob/master/NEWS
[13]: https://gitlab.xfce.org/apps/xfce4-terminal/-/blob/master/NEWS
[14]: https://gitlab.xfce.org/xfce/thunar/-/blob/master/NEWS
[15]: https://www.debugpoint.com/wp-content/uploads/2022/10/Refreshed-icon-set-sample-in-elementary-xfce-with-Xubuntu-22.10.jpg
[16]: https://www.debugpoint.com/wp-content/uploads/2022/10/ALT-TAB-is-refreshed-with-larger-icons.jpg
[17]: https://www.debugpoint.com/ubuntu-22-10/
[18]: https://cdimage.ubuntu.com/xubuntu/releases/kinetic/beta/

View File

@ -0,0 +1,80 @@
[#]: subject: "Easiest Way to Open Files as Root in GNOME Files"
[#]: via: "https://www.debugpoint.com/gnome-files-root-access/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15168-1.html"
在 GNOME 文件应用中以 Root 身份打开文件的最简单方法
======
> 这是在 GNOME 文件应用中以 root 身份访问文件或目录的最简单方法。
![][1]
在 Windows 中,你通常可以在右键单击上下文菜单中以“以管理员身份打开”的方式打开文件或文件夹。
该功能是文件管理器的一部分,对于 Windows它是 Windows 资源管理器的一部分。但是,它是由操作系统及其权限控制模块执行的。
在 Linux 发行版及其文件管理器中,情况略有不同。不同的桌面有自己的处理方式。
由于以管理员root身份修改文件和文件夹是有风险的并且可能导致系统损坏因此用户无法通过文件管理器的 GUI 轻松使用该功能。
例如KDE Plasma 的默认文件管理器Dolphin最近 [添加了此功能][2],因此当需要 root 权限时,它会通过 PolicyKit KDE Agentpolkit窗口询问你如下所示。
![使用 Polkit 实现 KIO 后的 Dolphin root 访问权限][3]
而不是反过来。也就是说,当你想在文件管理器中以 root 权限打开或执行某些东西时,你不能用 `sudo dolphin` 直接以 root 权限运行整个文件管理器。
在某种程度上,它挽救了许多不可预见的情况。但是高级用户总是可以通过终端使用 `sudo` 来完成他们的工作。
### GNOME 文件应用Nautilus和对文件、目录的 root 访问权限
话虽如此,[GNOME 文件应用][4](又名 Nautilus有一种方法可以通过 root 打开文件和文件夹。
以下是方法:
* 打开 GNOME 文件应用Nautilus
* 然后单击左侧窗格中的“<ruby>其他位置<rt>Other Locations</rt></ruby>”。
* 按 `CTRL+L` 调出地址栏。
* 在地址栏中,输入下面的内容并回车。
```
admin:///
```
* 它会要求输入管理员密码。当你成功验证,你就会以管理员身份打开系统。
* 现在,从这里开始,无论你做什么,它都是管理员或 root。
![以管理员身份输入位置地址][5]
![输入管理员密码][6]
![以 root 身份打开 GNOME 文件应用][7]
但是,与往常一样,请小心你作为管理员所做的事情。在你以 root 身份验证自己之后,通常很容易忘记。
这些选项不容易被发现是有原因的:为了防止你和许多 Linux 新手用户破坏自己的系统。
祝好。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/gnome-files-root-access/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/nauroot-1024x576.jpg
[2]: https://www.debugpoint.com/dolphin-root-access/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/02/Dolphin-root-access-after-KIO-with-Polkit-implementation.jpg
[4]: https://wiki.gnome.org/Apps/Files
[5]: https://www.debugpoint.com/wp-content/uploads/2022/10/Enter-the-location-address-as-admin.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/10/Give-admin-password.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/10/Opening-GNOME-Files-as-root.jpg

View File

@ -0,0 +1,149 @@
[#]: subject: "How to Enable Snap Support in Arch Linux"
[#]: via: "https://itsfoss.com/install-snap-arch-linux/"
[#]: author: "Pranav Krishna https://itsfoss.com/author/pranav/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15160-1.html"
如何在 Arch Linux 中启用 Snap 支持
======
![](https://img.linux.net.cn/data/attachment/album/202210/21/100128gzzqkf3fcg3f6q3n.jpg)
Snap 是由 Ubuntu 的母公司 Canonical 设计的通用包格式。有些人不喜欢 Snap但它有一些优势。
通常,某些应用仅以 Snap 格式提供。这为你提供了在 Arch Linux 中启用 Snap 的充分理由。
我知道 AUR 拥有大量应用,但 Snap 应用通常直接来自开发人员。
如果你希望能够在 Arch Linux 中安装 Snap 应用,你需要先启用 Snap 支持。
有两种方法可以做到:
* 使用 AUR 助手启用 Snap 支持(更简单)
* 通过从 AUR 获取包,手动启用 Snap 支持
让我们看看怎么做。
### 方法 1、使用 AUR 助手启用 Snap
Snap 支持在 Arch 用户仓库中以 `snapd` 包的形式提供。你可以使用 AUR 助手轻松安装它。
有 [许多 AUR 助手][1],但 `yay` 是我更喜欢的,因为它的语法类似于 [pacman 命令][2]。
如果你还没有安装 AUR 助手,请使用以下命令安装 `yay`(需要事先安装 `git`
```
git clone https://aur.archlinux.org/yay
cd yay
makepkg -si
```
![安装 yay][3]
现在 `yay` 已安装,你可以通过以下方式安装 `snapd`
```
yay -Sy snapd
```
![使用 yay 从 AUR 安装 snapd][4]
每当你 [更新 Arch Linux][5] 系统时,`yay` 都会启用 `snapd` 的自动更新。
#### 验证 Snap 支持是否有效
要测试 Snap 支持是否正常工作,请安装并运行 `hello-world` Snap 包。
```
sudo snap install hello-world
hello-world
(或者)
sudo snap run hello-world
```
![hello-world Snap 包执行][6]
如果它运行良好,那么你可以轻松安装其他 Snap 包。
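例如,你可以像这样安装一个常见的 Snap 应用(示例,以后文提到的 Spotify 为例):
```
# 示例:安装 Spotify 的 Snap 包
sudo snap install spotify
```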
### 方法 2、从 AUR 手动构建 snapd 包
如果你不想使用 AUR 助手,你仍然可以从 AUR 获取 `snapd`。让我展示详细的过程。
你需要先安装一些构建工具。
```
sudo pacman -Sy git go go-tools python-docutils
```
![为 Snap 安装依赖项][7]
完成依赖项安装后,现在可以克隆 `snapd` 的 AUR 目录,如下所示:
```
git clone https://aur.archlinux.org/snapd
cd snapd
```
![克隆仓库][8]
然后构建 `snapd` 包:
```
makepkg -si
```
当它要求安装其他依赖包时输入 `yes`
![手动构建 snapd][9]
你已安装 `snapd` 守护程序。但是,需要启用它以在启动时自动启动。
```
sudo systemctl enable snapd --now
sudo systemctl enable snapd.apparmor --now #start snap applications
sudo ln -s /var/lib/snapd/snap /snap #optional: classic snap support
```
![启动时启用 Snap][10]
手动构建包的主要缺点是每次新更新启动时你都必须手动构建。使用 AUR 助手为我们解决了这个问题。
### 总结
我更喜欢 Arch Linux 中的 pacman 和 AUR。很少能看到不在 AUR 中但以其他格式提供的应用。尽管如此,在某些你希望直接从源获取它的情况下,使用 Snap 可能是有利的,例如 [在 Arch 上安装 Spotify][11]。
希望本教程对你有所帮助。如果你有任何问题,请告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-snap-arch-linux/
作者:[Pranav Krishna][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pranav/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/best-aur-helpers/
[2]: https://itsfoss.com/pacman-command/
[3]: https://itsfoss.com/wp-content/uploads/2022/10/yay-makepkg.png
[4]: https://itsfoss.com/wp-content/uploads/2022/10/yay-install-snapd.png
[5]: https://itsfoss.com/update-arch-linux/
[6]: https://itsfoss.com/wp-content/uploads/2022/10/snap-hello-world-1.png
[7]: https://itsfoss.com/wp-content/uploads/2022/10/snapd-manual-install-dependencies.png
[8]: https://itsfoss.com/wp-content/uploads/2022/10/snapd-manual-install-clone.png
[9]: https://itsfoss.com/wp-content/uploads/2022/10/snapd-manual-install-makepkg-800x460.png
[10]: https://itsfoss.com/wp-content/uploads/2022/10/enable-snapd-startup-2.png
[11]: https://itsfoss.com/install-spotify-arch/

View File

@ -0,0 +1,106 @@
[#]: subject: "VirtualBox 7.0 Releases With Secure Boot and Full VM Encryption Support"
[#]: via: "https://news.itsfoss.com/virtualbox-7-0-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15141-1.html"
VirtualBox 7.0 发布,支持安全启动和全加密虚拟机
======
> VirtualBox 7.0 是自其上次大版本更新以来的一次重大升级。有一些不错的进步!
![伴随着 VirtualBox 7.0 的发布,支持安全启动和全加密虚拟机][1]
对 VirtualBox 来说,这是一次大的升级。这个版本值得关注,因为我们在最近几年没有看到过它的大版本更新。
对于那些不熟悉 VirtualBox 的人来说,它是一个由 [甲骨文公司][2] 开发的虚拟化软件。
随着 VirtualBox 7.0 的推出,增加了许多新功能。
让我们来看看其中最关键的一些。
### VirtualBox 7.0 的新内容
![virtualbox 7.0][3]
VirtualBox 7.0 是一次有益的升级。有图标更新、主题改进,以及一些关键的亮点,包括:
* 一个显示运行中的<ruby>客体<rt>Guest</rt></ruby>的性能统计的新工具。
* 支持安全启动。
* 支持<ruby>全加密虚拟机<rt>Full VM Encryption</rt></ruby>(通过 CLI
* 重新设计的新建虚拟机向导。
#### 通过命令行管理的全加密虚拟机
虚拟机VM现在可以完全加密了但只能通过命令行界面。
这也包括加密的配置日志和暂存状态。
截至目前,用户只能通过命令行界面对机器进行加密,未来将增加不同的方式。
#### 新的资源监控工具
![VirtualBox 7.0 的资源监控][4]
新的实用程序可以让你监控性能统计,如 CPU、内存使用、磁盘 I/O 等。它将列出所有正在运行的客体的性能统计。
这不是最吸引人的补充,但很有用。
#### 改进的主题支持
对主题的支持在所有平台上都得到了改进。在 Linux 和 macOS 上使用原生引擎,而在 Windows 上,有一个单独的实现。
#### 对安全启动的支持
VirtualBox 现在支持安全启动,增强了对恶意软件、病毒和间谍软件的安全性。
它还将防止虚拟机使用损坏的驱动程序启动,这对企业应用非常重要。
使用那些需要安全启动才能运行的操作系统的用户现在应该能够轻松创建虚拟机了。
#### 其他变化
VirtualBox 7.0 是一次重大的升级。因此,有几个功能的增加和全面的完善。
例如,新建虚拟机向导现在已经重新设计,以整合无人值守的客体操作系统安装。
![virtualbox 7.0 无人值守的发行版安装][5]
其他改进包括:
* 云端虚拟机现在可以被添加到 VirtualBox并作为本地虚拟机进行控制。
* VirtualBox 的图标在此版本中得到了更新。
* 引入了一个新的 3D 栈,支持 DirectX 11。它使用 [DXVK][6] 来为非 Windows 主机提供同样的支持。
* 支持虚拟 TPM 1.2/2.0。
* 改进了多显示器设置中的鼠标处理。
* Vorbis 是音频录制的默认音频编解码器。
你可以查看 [发行说明][7] 以了解更多信息。
如果你正在寻找增强的功能如更好的主题支持、加密功能、安全启动支持和类似的功能添加VirtualBox 7.0 是一个不错的升级。
💬 *你对这次升级有什么看法?你会使用较新的版本还是暂时坚持使用旧版本的虚拟机?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/virtualbox-7-0-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/virtualbox-7-0-release.jpg
[2]: https://www.oracle.com/in/
[3]: https://news.itsfoss.com/content/images/2022/10/VirtualBox_7.0.png
[4]: https://news.itsfoss.com/content/images/2022/10/VirtualBox_7.0_Resource_Monitor.png
[5]: https://news.itsfoss.com/content/images/2022/10/VirtualBox_7.0_Unattended_Guest_Install.png
[6]: https://github.com/doitsujin/dxvk
[7]: https://www.virtualbox.org/wiki/Changelog-7.0

View File

@ -0,0 +1,132 @@
[#]: subject: "First Look at LURE! Bringing AUR to All Linux Distros"
[#]: via: "https://news.itsfoss.com/lure-aur/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15151-1.html"
LURE 初窥!将 AUR 带入所有 Linux 发行版
======
> LURE 是一个新的开源项目,它希望成为所有发行版的 AUR。
![LURE 是一个新的开源项目,它希望成为所有发行版的 AUR][1]
AUR<ruby>Arch 用户仓库<rt>Arch User Repository</rt></ruby>)是一个由社区驱动的基于 Arch 的 Linux 的发行版仓库。
**简而言之:** 它可以帮助你安装官方仓库中没有的软件包,并让你获得最新的版本。
我发现它对我在 [Manjaro Linux][2] 上的体验很有帮助。
从技术上讲AUR 从源头构建一个软件包,然后利用软件包管理器(`pacman`)来安装它。
你也可以在我们的详细指南中探索更多关于它的信息。
> **[什么是 AUR 如何在 Arch 和 Manjaro Linux 中使用 AUR][3]**
📢 现在你对 AUR 有了一个基本的了解,有一个 **新的开源项目** 旨在将 AUR 的功能带到所有的发行版中。
这个项目被称为 “<ruby>Linux 用户仓库<rt>Linux User REpository</rt></ruby>LURE
> 💡 LURE 项目正处于 alpha 阶段,由创建者在几周前宣布。所以,它完全是一个正在进行的工作。
### 已经有这样的项目了?
![lure 添加仓库][5]
**没有。**
开发者们已经尝试做一个 AUR 的替代品,但是是针对特定的发行版。就像 [makedeb 软件包仓库][6] 是针对 Debian 的。
LURE 是一个雄心勃勃的想法,可以在你选择的任何发行版上工作。
它试图成为一个帮助你使用类似于 `PKGBUILD` 的脚本为你的发行版创建原生软件包的工具。
> **[创建 PKGBUILD 为 Arch Linux 制作软件包][7]**
开发者在 [Reddit 公告帖子][9] 中提到了一些技术细节:
> 我的项目叫 LURE是 “Linux 用户仓库”的简称。它构建原生软件包,然后使用系统软件包管理器安装它们,就像 AUR 一样。它使用一个类似于 AUR 的 `PKGBUILD` 的构建脚本来构建软件包。
>
> 它是用纯 Go 语言编写的,这意味着它在构建后没有任何依赖性,除了一些特权提升命令(`sudo``doas` 等等)和任何一个支持的软件包管理器,目前支持 `pacman`、`apt`、`apk`Alpine Linux 上,不是安卓)、`dnf`、`yum` 和 `zypper`
**听起来很棒!**
> **[LURE 项目Repo][10]**
你也可以在它的 [GitHub 镜像][11] 上探索更多信息。
### 使用 LURE
你不必安装一个额外的软件包管理器来使它工作,它可以自动与你系统的软件包管理器一起工作。
因此,如果它在其仓库(或任何其添加的仓库)中没有找到一个包,它就会转到系统的默认仓库,并从那里安装它。就像我用 `lure` 命令在我的系统上安装/移除 `neofetch` 一样。
![lure neofetch remove][12]
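根据后文展示的 `lure in` 用法,安装命令大致如下(这是一个假设的示例,移除子命令请以官方文档为准):
```
# 假设的示例:从 LURE 仓库或系统仓库安装 neofetch
lure in neofetch
```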
虽然该项目处于早期开发阶段,但它为各种发行版提供了 [二进制包][13],以让你安装和测试它们。
![][14]
目前,它的仓库包括一个来自创建者自己的项目。但你可以尝试添加一个仓库并构建/安装东西。
为了方便起见,我试着在它的仓库中安装软件包。
![][15]
命令看起来像这样:
```
lure in itd-bin
```
在它的 [官方文档页面][16],你可以读到更多关于它在构建/安装/添加存储库方面的用法。
未来版本的一些计划中的功能包括:
* 自动安装脚本
* 基于 Docker 的自动测试工具
* 仓库的网页接口
### 让它变得更好
嗯,首先,这是一个优秀的项目。如果你是过去使用过 Arch 的人,或者想离开 Arch Linux这将是一个很好的工具。
然而,对于大多数终端用户和非 Arch Linux 新手来说,像 [Pamac GUI 软件包管理器][17] 这样的软件包管理器支持 LURE 应该是锦上添花的。
当然,在目前的阶段,它需要开源贡献者的支持。所以,如果你喜欢这个想法,请随时为该项目贡献改进意见。
*💭 你对 LURE 有什么看法?请在下面的评论中分享你的想法!*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/lure-aur/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/LURE-aur-for-all-linux-distros.jpg
[2]: https://news.itsfoss.com/manjaro-linux-experience/
[3]: https://itsfoss.com/aur-arch-linux/
[4]: https://itsfoss.com/aur-arch-linux/
[5]: https://news.itsfoss.com/content/images/2022/10/lure-repos.png
[6]: https://mpr.makedeb.org
[7]: https://itsfoss.com/create-pkgbuild/
[8]: https://itsfoss.com/create-pkgbuild/
[9]: https://www.reddit.com/r/linux/comments/xq09nf/lure_aur_on_nonarch_distros/
[10]: https://gitea.arsenm.dev/Arsen6331/lure
[11]: https://github.com/Arsen6331/lure
[12]: https://news.itsfoss.com/content/images/2022/10/lure-neofetch-rm.png
[13]: https://gitea.arsenm.dev/Arsen6331/lure/releases/tag/v0.0.2
[14]: https://news.itsfoss.com/content/images/2022/10/lure-binaries.jpg
[15]: https://news.itsfoss.com/content/images/2022/10/lure-test.png
[16]: https://github.com/Arsen6331/lure/blob/master/docs/usage.md
[17]: https://itsfoss.com/install-pamac-arch-linux/

View File

@ -0,0 +1,109 @@
[#]: subject: "Notion-like Markdown Note-Taking App 'Obsidian' is Out of Beta"
[#]: via: "https://news.itsfoss.com/obsidian-1-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15147-1.html"
类似 Notion 的 Markdown 笔记应用黑曜石结束测试
======
> 黑曜石 1.0 做了重新设计,带来了有价值的新功能。
![类似于 Notion 的 Markdown 笔记软件 Obsidian 结束了测试][1]
<ruby>黑曜石<rt>Obsidian</rt></ruby> 是一款强大的笔记应用,可以用来制作知识图谱,同时还提供 [Notion][2] 类似的功能。
在 1.0 更新之前,我们已经有一篇关于它的 [详细文章][3]。
黑曜石 1.0 的发布标志着该应用在桌面和移动体验方面的发展迈出了关键一步。
> 💡 黑曜石不是一个开源的应用程序,但可以在 Linux 上使用。
让我们来看看它的桌面版提供的新功能。
### 🆕 黑曜石 1.0 的新功能
![黑曜石 1.0][5]
1.0 版本增加了大量的新功能、主要的视觉变化和错误修复,其中一些亮点包括:
* 改良的用户界面
* 新的外观设置
* 带有标签堆叠的标签功能
* 大修的主题画廊
* 各种错误的修复
#### 🎨 用户界面的改造
![黑曜石 1.0 用户界面][8]
用户界面已经得到了全面改造,这使得用户体验更加直观和强大。
![黑曜石用户界面][9]
除此之外,黑曜石现在还有一个专门的外观设置部分。它包含了切换显示内联标题、标签标题栏、改变重点颜色等选项的设置。
![黑曜石 1.0 外观设置][10]
#### 带有标签堆叠的标签功能
![黑曜石 1.0 的标签][11]
现在你可以在黑曜石中打开多个标签,并使用热键来帮助你在忙碌的一天中完成多个任务。
一个额外的好处是,即使你退出黑曜石,它也会记住你曾经打开的标签和你在这些标签中的活动状态。
![黑曜石 1.0 的标签堆叠][12]
标签也可以分组形成堆叠,并在它们之间进行切换,从而使工作空间变得更加整齐。
#### 🛠️ 其他变化
其他值得一提的变化包括:
* 改变缩放级别的能力
* 改进了黑曜石的同步
* 内存泄漏修复
* 用于折叠行的折叠命令
你可以通过 [发布说明][13],以及 [发布公告][14] 来了解更多的细节。
### 📥 下载黑曜石 1.0
你可以到 [官方网站][17] 下载黑曜石 1.0。在手机上,也可以在谷歌应用商店和苹果的应用商店上找到它。
为 Linux 用户提供了三个软件包:**AppImage、Flatpak 和 Snap**。
开发者还澄清说,他们不会向个人用户收取黑曜石的使用费。
*💬 你对黑曜石 1.0 有什么看法?一个值得替代其他记事本的应用程序吗?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/obsidian-1-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/obsidian-1.png
[2]: https://notion.grsm.io/itsfoss
[3]: https://linux.cn/article-14230-1.html
[5]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0.png
[8]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0_User_Interface.png
[9]: https://news.itsfoss.com/content/images/2022/10/obisidian-1-ui.png
[10]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0_Appearance_Settings.png
[11]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0_Tabs.png
[12]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0_Tab_Stacks.gif
[13]: https://forum.obsidian.md/t/obsidian-release-v1-0-0/44873
[14]: https://obsidian.md/1.0
[17]: https://obsidian.md/download
[18]: https://www.youtube-nocookie.com/embed/Ia2CaItxTEk

View File

@ -0,0 +1,111 @@
[#]: subject: "Open source DevOps tools in a platform future"
[#]: via: "https://opensource.com/article/22/10/open-source-devops-tools"
[#]: author: "Will Kelly https://opensource.com/users/willkelly"
[#]: collector: "lkxed"
[#]: translator: "lxbwolf"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15170-1.html"
开源 DevOps 工具的平台化未来
======
![](https://img.linux.net.cn/data/attachment/album/202210/24/092748lwwoicus5e4s59gg.jpg)
> 当商业 DevOps 工具市场着眼于平台时,是时候让开源 DevOps 工具重新定义它们的未来了。
DevOps 的开源根基是无法动摇的,即便有预言称全球的 DevOps 市场将在 2026 年之前达到 178 亿美元。不断变化的工作环境、安全和合规性问题,以及风险投资公司等等因素正在将市场推向 DevOps 平台,开发团队可以在云中获得完整的端到端 DevOps 工具链。
### 开源 DevOps 工具现状
我们要搞清楚一件事:开源工具不可能从 DevOps 世界中消失。现在,在开源和供应商提供的 DevOps 工具之间存在着一种平衡,开发人员会在两者间选择适合他们的工具。事实上,很多情况下,一个开发团队起初会为他们的 DevOps 流水线选择一个开源工具,后来又升级到商业版本。
### 三种开源 DevOps 工具实例
下面我们介绍一些开源 DevOps 工具的例子,每种工具都已经有了围绕其建立的商业化生态。
#### Git
源代码管理工具 [Git][1] 作为源代码库,可能是 DevOps 工具链的主要基础之一。
Git 的两个最佳商业案例是 GitLab 和 GitHub。GitLab [接受开发者对其贡献开源项目][2]。GitHub 也在着手努力成为一个 DevOps 平台,推出了人工智能版的结对编程 GitHub Copilot在推出后受到了一些开源团体的褒贬不一的评价。
#### Jenkins
作为一个开源的自动化服务Jenkins 因其易于安装、配置和可扩展性而受到推崇。
CloudBees 提供了 JenkinsXJenkinsX 是一套开源的解决方案,可以为 Kubernetes 上的云原生应用提供自动化持续集成和持续交付CI/CD以及自动化测试工具。他们还为JenkinsX 提供商业支持,包括:
- 访问 CloudBees 的专业技术技能
- 24x7 技术支持
- 访问 CloudBees 的文档和在线知识库
#### Kubernetes
随着越来越多的组织寻求企业级的容器编排解决方案,[Kubernetes][3] 的发展成为必然。尽管有人批评其复杂性。
自然而然的Kubernetes 周边有完整的、蓬勃发展的产业。根据 Allied 市场调研的数据,全球容器和 [Kubernetes 安全][4] 市场在 2020 年的估值为 7.14 亿美元,预计到 2030 年将达到 8.42 亿美元。
### 目前的 DevOps 工具链
各个行业仍有很多<ruby>自建<rt>build-your-own</rt></ruby>BYO的 CI/CD 工具链在发挥作用。支持 DevOps 功能的开源项目仍在蓬勃发展。
BYO 工具链可以集成其他工具,而且非常具有扩展性,这对于持续迭代其 DevOps 实践的组织来说一直是一个优势。在出于业务、IT 和安全原因寻求标准化的企业中,缺乏标准的材料清单可能是个麻烦。
虽然 DevOps 平台的出现并没有被忽视,但许多组织早在大流行之前就将他们的 CI/CD 工具链迁移到了公有云。长期以来工具链本身的安全性一直是一个不断上升的问题而公有云基础设施提供了身份访问管理IAM和其他安全功能来控制访问。
### DevOps 平台是敌是友?
DevOps 平台是一个端到端的解决方案,它将 CI/CD 工具链的所有功能放入云中。DevOps 平台的例子包括 GitLab 和 Harness。GitHub 也在采取行动,使自己成为一个 DevOps 平台。
#### 优势(即便只从企业买家角度考虑)
DevOps 平台对那些已经适应了 SaaS 和云计算行业的基于消费和订阅的定价的企业买家很有吸引力。在这个远程和混合工作的世界里,对可维护性、安全、合规性和开发人员的生产力的担忧肯定是技术领导者的首要考虑。对这些人来说,在 DevOps 平台上实现标准化是很有吸引力的。
#### 劣势
在依赖供应商提供的 DevOps 工具链时,人们会想到对供应商锁定功能的古老担忧。开发团队构建和维护其工具链的可扩展性不会像他们从头开始制作工具链时那样,更不用说引入新的工具来改善他们的工作流程了。
DevOps 平台供应商也有潜在的经济方面的劣势。想一想,一个被高估的 DevOps 工具初创公司如果没有达到其投资者的高额财务目标,可能会发生什么。同样,也可能有一些较小的初创供应商得不到下一轮的资金,而慢慢消失。
虽然 DevOps 平台的出现在很多方面都是有意义的,但它确实违背了促成我们今天使用的 DevOps 工具的开源精神。
### DevOps 工具:一个拐点
随着工作模式的改变,人们对 DevOps 工具链的安全和合规性的关注必然会增加。
#### 正在变化的工作环境
我们的工作方式与企业其他部门一样影响着 DevOps 团队。远程和混合 DevOps 团队需要安全的工具链。整个流水线中不断变化的协作和报告要求,如异步工作和经理要求返回办公室等,也是日益增长的必要条件。
#### 软件供应链安全市场
在高调的攻击和美国联邦政府的回应之后,软件供应链安全市场引起了很多关注。目前还没有组织将软件供应链的攻击归咎于开源,但我们将看到 DevOps/DevSecOps 实践和工具的延伸以对抗这种威胁。不过,当这一切尘埃落定时DevOps/DevSecOps 的工具和实践会比一些追逐这一趋势的初创公司存在得更久。
### 结语
对于 DevOps 领域的开源软件OSS项目来说这还远远没有结束但 DevOps 利益相关者有权开始询问未来的工具链。然而OSS DevOps 项目确实需要考虑它们的未来,特别是考虑到日益增长的直接影响流水线的安全和合规性问题。
DevOps 平台供应商与开源工具的未来趋势是合作性竞争,即 DevOps 平台供应商向作为其平台基础的开源工具贡献时间、金钱和资源。一个有趣的例子就是 [OpsVerse][5],它用他们为客户管理的开源工具提供了一个 DevOps 平台。
然后,还有一个未来,随着更多的企业构建的工具链迁移到云端,开源 DevOps 工具项目将继续繁荣和创新。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/open-source-devops-tools
作者:[Will Kelly][a]
选题:[lkxed][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/willkelly
[b]: https://github.com/lkxed
[1]: https://opensource.com/article/22/4/our-favorite-git-commands
[2]: https://opensource.com/article/19/9/how-contribute-gitlab
[3]: https://opensource.com/resources/what-is-kubernetes
[4]: https://enterprisersproject.com/article/2019/1/kubernetes-security-4-tips-manage-risks?intcmp=7013a000002qLH8AAM
[5]: https://www.opsverse.io/
[6]: https://www.redhat.com/architect/devsecops-culture?intcmp=7013a000002qLH8AAM

View File

@ -0,0 +1,38 @@
[#]: subject: "GitHub Copilot Appears To Be In Violation Of The Open Source Licence"
[#]: via: "https://www.opensourceforu.com/2022/10/github-copilot-appears-to-be-in-violation-of-the-open-source-licence/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15167-1.html"
GitHub Copilot 似乎违反了开源许可证的规定
======
![](https://img.linux.net.cn/data/attachment/album/202210/23/100112lms67c7e8mow8sv6.jpg)
> 自 Copilot 首次亮相以来Butterick 就对该计划提出了批评。
微软在 2018 年支付 75 亿美元收购了 GitHub此后将这个代码仓库整合到其开发者工具中同时在很大程度上采取了放手的态度。Matthew Butterick 是一名作家、律师,也是一名程序员,他认为微软基于机器学习的代码助手 GitHub Copilot 存在一些问题,它似乎不正确地对待开源代码许可证。
GitHub Copilot 是 Visual Studio 和其他 IDE 的一个插件,通过在你输入时提供代码完成的 “建议” 来运作。Codex 是该系统的动力源。然而Butterick 等开发者认为 AI 在如何学习方面存在问题或者更具体地说AI 是从哪里训练的。
这里的问题是GitHub 所训练的公开代码仓库是有许可证的,当他们的工作被利用时,需要按照许可证进行。虽然微软对其使用代码的问题一直避而不谈,称其为合理使用,但 Copilot 除了提供建议外,还能生成逐字逐句的代码部分。
根据 Codex 的开发者 OpenAI它将 Codex 授权给了微软)的说法,“Codex 是在数以千万计的公开代码仓库中训练出来的,包括 GitHub 上的代码。”微软自己也含糊地将训练材料描述为数十亿行的公共代码。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/10/github-copilot-appears-to-be-in-violation-of-the-open-source-licence/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/10/github-logo-2-1-696x348.png

View File

@ -0,0 +1,147 @@
[#]: subject: "Ubuntu 22.10 Is Here!"
[#]: via: "https://news.itsfoss.com/ubuntu-22-10-release/"
[#]: author: "Jacob Crume https://news.itsfoss.com/author/jacob/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15171-1.html"
Ubuntu 22.10 的新变化
======
> Ubuntu 22.10 是一个令人印象深刻的版本,它拥有新的快速切换功能、应用程序散布,以及更多。
![](https://img.linux.net.cn/data/attachment/album/202210/24/101545l1jae18e1881ee19.jpg)
Ubuntu 22.10 “<ruby>充满活力的捻角羚<rt>Kinetic Kudu</rt></ruby>”来了。它带来了许多重大的改进,特别是 Linux 内核 5.19 和 GNOME 43 的体验。
当然,这是一个经过定制的 GNOME 体验,现有的 Ubuntu 用户会很熟悉。
那么Ubuntu 22.10 都有哪些新变化呢?让我们一起来看看。
### Ubuntu 22.10 的新变化
![][2]
除了 GNOME 43 之外Ubuntu 22.10 还包括:
- 一个改进的 <ruby>设置<rt>Settings</rt></ruby> 应用程序。
- 经典的 Unity 应用程序<ruby>散布模式<rt>Spread Mode</rt></ruby>的复活。
- 一些著名的应用程序被移植到 GTK4 和 Libadwaita。
- 全系统支持 WebP。
- PipeWire 采样管默认的音频服务器。
- Linux 内核 5.19。
> 💡 Ubuntu 22.10 将被支持九个月,直到 **2023 年 7 月**。如果你想要稳定而不是功能,你应该更愿意使用 [LTS 版本][3]。
#### 🎨 视觉改进
当第一次测试一个新版本时,视觉上的变化总是最先被注意到的。在 Ubuntu 22.10 中尤其如此,这要归功于 [GNOME 43][4] 所带来的重大改进。
![Ubuntu 22.10 快速设置][5]
首先我们有全新的快速设置菜单它取代了旧的、相当笨拙的系统菜单。与安卓、iOS 和 Windows 11 中的菜单类似,这个新增的菜单允许你打开和关闭 Wi-Fi 和蓝牙,所有这些都无需进入<ruby>设置<rt>Settings</rt></ruby>应用程序。
当然,在这个版本中我们还得到了全新的壁纸。
![Ubuntu 22.10 壁纸][6]
对于这个变化,我除了喜欢并希望从社区中看到更多这样的设计之外,没有什么别的可说的。
此外,更多的应用程序被移植到了 GTK4包括对 Nautilus 文件管理器的改进。
一些有价值的新增功能包括:
- 拖动并选择文件的能力(橡皮筋选择)。
- 自适应视图与一个紧凑的窗口。
- 新的文件上下文菜单。
你可以通过我们的详细报道来探索 Nautilus 的改进。
#### 👴 应用程序散布回来了!
我不太喜欢的 GNOME 的一个部分是多窗口应用程序的切换,我相信很多其他用户都有这样的不满。
![Ubuntu 22.04][7]
幸运的是Ubuntu 22.10 现在提供了一个很好的解决方案,对于长期用户来说应该是很熟悉的。终于,在 2017 年放弃对 Unity 的支持五年后Ubuntu 的开发者们又把应用程序<ruby>散布<rt>Spread</rt></ruby>带了回来。
![Ubuntu 22.10应用程序散布][8]
这是一个重大的改进,我很惊讶 GNOME 自己没有这样做。
#### 🛠️ 设置的改进
虽然不是大多数人日常使用的应用程序,但<ruby>系统设置<rt>System Settings</rt></ruby>是 GNOME 体验的一个核心部分。考虑到这一点,看到它接受了一次重大的视觉改造,以及移植到了 GTK 4 和 Libadwaita真是太棒了。
![Ubuntu 22.10 桌面设置][9]
因此,它现在变得更好看了,而且是自适应的,这意味着它在任何尺寸下都能很好地工作,甚至在像 PinePhone 这样的 Linux 手机上也能很好地工作!
另一个与设置有关的变化是增加了一个新的 “<ruby>Ubuntu 桌面设置<rt>Ubuntu Desktop Settings</rt></ruby>”菜单项。这提供了一个单一的、统一的地方来定制和改变你所有的 Ubuntu 特定设置。
#### Linux 内核 5.19
Ubuntu 22.10 还带来了一个更新的内核,即 [Linux 内核 5.19][10]。这个版本的改进相当少,尽管它确实带来了对一些下一代硬件的改进支持。
你应该注意到这是 Linux 5.x 系列的最后一个版本,因为 Linux 内核下一个版本跳到了 6.0。
#### 其他变化
![Ubuntu 22.10 webp][11]
总的来说有几个细微的调整。但其中一些基本的调整包括:
- 图像应用程序默认支持 .WebP 图像格式。
- GNOME 文本编辑器是默认编辑器。你可以安装 gedit 并使其成为默认的。
- GNOME 图书应用程序已经不再可用。Ubuntu 推荐 [Foliate][12] 作为替代。
- 不再默认安装 To Do 应用程序并且它有了一个新的名字“Endeavour”。
- GNOME 终端仍然是默认的终端应用。如果需要,可以安装 [GNOME 控制台][13]。
- 更新了 Firefox 104、[Thunderbird 102][14] 和 [Libreoffice 7.4][15] 等应用程序。
- 更多的应用程序已经被移植到 GTK4特别是 [Nautilus][16]。
> 注意,我们的列表集中在对桌面终端用户重要的变化上。如果你想知道更多关于服务器和其他使用情况的变化/更新,请参考 [官方发布说明][17]。
### 下载 Ubuntu 22.10
你可以从 [Ubuntu 的中央镜像库][18] 或其 [官方网站][19] 下载最新的 ISO。
**官方网站/仓库可能需要一段时间来提供 ISO 的下载。**
> **[下载Ubuntu 22.10][19]**
💬 有兴趣尝试 Ubuntu 22.10 吗?请在评论中告诉我你的想法。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ubuntu-22-10-release/
作者:[Jacob Crume][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/ubuntu-22-10-release.png
[2]: https://news.itsfoss.com/content/images/2022/10/ubuntu-22-10.png
[3]: https://itsfoss.com/long-term-support-lts/
[4]: https://news.itsfoss.com/gnome-43-release/
[5]: https://news.itsfoss.com/content/images/2022/10/ubuntu-22-10-quick-setting.jpg
[6]: https://news.itsfoss.com/content/images/2022/10/22.10-wallpaper.png
[7]: https://news.itsfoss.com/content/images/2022/10/ubuntu-22-04-window-minimize.png
[8]: https://news.itsfoss.com/content/images/2022/10/ubuntu-22-10-app-spread.jpg
[9]: https://news.itsfoss.com/content/images/2022/10/ubuntu-22-10-desktop-setting.png
[10]: https://news.itsfoss.com/linux-kernel-5-19-release/
[11]: https://news.itsfoss.com/content/images/2022/10/ubuntu-22-10-webp.png
[12]: https://itsfoss.com/foliate-ebook-viewer/
[13]: https://itsfoss.com/gnome-console/
[14]: https://news.itsfoss.com/thunderbird-102-release/
[15]: https://news.itsfoss.com/libreoffice-7-4-release/
[16]: https://news.itsfoss.com/gnome-files-43/
[17]: https://discourse.ubuntu.com/t/kinetic-kudu-release-notes/27976
[18]: https://cdimage.ubuntu.com/ubuntu/releases/22.10/release/
[19]: https://ubuntu.com/download/desktop

View File

@ -1,38 +0,0 @@
[#]: subject: "Worlds First Open Source Wi-Fi 7 Access Points Are Now Available"
[#]: via: "https://www.opensourceforu.com/2022/10/worlds-first-open-source-wi-fi-7-access-points-are-now-available/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Worlds First Open Source Wi-Fi 7 Access Points Are Now Available
======
*The first Open source Wi-Fi 7 products in the world will be released by HFCL in conjunction with Qualcomm under its IO product line.*
The worlds first Open source Wi-Fi 7 Access Points will be introduced by HFCL Limited, the top high-tech enterprise and integrated next-gen communication product and solution provider, in collaboration with Qualcomm Technologies, Inc. on October 1, 2022 at the India Mobile Congress in Pragati Maidan, New Delhi.
Based on IEEE 802.11be, a ground-breaking Wi-Fi technology that is intended to give Extremely High Throughput (EHT), increased spectrum efficiency, better interference mitigation, and support for Real Time Applications (RTA), HFCL becomes the first OEM to release Open source Wi-Fi 7 Access Points. In order to provide a better user experience while yet using less power, Wi-Fi 7 uses faster connections with 320MHz and 4kQAM, numerous connections with Multi Link operation, and Adaptive Connections for adaptive interference puncturing.
Wi-Fi 7 promises a significant technological advance above all prior Wi-Fi standards updates, providing a more immersive user experience and paving the way for a more robust digital future. The peak data speeds supported by HFCLs Wi-Fi 7 APs will exceed 10 Gbps, and they will have latency under 2 ms as opposed to the 5 Gbps and 10 ms of existing Wi-Fi 6 products.
Technology providers like telecom operators, Internet service providers, system integrators, and network administrators will be able to offer mission-critical and real-time application services and provide a better user experience than ever before thanks to HFCLs Wi-Fi 7 product line, which is supported by a strong R&D focus. A wide variety of dual band and tri-band indoor and outdoor variations may be found in the new Wi-Fi 7 product portfolio.
Being the first Wi-Fi 7 Access points in the market to embrace Open standards, all Wi-Fi 7 variations will come pre-installed with open source software. With the goal of providing improved global connectivity and maintaining interoperability in multi-vendor scenarios, open standards support disaggregated Wi-Fi systems delivered as free open source software.
The introduction of Wi-Fi 7 will primarily support the countrys upcoming 5G rollout, particularly for enhancing inside coverage. Additionally, the technology will make it easier to construct a variety of apps because to its increased throughput, dependable network performance, and lower latency. The Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications including surveillance, remote industrial automation, AV/VR/XR, and other video-based applications will benefit from Wi-Fi 7 technology for businesses. With numerous developments in Cloud/Edge Computing and Cloud gaming, it will also increase the number of remote offices, real-time collaborations, and online video conferencing.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/10/worlds-first-open-source-wi-fi-7-access-points-are-now-available/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed

View File

@ -0,0 +1,80 @@
[#]: subject: "Ubuntu but rolling but also stable That's what Rhino Linux aims to be"
[#]: via: "https://news.itsfoss.com/rhino-linux/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ubuntu but rolling but also stable! That's what Rhino Linux aims to be
======
A rolling-release Ubuntu distribution? Wait, what? Rhino Linux sounds fascinating..
![Ubuntu but rolling but also stable! That's what Rhino Linux aims to be][1]
Rhino Linux will be the successor of [Rolling Rhino Remix][2]. A Linux distro built by http.llamaz that offered a rolling-release **unofficial** variant of Ubuntu.
To clarify, the project was never aimed to replace other stable distributions and was purely a passion project made for fun.
Considering people started using it as a daily driver and expected more from it, the developer has decided to turn this into a serious project.
Rhino Linux is its next step for it. So, what can you expect?
### Meet Rhino Linux: The Successor
The main goal is to provide a stable Ubuntu experience while still providing a rolling-release model.
The aim remains the same, but the fundamentals for Rhino Linux will receive a complete overhaul. They are potentially making it an impressive rolling-release Ubuntu distribution.
**Sounds exciting! 🤯**
At its core, Rhino Linux will be using a slightly modified version of [XFCE][3] as its desktop environment; it was chosen due to its well-known stability and speed.
The founder of Rhino Linux mentioned the following:
> Ubuntu as a rolling release is still at the very core of our concept. Rhino Linux is not a depature from Rolling Rhino Remix, but rather re-imagines it as the more stable, mature distribution it should have shipped as originally.
![xfce 4.14][4]
Alongside that, [Pacstall][5] will be used as the default package manager on Rhino Linux with one of their repositories.
> 💡Pacstall is an [AUR][6]-inspired package manager for Ubuntu.
The development of which is headed by the founder of Pacstall, [_Plasma_][7]. He has also joined as one of the new developers (Deputy Project Lead), and [Sourajyoti Basak][8] as another core member.
### Moving Forward: Availability and Release
As of writing, Rhino Linux has not received any specific release date, but you can expect it to release sometime in **2023**.
What happens to Rolling Rhino Remix?
The developer clarified that it would continue to be maintained for three months after the release of Rhino Linux. However, it won't have a new release image after its subsequent release on **11.01.2022**.
You can find out more about Rhino Linux by visiting its [official website][9].
_💬 What do you think of Rhino Linux? Can it be a contender for official Ubuntu flavors worth trying?_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/rhino-linux/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/rhino-linux.png
[2]: https://github.com/rollingrhinoremix
[3]: https://www.xfce.org/
[4]: https://news.itsfoss.com/content/images/2022/10/XFCE_4.14.png
[5]: https://github.com/pacstall/pacstall
[6]: https://itsfoss.com/aur-arch-linux/
[7]: https://github.com/Henryws
[8]: https://github.com/wizard-28
[9]: https://rhinolinux.org/

View File

@ -0,0 +1,100 @@
[#]: subject: "Ardour 7.0 Release Marks the end of 32-bit builds; Adds Clip Launching and Apple Silicon Support"
[#]: via: "https://news.itsfoss.com/ardour-7-0-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ardour 7.0 Release Marks the end of 32-bit builds; Adds Clip Launching and Apple Silicon Support
======
Ardour 7.0 is a major upgrade with much-needed improvements.
![Ardour 7.0 Release Marks the end of 32-bit builds; Adds Clip Launching and Apple Silicon Support][1]
[Ardour][2] is a popular open-source digital audio workstation software that audio professionals use.
Ardour 7.0 has been in development for a little more than a year since the release of 6.9 and has come a long way since Ardour 5.0.
Let's take a look at the highlights of this release.
### 🆕 Ardour 7.0: What's New?
![ardour 7.0][3]
This release introduces a variety of changes and feature additions, some of the highlights are:
- **Discontinuation of 32-bit Builds.**
- **Clip Launching feature (similar to what other DAWs provide).**
- **Support for Apple Silicon.**
- **Integration of Loop Library.**
- **Audio sample library support by Freesound.**
#### Clip Launching
![ardour 7.0 clip launching][4]
Similar to many mainstream DAWs like Ableton Live, Ardour now has support for clip launching.
Users can now play/stop audio clips and adjust the timing to fit the tempo map of a session. Pretty useful for live performances.
#### Loop Library
![ardour 7.0 loop library][5]
In addition to Clip Launching, users can also take advantage of a vast collection of loops sourced from a few hand-picked creators of [looperman][6].
Users can choose from any of the royalty-free loops, with more to be added in the near future.
#### End of 32-bit Builds
There won't be any further 32-bit builds of Ardour, only a few will remain available on the nightly build channels for now.
You may want to look around at some of the DAWs available for Linux to see what else might offer a 32-bit build.
#### Sound Samples Available by Freesound
![ardour 7.0 freesound integration][7]
Another feature that complements the Loop Library is the integration with [Freesound][8].
With this, users can make use of over 600,000 samples of audio. To do this, users must have a Freesound account linked to their Ardour installation.
#### 🛠️ Other Changes
Other notable changes include:
- **New Themes.**
- **Ripple Editing Modes.**
- **Mixer Scenes.**
- **Improvements to MIDI Editing.**
- **Support for I/O Plugins.**
You can go through the official [release notes][9] for more details of the release.
_💬 Are you going to try out Ardour 7.0? Satisfied with the DAW you currently use?_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ardour-7-0-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/ardour-7-0-release.jpg
[2]: https://ardour.org/
[3]: https://news.itsfoss.com/content/images/2022/10/Ardour_7.0.png
[4]: https://news.itsfoss.com/content/images/2022/10/Ardour_7.0_Clip_Launching.png
[5]: https://news.itsfoss.com/content/images/2022/10/Ardour_7.0_Loop_Library.png
[6]: https://www.looperman.com/
[7]: https://news.itsfoss.com/content/images/2022/10/Ardour_7.0_Freesound_Integration.png
[8]: https://freesound.org/
[9]: https://ardour.org/whatsnew.html

View File

@ -0,0 +1,117 @@
[#]: subject: "Kubuntu 22.10 is Now Available!"
[#]: via: "https://news.itsfoss.com/kubuntu-22-10-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Kubuntu 22.10 is Now Available!
======
Kubuntu 22.10 may not be the most exciting upgrade. But, it includes useful changes!
![Kubuntu 22.10 is Now Available!][1]
Kubuntu is an official Ubuntu flavor that offers a lot of functionality in a refined KDE-powered package.
The release of Kubuntu 22.10 promises various improvements and a newer version of [KDE Plasma][2].
Let us go through the highlights of this release.
### Kubuntu 22.10: What's New?
![kubuntu 22.10 desktop][3]
Kubuntu 22.10 brings in a lot of updates; some of the important ones that you can expect are:
- **KDE Plasma 5.25**
- **Linux Kernel 5.19**
- **PipeWire**
- **Firefox 104**
- **Qt 5.15.6**
> 💡Kubuntu 22.10 will be supported for nine months until **July 2023**. If you want stability over features, you should prefer using an [LTS version][4].
#### KDE Plasma 5.25
![kubuntu 22.10 kde version][5]
Even though [KDE Plasma 5.26][6] was released recently, Kubuntu 22.10 ships with KDE Plasma 5.25.
However, KDE Plasma 5.25 is still a major update over 5.24, with a lot of improvements such as enhanced support for touchpads/touchscreens, upgrades to the user interface, and more.
You can read our coverage of KDE Plasma 5.25 to learn more:
Also, you can expect KDE Plasma 5.26 to come as a point release instead of being part of the launch of Kubuntu 22.10.
#### PipeWire Default
Like most Ubuntu 22.10-based distros, [PipeWire][7] is the default audio/video handler in this version of Kubuntu.
It replaces [PulseAudio][8], which is known not to play nicely with Ubuntu 22.10.
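If you want to confirm which sound server your installation is actually using, here is a minimal check from a terminal. It assumes the `pactl` utility is available (it normally is on a PipeWire setup that ships the PulseAudio compatibility layer); this is just a quick sketch, not part of the official release notes:

```
# Ask the PulseAudio-compatible API which server is answering.
# On a PipeWire system this typically reports something like:
#   Server Name: PulseAudio (on PipeWire 0.3.x)
pactl info | grep "Server Name"

# Optionally, confirm the PipeWire user services are running.
systemctl --user status pipewire pipewire-pulse
```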
#### Linux Kernel 5.19
![kubuntu 22.10 linux kernel 5.19][9]
Kubuntu 22.10 features the latest Linux Kernel 5.19, which should lead to improved support for ARM SoCs, Arc Alchemist GPUs, various BTRFS improvements, initial support for AMD RDNA3 graphics, and more.
#### Wayland Session for Testing
![kubuntu 22.10 wayland session switcher][10]
Kubuntu 22.10 features initial support for a Plasma Wayland session, but it is intended for testing purposes only and is not a complete integration.
![kubuntu 22.10 wayland session info][11]
#### Other Upgrades
Some of the other updates include the following:
- Ability to customize desktop accent color.
- Firefox 104 snap as the default browser.
- Qt 5.15.6
- LibreOffice 7.4
- Improved App Store.
To explore more about the release, refer to the [official release notes][12].
### Download Kubuntu 22.10
You can download the latest ISO from [Ubuntu's central image repository][13] or its [official website][14].
[Kubuntu 22.10][14]
It might take a while for its official website to make the ISO available.
💬 Are you excited about this release?
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/kubuntu-22-10-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/kubuntu-22-10-release.jpg
[2]: https://kde.org/plasma-desktop/
[3]: https://news.itsfoss.com/content/images/2022/10/Kubuntu_22.10_Desktop.png
[4]: https://itsfoss.com/long-term-support-lts/
[5]: https://news.itsfoss.com/content/images/2022/10/Kubuntu_22.10_KDE_Version.png
[6]: https://news.itsfoss.com/kde-plasma-5-26-release/
[7]: https://pipewire.org/
[8]: https://www.freedesktop.org/wiki/Software/PulseAudio/
[9]: https://news.itsfoss.com/content/images/2022/10/Kubuntu_22.10_Linux_Kernel.png
[10]: https://news.itsfoss.com/content/images/2022/10/Kubuntu_22.10_Wayland_Session.png
[11]: https://news.itsfoss.com/content/images/2022/10/Kubuntu_22.10_Wayland_Session_2.png
[12]: https://wiki.ubuntu.com/KineticKudu/ReleaseNotes/Kubuntu
[13]: https://cdimage.ubuntu.com/kubuntu/releases/22.10/release/
[14]: https://kubuntu.org/getkubuntu/

View File

@ -0,0 +1,123 @@
[#]: subject: "Ubuntu MATE 22.10 Release Has Some Interesting Upgrades!"
[#]: via: "https://news.itsfoss.com/ubuntu-mate-22-10-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ubuntu MATE 22.10 Release Has Some Interesting Upgrades!
======
Ubuntu MATE 22.10 has arrived with subtle and useful changes. Take a lookie!
![Ubuntu MATE 22.10 Release Has Some Interesting Upgrades!][1]
Ubuntu MATE is one of the [official Ubuntu flavours][2] that add interesting improvements with every upgrade.
It is aimed at users who cherish the look and feel of a traditional desktop but also desire the functionality of a modern operating system.
The Ubuntu MATE 22.10 release adds a number of improvements and features; let us take a look at them.
### Ubuntu MATE 22.10: What's New?
![ubuntu mate 22.10 desktop][3]
Based on the non-LTS release of [Ubuntu 22.10][4] '_Kinetic Kudu_', Ubuntu MATE 22.10 brings in multiple updates, some key highlights include:
- **Improvements to MATE Desktop.**
- **New AI Wallpapers.**
- **PipeWire is the default audio server.**
- **New MATE User Manager.**
- **Firefox 105 update.**
- **LibreOffice 7.4.**
> 💡Note that Ubuntu MATE upgrades usually include more feature additions. This time, **Martin Wimpress** has been working to bring a similar experience to the Debian MATE edition. You can read more details in our previous coverage.
#### MATE Desktop Upgrades
![ubuntu mate 22.10 desktop view][5]
MATE desktop receives various bug fixes and updates to the **Ayatana indicators** and the **MATE Panel**.
Now, you can align the applets to the **center** alongside the usual left and right alignment options.
This feature officially arrives with the **MATE Desktop 1.28** release, but the Ubuntu MATE team has made it available with this release on top of **MATE Desktop 1.27**.
#### MATE User Manager
![ubuntu mate 22.10 user manager][6]
The MATE user manager is a new addition to the distro, allowing you to add/modify/remove user accounts.
With this, you can choose which users can be administrators, set up auto-login, set profile pictures, and manage group memberships. Pretty handy and a much-needed feature for computers with multiple users.
#### New AI Wallpapers
![ubuntu mate 22.10 ai wallpapers][7]
Another big highlight of this release is the addition of new AI-generated wallpapers.
These look beautiful! 😍
Seeing that AI-generated wallpapers are all the rage right now, the Ubuntu MATE team has included a new bunch of them with Ubuntu MATE 22.10.
They were created by [Simon Butcher][8] using diffusion models to illustrate the 'kudu' (an antelope).
#### Linux Kernel 5.19
Ubuntu MATE 22.10 leverages the improvements brought forward by Linux Kernel 5.19, including enhanced support for various ARM SoCs, Arc Alchemist GPUs, and more.
You can read our coverage of the same to learn more.
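If you are curious whether your upgraded system actually picked up the newer kernel, a quick check from a terminal looks like this (the exact version string will vary per build; the one in the comment is only illustrative):

```
# Print the running kernel release, e.g. something in the 5.19 series
uname -r

# Kernel name, release, and machine architecture in one line
uname -srm
```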
#### 🛠️ Other Changes and Improvements
Like the other Ubuntu 22.10 flavors, Ubuntu MATE 22.10 replaces [PulseAudio][9] with [PipeWire][10] for better audio handling and the inclusion of additional Bluetooth codecs such as AAC, LDAC, aptX, and aptX HD.
Other notable changes include:
- **Updated apps include Firefox 105, LibreOffice 7.4, Celluloid 0.20 and Evolution 3.46.**
- **Ubuntu MATE HUD supports MATE, XFCE, and Budgie with more configuration ability.**
You can check out Ubuntu MATE 22.10 [official release notes][11] if you are curious.
### Download Ubuntu MATE 22.10
You can download the latest ISO from [Ubuntu's central image repository][12] or its [official website][13].
It might take a while for its official website/repo to make the ISO available.
> 💡Ubuntu MATE 22.10 will be supported for nine months until **July 2023**. If you want stability over features, you should prefer using an [LTS version][14].
[Download Ubuntu MATE 22.10][13]
💬 _Interested in trying Ubuntu MATE 22.10? Let me know your thoughts in the comments._
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ubuntu-mate-22-10-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/ubuntu-mate-22-10-release.jpg
[2]: https://itsfoss.com/which-ubuntu-install/
[3]: https://news.itsfoss.com/content/images/2022/10/ubuntu-mate-22-10.png
[4]: https://news.itsfoss.com/ubuntu-22-10-features/
[5]: https://news.itsfoss.com/content/images/2022/10/Ubuntu_MATE_22.10_Desktop.png
[6]: https://news.itsfoss.com/content/images/2022/10/Ubuntu_MATE_22.10_User_Manager.png
[7]: https://news.itsfoss.com/content/images/2022/10/Ubuntu_MATE_22.10_AI_Wallpapers.png
[8]: https://twitter.com/simonjbutcher
[9]: https://www.freedesktop.org/wiki/Software/PulseAudio/
[10]: https://pipewire.org/
[11]: https://ubuntu-mate.org/blog/ubuntu-mate-kinetic-kudu-release-notes/
[12]: https://cdimage.ubuntu.com/ubuntu-mate/releases/22.10/release/
[13]: https://ubuntu-mate.org/download/
[14]: https://itsfoss.com/long-term-support-lts/

View File

@ -0,0 +1,118 @@
[#]: subject: "Ubuntu Budgie 22.10 Release Improves Control Center and Removes Some GNOME Apps"
[#]: via: "https://news.itsfoss.com/ubuntu-budgie-22-10-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ubuntu Budgie 22.10 Release Improves Control Center and Removes Some GNOME Apps
======
Ubuntu Budgie 22.10 is an interesting release that removes several GNOME apps and adds other improvements.
![Ubuntu Budgie 22.10 Release Improves Control Center and Removes Some GNOME Apps][1]
[Ubuntu Budgie][2] is an official flavor of Ubuntu, which is popular for its traditional desktop interface and minimal software bloat.
The release of Ubuntu Budgie 22.10 brings in a few crucial tweaks and additions.
### 🆕 Ubuntu Budgie 22.10: What's New?
![ubuntu budgie 22.10][3]
Based on Ubuntu 22.10 'Kinetic Kudu', Ubuntu Budgie 22.10 features Budgie Desktop 10.6.4 and a host of other improvements.
Some of the notable highlights include:
- **Enhanced Budgie Control Center**
- **Updated Budgie Welcome app**
- **Replacing various GNOME-based apps**
- **Updated Translations**
#### Budgie Desktop and Control Center
![ubuntu budgie 22.10 desktop settings][4]
Budgie Desktop has been updated to V10.6.4, which adds a new global option to control the spacing between applets and features various improvements to the workspace and clock applets.
![ubuntu budgie 22.10 display color profiles][5]
The Budgie Control Center has also received a bunch of tweaks, such as reworked display color profile support, revamped screen-sharing with support for [RDP][6] and [VNC][7], an option for fractional scaling of display, and more.
#### Welcome App Updates
![ubuntu budgie 22.10 welcome app][8]
Ubuntu Budgie 22.10 features the updated [Budgie Welcome app][9] with improved translations and a few changes.
#### Change in Default Apps
The developers of Ubuntu Budgie have started replacing and removing GNOME-based apps in favor of MATE-based apps and other alternatives.
They decided to do so because GNOME-based apps, with their rounded-off edges, look inconsistent in Budgie alongside other apps.
The inconsistency is caused by GNOME moving to the Libadwaita library for its styling and theming needs.
The Libadwaita library was a controversial addition to GNOME that not many users liked; you can go through our coverage to learn more.
Here are some of the apps that have been removed or replaced:
- GNOME-Calculator replaced by MATE Calculator.
- GNOME-Calendar removed.
- GNOME System Monitor replaced by MATE System Monitor.
- GNOME Screenshot removed.
- GNOME Font Viewer replaced by [Font-Manager][10].
#### 🛠️ Other Changes
Some of the other changes include:
- **Rework of In-Built Theme**
- **Removal of PulseAudio in favor of PipeWire**
- **Native Screenshot Capability**
- **Support for WebP Images**
- **Ability to View Monitor's Refresh Rate**
You can go through the [release notes][11] to know more.
### 📥 Download Ubuntu Budgie 22.10
You can download the latest ISO from [Ubuntu's central image repository][12] or its [official website][13].
It might take a while for its official website to make the ISO available.
[Download Ubuntu Budgie 22.10][13]
> 💡Ubuntu Budgie 22.10 will be supported for nine months until **July 2023**. If you want stability over features, you should prefer using an [LTS version][14].
💬 What do you think of this non-LTS release? Willing to give it a try?
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ubuntu-budgie-22-10-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/ubuntu-budgie-22-10-release.png
[2]: https://ubuntubudgie.org/
[3]: https://news.itsfoss.com/content/images/2022/10/Ubuntu_Budgie_22.10.png
[4]: https://news.itsfoss.com/content/images/2022/10/Ubuntu_Budgie_22.10_Desktop_Settings.png
[5]: https://news.itsfoss.com/content/images/2022/10/Ubuntu_Budgie_22.10_Color_Profiles.png
[6]: https://en.wikipedia.org/wiki/Remote_Desktop_Protocol
[7]: https://en.wikipedia.org/wiki/Virtual_Network_Computing
[8]: https://news.itsfoss.com/content/images/2022/10/Ubuntu_Budgie_22.10_Welcome.png
[9]: https://ubuntubudgie.org/2022/02/quick-overview-of-budgie-welcome-application/
[10]: https://itsfoss.com/font-manager/
[11]: https://ubuntubudgie.org/2022/09/ubuntu-budgie-22-10-release-notes/
[12]: https://cdimage.ubuntu.com/ubuntu-budgie/releases/22.10/
[13]: https://ubuntubudgie.org/downloads/
[14]: https://itsfoss.com/long-term-support-lts/

View File

@ -0,0 +1,135 @@
[#]: subject: "Xubuntu 22.10 Releases With Xfce Upgrades, and Other Refinements"
[#]: via: "https://news.itsfoss.com/xubuntu-22-10-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Xubuntu 22.10 Releases With Xfce Upgrades, and Other Refinements
======
Xubuntu 22.10 provides a refined XFCE experience. Learn more about it here.
![Xubuntu 22.10 Releases With Xfce Upgrades, and Other Refinements][1]
Xubuntu is an XFCE-powered official Ubuntu flavour.
It is also one of the best lightweight Linux distributions available.
With the latest Xubuntu 22.10 "**Kinetic Kudu**" release, you can expect desktop environment improvements, feature additions, and several refinements across the board.
### Xubuntu 22.10: What's New?
![Xubuntu 22.10 home][2]
Xubuntu 22.10 brings in some exciting upgrades. Some of the highlights include:
- **Xfce 4.16 (or Xfce 4.17 dev)**
- **Catfish appearance update.**
- **New icon refresh and deprecated elementary-xfce-darker-theme.**
- **Mousepad search history.**
- **Thunar file manager improvements.**
- **Xfce task manager.**
> 💡Xubuntu 22.10 will be supported for nine months until **July 2023**. If you want stability over features, you should prefer using an [LTS version][3].
#### XFCE 4.17 Development Version or XFCE 4.16?
The release notes for Xubuntu 22.10 say it features the Xfce 4.17 development version.
However, when I installed the beta version for Xubuntu 22.10 (and updated it to the latest), it said it features Xfce 4.16.
![][4]
It is not clear whether they pulled the Xfce 4.17 development version or whether Xfce 4.16 is here to stay for now.
#### Catfish Appearance
![xubuntu catfish][5]
Catfish is a file search utility on Xubuntu. With the new upgrade, it has a refreshed appearance with tweaks under the hood.
You also get an "**Open With**" context menu when interacting with the files you searched for.
![][6]
A pretty subtle but valuable feature addition for Catfish.
#### GNOME 43 Software
Among notable app updates, GNOME's latest Software Center is a nice thing to have. This is how it looks with Xubuntu 22.10:
![][7]
Of course, it may not give you a consistent look with other applications on Xfce, but I think you can overlook that.
#### Icon Updates
With the elementary-xfce 0.17 icon update, there are many new icons and cleaner options that provide a consistent Xubuntu desktop experience.
![][8]
Additionally, the **elementary-xfce-darkest theme** icon pack has been deprecated.
![][9]
#### Task Manager Right-Click Option
![][10]
You can now copy the full process path to the clipboard. This could be useful to troubleshoot or stop things from the command line when required.
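For instance, once you have the command line in your clipboard, a rough sketch of how you might act on it from a terminal could look like this; the application path and PID below are made up purely for illustration:

```
# Suppose the copied command line was: /usr/bin/example-app --some-flag
pgrep -af example-app    # list matching PIDs along with their full command lines
kill 12345               # ask the process with PID 12345 to terminate gracefully
kill -9 12345            # force-kill it only if it refuses to exit
```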
### Other Enhancements
![][11]
There are several other notable changes that include:
- **Linux Kernel 5.19**
- **Mozilla Firefox 105.**
- **Alt-Tab view improved with more prominent icons.**
- **Mosaic puzzle added to SGT Puzzles collection.**
- **Thunar archive plugin now supports compressing zip files.**
- **Mousepad text editor now includes a search history and a few more tweaks.**
To learn more about the changes, check out the [official release notes][12].
### Download Xubuntu 22.10
You can download the latest ISO from [Ubuntu's central image repository][13] or its [official website][14].
It might take a while for its official website to make the ISO available.
[Download Xubuntu 22.10][14]
💬 _What do you think about Xubuntu 22.10? Let me know your thoughts in the comments._
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/xubuntu-22-10-release/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/xubuntu-22-10-release.jpg
[2]: https://news.itsfoss.com/content/images/2022/10/xubuntu-22-10.png
[3]: https://itsfoss.com/long-term-support-lts/
[4]: https://news.itsfoss.com/content/images/2022/10/xfce-4-16.jpg
[5]: https://news.itsfoss.com/content/images/2022/10/catfish-xubuntu-22-10.png
[6]: https://news.itsfoss.com/content/images/2022/10/catfish-openwith-1.jpg
[7]: https://news.itsfoss.com/content/images/2022/10/xubuntu-gnome-43-software.jpg
[8]: https://news.itsfoss.com/content/images/2022/10/xubuntu-22-10-icons.jpg
[9]: https://news.itsfoss.com/content/images/2022/10/xfce-dark-theme.png
[10]: https://news.itsfoss.com/content/images/2022/10/task-manager-copy-command-line.jpg
[11]: https://news.itsfoss.com/content/images/2022/10/xubuntu-22-10-puzzle.png
[12]: https://wiki.xubuntu.org/releases/22.10/release-notes
[13]: https://cdimage.ubuntu.com/xubuntu/releases/22.10/release/
[14]: https://xubuntu.org/download/

View File

@ -1,142 +0,0 @@
[#]: subject: "Using habits to practice open organization principles"
[#]: via: "https://opensource.com/open-organization/22/6/using-habits-practice-open-organization-principles"
[#]: author: "Ron McFarland https://opensource.com/users/ron-mcfarland"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Using habits to practice open organization principles
======
Follow these steps to implement habits that support open culture and get rid of those that don't.
![Selfcare, drinking tea on the porch][1]
Image by: opensource.com
Habits are a long-term interest of mine. Several years ago, I gave a presentation on habits, both good and bad, and how to expand on good habits and change bad ones. Just recently, I read the habits-focused book Smart Thinking by Art Markman. You might ask what this has to do with [open organization principles.][2] There is a connection, and I'll explain it in this two-part article on managing habits.
In this first article, I talk about habits, how they work, and—most important—how you can start to change them. In the second article, I review Markman's thoughts as presented in his book.
### The intersection of principles and habits
Suppose you learned about open organization principles and although you found them interesting and valuable, you just weren't in the habit of using them. Here's how that might look in practice.
**Community:** If you're faced with a significant challenge but think you can't address it alone, you're likely in the habit of just giving up. Wouldn't it be better to have the habit of building a community of like-minded people that collectively can solve the problem?
**Collaboration:** Suppose you don't think you're a good collaborator. You like to do things alone. You know that there are cases when collaboration is required, but you don't have a habit of engaging in it. To counteract that, you must build a habit of collaborating more.
**Transparency:** Say you like to keep most of what you do and know a secret. However, you know that if you don't share information, you're not likely to get good information from others. Therefore, you must create the habit of being more transparent.
**Inclusivity:** Imagine you are uncomfortable working with people you don't know and who are different from you, whether in personality, culture, or language. You know that if you want to be successful, you must work with a wide variety of people. How do you create a habit of being more inclusive?
**Adaptability:** Suppose you tend to resist change long after what you're doing is no longer achieving what you had hoped it would. You know you must adapt and redirect your efforts, but how can you create a habit of being adaptive?
### What is a habit?
Before I give examples regarding the above principles, I'll explain some of the relevant characteristics of a habit.
* A habit is a behavior performed repeatedly—so much so that it's now performed without thinking.
* A habit is automatic and feels right at the time. The person is so used to it, that it feels good when doing it, and to do something else would require effort and make them feel uncomfortable. They might have second thoughts afterward though.
* Some habits are good and extremely helpful by saving you a lot of energy. The brain is 2% of the body's weight but consumes 20% of your daily energy. Because thinking and concentration require a lot of energy, your mind is built to save it through developing unconscious habits.
* Some habits are bad for you, so you desire to change them.
* All habits offer some reward, even if it is only temporary.
* Habits are formed around what you are familiar with and what you know, even habits you dont necessarily like.
### The three steps of a habit
1. Cue (trigger): First, a cue or trigger tells the brain to go into automatic mode, using previously learned habitual behavior. Cues can be things like seeing a candy bar or a television commercial, being in a certain place at a certain time of day, or just seeing a particular person. Time pressure can trigger a routine. An overwhelming atmosphere can trigger a routine. Simply put, something reminds you to behave a certain way.
2. Routine: The routine follows the trigger. A routine is a set of physical, mental, and/or emotional behaviors that can be incredibly complex or extremely simple. Some habits, such as those related to emotions, are measured in milliseconds.
3. Reward: The final step is the reward, which helps your brain figure out whether a particular activity is worth remembering for the future. Rewards can range from food or drugs that cause physical sensations to joy, pride, praise, or personal self-esteem.
### Bad habits in a business environment
Habits aren't just for individuals. All organizations have good and bad institutional habits. However, some organizations deliberately design their habits, while others just let them evolve without forethought, possibly through rivalries or fear. These are some organizational habit examples:
* Always being late with reports
* Working alone or working in groups when the opposite is appropriate
* Being triggered by excess pressure from the boss
* Not caring about declining sales
* Not cooperating among a sales team because of excess competition
* Allowing one talkative person to dominate a meeting
### A step-by-step plan to change a habit
Habits don't have to last forever. You can change your own behavior. First, remember that many habits cannot be changed concurrently. Instead, find a keystone habit and work on it first. This produces small, quick rewards. Remember that one keystone habit can create a chain reaction.
Here is a four-step framework you can apply to changing any habit, including habits related to open organization principles.
##### Step one: identify the routine
Identify the habit loop and the routine in it (for example, when an important challenge comes up that you can't address alone). The routine (the behaviors you do) is the easiest to identify, so start there. For example: "In my organization, no one discusses problems with anyone. They just give up before starting." Determine the routine that you want to modify, change, or just study. For example: "Every time an important challenge comes up, I should discuss it with people and try to develop a community of like-minded people who have the skills to address it."
##### Step two: experiment with the rewards
Rewards are powerful because they satisfy cravings. But, we're often not conscious of the cravings that drive our behavior. They are only evident afterward. For example, there may be times in meetings when you want nothing more than to get out of the room and avoid a subject of conversation, even though down deep you know you should figure out how to address the problem.
To learn what a craving is, you must experiment. That might take a few days, weeks, or longer. You must feel the triggering pressure when it occurs to identify it fully. For example, ask yourself how you feel when you try to escape responsibility.
Consider yourself a scientist, just doing experiments and gathering data. The steps in your investigation are:
1. After the first routine, start adjusting the routines that follow to see whether there's a reward change. For example, if you give up every time you see a challenge you can't address by yourself, the reward is the relief of not taking responsibility. A better response might be to discuss the issue with at least one other person who is equally concerned about the issue. The point is to test different hypotheses to determine which craving drives your routine. Are you craving the avoidance of responsibility?
2. After four or five different routines and rewards, write down the first three or four things that come to mind right after each reward is received. Instead of just giving up in the face of a challenge, for instance, you discuss the issue with one person. Then, you decide what can be done.
3. After writing about your feeling or craving, set a timer for 15 minutes. When it rings, ask yourself whether you still have the craving. Before giving in to a craving, rest and think about the issue one or two more times. This forces you to be aware of the moment and helps you later recall what you were thinking about at that moment.
4. Try to remember what you were thinking and feeling at that precise instant, and then 15 minutes after the routine. If the craving is gone, you have identified the reward.
##### Step three: isolate the cue or trigger
The cue is often hard to identify because there's usually too much information bombarding you as your behaviors unfold. To identify a cue amid other distractions, you can observe four factors the moment the urge hits you:
**Location:** Where did it occur? ("My biggest challenges come out in meetings.")
**Time:** When did it occur? ("Meetings in the afternoon, when I'm tired, are the worst time, because I'm not interested in putting forth any effort.")
**Feelings:** What was your emotional state? ("I feel overwhelmed and depressed when I hear the problem.")
**People:** Who or what type of people were around you at the time, or were you alone? ("In the meetings, most other people don't seem interested in the problem either. Others dominate the discussion.")
##### Step four: have a plan
Once you have confirmed the reward driving your behavior, the cues that trigger it, and the behavior itself, you can begin to shift your actions. Follow these three easy steps:
1. First, plan for the cue. ("In meetings, I'm going to look for and focus my attention on important problems that come up.")
2. Second, choose a behavior that delivers the same reward but without the penalties you suffer now. ("I'm going to explore a plan to address that problem and consider what resources and skills I need to succeed. I'm going to feel great when I create a community that's able to address the problem successfully.")
3. Third, make the behavior a deliberate choice each and every time, until you no longer need to think about it. ("I'm going to consciously pay attention to major issues until I can do it without thinking. I might look at agendas of future meetings, so I know what to expect in advance. Before and during every meeting, I will ask why I should be here, to make sure I'm focused on what is important.")
##### Plan to avoid forgetting something that must be done
To successfully start doing something you often forget, follow this process:
1. Plan what you want to do.
2. Determine when you want to complete it.
3. Break the project into small tasks as needed.
4. With a timer or daily planner, set up cues to start each task.
5. Complete each task on schedule.
6. Reward yourself for staying on schedule.
### Habit change
Change takes a long time. Sometimes a support group is required to help change a habit. Sometimes, a lot of practice and role play of a new and better routine in a low-stress environment is required. To find an effective reward, you need repeated experimentation.
Sometimes habits are only symptoms of a more significant, deeper problem. In these cases, professional help may be required. But if you have the desire to change and accept that there will be minor failures along the way, you can gain power over any habit.
In this article, I've used examples of community development using the cue-routine-reward process. It can equally be applied to the other open organization principles. I hope this article got you thinking about how to manage habits through knowing how habits work, taking steps to change habits, and making plans to avoid forgetting things you want done. Whether it's an open organization principle or anything else, you can now diagnose the cue, the routine, and the reward. That will lead you to a plan to change a habit when the cue presents itself.
In my next article, I'll look at habits through the lens of Art Markman's thoughts on Smart Thinking.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/22/6/using-habits-practice-open-organization-principles
作者:[Ron McFarland][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/coffee_tea_selfcare_wfh_porch_520.png
[2]: https://theopenorganization.org/definition/open-organization-definition/

View File

@ -1,184 +0,0 @@
[#]: subject: "7 summer book recommendations from open source enthusiasts"
[#]: via: "https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list"
[#]: author: "Joshua Allen Holm https://opensource.com/users/holmja"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 summer book recommendations from open source enthusiasts
======
Members of the Opensource.com community recommend this mix of books covering everything from a fun cozy mystery to non-fiction works that explore thought-provoking topics.
![Ceramic mug of tea or coffee with flowers and a book in front of a window][1]
Image by: Photo by [Carolyn V][2] on [Unsplash][3]
It is my great pleasure to introduce Opensource.com's 2022 summer reading list. This year's list contains seven wonderful reading recommendations from members of the Opensource.com community. You will find a nice mix of books covering everything from a fun cozy mystery to non-fiction works that explore thought-provoking topics. I hope you find something on this list that interests you.
Enjoy!
![Book title 97 Things Every Java Programmer Should Know][4]
Image by: O'Reilly Press
**[97 Things Every Java Programmer Should Know: Collective Wisdom from the Experts, edited by Kevlin Henney and Trisha Gee][5]**
*[Recommendation written by Seth Kenlon][6]*
Written by 73 different authors working in all aspects of the software industry, this book has a secret to its greatness: it actually applies to much more than just Java programming. Of course, some chapters lean into Java, but there are topics like "Be aware of your container surroundings," "Deliver better software, faster," and "Don't hIDE your tools" that apply to development regardless of language.
Better still, some chapters apply to life in general. "Break problems and tasks into small chunks" is good advice on how to tackle any problem, "Build diverse teams" is important for every group of collaborators, and "From puzzles to products" is a fascinating look at how the mind of a puzzle-solver can apply to many different job roles.
Each chapter is just a few pages, and with 97 to choose from, it's easy to skip over the ones that don't apply to you. Whether you write Java code all day, just dabble, or if you haven't yet started, this is a great book for geeks interested in code and the process of software development.
![Book title A City is Not a Computer][7]
Image by: Princeton University Press
**[A City is Not a Computer: Other Urban Intelligences, by Shannon Mattern][8]**
*[Recommendation written by Scott Nesbitt][9]*
These days, it's become fashionable (if not inevitable) to make everything *smart*: Our phones, our household appliances, our watches, our cars, and, especially, our cities.
With the latter, that means putting sensors everywhere, collecting data as we go about our business, and pushing information (whether useful or not) to us based on that data.
This raises the question: does embedding all that technology in a city make it smart? In *A City Is Not a Computer*, Shannon Mattern argues that it doesn't.
A goal of making cities smart is to provide better engagement with and services to citizens. Mattern points out that smart cities often "aim to merge the ideologies of technocratic managerialism and public service, to reprogram citizens as 'consumers' and 'users'," instead of encouraging citizens to be active participants in their cities' wider life and governance.
Then there's the data that smart systems collect. We don't know what and how much is being gathered. We don't know how it's being used and by whom. There's *so much* data being collected that it overwhelms the municipal workers who deal with it. They can't process it all, so they focus on low-hanging fruit while ignoring deeper and more pressing problems. That definitely wasn't what cities were promised when they were sold smart systems as a balm for their urban woes.
*A City Is Not a Computer* is a short, dense, well-researched polemic against embracing smart cities because technologists believe we should. The book makes us think about the purpose of a smart city, who really benefits from making a city smart, and makes us question whether we need to or even should do that.
![Book title git sync murder][10]
Image by: Tilted Windmill Press
**[git sync murder, by Michael Warren Lucas][11]**
*[Recommendation written by Joshua Allen Holm][12]*
Dale Whitehead would rather stay at home and connect to the world through his computer's terminal, especially after what happened at the last conference he attended. During that conference, Dale found himself in the role of an amateur detective solving a murder. You can read about that case in the first book in this series, *git commit murder*.
Now, back home and attending another conference, Dale again finds himself in the role of detective. *git sync murder* finds Dale attending a local tech conference/sci-fi convention where a dead body is found. Was it murder or just an accident? Dale, now the "expert" on these matters, finds himself dragged into the situation and takes it upon himself to figure out what happened. To say much more than that would spoil things, so I will just say *git sync murder* is engaging and enjoyable to read. Reading *git commit murder* first is not necessary to enjoy *git sync murder*, but I highly recommend both books in the series.
Michael Warren Lucas's *git murder* series is perfect for techies who also love cozy mysteries. Lucas has literally written the book on many complex technical topics, and it carries over to his fiction writing. The characters in *git sync murder* talk tech at conference booths and conference social events. If you have not been to a conference recently because of COVID and miss the experience, Lucas will transport you to a tech conference with the added twist of a murder mystery to solve. Dale Whitehead is an interesting, if somewhat unorthodox, cozy mystery protagonist, and I think most Opensource.com readers would enjoy attending a tech conference with him as he finds himself thrust into the role of amateur sleuth.
![Book title Kick Like a Girl][13]
Image by: Inner Wings Foundation
**[Kick Like a Girl, by Melissa Di Donato Roos][14]**
*[Recommendation written by Joshua Allen Holm][15]*
Nobody likes to be excluded, but that is what happens to Francesca when she wants to play football at the local park. The boys won't play with her because she's a girl, so she goes home upset. Her mother consoles her by relating stories about various famous women who have made an impact in some significant way. The historical figures detailed in *Kick Like a Girl* include women from throughout history and from many different fields. Readers will learn about Frida Kahlo, Madeleine Albright, Ada Lovelace, Rosa Parks, Amelia Earhart, Marie Curie, Valentina Tereshkova, Florence Nightingale, and Malala Yousafzai. After hearing the stories of these inspiring figures, Francesca goes back to the park and challenges the boys to a football match.
*Kick Like a Girl* features engaging writing by Melissa Di Donato Roos (SUSE's CEO) and excellent illustrations by Ange Allen. This book is perfect for young readers, who will enjoy the rhyming text and colorful illustrations. Di Donato Roos has also written two other books for children, *How Do Mermaids Poo?* and *The Magic Box*, both of which are also worth checking out.
![Book title Mine!][16]
Image by: Doubleday
**[Mine!: How the Hidden Rules of Ownership Control Our Lives, by Michael Heller and James Salzman][17]**
*[Recommendation written by Bryan Behrenshausen][18]*
"A lot of what you know about ownership is wrong," authors Michael Heller and James Salzman write in *Mine!* It's the kind of confrontational invitation people drawn to open source can't help but accept. And this book is certainly one for open source aficionados, whose views on ownership—of code, of ideas, of intellectual property of all kinds—tend to differ from mainstream opinions and received wisdom. In this book, Heller and Salzman lay out the "hidden rules of ownership" that govern who controls access to what. These rules are subtle, powerful, deeply historical conventions that have become so commonplace they just seem incontrovertible. We know this because they've become platitudes: "First come, first served" or "You reap what you sow." Yet we see them play out everywhere: On airplanes in fights over precious legroom, in the streets as neighbors scuffle over freshly shoveled parking spaces, and in courts as juries decide who controls your inheritance and your DNA. Could alternate theories of ownership create space for rethinking some essential rights in the digital age? The authors certainly think so. And if they're correct, we might respond: Can open source software serve as a model for how ownership works—or doesn't—in the future?
![Book Title Not All Fairy Tales Have Happy Endings][19]
Image by: Lulu.com
**[Not All Fairy Tales Have Happy Endings: The Rise and Fall of Sierra On-Line, by Ken Williams][20]**
*[Recommendation written by Joshua Allen Holm][21]*
During the 1980s and 1990s, Sierra On-Line was a juggernaut in the computer software industry. From humble beginnings, this company, founded by Ken and Roberta Williams, published many iconic computer games. King's Quest, Space Quest, Quest for Glory, Leisure Suit Larry, and Gabriel Knight are just a few of the company's biggest franchises.
*Not All Fairy Tales Have Happy Endings* covers everything from the creation of Sierra's first game, [Mystery House][22], to the company's unfortunate and disastrous acquisition by CUC International and the aftermath. The Sierra brand would live on for a while after the acquisition, but the Sierra founded by the Williams was no more. Ken Williams recounts the entire history of Sierra in a way that only he could. His chronological narrative is interspersed with chapters providing advice about management and computer programming. Ken Williams had been out of the industry for many years by the time he wrote this book, but his advice is still extremely relevant.
Sierra On-Line is no more, but the company made a lasting impact on the computer gaming industry. *Not All Fairy Tales Have Happy Endings* is a worthwhile read for anyone interested in the history of computer software. Sierra On-Line was at the forefront of game development during its heyday, and there are many valuable lessons to learn from the man who led the company during those exciting times.
![Book title The Soul of a New Machine][23]
Image by: Back Bay Books
**[The Soul of a New Machine, by Tracy Kidder][24]**
*[Recommendation written by Guarav Kamathe][25]*
I am an avid reader of the history of computing. It's fascinating to know how these intelligent machines that we have become so dependent on (and often take for granted) came into being. I first heard of [The Soul of a New Machine][26] via [Bryan Cantrill][27]'s [blog post][28]. This is a non-fiction book written by [Tracy Kidder][29] and published in 1981, for which he [won a Pulitzer prize][30]. Imagine it's the 1970s, and you are part of the engineering team tasked with designing the [next generation computer][31]. The backdrop of the story is Data General Corporation, a then-minicomputer vendor that was racing against time to compete with the 32-bit VAX computers from Digital Equipment Corporation (DEC). The book outlines how a rivalry between two competing teams within Data General, both wanting to take a shot at designing the new machine, results in a feud. What follows is a fascinating look at the events that unfold. The book provides insights into the minds of the engineers involved, the management, their work environment, the technical challenges they faced along the way and how they overcame them, how stress affected their personal lives, and much more. Anybody who wants to know what goes into making a computer should read this book.
That is the 2022 suggested reading list. It provides a variety of great options that I believe will provide Opensource.com readers with many hours of thought-provoking entertainment. Be sure to check out our previous reading lists for even more book recommendations.
* [2021 Opensource.com summer reading list][32]
* [2020 Opensource.com summer reading list][33]
* [2019 Opensource.com summer reading list][34]
* [2018 Open Organization summer reading list][35]
* [2016 Opensource.com summer reading list][36]
* [2015 Opensource.com summer reading list][37]
* [2014 Opensource.com summer reading list][38]
* [2013 Opensource.com summer reading list][39]
* [2012 Opensource.com summer reading list][40]
* [2011 Opensource.com summer reading list][41]
* [2010 Opensource.com summer reading list][42]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list
作者:[Joshua Allen Holm][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/tea-cup-mug-flowers-book-window.jpg
[2]: https://unsplash.com/@sixteenmilesout?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/tea?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://opensource.com/sites/default/files/2022-06/97_Things_Every_Java_Programmer_Should_Know_1.jpg
[5]: https://www.oreilly.com/library/view/97-things-every/9781491952689/
[6]: https://opensource.com/users/seth
[7]: https://opensource.com/sites/default/files/2022-06/A_City_is_Not_a_Computer_0.jpg
[8]: https://press.princeton.edu/books/paperback/9780691208053/a-city-is-not-a-computer
[9]: https://opensource.com/users/scottnesbitt
[10]: https://opensource.com/sites/default/files/2022-06/git_sync_murder_0.jpg
[11]: https://mwl.io/fiction/crime#gsm
[12]: https://opensource.com/users/holmja
[13]: https://opensource.com/sites/default/files/2022-06/Kick_Like_a_Girl.jpg
[14]: https://innerwings.org/books/kick-like-a-girl
[15]: https://opensource.com/users/holmja
[16]: https://opensource.com/sites/default/files/2022-06/Mine.jpg
[17]: https://www.minethebook.com/
[18]: https://opensource.com/users/bbehrens
[19]: https://opensource.com/sites/default/files/2022-06/Not_All_Fairy_Tales.jpg
[20]: https://kensbook.com/
[21]: https://opensource.com/users/holmja
[22]: https://en.wikipedia.org/wiki/Mystery_House
[23]: https://opensource.com/sites/default/files/2022-06/The_Soul_of_a_New_Machine.jpg
[24]: https://www.hachettebookgroup.com/titles/tracy-kidder/the-soul-of-a-new-machine/9780316204552/
[25]: https://opensource.com/users/gkamathe
[26]: https://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine
[27]: https://en.wikipedia.org/wiki/Bryan_Cantrill
[28]: http://dtrace.org/blogs/bmc/2019/02/10/reflecting-on-the-soul-of-a-new-machine/
[29]: https://en.wikipedia.org/wiki/Tracy_Kidder
[30]: https://www.pulitzer.org/winners/tracy-kidder
[31]: https://en.wikipedia.org/wiki/Data_General_Eclipse_MV/8000
[32]: https://opensource.com/article/21/6/2021-opensourcecom-summer-reading-list
[33]: https://opensource.com/article/20/6/summer-reading-list
[34]: https://opensource.com/article/19/6/summer-reading-list
[35]: https://opensource.com/open-organization/18/6/summer-reading-2018
[36]: https://opensource.com/life/16/6/2016-summer-reading-list
[37]: https://opensource.com/life/15/6/2015-summer-reading-list
[38]: https://opensource.com/life/14/6/annual-reading-list-2014
[39]: https://opensource.com/life/13/6/summer-reading-list-2013
[40]: https://opensource.com/life/12/7/your-2012-open-source-summer-reading
[41]: https://opensource.com/life/11/7/summer-reading-list
[42]: https://opensource.com/life/10/8/open-books-opensourcecom-summer-reading-list

View File

@ -1,253 +0,0 @@
[#]: subject: "Advantages and Disadvantages of Using Linux"
[#]: via: "https://itsfoss.com/advantages-linux/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Advantages and Disadvantages of Using Linux
======
Linux is a buzzword and you keep hearing about Linux here and there. People discuss it in the tech forum, it is part of the course curriculum and your favorite tech YouTubers get excited while showing their Linux build. The 10x developers you follow on Twitter are all Linux fans.
Basically, Linux is everywhere and everyone keeps talking about it. And that gives you FOMO.
So, you wonder about the advantages of Linux and whether it is really worth trying.
I have compiled various possible advantages and disadvantages of Linux in this article.
If you are on the fence about choosing Linux over your preferred operating system, we would like to help you out.
Before you start, you should know that Linux is not an operating system on its own. The operating systems are called [Linux distributions and there are hundreds of them][1]. For simplicity, I'll address it as Linux OS instead of a specific Linux distribution. This [article][2] explains things better.
### Advantages of Using Linux
Considering you are curious about Linux as an alternative operating system choice, it only makes sense that you know its advantages.
You might never regret your decision if it excels at what you want it to do.
#### No Need to Purchase a License
![open source proprietary illustration][3]
You need to own an Apple device to use macOS as your daily driver and a Windows license to use Microsoft's Windows.
Therefore, you need a bit of investment with these options. But, with Linux? It's entirely free.
Not just the OS, there are many software packages available for free on Linux when compared to Windows and macOS.
You can try every mainstream Linux distribution without paying for a license. Of course, you get the option to donate to support the project, but that is up to you if you really like it.
**Additionally**, Linux is totally open-source, meaning anyone can inspect the source code for transparency.
#### Can Run With Minimal System Resources
![linux mint 21 resource usage][4]
Typically, when users think of trying another operating system, it is because they are frustrated with the performance of their system.
This is from my personal experience. I have had friends willing to try Linux to revive their old laptop or a system that constantly lags.
And, when it comes to Linux distributions, they are capable of running on decent hardware configurations. You do not need to have the latest and greatest. Moreover, there are specialized [lightweight Linux distributions][5] that are tailored to run on older hardware with no hiccups.
So, you have more chances to revive your old system or get a fast-performing computer in no time with Linux.
#### Less Exposed to Malware
![malware illustration][6]
No operating system is safe from malicious files or scripts. If you download and run something from an unknown source, you cannot guarantee its safety.
However, things are better for Linux. Yes, researchers have found attackers targeting Linux IoT devices. But, for desktop Linux, it is not “yet” something to worry about.
Malicious actors target platforms that are more popular among households, and Linux does not have a big market share in the desktop space to attract that kind of attention. In a way, it can be a good thing.
All you have to do is just stick to the official software packages, and read instructions before you do anything.
As an extra plus, you do not necessarily need an antivirus program to get protection from malware.
#### Customization
![Pop!_OS 22.04 LTS][7]
With an open-source code, you get the freedom to customize your Linux experience as much as you want.
Of course, you require a bit of technical expertise to make the most of it. Even without any experience, you get more customization features in your operating system when compared to macOS and Windows.
![Customized Linux experience | Reddit user: u/ZB652][8]
[u/ZB652][9]
If you are into personalizing your experience and willing to put in extra effort, Linux is for you. As an example, refer to the [KDE customization guide][10] and [dock options][11] to get basic ideas.
#### Something for Everyone
With macOS or Windows, you get limited to the design/preference choices finalized by Microsoft or Apple.
But, with Linux, you will find several Linux distributions that try to focus on various things.
For instance, you can opt for a Linux distribution that focuses on getting the latest features all the time, or you can opt for something that only gives you security/maintenance updates.
You can get something that looks beautiful out of the box or something that provides crazy customization options. You will not run out of options with Linux.
I recommend starting with [options that give you the best user experience][12].
#### Complete Development Environment
If you are a software developer or a student learning to code, Linux definitely has an edge. A lot of your build tools are available and integrated into Linux. With Docker, you can create specialized test environments easily.
Microsoft knows about this part, and this is why it created WSL to give developers access to Linux environments inside Windows. Still, WSL doesn't come close to the real Linux experience. The same goes for using Docker on Windows.
I know the same cannot be said about web designing because the coveted Adobe tools are not available on Linux yet. But if you don't need Adobe for your work, Linux is a pretty good choice.
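To illustrate the Docker point above, here is a minimal sketch of a throwaway test environment. The image, mount path, and packages are only examples, not a recommendation for your particular project:

```
# Start a disposable Ubuntu 22.04 container with the current project mounted at /src.
# --rm deletes the container when you exit, so nothing lingers on the host.
docker run --rm -it -v "$PWD":/src -w /src ubuntu:22.04 bash

# Inside the container, install whatever toolchain your tests need and run them,
# e.g. apt-get update && apt-get install -y build-essential
```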
#### Learning Linux is a Skill One Must Have!
There is a learning curve to using Linux, but it provides you with insights on various things.
You get to learn how things work in an operating system by exploring and customizing it, or even just by using it.
Not everyone knows how to use Linux.
So, it can be a great skill to gain and expand your knowledge of software and computers.
#### Linux is an in-demand Job Skill
![job illustration][13]
As I mentioned above, it is a great skill to have. But it is not just about expanding your knowledge; it is also useful professionally.
You can work your way to become a Linux system administrator or a security expert and fill several other job roles by learning the fundamentals of Linux.
So, learning Linux opens up a whole range of opportunities!
#### Privacy-Friendly
These days, you cannot use Windows without a Microsoft account. And when you set up Windows, you'll find that it tries to track your data from a number of services and applications.
![privacy windows][14]
While you can find such settings and disable them, it is clear that Windows is configured to disregard your privacy by default.
That's not the case in Linux. While some applications/distributions may have an optional feature to let you share useful insights with them, it has never been a big deal. Most of the things on Linux are tailored to give you maximum privacy by default without needing to configure anything.
Apple and Microsoft on the other hand have clever tactics to collect anonymous usage data from your computer. Occasionally, they log your activity on their app store and while you are signed in through your account.
#### DIY projects and Self-hosting
Got a tinkerer in you? If you like to make electronics or software projects, Linux is your paradise.
You can use Linux on [single-board computers like Raspberry Pi][15] and create cool things like retro gaming consoles, home automation systems, etc.
You can also deploy open source software on your own server and maintain them. This is called self-hosting and it has the following advantages:
* Reduce hosting costs
* Take control of your data
* Customize the app/service as per your requirements
Clearly, you'll be doing all this either directly with Linux or with tools built on top of it.
### Disadvantages of Linux
Linux is not a flawless choice. Just like everything, there are some downsides to Linux as well. Those include:
#### Learning Curve
![too much learn illustration][16]
Often, it is not just about learning a new skill; it is more about getting comfortable as quickly as possible.
If a user cannot find their way around the task they intend to do, it is not for them. That is true for every operating system. For instance, a user coming from Windows/macOS may not get comfortable with Linux as quickly.
You can read our comparison article to know the [difference between macOS and Linux][17].
I agree that some users catch on quicker than others. But, in general, when you step into the Linux world, you need to be willing to put a bit of effort into learning the things that are not obvious.
#### Variety
While we recommend using the [best Linux distributions tailored for beginners][18], choosing what you like at first can be overwhelming.
You might want to try several of them to see what works best for you, which can be time-consuming and confusing.
It's best to settle on one of the Linux distributions. But, if you remain confused, you can stick to Windows/macOS.
#### Market Share in Desktop Space
![linux desktop market share][19]
Linux is not a popular desktop operating system.
This should not be a concern for a user by itself. However, without a significant market presence, you cannot expect app developers to make and maintain tools for Linux.
Sure, there are more essential and popular tools available for Linux than ever. But the small market share remains a factor, which means that not every good tool/service works on Linux.
Refer to our regularly updated article on [Linuxs market share][20], to get an idea.
#### Lack of Proprietary Software
As I mentioned above, not everyone is interested in bringing their tools/apps to Linux.
Hence, you may not find all the good proprietary offerings that exist for Windows/macOS. Sure, you can use a compatibility layer to run Windows/macOS programs on Linux.
But that does not always work. For instance, there is no official Linux support for Microsoft 365 or for tools like Wallpaper Engine.
#### Not a Gaming-first OS
![gaming illustration][21]
If you want to game on your computer, Windows remains the best option for its support for the newest hardware and technologies.
When it comes to Linux, there are a lot of “ifs and buts” before you get a clear answer. You can refer to our [gaming guide for Linux][22] to explore more if interested.
#### Lack of Professional Tech Support
I know not everyone needs it. But, there are tech support options that can guide users/fix issues remotely on their laptop or computer.
With Linux, you can seek help from the community, but it may not be as seamless as some professional tech support services.
You'll still have to do most of the trial and error on your own, and not everyone will like that.
### Wrapping Up
I am primarily a Linux user, but I use Windows when I have to play games. Though my preference is Linux, I have tried to be unbiased and give you enough pointers so that you can make up your mind whether Linux is for you or not.
If you are going for Linux and have never used it, take a baby step and [use Linux in a virtual machine first][23]. You can also use WSL2 if you have Windows 11.
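For reference, on Windows 11 a single command in an administrator terminal is usually enough to set up WSL2 with a Linux distribution (Ubuntu here is just one choice):

```
wsl --install -d Ubuntu
```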
I welcome your comments and suggestions.
--------------------------------------------------------------------------------
via: https://itsfoss.com/advantages-linux/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/what-is-linux/
[2]: https://itsfoss.com/what-is-linux/
[3]: https://itsfoss.com/wp-content/uploads/2022/08/open-source-proprietary-illustration.jpg
[4]: https://itsfoss.com/wp-content/uploads/2022/08/linux-mint-21-resource-usage.jpg
[5]: https://itsfoss.com/lightweight-linux-beginners/
[6]: https://itsfoss.com/wp-content/uploads/2022/09/malware-illustration.jpg
[7]: https://itsfoss.com/wp-content/uploads/2022/08/pop-os-screenshot-2022.png
[8]: https://itsfoss.com/wp-content/uploads/2022/09/customization-reddit-unixporn.jpg
[9]: https://www.reddit.com/r/unixporn/comments/wzu5nl/plasma_cscx2n/
[10]: https://itsfoss.com/kde-customization/
[11]: https://itsfoss.com/best-linux-docks/
[12]: https://itsfoss.com/beautiful-linux-distributions/
[13]: https://itsfoss.com/wp-content/uploads/2022/09/job-illustration.jpg
[14]: https://itsfoss.com/wp-content/uploads/2022/09/privacy-windows.webp
[15]: https://itsfoss.com/raspberry-pi-alternatives/
[16]: https://itsfoss.com/wp-content/uploads/2022/09/too-much-learn-illustration.jpg
[17]: https://itsfoss.com/mac-linux-difference/
[18]: https://itsfoss.com/best-linux-beginners/
[19]: https://itsfoss.com/wp-content/uploads/2017/09/linux-desktop-market-share.jpg
[20]: https://itsfoss.com/linux-market-share/
[21]: https://itsfoss.com/wp-content/uploads/2022/08/gaming-illustration.jpg
[22]: https://itsfoss.com/linux-gaming-guide/
[23]: https://itsfoss.com/why-linux-virtual-machine/
@ -0,0 +1,101 @@
[#]: subject: "Defining an open source AI for the greater good"
[#]: via: "https://opensource.com/article/22/10/defining-open-source-ai"
[#]: author: "Stefano Maffulli https://opensource.com/users/reed"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Defining an open source AI for the greater good
======
Join the conversation by joining the four Deep Dive: AI panel discussions starting on October 11.
Artificial intelligence (AI) has become more prevalent in our daily lives. While AI systems may intend to offer users convenience, there have been numerous examples of automated tools getting it wrong, resulting in serious consequences. What's happening in the AI system that leads to erroneous and harmful conclusions? Probably a combination of badly designed AI and a lack of human oversight. How do we as a society prevent AI ethics failures?
The open source community has had, for well over 20 years, clear processes for dealing with errors ("bugs") in software. The [Open Source Definition][2] firmly establishes the rights of developers and the rights of users. There are frameworks, licenses, and a legal understanding of what needs to be done. When you find a bug, you know who to blame, you know where to report it, and you know how to fix it. But when it comes to AI, do you have the same understanding of what you need to do in order to fix a bug, error, or bias?
In reality, there are many facets of AI that don't fit neatly into the Open Source Definition.
### Establishing boundaries for AI
What's the boundary between the data that trains an AI system and the software itself? In many ways, AI systems are like black boxes: what happens inside isn't really understood, and there is very little insight into how a system has reached a specific conclusion. You can't inspect the networks inside that are responsible for making a judgment call. So how can open source principles apply to these "black boxes" making automated decisions?
For starters, you need to take a step back and understand what goes into an AI's automated decision-making process.
### The AI decision process
The AI process starts with collecting vast amounts of training data: data scraped from the internet, tagged and cataloged, and fed into a model to teach it how to make decisions on its own. However, the process of collecting a set of training data is itself problematic. It's a very expensive and time-consuming endeavor, so large corporations are better positioned to have the resources to build large training sets. Companies like Meta (Facebook) and Alphabet (Google) have been collecting people's data and images for a long, long time. (Think of all the pictures you've uploaded since before Facebook or even MySpace existed. I've lost track of all the pictures I've put online!) Essentially anything on the internet is fair game for data collection, and today mobile phones are basically real-time sensors feeding data and images to a few mega-corporations and then to internet scrapers.
Examining the data going into the system is just scratching the surface. I haven't yet addressed the models and neural networks themselves. What's in an AI model? How do you know when you're chatting with a bot? How do you inspect it? How do you flag an issue? How do we fix it? How do we stop it in case it gets out of control?
It's no wonder that governments around the world are not only excited about AI and the good that AI could do, but also very concerned about the risks. How do we protect each other, and how do we ask for a *fair* AI? How do we establish not just rules and regulations, but also social norms that help us all define and understand acceptable behavior? We're just now beginning to ask these questions, and only just starting to identify all the pieces that need to be examined and considered.
To date, there aren't any guiding principles or guardrails to orient the conversation between stakeholders in the same way that, for instance, the [GNU Manifesto][3] and later the Open Source Definition provide. So far, everyone (corporations, governments, academia, and others) has moved at their own pace, and largely for their own self-interests. That's why the Open Source Initiative (OSI) has stepped forward to initiate a collaborative conversation.
### Open Source Initiative
The Open Source Initiative has launched [Deep Dive: AI][4], a three-part event to uncover the peculiarities of AI systems, to build understanding around where guardrails are needed, and to define Open Source in the context of AI. Here's a sampling of what the OSI has discovered so far.
#### Copyright
AI models may not be covered by copyright. Should they be?
Developers, researchers, and corporations share models publicly, some with an Open Source software license. Is that the right thing to do?
The output of AI may not be covered by copyright. That raises an interesting question: Do we want to apply copyright to this new kind of artifact? After all, copyleft was invented as a hack for copyright. Maybe this is the chance to create an alternative legal framework.
The release of the new Stable Diffusion model raises issues around the output of the models. Stable Diffusion has been trained on lots of images, including those owned by Disney. When you ask it to, for instance, create a picture of Mickey Mouse going to the US Congress, it spits out an image that looks exactly like Mickey Mouse in front of the US Capitol Building. That image may not be covered by copyright, but I bet you that the moment someone sells t-shirts with these pictures on it, Disney will have something to say about it.
No doubt we'll have a test case soon. Until then, delve more into the copyright conundrum in the **Deep Dive: AI** podcast [Copyright, selfie monkeys, the hand of God][5].
#### Regulation
The European Union is leading the way on AI regulation, and its approach is worth studying. The AI Act is an interesting read. It's still in draft form, and it could be some time before it is approved, but its legal premise is based on risk. As it stands now, the EU legislation would require extensive testing and validation, even on AI concepts that are still in their rudimentary research stages. Learn more about the EU's legislative approach in the Deep Dive: AI podcast [Solving for AI's black box problem][6].
#### Datasets
Larger datasets raise questions. Most of the large, publicly available datasets that are being used to train AI models today comprise data taken from the web. These datasets are collected by scraping massive amounts of publicly available data and also data that is available to the public under a wide variety of licenses. The legal conditions for using this raw data are not clear. This means machines are assembling petabytes of images with dubious provenance, not only because of questionable legal rights associated with the uses of these images, code and text, but also because of the often illicit content. Furthermore, we must acknowledge that this internet data has been produced by the wealthier segment of the world's population—the people with access to the internet and smartphones. This inherently skews the data. Find out more about this topic in the Deep Dive: AI podcast [When hackers take on AI: Sci-fi or the future?][7]
#### Damage control
AI can do real damage. Deep fakes are a good example. A deep fake AI tool enables you to impose the face of one person over the body of another. Such tools are popular in the movie industry, for example. Unfortunately, deep fake tools are also used for nefarious purposes, such as making it appear that someone is in a compromising situation, or distributing malicious misinformation. Learn more about deep fakes in the Deep Dive: AI podcast [Building creative restrictions to curb AI abuse][8].
Another example is the *stop button* problem, where a machine trained to win a game can become so aware that it needs to win that it becomes resistant to being stopped. It sounds like science fiction, but it is an identified mathematical problem that research communities are aware of, and have no immediate solution for.
#### Hardware access
Currently, no real Open Source hardware stack for AI exists. Only an elite few have access to the hardware required for serious AI training and research. The volume of data consumed and generated by AI is measured in terabytes and petabytes, which means that special hardware is required to perform speedy computations on data sets of this size. Specifically, without graphic processing units (GPUs), an AI computation could take years instead of hours. Unfortunately, the hardware required to build and run these big AI models is proprietary, expensive, and requires special knowledge to set up. There are a limited number of organizations that have the resources to use and govern the technology.
Individual developers simply don't have the resources to purchase the hardware needed to run these data sets. A few vendors are beginning to release hardware with Open Source code, but the ecosystem is not mature. Learn more about the hardware requirements of AI in the Deep Dive: AI podcast [Why Debian won't distribute AI models anytime soon][9].
### AI challenges
The [Open Source Initiative][10] protects open source against many threats today, but also anticipates the challenges, such as AI, of tomorrow. AI is a promising field, but it can also deliver disappointing results. Some AI guardrails are needed to protect creators, users, and the world at large.
The Open Source Initiative is actively encouraging dialogue. We need to understand the issues and implications and help communities establish shared principles that ensure AI is good for us all. Join the conversation by joining the four [Deep Dive: AI panel discussions][11] starting on October 11.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/defining-open-source-ai
作者:[Stefano Maffulli][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/reed
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/brain_computer_solve_fix_tool.png
[2]: https://opensource.org/osd
[3]: https://www.gnu.org/gnu/manifesto.en.html
[4]: https://deepdive.opensource.org/
[5]: https://deepdive.opensource.org/podcast/copyright-selfie-monkeys-the-hand-of-god/
[6]: https://deepdive.opensource.org/podcast/solving-for-ais-black-box-problem/
[7]: https://deepdive.opensource.org/podcast/when-hackers-take-on-ai-sci-fi-or-the-future/
[8]: https://deepdive.opensource.org/podcast/building-creative-restrictions-to-curb-ai-abuse
[9]: https://deepdive.opensource.org/podcast/why-debian-wont-distribute-ai-models-any-time-soon/
[10]: https://opensource.org
[11]: https://deepdive.opensource.org/
@ -0,0 +1,74 @@
[#]: subject: "Whats new in GNOME 43?"
[#]: via: "https://opensource.com/article/22/10/whats-new-gnome-43-linux"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What's new in GNOME 43?
======
I love the [GNOME][1] desktop, and I use it as my daily [Linux desktop environment][2]. I find with GNOME, I can focus on the stuff I need to get done, but I still have flexibility to make the desktop look and act the way I want.
The GNOME Project recently released GNOME 43, the latest version of the GNOME desktop. I met with GNOME developer Emmanuele Bassi to ask a few questions about this latest release:
**Jim Hall (Jim): GNOME has lots of great desktop features. What are some of the new features in GNOME 43?**
**Emmanuele Bassi (Emmanuele):** GNOME 43 has a complete redesign of the system status menu in the Shell. The new design is meant to give quick and easy access to various settings: network connections and VPNs; audio input and output sources and volumes; toggling between light and dark styles. It also has a shortcut for taking a screenshot or starting a screen recording.
GNOME core applications have also been ported to the new major version of the GNOME toolkit, GTK4. GTK4 is more efficient when it comes to its rendering pipeline, which leads to smoother transitions and animations. Additionally, GNOME applications use libadwaita, which provides new UI elements and adaptive layouts that can seamlessly scale between desktop and mobile form factors.
The GNOME file manager, Nautilus, is one of the applications that has been ported over to GTK4 and libadwaita, and it has benefitted from the new features in the core platform; its now faster, and it adapts its UI when the window is resized.
The system settings can now show device security information, including manufacturing errors and hardware misconfiguration, as well as possible security issues like device tampering. Lots of work is planned for future releases, as device security is an area of growing concern.
**Jim: What do you love most about GNOME 43?**
**Emmanuele:** The most important feature of GNOME, the one that I constantly take advantage of and that I always miss when I have to deal with other operating systems, is how much the OS does not get in the way of what I'm doing. Everything is designed to let me concentrate on my job, without interruptions. I don't have bells and whistles constantly on my screen, competing for attention. Everything is neatly tucked away, ready to be used only when I need it.
**Jim: Many folks are familiar with GNOME today, but may not be familiar with its history. How did GNOME get started?**
**Emmanuele:** GNOME started in 1997, 25 years ago, as a project for using existing free and open source components to create a desktop environment for everyone that would be respectful of users' and developers' freedom. At the time there were only commercial desktops for Unix, or desktops that were based on non-free components. Being able to take the entire desktop, learn from it, and redistribute it has always been a powerful motivator for contributors—even commercial ones.
Over the past 25 years, GNOME contributors have worked not just on making the desktop, but creating a platform capable of developing and distributing applications.
**Jim: Open source projects keep going because of a strong community. What keeps the GNOME community strong?**
**Emmanuele:** I don't pretend to speak for everyone in the project, but for myself I think the main component is the respect of every voice within the community of contributors, which comes from the shared vision of creating an entirely free and open platform. We all know where we want to go, and we are all working towards the same goal. Sometimes, we may end up pulling in different directions, which is why donating to entities like the GNOME Foundation, which sponsor gatherings and conferences, is crucial: they allow more comprehensive communication between all the involved parties, and in the end we get better results for it.
GNOME also takes very seriously respectful communication between members of the community; we have a strong code of conduct, which is enforced within the community itself and covers all venues of communication, including in person events.
**Jim: GNOME established the Human Interface Guidelines (HIG) to unify the GNOME design and GNOME app interfaces. How did the HIG come about?**
**Emmanuele:** The Human Interface Guidelines (HIG) came into being after Sun did a usability study on GNOME 1, one of the very first usability studies for a free software project. The findings from that study led to the creation of a standardized document that projects under the GNOME umbrella would have to follow, which is how we ended up with GNOME 2, back in 2002.
The HIG was a rallying point and a symbol, a way to demonstrate that the entire project cared about usability and accessibility, and it provided the tools to both desktop and application developers to create a consistent user experience.
Over the years, the HIG moved away from being a complete checklist of pixels of padding and grids of components, and instead it now provides design principles, UI patterns, conventions, and resources for contributors and application developers. The HIG now has its own implementation library, called libadwaita, which application developers can use when targeting GNOME, and immediately benefit from a deeper integration within the platform without having to re-implement the various styles and patterns manually.
_Thanks to Emmanuele Bassi for answering this interview. You can find GNOME at_ [_https://www.gnome.org/_][3]
_Read the release announcement for GNOME 43 at_ [_https://release.gnome.org/43/_][4]
_Learn about what's new in GNOME 43 for developers at_ [_https://release.gnome.org/43/developers/_][5]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/whats-new-gnome-43-linux
作者:[Jim Hall][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lkxed
[1]: https://opensource.com/article/19/12/gnome-linux-desktop
[2]: https://opensource.com/article/20/5/linux-desktops
[3]: https://www.gnome.org/
[4]: https://release.gnome.org/43/
[5]: https://release.gnome.org/43/developers/
@ -0,0 +1,189 @@
[#]: subject: "Exploring innovative Open Organization charts"
[#]: via: "https://opensource.com/article/22/10/innovative-open-organization-chart"
[#]: author: "Ron McFarland https://opensource.com/users/ron-mcfarland"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Exploring innovative Open Organization charts
======
The ability to react quickly and adapt to changing situations is critical in today's business and work environment. In the past, offering efficient, standardized systems was the way to reduce costs and provide more to the public. In today's rapidly changing world, that's not enough. Collaborating, deciding, and executing quickly on a project requires that the traditional organization chart change to strengthen **adaptability**, **transparency**, **collaboration**, **inclusivity**, and project **community**, all five Open Organization Principles. Today, there are too many interdependencies to stick to the traditional top-down organization chart.
I just read the book [Team of Teams, by Stanley McChrystal][1], which discusses this concern, particularly in military combat situations. It is the efficiency of small, empowered, trusted, goal-oriented teams working together (and with other teams) that will be successful in the future. Their ability to interact with other teams will make a small group scalable within a large organization. McChrystal writes that adaptability, transparency, and cross-silo collaboration are key to their success. These are three of the Open Organization Principles. I think it's equally valid in the business environment and not just in military operations.
### Speed in decision-making and how to address continual unpredictability
When do you make a decision yourself, and when do you take decisions to top management? McChrystal states, "a 70% chance of success today is better than 90% tomorrow when speed of action is critical." These days, the competitors, or enemies, are moving at that pace.
In my article "[The 4 components of a great decision, What makes a "good" open decision?][2]" I wrote that decision-making speed was very important. Well, that's more true than ever, and you can't do that if you need approvals up and down a large vertical organization chart. Quick decisions must be made on the frontline, where the issues are. A horizontal organization that gets the people most directly involved in the decision-making process is required and is part of the strength that McChrystal is talking about.
![A horizontal org chart with connections between teams][3]
Image by:
(Ron McFarland, CC BY-SA 4.0)
These connections should have solid lines, and the vertical lines should be dotted, as communications should go up the line only when need be and horizontally minute by minute in real-time.
### Information reversal
In another presentation, I talked about an upside-down organization chart, which I called the [hierarchy of support][4]. Compare this with a vertical organizational chart.
![Hierarchy of company objectives][5]
Image by:
(Ron McFarland, CC BY-SA 4.0)
A typical vertical organization chart has top management in the top box. The staff provides frontline information to superiors so that they can decide.
Then, lines connect downward to departments under the top box, and directives move downward. Task directives flow from those department managers to the staff under them.
In a rapidly changing, unpredictable environment, the superiors should provide surrounding information to the staff so frontline people can make decisions independently. Imagine turning that organization chart upside down, with top management at the bottom and the staff at the top.
![Hierarchy of company support][6]
Image by:
(Ron McFarland, CC BY-SA 4.0)
With today's information technology, the frontline staff is often better informed than their superiors. Therefore, managers' main job is to support the staff where needed, but the decisions should be made rapidly on the frontline.
McChrystal uses the expression, "Eyes on - Hands off." I think he is suggesting what I'm saying in a different way. He calls top managers giving directives "chess players" and supporting managers "gardeners."
McChrystal started a training company called [Crosslead][7] that trains individuals and companies on how this type of organization works. Their name implies the horizontal, frontline communication I mentioned in my upside-down organization chart.
The book mentions Open Organization Principles throughout:
- **Adaptability**, which he calls "resilience."
- **Collaboration**, which is horizontal within teams and between teams.
- **Community**, which is **Inclusivity** and **Transparency** within teams.
### Getting through the forest by knowing the working environment
Imagine your goal is to get through a forest to your home on the other side. Unfortunately, the forest is rapidly changing because of the weather and climate.
One person gives you a map of the best way to go through the forest, but that map was made in the past and might be outdated. It might be the best way to get home, but blockages along that route may force you to return to your starting location.
Another person has a current satellite image of the forest which shows every possible route and its present condition. Furthermore, he has guides spread throughout the forest who can communicate and advise you on the best route.
Wouldn't the second method be more reliable with a rapidly changing forest?
### McChrystal's organization chart
It starts with a frontline team, a specific goal, and members' specific tasks. The members select a leader depending on the task at hand. Who is most experienced, informed, and qualified to lead them toward the given team goal?
It might well be that the official leader is the least qualified to make decisions, so the system is very slow at best and freezes at worst. Who will most people follow? That will determine the leader of any given task.
McChrystal writes about the "Perry Principle," in which top management could not give orders by sea because there was no communication system in [Admiral Perry's][8] days. McChrystal calls this a "principle" because empowerment was given to frontline staff as a last resort and only when forced. He thinks this should be reversed. Top management should only make the decision themselves when the frontline people can't decide for one reason or another.
The team chart that McChrystal is proposing is on the right.
![Command organizational chart versus team organizational chart][9]
Image by:
Team of Teams, page 96.
An exponential growth in frontline connectedness speeds up the communication and action process in a way that the current hierarchical structure cannot handle. The command chart on the left is just too slow in a rapidly changing environment.
By the time the situation is reported, everything changes and reported information is obsolete. Therefore, a frontline leader, like a start-up entrepreneur, must have the authority, initiative, intuition, and creative thinking to make decisions and immediately act on them to achieve the best result. Speed determines success or failure.
Up until now, adaptability has mostly been characteristic of small interactive teams rather than large top-down hierarchies.
In this new environment, that frontline leader's superior must withhold decision-making on the one hand but relentlessly support the frontline on the other. This will lead to frontline decision-making competence to iterate and adjust in a fraction of the normal time.
### Attention directed from efficiency to adaptability
McChrystal introduces the work of [Frederick Winslow Taylor][10], who developed the reductionist theory and the optimization and standardization of processes. This process was the most efficient way to reduce costs and save energy. He believed there was one ideal way for any process.
So, he researched processes, developed instruction sheets, and instructed the frontline staff just to follow directions. It was a hard and fast line between thinking (by the researcher) and action (by the frontline worker). This approach is fine for repeated, well-known, stable processes, like factories with complicated but linear, predictable activities, but not in changing environments. Unfortunately, this concept took the initiative to improve away from the frontline operator, as all they had to do was act and not think.
When modification was required, the frontline worker froze, unqualified and unskilled at adapting.
McChrystal writes that his military combat environment is not predictable. It is a "complex system." This complexity has countless unpredictable interdependencies in the environment.
When one event takes place, it may have a massive impact or no impact at all. This results in great unpredictability. We have to manage this unpredictability. In the past, communication was from a few to a few with some connected impact. Now, it is many to many, and no one knows who or what the impact is on who or what. It is totally unpredictable.
I believe this reductionist process is still important, but it can only go so far.
Therefore, those basic practice instruction sheets should come in the form of suggestions only and not orders to follow. McChrystal calls these situations _complexity systems_. It's like opening and walking through a door only to learn of other doors to choose from.
Those other doors cannot be foreseen without walking through the previous door. After selecting one of those doors, you discover more doors to choose from. To be most effective, whenever you select a door, you let everyone in the system know which one you picked and ask for advice if available. This is where real-time transparency is vital. In this environment, planning is not helpful, but feedback is.
Being better equipped and more efficient is not enough in complex environments. Being agile and resilient becomes critical. When disturbances come, the system must continue to function and adjust. This is all-important in a world of continual situational change, where planning for disruption is vital. It means "rolling with the punches" or even benefiting from them by developing an immune system against disruption.

When shocks come, options have already been planned, developed, and practiced, and can be applied as needed. Simply working on one ideal process is not enough. If all the attention is on the execution of one procedure, other more helpful skills may suffer. The point is to move away from predicting a single forecast and toward exploring all possibilities and preparing for them.

McChrystal asks us to contrast efficiency and effectiveness. He says, "Efficiency is doing things right. Effectiveness is doing the right thing." He thinks the latter is more important in a complex situation. To do that, people should be skilled in many rarely needed but still necessary tasks.
### Collaboration over vertical communication walls
Furthermore, breaching vertical walls between divisions or teams increases the speed of action, particularly where cross-functional collaboration is vital to the speed of response.
According to McChrystal, both between teams and within teams, collective consciousness is developed over years of joint practice, trust building, cooperation, deep group, and individual understanding, bonding, and service to their greater purpose.
The entire group can improvise in a coordinated way when necessary. Teamwork is a process of reevaluating everyone's move and intent, constant messaging, and real-time adjustment.
### Barriers between teams
As you move down the traditional organization chart, motivation and contextual awareness become more limited and specific, and there is greater distance from the overall organization's objectives. Members are tight within their team but separated from the other groups within the organization, and possibly from the entire organization's goals.
![Command of teams organizational chart][11]
Image by:
Team of Teams, page 129
### Real-time communication and connections between teams
In a complex, rapidly changing environment, the chart below is more appropriate, with a good deal of continual information flow and many connections.
![Inter-team organizational chart][12]
Image by:
Team of Teams, page 129
Team members tackling complex environments must all grasp not just their team's purpose but the overarching goal of the entire organizational system. They must also consider how their activities impact other groups.
To be successful, team participation and team-to-team participation are vital, according to McChrystal. Jim Whitehurst's [book][13] makes this same point about letting and encouraging everyone, even the quiet people, to speak up in meetings.
I wrote about it in my first article, [When empowering employee decision-making, intent is everything][14], posted on April 19, 2016. This concept is true when trying to connect teams as well.
Teams working on a problem and collaborating in real time can perform tasks more concurrently rather than sequentially, saving a massive amount of valuable time.
### Wrap up
This article presents several images of new organization chart concepts. Unofficially, to get things done, much horizontal communication has been going on for decades. The difference now is that updates are in minutes and not at weekly or monthly meetings.
I also discussed the importance of the speed of decision-making in today's working environment and the need for a new real-time communication flow system. I mentioned that at least three critical Open Organization Principles, namely adaptability, transparency, and collaboration, are vitally important to make communication flow and allow faster decision-making and execution. Furthermore, I presented that just having a highly efficient, low-cost system is not enough when faced with a rapidly changing, unpredictable working environment. An approach better able to adapt to change needs to be introduced and put into use, namely a new open organization chart.
In the second part of this article, I will discuss how this type of organization can work, including how to develop it and improve it. Also, I'll give examples of how it can work in various situations.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/innovative-open-organization-chart
作者:[Ron McFarland][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lkxed
[1]: https://www.shortform.com/summary/team-of-teams-summary-stanley-mcchrystal?gclid=CjwKCAjwy_aUBhACEiwA2IHHQFl6iYqOX4zSl3JxC-BVubNJo3Ee11s2nF2t4HMN6roJn2yehivPshoCXlQQAvD_BwE
[2]: https://opensource.com/open-organization/17/3/making-better-open-decisions
[3]: https://opensource.com/sites/default/files/2022-10/horizontal-org-chart.png
[4]: https://www.slideshare.net/RonMcFarland1/hierarchy-of-objectives-support
[5]: https://opensource.com/sites/default/files/2022-10/hierarchy-company-objectives.png
[6]: https://opensource.com/sites/default/files/2022-10/hierarchy-company-support.png
[7]: http://www.crosslead.com/
[8]: https://en.wikipedia.org/wiki/Matthew_C._Perry
[9]: https://opensource.com/sites/default/files/2022-10/command-vs-team-chart.png
[10]: https://en.wikipedia.org/wiki/Frederick_Winslow_Taylor
[11]: https://opensource.com/sites/default/files/2022-10/command-of-teams.png
[12]: https://opensource.com/sites/default/files/2022-10/connections-between-teams.png
[13]: https://www.goodreads.com/book/show/23258978-the-open-organization
[14]: https://opensource.com/open-organization/16/4/when-empowering-employee-decision-making-intent-everything
@ -0,0 +1,114 @@
[#]: subject: "Our open source startup journey"
[#]: via: "https://opensource.com/article/22/10/tooljet-open-source-journey"
[#]: author: "Navaneeth PK https://opensource.com/users/navaneeth"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Our open source startup journey
======
[ToolJet][1] is an open source, low-code framework for rapidly building and deploying internal tools. Our codebase is 100% JavaScript and TypeScript.
ToolJet was started by a lone developer in April 2021. The public beta launched in June 2021 and was an instant hit. With this traction, ToolJet raised funding, and currently, we have a team of 20 members.
### Why open source?
Before working on ToolJet, I worked with a few enterprise clients as a consultant. Many of these clients were large enough to build and maintain dozens of internal tools. Despite the constant requests from sales, support, and operations teams to add more features and fix the bugs in their internal tools, engineering teams struggled to find the bandwidth to work on the internal utilities.
I tried using a few platforms to build and maintain internal tools. Most of these tools were very expensive, and frequently, they didn't really fit the requirements. We needed modifications, and most utilities didn't support on-premise hosting.
As a Ruby developer, I primarily used ActiveAdmin and RailsAdmin to build internal tools. Both utilities are amazing, but making them work with more than one data source is difficult. I then realized there is a need in the market for a framework that could build user interfaces and connect to multiple data sources. I believe any tool built for developers should be open source. Most of the tools and frameworks that developers use daily result from people from all over the world collaborating in public.
### The first commit
Building something like ToolJet needed a full-time commitment. Selling one of my side projects gave me a runway of 5-6 months, and I immediately started working on an idea I'd had in mind for at least two years.
The first commit (rails new) of ToolJet was on April 1, 2021.
Wait! I said the codebase is 100% JavaScript. Continue reading to discover why.
### Building and pitching investors
I sat in front of my screens for most of April and May, coding and pitching to investors for a pre-seed round.
My work also included creating the drag-and-drop application builder, documenting everything, ensuring there was documentation for setting ToolJet up on popular platforms, creating a website, creating posters and blog posts for launch, and more. The process went well without any major challenges. At this point, the frontend of ToolJet was built using React, with the backend using Ruby on Rails.
While the coding was going well, investor pitches weren't going great. I sent around 40 cold emails to venture capitalist firms and "angel investors" focused on early-stage funding. While most of them ignored the email, some shared their reason for rejection, and some scheduled a call.
Most of the calls were the same; I couldn't convince them of an open source business model.
### The launch
June 7th was the day of the launch. First, we launched on ProductHunt. Six hours passed, and there were only 70 new signups. But we were trending as the #1 product of the day (and ended up as the #3 product of the week). For posterity, here's the original [post][2].
I also posted on [HackerNews][3] around 6 PM, and within an hour, the post was #1. I was very happy that many visitors signed up and starred the repository. Many of these visitors and users reported bugs in the application and documentation. Within eight hours of posting on HN, more than 1,000 GitHub users starred ToolJet's GitHub repository, and there were hundreds of signups for ToolJet cloud. The trend continued for three days, and the repo had 2.4k stars.
![ToolJet repo stats on GitHub][4]
Image by:
GitHub StarTrack for ToolJet. (Navaneeth PK, CC BY-SA 4.0)
### Getting funding
The traction on GitHub was enough to be noticed by the venture capitalist (VC) world. The days following the launch were packed with calls. We had other options, but we did not seriously consider them, including:
- Bootstrapping: During the early stages of the product, it was hard to find paying customers, and I did not have enough savings to fund the project until that happened.
- Building as a side project: While this strategy works great for smaller projects, I didn't feel it would work for ToolJet because we needed to create dozens of integrations and UI widgets before the platform could become useful for customers. As a side project, it might take months or years to achieve that.
I knew it could take months to build the platform I wanted if ToolJet became just a side project. I wanted to accelerate growth by expanding the team, and VC funding was the obvious choice, given the traction.
The good news is that we raised [$1.55 million in funding][5] within two weeks of the HN launch.
### Stack matters in open source
Soon after the launch, we found that many people wanted to contribute to ToolJet, but they were mostly JavaScript developers. We also realized that for a framework like ToolJet that in the future should have hundreds of data source connectors, only a plugin-based architecture made sense. We decided to migrate from Ruby to TypeScript in August 2021. Even though this took about a month and significant effort, this was one of the best decisions we've made for the project. Today, we have an extensible plugin-based architecture powered by our [plugin development kit][6]. We have contributions from over 200 developers. We've written extensively about this migration [here][7] and [here][8].
### Launching v1.0
Many users have been using ToolJet on production environments since August, and the platform did not show any stability or scalability issues. We were waiting to wrap up the developer platform feature before we called it v1.0. The ToolJet developer platform allows any JavaScript developer to build and publish plugins for ToolJet. Developers are now able to make connectors for ToolJet. Creating a ToolJet connector can take just 30 minutes, including integration tests.
### Building a growing community
![ToolJet star history][9]
Image by:
ToolJet Star History (Navaneeth PK, CC BY-SA 4.0)
We didn't spend money on marketing. Most of our efforts in spreading the news about ToolJet have been writing about our learnings and being active in developer communities. We have a team of three members who take care of community queries.
### The business model
ToolJet won't be a sustainable business without a [commercial product][10] to pay the bills. We've built an enterprise edition of ToolJet, for which customers must pay. There's no limit on usage for the free community edition, and additional features in the enterprise edition are relevant only to large teams. We have very large companies as paying customers right now, but we haven't started monetizing ToolJet aggressively. We have enough money left in the bank to build an even better ToolJet, so our focus currently is on product improvement.
### What's next?
We frequently release better versions of ToolJet with the help of constant feedback and contributions from the open source community. Many major improvements and dozens of connectors and UI components are in progress. We're moving faster than ever towards our initial goal of being the open framework that can connect to hundreds of data sources and build even the most complicated user interfaces!
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/tooljet-open-source-journey
作者:[Navaneeth PK][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/navaneeth
[b]: https://github.com/lkxed
[1]: https://github.com/ToolJet/ToolJet
[2]: https://www.producthunt.com/products/tooljet-0-5-3
[3]: https://news.ycombinator.com/item?id=27421408
[4]: https://opensource.com/sites/default/files/2022-10/tooljet-repo-stats.png
[5]: https://blog.tooljet.com/raising-vc-funding-for-open-source-project
[6]: https://www.npmjs.com/package/@tooljet/cli
[7]: https://blog.tooljet.com/migrating-toojet-from-ruby-on-rails-to-nodejs
[8]: https://blog.tooljet.com/how-we-migrated-tooljet-server-from-ruby-to-node-js
[9]: https://opensource.com/sites/default/files/2022-10/tooljet-star-history.png
[10]: https://opensource.com/article/19/11/product-vs-project
@ -2,7 +2,7 @@
[#]: via: (https://itsfoss.com/nvidia-linux-mint/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: (hwlife)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,252 +0,0 @@
[#]: subject: (Complete Guide to Configuring SSH in Ubuntu)
[#]: via: (https://itsfoss.com/set-up-ssh-ubuntu/)
[#]: author: (Chris Patrick Carias Stas https://itsfoss.com/author/chris/)
[#]: collector: (lujun9972)
[#]: translator: (hwlife)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Complete Guide to Configuring SSH in Ubuntu
======
SSH has become the default method of accessing a remote Linux server these days.
SSH stands for Secure Shell, and it's a powerful, efficient, and popular network protocol used to establish communication between two computers remotely. And let's not forget the secure part of its name; SSH encrypts all traffic to prevent attacks like hijacking and eavesdropping while offering different authentication methods and a myriad of configuration options.
In this beginner's guide, you'll learn:
* The basic concept of SSH
* Setting up SSH server (on the system you want to access remotely)
* Connecting to remote server via SSH from the client machine (your personal computer)
### The absolute basics of SSH
Before you see any configuration process, it will be better to go through the absolute basic concept of SSH.
The SSH protocol is based on server-client architecture. The “server” allows the “client” to be connected over a communication channel. This channel is encrypted and the exchange is governed by the use of public and private SSH keys.
![Image credit: SSH][1]
[OpenSSH][2] is one of the most popular open source tools that provides the SSH functionality on Linux, BSD and Windows.
For a successful SSH set up, you need to:
* Have SSH server components on the machine that acts as the server. This is provided by **openssh-server** package.
* Have SSH client component on the machine from where you want to connect to the remote server machine. This is provided by **openssh-client** package and most Linux and BSD distributions come preinstalled with it.
It is important to keep a distinction between the server and the client. You might not want your personal computer to act as an SSH server unless you have good reasons to let others connect to your system via SSH.
Generally, you have a dedicated system working as the server. For example, a [Raspberry Pi running Ubuntu server][3]. You [enable SSH on the Raspberry Pi][4] so that you can control and manage the device from your main personal computer using SSH in a terminal.
With that information, let's see how you can set up an SSH server on Ubuntu.
### Configuring SSH Server on Ubuntu
Setting up SSH is not complicated and just needs a few steps to do it.
#### Prerequisites
* A user with **sudo** privileges on the server machine
* Internet connection to download the required packages
* At least another system in your network. It can be another computer on your LAN, a remote server via Internet, or a virtual machine hosted in your computer.
_**Again, the SSH server installation should be done on the system that you want to act as server and to which you want to connect remotely via SSH.**_
#### Step 1: Install required packages
Lets start by opening a terminal window to enter the necessary commands.
Remember to [update your Ubuntu system][5] before installing new packages or software to make sure that you are running the latest versions.
```
sudo apt update && sudo apt upgrade
```
The package you need to run the SSH server is provided by the openssh-server component of OpenSSH:
```
sudo apt install openssh-server
```
![][6]
#### Step 2: Checking the status of the server
Once the package has been downloaded and installed, the SSH service should already be running, but to be sure, check it with:
```
service ssh status
```
You may also use the systemd commands:
```
sudo systemctl status ssh
```
You should see something like this, with the word Active highlighted. Hit `q` to return to the command prompt.
![][7]
If the service is not running in your case, you will have to activate it like this:
```
sudo systemctl enable --now ssh
```
#### Step 3: Allowing SSH through the firewall
Ubuntu comes with a firewall utility called [UFW][8] (Uncomplicated Firewall), which is an interface for **iptables** that in turn manages the network's rules. If the firewall is active, it may prevent connections to your SSH server.
To configure UFW so that it allows the desired access, you need to run the following command:
```
sudo ufw allow ssh
```
The status of UFW can be checked by running `sudo ufw status`.
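If you ever move the SSH daemon to a non-default port (2222 here is only a hypothetical example configured in `/etc/ssh/sshd_config`), the UFW rule would need to reference that port instead:

```
sudo ufw allow 2222/tcp
```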
At this point, your SSH server is up and running, waiting for a connection from a client.
### Connecting to the remote system from your local machine
Your local Linux system should already have an SSH client installed. If not, you may always install it using the following command on Ubuntu:
```
sudo apt install openssh-client
```
To connect to your Ubuntu system you need to know the IP address of the computer and use the `ssh` command, like this:
```
ssh username@address
```
Change **username** to your actual user in the system and **address** to the IP address of your Ubuntu machine.
If you don't [know the IP address of your computer][9], you can type `ip a` in the terminal of the server and check the output. You should have something like this:
![Using “ip a” to find the IP address][10]
As can be seen here, my IP address is **192.168.1.111**. Let's try connecting using the **username@address** format.
```
ssh username@192.168.1.111
```
The first time you connect to an SSH server, it will ask for permission to add the host. Type `yes` and hit Enter to continue.
![First time connecting to the server][12]
Immediately SSH tells you that the host was permanently added and then asks for the password assigned to the username. Type in the password and hit Enter one more time.
![Host added, now type in the password][13]
And voila! You will be logged into your Ubuntu system remotely!
![Connected!][14]
Now you can work in your remote systems terminal as normal.
#### Closing the SSH connection
To close the connection you just need to type `exit` and it will close it at once, without asking for confirmation.
![Closing the connection with “exit”][15]
### Stopping and Disabling SSH in Ubuntu
If you want to stop the SSH service, you will need this command:
```
sudo systemctl stop ssh
```
This will stop the service until you restart it or until the system is rebooted. To restart it, type:
```
sudo systemctl start ssh
```
Now, if you want to disable it from starting during system boot, use this:
```
sudo systemctl disable ssh
```
This won't stop the service from running during the current session, just from loading during startup. If you want to let it start again during system boot, type:
```
sudo systemctl enable ssh
```
#### Other SSH clients
The tool `ssh` is included in most *nix systems, from Linux to macOS, but those are not the only options available. Here are a couple of clients that can be used from other operating systems:
* [PuTTY][16] is a free SSH client for Windows, and it's open source. It's full of features and very easy to use. If you are connecting to your Ubuntu machine from a Windows station, PuTTY is a great option.
* [JuiceSSH][17] is an amazing tool for Android users. If you are on the go and need a mobile client to connect to your Ubuntu system, I highly recommend giving JuiceSSH a go. It's been around for almost 10 years, and it's free to use.
* And finally, [Termius][18] is available for Linux, Windows, macOS, iOS, and Android. It has a free tier and also several premium options. If you are running a lot of servers and working with teams that share connections, then Termius is a good option for you.
#### Wrapping Up
With these instructions, you can set up SSH as a server service on your Ubuntu system so that you can connect remotely and securely to your computer, work with the command line, and perform any required task.
Our other website, Linux Handbook, has various informational articles on SSH. From here, I recommend reading the following:
* [Getting started with SSH on Linux][19]
* [Using SSH Config file to manage multiple SSH connections][20]
* [Adding public key to SSH server for passwordless authentication][21] (see the sketch after this list)
* [SSH hardening tips][22] to secure your SSH server
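If you want a quick taste of key-based logins before diving into those guides, here is a minimal sketch; the key type, comment, and address below are only examples. You generate a key pair on the client and copy the public key to the server:

```
# On the client: generate a key pair (stored under ~/.ssh by default)
ssh-keygen -t ed25519 -C "my-laptop"

# Copy the public key to the server, then log in without a password prompt
ssh-copy-id username@192.168.1.111
ssh username@192.168.1.111
```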
If you find it overwhelming, [Linux Handbook has a premium video course that explains SSH for beginners][23] along with hands-on labs to follow. This will give you a more streamlined knowledge of the topic.
Happy remote working!
--------------------------------------------------------------------------------
via: https://itsfoss.com/set-up-ssh-ubuntu/
作者:[Chris Patrick Carias Stas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/chris/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/ssh-diagram.png?resize=800%2C259&ssl=1
[2]: https://www.openssh.com/
[3]: https://itsfoss.com/install-ubuntu-server-raspberry-pi/
[4]: https://itsfoss.com/ssh-into-raspberry/
[5]: https://itsfoss.com/update-ubuntu/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/ssh-0001.png?resize=800%2C253&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/ssh-0002.png?resize=800%2C263&ssl=1
[8]: https://itsfoss.com/set-up-firewall-gufw/
[9]: https://itsfoss.com/check-ip-address-ubuntu/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/ssh-find-ip.png?resize=800%2C341&ssl=1
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/ssh-0004.png?resize=800%2C87&ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/ssh-0005.png?resize=800%2C57&ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/ssh-0006.png?resize=800%2C322&ssl=1
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/ssh-0007.png?resize=800%2C87&ssl=1
[16]: https://www.putty.org/
[17]: https://juicessh.com/
[18]: https://termius.com/
[19]: https://linuxhandbook.com/ssh-basics/
[20]: https://linuxhandbook.com/ssh-config-file/
[21]: https://linuxhandbook.com/add-ssh-public-key-to-server/
[22]: https://linuxhandbook.com/ssh-hardening-tips/
[23]: https://linuxhandbook.com/sshcourse/
@ -1,267 +0,0 @@
[#]: subject: (Optimize Java serverless functions in Kubernetes)
[#]: via: (https://opensource.com/article/21/6/java-serverless-functions-kubernetes)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Optimize Java serverless functions in Kubernetes
======
Achieve faster startup and a smaller memory footprint to run serverless functions on Kubernetes.
![Ship captain sailing the Kubernetes seas][1]
A faster startup and smaller memory footprint always matter in [Kubernetes][2] due to the expense of running thousands of application pods and the cost savings of doing it with fewer worker nodes and other resources. Memory is more important than throughput on containerized microservices on Kubernetes because:
* It's more expensive due to permanence (unlike CPU cycles)
* Microservices multiply the overhead cost
* One monolith application becomes _N_ microservices (e.g., 20 microservices ≈ 20GB)
This significantly impacts serverless function development and the Java deployment model; it is why many enterprise developers chose alternatives such as Go, Python, and Node.js to overcome the performance bottleneck—until now, thanks to [Quarkus][3], a new Kubernetes-native Java stack. This article explains how to optimize Java performance to run serverless functions on Kubernetes using Quarkus.
### Container-first design
Traditional frameworks in the Java ecosystem come at a cost in terms of the memory and startup time required to initialize those frameworks, including configuration processing, classpath scanning, class loading, annotation processing, and building a metamodel of the world, which the framework requires to operate. This is multiplied over and over for different frameworks.
Quarkus helps fix these Java performance issues by "shifting left" almost all of the overhead to the build phase. By doing code and framework analysis, bytecode transformation, and dynamic metamodel generation only once, at build time, you end up with a highly optimized runtime executable that starts up super fast and doesn't require all the memory of a traditional startup because the work is done once, in the build phase.
![Quarkus Build phase][4]
(Daniel Oh, [CC BY-SA 4.0][5])
More importantly, Quarkus allows you to build a native executable file that provides [performance advantages][6], including amazingly fast boot time and incredibly small resident set size (RSS) memory, for instant scale-up and high-density memory utilization compared to the traditional cloud-native Java stack.
![Quarkus RSS and Boot Time Metrics][7]
(Daniel Oh, [CC BY-SA 4.0][5])
Here is a quick example of how you can build the native executable with a [Java serverless][8] function project using Quarkus.
### 1\. Create the Quarkus serverless Maven project
This command generates a Quarkus project (e.g., `quarkus-serverless-native`) to create a simple function:
```
$ mvn io.quarkus:quarkus-maven-plugin:1.13.4.Final:create \
       -DprojectGroupId=org.acme \
       -DprojectArtifactId=quarkus-serverless-native \
       -DclassName="org.acme.getting.started.GreetingResource"
```
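If you want a quick sanity check before building a native image, Quarkus also offers a live-coding dev mode; the commands below assume the project directory and the default `/hello` endpoint created by the generator above:
```
cd quarkus-serverless-native
# Start the application in JVM dev mode with live reload
./mvnw quarkus:dev
# In another terminal, call the generated endpoint
curl localhost:8080/hello
```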
### 2\. Build a native executable
You need a GraalVM to build a native executable for the Java application. You can choose any GraalVM distribution, such as [Oracle GraalVM Community Edition (CE)][9] or [Mandrel][10] (a downstream distribution of Oracle GraalVM CE). Mandrel is designed to support building Quarkus-native executables on OpenJDK 11.
Open `pom.xml`, and you will find this `native` profile. You'll use it to build a native executable:
```
<profiles>
    <profile>
        <id>native</id>
        <properties>
            <quarkus.package.type>native</quarkus.package.type>
        </properties>
    </profile>
</profiles>
```
> **Note:** You can install the GraalVM or Mandrel distribution locally. You can also download the Mandrel container image to build it (as I did), so you need to run a container engine (e.g., Docker) locally.
Assuming you have started your container runtime already, run one of the following Maven commands.
For [Docker][11]:
```
$ ./mvnw package -Pnative \
-Dquarkus.native.container-build=true \
-Dquarkus.native.container-runtime=docker
```
For [Podman][12]:
```
$ ./mvnw package -Pnative \
-Dquarkus.native.container-build=true \
-Dquarkus.native.container-runtime=podman
```
The output should end with `BUILD SUCCESS`.
![Native Build Logs][13]
(Daniel Oh, [CC BY-SA 4.0][5])
Run the native executable directly without Java Virtual Machine (JVM):
```
$ target/quarkus-serverless-native-1.0.0-SNAPSHOT-runner
```
The output will look like:
```
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,<  / /_/ /\ \  
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/  
INFO  [io.quarkus] (main) quarkus-serverless-native 1.0.0-SNAPSHOT native
(powered by Quarkus xx.xx.xx.) Started in 0.019s. Listening on: http://0.0.0.0:8080
INFO [io.quarkus] (main) Profile prod activated.
INFO [io.quarkus] (main) Installed features: [cdi, kubernetes, resteasy]
```
Supersonic! That's _19_ _milliseconds_ to startup. The time might be different in your environment.
It also has extremely low memory usage, as the Linux `ps` utility reports. While the app is running, run this command in another terminal:
```
$ ps -o pid,rss,command -p $(pgrep -f runner)
```
You should see something like:
```
  PID    RSS COMMAND
10246  11360 target/quarkus-serverless-native-1.0.0-SNAPSHOT-runner
```
This process is using around _11MB_ of memory (RSS). Pretty compact!
> **Note:** The RSS and memory usage of any app, including Quarkus, will vary depending on your specific environment and will rise as the application experiences load.
You can also access the function with a REST API. Then the output should be `Hello RESTEasy`:
```
$ curl localhost:8080/hello
Hello RESTEasy
```
### 3\. Deploy the functions to Knative service
If you haven't already, [create a namespace][14] (e.g., `quarkus-serverless-native`) on [OKD][15] (OpenShift Kubernetes Distribution) to deploy this native executable as a serverless function. Then add a `quarkus-openshift` extension for Knative service deployment:
```
$ ./mvnw -q quarkus:add-extension -Dextensions="openshift"
```
Append the following variables in `src/main/resources/application.properties` to configure Knative and Kubernetes resources:
```
quarkus.container-image.group=quarkus-serverless-native
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
quarkus.native.container-build=true
quarkus.kubernetes-client.trust-certs=true
quarkus.kubernetes.deployment-target=knative
quarkus.kubernetes.deploy=true
quarkus.openshift.build-strategy=docker
```
Build the native executable, then deploy it to the OKD cluster directly:
```
$ ./mvnw clean package -Pnative
```
> **Note:** Make sure to log in to the right project (e.g., `quarkus-serverless-native`) using the `oc login` command ahead of time.
The output should end with `BUILD SUCCESS`. It will take a few minutes to complete a native binary build and deploy a new Knative service. After successfully creating the service, you should see a Knative service (KSVC) and revision (REV) using either the `kubectl` or `oc` command tool:
```
$ kubectl get ksvc
NAME                        URL   [...]
quarkus-serverless-native   http://quarkus-serverless-native-[...].SUBDOMAIN  True
$ kubectl get rev
NAME                              CONFIG NAME                 K8S SERVICE NAME                  GENERATION   READY   REASON
quarkus-serverless-native-00001   quarkus-serverless-native   quarkus-serverless-native-00001   1            True
```
### 4\. Access the native executable function
Retrieve the serverless function's endpoint by running this `kubectl` command:
```
$ kubectl get rt/quarkus-serverless-native
```
The output should look like:
```
NAME                         URL                                                                                                          READY   REASON
quarkus-serverless-native   http://quarkus-serverless-restapi-quarkus-serverless-native.SUBDOMAIN   True
```
Access the route `URL` with a `curl` command:
```
$ curl http://quarkus-serverless-restapi-quarkus-serverless-native.SUBDOMAIN/hello
```
In less than one second, you will get the same result as you got locally:
```
Hello RESTEasy
```
When you access the Quarkus running pod's logs in the OKD cluster, you will see the native executable is running as the Knative service.
![Native Quarkus Log][16]
(Daniel Oh, [CC BY-SA 4.0][5])
### What's next?
You can optimize Java serverless functions with GraalVM distributions to deploy them as serverless functions on Knative with Kubernetes. Quarkus enables this performance optimization using simple configurations in normal microservices.
The next article in this series will guide you on making portable functions across multiple serverless platforms with no code changes.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/java-serverless-functions-kubernetes
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://opensource.com/article/19/6/reasons-kubernetes
[3]: https://quarkus.io/
[4]: https://opensource.com/sites/default/files/uploads/quarkus-build.png (Quarkus Build phase)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://quarkus.io/blog/runtime-performance/
[7]: https://opensource.com/sites/default/files/uploads/quarkus-boot-metrics.png (Quarkus RSS and Boot Time Metrics)
[8]: https://opensource.com/article/21/5/what-serverless-java
[9]: https://www.graalvm.org/community/
[10]: https://github.com/graalvm/mandrel
[11]: https://www.docker.com/
[12]: https://podman.io/
[13]: https://opensource.com/sites/default/files/uploads/native-build-logs.png (Native Build Logs)
[14]: https://docs.okd.io/latest/applications/projects/configuring-project-creation.html
[15]: https://docs.okd.io/latest/welcome/index.html
[16]: https://opensource.com/sites/default/files/uploads/native-quarkus-log.png (Native Quarkus Log)

View File

@ -1,325 +0,0 @@
[#]: subject: (Try Linux on any operating system with VirtualBox)
[#]: via: (https://opensource.com/article/21/6/try-linux-virtualbox)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Try Linux on any operating system with VirtualBox
======
VirtualBox helps anyone—even a command line novice—set up a virtual
machine.
![Person programming on a laptop on a building][1]
VirtualBox makes it easy for anyone to try Linux. You don't even need experience with the command line to set up a simple virtual machine to tinker with Linux. I'm kind of a power user when it comes to virtual machines, but this article will show even novices how to virtualize a Linux system. In addition, it provides an overview of how to run and install a Linux system for testing purposes with the open source hypervisor [VirtualBox][2].
### Terms
Before starting, you should understand the difference between the two operating systems (OSes) in this setup:
* **Host system:** This is your actual OS on which you install VirtualBox.
* **Guest system:** This is the system you want to run virtualized on top of your host system.
Both systems, host and guest, must interact with each other when it comes to input/output, networking, file access, clipboard, audio, and video.
In this tutorial, I'll use Windows 10 as the _host system_ and [Fedora 33][3] as the _guest system_.
### Prerequisites
When we talk about virtualization, we actually mean [hardware-assisted virtualization][4]. Hardware-assisted virtualization requires a compatible CPU. Almost every ordinary x86 CPU from the last decade comes with this feature. AMD calls it **AMD-V**, and Intel calls it **VT-x**. The virtualization feature adds some additional CPU instructions, and it can be enabled or disabled in the BIOS.
To start with virtualization:
* Make sure that AMD-V or VT-x is enabled in the BIOS.
* Download and install [VirtualBox][5].
### Prepare the virtual machine
Download the image of the Linux distribution you want to try out. It does not matter if it's a 32-bit or 64-bit OS image. You can even start a 64-bit OS image on a 32-bit host system (with limitations in memory usage, of course) and vice versa.
> **Considerations:** If possible, choose a Linux distribution that comes with the [Logical Volume Manager][6] (LVM). LVM decouples the filesystem from the physical hard drives. This allows you to increase the size of your guest system's hard drive if you are running out of space.
Now, open VirtualBox and click on the yellow **New** button:
![VirtualBox New VM][7]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Next, configure how much memory the guest OS is allowed to use:
![Set VM memory size][9]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
My recommendation: **Don't skimp on memory!** When memory is low, the guest system will start paging memory from RAM to the hard drive, which severely degrades the system's performance and responsiveness. If the underlying host system starts paging, you might not notice. For a Linux workstation system with a graphical desktop environment, I recommend at least 4GB of memory.
Next, create the hard disk:
![Create virtual hard disk][10]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Choose the default option, **VDI**:
![Selecting hard disk file type][11]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
In this window, I recommend choosing **dynamically allocated**, as this allows you to increase the size later. If you choose **fixed size**, the disk will probably be faster, but you won't be able to modify it:
![Dynamically allocating hard disk][12]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
With a Linux distribution that uses LVM, you can start with a small hard disk. If you are running out of space, you can increase it on demand.
> **Note**: Fedora's website says [it requires][13] a minimum of 20GB free disk space. I highly recommend you stick to that specification. I chose 8GB here so that I can demonstrate how to increase it later. If you are new to Linux or inexperienced with the command line, choose 20GB.
![Setting hard disk size][14]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
After creating the hard drive, select the newly created virtual machine from the list in VirtualBox's main window and click on **Settings**. In the Settings menu, go to **System** and select the **Processor** tab. By default, VirtualBox assigns only one CPU core to the guest system. On a modern multicore CPU, it should not be any problem to assign at least two cores, which will speed up the guest system significantly:
![Assigning cores to guest system][15]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
#### Network adapter setup
The next thing to take care of is the network setup. By default, VirtualBox creates one NAT connection, which should be OK for most use cases:
![Network settings][16]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
You can create more than one network adapter. Here are the most common types:
  * **NAT:** The NAT adapter performs a [network address translation][17]. From the outside, it looks like the host and the guest system use the same IP address. You are not able to access the guest system from within the host system over the network. (Although you could define [port forwarding][18] to access certain services; see the sketch after this list.) When your host system has access to the internet, the guest system will have access, too. NAT requires no further configuration.
* _Choose **NAT** if you only need internet access for the guest system._
* **Bridged adapter:** Here, the guest and the host system share the same physical Ethernet device. Both systems will have independent IP addresses. From the outside, it looks like there are two separate systems in the network, both sharing the same physical Ethernet adapter. This setup is more flexible but requires more configuration.
* _Choose **Bridged adapter** if you want to share the guest system's network services._
  * **Host-only adapter:** In this configuration, the guest system can only talk to the host or other guest systems running on the same host. The host system can also connect to the guest system. There is neither internet nor physical network access for the guest.
* _Choose **Host-only adapter** for advanced security._
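As a rough illustration of the port-forwarding remark in the NAT item above, VirtualBox can map a host port to a guest port from the command line; the VM name and port numbers here are only examples:
```
# With the VM powered off, forward host port 2222 to guest port 22 (SSH)
# on the first NAT adapter ("Fedora_33" is whatever name you gave the VM)
VBoxManage modifyvm "Fedora_33" --natpf1 "guestssh,tcp,,2222,,22"
# Then reach the guest's SSH server from the host
ssh -p 2222 user@localhost
```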
#### Assign the OS image
Navigate to **Storage** and select the virtual optical drive. Click on the **CD icon** on the right, and select **Choose a disk file…**. Then assign the downloaded Linux distribution image you want to install:
![Assigning OS image][19]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
### Install Linux
The virtual machine is now configured. Leave the **Settings** menu and go back to the main window. Click on the **Green arrow** (i.e., the start button). The virtual machine will start up and boot from the virtual optical drive, and you will find yourself in your Linux distribution's installer:
![VirtualBox Fedora installer][20]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
#### Partitioning
The installer will ask you for partitioning information during the installation process. Choose **Custom**:
![Selecting Custom partition configuration][21]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
> **Note:** I'm assuming you're creating this virtual machine just for testing purposes. Also you don't need to care about hibernation for your guest system, as this function is implicitly provided by VirtualBox. Therefore, you can omit the swap partition to save disk space on your host system. Keep in mind that you can add a swap partition later if needed. In [_An introduction to swap space on Linux systems_][22], David Both explains how to add a swap partition and choose the correct size.
Fedora 33 and later offer a [zram][23] partition, a compressed part of the memory used for paging and swap. The zram partition is resized on demand, and it is much faster than a hard disk swap partition.
To keep it simple, just add these two mount points:
![Adding mount points][24]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Apply the changes and proceed with the installation.
### Install VirtualBox Guest Additions
After you finish the installation, boot from the hard drive and log in. Now you can install VirtualBox Guest Additions, which include special device drivers and system applications that provide:
* Shared clipboard
* Shared folders
* Better performance
* Freely scalable window size
To install them, click on the top menu in **Devices** and select **Insert Guest Additions CD image…**:
![Selecting Guest Additions CD image][25]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
On most Linux distributions, the CD image with the Guest Additions is mounted automatically, and they are available in the file browser. Fedora will ask you if you want to run the installation script. Click **Run** and enter your credentials to grant the process root rights:
![Enabling Guest Additions autorun][26]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
When the installation is finished, reboot the system.
### LVM: Enlarge disk space
Creating an 8GB hard disk was a dumb decision, as Fedora quickly starts signaling that it is running out of space:
![Fedora hard disk running out of space][27]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
As I mentioned, a disk space of 20GB is recommended, and 8GB is the _absolute_ minimum for a Fedora 33 installation to boot up. A fresh installation with no additional software (except the VirtualBox Guest Additions) takes nearly the whole 8GB of available space. Don't open the GNOME Software center or anything else that might download files from the internet in this condition.
Luckily, I chose to use LVM, so I can easily fix this mishap.
To increase the filesystem's space within the virtual machine, you must first increase the virtual hard drive on your host system.
Shut down the virtual machine. If your host system is running Windows, open a command prompt and navigate to `C:\Program Files\Oracle\VirtualBox`. Resize the disk to 12,000MB with the following command:
```
VBoxManage.exe modifyhd "C:\Users\StephanA\VirtualBox VMs\Fedora_33\Fedora_33.vdi" --resize 12000
```
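If your host system is Linux or macOS instead, the same tool is available without the `.exe` suffix; the path below assumes the default VM folder in your home directory:
```
# Resize the virtual disk to 12,000MB on a Linux or macOS host
VBoxManage modifyhd "$HOME/VirtualBox VMs/Fedora_33/Fedora_33.vdi" --resize 12000
```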
Boot the virtual machine and open the **Disks** utility. You should see the newly created unassigned free space. Select **Free Space** and click the **+** button:
![Free space before adding][28]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Now, create a new partition. Select the amount of free space you want to use:
![Creating a new partition and setting size][29]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
You don't want to create a filesystem or anything else on your new partition, so select **Other**:
![Selecting "other" for partition volume type][30]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Select **No Filesystem**:
![Setting "No filesystem" on new partition][31]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
The overview should now look like this:
![VirtualBox after adding new partition][32]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
There is a new partition device, **/dev/sda3**. Check your LVM volume group by typing `vgscan`:
![Checking LVM volume group by typing vgscan:][33]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Now you have everything you need. Extend the volume group in the new partition:
```
vgextend fedora_localhost-live /dev/sda3
```
![vgextend command output][34]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Because the volume group is larger, you can increase the size of the logical volume. The command `vgdisplay` shows that it has 951 free extents available:
![vgdisplay command output][35]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Increase the logical volume by 951 extents:
```
lvextend -l+951 /dev/mapper/fedora_localhost--live-root
```
![lvextend command output][36]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
After you increase the logical volume, the last thing to do is to resize the filesystem:
```
resize2fs /dev/mapper/fedora_localhost--live-root
```
![resize2fs command output][37]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Done! Check the **Disk Usage Analyzer**, and you should see that the extended space is available for the filesystem.
### Summary
With a virtual machine, you can check how a piece of software behaves with a specific operating system or a specific version of an operating system. Besides that, you can also try out any Linux distribution you want to test without worrying about breaking your system. For advanced users, VirtualBox offers a wide range of possibilities when it comes to testing, networking, and simulation.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/try-linux-virtualbox
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Person programming on a laptop on a building)
[2]: https://www.virtualbox.org/
[3]: https://getfedora.org/
[4]: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
[5]: https://www.virtualbox.org/wiki/Downloads
[6]: https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
[7]: https://opensource.com/sites/default/files/uploads/virtualbox_new_vm.png (VirtualBox New VM)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/virtualbox_memory_size_1.png (Set VM memory size)
[10]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_1.png (Create virtual hard disk)
[11]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_2.png (Selecting hard disk file type)
[12]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_3.png (Dynamically allocating hard disk)
[13]: https://getfedora.org/en/workstation/download/
[14]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_4.png (Setting hard disk size)
[15]: https://opensource.com/sites/default/files/uploads/virtualbox_cpu_settings.png (Assigning cores to guest system)
[16]: https://opensource.com/sites/default/files/uploads/virtualbox_network_settings2.png (Network settings)
[17]: https://en.wikipedia.org/wiki/Network_address_translation
[18]: https://www.virtualbox.org/manual/ch06.html#natforward
[19]: https://opensource.com/sites/default/files/uploads/virtualbox_choose_image3.png (Assigning OS image)
[20]: https://opensource.com/sites/default/files/uploads/virtualbox_running.png (VirtualBox Fedora installer)
[21]: https://opensource.com/sites/default/files/uploads/virtualbox_partitioning_1.png (Selecting Custom partition configuration)
[22]: https://opensource.com/article/18/9/swap-space-linux-systems
[23]: https://fedoraproject.org/wiki/Changes/SwapOnZRAM
[24]: https://opensource.com/sites/default/files/uploads/virtualbox_partitioning_2.png (Adding mount points)
[25]: https://opensource.com/sites/default/files/uploads/virtualbox_guest_additions_2.png (Selecting Guest Additions CD image)
[26]: https://opensource.com/sites/default/files/uploads/virtualbox_guest_additions_autorun.png (Enabling Guest Additions autorun)
[27]: https://opensource.com/sites/default/files/uploads/virtualbox_disk_usage_1.png (Fedora hard disk running out of space)
[28]: https://opensource.com/sites/default/files/uploads/virtualbox_disks_before.png (Free space before adding)
[29]: https://opensource.com/sites/default/files/uploads/virtualbox_new_partition_1.png (Creating a new partition and setting size)
[30]: https://opensource.com/sites/default/files/uploads/virtualbox_new_partition_2.png (Selecting "other" for partition volume type)
[31]: https://opensource.com/sites/default/files/uploads/virtualbox_no_partition_3.png (Setting "No filesystem" on new partition)
[32]: https://opensource.com/sites/default/files/uploads/virtualbox_disk_after.png (VirtualBox after adding new partition)
[33]: https://opensource.com/sites/default/files/uploads/virtualbox_vgscan.png (Checking LVM volume group by typing vgscan:)
[34]: https://opensource.com/sites/default/files/uploads/virtualbox_vgextend_2.png (vgextend command output)
[35]: https://opensource.com/sites/default/files/uploads/virtualbox_vgdisplay.png (vgdisplay command output)
[36]: https://opensource.com/sites/default/files/uploads/virtualbox_lvextend.png (lvextend command output)
[37]: https://opensource.com/sites/default/files/uploads/virtualbox_resizefs.png (resize2fs command output)

View File

@ -1,191 +0,0 @@
[#]: subject: (Write your first web component)
[#]: via: (https://opensource.com/article/21/7/web-components)
[#]: author: (Ramakrishna Pattnaik https://opensource.com/users/rkpattnaik780)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Write your first web component
======
Don't repeat yourself; create elements you can reuse when writing web
apps for any browser.
![Digital creative of a browser on the internet][1]
Web components are a collection of open source technologies such as JavaScript and HTML that allow you to create custom elements that you can use and reuse in web apps. The components you create are independent of the rest of your code, so they're easy to reuse across many projects.
Best of all, it's a platform standard supported by all major modern browsers.
### What's in a web component?
* **Custom elements:** This JavaScript API allows you to define new types of HTML elements.
  * **Shadow DOM:** This JavaScript API provides a way to attach a hidden, separate [Document Object Model][2] (DOM) to an element. This encapsulates your web component by keeping the styling, markup structure, and behavior isolated from other code on the page. It ensures that styles are not overridden by external styles or, conversely, that a style from your web component doesn't "leak" into the rest of the page.
  * **HTML templates:** The `<template>` element allows you to define reusable DOM fragments. The element and its contents are not rendered in the DOM but can still be referenced using JavaScript.
### Write your first web component
You can write a simple web component with your favorite text editor and JavaScript. This how-to uses Bootstrap for simple styling, then creates a simple card web component that displays the temperature of a location passed to it as an attribute. The component uses the [Open Weather API][3], which requires you to generate an APPID/APIKey by signing in.
The syntax of calling this web component requires the location's longitude and latitude:
```
<weather-card longitude='85.8245' latitude='20.296' />
```
Create a file named **weather-card.js** that will contain all the code for your web component. Start by defining your component. This can be done by creating a template element and adding some simple HTML elements into it:
```
const template = document.createElement('template');
template.innerHTML = `
  <div class="card">
    <div class="card-body"></div>
  </div>
`
```
Start defining the WebComponent class and its constructor:
```
class WeatherCard extends HTMLElement {
  constructor() {
    super();
    this._shadowRoot = this.attachShadow({ 'mode': 'open' });
    this._shadowRoot.appendChild(template.content.cloneNode(true));
  }
  ….
}
```
The constructor attaches the shadowRoot and sets it to open mode. Then the template is cloned to shadowRoot.
Next, access the attributes. These are the longitude and latitude, and you need them to make a GET request to the Open Weather API. This needs to be done in the `connectedCallback` function. You can use the `getAttribute` method to access the attributes or define getters to bind them to this object:
```
get longitude() {
  return this.getAttribute('longitude');
}
get latitude() {
  return this.getAttribute('latitude');
}
```
Now define the `connectedCallback` method that fetches weather data whenever the component is mounted:
```
connectedCallback() {
  var xmlHttp = new XMLHttpRequest();
  const url = `http://api.openweathermap.org/data/2.5/weather?lat=${this.latitude}&lon=${this.longitude}&appid=API_KEY`;
  xmlHttp.open("GET", url, false);
  xmlHttp.send(null);
  this.$card = this._shadowRoot.querySelector('.card-body');
  let responseObj = JSON.parse(xmlHttp.responseText);
  let $townName = document.createElement('p');
  $townName.innerHTML = `Town: ${responseObj.name}`;
  this._shadowRoot.appendChild($townName);
  let $temperature = document.createElement('p');
  $temperature.innerHTML = `${parseInt(responseObj.main.temp - 273)} &deg;C`
  this._shadowRoot.appendChild($temperature);
}
```
Once the weather data is retrieved, additional HTML elements are added to the template. Now, your class is defined.
Finally, define and register a new custom element by using the method `window.customElements.define`:
```
window.customElements.define('weather-card', WeatherCard);
```
The first argument is the name of the custom element, and the second argument is the defined class. Here's a [link to the entire component][5].
You've written your first web component! Now it's time to bring it to the DOM. To do that, you must load the JavaScript file with your web component definition in your HTML file (name it **index.html**):
```
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
</head>
<body>
  <weather-card longitude='85.8245' latitude='20.296'></weather-card>
  <script src='./weather-card.js'></script>
</body>
</html>
```
Here's your web component in a browser:
![Web component displayed in a browser][6]
(Ramakrishna Pattnaik, [CC BY-SA 4.0][7])
Because web components need only HTML, CSS, and JavaScript, they are natively supported by browsers and can be used seamlessly with frontend frameworks, including React and Vue. The following simple code snippet shows how to use web components with a simple React App bootstrapped with [Create React App][8]. For this, you need to import the **weather-card.js** file you defined earlier and use it as a component:
```
import './App.css';
import './weather-card';
function App() {
  return (
  <weather-card longitude='85.8245' latitude='20.296'></weather-card>
  );
}
export default App;
```
### Lifecycle of a web component
All components follow a lifecycle from initialization to removal from the DOM (i.e., unmount). Methods are associated with each lifecycle event so that you can control the components better. The various lifecycle events of a web component include:
  * **Constructor:** The constructor for a web component is called before it is mounted, meaning it's created before the element is attached to the document. It's used for initializing local state, binding event handlers, and creating the shadow DOM. The constructor must call `super()` to invoke the constructor of the class the web component extends (`HTMLElement` in the example above).
  * **ConnectedCallBack:** This is called when an element is mounted (that is, inserted into the DOM tree). It deals with initialization work such as creating DOM nodes and is mostly used for operations like making network requests. React developers can relate it to `componentDidMount`.
  * **attributeChangedCallback:** This method accepts three arguments: `name`, `oldValue`, and `newValue`. It is called whenever one of the component's observed attributes changes. Attributes are declared as observed using a static `observedAttributes` getter, for example `static get observedAttributes() { return ['name', '_id']; }`. With that getter in place, `attributeChangedCallback` will be called whenever the `name` or `_id` attribute changes.
* **DisconnectedCallBack:** This is called when an element is removed from the DOM tree (i.e., unmounted). It is equivalent to React's `componentWillUnmount`. It is used to free resources that won't be garbage-collected automatically, like unsubscribing from DOM events, stopping interval timers, or unregistering all registered callbacks.
* **AdoptedCallback:** It is called each time the custom element is moved to a new document. It only occurs when dealing with IFrames.
### Modular open source
Web components can be a powerful way to develop web apps. Whether you're comfortable with JavaScript or just getting started with it, it's easy to create reusable code with this great open standard, no matter what browser your target audience uses.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/web-components
作者:[Ramakrishna Pattnaik][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rkpattnaik780
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://en.wikipedia.org/wiki/Document_Object_Model
[3]: https://openweathermap.org/api
[4]: http://api.openweathermap.org/data/2.5/weather?lat=${this.latitude}\&lon=${this.longitude}\&appid=API\_KEY\`
[5]: https://gist.github.com/rkpattnaik780/acc683d3796102c26c1abb03369e31f8
[6]: https://opensource.com/sites/default/files/uploads/webcomponent.png (Web component displayed in a browser)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://create-react-app.dev/docs/getting-started/

View File

@ -1,151 +0,0 @@
[#]: subject: "Troubleshooting “Bash: Command Not Found” Error in Linux"
[#]: via: "https://itsfoss.com/bash-command-not-found/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Troubleshooting “Bash: Command Not Found” Error in Linux
======
_**This beginner tutorial shows how to go about fixing the Bash: command not found error on Debian, Ubuntu and other Linux distributions.**_
When you use commands in Linux, you expect to see an output. But sometimes, you'll encounter issues where the terminal shows a "command not found" error.
![][1]
There is no straightforward, single solution to this error. You have to do a little bit of troubleshooting on your own.
It's not too difficult, honestly. The error already gives a hint when it says “bash: command not found”. Your shell (or Linux system) cannot find the command you entered.
There could be three possible reasons why it cannot find the command:
  * It's a typo and the command name is misspelled
* The command is not even installed
* The command is basically an executable script and its location is not known
Let's go into detail on each possible root cause.
### Fixing “bash: command not found” error
![][2]
#### Method 1: Double check the command name (no, seriously)
It is human to make mistakes, especially while typing. It is possible that the command you entered has a typo (spelling mistake).
You should pay special attention to:
* The correct command name
* The spaces between the command and its options
* The use of 1 (numeral one), I (capital i) and l (lowercase L)
* Use of uppercase and lowercase characters
Take a look at the example below, where I have misspelled the common ls command.
![][3]
So, double-check what you are typing.
#### Method 2: Ensure that the command is installed on your system
This is another common reason behind the command not found error. You cannot run a command if it is not installed already.
While your Linux distribution comes with a huge number of commands installed by default, it is not possible to pre-install all the command line tools in a system. If the command you are trying to run is not a popular, common command, you'll have to install it first.
You can use your distribution's package manager to install it.
![You may have to install the missing command][4]
In some cases, popular commands get discontinued and you may not be able to install them anymore. You'll have to find an alternative command to achieve the same result.
Take the example of the ifconfig command. This deprecated command was used for [getting the IP address][5] and other network interface information. Older tutorials on the web still mention using it, but it is no longer available in newer Linux versions. It has been replaced by the ip command.
![Some popular commands get discontinued over time][1]
Occasionally, your system won't find even the most common commands. This is often the case when you are running a Linux distribution in Docker containers. To cut down on the size of the operating system image, the containers often do not include even the most common Linux commands.
This is why Docker users stumble across things like the [ping command not found error][6].
![Docker containers often have only a few commands installed][7]
So, the solution is to either install the missing command or find a tool that could do the same thing you were trying to do with the missing command.
#### Method 3: Check if it is an executable script with correct path
This is a common mistake Linux rookies make while [running a shell script][8].
Even if you are in the same directory and try to run an executable script just by its name, it will show an error.
```
user@host:~/scripts# sample
-bash: sample: command not found
```
You need to either specify the shell interpreter explicitly or provide the path to the script, as shown below.
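For example, with an executable script named sample in the current directory (mirroring the error above), either of these approaches works:
```
# Run the script through the interpreter explicitly
bash sample
# Or run it with an explicit relative path
./sample
```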
![][9]
If you are in some other directory and try to execute the shell script without giving the correct path to the file, it will complain about not finding the file.
![][10]
##### Adding it to the PATH
In some cases, you download software as a tar file, extract it, and find an executable file along with other program files. To run the program, you need to run the executable file.
But for that, you need to be in the same directory or specify the entire path to the executable file. This is tiresome.
Here, you can use the PATH variable. This variable has a collection of directories and these directories have the binary (executable) files of various Linux commands. When you run a command, your Linux system checks the mentioned directories in the PATH variable to look for the executable file of that command.
You can check the location of the binary of a command by using the `which` command:
![][11]
If you want to run an executable file or script from anywhere on the system, you need to add the location of the file to this PATH variable.
![][12]
The modified PATH variable then needs to be added to the rc file of the shell so that the change to the PATH variable is permanent.
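As a rough sketch (the directory name is just an example), adding a scripts directory to PATH and making it permanent in Bash looks like this:
```
# Add the directory containing your executable to PATH for this session
export PATH="$PATH:$HOME/scripts"
# Append the same line to ~/.bashrc so the change survives new sessions
echo 'export PATH="$PATH:$HOME/scripts"' >> ~/.bashrc
```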
You get the gist here. It is important that your Linux system knows the location of the executable script. Either you give the path while running it, or you add its location to the PATH variable.
### Did it help you?
I understand that when you are new to Linux, things can be overwhelming. But when you understand the root cause of a problem, it gradually improves your knowledge.
Here, there is no straightforward solution possible for the "command not found" error. I gave you some hints and pointers, and that should help you with troubleshooting.
If you still have doubt or need help, please let me know in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/bash-command-not-found/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-command-not-found-error.png?resize=741%2C291&ssl=1
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-command-not-found-error-1.png?resize=800%2C450&ssl=1
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/command-not-found-error.png?resize=723%2C234&ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/command-not-found-debian.png?resize=741%2C348&ssl=1
[5]: https://itsfoss.com/check-ip-address-ubuntu/
[6]: https://linuxhandbook.com/ping-command-ubuntu/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ping-command-not-found-ubuntu.png?resize=786%2C367&ssl=1
[8]: https://itsfoss.com/run-shell-script-linux/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-script-command-not-found-error-800x331.png?resize=800%2C331&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/script-file-not-found-error-800x259.png?resize=800%2C259&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/path-location.png?resize=800%2C241&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/adding-executable-to-PATH-variable-linux.png?resize=800%2C313&ssl=1

View File

@ -1,192 +0,0 @@
[#]: subject: "Reasons for servers to support IPv6"
[#]: via: "https://jvns.ca/blog/2022/01/29/reasons-for-servers-to-support-ipv6/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Reasons for servers to support IPv6
======
I've been having a hard time understanding IPv6. On one hand, the basics initially seem pretty straightforward (there aren't enough IPv4 addresses for all the devices on the internet, so people invented IPv6! There are enough IPv6 addresses for everyone!)
But when I try to actually understand it, I run into a lot of questions. One question is: `twitter.com` does not support IPv6. Presumably it can't be causing them THAT many issues to not support it. So why _do_ websites support IPv6?
I asked people on Twitter [why their servers support IPv6][1] and I got a lot of great answers, which I'll summarize here. These all come with the disclaimer that I have basically 0 experience with IPv6, so I can't evaluate these reasons very well.
First though, I want to explain why it's possible for `twitter.com` to not support IPv6, because I didn't understand that initially.
### how can you tell `twitter.com` doesn't support IPv6?
You can tell they don't support IPv6 because if you look up their AAAA record (which contains their IPv6 address), there isn't one. Some other big sites like `github.com` and `stripe.com` also don't support IPv6.
```
$ dig AAAA twitter.com
(empty response)
$ dig AAAA github.com
(empty response)
$ dig AAAA stripe.com
(empty response)
```
### why does `twitter.com` still work for IPv6 users?
I found this really confusing, because I've always heard that lots of internet users are forced to use IPv6 because we've run out of IPv4 addresses. But if that's true, how could twitter.com continue to work for those people without IPv6 support? Here's what I learned from the Twitter thread yesterday.
There are two kinds of internet service providers (ISPs):
  1. ISPs who own enough IPv4 addresses for all of their customers
  2. ISPs who don't
My ISP is in category 1: my computer gets its own IPv4 address, and actually my ISP doesn't even support IPv6 at all.
But lots of ISPs (especially outside of North America) are in category 2: they don't have enough IPv4 addresses for all their customers. Those ISPs handle the problem by:
  * giving all of their customers a unique IPv6 address, so they can access IPv6 sites directly
  * making large groups of their customers _share_ IPv4 addresses. This can either be with CGNAT ("[carrier-grade NAT][2]") or "464XLAT" or maybe something else.
All ISPs need _some_ IPv4 addresses, otherwise it would be impossible for their customers to access IPv4-only sites like twitter.com.
### what are the reasons to support IPv6?
Now we've explained why it's possible to _not_ support IPv6. So why support it? There were a lot of reasons.
### reason: CGNAT is a bottleneck
The argument that was most compelling to me was: CGNAT (carrier-grade NAT) is a bottleneck, it causes performance issues, and it's going to continue to get worse over time as access to IPv4 addresses becomes more and more restricted.
Someone also mentioned that because CGNAT is a bottleneck, it's an attractive DDoS target: you can ruin lots of people's internet experience just by attacking 1 server.
Servers supporting IPv6 reduce the need for CGNAT (IPv6 users can just connect directly!), which makes the internet work better for everyone.
I thought this argument was interesting because it's a "public commons" / community argument: it's less that supporting IPv6 will make your site specifically work better, and more that if _almost everyone_ supports IPv6 then it'll make the experience of the internet better for everyone, especially in countries where people don't have easy access to IPv4 addresses.
I don't actually know how much of an issue this is in practice.
There were lots of more selfish arguments to use IPv6 too though, so let's get into those.
### reason: so IPv6-only servers can access your site
I said before that most IPv6 users still have access to IPv4 through some kind of NAT. But apparently that's not true for everyone: some people mentioned that they run servers which only have IPv6 addresses and which aren't behind any kind of NAT. So those servers are actually totally unable to access IPv4-only sites.
I imagine that those servers aren't connecting to arbitrary machines that much; maybe they only need to connect to a few hosts with IPv6 support.
But it makes sense to me that a machine should be able to access my site even if it doesn't have an IPv4 address.
### reason: better performance
For users who are using both IPv4 and IPv6 (with a dedicated IPv6 address and a shared IPv4 address), apparently IPv6 is often faster because it doesn't need to go through an extra translation layer.
So supporting IPv6 can make the site faster for users sometimes.
In practice clients use an algorithm called “Happy Eyeballs” which tries to figure out whether IPv4 or IPv6 will be faster and then uses whichever seems faster.
Some other performance benefits people mentioned:
  * maybe sometimes using IPv6 can get you an SEO boost because of the better performance.
  * maybe using IPv6 causes you to go through better (faster) network hardware because it's a newer protocol
### reason: resilience against IPv4 internet outages
One person said that they've run into issues where there was an internet outage that only affected IPv4 traffic, because of accidental BGP poisoning.
So supporting IPv6 means that their site can still stay partially online during those outages.
### reason: to avoid NAT issues with home servers
A few people mentioned that it's much easier to use IPv6 with home servers: instead of having to do port forwarding through your router, you can just give every server a unique IPv6 address and then access it directly.
Of course, for this to work the client needs to have IPv6 support, but more and more clients these days have IPv6 support too.
### reason: to own your IP addresses
Apparently you can buy IPv6 addresses, use them for the servers on your home network, and then if you change your ISP, continue to use the same IP addresses?
I'm still not totally sure how this works (I don't know how you would convince computers on the internet to actually route those IPs to you? I guess you need to run your own AS or something?).
### reason: to learn about IPv6
One person said they work in security, and in security it's very important to understand how internet protocols work (attackers are using internet protocols!). So running an IPv6 server helps them learn how it works.
### reason: to push IPv6 forward / IPv4 is “legacy”
A couple of people said that they support IPv6 because it's the current standard, and so they want to contribute to the success of IPv6 by supporting it.
A lot of people also said that they support IPv6 because they think sites that only support IPv4 are “behind” or “legacy”.
### reason: it's easy
I got a bunch of answers along the lines of "it's easy, why not". Obviously adding IPv6 support is not easy in all situations, but a couple of reasons it might be easy in some cases:
  * you automatically got an IPv6 address from your hosting company, so all you need to do is add an `AAAA` record pointing to that address
  * your site is behind a CDN that supports IPv6, so you don't need to do anything extra
### reason: safer networking experimentation
Because the address space is so big, if you want to try something out you can just grab an IPv6 subnet, try out some things in it, and then literally never use that subnet again.
### reason: to run your own autonomous system (AS)
A few people said they were running their own autonomous system (I talked about what an AS is a bit in this [BGP post][3]). IPv4 addresses are too expensive so they bought IPv6 addresses for their AS instead.
### reason: security by obscurity
If your server _only_ has a public IPv6 address, attackers can't easily find it by scanning the whole internet. The IPv6 address space is too big to scan!
Obviously this shouldn't be your only security measure, but it seems like a nice bonus: any time I run an IPv4 public server, I'm always a tiny bit surprised by how it's constantly being scanned for vulnerabilities (like old versions of WordPress, etc).
### very silly reason: you can put easter eggs in your IPv6 address
IPv6 addresses have a lot of extra bits in them that you can do frivolous things with. For example, one of Facebook's IPv6 addresses is "2a03:2880:f10e:83:face:b00c:0:25de" (it has `face:b00c` in it).
### there are more reasons than I thought
That's all I've learned about the "why support IPv6?" question so far.
I came away from this conversation more motivated to support IPv6 on my (very small) servers than I had been before. But that's because I think supporting IPv6 will require very little effort for me. (Right now I'm using a CDN that supports IPv6, so it comes basically for free.)
I know very little about IPv6 still, but my impression is that IPv6 support often isn't zero-effort and actually can be a lot of work. For example, I have no idea how much work it would actually be for Twitter to add IPv6 support on their edge servers.
### some more IPv6 questions
Here are some more IPv6 questions I have that maybe I'll explore later:
  * what are the _disadvantages_ to supporting IPv6? what goes wrong?
  * what are the incentives for ISPs that own enough IPv4 addresses for their customers to support IPv6? (another way of asking: is it likely that my ISP will move to supporting IPv6 in the next few years? or are they just not incentivized to do it, so it's unlikely?)
  * [digital ocean][4] seems to only support IPv4 floating IPs, not IPv6 floating IPs. Why not? Shouldn't it be _easier_ to give out IPv6 floating IPs since there are more of them?
  * when I try to ping an IPv6 address (like example.com's IP `2606:2800:220:1:248:1893:25c8:1946` for example) I get the error `ping: connect: Network is unreachable`. Why? (answer: it's because my ISP doesn't support IPv6, so my computer doesn't have a public IPv6 address; a quick way to check this is sketched after this list)
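For what it's worth, a rough way to check whether your own machine has a global IPv6 address, and whether a site publishes one, is the following (assuming the iproute2 and dig tools are installed):
```
# List global-scope IPv6 addresses; an empty result means no public IPv6
ip -6 addr show scope global
# Ask for a site's AAAA record, as in the dig examples above
dig AAAA example.com +short
```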
This [IPv4 vs IPv6 article from Tailscale][5] looks interesting and answers some of these questions.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2022/01/29/reasons-for-servers-to-support-ipv6/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/b0rk/status/1487156306884636672
[2]: https://en.wikipedia.org/wiki/Carrier-grade_NAT
[3]: https://jvns.ca/blog/2021/10/05/tools-to-look-at-bgp-routes/
[4]: https://docs.digitalocean.com/products/networking/floating-ips/
[5]: https://tailscale.com/kb/1134/ipv6-faq/

View File

@ -1,5 +1,5 @@
[#]: subject: "slimmer emacs with kitty"
[#]: via: "https://jao.io/blog/2022-06-08-slimmer-emacs-with-kitty.html"
[#]: via: "https://jao.io/blog/slimmer-emacs-with-kitty.html"
[#]: author: "jao https://jao.io"
[#]: collector: "lujun9972"
[#]: translator: " "
@ -69,7 +69,7 @@ The gist of it is pretty simple though, and it's basically distilled in [this se
--------------------------------------------------------------------------------
via: https://jao.io/blog/2022-06-08-slimmer-emacs-with-kitty.html
via: https://jao.io/blog/slimmer-emacs-with-kitty.html
作者:[jao][a]
选题:[lujun9972][b]
@ -87,8 +87,8 @@ via: https://jao.io/blog/2022-06-08-slimmer-emacs-with-kitty.html
[5]: https://sw.kovidgoyal.net/kitty/
[6]: https://codeberg.org/jao/elibs/src/branch/main/data/kitty.conf
[7]: https://en.wikipedia.org/wiki/HarfBuzz
[8]: tmp.aRLm0IGxe1#fn.1
[9]: tmp.aRLm0IGxe1#fnr.1
[8]: tmp.cmx0w4nr81#fn.1
[9]: tmp.cmx0w4nr81#fnr.1
[10]: https://codeberg.org/jao/elibs/src/main/init.el#L1595
[11]: https://jao.io/blog/tags.html
[12]: https://jao.io/blog/tag-emacs.html

View File

@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/22/6/container-orchestration-kubernetes"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "Donkey"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -0,0 +1,117 @@
[#]: subject: "simple note taking"
[#]: via: "https://jao.io/blog/2022-06-19-simple-note-taking.html"
[#]: author: "jao https://jao.io"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
simple note taking
======
I was just watching [Prot's explanation][1] of his new package _denote_, a very elegant note-taking system with a stress on simplicity and, as the author puts it, low-tech requirements. Now, those are excellent qualities in my book, and i think i'd quickly become a _denote_ user if it weren't for the fact that i already have a homegrown set of utilities following a similar philosophy. Inevitably, they differ in some details, as is to be expected from software that has grown with me, as Prot's with him, during more than a decade, but they are similar in important ways.
I've had in mind writing a brief note on my notes utilities for a while, so i guess this is a good time for it: i can, after showing you mine, point you to a polished package following a similar philosophy and sidestep any temptation of doing anything similar with my little functions :)
![][2]
As you'll see in a moment, in some ways, my note taking system is even simpler than Prot's, while in others i rely on more sophisticated software, essentially meaning that where _denote_ is happy to use dired and filenames, i am using grep over the front-matter of the notes. So if you loved the filename-as-metadata idea in _denote_, you can skip the rest of this post!
These are the main ideas around which i built my note-taking workflow:
* Personally, i have such a dislike for non-human readable identifiers, that i cannot even stand those 20221234T142312 prefixes (for some reason, i find them quite hard to read and distracting). When i evolved my notes collection, i wanted my files to be named by their title and nothing more. I am also pretty happy to limit myself to org-mode files. So i wanted a directory of (often short) notes with names like `the-lisp-machine.org`, `david-foster-wallace.org` or `combinator-parsing-a-short-tutorial.org`.[1][3]
* I like tags, so all my notes, besides a title, are going to have attached a list of them (_denote_ puts them in the filename and inside the file's headers; i'm content with the latter, because, as you'll see in a moment, i have an easy way of searching through that contents).
* I'm not totally averse to hierarchies: besides tagging, i put my notes in a subdirectory indicating their broad category. I can then quickly narrow my searches to a general _theme_ if needed[2][4].
* As mentioned, i want to be able to search by the title and tag (besides more broadly by contents) of my notes. Since that's all information available in plain text in the files, `grep` and family (via their emacs lovely helpers) are all that is needed; but i can easily go a step further and use other indexers of plain text like, say, recoll (via my [consult-recoll package][5]).
* It must be easy to quickly create notes that link to any contents i'm seeing in my emacs session, be it text, web, pdf, email, or any other. That comes for free thanks to org and `org-capture`.
* I want the code i have to write to accomplish all the above to be short and sweet, let's say well below two hundred lines of code.
Turns out that i was able to write a little emacs lisp library doing all the above, thanks to the magic of org-mode and consult: you can find it over at my repo by the name of [jao-org-notes.el][6]. The implementation is quite simple and is based on having all note files in a parent directory (`jao-org-notes-dir`) with a subfolder for each of the main top-level categories, and, inside each of them, note files in org mode with a preamble that has the structure of this example:
```
#+title: magit tips
#+date: <2021-07-22 Thu>
#+filetags: git tips
```
The header above corresponds to the note in the file `emacs/magit-tips.org`. Now, it's very easy to write a new command to ask for a top-level category and a list of tags and insert a header like that in a new file: it's called [jao-org-notes-open-or-create][7] in my little lib, and with it one can define a new org template:
```
("N" "Note" plain (file jao-org-notes-open-or-create)
"\n- %a\n %i"
:jump-to-captured t)
```
that one can then add to `org-capture-templates` (above, i'm using "N" as its shortcut; in the package, this is done by [jao-org-notes-setup][8], which takes the desired shortcut as a parameter). I maintain a simple list of possible tags in the variable `jao-org-notes--tags`, whose value is persisted in the file denoted by the value `jao-org-notes-tags-cache-file`, so that we can remember newly-added tags; with that and the magic of emacs's completing read, handling tags is a breeze.
Now for search. These are text files, so if i want to search for contents, i just need grepping, for instance with `M-x rgrep` or, even better, `M-x consult-ripgrep`. That is what the command [jao-org-notes-grep][9] does.
But it is also very useful to be able to limit searches to the title and tags of the notes: that's what the command [jao-org-notes-open][10] does using `consult` and `ripgrep` by the very simple device of searching for regular expressions in the first few lines of each file that start with either `#+title:` or `#+filetags:` followed by the terms we're looking for. That's something one could already do with `rgrep` alone; what consult adds to the party is the ability of displaying the matching results nicely formatted:
![][11]
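For readers curious what that search looks like outside Emacs, here is a rough command-line equivalent; the notes directory path and the search term are placeholders for illustration, not part of jao's actual setup:

```
# List note files whose title or tags mention "magit", looking only at the
# front-matter lines near the top of each org file
rg --files-with-matches --max-count=2 '^#\+(title|filetags):.*magit' ~/org/notes
```

The consult-based command in the library does essentially this, but presents the matches interactively with completion.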
Links between notes are simply org `file:` links, and having a simple "backlinks" command is, well, simple if you don't want anything fancy[3][12]. A command to insert a new link to another note is so boring to almost not meriting mention (okay, almost: [jao-org-notes-insert-link][13]).
And that's about it. With those simple commands and in about 160 lines of code i find myself comfortably managing plain text notes, and easily finding contents within them. I add a bit of icing by asking [Recoll][14] to index my notes directory (as well as my email and PDFs): it is clever enough to parse org files, and give you back pointers to the sections in the files, and then issue queries with the comfort of a consult asynchronous command thanks to [consult-recoll][5] (the screenshot in the introduction is just me using it). It's a nice use case of how having little, uncomplicated packages that don't try to be too sophisticated and center on the functionality one really needs makes it very easy to combine solutions in beautiful ways[4][15].
### Footnotes:
[1][16]
I also hate with a passion those `:PROPERTIES:` drawers and other metadata embellishments so often used in org files, and wanted to avoid them as much as possible, so i settled with the only mildly annoying `#+title` and friends at the beginning of the files and nothing more. The usual caveat that that makes it more difficult to have unique names has proven a non-problem to me over the years.
[2][17]
Currently i use `work`, `books`, `computers`, `emacs`, `letters`, `maths`, and `physics`: as you see, i am not making a great effort on finding the perfect ontology of all knowledge; rather, i just use the first broad breakdown of the themes that interest me most at the moment.
[3][18]
Just look for the regular expression matching "[[file:" followed by the name of the current file. I find myself seldom needing this apparently very popular functionality, but it should be pretty easy to present the search results in a separate buffer if needed.
[4][19]
Another example would be how easy it becomes to incorporate web contents nicely formatted as text when one uses eww as a browser. Or how seamless it is taking notes on PDFs one's reading in emacs, or even externally in zathura (that's for a future blog post though! :)).
[Tags][20]: [emacs][21]
--------------------------------------------------------------------------------
via: https://jao.io/blog/2022-06-19-simple-note-taking.html
作者:[jao][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jao.io
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/watch?v=mLzFJcLpDFI
[2]: https://jao.io/img/consult-recoll.png
[3]: tmp.xKXexbfGmW#fn.1
[4]: tmp.xKXexbfGmW#fn.2
[5]: https://codeberg.org/jao/consult-recoll
[6]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el
[7]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L133
[8]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L174
[9]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L143
[10]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L126
[11]: https://jao.io/img/org-notes.png
[12]: tmp.xKXexbfGmW#fn.3
[13]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L161
[14]: https://www.lesbonscomptes.com/recoll/
[15]: tmp.xKXexbfGmW#fn.4
[16]: tmp.xKXexbfGmW#fnr.1
[17]: tmp.xKXexbfGmW#fnr.2
[18]: tmp.xKXexbfGmW#fnr.3
[19]: tmp.xKXexbfGmW#fnr.4
[20]: https://jao.io/blog/tags.html
[21]: https://jao.io/blog/tag-emacs.html

View File

@ -1,525 +0,0 @@
[#]: subject: "Python Microservices Using Flask on Kubernetes"
[#]: via: "https://www.opensourceforu.com/2022/09/python-microservices-using-flask-on-kubernetes/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Python Microservices Using Flask on Kubernetes
======
*Microservices follow domain driven design (DDD), irrespective of the platform on which they are developed. Python microservices are not an exception. The object-oriented features of Python 3 make it even easier to model services on the lines of DDD. Part 10 of this series demonstrates how to deploy FindService of user management system as a Python microservice on Kubernetes.*
The power of microservices architecture lies partly in its polyglot nature. The enterprises decompose their functionality into a set of microservices and each team chooses a platform of its choice.
Our user management system was already decomposed into four microservices, namely, AddService, FindService, SearchService and JournalService. The AddService was developed on the Java platform and deployed on the Kubernetes cluster for resilience and scalability. This doesn't mean that the remaining services are also developed in Java. We are free to choose a suitable platform for individual services.
Let us pick Python as the platform for developing the FindService. The model for FindService has already been designed (refer to the March 2022 issue of Open Source For You). We only need to convert this model into code and configuration.
### The Pythonic approach
Python is a general-purpose programming language that has been around for about three decades. In the early days, it was the first choice for automation scripting. However, with frameworks like Django and Flask, its popularity has been growing and it is now adopted in various other domains like enterprise application development. Data science and machine learning pushed the envelope further and Python is now one of the top three programming languages.
Many people attribute the success of Python to its ease of coding. This is only partially true. As long as your objective is to develop small scripts, Python is just like a toy. You love it. However, the moment you enter the domain of serious large-scale application development, you will have to handle lots of ifs and buts, and Python becomes as good as or as bad as any other platform. For example, take an object-oriented approach! Many Python developers may not even be aware of the fact that Python supports classes, inheritance, etc. Python does support full-blown object-oriented development, but in its own way — the Pythonic way! Let us explore it!
### The domain model
AddService adds the users to the system by saving the data into a MySQL database. The objective of the FindService is to offer a REST API to find the users by their name. The domain model is reproduced in Figure 1. It primarily consists of value objects like Name, PhoneNumber along with User entity and UserRepository.
![Figure 1: The domain model of FindService][1]
Let us begin with the Name. Since it is a value object, it must be validated by the time it is created and must remain immutable. The basic structure looks like this:
```
class Name:
    value: str
    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
As you can see, the Name consists of a value attribute of type str. As part of the post initialization, the value is validated.
Python 3.7 offers @dataclass decorator, which offers many features of a data-carrying class out-of-the-box like constructors, comparison operators, etc.
Following is the decorated Name:
```
from dataclasses import dataclass

@dataclass
class Name:
    value: str
    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
The following code can create an object Name:
```
name = Name("Krishna")
```
The value attribute can be read or written as follows:
```
name.value = "Mohan"
print(name.value)
```
Another Name object can be compared as easily as follows:
```
other = Name("Mohan")
if name == other: print("same")
```
As you can see, the objects are compared by their values, not by their references. This is all out-of-the-box. We can also make the object immutable, by freezing. Here is the final version of Name as a value object:
```
from dataclasses import dataclass

@dataclass(frozen=True)
class Name:
    value: str
    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
The PhoneNumber also follows a similar approach, as it is also a value object:
```
@dataclass(frozen=True)
class PhoneNumber:
    value: int
    def __post_init__(self):
        if self.value < 9000000000:
            raise ValueError("Invalid Phone Number")
```
The User class is an entity, not a value object. In other words, the User is not immutable. Here is the structure:
```
from dataclasses import dataclass
import datetime

@dataclass
class User:
    _name: Name
    _phone: PhoneNumber
    _since: datetime.datetime
    def __post_init__(self):
        if self._name is None or self._phone is None:
            raise ValueError("Invalid user")
        if self._since is None:
            self._since = datetime.datetime.now()
```
You can observe that the User is not frozen as we want it to be mutable. However, we do not want all the properties mutable. The identity field like _name and fields like _since are not expected to be modified. Then, how can this be controlled?
Python 3 offers the so-called Descriptor protocol. It helps us in defining the getters and setters appropriately. Let us add the getters to all the three fields of User using the @property decorator:
```
    @property
    def name(self) -> Name:
        return self._name

    @property
    def phone(self) -> PhoneNumber:
        return self._phone

    @property
    def since(self) -> datetime.datetime:
        return self._since
```
And the setter for the phone field can be added using @<field>.setter decoration:
```
    @phone.setter
    def phone(self, phone: PhoneNumber) -> None:
        if phone is None:
            raise ValueError("Invalid phone")
        self._phone = phone
```
The User can also be given a method for easy print representation by overriding the __str__() function:
```
    def __str__(self):
        return self.name.value + " [" + str(self.phone.value) + "] since " + str(self.since)
```
With this the entities and the value objects of the domain model are ready. Creating an exception class is as easy as follows:
```
class UserNotFoundException(Exception):
    pass
```
The only other one remaining in the domain model is the UserRepository. Python offers a useful module called abc for building abstract methods and abstract classes. Since UserRepository is only an interface, we can use the abc package.
Any Python class that extends abc.ABC becomes abstract. Any function with the @abc.abstractmethod decorator becomes an abstract function. Here is the resultant structure of UserRepository:
```
from abc import ABC, abstractmethod

class UserRepository(ABC):
    @abstractmethod
    def fetch(self, name: Name) -> User:
        pass
```
The UserRepository follows the repository pattern. In other words, it offers appropriate CRUD operations on the User entity without exposing the underlying data storage semantics. In our case, we need only fetch() operation since FindService only finds the users.
Since the UserRepository is an abstract class, we cannot create instance objects from this class. A concrete class must implement this abstract class for object creation.
### The data layer
The UserRepositoryImpl offers the concrete implementation of UserRepository:
```
class UserRepositoryImpl(UserRepository):
    def fetch(self, name: Name) -> User: pass
```
Since the AddService stores the data of the users in a MySQL database server, the UserRepositoryImpl also must connect to the same database server to retrieve the data. The code for connecting to the database is given below. Observe that we are using the connector library of MySQL.
```
from mysql.connector import connect, Error
```
```
class UserRepositoryImpl(UserRepository):
    def fetch(self, name: Name) -> User:
        try:
            with connect(
                host="mysqldb",
                user="root",
                password="admin",
                database="glarimy",
            ) as connection:
                with connection.cursor() as cursor:
                    cursor.execute("SELECT * FROM ums_users where name=%s", (name.value,))
                    row = cursor.fetchone()
                    if row is None:
                        raise UserNotFoundException()
                    else:
                        return User(Name(row[0]), PhoneNumber(row[1]), row[2])
        except Error as e:
            raise e
```
In the above snippet, we are connecting to a database server named mysqldb using root as the user and admin as the password, in order to use the database schema named glarimy. It is fine to have such information in the code for an illustration, but surely not a suggested approach in production as it exposes sensitive information.
The logic of the fetch() operation is quite intuitive. It executes a SELECT query against the ums_users table. Recollect that the AddService was writing the user data into the same table. In case the SELECT query returns no records, the fetch() function throws UserNotFoundException. Otherwise, it constructs the User entity from the record and returns it to the caller. Nothing unusual.
### The application layer
And finally, we need to build the application layer. The model is reproduced in Figure 2. It consists of just two classes: a controller and a DTO.
![Figure 2: The application layer of FindService][2]
As we already know, a DTO is just a data container without any business logic. It is primarily used for carrying data between the FindService and the outside world. We just offer a way to convert the UserRecord into a dictionary for JSON marshalling in the REST layer:
```
class UserRecord:
    def toJSON(self):
        return {
            "name": self.name,
            "phone": self.phone,
            "since": self.since
        }
```
The job of a controller is to convert DTOs to domain objects for invoking the domain services and vice versa, as can be observed in the find() operation.
```
class UserController:
    def __init__(self):
        self._repo = UserRepositoryImpl()

    def find(self, name: str):
        try:
            user: User = self._repo.fetch(Name(name))
            record: UserRecord = UserRecord()
            record.name = user.name.value
            record.phone = user.phone.value
            record.since = user.since
            return record
        except UserNotFoundException as e:
            return None
```
The find() operation receives a string as the name, converts it into a Name object and invokes the UserRepository to fetch the corresponding User object. If it is found, a UserRecord is created using the retrieved User object. Recollect that it is necessary to convert the domain objects into DTOs to hide the domain model from the external world.
The UserController need not have multiple instances. It can be a singleton as well. By overriding the __new__ operation, it can be modelled as a singleton:
```
class UserController:
    def __new__(cls):
        # keep a single shared instance
        if not hasattr(cls, "instance"):
            cls.instance = super().__new__(cls)
        return cls.instance

    def __init__(self):
        self._repo = UserRepositoryImpl()

    def find(self, name: str):
        try:
            user: User = self._repo.fetch(Name(name))
            record: UserRecord = UserRecord()
            record.name = user.name.value
            record.phone = user.phone.value
            record.since = user.since
            return record
        except UserNotFoundException:
            return None
```
We are done with implementing the model of the FindService fully. The only task remaining is to expose it as a REST service.
### The REST API
Our FindService offers only one API and that is to find a user by name. The obvious URI is as follows:
```
GET /user/{name}
```
This API is expected to find the user with the supplied name and return the details of the user, such as the phone number, in JSON format. In case no such user is found, the API is expected to return a 404 status.
We can use the Flask framework to build the REST API. This framework was originally built for developing Web applications using Python. It is further extended to support the REST views besides the HTML views. We pick this framework for its simplicity.
Start by creating a Flask application:
```
from flask import Flask
app = Flask(__name__)
```
Then define the routes for the Flask application as simple functions:
```
@app.route("/user/<name>")
def get(name): pass
```
Observe that the @app.route is mapped to the API /user/<name> on one side and to the function get() on the other side.
As you may already have figured out, this get() function will be invoked every time the user accesses the API with a URI like http://server:port/user/Krishna. Flask is intelligent enough to extract Krishna as the name from the URI and pass it to the get() function.
The get() function is straightforward. It asks the controller to find the user and returns the same after marshalling it to the JSON format along with the usual HTTP headers. In case the controller returns None, the get() function returns the response with an appropriate HTTP status.
```
from flask import jsonify, abort
controller = UserController()
record = controller.find(name)
if record is None:
    abort(404)
else:
    resp = jsonify(record.toJSON())
    resp.status_code = 200
    return resp
```
And, finally, the Flask app needs to be served. We can use the waitress server for this purpose.
```
from waitress import serve
serve(app, host="0.0.0.0", port=8080)
```
In the above snippet, the app is served on port 8080 on the local host.
The final code looks like this:
```
from flask import Flask, jsonify, abort
from waitress import serve
app = Flask(__name__)
@app.route("/user/<name>")
def get(name):
    controller = UserController()
    record = controller.find(name)
    if record is None:
        abort(404)
    else:
        resp = jsonify(record.toJSON())
        resp.status_code = 200
        return resp

serve(app, host="0.0.0.0", port=8080)
```
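Before moving on to containers, it can be useful to run the service directly and hit the API once. This is only a quick sanity check; it assumes the code above is saved as `app.py` and that a MySQL server reachable under the hostname `mysqldb` is available (the hostname hard-coded in UserRepositoryImpl):

```
# Install the modules used by the service and start it in the background
pip3 install flask mysql-connector-python waitress
python3 app.py &

# Expect the user's details in JSON, or an HTTP 404 if no such user exists
curl -i http://localhost:8080/user/Krishna
```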
### Deployment
The FindService is now ready with code. It has its domain model, data layer, and application layer besides the REST API. The next step is to build the service, containerise it, and then deploy it on Kubernetes. The process is in no way different from any other service, though there are some Python-specific steps.
Before proceeding further, let us have a look at the folder/file structure:
```
+ ums-find-service
+ ums
- domain.py
- data.py
- app.py
- Dockerfile
- requirements.txt
- kube-find-deployment.yml
```
As you can observe, the whole work is under the ums-find-service folder. It contains the code in the ums folder and configurations in Dockerfile, requirements.txt and kube-find-deployment.yml files.
The domain.py consists of the domain model classes, data.py consists of UserRepositoryImpl and app.py consists of the rest of the code. Since we have seen the code already, let us move on to the configuration files.
The first one is the file named requirements.txt. It declares the external dependencies for the Python system to download and install. We need to populate it with every external Python module that we use in the FindService. As you know, we used MySQL connector, Flask and Waitress modules. Hence the following is the content of the requirements.txt.
```
Flask==2.1.1
Flask_RESTful
mysql-connector-python
waitress
```
The second step is to declare the manifestation for dockerisation in Dockerfile. Here it is:
```
FROM python:3.8-slim-buster
WORKDIR /ums
ADD ums /ums
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 8080
ENTRYPOINT ["python"]
CMD ["/ums/app.py"]
```
In summary, we used Python 3.8 as the baseline and moved our code from the ums folder to a corresponding folder in the Docker container, besides moving the requirements.txt. Then we instructed the container to run the pip3 install command. And, finally, we exposed port 8080 (since the waitress was running on that port).
In order to run the service, we instructed the container to use the following command:
```
python /ums/app.py
```
Once the Dockerfile is ready, run the following command from within the ums-find-service folder to create the Dockerised image:
```
docker build -t glarimy/ums-find-service .
```
It creates the Docker image, which can be found using the following command:
```
docker images
```
Push the image to Docker Hub if appropriate. You may need to log in to Docker first:
```
docker login
docker push glarimy/ums-find-service
```
And the last step is to build the manifest for the Kubernetes deployment.
We have already covered the way to set up the Kubernetes cluster, and deploy and use services, in the previous part. I am assuming that we still have the manifest file we used in the previous part for deploying the AddService, MySQL, Kafka and Zookeeper. We only need to add the following entries into the kube-find-deployment.yml file:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ums-find-service
  labels:
    app: ums-find-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ums-find-service
  template:
    metadata:
      labels:
        app: ums-find-service
    spec:
      containers:
      - name: ums-find-service
        image: glarimy/ums-find-service
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ums-find-service
  labels:
    name: ums-find-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: ums-find-service
```
The first part of the above manifest declares a Deployment of FindService from the image glarimy/ums-find-service with three replicas, exposing port 8080. The latter part of the manifest declares a Kubernetes Service as the front-end for the FindService deployment. Recollect that the MySQL service, named mysqldb, was already part of the manifest from the previous part.
Run the following command to deploy the manifest on a Kubernetes cluster:
```
kubectl create -f kube-find-deployment.yml
```
Once the deployment is finished, you can verify the pods and services using the following command:
```
kubectl get services
```
It gives an output as shown in Figure 3.
![Figure 3: Kubernetes services][3]
It lists all the services running on the cluster. Make a note of the external-ip of the FindService and use the curl command to invoke the service:
```
curl http://10.98.45.187:8080/user/KrishnaMohan
```
Note that the IP address 10.98.45.187 corresponds to the FindService, as found in Figure 3.
If we have used the AddService to create a user by the name KrishnaMohan, the above curl command looks like what is shown in Figure 4.
![Figure 4: FindService][4]
With the AddService and FindService, along with the required back-end services for storage and messaging, the architecture of the user management system (UMS) stands as shown in Figure 5, at this point. You can observe that the end-user uses the IP address of the ums-add-service for adding a new user, and the IP address of the ums-find-service for finding existing users. Each of these Kubernetes services is backed by three pods with the corresponding containers. Also note that the same mysqldb service is used for storing and retrieving the user data.
![Figure 5: UMS with AddService and FindService][5]
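If you want to confirm that the three replicas declared in the manifest are actually running, a quick label-based query can be used (the label value comes from the manifest above):

```
# List the pods backing the FindService deployment
kubectl get pods -l app=ums-find-service
```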
### What about the other services?
The UMS consists of two more services, namely, SearchService and JournalService. We will design these services in the next part of this series on the Node platform, and deploy them on the same Kubernetes cluster to demonstrate the real power of polyglot microservices architecture. We will conclude by observing some design patterns pertaining to microservices.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/python-microservices-using-flask-on-kubernetes/
作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-1-The-domain-model-of-FindService-1.png
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-2-The-application-layer-of-FindService.png
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-3-Kubernetes-services-1.png
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-4-FindService.png
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-5-UMS-with-AddService-and-FindService.png

View File

@ -1,323 +0,0 @@
[#]: subject: "How To Find Default Gateway IP Address In Linux And Unix From Commandline"
[#]: via: "https://ostechnix.com/find-default-gateway-linux/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How To Find Default Gateway IP Address In Linux And Unix From Commandline
======
5 Ways To Find Gateway Or Router IP Address In Linux
A **gateway** is a node or a router that allows hosts on a local network to communicate with hosts on other networks. Without a gateway, devices on a local network won't be able to reach hosts outside that network. To put this another way, the gateway acts as an access point that passes network data from a local network to a remote network. In this guide, we will see all the possible ways to **find the default gateway in Linux** and **Unix** from the command line.
#### Contents
1. Find Default Gateway In Linux
   1. Find Default Gateway Using ip Command
   2. Display Default Gateway IP Address Using route Command
   3. View Gateway IP Address Using netstat Command
   4. Print Default Gateway IP Address Or Router IP Address Using routel Command
   5. Find Gateway From Ethernet Configuration Files
2. Conclusion
### Find Default Gateway In Linux
There are various command line tools available to view the gateway IP address in Linux. The most commonly used tools are **ip**, **route**, and **netstat**. We will see how to check the default gateway using each tool, with examples.
#### 1. Find Default Gateway Using ip Command
The **ip** command is used to show and manipulate routing, network devices, interfaces and tunnels in Linux.
To find the default gateway or Router IP address, simply run:
```
$ ip route
```
Or,
```
$ ip r
```
Or,
```
$ ip route show
```
**Sample output:**
```
default via 192.168.1.101 dev eth0 proto static metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.20 metric 100
```
Did you see the line **"default via 192.168.1.101"** in the above output? This is the default gateway. So my default gateway is **192.168.1.101**.
You can use **-4** with `ip route` command to **display the IPv4 gateway** only:
```
$ ip -4 route
```
And, use `-6` to **display the IPv6 gateway** only:
```
$ ip -6 route
```
As you noticed in the output, the IP address and the subnet details are also shown. If you want to display ONLY the default gateway and exclude all other details from the output, you can use the `awk` command with `ip route` like below.
To print the gateway IP address with the `ip route` and `awk` commands, run:
```
$ ip route | awk '/^default/{print $3}'
```
Or,
```
$ ip route show default | awk '{print $3}'
```
This will list only the gateway.
**Sample output:**
```
192.168.1.101
```
![Find Default Gateway Using ip Command][1]
You can also use **[grep][2]** command with `ip route` to filter the default gateway.
```
$ ip route | grep default
default via 192.168.1.101 dev eth0 proto static metric 100
```
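If you need the gateway address inside a shell script, the awk approach shown above can be captured in a variable. A small sketch:

```
# Store the default gateway in a variable for later use
GW=$(ip route show default | awk '{print $3; exit}')
echo "Default gateway: $GW"
```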
`ip route` is the recommended command to find the default gateway IP address in the latest Linux distributions. However, some of you may still be using legacy tools like **route** and `netstat`. Old habits die hard, right? The following sections explain how to determine the gateway in Linux using the `route` and `netstat` commands.
#### 2. Display Default Gateway IP Address Using route Command
The **route** command is used to show and manipulate routing table in older Linux distributions, for example RHEL 6, CentOS 6.
If you're using those older Linux distributions, you can use the `route` command to display the default gateway.
Please note that the `route` tool is deprecated and replaced with `ip route` command in the latest Linux distributions. If you still want to use `route` for any reason, you need to install it.
First, we need to check which package provides `route` command. To do so, run the following command on your RHEL-based system:
```
$ dnf provides route
```
**Sample output:**
```
net-tools-2.0-0.52.20160912git.el8.x86_64 : Basic networking tools
Repo : @System
Matched from:
Filename : /usr/sbin/route
net-tools-2.0-0.52.20160912git.el8.x86_64 : Basic networking tools
Repo : baseos
Matched from:
Filename : /usr/sbin/route
```
As you can see in the above output, the net-tools package provides the `route` command. So, let us install it using command:
```
$ sudo dnf install net-tools
```
Now, run `route` command with `-n` flag to display the gateway IP address or router IP address in your Linux system:
```
$ route -n
```
**Sample output:**
```
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.101 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
```
![Display Default Gateway IP Address Using route Command][3]
As you see in the above output, the gateway IP address is 192.168.1.101. You will also see the two letters **"UG"** under Flags section. The letter **"U"** indicates the interface is **UP** and **G** stands for Gateway.
#### 3. View Gateway IP Address Using netstat Command
**Netstat** prints information about the Linux networking subsystem. Using netstat tool, we can print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships in Linux and Unix systems.
Netstat is part of net-tools package, so make sure you've installed it in your Linux system. The following commands install net-tools package in RHEL-based systems:
```
$ sudo dnf install net-tools
```
To print the default gateway IP address using `netstat` command, run:
```
$ netstat -rn
```
**Sample output:**
```
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.1.101 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
![View Gateway IP Address Using netstat Command][4]
The `netstat` command's output is the same as the `route` command's output. As per the above output, the gateway IP address is 192.168.1.101. In the Flags column, **U** indicates that the interface associated with the gateway is up, and **G** indicates a gateway.
Please note that `netstat` is also deprecated and it is recommended to use **"ss"** command instead of netstat.
#### 4. Print Default Gateway IP Address Or Router IP Address Using routel Command
**routel** is a script that lists routes with a pretty output format, in a form that some might consider easier to interpret than the equivalent `ip route` listing.
The routel script is shipped as part of the iproute2 package.
To print the default gateway or router IP address, run the routel script without any flags, like below:
```
$ routel
```
**Sample output:**
```
target gateway source proto scope dev tbl
default 192.168.1.101 static eth0
172.17.0.0/ 16 172.17.0.1 kernel linkdocker0
192.168.1.0/ 24 192.168.1.20 kernel link eth0
127.0.0.0/ 8 local 127.0.0.1 kernel host lo local
127.0.0.1 local 127.0.0.1 kernel host lo local
127.255.255.255 broadcast 127.0.0.1 kernel link lo local
172.17.0.1 local 172.17.0.1 kernel hostdocker0 local
172.17.255.255 broadcast 172.17.0.1 kernel linkdocker0 local
192.168.1.20 local 192.168.1.20 kernel host eth0 local
192.168.1.255 broadcast 192.168.1.20 kernel link eth0 local
::1 kernel lo
::/ 96 unreachable lo
::ffff:0.0.0.0/ 96 unreachable lo
2002:a00::/ 24 unreachable lo
2002:7f00::/ 24 unreachable lo
2002:a9fe::/ 32 unreachable lo
2002:ac10::/ 28 unreachable lo
2002:c0a8::/ 32 unreachable lo
2002:e000::/ 19 unreachable lo
3ffe:ffff::/ 32 unreachable lo
fe80::/ 64 kernel eth0
::1 local kernel lo local
fe80::d085:cff:fec7:c1c3 local kernel eth0 local
```
![Print Default Gateway IP Address Or Router IP Address Using routel Command][5]
To print only the default gateway, run routel with `grep` like below:
```
$ routel | grep default
default 192.168.1.101 static eth0
```
#### 5. Find Gateway From Ethernet Configuration Files
If you have **[configured static IP address in your Linux or Unix][6]** system, you can view the default gateway or router IP address by looking at the network configuration files.
In RPM-based systems like Fedora, RHEL, CentOS, AlmaLinux and Rocky Linux, the network interface card (**NIC**) configuration files are stored under the **/etc/sysconfig/network-scripts/** directory.
Find the name of the network card:
```
# ip link show
```
**Sample output:**
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether d2:85:0c:c7:c1:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
```
The network card name is **eth0**. So let us open the configuration file of this NIC:
```
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
```
**Sample output:**
```
DEVICE=eth0
ONBOOT=yes
UUID=eb6b6a7c-37f5-11ed-a59a-a0e70bdf3dfb
BOOTPROTO=none
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.1.101
DNS1=8.8.8.8
```
As you see above, the gateway IP is `192.168.1.101`.
In Debian, Ubuntu and its derivatives, all network configuration files are stored under **/etc/network/** directory.
```
$ cat /etc/network/interfaces
```
**Sample output:**
```
auto ens18
iface ens18 inet static
address 192.168.1.150
netmask 255.255.255.0
gateway 192.168.1.101
dns-nameservers 8.8.8.8
```
Please note that this method works only if the IP address is configured manually. For a DHCP-enabled network, you need to use one of the previous four methods.
### Conclusion
In this guide, we listed 5 different ways to find default gateway in Linux and Unix operating systems. We also have included sample commands to display the gateway/router IP address in each method. Hope this helps.
--------------------------------------------------------------------------------
via: https://ostechnix.com/find-default-gateway-linux/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/wp-content/uploads/2022/09/Find-Default-Gateway-Using-ip-Command.png
[2]: https://ostechnix.com/the-grep-command-tutorial-with-examples-for-beginners/
[3]: https://ostechnix.com/wp-content/uploads/2022/09/Display-Default-Gateway-IP-Address-Using-route-Command.png
[4]: https://ostechnix.com/wp-content/uploads/2022/09/View-Gateway-IP-Address-Using-netstat-Command.png
[5]: https://ostechnix.com/wp-content/uploads/2022/09/Print-Default-Gateway-IP-Address-Or-Router-IP-Address-Using-routel-Command.png
[6]: https://ostechnix.com/configure-static-ip-address-linux-unix/

View File

@ -1,185 +0,0 @@
[#]: subject: "PyLint: The good, the bad, and the ugly"
[#]: via: "https://opensource.com/article/22/9/pylint-good-bad-ugly"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
PyLint: The good, the bad, and the ugly
======
Get the most out of PyLint.
![Python programming language logo with question marks][1]
Image by: Opensource.com
Hot take: PyLint is actually good!
"PyLint can save your life" is an exaggeration, but not as much as you might think! PyLint can keep you from really really hard to find and complicated bugs. At worst, it can save you the time of a test run. At best, it can help you avoid complicated production mistakes.
### The good
I'm embarrassed to say how common this can be. Naming tests is perpetually *weird*: Nothing cares about the name, and there's often not a natural name to be found. For instance, look at this code:
```
def test_add_small():
    # Math, am I right?
    assert 1 + 1 == 3
   
def test_add_large():
    assert 5 + 6 == 11
   
def test_add_small():
    assert 1 + 10 == 11
```
The test works:
```
collected 2 items                                                                        
test.py ..
2 passed
```
In reality, these files can be hundreds of lines long, and the person adding the new test might not be aware of all the names. Unless someone is looking at test output carefully, everything looks fine.
Worst of all, the *addition of the overriding test*, the *breakage of the overridden test*, and the *problem that results in prod* might be separated by days, months, or even years.
### PyLint finds it
But like a good friend, PyLint is there for you.
```
test.py:8:0: E0102: function already defined line 1
     (function-redefined)
```
### The bad
Like a 90s sitcom, the more you get into PyLint, the more it becomes problematic. This is completely reasonable code for an inventory modeling program:
```
"""Inventory abstractions"""
import attrs
@attrs.define
class Laptop:
    """A laptop"""
    ident: str
    cpu: str
```
It seems that PyLint has opinions (probably formed in the 90s) and is not afraid to state them as facts:
```
$ pylint laptop.py | sed -n '/^laptop/s/[^ ]*: //p'
R0903: Too few public methods (0/2) (too-few-public-methods)
```
### The ugly
Ever wanted to add your own unvetted opinion to a tool used by millions? PyLint has 12 million monthly downloads.
> "People will just disable the whole check if it's too picky." —PyLint issue 6987, July 3rd, 2022
The attitude it takes towards adding a test with potentially many false positives is...*"eh."*
### Making it work for you
PyLint is fine, but you need to interact with it carefully. Here are the three things I recommend to make PyLint work for you.
#### 1. Pin it
Pin the PyLint version you use to avoid any surprises!
In your `.toml` file:
```
[project.optional-dependencies]
pylint = ["pylint"]
```
In your code:
```
from unittest import mock
```
This corresponds with code like this:
```
# noxfile.py
...
@nox.session(python=VERSIONS[-1])
def refresh_deps(session):
    """Refresh the requirements-*.txt files"""
    session.install("pip-tools")
    for deps in [..., "pylint"]:
        session.run(
            "pip-compile",
            "--extra",
            deps,
            "pyproject.toml",
            "--output-file",
            f"requirements-{deps}.txt",
        )
```
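After the `refresh_deps` session runs, the generated requirements file contains an exact version. A quick way to confirm the pin (file name as produced by the session above) is:

```
# Show the pinned PyLint version written by pip-compile
grep '^pylint==' requirements-pylint.txt
```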
#### 2. Default deny
Disable all checks. Then enable ones that you think have a high value-to-false-positive ratio. (Not just false-negative-to-false-positive ratio!)
```
# noxfile.py
...
@nox.session(python="3.10")
def lint(session):
    files = ["src/", "noxfile.py"]
    session.install("-r", "requirements-pylint.txt")
    session.install("-e", ".")
    session.run(
        "pylint",
        "--disable=all",
        *(f"--enable={checker}" for checker in checkers),
        "src",
    )
```
#### 3. Checkers
These are some of the checkers I like. They enforce consistency in the project and avoid some obvious mistakes.
```
checkers = [
    "missing-class-docstring",
    "missing-function-docstring",
    "missing-module-docstring",
    "function-redefined",
]
```
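Outside of nox, the same default-deny approach can be tried directly on the command line; this is just a sketch of the equivalent invocation:

```
# Disable everything, then enable only the selected checkers
pylint --disable=all \
       --enable=missing-class-docstring,missing-function-docstring,missing-module-docstring,function-redefined \
       src/
```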
### Using PyLint
You can take just the good parts of PyLint. Run it in CI to keep consistency, and use the highest value checkers.
Lose the bad parts: Default deny checkers.
Avoid the ugly parts: Pin the version to avoid surprises.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/pylint-good-bad-ugly
作者:[Moshe Zadka][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/python_programming_question.png

View File

@ -1,169 +0,0 @@
[#]: subject: "5 Git configurations I make on Linux"
[#]: via: "https://opensource.com/article/22/9/git-configuration-linux"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
5 Git configurations I make on Linux
======
This is a simple guide to quickly get started working with Git and a few of its many configuration options.
![Linux keys on the keyboard for a desktop computer][1]
Setting up Git on Linux is simple, but here are the five things I do to get the perfect configuration:
1. [Create global configuration][2]
2. [Set default name][3]
3. [Set default email address][4]
4. [Set default branch name][5]
5. [Set default editor][6]
I manage my code, shell scripts, and documentation versioning using Git. This means that for each new project I start, the first step is to create a directory for its content and make it into a Git repository:
```
$ mkdir newproject
$ cd newproject
$ git init
```
There are certain general settings that I always want. Not many, but enough that I don't want to have to repeat the configuration each time. I like to take advantage of the *global* configuration capability of Git.
Git offers the `git config` command for manual configuration but this is a lot of work with certain caveats. For example, a common item to set is your email address. You can set it by running `git config user.email` followed by your email address. However, this will only take effect if you are in an existing Git directory:
```
$ git config user.email alan@opensource.com
fatal: not in a git directory
```
Plus, when this command is run within a Git repository, it only configures that specific one. The process must be repeated for new repositories. I can avoid this repetition by setting it globally. The *--global* option will instruct Git to write the email address to the global configuration file, `~/.gitconfig`, even creating it if necessary:
> Remember, the tilde (~) character represents your home directory. In my case that is /home/alan.
```
$ git config --global user.email alan@opensource.com
$ cat ~/.gitconfig
[user]
        email = alan@opensource.com
```
The downside here is if you have a large list of preferred settings, you will have a lot of commands to enter. This is time-consuming and prone to human error. Git provides an even more efficient and convenient way to directly edit your global configuration file—that is the first item on my list!
### 1. Create global configuration
If you have just started using Git, you may not have this file at all. Not to worry, let's skip the searching and get started. Just use the `--edit` option:
```
$ git config --global --edit
```
If no file is found, Git will generate one with the following content and open it in your shell environment's default editor:
```
# This is Git's per-user configuration file.
[user]
# Please adapt and uncomment the following lines:
#       name = Alan
#       email = alan@hopper
~
~
~
"~/.gitconfig" 5L, 155B                                     1,1           All
```
Now that we have opened the editor and Git has created the global configuration file behind the scenes, we can continue with the rest of the settings.
### 2. Set default name
Name is the first directive in the file, so let's start with that. The command line to set mine is `git config --global user.name "Alan Formy-Duval"`. Instead of running this command, just edit the *name* directive in the configuration file:
```
name = Alan Formy-Duval
```
### 3. Set default email address
The email address is the second directive, so let's update it. By default, Git uses your system-provided name and email address. If this is incorrect or you prefer something different, you can specify it in the configuration file. In fact, if you have not configured them, Git will let you know with a friendly message the first time you commit:
```
Committer: Alan <alan@hopper>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate....
```
The command line to set mine is `git config --global user.email "alan@opensource.com"`. Instead, edit the *email* directive in the configuration file and provide your preferred address:
```
email = alan@opensource.com
```
The last two settings that I like to set are the default branch name and the default editor. These directives will need to be added while you are still in the editor.
### 4. Set default branch name
There is currently a trend to move away from the usage of the word *master* as the default branch name. As a matter of fact, Git will highlight this trend with a friendly message upon initialization of a new repository:
```
$ git init
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint:   git config --global init.defaultBranch <name>
```
This directive, named *defaultBranch*, needs to be located in a new section named *init*. It is now generally accepted that many coders use the word *main* for their default branch. This is what I like to use. Add this section followed by the directive to the configuration:
```
[init]
            defaultBranch = main
```
### 5. Set default editor
The fifth setting that I like to set is the default editor. This refers to the editor that Git will present for typing your commit message each time you commit changes to your repository. Everyone has a preference whether it is [nano][8], [emacs][9], [vi][10], or something else. I'm happy with vi. So, to set your editor, add a *core* section that includes the *editor* directive:
```
[core]
            editor = vi
```
That's the last one. Exit the editor. Git saves the global configuration file in your home directory. If you run the editing command again, you will see all of the content. Notice that the configuration file is a plain text file, so it can also be viewed using text tools such as the [cat command][11]. This is how mine appears:
```
$ cat ~/.gitconfig
[user]
        email = alan@opensource.com
        name = Alan Formy-Duval
[core]
        editor = vi
[init]
        defaultBranch = main
```
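To double-check the result without opening the file, you can also ask Git to list the global settings; the output will look something like this:

```
$ git config --global --list
user.email=alan@opensource.com
user.name=Alan Formy-Duval
core.editor=vi
init.defaultbranch=main
```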
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/git-configuration-linux
作者:[Alan Formy-Duval][a]
选题:[lkxed][b]
译者:[Donkey-Hao](https://github.com/Donkey-Hao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/linux_keyboard_desktop.png
[2]: https://opensource.com/article/22/9/git-configuration-linux#create-global-configuration
[3]: https://opensource.com/article/22/9/git-configuration-linux#set-default-name
[4]: https://opensource.com/article/22/9/git-configuration-linux#set-default-email-address
[5]: https://opensource.com/article/22/9/git-configuration-linux#set-default-branch-name
[6]: https://opensource.com/article/22/9/git-configuration-linux#set-default-editor
[7]: https://opensource.com/mailto:alan@opensource.com
[8]: https://opensource.com/article/20/12/gnu-nano
[9]: https://opensource.com/resources/what-emacs
[10]: https://opensource.com/article/19/3/getting-started-vim
[11]: https://opensource.com/article/19/2/getting-started-cat-command

View File

@ -1,206 +0,0 @@
[#]: subject: "GUI Apps for Package Management in Arch Linux"
[#]: via: "https://itsfoss.com/arch-linux-gui-package-managers/"
[#]: author: "Anuj Sharma https://itsfoss.com/author/anuj/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
GUI Apps for Package Management in Arch Linux
======
[Installing Arch Linux][1] is considered challenging. This is why [several Arch-based distributions exist][2] to make things easier by providing a graphical installer.
Even if you manage to install Arch Linux, youll notice that it relies heavily on the command line. Youll have to open the terminal if you have to install applications or update the system.
Yes! Arch Linux does not have a software center. Shocking for many, I know.
If you feel uncomfortable using the command line for managing applications, you can install a GUI tool. This helps in searching for packages and installing and removing them from the comfort of the GUI.
Wondering which graphical frontend for [pacman commands][3] you should use? I have some suggestions to help you get started.
**Please note that some software managers are desktop environment specific.**
### 1. Apper
![Installing Firefox using Apper][4]
Apper is a minimal Qt5 application and package manager using PackageKit which also supports AppStream and automatic updates. But, **there is no AUR support**.
To install it from the official repos use the command below.
```
sudo pacman -Syu apper
```
[Apper on GitLab][5]
### 2. Deepin App Store
![Installing Firefox using Deepin App Store][6]
Deepin App Store is an app store for the Deepin Desktop Environment built with DTK (Qt5). It uses PackageKit with AppStream support and also provides system update notifications. There is **no AUR support**.
To install it, use the command below.
```
sudo pacman -Syu deepin-store
```
[Deepin Store on Github][7]
### 3. Discover
![Installing Firefox using Discover][8]
Discover needs no introduction for KDE Plasma users. It is a Qt-based application manager using PackageKit which supports AppStream, Flatpak and Firmware updates.
To install Flatpaks and firmware updates from Discover, the `flatpak` and `fwupd` packages need to be installed, respectively.
There is no AUR support.
```
sudo pacman -Syu discover packagekit-qt5
```
[Discover on GitLab][9]
### 4. GNOME PackageKit
![Installing Firefox using GNOME PackageKit][10]
GNOME PackageKit is a GTK3 package manager using PackageKit which supports AppStream. Unfortunately, there is **no AUR support**.
To install it from the official repos use the command below.
```
sudo pacman -Syu gnome-packagekit
```
[PackageKit on freedesktop][11]
### 5. GNOME Software
![Installing Firefox using GNOME Software][12]
GNOME Software needs no introduction for GNOME desktop users. It is the GTK4 application manager using PackageKit which supports AppStream, Flatpak and Firmware updates.
There is no AUR support. To install Flatpaks and firmware updates from GNOME Software, the `flatpak` and `fwupd` packages need to be installed, respectively.
Install it using:
```
sudo pacman -Syu gnome-software-packagekit-plugin gnome-software
```
[GNOME Software on GitLab][13]
### 6. tkPacman
![Installing Firefox using tkPacman][14]
It is a Tk pacman wrapper written in Tcl. The interface is similar to [Synaptic Package Manager][15].
It is quite lightweight, with no GTK/Qt dependencies, as it uses the Tcl/Tk GUI toolkit.
It does not support AUR which is ironic because you need to install it from [AUR][16]. You need to install an [AUR helper][17] like yay beforehand.
```
yay -Syu tkpacman
```
[tkPacman on Sourceforge][18]
### 7. Octopi
![Installing Firefox using Octopi][19]
Consider it a better looking cousin of tkPacman. It uses Qt5 and Alpm and also supports Appstream and **AUR (via yay)**.
You also get desktop notifications, repository editor and cache cleaner. The interface is similar to Synaptic Package Manager.
To install it from the AUR, use the following command.
```
yay -Syu octopi
```
[Octopi on GitHub][20]
### 8. Pamac
![Installing Firefox using Pamac][21]
Pamac is the graphical package manager from Manjaro Linux. It is based on GTK3 and Alpm and **supports AUR, Appstream, Flatpak and Snap**.
Pamac also supports automatic download of updates and downgrade of packages.
It is the most widely used application among Arch Linux derivatives, but it has been notorious for [DDoSing the AUR webpage][22].
There are several ways to [install Pamac on Arch Linux][23]. The simplest would be to use an AUR helper.
```
yay -Syu pamac-aur
```
[Pamac on GitLab][24]
### Conclusion
To remove any of the above-mentioned GUI package managers along with the dependencies and configuration files, use the following command, replacing *packagename* with the name of the package to be removed.
```
sudo pacman -Rns packagename
```
So it seems that, with the right tools, Arch Linux can also be used without touching the terminal.
There are also some other applications which use a Terminal User Interface (TUI). A few examples are [pcurses][25], [cylon][26], [pacseek][27], and [yup][28]. But, this article is about only the ones with a proper GUI.
**Note:** PackageKit opens up system permissions by default and is otherwise [not recommended][29] for general usage, because if the user is part of the wheel group, no password is required to update or install any software.
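If you want to verify this on your own system, one way is to inspect the relevant polkit action with `pkaction` (assuming the stock PackageKit policy and its standard action ID); the `implicit active` field shows what an active local session is allowed to do without entering a password:

```
pkaction --action-id org.freedesktop.packagekit.package-install --verbose
```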
**You saw several options for using a GUI software center on Arch Linux. It's time to decide on one of them. Which one would you choose? Pamac, Octopi or something else? Leave a quick comment below.**
--------------------------------------------------------------------------------
via: https://itsfoss.com/arch-linux-gui-package-managers/
作者:[Anuj Sharma][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/anuj/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/install-arch-linux/
[2]: https://itsfoss.com/arch-based-linux-distros/
[3]: https://itsfoss.com/pacman-command/
[4]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[5]: https://invent.kde.org/system/apper
[6]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[7]: https://github.com/dekzi/dde-store
[8]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[9]: https://invent.kde.org/plasma/discover
[10]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[11]: https://freedesktop.org/software/PackageKit/index.html
[12]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[13]: https://gitlab.gnome.org/GNOME/gnome-software
[14]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[15]: https://itsfoss.com/synaptic-package-manager/
[16]: https://itsfoss.com/aur-arch-linux/
[17]: https://itsfoss.com/best-aur-helpers/
[18]: https://sourceforge.net/projects/tkpacman
[19]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[20]: https://github.com/aarnt/octopi
[21]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[22]: https://gitlab.manjaro.org/applications/pamac/-/issues/1017
[23]: https://itsfoss.com/install-pamac-arch-linux/
[24]: https://gitlab.manjaro.org/applications/pamac
[25]: https://github.com/schuay/pcurses
[26]: https://github.com/gavinlyonsrepo/cylon
[27]: https://github.com/moson-mo/pacseek
[28]: https://github.com/ericm/yup
[29]: https://bugs.archlinux.org/task/50459

View File

@ -2,7 +2,7 @@
[#]: via: "https://ostechnix.com/execute-commands-on-remote-linux-systems-via-ssh/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,181 +0,0 @@
[#]: subject: "How to Enable RPM Fusion Repo in Fedora, CentOS, RHEL"
[#]: via: "https://www.debugpoint.com/enable-rpm-fusion-fedora-rhel-centos/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Enable RPM Fusion Repo in Fedora, CentOS, RHEL
======
This guide explains the steps to enable third-party software repository RPM Fusion in Fedora Linux Distribution.
The [RPM Fusion][1] software repo is a community-maintained repository that provides additional packages for Fedora Linux which are not distributed by the official Fedora team, such as DVD playback, media playback and additional software related to GNOME and KDE. This is because of licensing, other legal reasons and country-specific software norms.
RPM Fusion also provides .rpm packages for Red Hat Enterprise Linux alongside Fedora.
This guide explains the steps you need to enable the RPM Fusion repo in Fedora Linux. This guide applies to all Fedora release versions.
This is tested in all the currently supported Fedora versions: 35, 36 and 37.
![RPM Fusion][2]
### How to Enable RPM Fusion Repo in Fedora Linux, RHEL, CentOS
RPM Fusion has two flavours of the repo: free and non-free.
The free one, as its name says, contains free (libre) software packages, while the non-free one contains compiled packages of closed-source and “non-commercial” open-source software.
Before you proceed, first check whether you already have RPM Fusion installed. Open up a terminal and run the below command.
```
dnf repolist | grep rpmfusion
```
If RPM Fusion is installed, you should see a message like the one below, and there is no need to proceed any further. If it is not installed, continue with the following steps.
![RPM-Fusion-Already-Installed-][3]
Open a terminal and run the below commands as per your operating system version. Please note that the commands cover both the free and non-free repos. If you want, you can omit either one while running them.
#### Fedora
```
sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
```
```
sudo dnf install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
#### Silverblue with rpm-ostree
```
sudo rpm-ostree install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
```
```
sudo rpm-ostree install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
#### RHEL 8
```
sudo dnf install --nogpgcheck https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
```
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
```
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
```
```
sudo subscription-manager repos --enable "codeready-builder-for-rhel-8-$(uname -m)-rpms"
```
#### CentOS 8
```
sudo dnf install --nogpgcheck https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
```
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
```
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
```
```
sudo dnf config-manager --enable PowerTools
```
### Additional Instructions
* RPM Fusion also provides AppStream metadata to help users install packages from GNOME Software or KDE Discover. To enable it in Fedora, run the below command.
```
sudo dnf groupupdate core
```
* You can also enable RPM Fusion to play multimedia files that use GStreamer and to install additional multimedia playback packages via the below commands.
```
sudo dnf groupupdate multimedia --setop="install_weak_deps=False" --exclude=PackageKit-gstreamer-plugin
```
```
sudo dnf groupupdate sound-and-video
```
* Enable RPM Fusion to play a DVD that uses libdvdcss.
```
sudo dnf install rpmfusion-free-release-tainted
sudo dnf install libdvdcss
```
* Enable RPM Fusion to install non-FLOSS hardware (firmware) packages via the below commands.
```
sudo dnf install rpmfusion-nonfree-release-tainted
sudo dnf install *-firmware
```
After running the commands, whether you are using Fedora or CentOS/RHEL, run the below commands before rebooting.
```
sudo dnf check-update
sudo dnf update
```
### How to remove repo using dnf
If you want to remove the repository, follow the steps below.
First, check using the below command to view the repo list added to your Fedora system.
```
dnf repolist
```
![dnf repolist][4]
As you can see, both the rpmfusion free and non-free repos are added. To remove them via dnf, you need to know the exact name of the repo package, using the following command.
```
rpm -qa 'rpmfusion*'
```
This lists the exact names of the installed repo packages. In this example, one of them is rpmfusion-free-release.
![remove rpmfusion from fedora][5]
Now you can simply run the below command to remove it.
```
sudo dnf remove rpmfusion-free-release
```
You can repeat the above steps to remove the remaining RPM Fusion repos from Fedora, and you can use the same approach to remove any other repo from your system.
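For instance, assuming both release packages from the install commands above are present, the free and non-free repos can be removed in one go:

```
sudo dnf remove rpmfusion-free-release rpmfusion-nonfree-release
```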
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/enable-rpm-fusion-fedora-rhel-centos/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://rpmfusion.org/
[2]: https://www.debugpoint.com/wp-content/uploads/2020/07/rpmfusion.jpg
[3]: https://www.debugpoint.com/wp-content/uploads/2020/07/RPM-Fusion-Already-Installed-.png
[4]: https://www.debugpoint.com/wp-content/uploads/2020/07/dnf-repolist.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2020/07/remove-rpmfusion-from-fedora.jpg

View File

@ -1,149 +0,0 @@
[#]: subject: "Upgrade Various Kinds of Packages in Linux at Once With Topgrade"
[#]: via: "https://itsfoss.com/topgrade/"
[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Upgrade Various Kinds of Packages in Linux at Once With Topgrade
======
Updating a Linux system is not that complicated, is it? After all, to update Ubuntu-like distros, you just have to use `apt update && apt upgrade`.
That would have been the case if all the packages had been installed through a single package manager.
But that's not the case anymore. You have the classic apt/dnf/pacman and then come snap, flatpak, AppImages. It doesn't end here. You may also install applications using PIP (for Python) and Cargo (for Rust).
Use Node? The npm packages need to be updated separately. Oh My Zsh? It needs to be updated separately. [Plugins in Vim][1], Atom, etc. may also not be covered by apt/dnf/pacman.
Do you see the problem now? And this is the kind of problem a new tool called topgrade aims to solve.
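To make the problem concrete, here is a rough sketch of the routine topgrade replaces on a hypothetical Ubuntu machine that also uses Snap, Flatpak and npm (your own mix of tools will differ):

```
# each ecosystem has to be updated on its own
sudo apt update && sudo apt upgrade
sudo snap refresh
flatpak update
npm update -g
```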
### Topgrade: A single utility to take care of all kinds of updates
[Topgrade][2] is a CLI utility that detects which tools you use and then runs the appropriate commands to update them.
![topgrade disable system][3]
Apart from the usual Linux package managers, it can detect and update brew, cargo, PIP, pihole, Vim and Emacs plugins, R packages etc. You can see the list of supported packages on [its wiki page][4].
##### Key Features of Topgrade:
* Ability to update packages from different package managers, including firmware!
* You do have control over how you want to update packages.
* Extremely customizable.
* Ability to have an overview even before updating packages.
So without wasting any time, let's jump to the installation.
### Install Topgrade in Linux using Cargo
The installation process is quite straightforward, as I'm going to use the cargo package manager.
We already have a [detailed guide with multiple methods for setting up the cargo package manager][5], so I'm going to make it quick by using Ubuntu in my example.
So let's start with some dependencies and the installation of cargo in the simplest way:
```
sudo apt install cargo libssl-dev pkg-config
```
Once the cargo has been installed, utilize the given command to install topgrade:
```
cargo install topgrade
```
And it will throw a warning as given:
![cargo error][6]
You just have to add cargo's bin directory to your PATH so that you can run the installed binaries. This can be done with the given command, where you have to replace `sagar` with your username:
```
echo 'export PATH=$PATH:/home/sagar/.cargo/bin' >> /home/sagar/.bashrc
```
Now, restart your terminal session (or reboot) and topgrade is ready to use. But wait, we need to install another package so that cargo can also keep its installed packages up to date.
```
cargo install cargo-update
```
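With cargo-update in place, you should later be able to refresh topgrade itself (and any other cargo-installed binaries) in a single step; a minimal sketch:

```
cargo install-update -a
```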
And we're done with the installation.
### Using Topgrade
Using topgrade is extremely easy. Use a single command and that's it:
```
topgrade
```
![][7]
But this won't give you any control over what gets updated; as I mentioned, though, you can exclude the sources that you don't want updated.
#### Exclude package managers and repositories from Topgrade
Let's suppose I want to exclude snaps and packages downloaded from the default package manager, so my command would be:
```
topgrade --disable snap system
```
![topgrade disable snap system][8]
To make the change permanent, you'd have to make a few changes in its config file, which can be accessed through the given command:
```
topgrade --edit-config
```
For this example, I excluded snaps and the default system repo:
![configuring topgrade][9]
#### Dry run topgrade
Getting an estimate of the outdated packages that will be updated is always a good idea, and I find this the most useful option in topgrade's entire catalog.
You just have to use topgrade with the `-n` option and it will generate a summary of outdated packages.
```
topgrade -n
```
![summery of topgrade][10]
A neat way of checking packages that need to be updated.
### Final Words
After I used Topgrade for a few weeks, it became an integral part of my Linux arsenal. Like most other Linux users, I was only updating packages through my default package manager. Python and Rust packages were ignored completely. Thanks to topgrade, my system is now updated completely.
I understand that this is not a tool everyone would want to use. What about you? Willing to give it a try?
--------------------------------------------------------------------------------
via: https://itsfoss.com/topgrade/
作者:[Sagar Sharma][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sagar/
[b]: https://github.com/lkxed
[1]: https://linuxhandbook.com/install-vim-plugins/
[2]: https://github.com/r-darwish/topgrade
[3]: https://itsfoss.com/wp-content/uploads/2022/09/topgrade-disable-system.png
[4]: https://github.com/r-darwish/topgrade/wiki/Step-list
[5]: https://itsfoss.com/install-rust-cargo-ubuntu-linux/
[6]: https://itsfoss.com/wp-content/uploads/2022/09/cargo-error.png
[7]: https://itsfoss.com/wp-content/uploads/2022/10/topgrade.mp4
[8]: https://itsfoss.com/wp-content/uploads/2022/09/topgrade-disable-snap-system.png
[9]: https://itsfoss.com/wp-content/uploads/2022/09/configuring-topgrade-1.png
[10]: https://itsfoss.com/wp-content/uploads/2022/09/summery-of-topgrade.png

View File

@ -1,216 +0,0 @@
[#]: subject: "How to Create LVM Partition Step-by-Step in Linux"
[#]: via: "https://www.linuxtechi.com/how-to-create-lvm-partition-in-linux/"
[#]: author: "James Kiarie https://www.linuxtechi.com/author/james/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Create LVM Partition Step-by-Step in Linux
======
In this guide, we will cover how to create an LVM partition step by step in Linux.
LVM stands for Logical Volume Management. It is the recommended way to manage disks and storage on Linux systems, especially for servers. One of the main advantages of an LVM partition is that we can extend its size online, without any downtime. An LVM partition can also be reduced, but that is not recommended.
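For example, once the setup from this guide is in place, growing the logical volume created below and its filesystem online would look roughly like this (the size is only an illustration):

```
$ sudo lvextend -r -L +2G /dev/volgrp01/lv01
```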
For demo purposes, I have attached a 15 GB disk to my Ubuntu 22.04 system; we will create an LVM partition on this disk from the command line.
##### Prerequisites
* Raw disk attached to Linux system
* Local User with Sudo rights
* Pre-Installed  lvm2 package
Without further ado, let's deep dive into the steps.
### Step 1) Identify new attached raw disk
Log in to your system, open the terminal and run the following dmesg command:
```
$ sudo dmesg | grep -i sd
```
In the output, look for the newly attached disk of size 15 GB:
![dmesg-command-new-attached-disk-linux][1]
An alternative way to identify the newly attached raw disk is via the fdisk command:
```
$ sudo fdisk -l | grep -i /dev/sd
```
Output,
![fdisk-command-output-new-disk][2]
From the output above, it is confirmed that the newly attached disk is /dev/sdb.
### Step 2) Create PV (Physical Volume)
Before creating a PV on disk /dev/sdb, make sure the lvm2 package is installed. In case it is not installed, run the following command:
```
$ sudo apt install lvm2     // On Ubuntu / Debian
$ sudo dnf install lvm2    // on RHEL / CentOS
```
Run the following pvcreate command to create a PV on disk /dev/sdb:
```
$ sudo pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
$
```
To verify pv status run,
```
$ sudo pvs /dev/sdb
Or
$ sudo pvdisplay /dev/sdb
```
![pvdisplay-command-output-linux][3]
### Step 3) Create VG (Volume Group)
To create a volume group, we will use vgcreate command. Creating VG means adding pv to the volume group.
Syntax :
```
$ sudo vgcreate <vg_name>  <pv>
```
In our case, command would be,
```
$ sudo vgcreate volgrp01 /dev/sdb
  Volume group "volgrp01" successfully created
$
```
Run following commands to verify the status of vg (volgrp01)
```
$ sudo vgs volgrp01
Or
$ sudo vgdisplay volgrp01
```
Output of above commands,
![vgs-command-output-linux][4]
The above output confirms that the volume group (volgrp01) of size 15 GiB was created successfully and that the size of one physical extent (PE) is 4 MB. The PE size can be changed while creating the VG.
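For instance, if you wanted 16 MB extents instead of the default, the volume group could have been created like this (illustrative value):

```
$ sudo vgcreate -s 16M volgrp01 /dev/sdb
```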
### Step 4) Create LV (Logical Volume)
The lvcreate command is used to create an LV from the VG. The syntax of the lvcreate command looks like below:
```
$ sudo lvcreate -L <Size-of-LV> -n <LV-Name>   <VG-Name>
```
In our case, the following command will be used to create an LV of size 14 GB:
```
$ sudo lvcreate -L 14G -n lv01 volgrp01
  Logical volume "lv01" created.
$
```
To validate the status of the LV, run:
```
$ sudo lvs /dev/volgrp01/lv01
or
$ sudo lvdisplay /dev/volgrp01/lv01
```
Output,
![lvs-command-output-linux][5]
The output above shows that the LV (lv01) of size 14 GiB has been created successfully.
### Step 5) Format LVM Partition
Use the mkfs command to format the LVM partition. In our case, the LVM partition is /dev/volgrp01/lv01.
Note: We can format the partition as either ext4 or xfs, so choose the file system type according to your setup and requirements.
Run the following command to format the LVM partition with the ext4 file system.
```
$ sudo mkfs.ext4 /dev/volgrp01/lv01
```
![mkfs-ext4-filesystem-lvm][6]
Alternatively, execute the below command to format the LVM partition with the xfs file system:
```
$ sudo mkfs.xfs /dev/volgrp01/lv01
```
To use the above formatted partition, we must mount it on some folder. So, let's create a folder /mnt/data.
```
$ sudo mkdir /mnt/data
```
Now run mount command to mount it on /mnt/data folder,
```
$ sudo mount /dev/volgrp01/lv01 /mnt/data/
$ df -Th /mnt/data/
Filesystem                Type  Size  Used Avail Use% Mounted on
/dev/mapper/volgrp01-lv01 ext4   14G   24K   13G   1% /mnt/data
$
```
To try it out, create a dummy file by running the following commands:
```
$ cd /mnt/data/
$ echo "testing lvm partition" | sudo tee  dummy.txt
$ cat dummy.txt
testing lvm partition
$
$ sudo rm -f  dummy.txt
```
Perfect, the output of the above commands confirms that we can access the LVM partition.
To mount the above LVM partition permanently, add its entry to the fstab file using the following echo command:
```
$ echo '/dev/volgrp01/lv01  /mnt/data  ext4  defaults 0 0' | sudo  tee -a /etc/fstab
$ sudo mount -a
```
That's all from this guide, thanks for reading. Kindly post your queries and feedback in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/how-to-create-lvm-partition-in-linux/
作者:[James Kiarie][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/wp-content/uploads/2022/10/dmesg-command-new-attached-disk-linux.png
[2]: https://www.linuxtechi.com/wp-content/uploads/2022/10/fdisk-command-output-new-disk.png
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/10/pvdisplay-command-output-linux.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/10/vgs-command-output-linux.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2022/10/lvs-command-output-linux.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2022/10/mkfs-ext4-filesystem-lvm.png

View File

@ -0,0 +1,382 @@
[#]: subject: "14 Best Open Source WYSIWYG HTML Editors"
[#]: via: "https://itsfoss.com/open-source-wysiwyg-editors/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
14 Best Open Source WYSIWYG HTML Editors
======
WYSIWYG (What You See Is What You Get) editors are self-explanatory. Whatever you see while editing is what a reader/user sees.
Whether you want to build your own content management system or aim to provide an editor to the end user of your application, an open-source WYSIWYG editor will help provide a secure, modern, and scalable experience. Of course, you also get the technical freedom to customize open-source WYSIWYG editors to meet your requirements.
Here, we look at some of the best open-source WYSIWYG editors.
### Things to Look For When Choosing a WYSIWYG HTML Editor
![best open source wysiwyg editors][1]
For some users a document editor must be fast, while for others it must be loaded with features.
Similarly, what are some of the key highlights that you should look at when selecting an HTML editor? Let me give you some pointers here:
* Is the editor lightweight?
* Does it have SEO-friendly features?
* How well does it let you collaborate?
* Does it offer auto-save functionality?
* Can you check spelling and grammar with it?
* How well does it handle images/galleries?
When selecting an open-source HTML editor for your app or website, you should look for these essential aspects.
Keeping these in mind, let me mention some of the best options to try.
**Note:** *The editors are in no particular order of ranking. You may choose the best for your use case.*
Table of Contents
* Things to Look For When Choosing a WYSIWYG HTML Editor
* 1. CKEditor
* 2. Froala
* 3. TinyMCE
* 4. Quilljs
* 5. Aloha Editor
* 6. Editor.js
* 7. Trix
* 8. Summernote
* 9. ContentTools
* 10. Toast UI Editor
* 11. Jodit
* 12. SCEditor
* 13. SunEditor
* 14. ProseMirror
* Picking The Best Open-Source WYSIWYG Editor
### 1. CKEditor
![ck5 editor][2]
#### Key Features:
* Autosave.
* Drag and drop support.
* Responsive images.
* Supports pasting from Word/GDocs while preserving the formatting.
* Autoformatting, HTML/Markdown support, Font Style customization.
* Image alt text.
* Real-time Collaboration (Premium only).
* Revision History (Premium only).
* Spell and grammar check (Premium only).
CKEditor 5 is a feature-rich and open-source WYSIWYG editing solution with great flexibility. The user interface looks modern. Hence, you may expect a modern user experience.
It offers a free edition and a premium plan with extra features. CKEditor is a popular option among enterprises and several publications with a custom Content Management System (CMS), for which they provide technical support and custom deployment options.
CKEditor's free edition should provide basic editing capabilities if you do not need an enterprise-grade offering. Check out its [GitHub page][3] to explore.
[CKEditor 5][4]
### 2. Froala
![froala][5]
#### Key Features:
* Simple user interface and Responsive Design.
* Easy to integrate.
* HTML/Markdown support.
* Theme/Custom style support.
* Lightweight.
* Image Manager and alt text.
* Autosave.
Froala is an exciting web editor that you can easily integrate with your existing [open-source CMS][6] like WordPress.
It provides a simple user interface with the ability to extend its functionality through default plugins. You can use it as a simple editor or add more tools to the interface for a powerful editing experience.
You can self-host it, but to access its mobile apps and premium support, you must opt for one of the paid plans. Head to its [GitHub page][7] to explore more.
[Froala][8]
### 3. TinyMCE
![tinymce editor][9]
#### Key Features:
* Autosave.
* Lightweight.
* Emoticons.
* Manage images.
* Preview.
* Color picker tool.
TinyMCE is an incredibly popular option for users looking to use a solid editor with several integration options.
TinyMCE was the editor powering WordPress, with proven flexibility and ease of use for all users. Unless you want real-time collaboration and cloud deployments at your disposal, TinyMCE's free self-hosted edition should serve you well.
It is a lightweight option with essential features to work with. Check out more about it on its [GitHub page][10].
[TinyMCE][11]
### 4. Quilljs
![quilljs][12]
#### Key Features:
* Lightweight.
* Extend functionalities using extensions.
* Simple and easy to use.
Do you like Slack's in-app editor or LinkedIn's web editor? Quilljs is what they use to offer that experience.
If you are looking for a polished free, open-source WYSIWYG editor with no premium frills, Quill (or Quilljs) should be the perfect text editor. It is a lightweight editor with a minimal user interface that allows you to customize or add your extensions to scale their functionalities per your requirements.
To explore its technical details, head to its [GitHub page][13].
[Quilljs][14]
### 5. Aloha Editor
![A Video from YouTube][15]
#### Key Features:
* Fast editor.
* Front-end editing.
* Supports clean copy/paste from Word.
* Easy integration.
* Plugin support.
* Customization for look and feel.
Aloha Editor is a simple and fast HTML5 WYSIWYG editor that lets you edit the content on the front end.
You can download and use it for free. But, if you need professional help, you can contact them for paid options. Its [GitHub page][16] should be the perfect place to explore its technical details.
[Aloha Editor][17]
### 6. Editor.js
![editor js 1][18]
#### Key Features:
* Block-style editing.
* Completely free and open-source.
* Plugin support.
* Collaborative editing (in roadmap).
Editor.js gives you the perks of a block-style editor. The headings, paragraphs, and other items are all separate blocks, which makes them editable while not affecting the rest of the content.
It is an entirely free and open-source project with no premium extras available for upgrade. However, there are several plugins to extend the features, and you can also explore its [GitHub page][19] for more info.
[Editor.js][20]
### 7. Trix
![trix editor][21]
**Note:** *This project hasn't seen any new activity for more than a year at the time of writing.*
Trix is an open-source project by the creators of Ruby on Rails.
If you want something different for a change, with the basic functionalities of a web editor, Trix can be a pick. The project describes that it is built for the modern web.
Trix is not a popular option, but it is a respectable project that lets tinkerers try something different for their website or app. You can explore more on its [GitHub page][22].
[Trix][23]
### 8. Summernote
![summernote][24]
#### Key Features:
* Lightweight.
* Simple user interface.
* Plugins supported.
Want something similar to TinyMCE but simpler? Summernote can be a good choice.
It provides the look and feel of a classic web editor without any fancy modern UX elements. The focus of this editor is to offer a simple and fast experience along with the ability to add plugins and connectors.
You also get to change themes according to the Bootstrap version used. Yes, it is an editor built on Bootstrap. Explore more about it on its [GitHub page][25].
[Summernote][26]
### 9. ContentTools
![content tools][27]
#### Key Features:
* Easy-to-use.
* Completely free.
* Lightweight.
Want to edit HTML pages from the front end? Well, ContentTools lets you do that pretty quickly.
While it can be integrated with a CMS, it may not be a preferred pick for the job. You can take a look around at its [GitHub page][28] as well.
[ContentTools][29]
### 10. Toast UI Editor
![toast ui editor][30]
#### Key Features:
* Specially focused on Markdown editing/pages.
* Plugins supported.
* Live Preview.
Toast UI editor will be a perfect fit if you deal with Markdown documents to publish web pages.
It offers a live preview and a few essential options for edits. You also get a dark theme and plugin support for extended functions.
While it does provide useful features, it may not be a feature-rich editor for all. Learn more about it on its [GitHub page][31].
[Toast UI Editor][32]
### 11. Jodit
![jodit screenshot][33]
#### Key Features:
* Lightweight.
* TypeScript based.
* Plugin support.
Jodit is a TypeScript-based WYSIWYG editor that makes no use of additional libraries.
It is a simple and helpful editor with all the essential editing features, including drag-and-drop support and a plugin system to extend functionalities.
The user experience is quite similar to WordPress's classic editor or TinyMCE. You can opt for its pro version to access additional plugins and technical support. Head to its [GitHub page][34] to explore technical details.
[Jodit][35]
### 12. SCEditor
![sceditor][36]
#### Key Features:
* Simple and easy to use.
* Completely free.
* Lightweight.
* Plugins support.
SCEditor is yet another simple open-source WYSIWYG editor. It may not be popular enough, but it has been actively maintained for more than six years since publishing.
By default, it does not feature drag-and-drop support, but you can add it using a plugin. There is scope for using multiple themes and customizing the icons as well. Learn more about it on its [GitHub page][37].
[SCEditor][38]
### 13. SunEditor
![suneditor][39]
#### Key Features:
* Feature-rich.
* Completely free.
* Plugin supported.
Like the last one, SunEditor is not very popular, but it works well with its simple and feature-rich offering.
It is based on pure JavaScript with no dependencies. You should be able to copy from Microsoft Word and Excel without issues.
Additionally, one can use KaTeX (a math plugin) as well. It also gives you complete freedom with custom plugins. There are no premium extras here. Head to its [GitHub page][40] to check out its recent releases.
### 14. ProseMirror
![prosemirror][41]
#### Key Features:
* Collaboration capabilities.
* Modular.
* Simple.
* Plugins support.
ProseMirror is an exciting free choice for users who want collaborative editing capabilities. Most WYSIWYG editors offer collaboration as a premium feature, but here you can work with others on the same document in real time (for free).
It provides a modular architecture that makes maintenance and development more accessible compared to others.
Explore more about it on its [GitHub page][42].
[ProseMirror][43]
### Picking The Best Open-Source WYSIWYG Editor
Depending on your use case, it should be easy to pick an open-source WYSIWYG editor.
If you want to focus on the out-of-the-box experience and reduce efforts to maintain it, any option that provides premium technical support should be a good choice.
If you are more of a DIY user, you can go with anything that serves your requirements.
Note that a popular option does not mean that it is a flawless editor for your requirements. Sometimes a more straightforward option is a better solution than a feature-rich editor.
*So, what would be your favorite open-source HTML editor?* *Let me know in the comments below.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-source-wysiwyg-editors/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/10/best-open-source-wysiwyg-editors.png
[2]: https://itsfoss.com/wp-content/uploads/2022/10/ck5-editor.webp
[3]: https://github.com/ckeditor/ckeditor5
[4]: https://ckeditor.com/ckeditor-5/
[5]: https://itsfoss.com/wp-content/uploads/2022/10/froala.jpg
[6]: https://itsfoss.com/open-source-cms/
[7]: https://github.com/froala
[8]: https://froala.com/wysiwyg-editor/
[9]: https://itsfoss.com/wp-content/uploads/2022/10/tinymce-editor.jpg
[10]: https://github.com/tinymce/tinymce
[11]: https://www.tiny.cloud/
[12]: https://itsfoss.com/wp-content/uploads/2022/10/quilljs.jpg
[13]: https://github.com/quilljs/quill
[14]: https://quilljs.com/
[15]: https://youtu.be/w_oXaW5Rrpc
[16]: https://github.com/alohaeditor/Aloha-Editor
[17]: https://www.alohaeditor.org/
[18]: https://itsfoss.com/wp-content/uploads/2022/10/editor-js-1.jpg
[19]: https://github.com/codex-team/editor.js
[20]: https://editorjs.io/
[21]: https://itsfoss.com/wp-content/uploads/2022/10/trix-editor.jpg
[22]: https://github.com/basecamp/trix
[23]: https://trix-editor.org/
[24]: https://itsfoss.com/wp-content/uploads/2022/10/summernote.jpg
[25]: https://github.com/summernote/summernote/
[26]: https://summernote.org/
[27]: https://itsfoss.com/wp-content/uploads/2022/10/content-tools.jpg
[28]: https://github.com/GetmeUK/ContentTools
[29]: https://getcontenttools.com/
[30]: https://itsfoss.com/wp-content/uploads/2022/10/toast-ui-editor.jpg
[31]: https://github.com/nhn/tui.editor
[32]: https://ui.toast.com/tui-editor
[33]: https://itsfoss.com/wp-content/uploads/2022/10/jodit-screenshot.jpg
[34]: https://github.com/xdan/jodit
[35]: https://xdsoft.net/jodit/
[36]: https://itsfoss.com/wp-content/uploads/2022/10/sceditor.jpg
[37]: https://github.com/samclarke/SCEditor
[38]: https://www.sceditor.com/
[39]: https://itsfoss.com/wp-content/uploads/2022/10/suneditor.png
[40]: https://github.com/JiHong88/SunEditor
[41]: https://itsfoss.com/wp-content/uploads/2022/10/prosemirror.jpg
[42]: https://github.com/ProseMirror/prosemirror
[43]: https://prosemirror.net/

View File

@ -0,0 +1,159 @@
[#]: subject: "Groovy vs Java: Connecting a PostgreSQL database with JDBC"
[#]: via: "https://opensource.com/article/22/10/groovy-vs-java-sql"
[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Groovy vs Java: Connecting a PostgreSQL database with JDBC
======
This example demonstrates how Groovy streamlines the clunkiness of Java.
![Coffee beans][1]
Image by: Pixabay. CC0.
Lately, I've been looking at how Groovy streamlines the slight clunkiness of Java. This article examines some differences between connecting to a PostgreSQL database using JDBC in Java versus Groovy.
### Install Java and Groovy
Groovy is based on Java and requires a Java installation. Both a recent/decent version of Java and Groovy might be in your Linux distribution's repositories, or you can install Groovy by following [these instructions][2]. A nice alternative for Linux users is SDKMan, which provides multiple versions of Java, Groovy, and many other related tools. For this article, I'm using SDK's releases of:
* Java version 11.0.12-open of OpenJDK 11
* Groovy version 3.0.8
### Back to the problem
If you haven't already, please review [this article][3] on installing JDBC and [this article][4] on setting up PostgreSQL.
Whether you're using Java or Groovy, several basic steps happen in any program that uses JDBC to pull data out of a database:
1. Establish a `Connection` instance to the database back end where the data resides.
2. Using an SQL string, get an instance of a `Statement` (or something similar, like a `PreparedStatement`) that will handle the execution of the SQL string.
3. Process the `ResultSet` instance returned by having the `Statement` instance execute the query, for example, printing the rows returned on the console.
4. Close the `Statement` and `Connection` instances when done.
In Java, the correspondence between the code and the list above is essentially one-for-one. Groovy, as usual, streamlines the process.
#### Java example
Here's the Java code to look at the land cover data I loaded in the second article linked above:
```
1  import java.sql.Connection;
2  import java.sql.Statement;
3  import java.sql.ResultSet;
4  import java.sql.DriverManager;
5  import java.sql.SQLException;
       
6  public class TestQuery {
       
7      public static void main(String[] args) {
       
8          final String url = "jdbc:postgresql://localhost/landcover";
9          final String user = "clh";
10         final String password = "carl-man";
11         try (Connection connection = DriverManager.getConnection(url, user, password)) {
12             try (Statement statement = connection.createStatement()) {
13                 ResultSet res = statement.executeQuery("select distinct country_code from land_cover");
14                while (res.next()) {
15                     System.out.println("country code " + res.getString("country_code"));
16                }
       
17             } catch (SQLException se) {
18                System.err.println(se.getMessage());
19             }
20         } catch (SQLException ce) {
21             System.err.println(ce.getMessage());
22         }
23     }
24 }
```
Lines 1-5 are the necessary import statements for the JDBC classes. Of course, I could shorten this to `import java.sql.*` but that sort of thing is somewhat frowned-upon these days.
Lines 6-24 define the public class `TestQuery` I will use to connect to the database and print some of the contents of the main table.
Lines 7-23 define the `main` method that does the work.
Lines 8-10 define the three strings needed to connect to a database: The URL, the user name, and the user password.
Lines 11-22 use a try-with-resources to open the `Connection` instance and automatically close it when done.
Lines 12 -19 use another try-with-resources to open the `Statement` instance and automatically close it when done.
Line 13 creates the `ResultSet` instance that handles the SQL query, which uses **SELECT DISTINCT** to get all unique values of **country_code** from the **land_cover** table in the database.
Lines 14-16 process the result set returned by the query, printing out the country codes one per line.
Lines 17-19 and 20-22 handle any SQL exceptions.
#### Groovy example
I'll do something similar in Groovy:
```
1  import groovy.sql.Sql
       
2  final String url = "jdbc:postgresql://localhost/landcover"
3  final String user = "me"
4  final String password = "my-password"
5  final String driver = "org.postgresql.Driver"
       
6  Sql.withInstance(url, user, password, driver) { sql ->
       
7      sql.eachRow('select distinct country_code from land_cover') { row ->
8        println "row.country_code ${row.country_code}"
9      }
10  }
```
Okay, that's a lot shorter: 10 lines instead of 24! Here are the details:
Line 1 is the only import needed. In my view, not having to jump around JavaDocs for three different classes is a distinct advantage.
Lines 2-5 define the four strings needed to connect to a database using the `Sql` class. The first three are the same as for `java.sql.Connection`; the fourth names the driver I want.
Line 6 is doing a bunch of heavy lifting. The method call is `Sql.withInstance()`, similar to other uses of "with" in Groovy. The call:
* Creates an instance of `Sql` (connection, statement, etc.).
* Takes a closure as its final parameter, passing it the instance of `Sql` that it created.
* Closes the instance of `Sql` when the closure exits.
Line 7 calls the `eachRow()` method of the `Sql` instance, wrapping the creation and processing of the result set. The `eachRow()` method takes a closure as its final argument and passes each `row` to the closure as it processes the returned lines of data from the table.
### Groovy can simplify your life
For those of you whose day job involves scripting and relational databases, I think it's pretty obvious from the above that Groovy can simplify your life. A few closing comments:
* I could have accomplished this similarly to the Java version; for example, instead of calling `sql.eachRow()`, I could have called `sql.query()`, which takes a closure as its last argument and passes a result set to that closure, at which point I would have probably used a `while()` as in the Java version (or maybe `each()`).
* I could also read the resulting rows into a list, all at once, using a call to `sql.rows()`, which can transform the data in a manner similar to using `.collect()` on a list.
* Remember that the SQL passed into the `eachRow()` (or `query()`) method can be arbitrarily complex, including table joins, grouping, ordering, and any other operations supported by the database in question.
* Note that SQL can also be parametrized when using an instance of `PreparedStatement`, which is a nice way to avoid SQL injection if any part of the SQL comes in from outside the coder's sphere of control.
* This is a good moment to point the diligent reader to the [JavaDocs for groovy.sql.Sql][5].
### Groovy resources
The [Apache Groovy language site][6] provides a good tutorial-level overview of working with databases, including other ways to connect, plus additional operations, including insertions, deletions, transactions, batching, pagination—the list goes on. This documentation is quite concise and easy to follow, at least partly because the facility it is documenting has itself been designed to be concise and easy to use!
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/groovy-vs-java-sql
作者:[Chris Hermansen][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/java-coffee-beans.jpg
[2]: https://groovy.apache.org/download.html
[3]: https://opensource.com/article/22/9/install-jdbc-linux
[4]: https://opensource.com/article/22/9/drop-your-database-for-postgresql
[5]: https://docs.groovy-lang.org/latest/html/api/index.html
[6]: https://groovy-lang.org/

View File

@ -0,0 +1,88 @@
[#]: subject: "Kubuntu 22.10 Kinetic Kudu: Top New Features"
[#]: via: "https://www.debugpoint.com/kubuntu-22-10-features/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Kubuntu 22.10 Kinetic Kudu: Top New Features
======
A brief summary of Kubuntu 22.10 “Kinetic Kudu” and additional information about the release.
![Kubuntu 22.10 Kinetic Kudu Desktop][1]
### Kubuntu 22.10: Top New Features
Among all the [great KDE Plasma-based distributions][2], Kubuntu is the best, because it brings stability both to Plasma and to its core, which is Ubuntu.
Kubuntu 22.10 is a short-term release based on Ubuntu 22.10, supported for nine months from release. Since short-term releases are meant to adopt the latest technologies and remove obsolete ones, its feature list is minimal.
This release of Kubuntu features Linux kernel 5.19, which brings run-time Average Power Limiting (RAPL) support for Intel's Raptor Lake and Alder Lake processors, multiple families of ARM updates in the mainline kernel, and the usual processor/GPU and file-system updates. Learn more about kernel 5.19 features in [this article][3].
Compared to the prior [Kubuntu release 22.04 LTS][4] (with Plasma 5.24), you get the latest KDE Plasma 5.25 (final point release) desktop with all the bug fixes and updates.
However, [KDE Plasma 5.26][5], which has just been released, could not make it into this version. I believe it should arrive later as a point release, just not on release day.
Besides, Plasma 5.25 is not small in terms of features. It's, in fact, packed with cool new advancements. If you are coming from an earlier Kubuntu version in particular, you should be aware of these new items.
Firstly, Kubuntu 22.10 enables you to make your default panel “float”. We call it the Floating Panel. So, no more using the add-ons for this.
Secondly, the accent colour of your desktop can change based on the wallpaper's tone. You can see your Kubuntu desktop change dynamically when you enable it from Settings > Global Theme > Colours.
![KDE Plasma - Dynamic Accent Colour and Floating Panel Demo][6]
In addition, switching between dark and light modes becomes more smooth thanks to the change. Also, in Kubuntu 22.10 with Wayland, you can now see and apply the display-specific resolutions in the settings dropdown.
On the other hand, Discover is more friendly to Flatpak, with additional app details and an additional options button to notify you that there is still data for uninstalled apps.
![The app page gives more clarity in Plasma 5.25][7]
Furthermore, the KRunner launcher in Kubuntu now detects the search language and displays results accordingly. Also, the network manager applet now shows the Wi-Fi frequency alongside the access point name (this is helpful for cases where you use the same access point name for the 2.4 GHz and 5 GHz bands).
All of these changes are powered by Qt 5.15.6 and Framework 5.98. If you want to learn more about Plasma 5.25, refer to the dedicated feature guide [here][8].
### Other features of Kubuntu 22.10
The core applications and packages bump up to their respective versions based on Ubuntu 22.10, and here's a summary.
* Linux Kernel 5.19
* KDE Plasma 5.25.5 (hopefully will get 5.26 soon)
* KDE Framework 5.98
* Qt 5.15.6
* Firefox 105.0.1
* Thunderbird 102.3.2
* LibreOffice 7.4.2.3
* VLC Media Player 3.0.17
* Pipewire replacing PulseAudio
Finally, you can download the Kubuntu 22.10 BETA from the below links.
[https://cdimage.ubuntu.com/kubuntu/releases/kinetic/beta/][9]
While the developers are preparing for the final release (due on Oct 20, 2022), you can try it in a [virtual machine][10] or on a physical system.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/kubuntu-22-10-features/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/Kubuntu-22.10-Kinetic-Kudu-Desktop.jpg
[2]: https://www.debugpoint.com/top-linux-distributions-kde-plasma/
[3]: https://www.debugpoint.com/linux-kernel-5-19/
[4]: https://www.debugpoint.com/kubuntu-22-04-lts/
[5]: https://www.debugpoint.com/kde-plasma-5-26/
[6]: https://youtu.be/npfHwMLXXHs
[7]: https://www.debugpoint.com/wp-content/uploads/2022/05/App-page-gives-more-clarity-in-Plasma-5.25.jpg
[8]: https://www.debugpoint.com/kde-plasma-5-25/
[9]: https://cdimage.ubuntu.com/kubuntu/releases/kinetic/beta/
[10]: https://www.debugpoint.com/tag/virtual-machine

View File

@ -0,0 +1,190 @@
[#]: subject: "Asynchronous programming in Rust"
[#]: via: "https://opensource.com/article/22/10/asynchronous-programming-rust"
[#]: author: "Stephan Avenwedde https://opensource.com/users/hansic99"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Asynchronous programming in Rust
======
Take a look at how async-await works in Rust.
![Ferris the crab under the sea, unofficial logo for Rust programming language][1]
Image by: Opensource.com
Asynchronous programming: incredibly useful but difficult to learn. You can hardly avoid async programming if you want to create a fast and reactive application. Applications with a high amount of file or network I/O or with a GUI that should always be reactive benefit tremendously from async programming. Tasks can be executed in the background while the user still makes inputs. Async programming is possible in many languages, each with different styles and syntax. [Rust][2] is no exception. In Rust, this feature is called *async-await*.
While *async-await* has been an integral part of Rust since version 1.39.0, most applications depend on community crates. In Rust, except for a larger binary, *async-await* comes with zero costs. This article gives you an insight into asynchronous programming in Rust.
### Under the hood
To get a basic understanding of *async-await* in Rust, you literally start in the middle.
The center of *async-await* is the [future][3] trait, which declares the method *poll* (I cover this in more detail below). If a value can be computed asynchronously, the related type should implement the *future* trait. The *poll* method is called repeatedly until the final value is available.
At this point, you could repeatedly call the *poll* method from your synchronous application manually in order to get the final value. However, since I'm talking about asynchronous programming, you can hand over this task to another component: the runtime. So before you can make use of the *async* syntax, a runtime must be present. I use the runtime from the [tokio][4] community crate in the following examples.
A handy way of making the tokio runtime available is to use the `#[tokio::main]` macro on your main function:
```
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main(){
    println!("Start!");
    sleep(Duration::from_secs(1)).await;
    println!("End after 1 second");
}
```
When the runtime is available, you can now *await* futures. Awaiting means that further execution stops at that point until the *future* has completed. The *await* method causes the runtime to invoke the *poll* method, which will drive the *future* to completion.
In the above example, tokio's [sleep][5] function returns a *future* that finishes when the specified duration has passed. By awaiting this future, the related *poll* method is repeatedly called until the *future* completes. Furthermore, the *main()* function also returns a *future* because of the `async` keyword before the **fn**.
So if you see a function marked with `async`:
```
async fn foo() -> usize { /**/ }
```
Then it is just syntactic sugar for:
```
fn foo() -> impl Future<Output = usize> { async { /**/ } }
```
### Pinning and boxing
To remove some of the shrouds and clouds of *async-await* in Rust, you must understand *pinning* and *boxing*.
If you are dealing with *async-await*, you will relatively quickly step over the terms boxing and pinning. Since I find that the available explanations on the subject are rather difficult to understand, I have set myself the goal of explaining the issue more easily.
Sometimes it is necessary to have objects that are guaranteed not to be moved in memory. This comes into effect when you have a self-referential type:
```
struct MustBePinned {
    a: i16,
    b: *const i16, // intended to point at `a` of the same instance
}
```
If member **b** is a reference (pointer) to member **a** of the same instance, then reference **b** becomes invalid when the instance is moved because the location of member **a** has changed but **b** still points to the previous location. You can find a more comprehensive example of a *self-referential* type in the [Rust Async book][6]. All you need to know now is that an instance of *MustBePinned* should not be moved in memory. Types like *MustBePinned* do not implement the *Unpin* trait, which would allow them to move within memory safely. In other words, *MustBePinned* is *!Unpin*.
Back to the future: By default, a *future* is also *!Unpin*; thus, it should not be moved in memory. So how do you handle those types? You pin and box them.
The [Pin<T>][7] type wraps pointer types, guaranteeing that the values behind the pointer won't be moved. The **Pin<T>** type ensures this by not providing a mutable reference of the wrapped type. The type will be pinned for the lifetime of the object. If you accidentally pin a type that implements *Unpin* (which is safe to move), it won't have any effect.
In practice: If you want to return a *future* (*!Unpin*) from a function, you must box it. Using [Box<T>][8] causes the type to be allocated on the heap instead of the stack and thus ensures that it can outlive the current function without being moved. In particular, if you want to hand over a *future*, you can only hand over a pointer to it as the *future* must be of type **Pin<Box<dyn Future>>**.
Using *async-await*, you will certainly stumble upon this boxing and pinning syntax. To wrap this topic up, you just have to remember this:
* Rust does not know whether a type can be safely moved.
* Types that shouldn't be moved must be wrapped inside [Pin<T>][9].
* Most types are [Unpin][10]ned types. They implement the trait Unpin and can be freely moved within memory.
* If a type is wrapped inside [Pin<T>][11] and the wrapped type is !Unpin, it is not possible to get a mutable reference out of it.
* Futures created by the async keyword are !Unpin and thus must be pinned.
### Future trait
In the [future][12] trait, everything comes together:
```
pub trait Future {
    type Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```
Here is a simple example of how to implement the *future* trait:
```
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct MyCounterFuture {
    cnt: u32,
    cnt_final: u32,
}

impl MyCounterFuture {
    pub fn new(final_value: u32) -> Self {
        Self {
            cnt: 0,
            cnt_final: final_value,
        }
    }
}

impl Future for MyCounterFuture {
    type Output = u32;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        self.cnt += 1;
        if self.cnt >= self.cnt_final {
            println!("Counting finished");
            return Poll::Ready(self.cnt_final);
        }
        cx.waker().wake_by_ref();
        Poll::Pending
    }
}

#[tokio::main]
async fn main() {
    let my_counter = MyCounterFuture::new(42);
    let final_value = my_counter.await;
    println!("Final value: {}", final_value);
}
```
Here is a simple example of how the *future* trait is implemented manually: The *future* is initialized with a value to which it shall count, stored in **cnt_final**. Each time the *poll* method is invoked, the internal value **cnt** gets incremented by one. If **cnt** is less than **cnt_final**, the future signals the [waker][13] of the runtime that the *future* is ready to be polled again. The return value of `Poll::Pending` signals that the *future* has not completed yet. After **cnt** is *>=* **cnt_final**, the *poll* function returns with `Poll::Ready`, signaling that the *future* has completed and providing the final value.
This is just a simple example, and of course, there are other things to take care of. If you consider creating your own futures, I highly suggest reading the chapter [Async in depth][14] in the documentation of the tokio crate.
### Wrap up
Before I wrap things up, here is some additional information that I consider useful:
* Create a new pinned and boxed type using [Box::pin][15].
* The [futures][16] crate provides the type [BoxFuture][17] which lets you define a future as return type of a function.
* The [async_trait][18] allows you to define an async function in traits (which is currently not allowed).
* The [pin-utils][19] crate provides macros to pin values.
* tokio's [try_join!][20] macro (a)waits on multiple futures which return a [Result<T, E>][21].
Once the first hurdles have been overcome, async programming in Rust is straightforward. You don't even have to implement the *future* trait for your own types if you can simply move the code that should run concurrently into an async function. In Rust, single-threaded and multi-threaded runtimes are available, so you can benefit from async programming even in embedded environments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/asynchronous-programming-rust
作者:[Stephan Avenwedde][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/rust_programming_crab_sea.png
[2]: https://opensource.com/article/20/12/learn-rust
[3]: https://doc.rust-lang.org/std/future/trait.Future.html
[4]: https://tokio.rs/
[5]: https://docs.rs/tokio/latest/tokio/time/fn.sleep.html
[6]: https://rust-lang.github.io/async-book/04_pinning/01_chapter.html
[7]: https://doc.rust-lang.org/std/pin/struct.Pin.html
[8]: https://doc.rust-lang.org/std/boxed/struct.Box.html
[9]: https://doc.rust-lang.org/std/pin/struct.Pin.html
[10]: https://doc.rust-lang.org/std/marker/trait.Unpin.html#
[11]: https://doc.rust-lang.org/std/pin/struct.Pin.html
[12]: https://doc.rust-lang.org/std/future/trait.Future.html
[13]: https://tokio.rs/tokio/tutorial/async#wakers
[14]: https://tokio.rs/tokio/tutorial/async
[15]: https://doc.rust-lang.org/std/boxed/struct.Box.html#method.pin
[16]: https://crates.io/crates/futures
[17]: https://docs.rs/futures/latest/futures/future/type.BoxFuture.html
[18]: https://docs.rs/async-trait/latest/async_trait/
[19]: https://crates.io/crates/pin-utils
[20]: https://docs.rs/tokio/latest/tokio/macro.try_join.html
[21]: https://doc.rust-lang.org/std/result/

View File

@ -0,0 +1,479 @@
[#]: subject: "How To Monitor User Activity In Linux"
[#]: via: "https://ostechnix.com/monitor-user-activity-linux/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How To Monitor User Activity In Linux
======
As a Linux administrator, you need to keep track of all users' activities. When something goes wrong in the server, you can analyze and investigate the users' activities, and try to find the root cause of the problem. There are many ways to **monitor users in Linux**. In this guide, we are going to talk about **GNU accounting utilities** that can be used to **monitor the user activity in Linux**.
### What are Accounting utilities?
The accounting utilities provide useful information about system usage, such as connections, programs executed, and utilization of system resources in Linux. These utilities can be installed using the **psacct** or **acct** package.
The psacct and acct packages are the same thing: on RPM-based systems it is available as psacct, and on DEB-based systems it is available as acct.
You might wonder what the psacct or acct utilities are for. Generally, a user's command line history is stored in the **.bash_history** file in their $HOME directory. Some users might try to edit, modify or delete that history.
However, the accounting utilities will still be able to retrieve the users' activities even if they have [cleared their command line history][1] completely, because **all process accounting files are owned by the root** user and normal users can't edit them.
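You can check this for yourself once accounting is enabled. The exact path varies by distribution (commonly `/var/log/account/pacct` on DEB-based systems and `/var/account/pacct` on RPM-based ones), so adjust as needed:
```
# The accounting log should be owned by root and not writable by normal users
$ ls -l /var/log/account/pacct
```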
### Install psacct or acct in Linux
The psacct/acct utilities are packaged for popular Linux distributions.
To install psacct in Alpine Linux, run:
```
$ sudo apk add psacct
```
To install acct in Arch Linux and its variants like EndeavourOS and Manjaro Linux, run:
```
$ sudo pacman -S acct
```
On Fedora, RHEL, and its clones like CentOS, AlmaLinux and Rocky Linux, run the following command to install psacct:
```
$ sudo dnf install psacct
```
In RHEL 6 and older versions, you should use `yum` instead of `dnf` to install psacct.
```
$ sudo yum install psacct
```
On Debian, Ubuntu, Linux Mint, install acct using command:
```
$ sudo apt install acct
```
To install acct on openSUSE, run:
```
$ sudo zypper install acct
```
### Start psacct/acct service
To enable and start the psacct service, run:
```
$ sudo systemctl enable psacct
```
```
$ sudo systemctl start psacct
```
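On systemd-based distributions, you can also enable and start the service in a single step:
```
$ sudo systemctl enable --now psacct
```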
To check if psacct service is loaded and active, run:
```
$ sudo systemctl status psacct
```
On DEB-based systems, the acct service will be automatically started after installing it.
You can verify whether acct service is started or not using command:
```
$ sudo systemctl status acct
```
**Sample output:**
```
● acct.service - Kernel process accounting
Loaded: loaded (/lib/systemd/system/acct.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2022-10-13 16:06:35 IST; 28s ago
Docs: man:accton(8)
Process: 3241 ExecStart=/usr/sbin/accton /var/log/account/pacct (code=exited, status=0/SUCCESS)
Main PID: 3241 (code=exited, status=0/SUCCESS)
CPU: 879us
Oct 13 16:06:35 ubuntu2204 systemd[1]: Starting Kernel process accounting...
Oct 13 16:06:35 ubuntu2204 accton[3241]: Turning on process accounting, file set to '/var/log/account/pacct'.
Oct 13 16:06:35 ubuntu2204 systemd[1]: Finished Kernel process accounting.
```
> **Download** - [Free eBook: "Nagios Monitoring Handbook"][2]
### Monitor User Activity in Linux using psacct or acct
The psacct (Process accounting) package contains the following useful utilities to monitor user and process activities.
* ac - Displays statistics about how long users have been logged on.
* lastcomm - Displays information about previously executed commands.
* accton - Turns process accounting on or off.
* dump-acct - Transforms the output file from the accton format to a human-readable format.
* dump-utmp - Prints utmp files in human-readable format.
* sa - Summarizes information about previously executed commands.
Let us learn how to monitor the activities of Linux users by using each utility with examples.
#### 1. The ac command examples
The **ac** utility will display the report of connect time in hours. It can tell you how long a user or group of users were connected to the system.
##### 1.1. Display total connect time of all users
```
$ ac
```
This command displays the total connect time of all users in hours.
```
total 52.91
```
![Display total connect time of all users][3]
##### 1.2. Show total connect time of all users day-wise
You can sort this result day-wise by using the **-d** flag as shown below.
```
$ ac -d
```
**Sample output:**
```
May 11 total 4.29
May 13 total 3.23
May 14 total 7.66
May 15 total 8.97
May 16 total 0.52
May 20 total 4.09
May 24 total 1.32
Jun 9 total 15.18
Jun 10 total 2.97
Jun 22 total 2.61
Jul 19 total 1.95
Today total 0.29
```
![Show total connect of all users by day-wise][4]
##### 1.3. Get total connect time user-wise
You can also display how long each user was connected to the system with the **-p** flag.
```
$ ac -p
```
**Sample output:**
```
ostechnix 52.85
root 0.51
total 53.36
```
![Get total connect time by user-wise][5]
##### 1.4. Print total connect time of a specific user
You can also display an individual user's total login time.
```
$ ac ostechnix
```
**Sample output:**
```
total 52.95
```
##### 1.5. View total connect time of a certain user by day-wise
To display an individual user's login time day-wise, run:
```
$ ac -d ostechnix
```
**Sample output:**
```
May 11 total 4.29
May 13 total 3.23
May 14 total 7.66
May 15 total 8.97
May 16 total 0.01
May 20 total 4.09
May 24 total 1.32
Jun 9 total 15.18
Jun 10 total 2.97
Jun 22 total 2.61
Jul 19 total 1.95
Today total 0.68
```
![View total connect time of a certain user by day-wise][6]
For more details, refer to the man pages.
```
$ man ac
```
#### 2. The lastcomm command examples
The **lastcomm** utility displays the list of previously executed commands. The most recently executed commands are listed first.
##### 2.1. Display previously executed commands
```
$ lastcomm
```
**Sample output:**
```
systemd-hostnam S root __ 0.06 secs Thu Oct 13 17:21
systemd-localed S root __ 0.06 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
awk ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
uname ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
sed ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
grep ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
grep ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
[...]
```
##### 2.2. Print last executed commands of a specific user
The above command displays all users' commands. You can display the commands previously executed by a particular user using:
```
$ lastcomm ostechnix
```
**Sample output:**
```
less ostechni pts/1 0.00 secs Thu Oct 13 17:26
lastcomm ostechni pts/1 0.00 secs Thu Oct 13 17:26
lastcomm ostechni pts/1 0.00 secs Thu Oct 13 17:26
lastcomm ostechni pts/1 0.00 secs Thu Oct 13 17:26
gdbus X ostechni __ 0.00 secs Thu Oct 13 17:24
lastcomm ostechni pts/1 0.00 secs Thu Oct 13 17:24
ac ostechni pts/1 0.00 secs Thu Oct 13 17:24
update-notifier F ostechni __ 0.00 secs Thu Oct 13 17:23
apport-checkrep ostechni __ 0.06 secs Thu Oct 13 17:23
apport-checkrep ostechni __ 0.05 secs Thu Oct 13 17:23
systemctl ostechni __ 0.00 secs Thu Oct 13 17:23
apt-check ostechni __ 0.81 secs Thu Oct 13 17:23
dpkg ostechni __ 0.00 secs Thu Oct 13 17:23
ischroot ostechni __ 0.00 secs Thu Oct 13 17:23
dpkg ostechni __ 0.00 secs Thu Oct 13 17:23
[...]
```
##### 2.3. Print the total number of command executions
Also, you can view how many times a particular command has been executed.
```
$ lastcomm apt
```
**Sample output:**
```
apt S root pts/2 0.70 secs Thu Oct 13 16:06
apt F root pts/2 0.00 secs Thu Oct 13 16:06
apt F root pts/2 0.00 secs Thu Oct 13 16:06
```
As you see in the above output, the `apt` command has been executed three times by `root` user.
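If you want to narrow the search further, GNU lastcomm also accepts explicit filters. For example, the following should list only the apt invocations made by root (assuming the GNU accounting version of lastcomm, which provides the --user and --command options):
```
$ lastcomm --user root --command apt
```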
For more details, refer to the man pages.
```
$ man lastcomm
```
#### 3. The sa command examples
The sa utility summarizes information about previously executed commands.
##### 3.1. Print summary of all commands
```
$ sa
```
**Sample output:**
```
1522 1598.63re 0.23cp 0avio 32712k
139 570.90re 0.05cp 0avio 36877k ***other*
38 163.63re 0.05cp 0avio 111445k gdbus
3 0.05re 0.04cp 0avio 12015k apt-check
27 264.27re 0.02cp 0avio 0k kworker/dying*
2 51.87re 0.01cp 0avio 5310464k Docker Desktop
5 0.03re 0.01cp 0avio 785k snap-confine
8 59.48re 0.01cp 0avio 85838k gmain
5 103.94re 0.01cp 0avio 112720k dconf worker
24 3.38re 0.00cp 0avio 2937k systemd-udevd*
7 0.01re 0.00cp 0avio 36208k 5
3 1.51re 0.00cp 0avio 3672k systemd-timedat
2 0.00re 0.00cp 0avio 10236k apport-checkrep
2 0.01re 0.00cp 0avio 4316160k ThreadPoolForeg*
2 0.00re 0.00cp 0avio 8550k package-data-do
3 0.79re 0.00cp 0avio 2156k dbus-daemon
12 0.00re 0.00cp 0avio 39631k ffmpeg
[...]
```
##### 3.2. View number of processes and CPU minutes
To print the number of processes and the number of CPU minutes on a per-user basis, run the `sa` command with the `-m` flag:
```
$ sa -m
```
**Sample output:**
```
1525 1598.63re 0.23cp 0avio 32651k
root 561 647.23re 0.09cp 0avio 3847k
ostechnix 825 780.79re 0.08cp 0avio 47788k
gdm 117 13.43re 0.06cp 0avio 63715k
colord 2 52.01re 0.00cp 0avio 89720k
geoclue 1 1.01re 0.00cp 0avio 70608k
jellyfin 12 0.00re 0.00cp 0avio 39631k
man 1 0.00re 0.00cp 0avio 3124k
kernoops 4 104.12re 0.00cp 0avio 3270k
sshd 1 0.05re 0.00cp 0avio 3856k
whoopsie 1 0.00re 0.00cp 0avio 8552k
```
##### 3.3. Print user id and command name
For each command in the accounting file, print the user ID and command name using the `-u` flag.
```
$ sa -u
```
**Sample output:**
```
root 0.00 cpu 693k mem 0 io accton
root 0.00 cpu 3668k mem 0 io systemd-tty-ask
root 0.00 cpu 3260k mem 0 io systemctl
root 0.01 cpu 3764k mem 0 io deb-systemd-inv
root 0.00 cpu 722k mem 0 io acct.postinst
root 0.00 cpu 704k mem 0 io rm
root 0.00 cpu 939k mem 0 io cp
root 0.00 cpu 704k mem 0 io rm
root 0.00 cpu 951k mem 0 io find
root 0.00 cpu 911k mem 0 io gzip
root 0.00 cpu 722k mem 0 io sh
root 0.00 cpu 748k mem 0 io install-info
root 0.00 cpu 911k mem 0 io gzip
[...]
```
For more details, refer to the man pages.
```
$ man sa
```
#### 4. The dump-acct and dump-utmp command examples
The **dump-acct** utility converts the output file from the accton format into a human-readable format.
```
$ dump-acct /var/account/pacct
```
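The accounting file can grow large, so it is often handy to look at only the most recent records. For example (adjust the path to `/var/log/account/pacct` on DEB-based systems):
```
$ sudo dump-acct /var/account/pacct | tail -n 20
```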
dump-utmp displays utmp files in human-readable format.
```
$ dump-utmp /var/run/utmp
```
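Since the historical login log `/var/log/wtmp` uses the same record format, it can be dumped the same way, for example:
```
$ sudo dump-utmp /var/log/wtmp | tail -n 20
```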
For more details, refer to the man pages.
```
$ man dump-acct
```
```
$ man dump-utmp
```
#### 5. The accton command examples
The accton command allows you to turn process accounting on or off.
To turn on process accounting, run:
```
$ accton on
```
To turn it off, run:
```
$ accton off
```
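As the service output earlier showed, accton can also be given an accounting file to write to, which is effectively what the systemd unit does at boot. For example (using the DEB-based path):
```
$ sudo accton /var/log/account/pacct
```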
For more details, refer to the man pages.
```
$ man accton
```
### Conclusion
Every Linux administrator should be aware of the GNU accounting utilities to keep an eye on all users. These utilities are quite helpful when troubleshooting.
**Resource:**
* [The GNU Accounting Utilities website][7]
--------------------------------------------------------------------------------
via: https://ostechnix.com/monitor-user-activity-linux/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/how-to-clear-command-line-history-in-linux/
[2]: https://ostechnix.tradepub.com/free/w_syst04/prgm.cgi
[3]: https://ostechnix.com/wp-content/uploads/2022/10/Display-total-connect-time-of-all-users.png
[4]: https://ostechnix.com/wp-content/uploads/2022/10/Show-total-connect-of-all-users-by-day-wise.png
[5]: https://ostechnix.com/wp-content/uploads/2022/10/Get-total-connect-time-by-user-wise.png
[6]: https://ostechnix.com/wp-content/uploads/2022/10/View-total-connect-time-of-a-certain-user-by-day-wise.png
[7]: https://www.gnu.org/software/acct/manual/accounting.html

View File

@ -0,0 +1,93 @@
[#]: subject: "Ubuntu Budgie 22.10 Kinetic Kudu: Top New Features"
[#]: via: "https://www.debugpoint.com/ubuntu-budgie-22-10-features/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ubuntu Budgie 22.10 Kinetic Kudu: Top New Features
======
Here's a brief overview of the upcoming Ubuntu Budgie 22.10 and its features.
![Ubuntu Budgie 22.10 KInetic Kudu Desktop][1]
### Ubuntu Budgie 22.10: Top New Features
The Budgie desktop has a different fan base because of its fusion of simplicity, feature and performance. Ubuntu Budgie 22.10 is an official Budgie flavour of Ubuntu, where you get the latest Ubuntu base with a stable Budgie desktop.
Ubuntu Budgie 22.10 is a short-term release, supported for nine months until July 2023.
This release of Ubuntu Budgie features Linux Kernel 5.19, which brings run-time Average Power Limiting (RAPL) support for Intel's Raptor Lake and Alder Lake processors, multiple families of ARM updates in the mainline kernel, and the usual processor/GPU and file-system updates. Learn more about Kernel 5.19 features in [this article][2].
At the heart of this version, Ubuntu Budgie is powered by Budgie Desktop version 10.6.4, which brings updated applications and additional core tweaks. If you are using a prior Ubuntu Budgie version, i.e. 22.04 or 21.10, you will notice some changes, and you should be aware of them.
First of all, the Budgie Control Center gets modern protocols for screensharing (both RDP and VNC), enhanced fractional scaling and colour profile support for your monitor. In addition, the Control Centre now shows the monitor refresh rate and adds support for Wacom tablets.
Secondly, Budgie desktop settings get a global option to control applet spacing, along with additional refinements.
With the calendar launcher removed, the clock in the top bar now only opens the date/time settings. You can access the Calendar from the applets at the extreme right of the top bar.
Ubuntu Budgie 22.10 fundamentally changed the application stack that ships by default. Earlier (22.04 and prior), Budgie featured GNOME applications as the default for several desktop functions. Since GNOME has already moved to libadwaita/GTK4 with its own styling and theming, those apps wouldn't look consistent with Budgie's styling.
That is correct, because the rounded corners really do look off next to Budgie/MATE apps.
![Budgie desktop with GTK4-libadwaita and MATE apps][3]
So, from this release onwards, the default app sets are slowly moving away from GNOME Apps to native or MATE apps/alternatives.
For example, GNOME Calculator and System Monitor are now replaced by MATE Calculator and MATE System Monitor. The Atril document viewer replaces Evince. In addition, font-manager replaces GNOME Font Viewer, and a list of other GNOME apps is in “wait and watch” mode. They may get replaced in upcoming Budgie desktop point releases.
However, the text editor is still Gedit in this version, which is already a [great choice][4]. And a new native screenshot application debuts with a tweak tool for Raspberry Pi devices.
If you want to learn more about this transition, read the discussion [here][5].
So, that's about the core changes in this release, and here's a quick summary of Ubuntu Budgie 22.10 along with other applications.
### Summary of the core items of Ubuntu Budgie 22.10
#### Core and major items
* [Ubuntu 22.10][6]
* Linux Kernel 5.19
* Budgie desktop version 10.6.4
* Firefox 105.0.1 (Snap version)
* Thunderbird 102.3.2
* LibreOffice 7.4.2.3
#### Other changes
* Pipewire replacing PulseAudio. The PulseAudio package is removed from ISO.
* WebP support by default
* New Tweak Tool for ARM devices (Raspberry Pi)
* Tilix Terminal 1.9.5
* Transmission torrent client 3.0.0
* GNOME Software 43.0
Finally, you can download the Ubuntu Budgie 22.10 beta from the links below. You can also try out the Raspberry Pi beta image.
* [Link to the beta build][7]
* [Link to Raspberry Pi build][8]
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/ubuntu-budgie-22-10-features/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/Ubuntu-Budgie-22.10-KInetic-Kudu-Desktop-1024x578.jpg
[2]: https://www.debugpoint.com/linux-kernel-5-19/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/10/Budgie-desktop-with-GTK4-libadwaita-and-MATE-apps.jpg
[4]: https://www.debugpoint.com/gedit-features/
[5]: https://discourse.ubuntubudgie.org/t/default-applications-review-for-22-10-23-04-and-beyond/5883
[6]: https://www.debugpoint.com/ubuntu-22-10/
[7]: https://cdimage.ubuntu.com/ubuntu-budgie/releases/kinetic/beta/
[8]: https://sourceforge.net/projects/budgie-remix/files/budgie-raspi-22.10/

View File

@ -0,0 +1,102 @@
[#]: subject: "What you need to know about compiling code"
[#]: via: "https://opensource.com/article/22/10/compiling-code"
[#]: author: "Alan Smithee https://opensource.com/users/alansmithee"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What you need to know about compiling code
======
Use this handy mousetrap analogy to understand compiling code. Then download our new eBook, An open source developer's guide to building applications.
Source code must be compiled in order to run, and in open source software everyone has access to source code. Whether you've written code yourself and you want to compile and run it, or whether you've downloaded somebody's project to try it out, it's useful to know how to process source code through a [compiler][2], and also what exactly a compiler does with all that code.
### Build a better mousetrap
We don't usually think of a mousetrap as a computer, but believe it or not, it does share some similarities with the CPU running the device you're reading this article on. The classic (non-cat) mousetrap has two states: it's either set or released. You might consider that *on* (the kill bar is set and stores potential energy) and *off* (the kill bar has been triggered.) In a sense, a mousetrap is a computer that calculates the presence of a mouse. You might imagine this code, in an imaginary language, describing the process:
```
if mousetrap == 0 then
There's a mouse!
else
There's no mouse yet.
end
```
In other words, you can derive mouse data based on the state of a mousetrap. The mousetrap isn't foolproof, of course. There could be a mouse next to the mousetrap, and the mousetrap would still be registered as *on* because the mouse has not yet triggered the trap. So the program could use a few enhancements, but that's pretty typical.
### Switches
A mousetrap is ultimately a switch. You probably use a switch to turn on the lights in your house. A lot of information is stored in these mechanisms. For instance, people often assume that you're at home when the lights are on.
You could program actions based on the activity of lights on in your neighborhood. If all lights are out, then turn down your loud music because people have probably gone to bed.
A CPU uses the same logic, multiplied by several orders of magnitude and shrunk to a microscopic level. When a CPU receives an electrical signal at a specific register, then some other register can be tripped, and then another, and so on. If those registers are made to be meaningful, then there's communication happening. Maybe a chip somewhere on the same motherboard becomes active, or an LED lights up, or a pixel on a screen changes color.
**[[ Related read 6 Python interpreters to try in 2022 ]][3]**
What comes around goes around. If you really want to detect a rodent in more places than the one spot you happen to have a mousetrap set, you could program an application to do just that. With a webcam and some rudimentary image recognition software, you could establish a baseline of what an empty kitchen looks like and then scan for changes. When a mouse enters the kitchen, there's a shift in the pixel values where there was previously no mouse. Log the data, or better yet trigger a drone that focuses in on the mouse, captures it, and moves it outside. You've built a better mousetrap through the magic of on and off signals.
### Compilers
A code compiler translates human-readable code into a machine language that speaks directly to the CPU. It's a complex process because CPUs are legitimately complex (even more complex than a mousetrap), but also because the process is more flexible than it strictly "needs" to be. Not all compilers are flexible. There are some compilers that have exactly one target, and they only accept code files in a specific layout, and so the process is relatively straight-forward.
Luckily, modern general-purpose compilers aren't simple. They allow you to write code in a variety of languages, and they let you link libraries in different ways, and they can target several different architectures. The [GNU C Compiler (GCC)][4] has over 50 lines of options in its `--help` output, and the LLVM `clang` compiler has over 1000 lines in its `--help` output. The GCC manual contains over 100,000 words.
You have lots of options when you compile code.
Of course, most people don't need to know all the possible options. There are sections in the GCC man page I've never read, because they're for Objective-C or Fortran or chip architectures I've never even heard of. But I value the ability to compile code for several different architectures, for 64-bit and 32-bit, and to run open source software on computers the rest of the industry has left behind.
### The compilation lifecycle
Just as importantly, there's real power to understanding the different stages of compiling code. Here's the lifecycle of a simple C program:
1. C source with macros (.c) is preprocessed with `cpp` to render an `.i` file.
2. C source code with expanded macros (.i) is translated with `gcc` to render an `.s` file.
3. A text file in Assembly language (.s) is assembled with `as` into an `.o` file.
4. Binary object code with instructions for the CPU, and with offsets not tied to memory areas relative to other object files and libraries (*.o) is linked with `ld` to produce an executable.
5. The final binary file either has all required objects within it, or it's set to load linked dynamic libraries (*.so files).
And here's a simple demonstration you can try (with some adjustment for library paths):
```
$ cat << EOF > hello.c
#include <stdio.h>
int main(void)
{ printf("hello world\n");
  return 0; }
EOF
$ cpp hello.c > hello.i
$ gcc -S hello.i
$ as -o hello.o hello.s
$ ld -static -o hello \
-L/usr/lib64/gcc/x86_64-slackware-linux/5.5.0/ \
/usr/lib64/crt1.o /usr/lib64/crti.o hello.o \
/usr/lib64/crtn.o --start-group -lc -lgcc \
-lgcc_eh --end-group
$ ./hello
hello world
```
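If you would rather not run each stage by hand, GCC can perform the whole pipeline in one invocation and still keep the intermediate files. A small sketch using the standard `-save-temps` option:
```
$ gcc -save-temps hello.c -o hello
$ ls hello*
hello  hello.c  hello.i  hello.o  hello.s
```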
### Attainable knowledge
Computers have become amazingly powerful, and pleasantly user-friendly. Don't let that fool you into believing either of the two possible extremes: computers aren't as simple as mousetraps and light switches, but they also aren't beyond comprehension. You can learn about compiling code, about how to link, and compile for a different architecture. Once you know that, you can debug your code better. You can understand the code you download. You may even fix a bug or two. Or, in theory, you could build a better mousetrap. Or a CPU out of mousetraps. It's up to you.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/compiling-code
作者:[Alan Smithee][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alansmithee
[b]: https://github.com/lkxed
[2]: https://opensource.com/article/19/5/primer-assemblers-compilers-interpreters
[3]: https://opensource.com/article/22/9/python-interpreters-2022
[4]: https://opensource.com/article/22/5/gnu-c-compiler

View File

@ -0,0 +1,315 @@
[#]: subject: "13 Independent Linux Distros That are Built From Scratch"
[#]: via: "https://itsfoss.com/independent-linux-distros/"
[#]: author: "sreenath https://itsfoss.com/author/sreenath/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
13 Independent Linux Distros That are Built From Scratch
======
There are hundreds of Linux distributions available.
But most of them fall into these three categories: Debian, Red Hat (Fedora) and Arch Linux.
Using a distribution based on Debian/Ubuntu, Red Hat/SUSE or Arch Linux has its advantages. They are popular and hence their package manager offers a huge range of software.
However, some users prefer to use Linux distributions built from scratch and be independent of DEB/RPM packaging system.
In this article, we will list some of the best Linux distributions developed independently.
**Note:** Obviously, this list excludes popular options like Debian, Ubuntu, and Fedora, which are used as bases for creating new distros. Moreover, the distributions are in no particular order of ranking.
### 1. NixOS
![Image Credits: Distrowatch][1]
Initially released in 2003, Nix OS is built on top of the Nix Package Manager. It provides two releases every year, usually scheduled in May and November.
NixOS may not be a distribution directly geared to new and average users. However, its unique approach to [package management][2] attracts various kinds of users.
Additionally, 32-bit systems are also supported.
##### Other Features:
* Builds packages in isolation
* Reliable upgrade with rollback feature
* Reproducible system configuration
[NixOS][3]
**Related**: [Advanced Linux Distributions for Expert Linux Users][4]
### 2. Gentoo Linux
![Image Credits: Distrowatch][5]
Gentoo Linux is an independently developed distribution aimed mainly at system experts. It is built for users who want the freedom to customize, fine-tune and optimize the operating system to suit their requirements.
Gentoo uses [Portage package management][6] that lets you create and install packages, often allowing you to optimize them for your hardware. **Chromium OS**, the open-source version of Chrome OS, uses Gentoo at its core.
Not to forget, Gentoo is one of those [distributions that still support 32-bit architectures][7].
##### Other Features:
* Incremental Updates
* Source-based approach to software management
* Concept of overlay repositories like GURU (Gentoo's user repository), where users can add packages not yet provided by Gentoo
[Gentoo Linux][8]
### 3. Void Linux
![Image Credits: Distrowatch][9]
Void Linux is a [rolling release distribution][10] with its own X Binary Package System (XBPS) for installing and removing software. It was created by **Juan Romero Pardines**, a former NetBSD developer.
It avoids systemd and instead uses runit as its init system. Furthermore, it gives you the option to use several [desktop environments][11].
##### Other Features:
* Minimal system requirements
* Offers an official repository for non-free packages
* Support for Raspberry Pi
* Integration of OpenBSD's LibreSSL software
* Support for musl C library
* 32-bit support
[Void Linux][12]
**Related:** [Not a Systemd Fan? Here are 13+ Systemd-Free Linux Distributions][13]
### 4. Solus Linux
![solus budgie 2022][14]
Formerly EvolveOS, Solus Linux offers some exciting features while being built from scratch. Solus features its own homegrown Budgie desktop environment in its flagship edition.
Compared to other options, Solus Linux is one of the few independent distributions that new Linux users can use. It manages to be one of the [best Linux distributions][15] available.
It uses eopkg package management with a semi-rolling release model. As per the developers, Solus is exclusively developed for personal computing purposes.
##### Other Features:
* Available in Budgie, Gnome, MATE, and KDE Plasma editions
* Variety of software out of the box, which reduces setup efforts
[Solus Linux][16]
### 5. Mageia
![Image Credits: Distrowatch][17]
Mageia started as a fork of Mandriva Linux back in 2010. It aims to be a stable and secure operating system for desktop and server usage.
Mageia is a community-driven project supported by a non-profit organization and elected contributors. You will notice a major release every year.
##### Other Features
* Supports 32-bit system
* KDE Plasma, Gnome, and XFCE editions are available from the website
* Minimal system requirements
[Mageia][18]
**Related:** **[Linux Distros That Still Support 32-Bit Systems][19]**
### 6. Clear Linux
![Image Credits: Distrowatch][20]
Clear Linux is a distribution by Intel, primarily designed with performance and cloud use cases in mind.
One interesting thing about Clear Linux is that the operating system upgrades as a whole rather than as individual packages. So, even if you accidentally mess up the system, it should boot correctly, performing a factory reset to let you set it up again.
It is not geared toward personal use. But it can be a unique choice to try.
##### Other Features:
* Highly tuned for Intel platforms
* A strict separation between User and System files
* Constant vulnerability scanning
[Clear Linux OS][21]
### 7. PCLinuxOS
![Image Credits: Distrowatch][22]
PCLinuxOS is an x86_64 Linux distribution that uses APT-RPM packages. You can get KDE Plasma, Mate, and XFCE desktops, while it also offers several community editions featuring more desktops.
Locally installed versions of PCLinuxOS utilize the APT package management system thanks to [Synaptic package manager][23]. You can also find rpm packages from its repositories.
##### Other Features:
* mylivecd script allows the user to take a snapshot of their current hard drive installation (all settings, applications, documents, etc.) and compress it into an ISO CD/DVD/USB image.
* Additional support for over 85 languages.
[PCLinuxOS][24]
### 8. 4MLinux
![4m linux 2022][25]
[4MLinux][26] is a general-purpose Linux distribution with a strong focus on the following four **“M”s** of computing:
* Maintenance (system rescue Live CD)
* Multimedia (full support for a huge number of image, audio and video formats)
* Miniserver (DNS, FTP, HTTP, MySQL, NFS, Proxy, SMTP, SSH, and Telnet)
* Mystery (meaning a collection of classic Linux games)
It has a minimal system requirement and is available as a desktop and server version.
##### Other Features
* Support for large number of image, audio/video formats
* Small and general-purpose Linux distribution
[4MLinux][27]
### 9. Tiny Core Linux
![Image Credits: Distrowatch][28]
Tiny Core Linux focuses on providing a base system using BusyBox and FLTK. It is not a complete desktop. So, you should not expect it to run on every system.
It represents only the core needed to boot into a very minimal X desktop, typically with wired internet access.
The user gets great control over everything, but it may not be an easy out-of-the-box experience for new Linux users.
##### Other Features
* Designed to run from a RAM copy created at boot time
* By default, operates like a cloud/internet client
* Users can run appbrowser to browse repositories and download applications
[Tiny Core Linux][29]
### 10. Linux From Scratch
![Image Credit: Reddit][30]
[Reddit][31]
Linux From Scratch is a way to install a working Linux system by building all its components manually. Once completed, it provides a compact, flexible and secure system and a greater understanding of the internal workings of the Linux-based operating systems.
If you need to dive deep into how a Linux system works and explore its nuts and bolts, Linux From Scratch is the project you need to try.
##### Other Features
* Customised Linux system, entirely from scratch
* Extremely flexible
* Offers added security because you compile everything yourself from source
[Linux From Scratch][32]
### 11. Slackware
![Image Credits: Distrowatch][33]
Slackware is the oldest distribution that is still being maintained. Originally created in 1993, with Softlanding Linux System as its base, Slackware later became the base for many Linux distributions.
Slackware aims to produce the most UNIX-like Linux distribution while keeping things simple and stable.
##### Other Features
* Available for 32-bit and 64-bit systems
* Extensive online documentation
* Can run on anything from Pentium systems to the latest machines
[Slackware][34]
### 12. Alpine Linux
![alpine linux xfce 2022][35]
Alpine Linux is a community-developed operating system designed for routers, firewalls, VPNs, VoIP boxes, and servers. It began as a fork of the LEAF Project.
Alpine Linux uses apk-tools package management, initially written as a shell script and later written in C programming language. This is a minimal Linux distribution, which still supports 32-bit systems and can be installed as a run-from-RAM operating system.
##### Other Features:
* Provides a minimal container image of just 5 MB in size
* 2-year support for the main repository and support until the next stable release for the community repository
* Made around musl libc and Busybox with resource-efficient containers
[Alpine Linux][36]
### 13. KaOS
![Image Credits: Distrowatch][37]
KaOS is a Linux distribution built from scratch and inspired by Arch Linux. It uses [pacman for package management][38]. It is built with the philosophy “*One Desktop Environment (KDE Plasma), One Toolkit (Qt), One Architecture (x86_64)*“.
It has limited repositories, but still, it offers plenty of tools for a regular user.
##### Other Features:
* Most up-to-date Plasma desktop
* Tightly integrated rolling and transparent distribution for the modern desktop
[KaOS][39]
#### Wrapping Up
If you need a unique experience, these independent Linux distributions should serve the purpose.
However, if you want one of them to replace a mainstream distribution like Ubuntu on your desktop, you might want to think twice, considering that most (if not all) of the options above are not ideal for day-to-day desktop usage.
But then again, if you have a fair share of experience with Linux distributions, you can undoubtedly take up the task for an adventure!
*If you were to try one of these indie distros, which one would it be? Share with us in the comments.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/independent-linux-distros/
作者:[sreenath][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sreenath/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/10/nixos-2022.png
[2]: https://itsfoss.com/package-manager/
[3]: https://nixos.org/
[4]: https://itsfoss.com/advanced-linux-distros/
[5]: https://itsfoss.com/wp-content/uploads/2022/08/gentoo-linux-plasma.jpg
[6]: https://wiki.gentoo.org/wiki/Portage
[7]: https://itsfoss.com/32-bit-linux-distributions/
[8]: https://www.gentoo.org/
[9]: https://itsfoss.com/wp-content/uploads/2022/08/void-linux.jpg
[10]: https://itsfoss.com/rolling-release/
[11]: https://itsfoss.com/best-linux-desktop-environments/
[12]: https://voidlinux.org/
[13]: https://itsfoss.com/systemd-free-distros/
[14]: https://itsfoss.com/wp-content/uploads/2022/10/solus-budgie-2022.jpg
[15]: https://itsfoss.com/best-linux-distributions/
[16]: https://getsol.us/home/
[17]: https://itsfoss.com/wp-content/uploads/2022/08/mageia-1.jpg
[18]: https://www.mageia.org/en/
[19]: https://itsfoss.com/32-bit-linux-distributions/
[20]: https://itsfoss.com/wp-content/uploads/2022/08/clear-linux-desktop.png
[21]: https://clearlinux.org/
[22]: https://itsfoss.com/wp-content/uploads/2022/08/pclinuxos.png
[23]: https://itsfoss.com/synaptic-package-manager/
[24]: https://www.pclinuxos.com/
[25]: https://itsfoss.com/wp-content/uploads/2022/10/4m-linux-2022.jpg
[26]: https://itsfoss.com/4mlinux-review/
[27]: http://4mlinux.com/
[28]: https://itsfoss.com/wp-content/uploads/2022/03/tinycore.jpg
[29]: http://www.tinycorelinux.net/
[30]: https://itsfoss.com/wp-content/uploads/2022/08/enable-aur-e1659974408774.png
[31]: https://www.reddit.com/r/linuxmasterrace/comments/udi7ts/decided_to_try_lfs_in_a_vm_started_about_a_week/
[32]: https://www.linuxfromscratch.org/
[33]: https://itsfoss.com/wp-content/uploads/2022/10/slackware-scaled.jpg
[34]: http://www.slackware.com/
[35]: https://itsfoss.com/wp-content/uploads/2022/10/alpine-linux-xfce-2022.png
[36]: https://www.alpinelinux.org/
[37]: https://itsfoss.com/wp-content/uploads/2022/08/kaos-desktop.png
[38]: https://itsfoss.com/pacman-command/
[39]: https://kaosx.us/

View File

@ -0,0 +1,69 @@
[#]: subject: "Can Kubernetes help solve automation challenges?"
[#]: via: "https://opensource.com/article/22/10/kubernetes-solve-automation-challenges"
[#]: author: "Rom Adams https://opensource.com/users/romdalf"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Can Kubernetes help solve automation challenges?
======
Automation at the organization level has been an elusive goal, but Kubernetes might be able to change all that.
I started my automation journey when I adopted Gentoo Linux as my primary operating system in 2002. Twenty years later, automation is not yet a done deal. When I meet with customers and partners, they share automation wins within teams, but they also describe the challenges to achieving similar success at an organizational level.
Most IT organizations have the ability to provision a virtual machine end to end, reducing what used to be a four-week lead time to just five minutes. That level of automation is itself a complex workflow, requiring networking (IP address management, DNS, proxy, networking zones, and so on), identity access management, [hypervisor][2], storage, backup, updating the operating system, applying the latest configuration files, monitoring, security and hardening, and compliance benchmarking. Wow!
It's not easy to address the business need for high velocity, scaling, and on-demand automation. For instance, consider the classic webshop or an online government service to file tax returns. The workload has well-defined peaks that need to be absorbed.
A common approach for handling such a load is having an oversized server farm, ready to be used by a specialized team of IT professionals, monitoring the seasonal influx of customers or citizens. Everybody wants a just-in-time deployment of an entire stack. They want infrastructure running workloads within the context of a hybrid cloud scenario, using the model of "build-consume-trash" to optimize costs while benefiting from infinite elasticity.
In other words, everybody wants the utopian "cloud experience."
### Can the cloud really deliver?
All is not lost, thanks mainly to the way [Kubernetes][3] has been designed. The exponential adoption of Kubernetes fuels innovation, displacing standard legacy practices for managing platforms and applications. Kubernetes requires the use of Everything-as-Code (EaC) to define the desired state of all resources, from simple compute nodes to TLS certificates. Kubernetes compels the use of three major design constructs:
* A standard interface to reduce integration friction between internal and external components
* An API-first and API-only approach to standardize the CRUD (Create, Read, Update, Delete) operations of all its components
* Use of [YAML][4] as a common language to define all desired states of these components in a simple and readable way
These three key components are essentially the same requirements for choosing an automation platform, at least if you want to ease adoption by cross-functional teams. This also blurs the separation of duties between teams, helping to improve collaboration across silos, which is a good thing!
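To make the "desired state in YAML" idea concrete, here is a minimal sketch (the resource names and image are placeholders, not taken from any particular setup). It declares three replicas of a web front end and hands the declaration to the cluster through the standard API; Kubernetes then continuously reconciles reality against it:
```
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webshop-front        # hypothetical name
spec:
  replicas: 3                # the desired state: three identical pods
  selector:
    matchLabels:
      app: webshop-front
  template:
    metadata:
      labels:
        app: webshop-front
    spec:
      containers:
      - name: web
        image: nginx:1.25    # placeholder image
        ports:
        - containerPort: 80
EOF
```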
As a matter of fact, customers and partners adopting Kubernetes are ramping up to a state of hyper-automation. Kubernetes organically drives teams to adopt multiple [DevOps foundations and practices][5]—like EaC, [version control with Git][6], peer reviews, [documentation as code][7]—and encourages cross-functional collaboration. These practices help mature a team's automation skills, and they help a team get a good start in GitOps and CI/CD pipelines dealing with both application lifecycle and infrastructure.
### Making automation a reality
You read that right! The entire stack for complex systems like a webshop or government reporting can be defined in clear, understandable, universal terms that can be executed on any on-prem or cloud provider. An autoscaler with custom metrics can be defined to trigger a just-in-time deployment of your desired stack to address the influx of customers or citizens during seasonal peaks. When metrics are back to normal, and cloud compute resources don't have a reason to exist anymore, you trash them and return to regular operations, with a set of core assets on-prem taking over the business until the next surge.
### The chicken and the egg paradox
Considering Kubernetes and cloud-native patterns, automation is a must. But it raises an important question: Can an organization adopt Kubernetes before addressing the automation strategy?
It might seem that starting with Kubernetes could inspire better automation, but that's not a foregone conclusion. A tool is not an answer to the problem of skills, practices, and culture. However, a well-designed platform can be a catalyst for learning, change, and cross-functional collaboration within an IT organization.
### Get started with Kubernetes
Even if you feel you missed the automation train, don't be afraid to start with Kubernetes on an easy, uncomplicated stack. Embrace the simplicity of this fantastic orchestrator and iterate with more complex needs once you've [mastered the initial steps][8].
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/kubernetes-solve-automation-challenges
作者:[Rom Adams][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/romdalf
[b]: https://github.com/lkxed
[2]: https://www.redhat.com/en/topics/virtualization/what-is-a-hypervisor?intcmp=7013a000002qLH8AAM
[3]: https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=7013a000002qLH8AAM
[4]: https://opensource.com/article/21/9/yaml-cheat-sheet
[5]: https://opensource.com/resources/devops
[6]: https://opensource.com/life/16/7/stumbling-git
[7]: https://opensource.com/article/21/3/devops-documentation
[8]: https://opensource.com/article/17/11/getting-started-kubernetes

View File

@ -0,0 +1,193 @@
[#]: subject: "Celebrating KDEs 26th Birthday With Some Inspiring Facts From Its Glorious Past!"
[#]: via: "https://itsfoss.com/kde-facts-trivia/"
[#]: author: "Avimanyu Bandyopadhyay https://www.gizmoquest.com"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Celebrating KDE's 26th Birthday With Some Inspiring Facts From Its Glorious Past!
======
Wishing a Very Happy Birthday to **KDE**!
Let us celebrate this moment by looking back on its glorious history with some inspiring facts on this legendary and much-loved Desktop Environment!
![kde birthday][1]
### KDE's Origin
**26 years ago**, [Matthias Ettrich][2] (a German Computer Scientist currently working at [HERE][3]) founded KDE.
![matthias 2950607241][4]
When Matthias was a student at the [Eberhard Karls University of Tübingen][5], he was not satisfied as a [Common Desktop Environment (CDE)][6] user.
CDE is a desktop environment for Unix.
However, he wanted an interface that was more comfortable, simpler, and easy to use, with a better look and feel.
So, in 1996, Matthias Ettrich announced the **Kool Desktop Environment (KDE)**, a GUI (graphical user interface) for Unix systems, built with Qt and C ++.
Note that the full form of KDE was an innocent pun on CDE at the time. These days you do not usually spell it out as “Kool Desktop Environment”; it is simply KDE. You can read the [original announcement post][7] to get a dose of nostalgia.
**Trivia**: The official mascot of KDE is Konqi, who has a girlfriend named Katie. Previously there was a wizard named [Kandalf][8], but he was later replaced by Konqi because many people loved this charming and friendly dragon and preferred him as the mascot!
![Konqi is KDE's mascot. Katie is his girlfriend and mascot of KDE women's project.][9]
[Konqi][10]
[Katie][11]
And, here's how it looked with the KDE mascot:
![Screenshot of earlier version of KDE desktop][12]
### 13 Interesting and Inspiring Facts on KDE
We've looked back at some interesting and inspiring events that took place over the last 26 years of the KDE project:
#### 1. Early Development Events
15 developers met in Arnsberg, Germany, in 1997, to work on the KDE project and discuss its future. This event came to be known as [KDE One][13] followed by [KDE Two][14] and [KDE Three][15] and so on in the later years. They even had [one][16] for a beta version.
#### 2. The KDE Free Qt Foundation Agreement
The foundation agreement for the [KDE Free Qt Foundation][17] was signed by [KDE e.V.][18] and [Trolltech][19], the then-owner of Qt, which [ensured the permanent availability][20] of Qt as free software.
#### 3. First Stable Version
![kde 1][21]
The [first stable version][22] of KDE was released in **1998**, in addition to highlighting an application development framework, the [KOM/OpenParts][23], and an office suite preview. You can check out the [KDE 1.x Screenshots][24].
#### 4. The KDE Women Initiative
The community women's group, [KDE Women][25], was created and announced in March 2001 with the primary goal of increasing the number of women in free software communities, particularly in KDE.
#### 5. 1 Million Commits
The community [reached 1 million commits][26] in July 2009, only 19 months after hitting 750,000 in December 2007 (a milestone that roughly coincided with the launch of KDE 4), up from 500,000 in January 2006.
#### 6. Release Candidate of Development Platform Announced
A [release candidate][27] of KDE's development platform, consisting of basic libraries and tools to develop KDE applications, was announced in October 2007.
#### 7. First KDE & Qt event in India
The [first conference][28] of the KDE and Qt community in India took place in Bengaluru in March 2011, and it became an annual event thereafter.
#### 8. GCompris and KDE
In **December 2014**, the educational software suite [GCompris joined][29] the [project incubator of KDE community][30] (We have [previously][31] discussed GCompris, which is bundled with Escuelas Linux, a comprehensive educational distro for teachers and students).
#### 9. KDE Slimbooks
In **2016**, the KDE community partnered with a Spanish laptop retailer and [announced the launch of the KDE Slimbook][32], an ultrabook with KDE Plasma and KDE Applications pre-installed. Slimbook offers a pre-installed version of [KDE Neon][33] and [can be purchased from their website][34].
#### 10. Transition to GitLab
In **2019**, KDE [migrated][35] from Phabricator to GitLab to enhance the development process and let new contributors easy access to the workflow. However, KDE still uses bugzilla for tracking bugs.
#### 11. Adopts Decentralized Matrix Protocol
KDE added Matrix bridge to the IRC channels and powered up its native chat clients using the open-source decentralized Matrix protocol in **2019**.
#### 12. KDE PinePhone
KDE developers teamed up with [PINE64][36] in **2020** to introduce a community edition PinePhone powered by KDE.
#### 13. Valve Picks KDE for Steam Deck
Steam Deck is undoubtedly a super trending Linux gaming console right now. And, in **2021**, Valve chose KDE as the desktop environment to make it work.
### Today, KDE is Powered by Three Great Projects
#### KDE Plasma
Previously called Plasma Workspaces, KDE Plasma facilitates a unified workspace environment for running and managing applications on various devices like desktops, netbooks, tablets or even [smartphones][37].
Currently, [KDE Plasma 5.26][38] is the most recent version and was released some days ago. The KDE Plasma 5 project is the fifth generation of the desktop environment and is the successor to KDE Plasma 4.
#### KDE Applications
KDE Applications are a bundled set of applications and libraries designed by the KDE community. Most of these applications are cross-platform, though primarily made for Linux.
A very [nice][39] project in this category is a music player called Elisa focused on an optimised integration with Plasma.
#### KDE Development Platform
The KDE Development Platform is what significantly empowers the above two initiatives, and is a collection of libraries and software frameworks released by KDE to promote better collaboration among the community to develop KDE software.
**Personal Note**: It was an honour covering this article on KDE's birthday, and I would like to take this opportunity to mention some of my personal favourite KDE-based apps and distros that I have used extensively in the past and continue to use.
Check out the entire [timeline][40] in detail for a more comprehensive outline, or take a look at 19 years of visual changes in this interesting video:
![A Video from YouTube][41]
### Best KDE-Based Distributions
If you have heard all the good things about KDE, you should try out the distributions powered by KDE.
We have a [list of Linux distributions based on KDE][42], if you are curious.
*Hope you liked our favourite moments in KDE history on their 26th Anniversary! Please do write about any thoughts you might have about any of your memorable experiences with KDE in the comments below.*
This article was originally published in 2018 and has been edited to reflect the latest information.
--------------------------------------------------------------------------------
via: https://itsfoss.com/kde-facts-trivia/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.gizmoquest.com
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/10/kde-birthday.png
[2]: https://en.wikipedia.org/wiki/Matthias_Ettrich
[3]: https://here.com/
[4]: https://itsfoss.com/wp-content/uploads/2022/10/matthias-2950607241.jpg
[5]: https://www.uni-tuebingen.de/en
[6]: https://en.wikipedia.org/wiki/Common_Desktop_Environment
[7]: https://kde.org/announcements/announcement/
[8]: https://en.wikipedia.org/wiki/Konqi#Kandalf
[9]: https://itsfoss.com/wp-content/uploads/2018/10/Konqi-and-Katie.jpg
[10]: https://en.wikipedia.org/wiki/Konqi
[11]: https://community.kde.org/Katie
[12]: https://itsfoss.com/wp-content/uploads/2018/10/Konqi-from-the-early-days-who-replaced-Kandalf-right.jpg
[13]: https://community.kde.org/KDE_Project_History/KDE_One_(Developer_Meeting)
[14]: https://community.kde.org/KDE_Project_History/KDE_Two_(Developer_Meeting)
[15]: https://community.kde.org/KDE_Project_History/KDE_Three_(Developer_Meeting)
[16]: https://community.kde.org/KDE_Project_History/KDE_Three_Beta_(Developer_Meeting)
[17]: https://www.kde.org/community/whatiskde/kdefreeqtfoundation.php
[18]: https://www.kde.org/announcements/fsfe-associate-member.php
[19]: https://dot.kde.org/2007/02/28/trolltech-becomes-first-corporate-patron-kde
[20]: https://dot.kde.org/2016/01/13/qt-guaranteed-stay-free-and-open-%E2%80%93-legal-update
[21]: https://itsfoss.com/wp-content/uploads/2022/10/kde-1.jpg
[22]: https://www.kde.org/announcements/announce-1.0.php
[23]: https://www.kde.org/kdeslides/Usenix1998/sld016.htm
[24]: https://czechia.kde.org/screenshots/kde1shots.php
[25]: https://community.kde.org/KDE_Women
[26]: https://dot.kde.org/2009/07/20/kde-reaches-1000000-commits-its-subversion-repository
[27]: https://www.kde.org/announcements/announce-4.0-platform-rc1.php
[28]: https://dot.kde.org/2010/12/28/confkdein-first-kde-conference-india
[29]: https://dot.kde.org/2014/12/11/gcompris-joins-kde-incubator-and-launches-fundraiser
[30]: https://community.kde.org/Incubator
[31]: https://itsfoss.com/escuelas-linux/
[32]: https://dot.kde.org/2017/01/26/kde-and-slimbook-release-laptop-kde-fans
[33]: https://en.wikipedia.org/wiki/KDE_neon
[34]: https://slimbook.es/en/store/slimbook-kde
[35]: https://pointieststick.com/2020/05/23/this-week-in-kde-we-have-migrated-to-gitlab/
[36]: https://www.pine64.org
[37]: https://play.google.com/store/apps/details?id=org.kde.kdeconnect_tp
[38]: https://news.itsfoss.com/kde-plasma-5-26-release/
[39]: https://mgallienkde.wordpress.com/2018/10/09/0-3-release-of-elisa-music-player/
[40]: https://timeline.kde.org/
[41]: https://youtu.be/1UG4lQOMBC4
[42]: https://itsfoss.com/best-kde-distributions/

View File

@ -0,0 +1,230 @@
[#]: subject: "How To Restrict Access To Linux Servers Using TCP Wrappers"
[#]: via: "https://ostechnix.com/restrict-access-linux-servers-using-tcp-wrappers/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How To Restrict Access To Linux Servers Using TCP Wrappers
======
In this guide, we are going to learn **what is TCP Wrappers**, what is it used for, how to **install TCP Wrappers in Linux**, and how to **restrict access to Linux servers using TCP Wrappers**.
### What is TCP Wrappers?
**TCP Wrappers** (also known as **tcp_wrapper**) is an open source host-based ACL (Access Control List) system, which is used to restrict the TCP network services based on the hostname, IP address, network address, and so on. It decides which host should be allowed to access a specific network service.
TCP Wrapper was developed by a Dutch programmer and physicist **Wietse Zweitze Venema** in 1990 at the Eindhoven University of Technology. He maintained it until 1995, and then released it under BSD License in 2001.
### Is TCP Wrappers a replacement for Firewalls?
**No.** Please be aware that **TCP Wrapper is not a complete replacement for properly configured firewall**. It is just a **valuable addition to** **enhance your Linux server's security**.
Some Linux distributions such as Debian and Ubuntu have dropped TCP Wrappers from their official repositories, because the last version of tcp_wrappers was released 20 years ago. At that time, it was a very powerful tool to "block all traffic".
However, these days we can do the same thing using firewalls/iptables/nftables for all traffic at the **network level**, or use similar filtering at the application level. TCP Wrappers, by contrast, blocks incoming connections at the application level only.
If you still prefer to use TCP Wrappers for any reason, it is always recommended to use TCP Wrappers in conjunction with a properly configured firewall and other security mechanisms and tools to harden your Linux server's security.
### Install TCP Wrappers in Linux
TCP Wrappers is available in the official repositories of most Linux operating systems.
Depending upon the Linux distribution you use, TCP Wrappers can be installed as shown below.
**On Arch-based systems**, make sure the [Community] repository is enabled and run the following command to install TCP Wrappers in Arch Linux and its variants such as EndeavourOS and Manjaro Linux:
```
$ sudo pacman -S tcp-wrappers
```
**On Fedora , RHEL, CentOS, AlmaLinux and Rocky Linux:**
Make sure you've enabled the **[EPEL]** repository:
```
$ sudo dnf install epel-release
```
And then install TCP wrappers using command:
```
$ sudo dnf install tcp_wrappers
```
On RHEL 6 systems, you need to use yum instead of dnf to install TCP wrappers.
```
$ sudo yum install tcp_wrappers
```
### Configure TCP Wrappers
TCP Wrappers implements the access control with the help of two configuration files:
* /etc/hosts.allow,
* /etc/hosts.deny.
These two access control list files decide whether or not specific clients are allowed to access your Linux server.
#### The /etc/hosts.allow file
The `/etc/hosts.allow` file contains the list of allowed or denied hosts or networks. This means we can both allow and deny connections to network services by defining access rules in this file.
#### The /etc/hosts.deny file
The `/etc/hosts.deny` file contains the list of hosts or networks that are not allowed to access your Linux server. The access rules in this file can also be set up in `/etc/hosts.allow` with a **'deny'** option.
The typical syntax to define an access rule is:
```
daemon_list : client_list : option : option ...
```
Where,
* daemon_list - The name of a network service, such as SSH, FTP, Portmap, etc.
* client_list - A comma-separated list of valid hostnames, IP addresses or network addresses.
* options - An optional action that specifies something to be done whenever a rule is matched.
The syntax is the same for both files.
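To illustrate the optional third field, here is a sketch of a rule that logs every matching SSH connection before allowing it. It assumes your tcp_wrappers build supports the hosts_options(5) extensions (such as the spawn option) and that the log path is writable; adjust both to your environment:

```
sshd : 192.168.43. : spawn /bin/echo "connection from %a to %d" >> /var/log/tcpwrappers.log
```

Here %a expands to the client address and %d to the daemon name, as documented in the hosts_access(5) man page.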
### Rules to remember
Before using TCP Wrappers, you need to know the following important rules. Please be mindful that TCP Wrappers consults only these two files (hosts.allow and hosts.deny).
* The access rules in the `/etc/hosts.allow` file are applied first. They take precedence over the rules in the `/etc/hosts.deny` file. Therefore, if access to a service is allowed in the `/etc/hosts.allow` file, a rule denying access to that same service in `/etc/hosts.deny` is ignored.
* Only one rule per service is allowed in each file (`hosts.allow` and `hosts.deny`).
* The order of the rules is very important. Only the first matching rule for a given service will be taken into account. The same applies to both files.
* If there are no matching rules for a service in either file, or if neither file exists, then access to the service will be granted to all remote hosts.
* Any changes to either file take effect immediately, without restarting the network services.
### Restrict Access To Linux Servers Using TCP Wrappers
The recommended approach to secure a Linux server is to **block all incoming connections**, and allow only a few specific hosts or networks.
To do so, edit **/etc/hosts.deny** file:
```
$ sudo vi /etc/hosts.deny
```
Add the following line. This line refuses connections to ALL services from ALL networks.
```
ALL: ALL
```
Then, edit **/etc/hosts.allow** file:
```
$ sudo vi /etc/hosts.allow
```
and allow the specific hosts or networks of your choice.
```
sshd: 192.168.43.192 192.168.43.193
```
You can also specify valid hostnames instead of IP addresses, as shown below.
```
sshd: server1.ostechnix.lan server2.ostechnix.lan
```
Alternatively, you can do the same by defining all rules (both allow and deny) in `/etc/hosts.allow` file itself.
Edit **/etc/hosts.allow** file and add the following lines.
```
sshd: 192.168.43.192 192.168.43.193
sshd: ALL: DENY
```
In this case, you don't need to specify any rule in `/etc/hosts.deny` file.
As per the above rules, all incoming SSH connections will be denied for all hosts except the two hosts 192.168.43.192 and 192.168.43.193.
Now, if you try to SSH to your Linux server from any host other than the above hosts, you will get the following error.
```
ssh_exchange_identification: read: Connection reset by peer
```
You can verify this from your Linux server's log files as shown below.
```
$ cat /var/log/secure
```
**Sample output:**
```
Jun 16 19:40:17 server sshd[15782]: refused connect from 192.168.43.150 (192.168.43.150)
```
Similarly, you can define rules for other services, say for example vsftpd, in `/etc/hosts.allow` file as shown below.
```
vsftpd: 192.168.43.192
vsftpd: ALL: DENY
```
Again, you don't need to define any rules in the `/etc/hosts.deny` file. As per the above rules, the remote host with IP address 192.168.43.192 is allowed to access the Linux server via FTP. All other hosts will be denied.
Also, you can define the access rules in different formats in /etc/hosts.allow file as shown below.
```
sshd: 192.168.43.192 #Allow a single host for SSH service
sshd: 192.168.43.0/255.255.255.0 #Allow a /24 prefix for SSH
vsftpd: 192.168.43.192 #Allow a single host for FTP
vsftpd: 192.168.43.0/255.255.255.0 #Allow a /24 prefix for FTP
vsftpd: server1.ostechnix.lan #Allow a single host for FTP
```
#### Allow all hosts except a specific host
You can allow incoming connections from all hosts, but not from a specific host. Say for example, to allow incoming connections from all hosts in the **192.168.43** subnet, but not from the host **192.168.43.192**, add the following line in `/etc/hosts.allow` file.
```
ALL: 192.168.43. EXCEPT 192.168.43.192
```
In the above case, you don't need to add any rules in /etc/hosts.deny file.
Or you can specify the hostname instead of IP address as shown below.
```
ALL: .ostechnix.lan EXCEPT badhost.ostechnix.lan
```
For more details, refer to the man pages.
```
$ man tcpd
```
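The access control language and the extended options are documented in their own manual pages as well; on most systems with tcp_wrappers installed, these should also be available:

```
$ man 5 hosts_access
$ man hosts_options
```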
### Conclusion
As you can see, securing network services on your Linux systems with TCP Wrappers is easy! But keep in mind that TCP Wrappers is not a replacement for a firewall. It should be used in conjunction with firewalls and other security tools.
**Resource:**
* [Wikipedia][1]
--------------------------------------------------------------------------------
via: https://ostechnix.com/restrict-access-linux-servers-using-tcp-wrappers/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://en.wikipedia.org/wiki/TCP_Wrapper

View File

@ -0,0 +1,111 @@
[#]: subject: "Install Gedit on Ubuntu 22.10 and Make it Default Text Editor"
[#]: via: "https://itsfoss.com/install-gedit-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Install Gedit on Ubuntu 22.10 and Make it Default Text Editor
======
[GNOME has a brand new text editor][1] to replace the good old Gedit editor.
While it was already available with GNOME 42, Ubuntu 22.04 relied on Gedit.
This is changing in Ubuntu 22.10. GNOME Text Editor is the default here and Gedit is not even installed.
![Searching for text editor only brings GNOME Text Editor][2]
While the new editor is good enough, not everyone will like it. This is especially true if you use Gedit extensively with additional plugins.
If you are among those people, let me show you how to install Gedit on Ubuntu. I'll also share how you can make it the default text editor.
### Install Gedit on Ubuntu
This is actually a no-brainer. While Gedit is not installed by default, it is still available in Ubuntu repositories.
So, all you have to do is to use the apt command to install it:
```
sudo apt install gedit
```
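Once the installation finishes, you can optionally confirm the installed version from the terminal (gedit accepts the standard --version flag):

```
gedit --version
```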
Gedit is also available in the software center, but there it is the snap package. You could install that if you want.
![Gedit is also available in Ubuntus Snap Store][3]
#### Install Gedit Plugins (optional)
By default, Gedit gives you the option to access a few plugins. You can enable or disable the plugins from Menu -> Preferences -> Plugins.
![Accessing plugins in Gedit][4]
You should see the available plugins here. The installed or in-use plugins are checked.
![See the available and installed plugins in Gedit][5]
However, you can take the plugin selection to the next level by installing the gedit-plugins meta package.
```
sudo apt install gedit-plugins
```
This will give you access to additional plugins like bookmarks, bracket completion, Python console and more.
![Additional Gedit plugins][6]
**Tip**: If you notice that Gedit looks a bit out of place for the lack of rounded bottom corners, you can install a GNOME extension called [Round Bottom Corner][7]. This will force rounded bottom corners for all applications, including Gedit.
### Make Gedit the default text editor
Alright! So you have installed Gedit, but text files still open in GNOME Text Editor on double click. To open a file with Gedit, you need to right-click the file and then select the "Open With" option.
If you want Gedit to open text files all the time, you can set it as default.
Right-click on a text file and go with the "Open With" option. Select Gedit here and enable the "Always use for this file type" option at the bottom.
![Set Gedit as the default text editor][8]
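If you prefer the terminal, the same default can usually be set with xdg-mime. This is an alternative sketch, assuming the desktop entry is named org.gnome.gedit.desktop, as it is on current Ubuntu releases:

```
xdg-mime default org.gnome.gedit.desktop text/plain
```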
### Remove Gedit
Don't feel Gedit is up to the mark? That's rare, but I am not judging you. To remove Gedit from Ubuntu, use the following command:
```
sudo apt remove gedit
```
You may also try uninstalling it from the software center.
### Conclusion
GNOME Text Editor is the next-gen, created-from-scratch editor that blends well with the new GNOME.
It's good enough for simple text editing. However, Gedit has a plugin ecosystem that gives it more features.
For those who use it extensively for coding and other stuff, installing Gedit is still an option in Ubuntu.
What about you? Will you stick with the default new text editor or would you go back to the good old Gedit?
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-gedit-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/gnome-text-editor/
[2]: https://itsfoss.com/wp-content/uploads/2022/10/text-editor-ubuntu.png
[3]: https://itsfoss.com/wp-content/uploads/2022/10/install-gedit-from-ubuntu-software-center.png
[4]: https://itsfoss.com/wp-content/uploads/2022/10/access-plugins-in-gedit.png
[5]: https://itsfoss.com/wp-content/uploads/2022/10/plugins-in-gedit.png
[6]: https://itsfoss.com/wp-content/uploads/2022/10/additional-plugins-gedit.png
[7]: https://extensions.gnome.org/extension/5237/rounded-window-corners/
[8]: https://itsfoss.com/wp-content/uploads/2022/10/set-gedit-default.png

View File

@ -0,0 +1,108 @@
[#]: subject: "How to Enable and Access USB Drive in VirtualBox"
[#]: via: "https://www.debugpoint.com/enable-usb-virtualbox/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Enable and Access USB Drive in VirtualBox
======
Here's a precise guide on how you can enable USB in Oracle VirtualBox.
![][1]
When you work in a virtual machine environment, a USB drive is usually plugged into the host system. But it is a little difficult to access that USB content from the guest system.
In VirtualBox, you need to install an extension and enable a few settings to access USB drives in the guest. Here's how.
This article assumes that you have already installed VirtualBox and also installed some Linux distribution or operating system inside it.
If not, check out the [articles here][2].
### Enable USB in VirtualBox 7.0
#### Install VirtualBox Extension Pack
* Open the VirtualBox download page and download the VirtualBox Extension pack for all supported platforms using [this link][3].
![Download the extension pack][4]
* Then click on `File > Tools > Extension Pack Manager`.
* Click on the `Install` button in the toolbar and select the downloaded .vbox-extpak file.
* Hit `Install`. Accept the terms, and give the admin password for the installation.
![install extension pack manager][5]
![install extension pack manager after accepting terms][6]
* After successful installation, you can see it in the installed list.
* Restart your host system. Restarting is mandatory. (On a Linux host, also see the note on the vboxusers group below.)
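A note for Linux hosts (an extra assumption on my part, not part of the original steps): VirtualBox can usually only see USB devices if your user belongs to the vboxusers group, which the distribution packages normally create. You may need to add yourself to it and log out and back in:

```
sudo usermod -aG vboxusers $USER
```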
#### Enable USB in the guest box
* Plug the USB stick that you want to access from the guest virtual machine into your host system.
* Start VirtualBox and right-click on the VM name where you want to enable USB. Select Settings.
![Launch settings for the virtual machine][7]
* On the left pane, click on USB. Then select the controller version. For example, you can select USB 3.0. Then click on the small plus icon to add a USB filter.
* In this list, you should see your USB stick name (which you plugged in). For this example, I can see my Transcend Jetflash drive, which I plugged in.
* Select it and press OK.
![Select the USB stick][8]
* Now, start your virtual machine. Open the file manager, and you should see the USB is enabled and mounted on your virtual machine.
* In this demonstration, you can see the Thunar file manager of my [Arch-Xfce][9] virtual machine is showing the contents of my USB stick.
![Enabling USB and accessing contents from VirtualBox][10]
### Usage notes
Now, here are a couple of things you should remember.
* When you plug the USB into the host system, keep it mounted, but do not open or access any files before launching the virtual machine.
* Once you start your virtual machine, the USB will be unmounted in the host system and auto-mounted in the guest system, i.e. your virtual machine.
* After you finish with the USB stick, make sure to eject or unmount it inside the virtual machine. Then it will be accessible again inside your host system.
### Wrapping Up
VirtualBox is a powerful utility and provides easy-to-use features to set up your virtual machines extensively. The steps are straightforward; just make sure your USB stick is detected properly by the host system for this to work.
Also, remember that USB stick detection via the Extension Pack is not related to the VirtualBox Guest Additions. They are completely unrelated and provide separate functions.
Finally, let me know if this guide helps you in the comment box.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/enable-usb-virtualbox/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/usb-vbox-1024x576.jpg
[2]: https://www.debugpoint.com/tag/virtualbox
[3]: https://www.virtualbox.org/wiki/Downloads
[4]: https://www.debugpoint.com/wp-content/uploads/2022/10/Download-the-extension-pack.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/10/install-extension-pack-manager.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/10/install-extension-pack-manager-after-accepting-terms.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/10/Launch-settings-for-the-virtual-machine.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/10/Select-the-USB-stick.jpg
[9]: https://www.debugpoint.com/xfce-arch-linux-install-4-16/
[10]: https://www.debugpoint.com/wp-content/uploads/2022/10/Enabling-USB-and-accessing-contents-from-VirtualBox.jpg

View File

@ -0,0 +1,140 @@
[#]: subject: "How to Get Started with Shell Scripting in Linux"
[#]: via: "https://www.linuxtechi.com/get-started-shell-scripting-linux/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Get Started with Shell Scripting in Linux
======
Hello readers! In this post, we will cover how to get started with shell scripting on Linux and UNIX systems.
##### What is a Shell?
A shell is a command interpreter in UNIX/Linux-like operating systems. It takes commands typed by the user and calls the operating system to run those commands. In simple terms, a shell acts as a form of wrapper around the OS. For example, you may use the shell to enter a command to list the files in a directory, such as the [ls command][1], or a command to copy files, such as cp.
```
$ ls
Desktop  Documents  Downloads  Music  Pictures  playbook.yaml  Public  snap  Templates  test5  Videos
$
```
In this example, you simply type ls and press Enter. The $ is the shell prompt, which tells you that the shell awaits your commands. The remaining lines are the names of the files in the current directory.
##### What is Shell Prompt?
The prompt, $, which is called the command prompt, is issued by the shell. While the prompt is displayed, you can type a command. The shell reads your input after you press Enter. It determines the command you want executed by looking at the first word of your input. A word is an unbroken set of characters; spaces and tabs separate words.
### What are the different types of Shells?
Since no single shell has a monopoly, you are free to run any shell you wish. That's all well and good, but choosing a shell without knowing the alternatives is not very helpful. Below is a list of shells available on UNIX/Linux.
##### The Bourne Shell
The original UNIX shell is known as sh, short for shell, or the Bourne shell, named after Steven Bourne, the creator of sh. It is available on almost all UNIX-like operating systems. The basic Bourne shell supports only the most limited command-line editing: you can type characters, remove characters one at a time with the Backspace key, and press Enter to execute the command. If the command line gets messed up, you can press Ctrl-C to cancel the whole command.
##### The C Shell
Designed by Bill Joy at the University of California at Berkeley, the C shell was so named because much of its syntax parallels that of the C programming language. This shell adds some neat features to the Bourne shell, especially the ability to recall previous commands to help create future commands. Because it is very likely you will need to execute more than one command to perform a particular task, this C shell capability is very useful.
##### The Korn Shell
Created by David Korn at AT&T Bell Laboratories, the Korn shell, or ksh, offers the same kind of enhancements offered by the C shell, with one important difference: the Korn shell is backward compatible with the older Bourne shell syntax. On UNIX systems such as AIX and HP-UX, the Korn shell is the default shell.
##### Bash (The Bourne Again Shell)
Bash offers command-line editing like the Korn shell, file name completion like the C shell and a lot of other advanced features. Many users view Bash as having the best of the Korn and C shells in one shell. On Linux and Mac OS X systems, Bash is the default shell.
##### tcsh (The T C Shell)
Linux systems popularized the T C shell, or tcsh. Tcsh extends the traditional csh to add command-line editing, file name completion and more. For example, tcsh will complete file and directory names when you press the Tab key (the same key used in Bash). The older C shell did not support this feature.
### What is a Shell Script?
A shell script is a text file that contains one or more commands. In a shell script, the shell assumes each line of the text file holds a separate command. These commands appear, for the most part, as if you had typed them in at a shell window.
##### Why use Shell Scripts?
Shell scripts are used to automate administrative tasks, encapsulate complex configuration details and get at the full power of the operating system. The ability to combine commands allows you to create new commands, thereby adding value to your operating system. Furthermore, combining a shell with a graphical desktop environment allows you to get the best of both worlds.
For a Linux system administrator, day-to-day repeated tasks can be automated using shell scripts, which saves time and lets admins focus on more valuable work.
##### Creating first shell script
Create a text file named myscript.sh in your current working directory; shell scripts conventionally use the ".sh" extension. The first line of a shell script is either #!/bin/sh or #!/bin/bash. It is known as the shebang, because the # symbol is called a hash and the ! symbol is called a bang. The /bin/sh or /bin/bash part indicates that the commands are to be executed by either the sh or the bash shell.
Below are the contents of myscript.sh:
```
#!/bin/bash
# Written by LinuxTechi
echo
echo "Current Working Directory: $(pwd)"
echo
echo "Today' Date & Time: $(date)"
DISK=$(df -Th)
echo
echo "Disk Space on System:"
echo "$DISK"
```
The above shell script will display the current working directory and today's date & time, along with file system disk space. We have used the [echo command][2] and other [Linux commands][3] to build this script.
Assign executable permission using the [chmod command][4] below:
```
$ chmod a+x myscript.sh
```
Now execute the script.
```
$ sh myscript.sh
# or
$ ./myscript.sh
```
Note: To execute any shell script available in the current directory, use ./<script-name> as shown above.
##### Taking Input from the user in shell script
The read command is used to take input from the user via the keyboard and assign the value to a variable. The echo command is used to display the contents.
Let's modify the above script so that it takes input:
```
#!/bin/bash
# Written by LinuxTechi
read -p "Your Name: " NAME
echo
echo "Today' Date & Time: $(date)"
echo
read -p "Enter the file system:" DISK
echo "$(df -Th $DISK)"
```
Now, try to execute the script. This time it should prompt you to enter details.
```
$ ./myscript.sh
Your Name: Pradeep Kumar

Today' Date & Time: Sat 15 Oct 05:32:38 BST 2022

Enter the file system:/mnt/data
Filesystem                Type  Size  Used Avail Use% Mounted on
/dev/mapper/volgrp01-lv01 ext4   14G   24K   13G   1% /mnt/data
$
```
Perfect, the above output confirms that the script is prompting for input and processing the data.
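If you want to go one small step further, you could validate the input before handing it to df. The following snippet is a hypothetical extension of the script above, not part of the original article:

```
#!/bin/bash
# Hypothetical extension: check the entered path before calling df
read -p "Enter the file system: " DISK

if [ -d "$DISK" ]; then
    df -Th "$DISK"
else
    echo "Error: '$DISK' is not a directory" >&2
    exit 1
fi
```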
That concludes the post. I hope you have found it informative. Kindly post your queries and feedback in the comments section below.
Read Also: [How to Debug a Bash Shell Script in Linux][5]
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/get-started-shell-scripting-linux/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/linux-ls-command-examples-beginners/
[2]: https://www.linuxtechi.com/echo-command-examples-in-linux/
[3]: https://www.linuxtechi.com/20-linux-commands-interview-questions-answers/
[4]: https://www.linuxtechi.com/chmod-command-examples-in-linux/
[5]: https://www.linuxtechi.com/debugging-shell-scripts-in-linux/

View File

@ -0,0 +1,221 @@
[#]: subject: "Top 10 Best Linux Distributions in 2022 For Everyone"
[#]: via: "https://www.debugpoint.com/best-linux-distributions-2022/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Top 10 Best Linux Distributions in 2022 For Everyone
======
**We compiled a list of the 10 best Linux distributions for everyone in 2022 based on their stability, attractiveness and time required to configure after installation.**
The Linux distribution space is heavily fragmented, to the point that a new fork is created almost daily. Very few of them are unique and bring something different to the table. Most of them are just the same Ubuntu or Debian base with a different theme or wrapper.
The Linux distro landscape is so dynamic that it changes every month. Some Linux distributions become more stable with ever-changing packages and components, while others decline in quality. Hence, it's challenging to pick the best Linux distribution for your school, work or just casual browsing, watching movies, etc. Not to mention, many Linux distributions are discontinued every year due to a lack of contributions, cost overruns and other reasons.
That said, we compiled the below 10 best Linux distributions in 2022, which are perfect for any user or use case. That includes casual dual-boot users with Windows 10 or 11, students, teachers, developers, creators, etc. Take a look.
### Best Linux Distributions of 2022
#### 1. Fedora KDE
The first Linux distribution, which I think is the best on this list, is Fedora Linux KDE Edition. The primary reason is that Fedora Linux is very stable with the latest tech, and KDE Plasma is super fast and perfect for all users. Moreover, this Fedora and KDE Plasma combination doesn't require further modification or tweaks after installation. Over the last couple of releases, and after the latest [Fedora 36][1] release feedback, Fedora Linux with KDE Plasma has become the go-to distribution for every possible use case and workflow.
In addition, KDE Plasma also brings KDE Applications and goodies, eliminating additional software you need. And with the help of Fedora repo and [RPM Fusion][2], you can blindly trust Fedora Linux with KDE Edition for your daily driver.
[![Fedora KDE Edition - Best Linux Distributions of 2022][3]][4]
On a side note, you can also consider the [Fedora Linux workstation][5] edition with GNOME if you prefer a GNOME-styled desktop. In addition, you might consider other [Fedora Spins][6] or [Fedora Labs][7] if you like a different desktop flavour.
You can download Fedora KDE Edition here. Other downloads with [torrent details][8] are present here.
[Download Fedora][9]
#### 2. KDE Neon
The second distribution we would like to feature in this list is KDE Neon. KDE Neon is based on the Ubuntu LTS release, but the KDE Frameworks and KDE Applications, along with the KDE Plasma desktop, are the latest from the team. The primary reason for featuring it is that it is perfect for you if you want an Ubuntu LTS based distribution but also want the latest KDE applications. In fact, you can use it as your daily driver for years to come, provided you keep your system up to date.
In contrast, the Kubuntu LTS releases are also perfect. But they may not have the latest KDE Framework or applications.
[![KDE Neon - Best Linux Distributions of 2022][10]][11]
You can download the KDE Neon at the below link. Make sure to choose the user edition while downloading.
[Download KDE Neon][12]
#### 3. Ubuntu LTS Releases with GNOME
The Ubuntu LTS releases (with the default GNOME desktop) make up the most used Linux distribution today. It's the most popular and most downloaded, used by individual users, enterprises and for several real-world needs.
There is no doubt about the Ubuntu LTS versions power and stability. It has been time-tested. With the vast community support, Ubuntu LTS versions with customised GNOME might be the perfect fit for your needs.
Most third-party applications and games primarily target Ubuntu, and you get a much bigger support base than with all the other distributions in this list. But the recent trend of decisions from Canonical (Ubuntu's creator), such as forcing users to adopt Snap and other stuff, may raise a concern for you if you are an advanced user.
But for casual users who want to browse the internet, watch movies, listen to music and do personal work, you can blindly trust Ubuntu LTS versions as your best Linux distribution.
[![Ubuntu LTS with GNOME][13]][14]
Finally, you can download Ubuntu 22.04 LTS (the current one) using the below link.
[Download Ubuntu][15]
#### 4. Linux Mint Cinnamon
One of the Linux distributions that "just works" out of the box on "any" type of hardware, [Linux Mint][16] is fourth on this list. The above distributions (Fedora, Ubuntu LTS) may not work well on older hardware (PC or laptop) with low memory and older CPUs. But Linux Mint is perfect in those use cases, with its unique ability to make everyone welcome.
Furthermore, with Linux Mint, you do not need to install any additional applications after a fresh install. It comes with every possible driver and utility for all use cases. For example, your printer, webcam, and Bluetooth would work in Linux Mint.
In addition, if you are new to Linux or Windows users who plan to migrate, then it is a perfect distribution to start. Its legacy menu-driven Cinnamon desktop is one of the best open-source desktops today.
[![Linux Mint Cinnamon Edition][17]][18]
If you ever get confused or have no time to choose which distribution is best for you, choose the Linux Mint Cinnamon edition. With that said, you can download Linux Mint using the below link.
[Download Linux Mint][19]
#### 5. Pop OS
Pop OS is developed by the American computer manufacturer System76 for their hardware lineup, but it is one of the most famous emerging Linux distributions based on Ubuntu. Pop OS is primarily known to be a perfect fit for modern hardware (including NVIDIA graphics) and brings some unique features absent in traditional Ubuntu with the GNOME desktop.
For example, you get a well-designed COSMIC desktop with Pop OS (which is currently being written with Rust), a built-in tiling feature, well-optimized power controls, and a stunning Pop Shop. The Pop Shop is a software store designed by its maker to give you a well-categorized set of applications for your study, learning, development, gaming, etc. This distribution is also perfect for gaming if you plan to start your Linux journey with gaming in mind.
In addition, if you want a professional-grade Linux distribution with official help and support, you should check out actual System76 hardware with Pop OS.
[![Pop OS - Best Linux Distributions of 2022][20]][21]
However, you can download the Pop OS for various hardware for free using the link below.
[Download Pop OS][22]
#### 6. MX Linux
MX Linux is a well-designed Linux distribution primarily targeted at older hardware, with productivity and stability in mind. It's an emerging Linux distribution that is free from systemd, using SysVinit by default instead. Based on the Debian Stable branch, it brings the Xfce desktop, the KDE Plasma desktop and Fluxbox, along with its own powerful MX utilities.
You can use MX Linux for all of your needs. But I would not recommend it for gaming or development work. If you need a stable Linux distribution for your older hardware, free from systemd, you can choose MX Linux. Especially the Fluxbox edition.
[![MX Linux][23]][24]
You can download MX Linux from its official website below.
[Download MX Linux][25]
#### 7. Endeavour OS
If you like the concept of a "rolling release", which gives you all the latest packages and operating system components, Arch Linux is perhaps the best you can have. However, installing Arch Linux might be tricky for new users, although the recent [archinstall][26] does a pretty good job.
However, EndeavourOS is a perfect Arch Linux-based distribution which features Xfce, KDE Plasma and other popular desktops out of the box. Armed with the Calamares installer, it is super easy to install Endeavour OS.
However, this might not be the best Linux distribution for beginners. But it is the best one for slightly more advanced users who are already familiar with Linux.
On the brighter side, you get to say, “btw, I use Arch”.
[![EndeavourOS - Best Linux Distributions of 2022][27]][28]
Last but not least, EndeavourOS has excellent community support, and its Telegram channel support is the best in my personal experience. So, if you ever get stuck, help is just a message away.
Download this excellent and emerging Linux distribution using the link below.
[Download Endeavour OS][29]
#### 8. Zorin OS
Zorin OS is a Linux distribution based on Ubuntu Linux and is best for those who want nice looks, power, stability, and a productive system. In this Linux distribution, the default desktop is a blend of Xfce and GNOME 3, heavily customised. One of the advantages of Zorin is it comes with ready-made themes. With those themes, you can make Zorin OS look like Windows and macOS with just one click.
This helps the new users easily migrate to Linux and use Zorin for their day-to-day work.
[![Zorin OS - Best Linux Distributions of 2022][30]][31]
Additionally, Zorin OS maintains three editions, Pro, Lite and Core, which cater to different user bases. The Pro edition is a paid version with additional themes and tweaks for a minimal fee.
You can download Zorin OS from the below link.
[Download Zorin OS][32]
#### 9. Debian with Xfce
There are many Linux distributions which are based on Debian, but I have included vanilla Debian in this list because of its excellent stability and power. Debian, termed a "Universal Operating System", is perfect for moderately experienced Linux users. And if you set up a daily driver with Debian Stable and Xfce, you can run it for years without reformatting or reinstalling, or fear of breaking your system.
Debian package repo contains all possible packages, which gives you the ultimate flexibility to set up any custom system you want.
![Debian with Xfce Desktop][33]
A perfect Linux distribution if you know how to set up a Debian box with some experience. You can download and install Debian after choosing the proper installer for your system here. Debian comes with an installer for several architectures. You may [read our guide][34] if you are confused about which one to choose and how to install it.
[Download Debian][35]
#### 10. Ubuntu Studio
The final best Linux distribution we feature in this list is Ubuntu Studio. Ubuntu Studio is an official Ubuntu Linux distribution specially curated for multimedia production work.
Ubuntu Studio comes with the low-latency mainline Linux Kernel to give additional advantages to multiple operations. In addition, Ubuntu Studio brings its native “Ubuntu Studio Controls”, which provides creators with several options to tweak CPU settings for heavy CPU-intensive rendering and processing.
[![Ubuntu Studio 22.04 LTS Desktop][36]][37]
Moreover, a massive list of free and open-source audio, graphics, and video applications is pre-loaded into the ISO, saving time if you plan to build a multimedia workstation.
Ubuntu Studio is powered by the KDE Plasma desktop, the perfect Linux distribution for all creators worldwide.
You can download Ubuntu Studio from the below link.
[Download Ubuntu Studio][38]
### Closing Notes
I hope this list of "curated and best Linux distributions" helps you pick one for yourself, your friends and co-workers. These picks are based on each project's current status (actively developed), its prospects (i.e. a well-defined vision for the future), and how easy it is to set up, along with the out-of-the-box experience.
Finally, which Linux distribution do you think should be in the top 10? Let me know in the comment box below.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/best-linux-distributions-2022/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/2022/02/fedora-36/
[2]: https://www.debugpoint.com/2020/07/enable-rpm-fusion-fedora-rhel-centos/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/05/Fedora-KDE-Edition-1024x640.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2022/05/Fedora-KDE-Edition.jpg
[5]: https://getfedora.org/en/workstation/download/
[6]: https://spins.fedoraproject.org/
[7]: https://labs.fedoraproject.org/
[8]: https://torrent.fedoraproject.org/
[9]: https://spins.fedoraproject.org/kde/download/index.html
[10]: https://www.debugpoint.com/wp-content/uploads/2022/05/KDE-Neon-1024x578.jpg
[11]: https://www.debugpoint.com/wp-content/uploads/2022/05/KDE-Neon.jpg
[12]: https://neon.kde.org/download
[13]: https://www.debugpoint.com/wp-content/uploads/2022/05/Ubuntu-LTS-with-GNOME-1024x575.jpg
[14]: https://www.debugpoint.com/wp-content/uploads/2022/05/Ubuntu-LTS-with-GNOME.jpg
[15]: https://ubuntu.com/download/desktop
[16]: https://www.debugpoint.com/linux-mint/
[17]: https://www.debugpoint.com/wp-content/uploads/2022/05/Linux-Mint-Cinnamon-Edition-1024x576.jpg
[18]: https://www.debugpoint.com/wp-content/uploads/2022/05/Linux-Mint-Cinnamon-Edition.jpg
[19]: https://linuxmint.com/download.php
[20]: https://www.debugpoint.com/wp-content/uploads/2022/05/Pop-OS-1024x577.jpg
[21]: https://www.debugpoint.com/wp-content/uploads/2022/05/Pop-OS.jpg
[22]: https://pop.system76.com/
[23]: https://www.debugpoint.com/wp-content/uploads/2022/05/MX-Linux-1024x522.jpg
[24]: https://www.debugpoint.com/wp-content/uploads/2022/05/MX-Linux.jpg
[25]: https://mxlinux.org/download-links/
[26]: https://www.debugpoint.com/2022/01/archinstall-guide/
[27]: https://www.debugpoint.com/wp-content/uploads/2022/05/EndeavourOS-1024x574.jpg
[28]: https://www.debugpoint.com/wp-content/uploads/2022/05/EndeavourOS.jpg
[29]: https://endeavouros.com/download/
[30]: https://www.debugpoint.com/wp-content/uploads/2022/05/Zorin-OS-1024x575.jpg
[31]: https://www.debugpoint.com/wp-content/uploads/2022/05/Zorin-OS.jpg
[32]: https://zorin.com/os/download/
[33]: https://www.debugpoint.com/wp-content/uploads/2022/05/Debian-with-Xfce-Desktop.jpg
[34]: https://www.debugpoint.com/2021/01/install-debian-buster/
[35]: https://www.debian.org/distrib/
[36]: https://www.debugpoint.com/wp-content/uploads/2022/04/Ubuntu-Studio-22.04-LTS-Desktop-1024x631.jpg
[37]: https://www.debugpoint.com/wp-content/uploads/2022/04/Ubuntu-Studio-22.04-LTS-Desktop.jpg
[38]: https://ubuntustudio.org/download/

View File

@ -0,0 +1,115 @@
[#]: subject: "How to Update or Upgrade Ubuntu Offline without Internet"
[#]: via: "https://www.debugpoint.com/how-to-update-or-upgrade-ubuntu-offline-without-internet/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Update or Upgrade Ubuntu Offline without Internet
======
**This guide explains the steps to update Ubuntu offline without an active internet connection.**
There are many situations where you may need to update your Ubuntu installation without an internet connection. You may be working in a remote location, or have a set of networked Ubuntu systems that are not connected to the internet. In any case, keeping your system updated with the latest packages is always required.
Of course, updating any system while connected to the internet is always recommended.
But sometimes it is not possible for security reasons as well: connecting to the internet may require additional hardening steps for your systems to protect them from hackers and malware.
The following method using [apt-offline][1] helps to fix those use cases and outlines the steps to update your Ubuntu offline without the internet.
### Pre-requisite
- You need to have access to an Ubuntu system that has an internet connection (e.g. your friend's, a cafe, or a lab system)
- A USB pen drive to hold the packages
- Install the apt-offline package on both systems: a) the offline system and b) the system with an internet connection.
### Install apt-offline
You can install `apt-offline` using the following command. Remember, you have to get it installed on both systems.
```
sudo apt install apt-offline
```
In case you need the `apt-offline` to be installed in the target system, you can download the deb package from the below link and copy it to the target system via a USB stick. Then run the below command to install.
The download link for Ubuntu 22.04 LTS and other versions is present below. You can choose a mirror and download the deb file.
[download .deb files apt-offline][2]
```
sudo dpkg -i name_of_package.deb
```
### Update Ubuntu offline: Steps
Open a terminal in the offline Ubuntu system and create a signature file using the following command in your home directory.
```
sudo apt-offline set ~/offline-data.sig
```
[![Create the sig file][3]][4]
This creates a file containing the required package paths and details for download.
[![sig file contents][5]][6]
Copy this .sig file to a USB and take it to a Ubuntu system with internet access.
Create a directory (see example below) to hold the downloaded packages in the Ubuntu system with an internet connection.
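For example, you could create the directory like this; the directory name is only an example and should match whatever path you use in the next command:

```
mkdir -p ~/offline-data-dir
```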
Open a terminal and run the following command to download the required packages. Remember to change the download directory and .sig file path as per your system.
```
apt-offline get -d ~/offline-data-dir offline-data.sig
```
[![Download the packages to install offline][7]][8]
You should see that the files are downloaded properly. Now copy the entire downloaded directory to the USB drive and plug it into the offline Ubuntu system.
Then run the following command to install the downloaded packages to the offline system. Change the directory path as per your system.
```
sudo apt-offline install offline-data-dir/
```
[![Installing packages - offline update ubuntu][9]][10]
The update should run smoothly if all goes well, and you should have an updated Ubuntu system.
You must repeat the steps above to keep your Ubuntu system up-to-date offline whenever you need to update.
I hope this guide helps you to update your Ubuntu system in an offline mode. If you face any trouble, let me know in the comment box below.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/how-to-update-or-upgrade-ubuntu-offline-without-internet/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://github.com/rickysarraf/apt-offline
[2]: https://packages.ubuntu.com/focal/all/apt-offline/download
[3]: https://www.debugpoint.com/wp-content/uploads/2021/03/Create-the-sig-file-1024x204.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2021/03/Create-the-sig-file.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2021/03/sig-file-contents-1024x250.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2021/03/sig-file-contents.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2021/03/Download-the-packages-to-install-offline-1024x437.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2021/03/Download-the-packages-to-install-offline.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2021/03/Installing-packages-offline-update-ubuntu-1024x509.jpg
[10]: https://www.debugpoint.com/wp-content/uploads/2021/03/Installing-packages-offline-update-ubuntu.jpg

View File

@ -0,0 +1,122 @@
[#]: subject: "Why you should consider Rexx for scripting"
[#]: via: "https://opensource.com/article/22/10/rexx-scripting-language"
[#]: author: "Howard Fosdick https://opensource.com/users/howtech"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Why you should consider Rexx for scripting
======
How do you design a programming language to be powerful yet still easy to use? Rexx offers one example. This article describes how Rexx reconciles these two seemingly contradictory goals.
### History of Rexx programming language
Several decades ago, computers were shifting from batch to interactive processing. Developers required a scripting or "glue" language to tie systems together. The tool needed to do everything from supporting application development to issuing operating system commands to functioning as a macro language.
Mike Cowlishaw, IBM Fellow, created a solution in a language he named Rexx. It is widely considered the first general-purpose scripting language.
Rexx was so easy to use and powerful that it quickly permeated all of IBM's software. Today, Rexx is the bundled scripting language on all of IBM's commercial operating systems (z/OS, z/VM, z/VSE, and IBM i). It's no surprise that in the 1990s, IBM bundled Rexx with PC-DOS and then OS/2. Rexx popped up in Windows in the XP Resource Kit (before Microsoft decided to lock in customers with its proprietary scripting languages, VBScript and PowerShell). Rexx also emerged as the scripting language for the popular Amiga PC.
### Open source Rexx
With Rexx spreading across platforms, standardization was needed. The American National Standards Institute (ANSI) stepped forward in 1996.
That opened the floodgates. Open source Rexx interpreters started appearing. Today, more than a half dozen interpreters run on every imaginable platform and operating system, along with many open source tools.
Two Rexx variants deserve mention. _Open Object Rexx_ is a compatible superset of procedural or "classic" Rexx. _ooRexx_ is message-based and provides all the classes, objects, and methods one could hope for. For example, it supports multiple inheritance and mixin classes.
Paralleling the rise in Java's popularity, Mike Cowlishaw invented _NetRexx_. NetRexx is a Rexx variant that fully integrates with everything Java (including its object model) and runs on the Java virtual machine.
ooRexx went open source in 2004; NetRexx in 2011. Today the [Rexx Language Association][1] enhances and supports both products. The RexxLA also supports _Regina_, the most popular classic Rexx interpreter, and _BSF4ooRexx_, a tool that fully integrates ooRexx with Java. Everything Rexx is open source.
### Layered design
So, back to the initial conundrum. How does a programming language combine power with ease of use?
One part of the solution is a _layered architecture_. Operators and a minimal set of instructions form the core of the classic Rexx language:
![Rexx layered design][2]
Image by:
(Howard Fosdick, CC BY-SA 4.0)
Surrounding the core are the language's 70-odd built-in functions:
- Arithmetic
- Comparison
- Conversion
- Formatting
- String manipulation
- Miscellaneous
Additional power is added in the form of _external function libraries_. You can invoke external functions from within Rexx programs as if they were built in. Simply make them accessible by proper reference at the top of your script.
Function libraries are available for everything: GUIs, databases, web services, OS services, system commands, graphics, access methods, advanced math, display control, and more. The result is a highly-capable open source ecosystem.
Finally, recall that Open Object Rexx is a superset of classic Rexx. So you could use procedural Rexx and then transition your skills and code to object programming by moving to ooRexx. In a sense, ooRexx is yet another Rexx extension, this time into object-oriented programming.
### Rexx is human-oriented language
Rexx glues all its instructions, functions, and external libraries together in a consistent, dead-simple syntax. It doesn't rely on special characters, arcane syntax, or reserved words. It's case-insensitive and free-form.
This approach shifts the burden of programming from programmer to machine to the greatest degree possible. The result is a comparatively easy language to learn, code, remember, and maintain. Rexx is intended as a human-oriented language.
Rexx implements the _principle of least astonishment_, the idea that systems should work in ways that people assume or expect. For example, Rexx's default decimal arithmetic—with precision you control—means you aren't surprised by rounding errors.
Another example: All variables contain strings. If the strings represent valid numbers, one can perform arithmetic operations with them. This simple concept of dynamic typing makes all data visible and simplifies tracing and debugging.
Rexx capitalizes on the advantages of interpreters to simplify program development. Tracing facilities allow developers to direct and witness program execution in various ways. For example, one can single-step through code, inspect variable values, change them during execution, and more.
Rexx also raises common error conditions that the programmer can easily trap. This feature makes for more standardized, reliable code.
### Arrays
Rexx's approach to arrays (or tables) is a good example of how it combines simplicity with power.
Like all Rexx variables, arrays don't have to be declared in advance. They automatically expand to the size of available memory. This feature relieves programmers of the burden of memory management.
To form an array, a so-called _compound variable_ stitches together a _stem variable_ with one or more _subscripts_, as in these examples:
```
my_array.1
my_table.i.j
my_list.index_value
my_list.string_value
my_tree.branch_one
my_tree.branch_one.branch_two
```
Subscripts can represent numeric values, as you may be accustomed to in standard table processing.
Alternatively, they can contain strings. String subscripts allow you to build _associative arrays_ using the same simple syntax as common tables. Some refer to associative arrays as _key-value pairs_ or _content addressable memory_. Allowing array contents to be accessed by arbitrary strings rather than simply numeric values opens up an entirely new world of algorithmic solutions.
With this flexible but consistent syntax, you can build almost any data structure: Lists, two- or three- or n-dimensional tables, key-value pairs, balanced trees, unbalanced trees, dense tables, sparse tables, records, rows, and more.
The beauty is in simplicity. It's all based on the notion of compound variables.
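To make the idea concrete, here is a small sketch in classic Rexx (my own illustration, not taken from the article) that uses word strings as subscripts to count word frequencies, the key-value style of lookup described above:

```
/* Illustrative sketch: an associative array built from a compound variable */
count. = 0                          /* default every compound tail to 0 */
text = 'the quick brown fox jumps over the lazy dog'
do i = 1 to words(text)
   w = word(text, i)                /* the word string itself becomes the subscript */
   count.w = count.w + 1
end
key = 'the'
say 'Occurrences of "the":' count.key   /* prints: Occurrences of "the": 2 */
```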
### Wrap up
In the future, I'll walk through some Rexx program examples. One real-world example will show how a short script using associative arrays reduced the runtime of a legacy program from several hours down to less than a minute.
You can join the Rexx Language Association for free. For free Rexx downloads, tools, tutorials, and more, visit [RexxInfo.org][3].
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/rexx-scripting-language
作者:[Howard Fosdick][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/howtech
[b]: https://github.com/lkxed
[1]: http://www.RexxLA.org
[2]: https://opensource.com/sites/default/files/2022-10/rexx_layered_design.jpg
[3]: http://www.RexxInfo.org

View File

@ -0,0 +1,216 @@
[#]: subject: "Setup Docker And Docker Compose With DockSTARTer"
[#]: via: "https://ostechnix.com/setup-docker-and-docker-compose-with-dockstarter/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Setup Docker And Docker Compose With DockSTARTer
======
This guide explains **what DockSTARTer is**, how to **install DockSTARTer in Linux** and how to **set up Docker and Docker Compose using DockSTARTer** to run containerized applications in Linux.
### What is DockSTARTer?
**DockSTARTer** is a TUI-based utility to easily install Docker and Docker Compose on Linux and Unix systems. The main goal of DockSTARTer is to make it quick and easy to get up and running with Docker.
DockSTARTer has both TUI and CLI interfaces. So you can use either of these interfaces to quickly deploy multiple containerized apps in a single docker environment.
Please note that DockSTARTer is not a ready-made set of apps that run out of the box. You still need to choose what to run and how to run it.
It also doesn't configure apps and storage for you. You may need to configure the settings of the apps and the storage manually yourself.
As of writing this, we can run more than 100 Docker apps using DockSTARTer. Some of the popular apps are Adguard, Bitwarden, CloudFlare DDNS, Duplicacy, Emby, File Browser, Glances, Heimdall, InfluxDB, Jellyfin, Kiwix-serve, Lidarr, Minecraft Server, Nextcloud, openLDAP, Speedtest, Pihole, qBittorrent, Rsnapshot, Syncthing, Time Machine, Uptime Kuma, Vsftpd, Wireguard, youtubedl and a lot more.
DockSTARTer is a free and open source shell script. The source code of DockSTARTer is hosted on GitHub.
### Install DockSTARTer in Linux
DockSTARTer can be installed on popular Linux operating systems.
To install DockSTARTer in Arch Linux and its variants such as EndeavourOS, and Manjaro Linux, run the following commands:
```
$ sudo pacman -S curl docker git
```
```
$ bash -c "$(curl -fsSL https://get.dockstarter.com)"
```
```
$ sudo reboot
```
To install DockSTARTer in Debian, Ubuntu, Linux Mint, Pop OS, run:
```
$ sudo apt install curl git
```
```
$ bash -c "$(curl -fsSL https://get.dockstarter.com)"
```
```
$ sudo reboot
```
To install DockSTARTer in Fedora, RHEL, CentOS, AlmaLinux and Rocky Linux, run:
```
$ sudo dnf install curl git
```
```
$ bash -c "$(curl -fsSL https://get.dockstarter.com)"
```
```
$ sudo reboot
```
### Use DockSTARTer to setup Docker and Docker Compose
DockSTARTer allows you to install and configure various apps in Docker.
To run DockSTARTer for the first time, enter the following command:
```
$ ds
```
Choose "Configuration" from the main menu and press ENTER:
And then select "Full setup".
Choose which apps you would like to install. By default, Watchtower app is selected. Use UP and DOWN arrow keys to navigate to app list and press SPACEBAR to select or deselect apps.
Now, DockSTARTer will display the default settings of the selected apps. If you would like to keep these settings for the apps, choose "Yes" and hit ENTER. Or choose "No" and change the settings as you want.
If you like to keep the default settings for VPN, choose "Yes" or choose "No" to change the settings as you please.
Now you will see the global settings for DockSTARTer. Review the global settings such as docker config directory, docker storage directory, docker hostname and time zone etc. If you're OK with the default settings, simply choose "Yes" and hit ENTER. If you want to change these settings, select "No". I want to change the storage directory, hostname and time zone, so I choose "No".
If you chose "No" in the previous wizard, you will be prompted to set docker config directory. There will be 2 choices given. You can either choose to keep the currently selected directory or enter a new one by selecting "Enter New" option. I am going to keep the currently selected directory.
Choose "yes" to set appropriate permissions on the docker configuration directory.
In this step, you need to set a directory for Docker storage. By default, DockSTARTer will create a directory called "storage" in your $HOME directory. If you want to keep the default storage directory, choose "Keep Current". Or choose "Enter New".
Enter the path to your Docker storage directory and hit ENTER. If the directory doesn't exist, DockSTARTer will attempt to create it.
Set the hostname for your Docker system. DockSTARTer recommends system-detected values. Here, I am going to choose the "Use System" option for my Docker hostname.
Set the user's group ID (PGID). If you're unsure, simply go with the **"Use System"** option.
Set your user account ID (PUID). If you're unsure, simply go with the **"Use System"** option.
Set your system time zone. DockSTARTer detects the system value, so just choose the "Use System" option and hit ENTER.
Next, you will be prompted whether you would like to run compose. Choose "Yes" to do so.
This will pull the Docker images that you chose to install in one of the previous steps.
Finally, you will see output something like the below after Docker Compose has installed all the selected apps.
```
[...]
2022-10-18 14:24:30 [WARN ] /home/ostechnix/.docker/compose/.env not found. Copying example template.
2022-10-18 14:24:30 [WARN ] Please verify that ~ is not used in /home/ostechnix/.docker/compose/.env file.
2022-10-18 14:24:30 [NOTICE] Preparing app menu. Please be patient, this can take a while.
2022-10-18 14:36:51 [NOTICE] /home/ostechnix/.docker/compose/.env does not contain any disabled apps.
2022-10-18 14:36:51 [NOTICE] Creating environment variables for enabled apps. Please be patient, this can take a while.
2022-10-18 15:55:29 [NOTICE] Creating environment variables for enabled apps. Please be patient, this can take a while.
2022-10-18 15:55:29 [NOTICE] Adding compose configurations for enabled apps. Please be patient, this can take a while.
[+] Running 4/4
 ⠿ watchtower Pulled                6.1s
 ⠿ 1045b2f97fda Pull complete       1.0s
 ⠿ 35a104a262d3 Pull complete       1.2s
 ⠿ 1a0671483169 Pull complete       3.1s
[+] Running 2/2
 ⠿ Network compose_default Created  0.0s
 ⠿ Container watchtower Started
```
That's it. You can view the list of running Docker containers using command:
```
$ docker ps
```
**Sample output:**
```
CONTAINER ID   IMAGE                            COMMAND         CREATED         STATUS         PORTS      NAMES
9d3c34dc918f   ghcr.io/containrrr/watchtower    "/watchtower"   5 minutes ago   Up 5 minutes   8080/tcp   watchtower
```
### Install new Apps
To install the other apps, just restart DockSTARTer again using the following command:
```
$ ds
```
Select "Configuration" and then "Select Apps".
You will see the list of available apps in the next screen. Just select the app you want to run and follow the on-screen instructions.
### Remove Apps
Removing apps is similar to adding new ones.
First, make sure the container app is stopped.
```
$ sudo docker stop <container-id>
```
Start DockSTARTer, go to **Configuration -> Select Apps**, **uncheck** the apps that you want to remove, and choose OK.
### Update DockSTARTer
To update DockSTARTer, simply start it using **"`ds`"** command from the Terminal and then choose "Update DockSTARTer" option.
You can also do it from the command line by running:
```
$ sudo ds -u
```
### Prune Docker system
To remove all unused containers, networks, volumes, images and build cache, start DockSTARTer and then choose **"Prune Docker System"** option.
You can also prune your Docker system from the command line by running the following command.
```
$ sudo ds -p
```
**Sample output:**
```
Deleted Containers:
9d3c34dc918fafa62d0e35283be4cbee46280a30dcd59b1aaa8b5fff1e4a085d

Deleted Networks:
compose_default

Deleted Images:
untagged: ghcr.io/containrrr/watchtower:latest
untagged: ghcr.io/containrrr/watchtower@sha256:bbf9794a691b59ed2ed3089fec53844f14ada249ee5e372ff0e595b73f4e9ab3
deleted: sha256:333de6ea525af9137e1f14a5c1bfaa2e730adca97ab97f74d738dfa99967f14f
deleted: sha256:f493af3d0a518d307b430e267571c926557c85222217a8707c52d1cf30e3577e
deleted: sha256:62651dc7e144aa8c238c2c2997fc499cd813468fbdc491b478332476f99af159
deleted: sha256:83fe5af458237288fe7143a57f8485b78691032c8c8c30647f8a12b093d29343

Total reclaimed space: 16.92MB
```
### Change variables
You can adjust variables for running Docker containers at any time.
Start DockSTARTer by running **"`ds`"** command and choose "Configuration", and then choose the following settings:
- "Set App Variables" option for adjusting variables for all enabled apps,
- "Set VPN Variables" option for adjusting VPN specific variables,
- "Set Global Variables" option for adjusting global variables.
### Conclusion
DockSTARTer makes the process of running Docker apps much easier! It also has a CLI interface, but its text-based interface lets you quickly deploy Docker containers without memorizing any commands.
**Resource:**
- **[DockSTARTer Website][1]**
--------------------------------------------------------------------------------
via: https://ostechnix.com/setup-docker-and-docker-compose-with-dockstarter/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://dockstarter.com/

View File

@ -0,0 +1,283 @@
[#]: subject: "Give Your Linux Desktop a Halloween Makeover"
[#]: via: "https://itsfoss.com/linux-halloween-makeover/"
[#]: author: "Sreenath https://itsfoss.com/author/sreenath/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Give Your Linux Desktop a Halloween Makeover
======
Halloween is around the corner. Boo!
Of course, there are ways to celebrate Halloween, and I believe you might have a few ideas of your own. How about giving your Linux desktop a spooky, dark makeover? Something like the screenshot below?
![ubuntu halloween theming final looks][1]
Customization is a high point of Linux, and there is no end to it. Earlier, we showed you [how to make your Linux look like macOS][2]. Today, I'll share a few tips to keep up with the Halloween spirit.
This is possible with a combination of themes, icons, extensions, fonts, conky, etc. **_While you can do these things on any distribution and desktop environment, it's not feasible for me to show them all in a single tutorial._**
Here, I have used Ubuntu with the GNOME desktop environment.
### Getting all the tools
You need several packages and tools. Make sure you have them all (or most of them) before you start the customization.
_It's not mandatory to make all of the changes. But the more you do, the better the look and feel you get._
**GNOME Tweaks and GNOME Extensions Manager**
Get the Tweaks tool and the extension manager with this command:
```
sudo apt install gnome-tweaks gnome-extension-manager
```
In KDE-based systems, you don't need any tweak tool to change the look. But you will surely need the **Kvantum-Manager** app that I discussed in the [KDE theming][3] guide.
**Conky**
This is actually optional. Since the conky-manager project is no longer maintained, it is a bit tricky to use conky. But anyway, let's use it for the additional look and feel.
```
sudo apt install conky-all
```
**Neofetch or shell color scripts**
This step is also a personal choice. You can choose [neofetch][4] because it's already available in the repository and can be used easily.
```
sudo apt install neofetch
```
[Shell-color scripts][5] are another excellent choice. The package is available in AUR and Arch Linux users can install it from there. In Ubuntu, you need to install it manually.
```
git clone https://gitlab.com/dwt1/shell-color-scripts.git
cd shell-color-scripts
sudo make install
```
**Themes, icons, fonts, and wallpaper**
I am using [Sweet][6] theme, [Beautiline][7] icon pack, [simple1e][8] cursors, and [Grey-Minimalistic][9] conky theme. Once downloaded, extract them. You should also get [Creepster][10] font.
Download a [spooky wallpaper][11] from the internet.
Alert! You'll be doing a lot of customization and changes. You can go back to the usual look by reverting all the changes you made. An easier way out would be to create a new user with admin access and make all these changes with that new user. This way, your original user account and its appearance don't get impacted. When Halloween is over, you can delete this additional user.
With all the resources in hand, it's time to use them.
### Install and use the extensions
Open the gnome-extensions app. In Ubuntu 22.04, you can install extensions from within the app, by using the browse section.
![install gnome shell extensions user themes blur my shell and dash to dock][12]
In other versions of Ubuntu and other GNOME distributions, you can [install shell extensions][13] through the browser. For our purpose, install the following extensions:
- [User Themes][14]
- [Dash to Dock][15]
- [Blur my Shell][16]
Also, make sure that all the extensions are enabled.
### Apply theme, icon, and font
You need to copy the extracted theme folder to the `~/.themes` directory and the icon and cursor folders to the `~/.icons` directory.
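If you prefer the terminal, copying them over looks roughly like this (a sketch; the extracted folder names used below are assumptions and depend on the archives you downloaded):
```
mkdir -p ~/.themes ~/.icons
cp -r ~/Downloads/Sweet ~/.themes/
cp -r ~/Downloads/beautiline ~/.icons/
cp -r ~/Downloads/simple1e ~/.icons/
```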
Now open GNOME tweaks and apply the settings as shown in the screenshot below.
![set themes with gnome tweaks][17]
To use a [custom font in Ubuntu][18], right-click on the font file that you have downloaded and extracted and select open with Font manager. I am using [Creepster][10] font.
![right click on font file and select open with fonts][19]
Here, press the install button.
![install font using font manager application][20]
Note: On some systems, pressing the install button won't show the "installed" prompt. In that case, you can just close the app; once you press the install button, the font has been installed.
Now open the Tweaks app and move to the fonts section. Here, you can change the fonts of various sections as shown in the screenshot below.
![change system fonts using gnome tweaks][21]
Note that a monospace font is required for terminals. Here, I am using a regular font, so it may sometimes look slightly distorted.
### Apply Dash to Dock Extension settings
First, you need to **turn off the Ubuntu Dock extension** using the GNOME Extensions application.
![Disable Ubuntu Dock][22]
Run the Dash to Dock extension if its not running already.
Now, right-click on the dash to dock application button appearing on the bottom and select dash to dock settings.
![select dash to dock settings][23]
Here, you need to tweak some small things.
First, reduce the icon size using the respective slider.
![setting dash to dock icon size][24]
After that, you need to reduce the opacity of the dock. I prefer a fully transparent dock.
For this, set the opacity to **fixed** and reduce it to zero with the slider, as shown in the screenshot below.
![opacity setting for dash to dock][25]
### GNOME terminal setting
The main tweak you want to get is a custom neofetch look (or a shell color script) with some blurred transparency.
On applying monospace font in GNOME-tweaks earlier, the font in the GNOME terminal is also changed.
First, create a new profile from **preferences**.
![select preferences from hamburger menu][26]
Here, click the + sign to create a new profile. Type in a name and press **Create** as shown below:
![create new profile in gnome terminal][27]
Inside the new profile, change the transparency setting and set it around the middle, as shown in the screenshot:
![set transperancy to gnome terminal][28]
Once finished, set this profile as the default. To do this, click on the triangle button associated with the new profile and select **Set as Default**.
![set new profile as default in gnome terminal][29]
#### Setting blur effect
The above step will only create a transparent shell. But if you need a blur effect, which is good for better visibility, you need to go to the Blur my Shell extension settings.
![blur my shell extension settings][30]
Here, go to the **Application** tab. Now, ensure that the terminal is open and placed conveniently on the desktop. Click on the **Add Window** button and select the gnome-terminal window to set the blur effect. Note: This feature is in beta, so expect minor glitches.
![applying blur effect to selected windows][31]
This same procedure can be repeated for other apps also, like the Nautilus file manager.
#### Customizing Neofetch
One of the best features of neofetch is its customizability. You can tweak the look with a wide range of methods. For Halloween, I choose a pumpkin image to appear in place of the distro logo.
Neofetch supports adding custom images in a variety of formats. For that purpose, there are a variety of backends supported. Here, I use the jp2a backend, which will use an [ASCII converted image][32].
```
neofetch --jp2a /path/to/your/image/file.png
```
![neofetch with custom backend][33]
The above command will create a neofetch instance with the custom image. You can add it to your .bashrc file to make it permanent (see the sketch below).
_**Unfortunately, this didnt work on my Wayland instance.**_
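For example, appending the command to the end of `~/.bashrc` could look like this (a sketch; the image path is an assumption and should point to your own picture):
```
echo 'neofetch --jp2a ~/Pictures/pumpkin.png' >> ~/.bashrc
```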
#### Customizing Shell Color Scripts
If you installed shell color scripts, you have a variety of shell scripts. To list the available scripts, use:
```
colorscript -l
```
![ghosts shell color script][34]
You can either get a random script each time by placing `colorscript random` in your .bashrc file, or run a particular script by placing `colorscript -e <name>` there instead.
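For instance, to print the ghosts script shown in the screenshot above on every new terminal, the `.bashrc` entry could look like this (a sketch; the script name may differ on your install):
```
echo 'colorscript -e ghosts' >> ~/.bashrc
```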
### Setting up Conky
I am using the [Grey-Minimalistic conky theme][9] from Deviantart. Each type of conky theme has a different installation method. So if you are using another conky file, follow its setup method, described in its README files.
Extract the conky theme file. Inside, we have several folders. First, you need to install the associated icons and fonts. That is, install the given font using font-manager. Copy and paste the icon folder to your ~/.icons folder.
![copy and paste conky files to home directory][35]
Now, go to the conky folder. Make sure that you have [enabled viewing hidden files][36]. Then copy the `.conkyrc` file and the `.conky-vision-icons` file to your Home directory, as shown above.
Now start conky to get a look like this.
![conky theme applied][37]
Add the conky to the [list of startup applications][38] so that it starts automatically at each boot.
![add conky to the list of startup applications][39]
### Change wallpaper
You are almost there. The only thing left to do now is to [change the background wallpaper][40]. I believe you have already downloaded the spooky wallpapers.
![set image as wallpaper from nautilus][41]
### Behold the final look!
If you followed most of the steps above, you should get a desktop that looks like the one in the below screenshots.
![ubuntu halloween theme final look][42]
Is it scary enough for Halloween? What do you think? Let me know in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-halloween-makeover/
作者:[Sreenath][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sreenath/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/10/ubuntu-halloween-theming-final-looks.jpg
[2]: https://itsfoss.com/make-ubuntu-look-like-macos/
[3]: https://itsfoss.com/properly-theme-kde-plasma/
[4]: https://itsfoss.com/using-neofetch/
[5]: https://gitlab.com/dwt1/shell-color-scripts
[6]: https://www.gnome-look.org/p/1253385
[7]: https://www.gnome-look.org/p/1425426
[8]: https://www.gnome-look.org/p/1405210
[9]: https://www.deviantart.com/bryantlloyd/art/Grey-Minimalistic-634726564
[10]: https://fonts.google.com/specimen/Creepster?query=creepster
[11]: https://www.wallpaperflare.com/search?wallpaper=spooky
[12]: https://itsfoss.com/wp-content/uploads/2022/10/install-gnome-shell-extensions-user-themes-blur-my-shell-and-dash-to-dock.png
[13]: https://itsfoss.com/gnome-shell-extensions/
[14]: https://extensions.gnome.org/extension/19/user-themes/
[15]: https://extensions.gnome.org/extension/307/dash-to-dock/
[16]: https://extensions.gnome.org/extension/3193/blur-my-shell/
[17]: https://itsfoss.com/wp-content/uploads/2022/10/set-themes-with-gnome-tweaks.png
[18]: https://itsfoss.com/install-fonts-ubuntu/
[19]: https://itsfoss.com/wp-content/uploads/2022/10/right-click-on-font-file-and-select-open-with-fonts.png
[20]: https://itsfoss.com/wp-content/uploads/2022/10/install-font-using-font-manager-application.png
[21]: https://itsfoss.com/wp-content/uploads/2022/10/change-system-fonts-using-gnome-tweaks.png
[22]: https://itsfoss.com/wp-content/uploads/2020/06/disable-ubuntu-dock.png
[23]: https://itsfoss.com/wp-content/uploads/2022/10/select-dash-to-dock-settings.png
[24]: https://itsfoss.com/wp-content/uploads/2022/10/setting-dash-to-dock-icon-size.png
[25]: https://itsfoss.com/wp-content/uploads/2022/10/opacity-setting-for-dash-to-dock.png
[26]: https://itsfoss.com/wp-content/uploads/2022/10/select-preferences-from-hamburger-menu.png
[27]: https://itsfoss.com/wp-content/uploads/2022/10/create-new-profile-in-gnome-terminal.png
[28]: https://itsfoss.com/wp-content/uploads/2022/10/set-transperancy-to-gnome-terminal.png
[29]: https://itsfoss.com/wp-content/uploads/2022/10/set-new-profile-as-default-in-gnome-terminal.png
[30]: https://itsfoss.com/wp-content/uploads/2022/10/blur-my-shell-extension-settings.png
[31]: https://itsfoss.com/wp-content/uploads/2022/10/applying-blur-effect-to-selected-windows.png
[32]: https://itsfoss.com/ascii-image-converter/
[33]: https://itsfoss.com/wp-content/uploads/2022/10/neofetch-with-custom-backend.png
[34]: https://itsfoss.com/wp-content/uploads/2022/10/ghosts-shell-color-script.png
[35]: https://itsfoss.com/wp-content/uploads/2022/10/copy-and-paste-conky-files-to-home-directory.png
[36]: https://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/
[37]: https://itsfoss.com/wp-content/uploads/2022/10/conky-theme-applied.png
[38]: https://itsfoss.com/manage-startup-applications-ubuntu/
[39]: https://itsfoss.com/wp-content/uploads/2022/10/add-conky-to-the-list-of-startup-applications.png
[40]: https://itsfoss.com/change-wallpaper-ubuntu/
[41]: https://itsfoss.com/wp-content/uploads/2022/10/set-image-as-wallpaper-from-nautilus.png
[42]: https://itsfoss.com/wp-content/uploads/2022/10/ubuntu-halloween-theme-final-look.jpg

View File

@ -0,0 +1,140 @@
[#]: subject: "Fedora 37: Top New Features and Release Wiki"
[#]: via: "https://www.debugpoint.com/fedora-37/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Fedora 37: Top New Features and Release Wiki
======
**An article about Fedora 37 and its new features, release details and everything you need to know.**
Fedora 37 development is wrapped up, and the BETA is [now out][1]. Hence the features and packages are final at this stage.
In this usual feature guide page, I have summarised the essential features you should know about Fedora 37 and get an idea of what to expect. But before that, heres a tentative schedule.
- The beta was out on September 13, 2022.
- **Final Fedora 37 is planned for release on October 25, 2022.**
![Fedora 37 Workstation with GNOME 43][2]
### Fedora 37: Top New Features
#### Kernel
**First** up are the critical items that make the core. Fedora 37 is powered by **Linux Kernel 5.19,** the latest mainline Kernel available now. Linux Kernel 5.19 brings essential features such as a fix for the Retbleed vulnerability, ARM support, Apple M1 NVMe SSD controller support and many other such features, which you can read about in our [Kernel feature guide][3].
The advantage of using the latest Kernel is that you can be assured that you are using the latest and greatest hardware support available at this moment in time.
**Next** up, the desktop environments are updated in this release.
#### Desktop Environment
Fedora 37 is the first distribution to bring the stunning **GNOME 43** desktop, which introduces some excellent features such as:
- [Revamped quick settings][4] with pill-buttons
- Files (nautilus) 43 with GTK4 and libadwaita port
- Files with rubberband, emblems, responsive sidebar-like features
- [Updated GNOME Web with WebExtension API support][5]
And many features you have been waiting for for years. Do check out my [GNOME 43 feature guide][6] to learn more.
Fedora 37 brings **KDE Plasma 5.26** desktop environment with tons of new features, performance improvements and bug fixes. The most noteworthy features of the KDE Plasma desktop include:
- An updated overview screen.
- Dynamic wallpaper for dark and light themes.
- Animated wallpaper support
- Multi-button mouse support
- Updated KDE Framework and applications.
…and much more which you can read in detail in my [KDE Plasma 5.26 feature guide][7].
Since the lightweight desktop LXQt gets a stable update, 1.1.0, it arrives in Fedora 37. **LXQt 1.1.0** brings a default colour palette for dark themes for a uniform look, two variants (simple and compact) of the application menu and re-arranged GTK settings. Furthermore, LXQt 1.1.0 also starts the initial work for the Qt 6.0 porting of desktop components. All these bug fixes and enhancements arrive in the Fedora LXQt edition.
In addition, other primary desktop flavours remain at their current releases since no significant new updates arrive, i.e. **Xfce 4.16 and MATE 1.26** for the respective Fedora flavours.
Let's see the system-wide changes in this release that impact all the Fedora flavours.
#### System wide changes
The most significant change is the official support for **Raspberry Pi 4** boards. Thanks to the work over the years, you can now enjoy Fedora 37 on your favourite Pi boards with out-of-the-box support.
Fedora Linux is always a pioneer in advancing technology and adopting the latest features before any other distro. With that in mind, the **SDDM display manager now defaults to Wayland** in KDE Plasma (and Kinoite) and other flavours. This completes the Wayland transition for this flavour from the Fedora side.
As I [reported earlier][8], Fedora Linux 37 plans to provide us with a preview image of a **Web-based installer** for Anaconda. It might not be available immediately following the release. But it should be within a few days post-release.
Other noteworthy features include changing the **default hostname from “fedora” to “localhost”** to mitigate some third-party system configuration detection. 
Other than that, **Fedora Core OS** is made an official Fedora edition and now stands together with the Server, IoT and Cloud editions for better discovery and adoption. Fedora Core OS is a minimal-footprint OS primarily used for container workloads, and it brings auto updates and additional features.
Following the tradition, this release also features a [brand new wallpaper][9] with both night and day versions. I must say it looks awesome (see the above desktop image).
Finally, also in this release, Fedora **drops 32-bit Java** packages, including JDK 8, 11, and 17, since usage is low. In addition, the openssl1.1 package is also deprecated.
The toolchain, apps and programming stack are updated as follows:
- Glibc 2.36 and Binutils 2.38
- Node.js 18.x
- Perl 5.36
- Python 3.11
### Summary of features in Fedora 37
So, thats about it with the features of this release. Heres a summary of the Fedora 37 features:
- Linux Kernel 5.19
- GNOME 43
- KDE Plasma 5.26
- Xfce 4.16
- MATE 1.26
- LXQt 1.1.0
- A preview image of the new web-based installer
- The SDDM display manager defaults to Wayland (in KDE Plasma and others)
- Official Raspberry Pi 4 support
- Fedora Core OS becomes the official flavour
- Key packages dropping 32-bit support
- And associated toolchain and programming language updates.
If you have spare time, you can [give it a spin][10] or take a test drive. Just be cautious that it is still BETA.
Also, if you are daring enough, you can upgrade to this release with **caution** because, y'know, it's BETA. The commands below will help you to do that.
```
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --refresh --releasever=37
```
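Note that the download step above only fetches the upgrade packages. With the dnf system-upgrade plugin, the upgrade itself is then triggered through a reboot step:
```
sudo dnf system-upgrade reboot
```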
For Kinoite, Silverblue and other immutable versions, use:
```
rpm-ostree rebase fedora:fedora/37/x86_64/silverblue
```
**So, whats your favourite feature of this release? Let me know in the comment section.**
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/fedora-37/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://debugpointnews.com/fedora-37-beta/
[2]: https://www.debugpoint.com/wp-content/uploads/2022/08/Fedora-37-Workstation-with-GNOME-43-1024x572.jpg
[3]: https://www.debugpoint.com/linux-kernel-5-19/
[4]: https://www.debugpoint.com/gnome-43-quick-settings/
[5]: https://www.debugpoint.com/gnome-web-43-tab-view/
[6]: https://www.debugpoint.com/gnome-43/
[7]: https://www.debugpoint.com/kde-plasma-5-26/
[8]: https://debugpointnews.com/fedora-37-anaconda-web-ui-installer/
[9]: https://debugpointnews.com/fedora-37-wallpaper/
[10]: https://getfedora.org/workstation/download/

View File

@ -0,0 +1,89 @@
[#]: subject: "How to Install Viber in Ubuntu and Other Linux"
[#]: via: "https://www.debugpoint.com/install-viber-linux/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install Viber in Ubuntu and Other Linux
======
**Heres a quick guide on how you can install Viber in Ubuntu and other Linux systems.**
[Viber][1] is a free, secure calling and messaging program for all popular mobile platforms and operating systems.
It has a rich set of features such as voice/video calls, text messages with GIFs, stickers, photos, and videos. In addition, Viber features group chats, group calls and disappearing messages.
Viber is a closed-source program, but it is available free of charge for Linux distributions, with native executable clients.
Heres how to install it.
### Install Viber on Linux
It is available as an AppImage executable, a deb package and an rpm package. Use the respective link below to download it directly. The average executable size is ~180 MB.
AppImage for all Linux distros (available from the [official Viber site][1])
[Deb executable for Ubuntu][2]
[RPM package for Fedora][3]
If you have downloaded the AppImage, simply mark it as executable from any file manager and then run it.
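You can also do the same from a terminal; a quick sketch, assuming the downloaded file is named `viber.AppImage`:
```
chmod +x viber.AppImage
./viber.AppImage
```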
- For Ubuntu, Linux Mint, Debian and related distributions, you can install deb package via [many methods][4].
- You may double-click and open via the installed software manager. Or install via dpkg command as below.
```
sudo dpkg -i viber.deb
```
- For Fedora and RPM-based packages, you can install via the following command.
```
sudo dnf localinstall viber.rpm
```
For Arch Linux and other distributions, you can use the Appimage as I explained above.
### Usage
After you finish installing Viber, open it via the application menu. Here are a couple of things you need to remember.
Before you start using Viber from your Laptop/desktop, you need to set it up on your mobile phone. Download and install Viber for your mobile platform from the below links.
- [Google Play Store][5]
- [Apple App Store][6]
Once installed, set up Viber. Remember, it requires your mobile number to register.
After setting up, open the app on the Linux desktop. And you should see a screen like the one below.
![Viber is Running in Linux][7]
Scan the QR code from your mobile phone app, and you should be ready to use Viber on your Linux desktop.
**Note:** Since it is a closed-source app, make sure you understand the terms of this app and privacy-related situations while using Viber.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/install-viber-linux/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.viber.com/
[2]: https://download.cdn.viber.com/cdn/desktop/Linux/viber.deb
[3]: https://download.cdn.viber.com/desktop/Linux/viber.rpm
[4]: https://www.debugpoint.com/install-deb-files/
[5]: https://play.google.com/store/apps/details?id=com.viber.voip&hl=en_IN&gl=US
[6]: https://apps.apple.com/us/app/viber-messenger-chats-calls/id382617920
[7]: https://www.debugpoint.com/wp-content/uploads/2022/10/Viber-is-Running-in-Linux-1.jpg

View File

@ -0,0 +1,106 @@
[#]: subject: "How to Clean Up Snap Versions to Free Up Disk Space"
[#]: via: "https://www.debugpoint.com/clean-up-snap/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Clean Up Snap Versions to Free Up Disk Space
======
**This quick guide with a script helps to clean up old snap versions and free some disk space in your Ubuntu systems.**
I was running out of disk space on my Ubuntu test system. So I was investigating via GNOME's Disk Usage Analyser to find out which packages were consuming the precious SSD space. Apart from the usual cache and home directory, to my surprise I found that Snap and Flatpak consume a considerable amount of storage space.
![Snap size - before cleanup][1]
I always maintain a rule not to use Snap or Flatpak unless necessary, mainly because of their installation size and other issues; I prefer vanilla deb and rpm packages. Still, over the years I have installed and removed a certain number of Snap packages on this test system.
The problem arises after uninstallation; Snap keeps some residue files in the system, unknown to the general users.
So I opened the Snap folder `/var/lib/snapd/snaps` and discovered that Snap is keeping track of older versions of previously installed/uninstalled packages.
For example, in the below image, you can see GNOME 3.28, 3.34, and Wine; all of these were removed long ago, but they are still there. This happens because of the Snap design, which keeps revisions of packages around even after a proper uninstallation.
![Files under snaps directory][2]
Alternatively, you can get the same in the terminal using:
```
snap list --all
```
![snap list all][3]
The default value for the number of revisions to retain is 3. That means Snap keeps up to three versions of each package, including the active one. This is okay if you do not have disk space constraints.
But for servers and other use cases, this can easily run into cost issues, consuming your disk space.
However, you can easily modify the count using the following command. The value can be between 2 and 20.
```
sudo snap set system refresh.retain=2
```
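To confirm the new value, you can read the setting back (a quick check; on some snapd versions the key is only reported once it has been explicitly set):
```
snap get system refresh.retain
```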
### Clean Up Snap Versions
In a post in SuperUser, Popey, the ex-Engineering Manager at Canonical, [provided a simple script][4] that can clean up old versions of Snaps and keep the latest one.
Heres the script we will use to clean the Snap up.
```
#!/bin/bash
#Removes old revisions of snaps
#CLOSE ALL SNAPS BEFORE RUNNING THIS
set -eu
LANG=en_US.UTF-8 snap list --all | awk '/disabled/{print $1, $3}' |
while read snapname revision; do
snap remove "$snapname" --revision="$revision"
done
```
Save the above script with an .sh extension in a directory (for example, `clean_snap.sh`), give it executable permission and run it.
```
chmod +x clean_snap.sh
./clean_snap.sh
```
When I ran the script, it freed up a lot of disk space. The script also shows the name of each package as it is removed.
![Executing the script][5]
![Snaps size after cleanup][6]
### Closing Notes
There are always debates about how efficient Snap's design is. Many say it is broken by design, bloated, and heavy on systems. Part of that argument is true; I would not deny it. The whole concept of sandboxing applications is great if implemented and enhanced properly. I believe Flatpak does a better job than Snap.
That said, I hope this helps you clean up some disk space. Although it is tested on Ubuntu, it should work on any Linux distribution that supports Snap.
Also, check out our guide on [how to clean up Ubuntu][7] with additional steps.
Finally, if you are looking to clean up **Flatpak** apps, refer to [this guide][8].
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/clean-up-snap/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2021/03/Snap-size-before-cleanup.jpg
[2]: https://www.debugpoint.com/wp-content/uploads/2021/03/Files-under-snaps-directory.jpg
[3]: https://www.debugpoint.com/wp-content/uploads/2021/03/snap-list-all.jpg
[4]: https://superuser.com/a/1330590
[5]: https://www.debugpoint.com/wp-content/uploads/2021/03/Executing-the-script.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2021/03/Snaps-size-after-cleanup.jpg
[7]: https://www.debugpoint.com/2018/07/4-simple-steps-clean-ubuntu-system-linux/
[8]: https://www.debugpoint.com/clean-up-flatpak/

View File

@ -0,0 +1,164 @@
[#]: subject: "How to Remove Snap Packages in Ubuntu Linux"
[#]: via: "https://www.debugpoint.com/remove-snap-ubuntu/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Remove Snap Packages in Ubuntu Linux
======
**A tutorial on how to remove Snap from Ubuntu Linux and getting a snap-free system.**
Snap packages, developed by Canonical, are beneficial for several use cases. They provide easy and fast updates of applications directly to end users. Not only that, they have several other benefits: they come with all dependencies packaged and they allow multiple installations of the same application. Furthermore, they run in a sandbox mode, providing security and other benefits.
Alongside these benefits, Snap tech has some debatable drawbacks. For example, almost every user who used Snap reported slower performance, including startup time, compared to native deb or RPM packages. In addition, due to its design, the application installation size is huge and costs disk space because all the dependencies are packaged in.
Not only that, but due to its sandbox nature, the Snap apps may not access several areas of your Linux desktop until managed with proper permission.
This guide explains how you can remove the snap from the Ubuntu system altogether.
These steps are tested in [Ubuntu 22.04 LTS Jammy Jellyfish][1]. However, it should work for all applicable Ubuntu versions.
Warning: These steps will remove Software and Firefox, the two critical applications in your Ubuntu system. Make sure you take backups of bookmarks and other Firefox settings before trying these steps.
### Remove Snap Packages in Ubuntu Linux
- Open a terminal and view the list of Snap packages installed in your system using the below command. It shows the snap packages such as Firefox, Software store, themes and other core packages installed by default.
```
snap list
```
![Snap list in Ubuntu][2]
- Remove the snap packages in the following order: first remove Firefox, then snap-store, and then the other packages that appear in the above command output on your system.
```
sudo snap remove --purge firefox
sudo snap remove --purge snap-store
sudo snap remove --purge gnome-3-38-2004
```
```
sudo snap remove --purge gtk-common-themes
sudo snap remove --purge snapd-desktop-integration
sudo snap remove --purge bare
sudo snap remove --purge core20
sudo snap remove --purge snapd
```
- Finally, remove the snap daemon via apt command.
```
sudo apt remove --autoremove snapd
```
[![remove snap and others][3]][4]
That's not all. Even if you removed the snaps using the above commands, the `sudo apt update` command will bring snapd back if you don't stop the apt trigger.
- So, to stop that, we need to create an apt preference file that blocks snapd. Create a new file called **nosnap.pref** in **/etc/apt/preferences.d/**:
```
sudo gedit /etc/apt/preferences.d/nosnap.pref
```
- And add the following lines, then save the file.
```
Package: snapd
Pin: release a=*
Pin-Priority: -10
```
![create a pref file][5]
_The apt preference mechanism is a potent tool if you know how to use it. For example, in the above statements, the Pin-Priority of -10 prevents the package from being installed._
_Unrelated to this tutorial: if, for example, you want to give very high priority to all packages from the distribution with code name bullseye, you might use a preference like the one below. If you want to learn more, you can visit the [apt man pages][6]._
```
Package: *
Pin: release n=bullseye
Pin-Priority: 900
```
- Returning to the topic, once you save and close the above file, run the below again from the terminal.
```
sudo apt update
```
- Finally, the steps are complete for getting rid of the snap from Ubuntu.
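If you want to double-check that the pin is in effect, you can inspect the apt policy for snapd; the -10 priority from the preference file should appear in the output (exact output varies by system):
```
apt policy snapd
```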
### Installing Software and Firefox as deb files after removing Snap from Ubuntu
You removed the Firefox and Software applications, but you probably still need them for your work.
You can use the following command to install the apt version of GNOME Software. Make sure you use the `--install-suggests` flag. Otherwise, it will install the snap version again!
```
sudo apt install --install-suggests gnome-software
```
And to install firefox, use the official PPA via the below commands.
```
sudo add-apt-repository ppa:mozillateam/ppa
sudo apt update
sudo apt install -t 'o=LP-PPA-mozillateam' firefox
```
[![Add the PPA][7]][8]
[![Install Firefox as deb file from PPA][9]][10]
Once you have installed Firefox, enable automatic updates using the below command. To learn more, [visit this page][11].
```
echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox
```
Last but not least, create another preference file for Firefox to give high priority to the above PPA while running apt. If you don't do this, the apt update command will pull the Firefox snap back in, along with its "snap friends". 😂
```
sudo gedit /etc/apt/preferences.d/mozillateamppa
```
Finally, add these lines and save the file.
```
Package: firefox*
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 501
```
Thats it.
### Revert back to Snap in Ubuntu
If you change your mind, remove the preference file and reinstall the applications using the commands below.
```
sudo rm /etc/apt/preferences.d/nosnap.pref
sudo apt update && sudo apt upgrade
sudo snap install snap-store
sudo apt install firefox
```
### Closing Notes
Wrapping up this tutorial on removing Snap from Ubuntu, I would say that eliminating Snap completely takes an unnecessary amount of effort, and the steps can be difficult for new users. I hope this guide helps you get rid of Snap. Cheers.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/remove-snap-ubuntu/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/2022/01/ubuntu-22-04-lts/
[2]: https://www.debugpoint.com/wp-content/uploads/2022/04/Snap-list-in-Ubuntu.jpg
[3]: https://www.debugpoint.com/wp-content/uploads/2022/04/remove-snap-and-others-1024x544.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2022/04/remove-snap-and-others.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/04/create-a-pref-file.jpg
[6]: https://manpages.ubuntu.com/manpages/focal/man5/apt_preferences.5.html
[7]: https://www.debugpoint.com/wp-content/uploads/2022/04/Add-the-PPA-1024x550.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/04/Add-the-PPA.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2022/04/Install-Firefox-as-deb-file-from-PPA-1024x548.jpg
[10]: https://www.debugpoint.com/wp-content/uploads/2022/04/Install-Firefox-as-deb-file-from-PPA.jpg
[11]: https://www.debugpoint.com/2021/09/remove-firefox-snap-ubuntu/

View File

@ -0,0 +1,113 @@
[#]: subject: "How to Enable and Access USB Drive in VirtualBox"
[#]: via: "https://www.debugpoint.com/enable-usb-virtualbox/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Enable and Access USB Drive in VirtualBox
======
**Heres a precise guide on how you can enable USB in Oracle VirtualBox.**
When you work in a virtual machine environment, the USB stick is usually plugged into the host system, but it is a little difficult to access its contents from the guest system.
In VirtualBox, you need to install an extension pack and enable some settings to access USB devices in the guest. Here's how.
This article assumes that you have already installed VirtualBox and also installed some Linux distribution or operating system inside it.
If not, check out the [articles here][1].
Please note that Oracle VM VirtualBox Extension Pack comes with Oracles Personal Use and Evaluation License (PUEL). This license is different from VirtualBox, which is under GPL. If you are using the below steps for commercial purposes, make sure you [read this page][2] carefully.
### Enable USB in VirtualBox 7.0
#### Install VirtualBox Extension Pack
- Open the VirtualBox download page and download the VirtualBox Extension pack for all supported platforms using [this link][3].
![Download the extension pack][4]
- Then click on `File > Tools > Extension Pack Manager`.
- Click on the `Install` button in the toolbar and select the downloaded .vbox-extpak file.
- Hit `Install`. Accept the terms, and give the admin password for the installation.
![install extension pack manager][5]
![install extension pack manager after accepting terms][6]
- After successful installation, you can see it in the installed list.
- Restart your host system. Restarting is mandatory.
#### Enable USB in the guest box
- Plug the USB stick that you want to access from the guest virtual machine into your host system.
- Start VirtualBox and right-click on the VM name where you want to enable USB. Select Settings.
![Launch settings for the virtual machine][7]
- On the left pane, click on USB. Then select the controller version. For example, you can select USB 3.0. Then click on the small plus icon to add a USB filter.
- In this list, you should see your USB stick name (which you plugged in). For this example, I can see my Transcend Jetflash drive, which I plugged in.
- Select it and press OK.
[![Select the USB stick][8]][9]
- Now, start your virtual machine. Open the file manager, and you should see that the USB stick is enabled and mounted in your virtual machine (see the terminal check after this list).
- In this demonstration, you can see the Thunar file manager of my [Arch-Xfce][10] virtual machine is showing the contents of my USB stick.
[![Enabling USB and accessing contents from VirtualBox][11]][12]
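If you would rather confirm it from a terminal inside the guest, a quick check works too (a sketch assuming a typical Linux guest; the device name will vary):
```
lsblk
# the USB stick usually shows up as an extra device such as sdb1, along with its mount point
```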
### Usage notes
Now, here are a couple of things you should remember.
- When you plug the USB stick into the host system, keep it mounted, but do not open or access any files on it before launching the virtual machine.
- Once you start your virtual machine, the USB stick will be unmounted in the host system and auto-mounted in the guest system, i.e. your virtual machine.
- After you finish with the USB stick, make sure to eject or unmount it inside the virtual machine. Then it will be accessible again in your host system.
### Wrapping Up
VirtualBox is a powerful utility and provides easy-to-use features to set up your virtual machines extensively. The steps are straightforward; just make sure your USB stick is detected properly in the host system for this to work.
Also, remember that USB stick detection via the Extension Pack is not related to the VirtualBox Guest Additions. They are completely unrelated and provide separate functions.
Finally, let me know if this guide helps you in the comment box.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/enable-usb-virtualbox/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/tag/virtualbox
[2]: https://www.virtualbox.org/wiki/VirtualBox_PUEL
[3]: https://www.virtualbox.org/wiki/Downloads
[4]: https://www.debugpoint.com/wp-content/uploads/2022/10/Download-the-extension-pack.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/10/install-extension-pack-manager.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/10/install-extension-pack-manager-after-accepting-terms.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/10/Launch-settings-for-the-virtual-machine.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/10/Select-the-USB-stick-1024x399.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2022/10/Select-the-USB-stick.jpg
[10]: https://www.debugpoint.com/xfce-arch-linux-install-4-16/
[11]: https://www.debugpoint.com/wp-content/uploads/2022/10/Enabling-USB-and-accessing-contents-from-VirtualBox-1024x639.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2022/10/Enabling-USB-and-accessing-contents-from-VirtualBox.jpg

View File

@ -0,0 +1,91 @@
[#]: subject: "How to Check: Xorg or Wayland Display Server?"
[#]: via: "https://www.debugpoint.com/check-wayland-or-xorg/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Check: Xorg or Wayland Display Server?
======
**Heres how you can quickly check whether you are running Xorg or Wayland Display Server.**
With every passing day, the modern Wayland display server is making its way to all Linux distributions. Although the legacy Xorg is still relevant and will stay, Wayland is undoubtedly better in security and other performance aspects.
However, Xorg will not completely phase out anytime soon. Probably never.
If you are running any Linux distribution, how can you check whether you are running Xorg or Wayland? Heres how.
### Wayland or Xorg: Which one are you running?
- Open a terminal window (CTRL+ALT+T) in your Linux distribution (e.g. Ubuntu, Fedora, Arch, etc.).
- Then type the following command and hit enter.
```
echo $XDG_SESSION_TYPE
```
- The output of the command will tell you whether the current session is Wayland or Xorg (X11).
```
[debugpoint@fedora ~]$ echo $XDG_SESSION_TYPE
wayland
```
![This command can give you details about Xorg or Wayland][1]
Thats simple. However, there are other ways as well.
### Other methods
#### Using Settings
If you want a graphical method, open your Linux distribution's Settings application. In the About section, you should see Wayland/X11 mentioned under some label.
For example, in the GNOME Settings, you can find it under “Windowing system”, as shown below.
![In GNOME Settings you can find it][2]
#### Using session values
You can also find it out using `loginctl`, the [systemd][3] login manager tool. Remember, this only works on systemd-based systems.
Open a terminal and run the below command. You will see the session id value, in this example `c2`.
```
loginctl
```
Now, pass the session id to the following command to get the display server type. Make sure to change `c2` to your own session id.
```
loginctl show-session c2 -p Type
```
![Using loginctl to find out][4]
### Wrapping Up
So, these are some of the ways you can find out whether you are running Wayland or Xorg on your Linux system. You can also use the above commands in your shell scripts for further automation (see the sketch below).
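For example, a tiny script that branches on the session type might look like this (a sketch based on the `$XDG_SESSION_TYPE` check above):
```
#!/bin/bash
# Print a message depending on the current display server session type
if [ "$XDG_SESSION_TYPE" = "wayland" ]; then
    echo "Running a Wayland session"
else
    echo "Running an Xorg (X11) session"
fi
```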
Cheers.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/check-wayland-or-xorg/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/This-command-can-give-you-details-about-Xorg-or-Wayland-1024x612.jpg
[2]: https://www.debugpoint.com/wp-content/uploads/2022/10/In-GNOME-Settings-you-can-find-it.jpg
[3]: https://www.debugpoint.com/tag/systemd/
[4]: https://www.debugpoint.com/wp-content/uploads/2022/10/Using-loginctl-to-find-out.jpg

View File

@ -0,0 +1,56 @@
[#]: subject: "4 open source editors I use for my writing"
[#]: via: "https://opensource.com/article/22/10/open-source-editors"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
4 open source editors I use for my writing
======
I've done a lot of writing throughout my career, mostly as an IT consultant creating product documentation as client deliverables. These documents generally provide instructions on installing various operating systems and software products.
Since 2018, I've contributed to opensource.com with articles about open source software. Of course, I use open source editors to write my pieces. Here are the four open source editors that I have used.
### 1. Vi
[Vi][1], also referred to as Vim, is the first open source editor that I learned. This was the editor taught by my computer science classes and that I used for all of my C programming. I have used it as my de facto command line editor since the mid-1990s. There are so many iterations of this tool that I could write a whole series on them. Suffice it to say that I stick to its basic command line form with minimal customization for my daily use.
### 2. LibreOffice Writer
Writer is part of the open source LibreOffice office suite. It is a full-featured word processor maintained by The Document Foundation. It supports industry-standard formats such as the Open Document Format (ODF), Open XML, and MS Office DOC, DOCX. [Learn more about Writer][2] on its official site.
### 3. Ghostwriter
Ghostwriter is a [text editor for Markdown][3]. It has a nice real-time viewer and syntax guide or cheat sheet feature. [Visit the official website][4] to discover more.
### 4. Gedit
Gedit is the basic graphical editor found in many Linux distributions and is described as "a small and lightweight text editor for the GNOME desktop." I have begun using it lately to create articles in the Asciidoc format. The benefit of using Asciidoc is that the syntax is easily manageable and importable into web rendering systems such as Drupal. [See the Gedit Wiki][5] for many tips and tricks.
### Editing text
An extensive list of editing software is available in the open source world. This list will likely grow as I continue writing. The primary goal for me is simplicity in formatting. I want my articles to be easy to import, convert, and publish in a web-focused platform.
Your writing style, feature needs, and target audience will guide you in determining your preferred tools.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/open-source-editors
作者:[Alan Formy-Duval][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lkxed
[1]: https://opensource.com/article/20/12/vi-text-editor
[2]: https://www.libreoffice.org/discover/writer/
[3]: https://opensource.com/article/21/10/markdown-editors
[4]: https://github.com/KDE/ghostwriter
[5]: https://wiki.gnome.org/Apps/Gedit

View File

@ -0,0 +1,91 @@
[#]: subject: "How to find GNOME Shell version from the Terminal"
[#]: via: "https://www.debugpoint.com/find-gnome-version/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to find GNOME Shell version from the Terminal
======
**Heres a quick guide on finding the GNOME desktop (or Shell) version via the command line and GUI.**
### Find GNOME Shell Version
There are many use cases where you need to find the GNOME version number you are running. For example, if you are a developer, you might want to find out compatible packages and ensure that all the dependencies are met.
So, to do that, heres how you can find the GNOME version number.
Firstly, open a terminal. And run the following command.
```
gnome-shell --version
```
![version via terminal][1]
It will give you the GNOME shell version number currently running in your system.
Similarly, if you prefer a graphical method from within the desktop environment, open Settings and then click on the About tab.
Here you can see the GNOME Version at the bottom.
![version via settings window][2]
### Usage Notes
There might be situations where you use a different desktop environment (such as MATE or Xfce), which also uses GNOME components and packages.
Those desktop environments don't use the gnome-shell package, of course. So, if you want to find out which GNOME packages and versions are used in those cases, you can check the contents of the file below. This applies to Ubuntu and Debian-based distros only.
```
/var/lib/apt/extended_states
```
For example, the file may list a GNOME package such as gnome-weather.
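A quick way to scan that file for GNOME-related entries is with grep (a sketch; the package names will vary on your system):
```
grep -A 2 "Package: gnome-" /var/lib/apt/extended_states
```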
Now that you know the package name, you can find out the version installed on your system using the below command. You can find the similar DNF command for RPM-based distros in [this guide][3].
```
apt show gnome-weather
```
**Sample output:**
```
debugpoint@debugpoint-mate:~$ apt show gnome-weather
Package: gnome-weather
Version: 43.0-1
Priority: optional
Section: universe/gnome
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian GNOME Maintainers <pkg-gnome-maintainers@lists.alioth.debian.org>
```
### Wrapping Up
This guide teaches you how to find the version of GNOME and some additional methods.
Cheers.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/find-gnome-version/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/version-via-terminal.jpg
[2]: https://www.debugpoint.com/wp-content/uploads/2022/10/version-via-settings-window.jpg
[3]: https://www.debugpoint.com/dnf-commands-examples/

View File

@ -0,0 +1,117 @@
[#]: subject: "How to Upgrade to Ubuntu 22.10 From 22.04 LTS (Jammy to Kinetic)"
[#]: via: "https://www.debugpoint.com/upgrade-ubuntu-22-04-22-10/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Upgrade to Ubuntu 22.10 From 22.04 LTS (Jammy to Kinetic)
======
**Here are the steps on how to upgrade your current Ubuntu 22.04 LTS Jammy Jellyfish to Ubuntu 22.10 Kinetic Kudu.**
Always stay with a long-term support release. That is the rule of thumb. So, the prior [Ubuntu 22.04 LTS][1] Jammy Jellyfish is supported until April 2027. That's a long time.
In addition, LTS releases are super stable. They rarely break and become unstable. So, if you use your laptop/desktop or server installation with the LTS version, stay with it.
However, if you want the latest kernel, GNOME 43, and new technology like PipeWire, you might want to make the jump and upgrade to [Ubuntu 22.10 Kinetic Kudu][2].
Heres how.
### Upgrade Ubuntu 22.04 LTS (Jammy Jellyfish) to Ubuntu 22.10 (Kinetic Kudu)
**Note**: I hope you are not running Ubuntu 21.10 Impish Indri, released last October, because that is out of support. But for any reason, if you are still running it, I would recommend you do a fresh install of 22.10, or do a stepped upgrade to 22.04 and then to 22.10.
#### Before you upgrade
Before you upgrade, do a little housekeeping. This is super important.
- Take backups of your `/home`, `Downloads` and other files to a USB drive or a separate partition in case the upgrade fails.
- If you have added additional PPAs over time, make sure you note them down. The upgrade process will disable the PPAs before it starts, so after the upgrade is complete, make sure to enable them again manually.
- Note down and disable all the GNOME extensions. Extensions tend to break after an upgrade if they are not updated by their developers for the new GNOME version.
- Keep a LIVE USB stick handy.
#### Upgrade steps
- Open Software & Update.
- Go to the Updates tab.
- Select `Notify me of a new Ubuntu version` and change it to `For any new version`.
- This will tell the package manager to look for the Ubuntu 22.10 release details.
![Make sure to change the option for new Ubuntu 22.10 release][3]
- Open a terminal and run below.
```
sudo apt update
sudo apt upgrade
```
Alternatively, you can open the Software Updater as well. Install all the pending packages.
- Once both commands are complete, open the Software Updater, and you will see a prompt to upgrade to Ubuntu 22.10 (as shown in the below image).
![New version update prompt from the GUI method][4]
- Now click on the `Upgrade` button and follow the on-screen instructions. The upgrade process takes time, so be patient and wait until it finishes. Make sure you have stable internet connectivity for the entire upgrade process.
If you still don't get the update, wait a day or two and try again.
- If you do not see the above prompt, do a manual reboot of your system and try again.
**Via terminal**
- Open the following file via the nano file editor in the terminal.
```
nano /etc/update-manager/release-upgrades
```
- Change the `Prompt=LTS` to `Prompt=normal`. Note: If you have changed the updates tab to “For any new version” as mentioned above, then this file should be updated already. But verify once.
![Change the release upgrade file][5]
- Press CTRL+O to save and CTRL+X to exit.
- Finally, you can also run the below command to force the upgrade process from the terminal.
```
sudo do-release-upgrade -c
```
![New version update prompt from the terminal method][6]
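Note that the `-c` switch only checks whether a new release is available and reports it. To actually start the upgrade from the terminal, run the same command without `-c`:
```
sudo do-release-upgrade
```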
The upgrade process will take some time (at least half an hour, often more) depending on your internet connection and hardware. Wait until it is complete. Once done, restart and enjoy Ubuntu 22.10 Kinetic Kudu.
![Upgrade is in progress][7]
While the upgrade process is in progress, take a look at the exciting articles we [recently published on Ubuntu 22.10][8].
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/upgrade-ubuntu-22-04-22-10/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/ubuntu-22-04-review/
[2]: https://www.debugpoint.com/ubuntu-22-10/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/10/Make-sure-to-change-the-option-for-new-Ubuntu-22.10-release.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2022/10/New-version-update-prompt-from-the-GUI-method2.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/10/Change-the-release-upgrade-file.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/10/New-version-update-prompt-from-the-terminal-method.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/10/Upgrade-is-in-progress.jpg
[8]: https://www.debugpoint.com/tag/ubuntu-22-10

View File

@ -0,0 +1,197 @@
[#]: subject: "10 Things to Do After Installing Ubuntu 22.10 [With Bonus Tip]"
[#]: via: "https://www.debugpoint.com/things-to-do-ubuntu-22-10/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "chai001125"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
10 Things to Do After Installing Ubuntu 22.10 [With Bonus Tip]
======
**Here's our recommended list of 10 things to do after installing Ubuntu 22.10 "Kinetic Kudu" (GNOME Edition).**
![][1]
Ubuntu 22.10 brings exciting new features such as GNOME 43, the latest kernel, a newly redesigned tray menu, new Files features, Pipewire and [many more][2].
I am sure you are excited to try them.
But wait.
Before you head off to enjoy your new Ubuntu installation, here's an assorted list of customization tips you can't miss.
### 10 Things to Do After Installing Ubuntu 22.10
#### 1. Update your system
The first thing you need to do after installing Ubuntu 22.10 is to update your system. Often, the latest ISO may not contain all the updates due to timing differences. So, to update your system, open a terminal window and run the following commands.
```
sudo apt update && sudo apt upgrade
```
Once the commands are complete, you can proceed to the next steps.
#### 2. Remove Firefox Snap and install Flatpak or deb
Since Ubuntu 21.10 last year, the default web browser, Firefox, has shipped as a Snap package. If you are an average user, this may not be a problem or anything to worry about. But many users dislike the Firefox Snap for several reasons: for example, slow startup time and unnecessary Snap update notifications whenever there is a backend update.
To completely remove the Firefox Snap, you can follow the guide [on this page][3] that I have written; it is a little involved and may take time. Then install a deb version of Firefox from a PPA, or use the [Flatpak version][4].
As I said, this is an optional tip; you may skip it if you want.
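For reference, here is a condensed sketch of the PPA route. The PPA name and pin settings below reflect the commonly used Mozilla Team PPA; check the linked guide for the exact, current steps before relying on it.
```
# Remove the Snap build of Firefox
sudo snap remove firefox

# Add the Mozilla Team PPA, which provides a deb build of Firefox
sudo add-apt-repository ppa:mozillateam/ppa

# Prefer the PPA build over Ubuntu's transitional snap package
printf 'Package: firefox*\nPin: release o=LP-PPA-mozillateam\nPin-Priority: 1001\n' | sudo tee /etc/apt/preferences.d/mozilla-firefox

# Install the deb version
sudo apt update && sudo apt install firefox
```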
#### 3. Install and Enable Flatpak
While most distributions today ship Flatpak by default, Ubuntu does not, because it promotes its own sandboxing technology, Snap.
But Flatpak applications are useful for everyone. Flatpak lets you quickly install and use many applications without worrying about dependencies and other details.
Most Flatpak applications are available in a centralized repository, Flathub.
To enable Flatpak applications in Ubuntu 22.10, run the commands below.
```
sudo apt install flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
reboot
```
Also, if you want to learn more about this process, [read this nice guide][5] we published a while ago.
#### 4. Review privacy settings
I recommend you opt out of any data collection after installing Ubuntu. Everyone knows that it's difficult to protect your privacy over the internet, no matter how hard you try. These little steps matter.
To configure privacy, open Settings and select Privacy, then review the settings listed there.
Also, make sure to disable the reporting of your usage data to Ubuntu's servers. Run the following command to do that; unfortunately, there is no option in Settings to disable it.
```
sudo ubuntu-report -f send no
```
![Turn off location services in Ubuntu 22.10][6]
#### 5. Explore GNOME 43 Features
The most visible and functional change in this release is GNOME 43. It will impact everyone's workflow, because there are some fundamental changes at the core. GNOME 43 brings a new pill-shaped tray menu and updated native applications, such as Files and GNOME Web, with new features.
Do go over the detailed [GNOME 43 features][7] here to learn more. Or explore them yourself.
![Quick Settings Demo in GNOME 43][8]
#### 6. Ensure the audio works with Pipewire
If you work primarily with audio, or your workflow involves sound capture and playback, make sure audio works properly in Ubuntu 22.10, both wired and via Bluetooth.
This release changes the audio server for the first time in many years: the legacy PulseAudio is replaced by the modern Pipewire, so it's important to verify.
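A quick way to check this (assuming the `pactl` utility is installed, as it is by default on Ubuntu desktops) is to ask the PulseAudio compatibility layer which server is actually running. With PipeWire active it reports something like the following; the version number is just an example:
```
$ pactl info | grep "Server Name"
Server Name: PulseAudio (on PipeWire 0.3.58)
```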
#### 7. Install additional packages
It's important to ensure you can play all common video and audio formats on your Ubuntu desktop. If you skipped the extra package installation during setup, you can install them with the command below.
```
sudo apt install ubuntu-restricted-extras
```
This should settle most video and audio playback problems in Ubuntu, especially with GNOME Videos, which can't play much by default.
#### 8. Set up basic apps
The base Ubuntu GNOME install comes with a very basic set of applications. Hence, you will almost certainly need to install some applications before you can use Ubuntu for real work.
The necessary apps differ for everyone due to diverse workflows. Still, here's a quick list of generic apps that I think you can go ahead and install, since they are pretty much common to all.
- GIMP: an advanced photo editor
- VLC: a media player that plays anything without needing additional codecs
- Leafpad: a lightweight text editor (even lighter than the default gedit)
- Synaptic: a far better package manager
Command to install them:
```
sudo apt install -y gimp vlc leafpad synaptic
```
#### 9. Get some GNOME Extensions
You can extend GNOME 43's functionality using several cool extensions, including customizing the top bar and tray, changing the Adwaita accent colour, and more. To do that, first install the GNOME Extension Manager via Flatpak using the command below.
```
flatpak install flathub com.mattjakeman.ExtensionManager
```
Once you do, you can install any extensions you want by searching for them in the app. Here's a quick set of extensions which I feel are perfect for your brand-new Ubuntu desktop; simply search for these names in the Extension Manager.
- Caffeine
- Custom Hot Corners
- Dash to Dock
- Blur my shell
- Gradients
- Hide Activities Button
- Net speed simplified
#### 10. Prepare backup
Last but not least, make sure you prepare for backups from the beginning. We always feel the need for a backup when we run into difficult situations. The ideal app for this is Timeshift, which is easy to install and use.
Here's the set of commands you can run from the terminal to install it. After installation, open it and follow the on-screen instructions to set up a backup.
```
sudo add-apt-repository -y ppa:teejee2008/ppa
sudo apt-get update
sudo apt-get install timeshift
```
### Bonus Tips
If you want to customize your new Ubuntu installation further, here are some bonus tips for you.
#### Install nice fonts
Fonts impact everything; they are one of the small yet impactful settings. Ubuntu comes with the default "Ubuntu Regular" font, which is also good.
But you can also install some nice fonts from Ubuntu's official repo. Here is a command to install them.
```
sudo apt install fonts-roboto fonts-cascadia-code fonts-firacode
```
After installation, you can change the font using the [GNOME Tweak tool][9].
#### Install TLP
You must take care of your laptop battery if you are a heavy laptop user. While no battery lasts forever, you can still take some steps to make it last longer. TLP is one of the best programs available on Linux for doing that automatically. All you need to do is install it using the following command and let it run.
```
sudo apt install tlp
```
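TLP should start automatically as a systemd service after installation, so no configuration is strictly required. If you want to confirm that it is active, a quick check is:
```
sudo tlp-stat -s
```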
As a general recommendation, keep the battery charge between 50% and 80%. Don't overcharge it or let it discharge below 50%, and don't keep it plugged into power continuously.
### Wrapping Up
So, there you have it: ten getting-started tips, plus some bonus ones, for your Ubuntu desktop journey.
I hope this helps you install and tweak your desktop with further customization. That said, do let me know your best after-install tips for Ubuntu in the comment box.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/things-to-do-ubuntu-22-10/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/u2210-things-hd-1024x576.jpg
[2]: https://www.debugpoint.com/ubuntu-22-10/
[3]: https://www.debugpoint.com/remove-firefox-snap-ubuntu/
[4]: https://flathub.org/apps/details/org.mozilla.firefox
[5]: https://www.debugpoint.com/how-to-install-flatpak-apps-ubuntu-linux/
[6]: https://www.debugpoint.com/wp-content/uploads/2022/10/Turn-off-location-services-in-Ubuntu-22.10.jpg
[7]: https://www.debugpoint.com/gnome-43/
[8]: https://www.debugpoint.com/wp-content/uploads/2022/08/Quick-Settings-Demo-in-GNOME-43.gif
[9]: https://www.debugpoint.com/customize-your-ubuntu-desktop-using-gnome-tweak/

View File

@ -0,0 +1,130 @@
[#]: subject: "How to Install AWS CLI on Linux Step-by-Step"
[#]: via: "https://www.linuxtechi.com/how-to-install-aws-cli-on-linux/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install AWS CLI on Linux Step-by-Step
======
This post describes how to install the latest version of the AWS CLI on a Linux system step by step. The AWS CLI is a command line interface that allows us to interact with our AWS account. Developers and sysadmins use the AWS CLI for day-to-day activities and automation.
##### Prerequisites
- A pre-installed Linux system
- A sudo user with admin rights
- Internet connectivity
Without further delay, let's jump into the AWS CLI installation steps.
### 1) Download installation file
Open the terminal and run the following curl command to download the AWS CLI installation file:
```
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
```
The above command will download the file as awscliv2.zip in the current working directory.
Execute the [ls command][1] below to verify the downloaded file:
```
$ ls -l awscliv2.zip
-rw-rw-r-- 1 linuxtechi linuxtechi 47244662 Oct 20 10:53 awscliv2.zip
$
```
### 2) Unzip downloaded installation file
Run the [unzip command][2] below to unzip the installer.
```
$ unzip awscliv2.zip
```
It will create an aws folder in the present working directory and unzip all the required files into it.
```
$ ls -ld aws
drwxr-xr-x 3 linuxtechi linuxtechi 4096 Oct 19 17:18 aws
$
```
### 3) Run install script
To install the AWS CLI, run the following install script:
```
$ sudo ./aws/install
```
The script will install all files under /usr/local/aws-cli and create a symbolic link in /usr/local/bin.
### 4) Verify AWS CLI version
To verify the AWS CLI version, run:
```
$ aws --version
aws-cli/2.8.4 Python/3.9.11 Linux/5.15.0-48-generic exe/x86_64.ubuntu.22 prompt/off
$
```
### 5) Configure AWS CLI
To verify the AWS CLI installation, let's configure it.
Log in to your AWS management console and retrieve your AWS access key ID and secret access key.
If you haven't created them yet, create an access key ID and secret access key now, and copy them somewhere safe.
Now head back to the Linux terminal and run the following aws command:
```
$ aws configure
AWS Access Key ID [None]: xxxxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxx
Default region name [None]: us-west-2
Default output format [None]: json
$
```
The above credentials will be saved in the following file:
```
$ cat  ~/.aws/credentials
```
The output of the above command shows the saved credentials.
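The file uses a simple INI-style layout. A typical example, with placeholder values rather than real keys, looks like this:
```
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```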
Run the following aws commands to list the S3 buckets and VPCs in your account.
```
$ aws s3 ls
$ aws ec2 describe-vpcs
```
The output above confirms that the AWS CLI has been configured successfully.
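If the default JSON output is too verbose, the AWS CLI also supports the `--query` and `--output` options to filter and reformat results. For example, the following command (the field selection is just illustrative) prints only the VPC IDs as a table:
```
$ aws ec2 describe-vpcs --query 'Vpcs[].VpcId' --output table
```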
That's all for this post. Please post your queries and feedback in the comments section below.
Also read: [How to Setup EKS Cluster along with NLB on AWS][3]
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/how-to-install-aws-cli-on-linux/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/linux-ls-command-examples-beginners/
[2]: https://www.linuxtechi.com/linux-zip-unzip-command-examples/
[3]: https://www.linuxtechi.com/how-to-setup-eks-cluster-nlb-on-aws/

View File

@ -0,0 +1,82 @@
[#]: subject: "Observability-driven development with OpenTelemetry"
[#]: via: "https://opensource.com/article/22/10/observability-driven-development-opentelemetry"
[#]: author: "Ken Hamric https://opensource.com/users/kenkubeshopio"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Observability-driven development with OpenTelemetry
======
Observability-driven development (ODD) is being recognized as "necessary" for complex, microservice-based architectures. Charity Majors coined the term and has written about it in several articles, including [Observability: A Manifesto][1]. She explains the term in this quote:
> Do you bake observability right into your code as you're writing it? The best engineers do a form of "observability-driven-development" — they understand their software as they write it, include instrumentation when they ship it, then check it regularly to make sure it looks as expected. You can't just tack this on after the fact, "when it's done".
### OpenTelemetry provides the plumbing
The OpenTelemetry project has the industry backing to be the 'plumbing' for enabling observability across distributed applications. The OpenTelemetry project is [second only to Kubernetes][2] when measuring the size of its contributor community among [Cloud Native Computing Foundation (CNCF)][3] projects, and was formed when OpenTracing and OpenCensus projects merged in 2019. Since then, almost all of the major players in the industry have announced their support for OpenTelemetry.
[OpenTelemetry][4] covers three observability signals—logs, metrics, and distributed traces. It standardizes the approach to instrumenting your code, collecting the data, and exporting it to a backend system where the analyses can occur and the information can be stored. By standardizing the 'plumbing' to gather these metrics, you can now be assured that you don't have to change the instrumentation embedded in your code when switching from one vendor to another, or deciding to take the analysis and storage in-house with an open source solution such as OpenSearch. Vendors fully support OpenTelemetry as it removes the onerous task of enabling instrumentation across every programming language, every tool, every database, every message bus— and across each version of these languages. An open source approach with OpenTelemetry benefits all!
### Bridging the gap with Tracetest
So you want to do ODD, and you have a standard of how to instrument the code with OpenTelemetry. Now you just need a tool to bridge the gap and help you develop and test your distributed application with OpenTelemetry. This is why my team is building [Tracetest][5], an open source tool to enable the development and testing of your distributed microservice application. It's agnostic to the development language used or the backend OpenTelemetry data source that is chosen.
For years, developers have utilized tools such as Postman, ReadyAPI, or Insomnia to trigger their code, view the response, and create tests against the response. Tracetest extends this old concept to support the modern, observability-driven development needs of teams. Traces are front and center in the tool. Tracetest empowers you to trigger your code to execute, view both the response from that code and the OpenTelemetry trace, and to build tests based on both the response and the data contained in the trace.
![Image of Tracetest functionality.][6]
Image by:
(Ken Hamric, CC BY-SA 4.0)
### Tracetest: Trigger, trace, and test
How does Tracetest work? First, you define a triggering transaction. This can be a [REST][7] or gRPC call. The tool executes this trigger and shows the developer the full response returned. This enables an interactive process of altering the underlying code and executing the trigger to check the response. Second, Tracetest integrates with your existing OpenTelemetry infrastructure to pull in the trace generated by the execution of the trigger, and shows you the full details of the trace. Spans, attributes, and timing are all visible. The developer can adjust their code and add manual instrumentation, re-execute the trigger, and see the results of their changes to the trace directly in the tool. Lastly, Tracetest allows you to build tests based on both the response of the transaction and the trace data in a technique known as trace-based testing.
### What is trace-based testing?
Trace-based testing is a new approach to an old problem. How do you enable integration tests to be written against complex systems? Typically, the old approach involved adding lots of complexity into your test so it had visibility into what was occurring in the system. The test would need a trigger, but it would also need to do extra work to access information contained throughout the system. It would need a database connection and authentication information, ability to monitor the message bus, and even additional instrumentation added to the code to enable the test. In contrast, Trace-based testing removes all the complexity. It can do this because of one simple fact—you have already fully instrumented your code with OpenTelemetry. By leveraging the data contained in the traces produced by the application under the test, Tracetest can make assertions against both the response data and the trace data. Examples of questions that can be asked include:
- Did the response to the gRPC call have a 0 status code and was the response message correct?
- Did both downstream microservices pull the message off the message queue?
- When calling an external system as part of the process—does it return a status code of 200?
- Did all my database queries execute in less than 250ms?
![Image of Observability driven development in Tracetest.][8]
Image by:
(Ken Hamric, CC BY-SA 4.0)
By combining the ability to exercise your code, view the response and trace returned, and then build tests based on both sets of data, Tracetest provides a tool to enable you to do observability-driven development with OpenTelemetry.
### Try Tracetest
If you're ready to get started, [download Tracetest][9] and try it out. It's open source, so you can [contribute to the code][10] and help shape the future of trace-based testing with Tracetest!
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/observability-driven-development-opentelemetry
作者:[Ken Hamric][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kenkubeshopio
[b]: https://github.com/lkxed
[1]: https://www.honeycomb.io/blog/observability-a-manifesto/
[2]: https://www.cncf.io/blog/2021/12/15/end-of-year-update-on-cncf-and-open-source-velocity-in-2021/
[3]: https://www.cncf.io/
[4]: https://opentelemetry.io/
[5]: http://tracetest.io
[6]: https://opensource.com/sites/default/files/2022-10/tracetest%20functionality.png
[7]: https://www.redhat.com/en/topics/api/what-is-a-rest-api?intcmp=7013a000002qLH8AAM
[8]: https://opensource.com/sites/default/files/2022-10/Tracetest-odd.png
[9]: https://tracetest.io/download
[10]: https://github.com/kubeshop/tracetest/issues

View File

@ -0,0 +1,250 @@
[#]: subject: "How to use journalctl to View and Analyze Systemd Logs [With Examples]"
[#]: via: "https://www.debugpoint.com/systemd-journalctl/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to use journalctl to View and Analyze Systemd Logs [With Examples]
======
**This guide explains the basics of the journalctl utility of [Systemd][1] and its various commands. You can use these commands for troubleshooting desktop and server logs in Linux. This is how you can use journalctl to view and analyze Systemd Logs with different examples.**
### Introduction
Many say that Systemd is not good and that it is heavy on the system; it is always a debated topic. But you cannot deny that it provides a solid set of utilities to manage and troubleshoot a system. Imagine you end up with a broken system with no GUI, perhaps with a broken boot setup and GRUB as well. In such scenarios (or in general), you can boot from a live system, mount your Linux partition and explore the Systemd logs to find out about the problem.
Systemd has three basic components, as follows:
- **systemd**: System and service manager for Linux operating systems.
- **systemctl**: Command to introspect and control the state of the systemd system and service manager.
- **systemd-analyze**: Provides system boot-up performance statistics and retrieves other state and tracing information from the system and service manager.
Apart from these three, systemd provides additional services such as journald, logind, networkd, etc. In this guide we will talk about the journald service of systemd.
### journald the systemd journal daemon
By design, systemd provides a centralized way of handling all operating system logs from processes, applications, etc. All these logging events are handled by the journald daemon of systemd. The journald daemon collects logs from everywhere in the Linux operating system and stores them as binary data in files.
There are many advantages to logging events and system problems centrally as binary data. For example, because the system logs are stored as binary rather than text, they can be rendered in many ways, such as plain text or JSON objects, for various needs. Also, it is very easy to track down a single event, since the logs are stored sequentially and can be filtered by date and time.
Remember that the log files journald collects run to thousands of lines and are updated with every event and every boot. So, on a long-running Linux system, the journal logs can grow to gigabytes in size. With that many lines, it is better to filter them with basic commands to learn more about system problems.
#### The journald Configuration File
The journald configuration file is present at the path below. It contains various flags that control how logging happens. You can take a look at the file and make any necessary changes, but I would recommend not modifying this file unless you know what you are doing.
```
/etc/systemd/journald.conf
```
#### Where journald stores the binary log files
journald stores the logs in binary format inside a directory under this path.
```
/var/log/journal
```
For example, in the below path there is a directory that contains all the system logs to date.
![journalctl log file path][2]
Do not use the cat command, or nano or vi, to open these files; they will not be displayed properly.
### Use journalctl to View and Analyze Systemd Logs
#### Basic journald command
The basic command to view logs from the journal daemon is:
```
journalctl
```
![journalctl][3]
This gives you all the journal entries, including errors, warnings, etc., from all applications and processes. The list shows the oldest logs at the top and the current logs at the bottom. Keep pressing ENTER to scroll through it line by line, or use the PAGE UP and PAGE DOWN keys. Press q to exit this view.
#### How to view journal entries in a different time zone
By default, journalctl shows the log timestamps in the current system time zone. However, you can easily have the same logs displayed in UTC instead. To view the logs in UTC, use the command below.
```
journalctl --utc
```
![journalctl --utc][4]
#### How to view only errors, warnings, etc in journal logs
The logs that a system generates have different priorities: some may be warnings that can be ignored, while others may be critical errors. You might want to look only at errors and not warnings. That is also possible, using the commands below.
To view emergency system messages use:
```
journalctl -p 0
```
![journalctl -p 0][5]
The priority levels are:
```
0: emergency
1: alert
2: critical
3: error
4: warning
5: notice
6: info
7: debug
```
When you specify a priority level, journalctl shows all messages at that level and at more severe levels. For example, the command below shows all messages with priority 2, 1 and 0:
```
journalctl -p 2
```
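The priority can also be given by name rather than by number. For example, to see only errors and more severe messages from the current boot, you can combine `-p` with `-b`:
```
journalctl -p err -b
```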
#### How to view journal logs for a specific boot
When you run the journalctl command, it shows information from the current boot, that is, the session you are currently running. But it is also possible to view information from past boots.
The journal keeps updating across reboots, and journald tracks the logs of each boot separately. To view the logs grouped by boot, use the command below.
```
journalctl --list-boots
```
![journalctl list-boots][6]
- The first number is the unique journald boot index, which you can use in the next command to analyze that specific boot.
- The second value is the boot ID, which you can also specify in commands.
- The next two date/time combinations are the time span of the logs stored for that boot. This is super handy if you want to find a log or error from a specific date and time.
To view the logs for a specific boot, use the first number or the boot ID as below.
```
journalctl -b -45
```
```
journalctl -b 8bab42c7e82440f886a3f041a7c95b98
```
![journalctl -b 45][7]
You can also use the `-x` switch, which adds explanations to the systemd messages in the output. This is a lifesaver in certain situations.
```
journalctl -xb -p 3
```
![journalctl -xb][8]
#### How to view journal logs for a specific time, date duration
journalctl is powerful enough to accept English-like arguments in the command itself for time and date manipulation.
You can use the `--since` switch in combination with `yesterday`, `today`, `tomorrow`, or `now`.
Some examples of different commands are shown below. You can modify them to your needs; they are self-explanatory. The date and time format in the commands below is `"YYYY-MM-DD HH:MM:SS"`.
```
journalctl --since "2020-12-04 06:00:00"
```
```
journalctl --since "2020-12-03" --until "2020-12-05 03:00:00"
```
```
journalctl --since yesterday
```
```
journalctl --since 09:00 --until "1 hour ago"
```
![journalctl --since 09:00 --until][9]
You can combine the above with the error level switches as well.
#### How to see Kernel specific journal logs
Linux kernel messages can be extracted from the journal logs as well. To view kernel messages from the current boot only, use the command below.
```
journalctl -k
```
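The kernel filter can be combined with the boot selectors from the previous section. For example, to view kernel messages from the previous boot:
```
journalctl -k -b -1
```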
#### How to see journal logs for a service, PID
You can filter the journald logs down to a specific systemd service unit. For example, to see the logs from the NetworkManager service, use the command below.
```
journalctl -u NetworkManager.service
```
![journalctl NetworkManager service][10]
If you do not know the service name, you can use the below command to list the systemd services in your system.
```
systemctl list-units --type=service
```
#### How to view journal logs for a user, group
If you are analyzing server logs, this is helpful where multiple users are logged in. You can first find the user ID from the user name using the command below. For example, to find the ID of the user `debugpoint`:
```
id -u debugpoint
```
Then use that ID with the `_UID` field to view the logs generated by that user.
```
journalctl _UID=1000 --since today
```
![journalctl _UID][11]
Similarly, use the `_GID` field to do the same for user groups.
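For example, you can look up the primary group ID of a user with `id -g` and filter on it (a small sketch reusing the `debugpoint` user from above):
```
journalctl _GID=$(id -g debugpoint) --since today
```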
#### How to view journal logs for an executable
You can also view the journald logs for a specific program or executable. For example, to find the messages from gnome-shell, run the command below.
```
journalctl /usr/bin/gnome-shell --since today
```
![journalctl gnome-shell][12]
### Closing notes
I hope this guide helps you use journalctl to view and analyze systemd logs when troubleshooting your Linux desktop or server. Systemd journal management is extremely powerful; if you know how to use the commands, it makes your life a bit easier at debugging time. All major mainstream Linux distributions use Systemd these days: Ubuntu, Debian, Fedora and Arch all use systemd in their default OS offerings. If you are wondering about systemd-free Linux distributions, you might want to check out [MX-Linux][13], Gentoo, Slackware or Void Linux.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/systemd-journalctl/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://freedesktop.org/wiki/Software/systemd/
[2]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-log-file-path.jpg
[3]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-utc.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-p-0.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-list-boots.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-b-45.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-xb.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-since-0900-until.jpg
[10]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-NetworkManager-service.jpg
[11]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-_UID.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2020/12/journalctl-gnome-shell.jpg
[13]: https://www.debugpoint.com/tag/mx-linux

View File

@ -0,0 +1,144 @@
[#]: subject: "How to Install Python 3.10 in Ubuntu and Other Related Linux"
[#]: via: "https://www.debugpoint.com/install-python-3-10-ubuntu/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install Python 3.10 in Ubuntu and Other Related Linux
======
**Planning to get Python 3.10 installed for your work? Here's how to install Python 3.10 in Ubuntu and related distributions.**
Python 3.10 was released in October 2021 with additional features and updates. This release brings better error messages, new pattern-matching features, TypeAlias, user-defined type guards and more. You can read the release highlights [here][1].
As of writing this guide, Python 3.10 has been adopted by most current distros. For example, Ubuntu 22.04 LTS and Fedora 36 both ship Python 3.10 by default.
That said, if you need Python 3.10 on a release that does not ship it, you can use the [reliable PPA below][2] to install the latest Python 3.10 in Ubuntu. Here's how.
### How to Install Python 3.10 on Ubuntu
This PPA can be used with Ubuntu 21.10, Ubuntu 21.04, Ubuntu 20.04 LTS, Ubuntu 18.04 LTS, Linux Mint 20.x, Elementary OS 6 and other related Ubuntu-based distributions, most of which don't ship 3.10 by default.
- Open a terminal prompt and add the following PPA.
```
sudo add-apt-repository ppa:deadsnakes/ppa
```
- Refresh the cache using the below command.
```
sudo apt update 
```
- And install Python 3.10 using the below command.
```
sudo apt install python3.10
```
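Once the installation finishes, you can verify that the new interpreter is available; it is installed alongside the stock `python3`, not in place of it:
```
python3.10 --version
```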
### Set Python Versions
Setting up Python 3.10 as the default requires some additional steps. Follow along.
**Warning**: Many applications in your Ubuntu system depend on the stock version of Python 3.9. Hence, be very sure that your work applications (e.g. GIMP, GNOME Terminal etc.) are compatible with Python 3.10. So, be cautious.
**Quick Tip:** If you want to check which of your installed system packages depends on a specific version, use the `rdepends` switch of the `apt-cache` command as follows. In the example below, I am checking which installed packages depend on Python 3.8.
```
apt-cache rdepends python3.8
```
```
[~]$ apt-cache rdepends python3.8
python3.8
Reverse Depends:
python3.8-dbg
virtualbox
python3.8-venv
python3.8-full
libpython3.8-testsuite
libglib2.0-tests
idle-python3.8
idle-python3.8
python3.8-minimal
python3.8-doc
python3.8-dev
python3.8-dbg
python3-uno
gedit
virtualbox
stimfit
python3.8-venv
python3-stfio
python3-escript-mpi
python3-escript
python3-csound
pitivi
obs-studio
liferea
libpython3.8-testsuite
libglib2.0-tests
kitty
kdevelop-python
idle-python3.8
idle-python3.8
rhythmbox-plugins
python3.8-minimal
python3.8-doc
python3.8-dev
python3
python3-uno
python3-all
cluster-glue
gedit
[~]$
```
#### Use Python 3.10 as the default Python3
- First, check the current default version using the below command from the terminal.
```
python3 --version
```
- Use `update-alternatives` to create symbolic links to python3
```
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
```
```
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 2
```
- And choose which one to use as Python3 via the command:
```
sudo update-alternatives --config python3
```
![Install Python 3.10 in Ubuntu][3]
That's all for the steps. Now you can start using the latest Python in your current Ubuntu version for your work or study. You can switch back to the stock version at any time using the above commands and changing the version numbers.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/install-python-3-10-ubuntu/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://docs.python.org/3.10/whatsnew/3.10.html
[2]: https://github.com/deadsnakes
[3]: https://www.debugpoint.com/wp-content/uploads/2021/10/Installed-Python-3.10-in-Ubuntu-1024x472.jpeg

View File

@ -0,0 +1,92 @@
[#]: subject: "PaperDE is a Touch-Friendly Linux Desktop Environment"
[#]: via: "https://news.itsfoss.com/paperde/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
PaperDE is a Touch-Friendly Linux Desktop Environment
======
PaperDE is a simple, touch-friendly desktop environment to look out for.
![PaperDE is a Touch-Friendly Linux Desktop Environment][1]
Seeing that many desktop environments exist, you may ask, why do we need another?
Well, the answer is simple. It is good to have options.
Having various user experiences enables you to experiment with different setups until you find the perfect one. If you are new to the Linux world, you may want to check out some of the best desktop environments available.
So, what does PaperDE bring to the table?
### PaperDE: A Minimal Desktop Environment
![paperde desktop view][2]
[PaperDE][3] aims to be a simple, lightweight desktop environment with a touchscreen-friendly user interface.
Yes, it is not a wildly different idea. But, as per the official screenshots, it looks pretty.
When closely examined, PaperDE looks similar to a mix of desktop environments such as [GNOME][4] and [Budgie][5].
![paperde apps][6]
It is being made from the ground up, with Qt/Wayland and [Wayfire][7] at its core, and features [PipeWire][8] as the default audio/video stack.
Furthermore, the DE will feature a dock bar for easy access to pinned apps and support adding various widgets from the [C-Suite][9] apps to the main screen.
![paperde widgets][10]
When asked about its touchscreen and gesture support on Reddit, one of the developers ([rahmanshaber][11]) mentioned:
> No, gestures doesn't work in real life, as you may use it in a tablet where it didn't came with linux installed. We used UI/ UX design approach that makes it easier to navigate the DE using finger and touch.
### Can You Try PaperDE Now?
The short answer is, yes.
The long answer is that it is still in the early stages of development.
> 💡 They released PaperDE 0.2.0 a few days back; you can check out their GitLab [project][12] to learn more.
The developers say that it is only possible to test it out using Arch, and no Flatpak or Snap packages are available to install.
I did try using the AUR package, but it failed to build on Arch Linux.
The developers have said that PaperDE will also be made available in the [official repository][13] of Alpine Linux soon. But, for other packages, maintainers and contributors will have to help.
If you are looking for something experimental to test, you can try it out.
[PaperDE][12]
_💬 Are you excited to see a new desktop environment in the works? Or do you think you do not need another project like this?_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/paperde/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/paper-de.png
[2]: https://news.itsfoss.com/content/images/2022/10/PaperDE-1.png
[3]: https://cubocore.org/paperde.html
[4]: https://www.gnome.org/
[5]: https://ubuntubudgie.org/
[6]: https://news.itsfoss.com/content/images/2022/10/paperde-apps.png
[7]: https://github.com/wayfirewm/wayfire
[8]: https://pipewire.org/
[9]: https://cubocore.org/coreapps.html
[10]: https://news.itsfoss.com/content/images/2022/10/PaperDE-2.png
[11]: https://gitlab.com/rahmanshaber
[12]: https://gitlab.com/cubocore/paper/paperde
[13]: https://pkgs.alpinelinux.org/packages

Some files were not shown because too many files have changed in this diff