Merge remote-tracking branch 'LCTT/master'

Xingyu Wang 2020-06-13 11:19:04 +08:00
commit b9fac2d0a7
9 changed files with 1418 additions and 451 deletions


@ -0,0 +1,298 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12310-1.html)
[#]: subject: (How to write a VS Code extension)
[#]: via: (https://opensource.com/article/20/6/vs-code-extension)
[#]: author: (Ashique Hussain Ansari https://opensource.com/users/uidoyen)
如何编写 VS Code 扩展
======
> 通过为流行的代码编辑器编写自己的扩展来添加缺失的功能。
![](https://img.linux.net.cn/data/attachment/album/202006/13/105415w5u1d0z5bdoneb82.jpg)
Visual Studio Code(VS Code)是微软为 Linux、Windows 和 macOS 创建的跨平台代码编辑器。遗憾的是,微软版本的 [VS Code][2] 是在 [Microsoft Software License][3] 下发布的,这不是一个开源的许可证。然而,它的源代码是开源的,在 MIT 许可证下由 [VSCodium][4] 项目发布。
VSCodium 和 VS Code一样支持扩展、内嵌式 Git 控制、GitHub 集成、语法高亮、调试、智能代码补完、代码片段等。换句话说,对于大多数用户来说,使用 VS Code 和 VSCodium 没有什么区别,而且后者是完全开源的!
### 什么是 VS Code 扩展?
<ruby>扩展<rt>extension</rt></ruby>可以让你为 VS Code 或 VSCodium 添加功能。你可以在 GUI 中或从终端安装扩展。
你也可以构建自己的扩展。有几个你可能想学习如何构建扩展的原因:
1. **想要添加一些功能:** 如果缺失你想要的功能,你可以创建一个扩展来添加它。
2. **为了乐趣和学习:** 扩展 API 允许你探索 VSCodium 是如何工作的,这是一件有趣的事情。
3. **为了提高您的技能:** 创建扩展可以提高你的编程技能。
4. **为了成名:** 创建一个对他人有用的扩展可以提高你的公众形象。
### 安装工具
在你开始之前,你必须已经安装了 [Node.js][5]、[npm][6] 和 VS Code 或 [VSCodium][4]。
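动手之前,可以先在终端里确认这些工具都已就绪。下面是一个简单的检查示例(假设 VS Code 的命令行程序叫 `code`、VSCodium 的叫 `codium`,请按你的实际安装情况调整):
```
# 确认 Node.js 和 npm 已安装,并输出版本号
node --version
npm --version

# 确认编辑器的命令行程序可用(二选一)
code --version      # VS Code
codium --version    # VSCodium
```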
要生成一个扩展,你还需要以下工具:[Yeoman][7],是一个开源的客户端脚手架工具,可以帮助你搭建新项目;以及 [vscode-generator-code][8],是 VS Code 团队创建的 Yeoman 生成器。
### 构建一个扩展
在本教程中,你将构建一个扩展,它可以为应用程序初始化一个 Docker 镜像。
#### 生成一个扩展骨架
要在全局范围内安装并运行 Yeoman 生成器,请在命令提示符或终端中输入以下内容:
```
npm install -g yo generator-code
```
导航到要生成扩展的文件夹,键入以下命令,然后按回车:
```
yo code
```
根据提示,你必须回答一些关于你的扩展的问题:
* **你想创建什么类型的扩展?** 使用上下箭头选择其中一个选项。在本文中,我将只介绍第一个选项,`New Extension (TypeScript)`。
* **你的扩展名称是什么?** 输入你的扩展名称。我的叫 `initdockerapp`。(我相信你会有一个更好的名字。)
* **你的扩展的标识符是什么?** 请保持原样。
* **你的扩展的描述是什么?** 写一些关于你的扩展的内容(你可以现在填写或稍后编辑它)。
* **初始化 Git 仓库?** 这将初始化一个 Git 仓库,你可以稍后添加 `set-remote`。
* **使用哪个包管理器?** 你可以选择 `yarn` 或 `npm`;我使用 `npm`。
按回车键后,就会开始安装所需的依赖项。最后显示:
> "Your extension **initdockerapp** has been created!"
干得漂亮!
### 检查项目的结构
检查你生成的东西和项目结构。导航到新的文件夹,并在终端中键入 `cd initdockerapp`。
一旦你进入该目录,键入 `code .`。它将在你的编辑器中打开,看起来像这样:
![Project file structure in VSCodium][9]
Hussain Ansari, [CC BY-SA 4.0][10]
最需要注意的两个文件是 `package.json` 和 `src` 文件夹内的 `extension.ts`。
#### package.json
首先来看看 `package.json`,它应该是这样的。
```
{
"name": "initdockerapp",
"displayName": "initdockerapp",
"description": "",
"version": "0.0.1",
"engines": {
"vscode": "^1.44.0"
},
"categories": [
"Other"
],
"activationEvents": [
"onCommand:initdockerapp.initialize"
],
"main": "./out/extension.js",
"contributes": {
"commands": [
{
"command": "initdockerapp.initialize",
"title": "Initialize A Docker Application"
}
]
},
"scripts": {
"vscode:prepublish": "npm run compile",
"compile": "tsc -p ./",
"lint": "eslint src --ext ts",
"watch": "tsc -watch -p ./",
"pretest": "npm run compile && npm run lint",
"test": "node ./out/test/runTest.js"
},
"devDependencies": {
"@types/vscode": "^1.44.0",
"@types/glob": "^7.1.1",
"@types/mocha": "^7.0.2",
"@types/node": "^13.11.0",
"eslint": "^6.8.0",
"@typescript-eslint/parser": "^2.26.0",
"@typescript-eslint/eslint-plugin": "^2.26.0",
"glob": "^7.1.6",
"mocha": "^7.1.1",
"typescript": "^3.8.3",
"vscode-test": "^1.3.0"
}
}
```
如果你是 Node.js 开发者,其中一些可能看起来很熟悉,因为 `name`、`description`、`version` 和 `scripts` 是 Node.js 项目的常见部分。
有几个部分是非常重要的:
* `engines`:说明该扩展将支持哪个版本的 VS Code / VSCodium。
* `categories`:设置扩展类型;你可以从 `Languages`、`Snippets`、`Linters`、`Themes`、`Debuggers`、`Formatters`、`Keymaps` 和 `Other` 中选择。
* `contributes`:可用于与你的扩展一起运行的命令清单。
* `main`:扩展的入口点。
* `activationEvents`:指定激活事件发生的时间。具体来说,这决定了扩展何时会被加载到你的编辑器中。扩展是懒加载的,所以在激活事件触发之前,它们不会被激活。
#### src/extension.ts
接下来看看 `src/extension.ts`,它应该是这样的:
```
// The module 'vscode' contains the VSCodium extensibility API
// Import the module and reference it with the alias vscode in your code below
import * as vscode from "vscode";
const fs = require("fs");
const path = require("path");
// this method is called when your extension is activated
// your extension is activated the very first time the command is executed
export function activate(context: vscode.ExtensionContext) {
// Use the console to output diagnostic information (console.log) and errors (console.error)
// This line of code will only be executed once when your extension is activated
console.log('Congratulations, your extension "initdockerapp" is now active!');
// The command has been defined in the package.json file
// Now provide the implementation of the command with registerCommand
// The commandId parameter must match the command field in package.json
let disposable = vscode.commands.registerCommand('initdockerapp.initialize', () => {
// The code you place here will be executed every time your command is executed
let fileContent =`
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
`;
fs.writeFile(path.join(vscode.workspace.rootPath, "Dockerfile"), fileContent, (err:any) => {
if (err) {
return vscode.window.showErrorMessage("Failed to initialize docker file!");
}
vscode.window.showInformationMessage("Dockerfile has been created!");
});
});
context.subscriptions.push(disposable);
}
// this method is called when your extension is deactivated
export function deactivate() {}
```
这是为你的扩展写代码的地方。已经有一些自动生成的代码了,我再来分析一下。
注意,`vscode.commands.registerCommand` 里面的 `initdockerapp.initialize` 和 `package.json` 里面的命令是一样的。它需要两个参数:
1. 要注册的命令名称
2. 执行该命令的函数
另一个需要注意的函数是 `fs.writeFile`,这是你写在 `vscode.commands.registerCommand` 的回调函数里面的。它会在你的项目根目录下创建一个 Dockerfile,并把构建 Docker 镜像的指令写入其中。
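扩展生成的 Dockerfile 就是一个普通的 Dockerfile,可以直接用标准的 Docker 命令来构建和运行。下面是一个简单的使用示例(假设工作区根目录下是一个带有 `package.json`、用 `npm start` 启动的 Node.js 应用,并且本机已安装 Docker;镜像名 `initdockerapp-demo` 只是随意取的示例):
```
# 在工作区根目录(Dockerfile 所在位置)构建镜像
docker build -t initdockerapp-demo .

# 运行容器,并把 Dockerfile 中 EXPOSE 的 3000 端口映射到本机
docker run --rm -p 3000:3000 initdockerapp-demo
```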
### 调试扩展
现在你已经写好了扩展,是时候调试它了。点击“Run”菜单,选择“Start Debugging”(或者直接按 `F5`),打开调试窗口。
在调试窗口里面点击“Add Folder”或“Clone Repository”按钮打开该项目。
接下来,用 `Ctrl+Shift+P`(在 macOS 上,用 `Command` 键代替 `Ctrl`)打开命令面板,运行 `Initialize A Docker Application`
* 第一次运行此命令时,由于自 VSCodium 启动以来激活函数尚未执行,因此会先调用激活函数,并由激活函数注册该命令。
* 如果命令已注册,那么它将被执行。
你会看到右下角有一条信息,上面写着:`Dockerfile has been created!`。这就创建了一个 Dockerfile,里面有一些预定义的代码,看起来是这样的:
![Running the new extension command][11]
Hussain Ansari, [CC BY-SA 4.0][10]
### 总结
有许多有用的 API 可以帮助你创建你想要构建的扩展。VS Code 扩展 API 还有许多其他强大的方法可以使用。
你可以在 VS Code 扩展 API 文档中了解更多关于 VS Code API 的信息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/vs-code-extension
作者:[Ashique Hussain Ansari][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/uidoyen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://code.visualstudio.com/
[3]: https://code.visualstudio.com/license
[4]: https://vscodium.com/
[5]: https://nodejs.org/en/
[6]: https://www.npmjs.com/
[7]: https://yeoman.io/
[8]: https://github.com/Microsoft/vscode-generator-code
[9]: https://opensource.com/sites/default/files/uploads/vscode-tree.png (Project file structure in VSCodium)
[10]: https://creativecommons.org/licenses/by-sa/4.0/
[11]: https://opensource.com/sites/default/files/uploads/vscode-run-command.png (Running the new extension command)


@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 lessons from remote meetings we're taking back to the office)
[#]: via: (https://opensource.com/article/20/6/remote-meetings)
[#]: author: (Abigail Cabunoc Mayes https://opensource.com/users/abbycabs)
3 lessons from remote meetings we're taking back to the office
======
Some of the ways we're accommodating working at home during the pandemic
can make in-real-life meetings better and more inclusive when we get
back to the office.
![Two people chatting via a video conference app][1]
For those of us fortunate enough to work remotely during this pandemic, we'll likely be camped out in our home offices for a while yet. The transition back to in-person work will [take time and be geographically patchy][2].
As I've talked with colleagues who are working remotely, many people say this period is temporary and makeshift: "_Once it's safe to return to the office, we can resume all our old habits and processes_." But in truth, this period of working from home and our eventual return to the office are deeply entwined. The choices and changes we make now will impact the ways we work once we step back into our offices, laboratories, classrooms, and other workspaces.
Rather than viewing this moment as temporary and makeshift, we should see it as formative. By investing in and improving our online meeting experience _now_, we can build the foundation for a better work environment that persists long after the pandemic. We can use this moment to recalibrate our culture and systems, so they are more robust, resilient, and inclusive. Those of us in scientific fields can use this moment to deliberately shift toward [kinder science][3].
Meetings are just one example: Rather than trying to recreate in-person meetings online, let's reimagine what remote meetings can be. With online meetings, you have to be intentional about setting up channels through which participants can contribute and take time to make sure that they know how and feel comfortable doing so. We can take this practice back to the office, providing an opportunity to be more inclusive in-person and break power balances (including leadership hierarchy and majority groups) that can keep folks silent.
Using this difficult moment to build a better work environment may sound overwhelming. But, there's good news: There's no shortage of resources and experiences to draw from. Working remotely feels new to a lot of us, but folks in the open source software world have been working this way and building community online for a long time: Tim O'Reilly has described this as the [architecture of participation][4]. Mozilla has been empowering cohorts of [Open Leaders][5] around the globe for years. And [rOpenSci][6], [RStudio][7], and the [Carpentries][8] have created and support remote, collaborative communities of scientists and coders. We can learn a lot from these communities that have been building relationships and innovating together from afar.
Ready to get started? Below, we share three principles for empowering remote interactions that we can also carry forward in real life (IRL).
### Set an inclusive tone
Remote meetings can easily feel disconnected or unnatural—especially if you're new to meeting online. Start all meetings with a welcome to earn buy-in and participation. For example, schedule time at the very beginning to welcome everyone, announce the meeting goals, and explain how to participate, like how to unmute microphones, use the chat, or write in a shared document. Starting with a quick roll call and icebreaker question during the meeting makes folks less anonymous and encourages participation. Additionally, outline the shared expectations and culture of the meeting by summarizing the code of conduct or community participation guidelines.
Create a detailed agenda with specific minutes allotted to different topics, including the welcome. The agenda should be shared ahead of time and contain enough structure to allow for productive conversations, but also enough flexibility to allow for fruitful digressions. Agendas can be designed with [POP][9] to clearly state the _purpose_, _outcomes_, and _process_ of the meeting.
Providing multiple communication channels during the meeting for folks to "speak up" is important not just for introverts but also for underrepresented minorities, students, and early career people, as well as international or multi-lingual participants. If you're using a shared document, writing feedback instead of verbalizing it can save time and also be used for side and asynchronous conversations. Using multiple channels also provides ways to be in touch and catch up if folks join the call late or drop off due to connectivity issues.
### Provide robust documentation
People can walk away (or log off) from meetings with different perceptions and expectations. Robust documentation can dispel this ambiguity and keep everyone on the same page.
Write meeting notes in a document—preferably in the agenda document mentioned above. Use collaborative document software such as Google Docs or an open source alternative like [Etherpad][10] so that folks can take live notes together. Invite and teach participants how to contribute and have them get into the rhythm early on, for example, with a written roll call that also buffers time as folks arrive at the start.
Encouraging everyone to participate in the document will result in a record of the meeting that is less brittle than anything produced by a single designated note-taker and will include more voices. Folks can contribute in many ways (including adding links, comments, +1's to affirm others' ideas, and emojis to provide quick emotion and color). Further, people can contribute to shared documents live during a meeting or asynchronously before and afterward. These notes can also be shared in many ways after the meeting (email digest, Slack, Twitter, [Mattermost][11], etc.) with different audiences and become more accessible in "post-production" (e.g., captioning, alt-tags on any images shared, etc.).
### Choose the right tools
Choosing the appropriate communication channels for meetings and follow-up is important. It also requires time and empathy to make sure everyone knows how to use the technology.
For virtual meetings, videoconferencing software like [Jitsi][12] allows participants to engage with "faces on." This is a nice norm to set if participants are comfortable, but it is also important to state that it is fine to participate without enabling video for any reason, including bandwidth or privacy issues.
Providing the opportunity for participants to have smaller conversations through [Big Blue Button][13]'s breakout rooms can be very fruitful for moving ideas forward and strengthening relationships and trust. Having prompts or tasks to center the conversations helps make the best use of time, and scheduling time for summaries once the whole group has reconvened allows more ideas and insights to be shared.
Using collaborative software for creating presentations can strengthen engagement while reducing bandwidth issues. Asking participants to open the presentation on their computers and advance the slides themselves eliminates the need for screen-sharing—and the bandwidth issues, multiple windows kerfuffling, and passiveness that can arise as a result.
Being deliberate about the communication platforms outside of meetings is also important. For example, [GitHub Issues][14] might be better for archiving decision-making conversations. And a messaging platform like [Mattermost][15] might be better for quick contact, co-working, and community-building. Whichever platforms you use, take some time to make sure that they meet your team's needs and provide enough control to make them as safe as possible for your participants.
### Make meetings work online, offline, or both
When we're in person again, these principles and tips don't become obsolete. They're relevant and critical to a productive, inclusive workplace, whether we're online, offline, or a hybrid of the two. We have led our [Mozilla Open Leaders][16] and [Openscapes Champions][17] programs this way, and we've learned that these approaches can transcend discipline and organization size.
And while each principle takes time and practice, they're all attainable. We have a big chance here to redesign the way we interact and collaborate, so let's start with intention and kindness.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/remote-meetings
作者:[Abigail Cabunoc Mayes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/abbycabs
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chat_video_conference_talk_team.png?itok=t2_7fEH0 (Two people chatting via a video conference app)
[2]: https://www.theatlantic.com/health/archive/2020/03/how-will-coronavirus-end/608719/
[3]: https://blogs.scientificamerican.com/observations/open-software-means-kinder-science/
[4]: http://radar.oreilly.com/2015/03/socialcivics-and-the-architecture-of-participation.html
[5]: https://foundation.mozilla.org/en/opportunity/mozilla-open-leaders/
[6]: https://ropensci.org/
[7]: https://community.rstudio.com/
[8]: http://carpentries.org/
[9]: https://suzannehawkes.com/2010/04/09/pop-everything/
[10]: https://opensource.com/business/15/7/five-open-source-alternatives-google-docs
[11]: https://opensource.com/alternatives/slack
[12]: https://opensource.com/alternatives/skype
[13]: https://opensource.com/article/20/5/open-source-video-conferencing#bigbluebutton
[14]: https://openscapes.github.io/series/github-issues.html
[15]: https://mattermost.com/
[16]: https://foundation.mozilla.org/en/blog/online-meeting-tips/
[17]: https://www.openscapes.org/blog/2020/03/11/how-to-run-a-remote-workshop/


@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 reasons to contribute to open source now)
[#]: via: (https://opensource.com/article/20/6/why-contribute-open-source)
[#]: author: (Jason Blais https://opensource.com/users/jasonblais)
3 reasons to contribute to open source now
======
Now, more than ever, is the ideal time to contribute to open source.
Here's why.
![Business woman on laptop sitting in front of window][1]
Open source software has [taken over the world][2]. From the early days of Linux and MySQL, open source is driving innovation like never before, with more than [180,000 public repositories on GitHub][3] alone.
For those of you who have not yet ventured into the open source world, here are the three reasons to start today.
### Build your confidence as a developer
If you're young, early in your career, or are even just learning a new programming language, open source is the best way to get started.
By contributing to an open source project, you receive immediate feedback on your development and programming skills. You may get suggestions about the choice of a function name, the way you used conditional logic, or how using a goroutine you didn't know about speeds up the execution of your program. This is all invaluable feedback to receive when you're learning something new.
Moreover, as you create more pull requests and apply what you learned from previous submissions, you begin to learn how to write good code and [submit great pull requests for code review][4]. Finally, many open source projects offer mentorship programs to help guide you through your first few contributions. It is a very welcoming, safe environment to build your confidence as a developer.
For an example story, read about [Allan Guwatudde's experience in open source][5] as a self-taught developer.
### Build your resume or CV
Even if you're a seasoned developer, you may want to build your resume to help with career development and future job searches. Perhaps you're interested in exploring a new cutting-edge framework or a new programming module, and you don't have opportunities to do either at work.
You may be able to get experience by registering for a course or finding a way to introduce these concepts at your day job. But when those options are not available (or desirable), open source provides the perfect opportunity! In addition to building your skills and increasing your confidence, all of your open source contributions are public and demonstrate the skills you have mastered and the projects you've tackled. In fact, your open source profile by itself could provide you with a strong portfolio that sets you apart from other job candidates.
Moreover, many open source projects—[such as Mattermost][6]—allow you to add yourself as a Contributor on LinkedIn to directly promote your professional profile.
[Read about Siyuan Liu's journey][7] from the first open source contribution to becoming a two-time MVP of the Mattermost project.
### Build your professional network
Building a strong professional network can help you achieve your career goals, learn more about your own or adjacent fields, and help with a job search. Contributing to open source is an excellent way to build that network. You join a welcoming community of hundreds or thousands of contributors, interact with likeminded developers in the open source space, and build connections along the way. You might even get introduced to key people in the industry, like the maintainer of a high-profile open source tool. Such relationships can turn into career-changing connections.
Finally, contributing to an open source project may even land you a job! For example, [Mattermost][8] has hired several contributors from its open source community to work full-time on the engineering team.
### Start contributing to open source today
Open source empowers you to build your confidence as a developer, build your resume, and build your professional network. Moreover, your contribution—no matter how big or small—makes a direct impact on the future of the open source project. That's why many projects send gifts as a thank you to contributors (e.g., a [customized mug to all first-time contributors][9]).
Ready to get started with open source? Check out [these open source projects][10] for first-time open source contributions or find out [how to contribute to Mattermost][11] to get started.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/why-contribute-open-source
作者:[Jason Blais][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jasonblais
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://techcrunch.com/2019/01/12/how-open-source-software-took-over-the-world/
[3]: https://github.com/search?q=stars%3A%3E100&s=stars&type=Repositories
[4]: https://mattermost.com/blog/submitting-great-prs/
[5]: https://mattermost.com/blog/building-confidence-and-gaining-experience-with-good-open-source-projects/
[6]: https://docs.mattermost.com/overview/faq.html#can-contributors-add-themselves-to-the-mattermost-company-page-on-linkedin
[7]: https://mattermost.com/blog/open-source-contributor-journey-with-mattermost/
[8]: https://mattermost.com/careers/
[9]: https://forum.mattermost.org/t/limited-edition-mattermost-mugs/143
[10]: https://firstcontributions.github.io/
[11]: http://mattermost.com/contribute


@ -1,311 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to write a VS Code extension)
[#]: via: (https://opensource.com/article/20/6/vs-code-extension)
[#]: author: (Ashique Hussain Ansari https://opensource.com/users/uidoyen)
How to write a VS Code extension
======
Add missing features by writing your own extension for the popular code
editor.
![woman on laptop sitting at the window][1]
Visual Studio Code (VS Code) is a cross-platform code editor created by Microsoft for Linux, Windows, and macOS. Unfortunately, Microsoft's version of [VS Code][2] is released under the [Microsoft Software License][3], which is not an open source license. However, the source code is open source, released under the MIT license, with releases distributed by the [VSCodium][4] project.
VSCodium, like VS Code, has support for extensions, embedded Git control, GitHub integration, syntax highlighting, debugging, intelligent code completion, snippets, and more. In other words, for most users there's no difference between using VS Code and VSCodium, and the latter is completely open source!
### What are VS Code extensions?
Extensions allow you to add capabilities to VS Code or VSCodium. You can install extensions in the GUI or from a terminal.
You can also build your own extensions. There are several reasons you might want to learn to build an extension:
1. **To add something:** If a feature you want is missing, you can create an extension to add it.
2. **For fun and learning:** The extension API allows you to explore how VSCodium works, which is a fun thing to do.
3. **To improve your skills:** Creating an extension enhances your programming skills.
4. **For fame:** Creating an extension that is useful to others can increase your public profile.
### Install the tools
Before you begin, you must already have [Node.js][5], [npm][6], and VS Code or [VSCodium][4] installed.
To generate an extension, you will also need the following tools: [Yeoman][7], an open source client-side scaffolding tool that helps you kickstart new projects, and [vscode-generator-code][8], a Yeoman generator created by the VS Code team.
### Build an extension
In this tutorial, you will build an extension that initializes a Docker image for an application.
#### Generate an extension skeleton
To install and run the Yeoman generator globally, enter the following in a command prompt or terminal:
```
`npm install -g yo generator-code`
```
Navigate to the folder where you want to generate the extension, type the following command, and hit **Enter**:
```
`yo code`
```
At the prompt, you must answer some questions about your extension:
* **What type of extension do you want to create?** Choose one of the options by using the Up and Down arrows. In this article, I will explain only the first one, **New Extension (TypeScript)**.
* **What's the name of your extension?** Enter the name of your extension. Mine is called **initdockerapp**. (I am sure you will have a better name.)
* **What's the identifier of your extension?** Leave this as it is.
* **What's the description of your extension?** Write something about your extension (you can fill this in or edit it later, too).
* **Initialize a Git repository?** This initializes a Git repository, and you can add `set-remote` later.
* **Which package manager to use?** You can choose yarn or npm; I will use npm.
Hit the **Enter** key, and it will start installing the required dependencies. And finally:
> "Your extension **initdockerapp** has been created!"
Excellent!
### Check the project's structure
Examine what you generated and the project structure. Navigate to the new folder and type `cd initdockerapp` in your terminal.
Once you are in, type `code .`. It will open in your editor and look something like this:
![Project file structure in VSCodium][9]
(Hussain Ansari, [CC BY-SA 4.0][10])
The two most important files to pay attention to are `package.json` and `extension.ts` inside the `src` folder.
#### package.json
First, look at `package.json`, which should look something like this:
```
{
        "name": "initdockerapp",
        "displayName": "initdockerapp",
        "description": "",
        "version": "0.0.1",
        "engines": {
                "vscode": "^1.44.0"
        },
        "categories": [
                "Other"
        ],
        "activationEvents": [
                "onCommand:initdockerapp.initialize"
        ],
        "main": "./out/extension.js",
        "contributes": {
                "commands": [
                        {
                                "command": "initdockerapp.initialize",
                                "title": "Initialize A Docker Application"
                        }
                ]
        },
        "scripts": {
                "vscode:prepublish": "npm run compile",
                "compile": "tsc -p ./",
                "lint": "eslint src --ext ts",
                "watch": "tsc -watch -p ./",
                "pretest": "npm run compile &amp;&amp; npm run lint",
                "test": "node ./out/test/runTest.js"
        },
        "devDependencies": {
                "@types/vscode": "^1.44.0",
                "@types/glob": "^7.1.1",
                "@types/mocha": "^7.0.2",
                "@types/node": "^13.11.0",
                "eslint": "^6.8.0",
                "@typescript-eslint/parser": "^2.26.0",
                "@typescript-eslint/eslint-plugin": "^2.26.0",
                "glob": "^7.1.6",
                "mocha": "^7.1.1",
                "typescript": "^3.8.3",
                "vscode-test": "^1.3.0"
        }
}
```
If you are a Node.js developer, some of this might look familiar since `name`, `description`, `version`, and `scripts` are common parts of a Node.js project.
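Because the generator wires up the `scripts` block shown above, you can exercise the build and checks from a terminal right away. A quick sketch, run from the extension's root folder:
```
# compile the TypeScript sources once (output goes to ./out, which "main" points at)
npm run compile

# or recompile automatically on every change while you develop
npm run watch

# run the linter defined in the scripts block
npm run lint

# compile, lint, and then run the extension tests ("pretest" runs first)
npm test
```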
There are a few sections that are very important.
* `engines`: States which version of VSCodium the extension will support
* `categories`: Sets the extension type; you can choose from Languages, Snippets, Linters, Themes, Debuggers, Formatters, Keymaps, and Other
* `contributes`: A list of commands that can be used to run with your extension
* `main`: The entry point of your extension
* `activationEvents`: Specifies when the activation event happens. Specifically, this dictates when the extension will be loaded into your editor. Extensions are lazy-loaded, so they aren't activated until an activation event occurs
#### src/extension.ts
Next, look at `src/extension.ts`, which should look something like this:
```
// The module 'vscode' contains the VSCodium extensibility API
// Import the module and reference it with the alias vscode in your code below
import * as vscode from "vscode";
const fs = require("fs");
const path = require("path");
// this method is called when your extension is activated
// your extension is activated the very first time the command is executed
export function activate(context: vscode.ExtensionContext) {
        // Use the console to output diagnostic information (console.log) and errors (console.error)
        // This line of code will only be executed once when your extension is activated
        console.log('Congratulations, your extension "initdockerapp" is now active!');
       
        // The command has been defined in the package.json file
        // Now provide the implementation of the command with registerCommand
        // The commandId parameter must match the command field in package.json
        let disposable = vscode.commands.registerCommand('initdockerapp.initialize', () => {
                // The code you place here will be executed every time your command is executed
                let fileContent =`
                FROM node:alpine
                WORKDIR /usr/src/app
                COPY package.json .
                RUN npm install
               
                COPY . .
               
                EXPOSE 3000
                CMD ["npm", "start"]
                `;
               
                fs.writeFile(path.join(vscode.workspace.rootPath, "Dockerfile"), fileContent, (err:any) => {
                        if (err) {
                                return vscode.window.showErrorMessage("Failed to initialize docker file!");
                        }
                        vscode.window.showInformationMessage("Dockerfile has been created!");
                });
        });
        context.subscriptions.push(disposable);
}
// this method is called when your extension is deactivated
export function deactivate() {}
```
This is where you write the code for your extension. There's already some auto-generated code, which I'll break down.
Notice that the name `initdockerapp.initialize` inside `vscode.commands.registerCommand` is the same as the command in `package.json`. It takes two parameters:
1. The name of the command to register
2. A function that executes the command
The other function to note is `fs.writeFile`, which you wrote inside the `vscode.commands.registerCommand` callback. This creates a Dockerfile inside your project root and writes the instructions for building a Docker image into it.
### Debug the extension
Now that you've written the extension, it's time to debug it. Click the **Run** menu and select **Start Debugging** (or just press **F5**) to open a debugging window.
Open the project inside the debugging window by clicking on either the **Add Folder** or the **Clone Repository** button.
Next, open a command panel with **Ctrl+Shift+P** (on macOS, substitute the Command key for Ctrl) and run **Initialize A Docker Application**.
* The first time you run this command, the activate function has not been executed since VSCodium was launched. Therefore, activate is called, and the activate function registers the command.
* If the command has already been registered, then it executes.
You'll see a message in the lower-right corner that says: "Dockerfile has been created!" This created a Dockerfile with some pre-defined code that looks something like this:
![Running the new extension command][11]
(Hussain Ansari, [CC BY-SA 4.0][10])
### Summary
There are many helpful APIs that will help you create the extensions you want to build. The VS Code extension API has many other powerful methods you can use.
You can learn more about VS Code APIs in the VS Code Extension API documentation.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/vs-code-extension
作者:[Ashique Hussain Ansari][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/uidoyen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://code.visualstudio.com/
[3]: https://code.visualstudio.com/license
[4]: https://vscodium.com/
[5]: https://nodejs.org/en/
[6]: https://www.npmjs.com/
[7]: https://yeoman.io/
[8]: https://github.com/Microsoft/vscode-generator-code
[9]: https://opensource.com/sites/default/files/uploads/vscode-tree.png (Project file structure in VSCodium)
[10]: https://creativecommons.org/licenses/by-sa/4.0/
[11]: https://opensource.com/sites/default/files/uploads/vscode-run-command.png (Running the new extension command)


@ -1,140 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How I stream video with OBS and WebSockets)
[#]: via: (https://opensource.com/article/20/6/obs-websockets-streaming)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
How I stream video with OBS and WebSockets
======
Take control of your streaming with these open source supporting tools
that simplify WebSockets.
![An old-fashioned video camera][1]
[OBS][2] is one of the staples of live streaming videos now. It is the preferred software for streaming to Twitch, one of the most popular live video sites around. There are some really nice add-ons to allow a streamer to control things from their phone or another screen without disrupting the running video. It turns out, it is really easy to build your own control panel using [Node-RED][3] and the [obs-websockets][4] plugin.
![My OBS Control Dashboard][5]
My OBS control dashboard
I know what many of you are thinking—"He said WebSockets and easy in the same sentence?" Many people have had difficulty setting up and using WebSockets, which allow for bi-directional communication over a single connection via a web server. Node-RED has built-in support for WebSockets and is the part that makes this easy, at least compared to writing your own client/server.
Before starting, make sure you have OBS installed and configured. Start by downloading and installing the [latest stable release of the obs-websockets][6] plugin. For this article, the default settings are fine, but I strongly recommend following the instructions for securing obs-websockets in the future.
Next, [download and install Node-RED][7], either on the same system or on a different one (like a Raspberry Pi). Again, the default installation is fine for our purposes, but it would be wise to secure the installation following the directions on their site.
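If you prefer the command line over the download page, one commonly documented route is a global npm install. A rough sketch, assuming Node.js and npm are already on the machine (see the linked getting-started guide for other options):
```
# install Node-RED globally (--unsafe-perm is often needed for the native module builds)
sudo npm install -g --unsafe-perm node-red

# start it; the editor is then reachable at http://localhost:1880
node-red
```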
Now for the fun parts. Start Node-RED and open up the web interface (by default at <http://localhost:1880>), and you have a blank canvas. Open up the "hamburger" menu on the right and select "Manage Palette." Then click on the "Install" tab and search for the "node-red-contrib-dashboard" and "node-red-contrib-rbe" modules.
Once those are installed, click on the right-hand list and drag-and-drop the following blocks to the canvas:
* 1 Websocket Out
* 1 Websocket In
* 1 Debug
* 1 Inject
* 1 Switch
* 1 Change
* 2 JSON
* 1 Catch
Connect them in the following orders:
```
Inject->Button->Change->JSON->Websocket Out
Websocket In->JSON->Switch->RBE->Debug
Catch->Debug
```
![The basic flows][8]
The basic flows
When the button is pushed (or the Inject node sends a timestamp), a payload is sent through the change node, converted from a JSON object to a string, then sent to the WebSocket Out node. When a message is received by the WebSocket In node, it is converted to a JSON object, and if it is not a duplicate, sent to the Debug node for output. And the Catch node will catch any errors and put them into the debug panel.
What is in that payload? Let's set everything up and find out.
First, double click the `button` to open the settings dialog. Start by changing the payload to "JSON" using the drop-down menu. In the field, add the following:
```
{"request-type":"GetVersion"}
```
Enable the checkbox for "If msg arrives on input, emulate a button click" and click Done to close the button config. When a message comes from the Inject node, or if the button is pressed in the UI, it will send the JSON payload to the next node.
![Setting up the button][9]
Setting up the button
Now open up the Change node. We want to set `msg.payload.message-id` to `msg._msgid` by changing the first field from `payload` to `payload.message-id` and then using the drop-down on the second field to change the type from `String` to `msg.`, then we will put `_msgid` in the field. This copies the unique message ID to the JSON object payload so that each request has a unique ID for tracking.
This is then sent to the JSON node to convert from a JSON object to a string, and then passed to the Websocket Out node. Open up the Websocket Out node to configure the connection to OBS. First, change the `Type` to `Connect to` and then click the pencil icon to create a new connection URL. Set that to `ws://OBSMachine:4444/` and close the dialog to save. `OBSMachine` is the name of the machine OBS and obs-websocket are running on. For example, if Node-RED is running on the same machine, this would be `ws://localhost:4444`, and if it is on a machine named "luxuria.local" then it would be `ws://luxuria.local:4444`. Close and update the Websocket Out node. This sends the payload text string to the WebSocket in OBS.
![Websocket Out Node configuration][10]
Websocket Out Node configuration
On to the WebSocket In flow! Open the WebSocket In node, and set it to a `Type` of `Connect to` and the URL to the connection we defined before (it should auto-fill). Next in line is the second JSON node, which we can leave alone. This accepts output from OBS and converts it into a payload object.
Next, we will filter the regular heartbeat and status updates from everything else. Open up the switch and set the "Property" value to `payload["update-type"]`. Now select `Is Not Null` from the drop-down below it. Click `+` to add a second option and select `otherwise` from the drop-down.
![Switch Node configuration][11]
Switch Node configuration
Connect the new output on the switch directly to the Debug node input.
The RBE node, which will filter out duplicates, needs to be told what field to watch for. Since it should be connected to the output from the switch that sends nothing but status updates, this is important, as obs-websocket is sending updates every few seconds. By default, the RBE compares the entire payload object, which will constantly be changing. Open up the RBE Node, and change the `Property` from `payload` to `payload.streaming`. If the `streaming` value of the payload changes, then pass the message through; otherwise, discard it.
The final step is to change the Debug node output from `msg.payload` to the complete msg object. This allows us to see the entire object, which sometimes has useful information outside the `payload`.
Now, click `Deploy` to activate the changes. Hopefully, the WebSocket nodes will have a green "Connected" message underneath them. If they are red or yellow, the connection URL is likely incorrect and needs to be updated, or the connection is being blocked. Make sure that port 4444 on the remote machine is open and available, and that OBS is running!
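If the nodes stay red or yellow, a couple of quick shell checks can tell you whether the problem is the network path or obs-websocket itself. A small diagnostic sketch, using the example hostname from earlier (substitute your own):
```
# from the Node-RED machine: can port 4444 on the OBS machine be reached at all?
nc -zv luxuria.local 4444

# on the OBS machine: is anything actually listening on port 4444?
ss -tlnp | grep 4444
```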
Without the RBE node filtering on the `streaming` value, the debug panel (the bug icon on the right of the canvas) would be filling with Heartbeat messages about now. Click the button to the left of the Inject node to send a signal that will simulate a button click. If all is well, you should see an object arrive that has a listing of all the things `obs-websocket` can do.
![The response to "GetVersion"][12]
The response to "GetVersion"
Now open up `http://localhost:1880/ui` in another tab or window. It should show a single button. Press it! The debug panel should show the same information as before.
Congrats! You have sent your first (and hopefully not last) WebSocket message to OBS!
This is just the beginning of what can be done with `obs-websockets` and Node-RED. Everything that is supported is documented in the protocol.md file in the GitHub repository for obs-websockets. With a little experimentation, you can create a full-featured control panel to start and stop streaming, change scenes, and a whole lot more. If you are like me, you'll have all kinds of controls set up before you know it.
![OBS Websocket][13]
I may have gotten a little mad with power.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/obs-websockets-streaming
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_film.png?itok=aElrLLrw (An old-fashioned video camera)
[2]: https://obsproject.com/
[3]: https://nodered.org/
[4]: https://github.com/Palakis/obs-websocket
[5]: https://opensource.com/sites/default/files/uploads/obscontrol-img1.png.jpg (My OBS Control Dashboard)
[6]: https://github.com/palakis/obs-websocket/releases
[7]: https://nodered.org/docs/getting-started/
[8]: https://opensource.com/sites/default/files/uploads/obscontrol-img2.png.jpg (The basic flows)
[9]: https://opensource.com/sites/default/files/uploads/obscontrol-img3.png.jpg (Setting up the button)
[10]: https://opensource.com/sites/default/files/uploads/obscontrol-img4.png.jpg (Websocket Out Node configuration)
[11]: https://opensource.com/sites/default/files/uploads/obscontrol-img5.png.jpg (Switch Node configuration)
[12]: https://opensource.com/sites/default/files/uploads/obscontrol-img6.png.jpg (The response to "GetVersion")
[13]: https://opensource.com/sites/default/files/uploads/obscontrol-img7.png.jpg (OBS Websocket)


@ -0,0 +1,488 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora 32: Simple Local File-Sharing with Samba)
[#]: via: (https://fedoramagazine.org/fedora-32-simple-local-file-sharing-with-samba/)
[#]: author: (da2ce7 https://fedoramagazine.org/author/da2ce7/)
Fedora 32: Simple Local File-Sharing with Samba
======
![][1]
Sharing files with Fedora 32 using Samba is cross-platform, convenient, reliable, and performant.
### What is Samba?
[Samba][2] is a high-quality implementation of the [Server Message Block protocol (SMB)][3]. Originally developed by Microsoft for connecting Windows computers together over local area networks, it is now extensively used for internal network communications.
Apple used to maintain its own independent file-sharing protocol called “[Apple Filing Protocol (**AFP**)][4]”, however in [recent times][5], it has also switched to SMB.
**In this guide we provide the minimal instructions to enable:**
* Public Folder Sharing (Both Read Only and Read Write)
* User Home Folder Access
```
Note about this guide: The convention '~]$' will be used for a local user command prompt, and '~]#' for a superuser prompt.
```
### Public Sharing Folder
Having a shared public place where authenticated users on an internal network can access files, or even modify and change files if they are given permission, can be very convenient. This part of the guide walks through the process of setting up a shared folder, ready for sharing with Samba.
```
Please Note: This guide assumes the public sharing folder is on a Modern Linux Filesystem; other filesystems such as NTFS or FAT32 will not work. Samba uses POSIX Access Control Lists (ACLs).
For those who wish to learn more about Access Control Lists, please consider reading the documentation: "Red Hat Enterprise Linux 7: System Administrator's Guide: Chapter 5. Access Control Lists", as it likewise applies to Fedora 32.
In general, this is only an issue for anyone who wishes to share a drive or filesystem that was created outside of the normal Fedora installation process (such as an external hard drive).
It is possible for Samba to share filesystem paths that do not support POSIX ACLs, however this is out of the scope of this guide.
```
#### Create Folder
For this guide the _**/srv/public/**_ folder for sharing will be used.
> The _/srv/_ directory contains site-specific data served by a Red Hat Enterprise Linux system. This directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data that only pertains to a specific user should go in the _/home/_ directory.
>
> [Red Hat Enterprise Linux 7, Storage Administration Guide: Chapter 2. File System Structure and Maintenance: 2.1.1.8. The /srv/ Directory][6]
```
Make the Folder (will provide an error if the folder already exists).
~]# mkdir --verbose /srv/public
Verify folder exists:
~]$ ls --directory /srv/public
Expected Output:
/srv/public
```
#### Set Filesystem Security Context
To have _read and write_ access to the public folder the _public_content_rw_t_ security context will be used for this guide. Those wanting _read only_ may use: _public_content_t_.
> Label files and directories that have been created with the _public_content_rw_t_ type to share them with read and write permissions through vsftpd. Other services, such as Apache HTTP Server, Samba, and NFS, also have access to files labeled with this type. Remember that booleans for each service must be enabled before they can write to files labeled with this type.
>
> [Red Hat Enterprise Linux 7, SELinux Users and Administrators Guide: Chapter 16. File Transfer Protocol: 16.1. Types: public_content_rw_t][7]
Add _/srv/public_ as _“public_content_rw_t”_ in the systems local filesystem security context customization registry:
```
Add the new filesystem security context:
~]# semanage fcontext --add --type public_content_rw_t "/srv/public(/.*)?"
Verify the new filesystem security context:
~]# semanage fcontext --locallist --list
Expected Output: (should include)
/srv/public(/.*)? all files system_u:object_r:public_content_rw_t:s0
```
Now that the folder has been added to the local system's filesystem security context registry, the **restorecon** command can be used to restore the context to the folder:
```
Restore security context to the /srv/public folder:
~]# restorecon -Rv /srv/public
Verify security context was correctly applied:
~]$ ls --directory --context /srv/public/
Expected Output:
unconfined_u:object_r:public_content_rw_t:s0 /srv/public/
```
#### User Permissions
##### Creating the Sharing Groups
To allow a user to have either _read only_ or _read and write_ access to the public share folder, create two new groups that govern these privileges: _public_readonly_ and _public_readwrite_.
User accounts can be granted _read only_ or _read and write_ access by adding them to the respective group (and allowing login via Samba by creating an smb password). This process is demonstrated in the section: “Test Public Sharing (localhost)”.
```
Create the public_readonly and public_readwrite groups:
~]# groupadd public_readonly
~]# groupadd public_readwrite
Verify successful creation of groups:
~]$ getent group public_readonly public_readwrite
Expected Output: (Note: the x:1...: numbers will probably differ on your system)
public_readonly:x:1009:
public_readwrite:x:1010:
```
##### Set Permissions
Now set the appropriate user permissions to the public shared folder:
```
Set User and Group Permissions for Folder:
~]# chmod --verbose 2700 /srv/public
~]# setfacl -m group:public_readonly:r-x /srv/public
~]# setfacl -m default:group:public_readonly:r-x /srv/public
~]# setfacl -m group:public_readwrite:rwx /srv/public
~]# setfacl -m default:group:public_readwrite:rwx /srv/public
Verify user permissions have been correctly applied:
~]$ getfacl --absolute-names /srv/public
Expected Output:
file: /srv/public
owner: root
group: root
flags: -s-
user::rwx
group::---
group:public_readonly:r-x
group:public_readwrite:rwx
mask::rwx
other::---
default:user::rwx
default:group::---
default:group:public_readonly:r-x
default:group:public_readwrite:rwx
default:mask::rwx
default:other::---
```
### Samba
#### Installation
```
~]# dnf install samba
```
#### Hostname (systemwide)
Samba will use the name of the computer when sharing files; it is good to set a hostname so that the computer can be found easily on the local network.
```
View Your Current Hostname:
~]$ hostnamectl status
```
If you wish to change your hostname to something more descriptive, use the command:
```
Modify your system's hostname (example):
~]# hostnamectl set-hostname "simple-samba-server"
```
```
For a more complete overview of the hostnamectl command, please read the previous Fedora Magazine Article: "How to set the hostname on Fedora".
```
#### Firewall
Configuring your firewall is a complex and involved task. This guide will just have the most minimal manipulation of the firewall to enable Samba to pass through.
```
For those who are interested in learning more about configuring firewalls; please consider reading the documentation: "Red Hat Enterprise Linux 8: Securing networks: Chapter 5. Using and configuring firewall", as it generally applies to Fedora 32 as well.
```
```
Allow Samba access through the firewall:
~]# firewall-cmd --add-service=samba --permanent
~]# firewall-cmd --reload
Verify Samba is included in your active firewall:
~]$ firewall-cmd --list-services
Output (should include):
samba
```
#### Configuration
##### Remove Default Configuration
The stock configuration that is included with Fedora 32 is not required for this simple guide. In particular it includes support for sharing printers with Samba.
For this guide make a backup of the default configuration and create a new configuration file from scratch.
```
Create a backup copy of the existing Samba Configuration:
~]# cp --verbose --no-clobber /etc/samba/smb.conf /etc/samba/smb.conf.fedora0
Empty the configuration file:
~]# > /etc/samba/smb.conf
```
##### Samba Configuration
```
Please Note: This configuration file does not contain any global definitions; the defaults provided by Samba are good for purposes of this guide.
```
```
Edit the Samba Configuration File with Vim:
~]# vim /etc/samba/smb.conf
```
Add the following to _/etc/samba/smb.conf_ file:
```
# smb.conf - Samba Configuration File
# The name of the share is in square brackets [],
# this will be shared as //hostname/sharename
# There are three exceptions:
# the [global] section;
# the [homes] section, that is dynamically set to the username;
# the [printers] section, same as [homes], but for printers.
# path: the physical filesystem path (or device)
# comment: a label on the share, seen on the network.
# read only: disable writing, defaults to true.
# For a full list of configuration options,
# please read the manual: "man smb.conf".
[global]
[public]
path = /srv/public
comment = Public Folder
read only = No
```
#### Write Permission
By default, Samba is not granted permission to modify any file on the system. Modify the system's security configuration to allow Samba to modify any filesystem path that has the security context of _public_content_rw_t_.
For convenience, Fedora has a built-in SELinux Boolean for this purpose called: _smbd_anon_write_, setting this to _true_ will enable Samba to write in any filesystem path that has been set to the security context of _public_content_rw_t_.
Those who wish Samba to have only read-only access to their public sharing folder may choose to skip this step and not set this boolean.
```
There are many more SELinux booleans available for Samba. For those who are interested, please read the documentation: "Red Hat Enterprise Linux 7: SELinux User's and Administrator's Guide: 15.3. Samba Booleans", as it also applies to Fedora 32 without any adaptation.
```
```
Set SELinux Boolean allowing Samba to write to filesystem paths set with the security context public_content_rw_t:
~]# setsebool -P smbd_anon_write=1
Verify bool has been correctly set:
$ getsebool smbd_anon_write
Expected Output:
smbd_anon_write --> on
```
### Samba Services
The Samba service is divided into two parts; the part we need to start for this guide is the smb service.
#### Samba smb Service
The Samba “Server Message Block” (SMB) service is for sharing files and printers over the local network.
Manual: “[smbd server to provide SMB/CIFS services to clients][8]”
#### Enable and Start Services
```
For those who are interested in learning more about configuring, enabling, disabling, and managing services, please consider studying the documentation: "Red Hat Enterprise Linux 7: System Administrator's Guide: 10.2. Managing System Services".
```
```
Enable and start the smb service:
~]# systemctl enable smb.service
~]# systemctl start smb.service
Verify smb service:
~]# systemctl status smb.service
```
#### Test Public Sharing (localhost)
To demonstrate granting and removing access to the public shared folder, create a new user called _samba_test_user_. This user will first be granted permission to read the public folder, and then access to read and write the public folder.
The same process demonstrated here can be used to grant access to your public shared folder to other users of your computer.
The _samba_test_user_ will be created as a locked user account, disallowing normal login to the computer.
```
Create 'samba_test_user', and lock the account.
~]# useradd samba_test_user
~]# passwd --lock samba_test_user
Set a Samba Password for this Test User (such as 'test'):
~]# smbpasswd -a samba_test_user
```
##### Test Read-Only Access to the Public Share
```
Add samba_test_user to the public_readonly group:
~]# gpasswd --add samba_test_user public_readonly
Login to the local Samba Service (public folder):
~]$ smbclient --user=samba_test_user //localhost/public
First, the ls command should succeed,
Second, the mkdir command should not work,
and finally, exit:
smb: \> ls
smb: \> mkdir error
smb: \> exit
Remove samba_test_user from the public_readonly group:
~]# gpasswd --delete samba_test_user public_readonly
```
##### Test Read and Write Access to the Public Share
```
Add samba_test_user to the public_readwrite group:
~]# gpasswd --add samba_test_user public_readwrite
Login to the local Samba Service (public folder):
~]$ smbclient --user=samba_test_user //localhost/public
First, the ls command should succeed,
Second, the mkdir command should work,
Third, the rmdir command should work,
and finally, exit:
smb: \> ls
smb: \> mkdir success
smb: \> rmdir success
smb: \> exit
Remove samba_test_user from the public_readwrite group:
~]# gpasswd --delete samba_test_user public_readwrite
```
After testing is completed, for security, disable the **samba_test_user**'s ability to log in via Samba.
```
Disable samba_test_user login via samba:
~]# smbpasswd -d samba_test_user
```
### Home Folder Sharing
In this last section of the guide, Samba will be configured to share a user's home folder.
For example: if the user bob has been registered with _smbpasswd_, bob's home directory _/home/bob_ would become the share _//server-name/bob_.
This share will only be available to bob, and to no other users.
```
This is a very convenient way of accessing your own local files; however, it naturally carries a security risk.
```
#### Setup Home Folder Sharing
##### Give Samba Permission for Home Folder Sharing
```
Set SELinux Boolean allowing Samba to read and write to home folders:
~]# setsebool -P samba_enable_home_dirs=1
Verify the boolean has been correctly set:
~]$ getsebool samba_enable_home_dirs
Expected Output:
samba_enable_home_dirs --> on
```
##### Add Home Sharing to the Samba Configuration
**Append the following to the system's smb.conf file:**
```
# The home folder dynamically links to the user home.
# If 'bob' user uses Samba:
# The homes section is used as the template for a new virtual share:
# [homes]
# ... (various options)
# A virtual section for 'bob' is made:
# Share is modified: [homes] -> [bob]
# Path is added: path = /home/bob
# Any option within the [homes] section is appended.
# [bob]
# path = /home/bob
# ... (copy of various options)
# here is our share,
# same as is included in the Fedora default configuration.
[homes]
comment = Home Directories
valid users = %S, %D%w%S
browseable = No
read only = No
inherit acls = Yes
```
##### Reload Samba Configuration
```
Tell Samba to reload its configuration:
~]# smbcontrol all reload-config
```
#### Test Home Directory Sharing
```
Switch to samba_test_user and create a folder in its home directory:
~]# su samba_test_user
samba_test_user:~]$ cd ~
samba_test_user:~]$ mkdir --verbose test_folder
samba_test_user:~]$ exit
Enable samba_test_user to login via Samba:
~]# smbpasswd -e samba_test_user
Login to the local Samba Service (samba_test_user home folder):
$ smbclient --user=samba_test_user //localhost/samba_test_user
Test (all commands should complete without error):
smb: \> ls
smb: \> ls test_folder
smb: \> rmdir test_folder
smb: \> mkdir home_success
smb: \> rmdir home_success
smb: \> exit
Disable samba_test_user from logging in via Samba:
~]# smbpasswd -d samba_test_user
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-32-simple-local-file-sharing-with-samba/
作者:[da2ce7][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/da2ce7/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/sambabasics-816x346.png
[2]: https://www.samba.org/samba/
[3]: https://en.wikipedia.org/wiki/Server_Message_Block
[4]: https://en.wikipedia.org/wiki/Apple_Filing_Protocol
[5]: https://appleinsider.com/articles/13/06/11/apple-shifts-from-afp-file-sharing-to-smb2-in-os-x-109-mavericks
[6]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-filesystem#s3-filesystem-srv
[7]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-file_transfer_protocol#sect-Managing_Confined_Services-File_Transfer_Protocol-Types
[8]: https://www.samba.org/samba/docs/current/man-html/smbd.8.html
View File
@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Import functions and variables into Bash with the source command)
[#]: via: (https://opensource.com/article/20/6/bash-source-command)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Import functions and variables into Bash with the source command
======
Source is like a Python import or a Java include. Learn it to expand
your Bash prowess.
![bash logo on green background][1]
When you log into a Linux shell, you inherit a specific working environment. An _environment_, in the context of a shell, means that there are certain variables already set for you, which ensures your commands work as intended. For instance, the [PATH][2] environment variable defines where your shell looks for commands. Without it, nearly everything you try to do in Bash would fail with a **command not found** error. Your environment, while mostly invisible to you as you go about your everyday tasks, is vitally important.
There are many ways to affect your shell environment. You can make modifications in configuration files, such as `~/.bashrc` and `~/.profile`, you can run services at startup, and you can create your own custom commands or script your own [Bash functions][3].
### Add to your environment with source
Bash (along with some other shells) has a built-in command called `source`. And here's where it can get confusing: `source` performs the same function as the command `.` (yes, that's but a single dot), and it's _not_ the same `source` as the `Tcl` command (which may come up on your screen if you type `man source`). The built-in `source` command isn't in your `PATH` at all, in fact. It's a command that comes included as a part of Bash, and to get further information about it, you can type `help source`.
The `.` command is [POSIX][4]-compliant. The `source` command is not defined by POSIX but is interchangeable with the `.` command.
According to Bash `help`, the `source` command executes a file in your current shell. The clause "in your current shell" is significant, because it means it doesn't launch a sub-shell; therefore, whatever you execute with `source` happens within and affects your _current_ environment.
Before exploring how `source` can affect your environment, try `source` on a test file to ensure that it executes code as expected. First, create a simple Bash script and save it as a file called `hello.sh`:
```
#!/usr/bin/env bash
echo "hello world"
```
Using `source`, you can run this script even without setting the executable bit:
```
$ source hello.sh
hello world
```
You can also use the built-in `.` command for the same results:
```
$ . hello.sh
hello world
```
The `source` and `.` commands successfully execute the contents of the test file.
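Before moving on, here is a small, hypothetical experiment (the file name `setvar.sh` and its variable are invented for illustration) that shows the "current shell" behavior described above: running the file as a separate script leaves your shell untouched, while sourcing it does not.
```
$ echo 'GREETING="hello from setvar"' > setvar.sh
$ bash setvar.sh           # runs in a sub-shell, so the variable vanishes with it
$ echo $GREETING

$ source setvar.sh         # runs in the current shell, so the variable persists
$ echo $GREETING
hello from setvar
```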
### Set variables and import functions
You can use `source` to "import" a file into your shell environment, just as you might use the `include` keyword in C or C++ to reference a library or the `import` keyword in Python to bring in a module. This is one of the most common uses for `source`, and it's a common default inclusion in `.bashrc` files to `source` a file called `.bash_aliases` so that any custom aliases you define get imported into your environment when you log in.
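As a concrete illustration, the stanza that many distributions ship in their default `~/.bashrc` looks roughly like this (the exact wording varies between distributions, so treat it as a sketch):
```
# In ~/.bashrc: pull personal aliases into every interactive shell.
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi
```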
Here's an example of importing a Bash function. First, create a function in a file called `myfunctions`. This prints your public IP address and your local IP address:
```
function myip() {
        # Public IP address, as reported by an external service:
        curl http://icanhazip.com
        # Local addresses: list them, strip the prefix length,
        # drop the loopback entries, and trim the whitespace:
        ip addr | grep inet | \
        cut -d"/" -f 1 | \
        grep -v '127\.0' | \
        grep -v '::1' | \
        awk '{$1=$1};1'
}
```
Import the function into your shell:
```
$ source myfunctions
```
Test your new function:
```
$ myip
93.184.216.34
inet 192.168.0.23
inet6 fbd4:e85f:49c:2121:ce12:ef79:0e77:59d1
inet 10.8.42.38
```
### Search for source
When you use `source` in Bash and give it a file name without a slash, Bash first looks for the file in the directories listed in your `PATH`. If it can't find the file there, it falls back to searching your current directory (as long as it isn't running in POSIX mode).
This fallback doesn't happen in all shells, so check your documentation if you're not using Bash.
These are both nice convenience features in Bash. This behavior is surprisingly powerful because it allows you to store common functions in a centralized location on your drive and then treat your environment like an integrated development environment (IDE). You don't have to worry about where your functions are stored, because you know they're in your local equivalent of `/usr/include`, so no matter where you are when you source them, Bash finds them.
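Here is a small, hypothetical illustration of that lookup; it assumes a `~/bin` directory that already exists and is on your `PATH`, which is common but not guaranteed:
```
$ echo 'echo "greet was found via your PATH"' > ~/bin/greet
$ cd /tmp
$ source greet
greet was found via your PATH
```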
For instance, you could create a directory called `~/.local/include` as a storage area for common functions and then put this block of code into your `.bashrc` file:
```
for i in $HOME/.local/include/*;
  do source "$i"
done
```
This "imports" any file containing custom functions in `~/.local/include` into your shell environment.
Bash is the only shell that searches both the current directory and your `PATH` when you use either the `source` or the `.` command.
### Using source for open source
Using `source` or `.` to execute files can be a convenient way to affect your environment while keeping your alterations modular. The next time you're thinking of copying and pasting big blocks of code into your `.bashrc` file, consider placing related functions or groups of aliases into dedicated files, and then use `source` to ingest them.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/bash-source-command
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/17/6/set-path-linux
[3]: https://opensource.com/article/20/6/how-write-functions-bash
[4]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
View File
@ -0,0 +1,194 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Use Microsoft OneDrive in Linux With Rclone Open-Source Tool [For Intermediate to Expert Users])
[#]: via: (https://itsfoss.com/use-onedrive-linux-rclone/)
[#]: author: (Community https://itsfoss.com/author/itsfoss/)
How to Use Microsoft OneDrive in Linux With Rclone Open-Source Tool [For Intermediate to Expert Users]
======
_**Brief: A step-by-step tutorial showing how to use the rclone command line tool to synchronize OneDrive in Linux.**_
There are [several cloud storage services available for Linux][1]. There is [Dropbox][2], which gives 2 GB of free space. You can also use [Mega][3], where you can get 15 GB of free storage.
Microsoft's own cloud storage service, OneDrive, gives 5 GB of free storage to any Microsoft account holder. The one major problem is that, unlike Dropbox and Mega, Microsoft does not provide a desktop client for Linux.
This means that you'll have to resort to using a web browser to access your files in OneDrive, which is not very convenient.
There is a hassle-free GUI application, [Insync][4], that lets you [use OneDrive on Linux easily][5]. But it's premium software, and not everyone will like that.
If you are not afraid of the Linux terminal, let me show you a command line tool, rclone, that you can use to synchronize Microsoft OneDrive in Linux.
![][6]
### What is rclone?
Rclone is an open source command line tool that enables you to synchronize a local Linux directory with various cloud storage services.
With rclone, you can back up files to cloud storage, restore files from cloud storage, mirror cloud data, migrate data between cloud services, and use multiple cloud storage services as a disk.
You can use it with Google Drive, OneDrive, Nextcloud, Amazon S3, and over [40 such cloud services][7].
Rclone is an extensive command line tool, and with so many options, using it can be confusing. This is why I wrote this tutorial to show you how to use rclone with Microsoft OneDrive.
### Sync Microsoft OneDrive in Linux with rclone
Using Rclone in Linux is not that complicated, but it requires some patience and familiarity with the Linux terminal. You need to tweak the configuration a little to make it work. Let's see how to do that.
#### Step 1: Install Rclone
I am [using Ubuntu 20.04][8] in this tutorial, but you should be able to follow it in pretty much any Linux distribution. Only the rclone installation instructions may differ; the rest of the steps remain the same.
In Debian/Ubuntu-based distributions, use:
```
sudo apt install rclone
```
For Arch-based distributions, use:
```
sudo pacman -S rclone
```
For other distributions, please use your distribution's package manager.
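Whichever installation route you take, you can verify that rclone is ready with a quick check (the version string you see will differ from system to system):
```
rclone version
```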
#### Step 2: Adding new remote
Once you have installed rclone successfully, you need to configure it. Enter the following command in the terminal:
```
rclone config
```
If it's your first time using rclone, you have to add a new remote. Select **n** to add a new remote.
![Configuring Rclone][9]
Now you have to enter a name for the remote. You can enter any name here that matches the cloud service so that it is easy to identify. I am using **onedrive**.
![Configuring Rclone ][10]
#### Step 3: Select cloud service you want to sync with rclone
After entering the name and hitting enter, you will see a list of cloud services like Google Cloud Storage, Box, OneDrive, and others.
You have to enter the number of the service you want to use. In this case, it's OneDrive. Make sure you enter the correct number.
![Selecting Cloud Service][11]
As you don't need to enter a client ID or secret ID, hit **Enter** twice.
Next, enter **N** to select **no** for advanced configuration. Of course, if you want to configure something very specific, you can go ahead with **Y**.
![Configuring OneDrive][12]
When you're asked about **Use auto config**, press **Y**.
#### Step 4: Login to OneDrive account
When you enter Y and hit enter, your default browser will open, and you have to log into your Microsoft account there. If it asks for permission, click **Yes**.
![One Drive Logging In][13]
#### Step 5: Enter account type
Now you have to select the account type. For most users, it will be the first one, **OneDrive Personal or Business**. I believe it is personal, so go with 1.
![][14]
After that, you will get a list of drives associated with your account. For the most part, you need to select 0 to select your drive and enter **Y** for yes in the next step.
![][15]
It will ask one last time whether this configuration is okay. Hit **Y** if it is.
![][16]
And then enter **q** to exit the Rclone configuration menu.
![][17]
#### Step 6: Mount OneDrive in the file manager
Create a folder in your home directory where you will mount OneDrive. I will name the folder “OneDrive”. You can name it whatever you want, but please make sure to use your folder's name in the commands.
[Create a new folder with mkdir command][18] in your home directory or wherever you want:
```
mkdir ~/OneDrive
```
Now you have to use the following command:
```
rclone --vfs-cache-mode writes mount onedrive: ~/OneDrive
```
In the above command, “onedrive” is the name of the remote, so you should use the correct name there if yours is different. You can check the name of the remote in step 2 of this tutorial.
![Mounting One Drive][19]
This command will mount OneDrive at the given location and will keep running in the terminal. When you stop the process with `Ctrl+C`, OneDrive will be unmounted.
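Because rclone mounts are FUSE mounts, you can also unmount the folder manually, for example if the terminal session that ran the mount command is gone. This tip is an addition to the tutorial rather than part of it:
```
fusermount -u ~/OneDrive
```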
To mount OneDrive on startup, follow the next step below.
#### Step 7: Mount OneDrive on startup
Every Linux distribution gives you some way to manage startup applications. I am using [Ubuntu's Startup Applications Preferences tool][20] here.
Open “**Startup Applications**” and click “**Add**”. Now, in the command field, enter the following:
```
sh -c "rclone --vfs-cache-mode writes mount onedrive: ~/OneDrive"
```
![Mounting OneDrive On Startup][21]
That's it. Now you can easily use OneDrive on Linux without any hiccups.
As you can see, using OneDrive in Linux with rclone takes some effort. If you want an easy way out, get a GUI tool like [Insync][4] and use OneDrive natively in Linux.
I hope you find this tutorial helpful. If you have any questions or suggestions, we'll be happy to help you out.
### Sumeet
Computer engineer, FOSS lover, lower-level computing enthusiast. Believer in helping others and spreading knowledge. When I get away from the computer (it rarely happens), I paint, read, and watch movies and series. I love the work of Sir Arthur Conan Doyle, J. R. R. Tolkien, and J. K. Rowling. BTW, I use Arch.
--------------------------------------------------------------------------------
via: https://itsfoss.com/use-onedrive-linux-rclone/
作者:[Community][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/cloud-services-linux/
[2]: https://www.dropbox.com/
[3]: https://itsfoss.com/recommends/mega/
[4]: https://itsfoss.com/recommends/insync/
[5]: https://itsfoss.com/use-onedrive-on-linux/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/sync-onedrive-in-linux-rclone.png?ssl=1
[7]: https://rclone.org/#providers
[8]: https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/Configuring-Rclone.png?resize=800%2C298&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Configuring-Rclone-1.png?resize=800%2C303&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Selecting-cloud-service.png?resize=800%2C416&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/Configuring-OneDrive-1.png?resize=800%2C416&ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/One-Drive-logging-in.png?resize=800%2C432&ssl=1
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Configuring-OneDrive-2.png?resize=800%2C430&ssl=1
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Configuring-OneDrive-3.png?resize=800%2C428&ssl=1
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Configuring-One-Drive.png?resize=800%2C426&ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Exiting-Rclone-Configuration.png?resize=800%2C255&ssl=1
[18]: https://linuxhandbook.com/mkdir-command/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/Mounting-one-drive-1.png?fit=800%2C432&ssl=1
[20]: https://itsfoss.com/manage-startup-applications-ubuntu/
[21]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/Mounting-OneDrive-on-startup.png?fit=800%2C499&ssl=1
View File
@ -0,0 +1,137 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How I stream video with OBS and WebSockets)
[#]: via: (https://opensource.com/article/20/6/obs-websockets-streaming)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
How I stream video with OBS and WebSockets
======
> Control your streaming with these open source supporting tools that make WebSockets easier.
![An old-fashioned video camera][1]
[OBS][2] is one of the mainstays of live video streaming today. It is the software of choice for streaming to Twitch, one of the most popular live-video sites around. There are some very nice add-ons that let a broadcaster control things from their phone or another screen without disturbing the running video. As it turns out, it is really easy to build your own control panel with [Node-RED][3] and the [obs-websockets][4] plugin.
![My OBS Control Dashboard][5]
*My OBS Control Dashboard*
I know what many of you are thinking: "He said WebSockets and easy in the same sentence?" Many people have had a hard time setting up and using WebSockets, which allow two-way communication over a single connection to a web server. Node-RED has built-in support for WebSockets, and that is a large part of what makes this easy, at least compared to writing your own client/server.
Before you start, make sure you have OBS installed and configured. Start by downloading and installing the [latest stable release of the obs-websockets][6] plugin. The default settings are fine for this article, but I strongly recommend that you follow the instructions for securing obs-websockets afterward.
Next, [download and install Node-RED][7], either on the same system or on a different one (such as a Raspberry Pi). Again, the default installation is enough for this article, but it is best to follow the security instructions on their website.
Now for the fun part. Start Node-RED and open the web interface (at <http://localhost:1880> by default), and you have a blank canvas. Open the "hamburger" menu on the right and select "Manage palette". Then click the "Install" tab and search for the `node-red-contrib-dashboard` and `node-red-contrib-rbe` modules.
Once those modules are installed, click the list on the right and drag the following nodes onto the canvas:
* 1 Websocket Out
* 1 Websocket In
* 1 Debug
* 1 Inject
* 1 Switch
* 1 Change
* 2 JSON
* 1 Catch
Connect them in the following order:
```
Inject->Button->Change->JSON->Websocket Out
Websocket In->JSON->Switch->RBE->Debug
Catch->Debug
```
![The basic flows][8]
*The basic flows*
When the "Button" is pressed (or the "Inject" node sends a timestamp), the payload is sent through the "Change" node, converted from a JSON object into a string, and then sent to the "WebSocket Out" node. When the "WebSocket In" node receives a message, it is converted into a JSON object and, if it is not a duplicate, sent to the "Debug" node for output. The "Catch" node catches any errors and puts them into the "Debug" panel.
What is in that payload? Let's set everything up and find out.
First, double-click the "Button" to open its settings dialog. Start by using the drop-down menu to change the payload to "JSON". Add the following in that field:
```
{"request-type":"GetVersion"}
```
Enable the "If msg arrives on input, emulate a button click" checkbox, then click "Done" to close the "Button" configuration. When a message arrives from the "Inject" node, or when the "Button" in the UI is pressed, the JSON payload is sent to the next node.
![Setting up the button][9]
*Setting up the "Button"*
Now open the "Change" node. We want to set `msg.payload.message-id` to `msg._msgid` by changing the first field from `payload` to `payload.message-id`, then using the drop-down menu in the second field to change the type from `String` to `msg.`, and putting `_msgid` in that field. This copies the unique message ID into the payload of the JSON object, so that each request has a unique ID for tracking.
The payload is then sent to the "JSON" node, so that the JSON object is converted into a string, and then passed to the "WebSocket Out" node. Open the "WebSocket Out" node and configure the connection to OBS. First, change `Type` to `Connect to`, then click the pencil icon to create a new connection URL. Set it to `ws://OBSMachine:4444/`, then close the dialog to save it. `OBSMachine` is the name of the machine OBS and obs-websocket are running on. For example, if Node-RED is running on the same machine, this is `ws://localhost:4444`; if it is on a machine named `luxuria.local`, it is `ws://luxuria.local:4444`. Close and update the "WebSocket Out" node. This sends the payload text string to the WebSocket in OBS.
![Websocket Out Node configuration][10]
*"Websocket Out" node configuration*
On to the "WebSocket In" flow! Open the "WebSocket In" node and set its `Type` to `Connect to` with the URL of the connection defined earlier (it should auto-fill). Next comes the second "JSON" node, which can be left alone; it accepts the output from OBS and converts it into a payload object.
Next, filter out the routine heartbeat and status updates. Open the "Switch" node and set the `Property` value to `payload["update-type"]`. Now select `Is Not Null` from the drop-down menu below it. Click `+` to add a second option, and select `otherwise` from the drop-down menu.
![Switch Node configuration][11]
*"Switch" node configuration*
Connect the new output on the "Switch" node directly to the input of the "Debug" node.
The RBE node will filter out duplicates, and it needs to be told which field to watch. Since it should be connected to the output of the "Switch" node that only sends status updates, this matters, because obs-websocket sends updates every few seconds. By default, RBE compares the entire payload object, which will constantly change. Open the RBE node and change the `Property` from `payload` to `payload.streaming`. If the `streaming` value of the `payload` changes, the message is passed through; otherwise, it is discarded.
The last step is to change the output of the "Debug" node from `msg.payload` to the complete `msg` object. This lets us see the entire object, which sometimes has useful information beyond the `payload`.
Now, click "Deploy" to activate the changes. Hopefully there will be a green `Connected` message under the WebSocket nodes. If they are red or yellow, the connection URL is probably incorrect and needs to be updated, or the connection is being blocked. Make sure that port 4444 on the remote machine is open and reachable, and that OBS is running!
Without the RBE node filtering on the `streaming` value, the debug panel (click the "bug" icon on the right side of the canvas) would be filling up with heartbeat messages by about now. Click the button to the left of the "Inject" node to send a signal that simulates a button click. If all is well, you should see an object arrive that has a list of everything `obs-websocket` can do.
![The response to "GetVersion"][12]
*The response to "GetVersion"*
Now open `http://localhost:1880/ui` in another tab or window. It should show a single button. Press it! The debug panel should show the same information as before.
Congratulations, you have sent your first (and hopefully not your last) WebSocket message!
This is just the beginning of what you can do with obs-websockets and Node-RED. The complete documentation of what is supported is in the `protocol.md` file in the obs-websockets GitHub repository. With a little experimentation, you can create a fully functional control panel to start and stop streaming, change scenes, and much more. If you are like me, you will have all sorts of controls set up before you know it.
![OBS Websocket][13]
*So much power makes me a little crazy*
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/obs-websockets-streaming
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_film.png?itok=aElrLLrw (An old-fashioned video camera)
[2]: https://obsproject.com/
[3]: https://nodered.org/
[4]: https://github.com/Palakis/obs-websocket
[5]: https://opensource.com/sites/default/files/uploads/obscontrol-img1.png.jpg (My OBS Control Dashboard)
[6]: https://github.com/palakis/obs-websocket/releases
[7]: https://nodered.org/docs/getting-started/
[8]: https://opensource.com/sites/default/files/uploads/obscontrol-img2.png.jpg (The basic flows)
[9]: https://opensource.com/sites/default/files/uploads/obscontrol-img3.png.jpg (Setting up the button)
[10]: https://opensource.com/sites/default/files/uploads/obscontrol-img4.png.jpg (Websocket Out Node configuration)
[11]: https://opensource.com/sites/default/files/uploads/obscontrol-img5.png.jpg (Switch Node configuration)
[12]: https://opensource.com/sites/default/files/uploads/obscontrol-img6.png.jpg (The response to "GetVersion")
[13]: https://opensource.com/sites/default/files/uploads/obscontrol-img7.png.jpg (OBS Websocket)