diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 88d163d67cebf..dd4ecb79e3a0b 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,4 +1,4 @@ - +- [为文档做贡献](#为文档做贡献) +- [README.md 本地化](#readmemd-本地化) + + @@ -46,7 +53,7 @@ Before you start, install the dependencies. Clone the repository and navigate to --> 开始前,先安装这些依赖。克隆本仓库并进入对应目录: -``` +```bash git clone https://github.com/kubernetes/website.git cd website ``` @@ -57,8 +64,8 @@ The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/doc Kubernetes 网站使用的是 [Docsy Hugo 主题](https://github.com/google/docsy#readme)。 即使你打算在容器中运行网站,我们也强烈建议你通过运行以下命令来引入子模块和其他开发依赖项: -``` -# pull in the Docsy submodule +```bash +# 引入 Docsy 子模块 git submodule update --init --recursive --depth 1 ``` @@ -72,15 +79,23 @@ To build the site in a container, run the following to build the container image 要在容器中构建网站,请通过以下命令来构建容器镜像并运行: -``` +```bash make container-image make container-serve ``` -启动浏览器,打开 http://localhost:1313 来查看网站。 +如果您看到错误,这可能意味着 hugo 容器没有足够的可用计算资源。 +要解决这个问题,请增加机器([MacOSX](https://docs.docker.com/docker-for-mac/#resources) +和 [Windows](https://docs.docker.com/docker-for-windows/#resources))上 +Docker 允许的 CPU 和内存使用量。 + + +启动浏览器,打开 来查看网站。 当你对源文件作出修改时,Hugo 会更新网站并强制浏览器执行刷新操作。 上述命令会在端口 1313 上启动本地 Hugo 服务器。 -启动浏览器,打开 http://localhost:1313 来查看网站。 +启动浏览器,打开 来查看网站。 当你对源文件作出修改时,Hugo 会更新网站并强制浏览器执行刷新操作。 + +## 构建 API 参考页面 + + +位于 `content/en/docs/reference/kubernetes-api` 的 API 参考页面是根据 Swagger 规范构建的,使用 。 + +要更新新 Kubernetes 版本的参考页面,请执行以下步骤: + + +1. 拉取 `api-ref-generator` 子模块: + + ```bash + git submodule update --init --recursive --depth 1 + ``` + + +2. 更新 Swagger 规范: + + ```bash + curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json + ``` + + +3. 在 `api-ref-assets/config/` 中,调整文件 `toc.yaml` 和 `fields.yaml` 以反映新版本的变化。 + + +4. 接下来,构建页面: + + ```bash + make api-reference + ``` + + + 您可以通过从容器映像创建和提供站点来在本地测试结果: + + ```bash + make container-image + make container-serve + ``` + + + 在 Web 浏览器中,打开 查看 API 参考。 + + +5. 当所有新的更改都反映到配置文件 `toc.yaml` 和 `fields.yaml` 中时,使用新生成的 API 参考页面创建一个 Pull Request。 + ## 故障排除 @@ -135,18 +216,24 @@ If you run `make serve` on macOS and receive the following error: 如果在 macOS 上运行 `make serve` 收到以下错误: -``` +```bash ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files make: *** [serve] Error 1 ``` + 试着查看一下当前打开文件数的限制: `launchctl limit maxfiles` -然后运行以下命令(参考https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c): + +然后运行以下命令(参考 ): -``` +```shell #!/bin/sh # These are the original gist links, linking to my gists now. 
@@ -165,6 +252,9 @@ sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist ``` + 这适用于 Catalina 和 Mojave macOS。 # 参与 SIG Docs 工作 @@ -184,20 +275,21 @@ You can also reach the maintainers of this project at: 你也可以通过以下渠道联系本项目的维护人员: -- [Slack](https://kubernetes.slack.com/messages/sig-docs) [加入Slack](https://slack.k8s.io/) +- [Slack](https://kubernetes.slack.com/messages/sig-docs) + - [获得此 Slack 的邀请](https://slack.k8s.io/) - [邮件列表](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) # 为文档做贡献 你也可以点击屏幕右上方区域的 **Fork** 按钮,在你自己的 GitHub -账号下创建本仓库的拷贝。此拷贝被称作 *fork*。 +账号下创建本仓库的拷贝。此拷贝被称作 _fork_。 你可以在自己的拷贝中任意地修改文档,并在你已准备好将所作修改提交给我们时, 在你自己的拷贝下创建一个拉取请求(Pull Request),以便让我们知道。 @@ -208,7 +300,7 @@ Once your pull request is created, a Kubernetes reviewer will take responsibilit 还要提醒的一点,有时可能会有不止一个 Kubernetes 评审人为你提供反馈意见。 有时候,某个评审人的意见和另一个最初被指派的评审人的意见不同。 @@ -220,17 +312,65 @@ Furthermore, in some cases, one of your reviewers might ask for a technical revi 有关为 Kubernetes 文档做出贡献的更多信息,请参阅: -* [贡献 Kubernetes 文档](https://kubernetes.io/docs/contribute/) -* [页面内容类型](https://kubernetes.io/docs/contribute/style/page-content-types/) -* [文档风格指南](https://kubernetes.io/docs/contribute/style/style-guide/) -* [本地化 Kubernetes 文档](https://kubernetes.io/docs/contribute/localization/) +- [贡献 Kubernetes 文档](https://kubernetes.io/docs/contribute/) +- [页面内容类型](https://kubernetes.io/docs/contribute/style/page-content-types/) +- [文档风格指南](https://kubernetes.io/docs/contribute/style/style-guide/) +- [本地化 Kubernetes 文档](https://kubernetes.io/docs/contribute/localization/) + + +### 新贡献者大使 + + +如果您在贡献时需要帮助,[新贡献者大使](https://kubernetes.io/docs/contribute/advanced/#serve-as-a-new-contributor-ambassador)是一个很好的联系人。 +这些是 SIG Docs 批准者,其职责包括指导新贡献者并帮助他们完成最初的几个拉取请求。 +联系新贡献者大使的最佳地点是 [Kubernetes Slack](https://slack.k8s.io/)。 +SIG Docs 的当前新贡献者大使: + + +| 姓名 | Slack | GitHub | +| -------------------------- | -------------------------- | -------------------------- | +| Arsh Sharma | @arsh | @RinkiyaKeDad | + + +## `README.md` 本地化 + + +| 语言 | 语言 | +| -------------------------- | -------------------------- | +| [中文](README-zh.md) | [韩语](README-ko.md) | +| [法语](README-fr.md) | [波兰语](README-pl.md) | +| [德语](README-de.md) | [葡萄牙语](README-pt.md) | +| [印地语](README-hi.md) | [俄语](README-ru.md) | +| [印尼语](README-id.md) | [西班牙语](README-es.md) | +| [意大利语](README-it.md) | [乌克兰语](README-uk.md) | +| [日语](README-ja.md) | [越南语](README-vi.md) | # 中文本地化 @@ -241,19 +381,19 @@ For more information about contributing to the Kubernetes documentation, see: * [Slack channel](https://kubernetes.slack.com/messages/kubernetes-docs-zh) -# 行为准则 +## 行为准则 参与 Kubernetes 社区受 [CNCF 行为准则](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) 约束。 -# 感谢! +## 感谢你 Kubernetes 因为社区的参与而蓬勃发展,感谢您对我们网站和文档的贡献! 
diff --git a/README.md b/README.md index 005452ef06c1a..f037e0426b633 100644 --- a/README.md +++ b/README.md @@ -36,10 +36,10 @@ git submodule update --init --recursive --depth 1 ## Running the website using a container -To build the site in a container, run the following to build the container image and run it: +To build the site in a container, run the following: ```bash -make container-image +# You can set $CONTAINER_ENGINE to the name of any Docker-like container tool make container-serve ``` @@ -189,7 +189,7 @@ If you need help at any point when contributing, the [New Contributor Ambassador ## Code of conduct -Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). +Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md). ## Thank you diff --git a/assets/scss/_custom.scss b/assets/scss/_custom.scss index 1ebe8c81faed2..db0263d9914b7 100644 --- a/assets/scss/_custom.scss +++ b/assets/scss/_custom.scss @@ -634,12 +634,12 @@ body.td-documentation { a { color: inherit; - border-bottom: 1px solid #fff; + text-decoration: underline; } a:hover { color: inherit; - border-bottom: none; + text-decoration: initial; } } @@ -648,6 +648,9 @@ body.td-documentation { } #announcement { + // default background is blue; overrides are possible + color: #fff; + .announcement-main { margin-left: auto; margin-right: auto; @@ -660,9 +663,8 @@ body.td-documentation { } - /* always white */ h1, h2, h3, h4, h5, h6, p * { - color: #ffffff; + color: inherit; /* defaults to white */ background: transparent; img.event-logo { diff --git a/cloudbuild.yaml b/cloudbuild.yaml index 5039818482f3d..542d58016c7e9 100644 --- a/cloudbuild.yaml +++ b/cloudbuild.yaml @@ -9,17 +9,20 @@ options: steps: # It's fine to bump the tag to a recent version, as needed - name: "gcr.io/k8s-staging-test-infra/gcb-docker-gcloud:v20210917-12df099d55" - entrypoint: make + entrypoint: 'bash' env: - DOCKER_CLI_EXPERIMENTAL=enabled - TAG=$_GIT_TAG - BASE_REF=$_PULL_BASE_REF args: - - container-image + - -c + - | + gcloud auth configure-docker \ + && make container-push substitutions: # _GIT_TAG will be filled with a git-based tag for the image, of the form vYYYYMMDD-hash, and # can be used as a substitution _GIT_TAG: "12345" # _PULL_BASE_REF will contain the ref that was pushed to to trigger this build - - # a branch like 'master' or 'release-0.2', or a tag like 'v0.2'. - _PULL_BASE_REF: "master" + # a branch like 'main' or 'release-0.2', or a tag like 'v0.2'. + _PULL_BASE_REF: "main" diff --git a/config.toml b/config.toml index 6ad3ac39f3f11..1f80097163db6 100644 --- a/config.toml +++ b/config.toml @@ -122,7 +122,7 @@ id = "UA-00000000-0" [params] copyright_k8s = "The Kubernetes Authors" -copyright_linux = "Copyright © 2020 The Linux Foundation ®." +copyright_linux = "Copyright © 2020 The Linux Foundation ®." # privacy_policy = "https://policies.google.com/privacy" @@ -155,10 +155,10 @@ githubWebsiteRaw = "raw.githubusercontent.com/kubernetes/website" # GitHub repository link for editing a page and opening issues. github_repo = "https://github.com/kubernetes/website" -#Searching +# Searching k8s_search = true -#The following search parameters are specific to Docsy's implementation. Kubernetes implementes its own search-related partials and scripts. +# The following search parameters are specific to Docsy's implementation. 
Kubernetes implementes its own search-related partials and scripts. # Google Custom Search Engine ID. Remove or comment out to disable search. #gcs_engine_id = "011737558837375720776:fsdu1nryfng" @@ -221,11 +221,11 @@ sidebar_menu_compact = false sidebar_menu_foldable = true # https://github.com/gohugoio/hugo/issues/8918#issuecomment-903314696 sidebar_cache_limit = 1 -# Set to true to disable breadcrumb navigation. +# Set to true to disable breadcrumb navigation. breadcrumb_disable = false -# Set to true to hide the sidebar search box (the top nav search box will still be displayed if search is enabled) +# Set to true to hide the sidebar search box (the top nav search box will still be displayed if search is enabled) sidebar_search_disable = false -# Set to false if you don't want to display a logo (/assets/icons/logo.svg) in the top nav bar +# Set to false if you don't want to display a logo (/assets/icons/logo.svg) in the top nav bar navbar_logo = true # Set to true to disable the About link in the site footer footer_about_disable = false @@ -246,50 +246,50 @@ no = 'Sorry to hear that. Please }} As our contributing community grows in great numbers, with more than 16,000 contributors this year across 150+ GitHub repositories, it’s important to provide face to face connections for our large distributed teams to have opportunities for collaboration and learning. In [Contributor Experience], our methodology with planning events is a lot like our documentation; we build from personas -- interests, skills, and motivators to name a few. This way we ensure there is valuable content and learning for everyone. @@ -28,13 +28,14 @@ We build the contributor summits around you: These personas combined with ample feedback from previous events, produce the altogether experience that welcomed over 600 contributors in Copenhagen (May), Shanghai(November), and Seattle(December) in 2018. Seattle's event drew over 300+ contributors, equal to Shanghai and Copenhagen combined, for the 6th contributor event in Kubernetes history. In true Kubernetes fashion, we expect another record breaking year of attendance. We've pre-ordered 900+ [contributor patches], a tradition, and we are looking forward to giving them to you! -With that said... +With that said… + **Save the Dates:** Barcelona: May 19th (evening) and 20th (all day) Shanghai: June 24th (all day) San Diego: November 18th, 19th, and activities in KubeCon/CloudNativeCon week -In an effort of continual improvement, here's what to expect from us this year: +In an effort of continual improvement, here's what to expect from us this year: * Large new contributor workshops and contributor socials at all three events expected to break previous attendance records * A multiple track event in San Diego for all contributor types including workshops, birds of a feather, lightning talks and more @@ -42,7 +43,8 @@ In an effort of continual improvement, here's what to expect from us this year: * [An event website]! * Follow along with updates: kubernetes-dev@googlegroups.com is our main communication hub as always; however, we will also blog here, our [Thursday Kubernetes Community Meeting], [twitter], SIG meetings, event site, discuss.kubernetes.io, and #contributor-summit on Slack. * Opportunities to get involved: We still have 2019 roles available! -Reach out to Contributor Experience via community@kubernetes.io, stop by a Wednesday SIG update meeting, or catch us on Slack (#sig-contribex). 
+Reach out to Contributor Experience via community@kubernetes.io, stop by a Wednesday SIG update meeting, or catch us on Slack (#sig-contribex). + {{
}} @@ -51,11 +53,11 @@ Reach out to Contributor Experience via community@kubernetes.io, stop by a Wedne Our 2018 crew 🥁 Jorge Castro, Paris Pittman, Bob Killen, Jeff Sica, Megan Lehn, Guinevere Saenger, Josh Berkus, Noah Abrahams, Yang Li, Xiangpeng Zhao, Puja Abbassi, Lindsey Tulloch, Zach Corleissen, Tim Pepper, Ihor Dvoretskyi, Nancy Mohamed, Chris Short, Mario Loria, Jason DeTiberus, Sahdev Zala, Mithra Raja -And an introduction to our 2019 crew (a thanks in advance ;) )... -Jonas Rosland, Josh Berkus, Paris Pittman, Jorge Castro, Bob Killen, Deb Giles, Guinevere Saenger, Noah Abrahams, Yang Li, Xiangpeng Zhao, Puja Abbassi, Rui Chen, Tim Pepper, Ihor Dvoretskyi, Dawn Foster +And an introduction to our 2019 crew (a thanks in advance ;) )… +Jonas Rosland, Josh Berkus, Paris Pittman, Jorge Castro, Bob Killen, Deb Giles, Guinevere Saenger, Noah Abrahams, Yang Li, Xiangpeng Zhao, Puja Abbassi, Rui Chen, Tim Pepper, Ihor Dvoretskyi, Dawn Foster -## Relive Seattle Contributor Summit +## Relive Seattle Contributor Summit 📈 80% growth rate since the Austin 2017 December event @@ -81,15 +83,11 @@ Jonas Rosland, Josh Berkus, Paris Pittman, Jorge Castro, Bob Killen, Deb Giles, 📸 Pictures (special thanks to [rdodev]) -Garage Pic -Reg Desk - {{
}} “I love Contrib Summit! The intros and deep dives during KubeCon were a great extension of Contrib Summit. Y'all did an excellent job in the morning to level set expectations and prime everyone.” -- julianv “great work! really useful and fun!” - coffeepac -[click here]: https://events.linuxfoundation.org/events/contributor-summit-europe-2019/ [Contributor Experience]: https://github.com/kubernetes/community/tree/master/sig-contributor-experience [Subproject OWNERs]: https://github.com/kubernetes/community/blob/master/community-membership.md [Chair or Tech Lead]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md diff --git a/content/en/blog/_posts/2019-03-25-1-14-release-announcement.md b/content/en/blog/_posts/2019-03-25-1-14-release-announcement.md index 448309ba3a1fc..e021bbf1dbf8d 100644 --- a/content/en/blog/_posts/2019-03-25-1-14-release-announcement.md +++ b/content/en/blog/_posts/2019-03-25-1-14-release-announcement.md @@ -2,6 +2,7 @@ title: 'Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA' date: 2019-03-25 slug: kubernetes-1-14-release-announcement +evergreen: true --- **Authors:** The 1.14 [Release Team](https://bit.ly/k8s114-team) diff --git a/content/en/blog/_posts/2019-04-26-latest-on-localization.md b/content/en/blog/_posts/2019-04-26-latest-on-localization.md index 8e6176a8ce608..98f1d74706996 100644 --- a/content/en/blog/_posts/2019-04-26-latest-on-localization.md +++ b/content/en/blog/_posts/2019-04-26-latest-on-localization.md @@ -8,7 +8,7 @@ date: 2019-04-26 Last year we optimized the Kubernetes website for [hosting multilingual content](/blog/2018/11/08/kubernetes-docs-updates-international-edition/). Contributors responded by adding multiple new localizations: as of April 2019, Kubernetes docs are partially available in nine different languages, with six added in 2019 alone. You can see a list of available languages in the language selector at the top of each page. -By _partially available_, I mean that localizations are ongoing projects. They range from mostly complete ([Chinese docs for 1.12](https://v1-12.docs.kubernetes.io/zh/)) to brand new (1.14 docs in [Portuguese](https://kubernetes.io/pt/)). If you're interested in helping an existing localization, read on! +By _partially available_, I mean that localizations are ongoing projects. They range from mostly complete ([Chinese docs for 1.12](https://v1-12.docs.kubernetes.io/zh-cn/)) to brand new (1.14 docs in [Portuguese](https://kubernetes.io/pt/)). If you're interested in helping an existing localization, read on! ## What is a localization? diff --git a/content/en/blog/_posts/2019-05-02-kubecon-diversity-lunch-and-hack.md b/content/en/blog/_posts/2019-05-02-kubecon-diversity-lunch-and-hack.md index 41c5fbcf3678d..bf8417673b246 100644 --- a/content/en/blog/_posts/2019-05-02-kubecon-diversity-lunch-and-hack.md +++ b/content/en/blog/_posts/2019-05-02-kubecon-diversity-lunch-and-hack.md @@ -2,6 +2,7 @@ title: "Join us for the 2019 KubeCon Diversity Lunch & Hack" date: 2019-05-02 slug: kubecon-diversity-lunch-and-hack +evergreen: false --- **Authors:** Kiran Oliver, Podcast Producer, The New Stack @@ -36,4 +37,4 @@ To make this all possible, we need you. Yes, you, to register. As much as we lov We look forward to seeing you! 
-_Special thanks to [Leah Petersen](https://www.linkedin.com/in/leahstunts/), [Sarah Conway](https://www.linkedin.com/in/sarah-conway-6166151/) and [Paris Pittman](https://www.linkedin.com/in/parispittman/) for their help in editing this post._ +_Special thanks to [Leah Petersen](https://www.linkedin.com/in/leahstunts/), [Sarah Conway](https://www.linkedin.com/in/sarah-conway-6166151/) and [Paris Pittman](https://www.linkedin.com/in/parispittman/) for their help in editing this post._ diff --git a/content/en/blog/_posts/2019-06-19-kubernetes-1-15-release-announcement.md b/content/en/blog/_posts/2019-06-19-kubernetes-1-15-release-announcement.md index 3561203548834..83dfb6dd50a93 100644 --- a/content/en/blog/_posts/2019-06-19-kubernetes-1-15-release-announcement.md +++ b/content/en/blog/_posts/2019-06-19-kubernetes-1-15-release-announcement.md @@ -3,6 +3,7 @@ layout: blog title: "Kubernetes 1.15: Extensibility and Continuous Improvement" date: 2019-06-19 slug: kubernetes-1-15-release-announcement +evergreen: true --- **Authors:** The 1.15 [Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.15/release_team.md) diff --git a/content/en/blog/_posts/2019-09-24-san-diego-contributor-summit.md b/content/en/blog/_posts/2019-09-24-san-diego-contributor-summit.md index 350fbb1735d80..62ea9c833740f 100644 --- a/content/en/blog/_posts/2019-09-24-san-diego-contributor-summit.md +++ b/content/en/blog/_posts/2019-09-24-san-diego-contributor-summit.md @@ -3,19 +3,18 @@ layout: blog title: "Contributor Summit San Diego Registration Open!" date: 2019-09-24 slug: san-diego-contributor-summit +evergreen: true --- -**Authors: Paris Pittman (Google), Jeffrey Sica (Red Hat), Jonas Rosland (VMware)** - - +**Authors:** Paris Pittman (Google), Jeffrey Sica (Red Hat), Jonas Rosland (VMware) [Contributor Summit San Diego 2019 Event Page] -Registration is now open and in record time, we’ve hit capacity for the -*new contributor workshop* session of the event! Waitlist is now available. +In record time, we’ve hit capacity for the *new contributor workshop* session of +the event! **Sunday, November 17** Evening Contributor Celebration: -[QuartYard]* +[QuartYard]† Address: 1301 Market Street, San Diego, CA 92101 Time: 6:00PM - 9:00PM @@ -68,7 +67,7 @@ Check out past blogs on [persona building around our events] and the [Barcelona ![Group Picture in 2018](/images/blog/2019-09-24-san-diego-contributor-summit/IMG_2588.JPG) -*=QuartYard has a huge stage! Want to perform something in front of your contributor peers? Reach out to us! community@kubernetes.io +†=QuartYard has a huge stage! Want to perform something in front of your contributor peers? Reach out to us! community@kubernetes.io diff --git a/content/en/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md b/content/en/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md index e78b6a0ec7c39..f0f28355131b4 100644 --- a/content/en/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md +++ b/content/en/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md @@ -5,13 +5,7 @@ date: 2019-10-10 slug: contributor-summit-san-diego-schedule --- - -Authors: Josh Berkus (Red Hat), Paris Pittman (Google), Jonas Rosland (VMware) - -tl;dr A week ago we announced that [registration is open][reg] for the contributor -summit , and we're now live with [the full Contributor Summit schedule!][schedule] -Grab your spot while tickets are still available. 
There is currently a waitlist -for new contributor workshop. ([Register here!][reg]) +**Authors:** Josh Berkus (Red Hat), Paris Pittman (Google), Jonas Rosland (VMware) There are many great sessions planned for the Contributor Summit, spread across five rooms of current contributor content in addition to the new contributor @@ -32,7 +26,7 @@ While the schedule contains difficult decisions in every timeslot, we've picked a few below to give you a taste of what you'll hear, see, and participate in, at the summit: -* **[Vision]**: SIG-Architecture will be sharing their vision of where we're going +* **[Vision]**: SIG Architecture will be sharing their vision of where we're going with Kubernetes development for the next year and beyond. * **[Security]**: Tim Allclair and CJ Cullen will present on the current state of Kubernetes security. In another security talk, Vallery Lancey will lead a @@ -47,7 +41,7 @@ the summit: one, or at least pass one. * **[End Users]**: Several end users from the CNCF partner ecosystem, invited by Cheryl Hung, will hold a Q&A with contributors to strengthen our feedback loop. -* **[Docs]**: As always, SIG-Docs will run a three-hour contributing-to-documentation +* **[Docs]**: As always, SIG Docs will run a three-hour contributing-to-documentation workshop. We're also giving out awards to contributors who distinguished themselves in 2019, diff --git a/content/en/blog/_posts/2020-02-18-Contributor-Summit-Amsterdam-Schedule-Announced.md b/content/en/blog/_posts/2020-02-18-Contributor-Summit-Amsterdam-Schedule-Announced.md index ae05bd8b9cbc8..11e3ccc4124e7 100644 --- a/content/en/blog/_posts/2020-02-18-Contributor-Summit-Amsterdam-Schedule-Announced.md +++ b/content/en/blog/_posts/2020-02-18-Contributor-Summit-Amsterdam-Schedule-Announced.md @@ -5,25 +5,7 @@ date: 2020-02-18 slug: Contributor-Summit-Amsterdam-Schedule-Announced --- -**Authors:** Jeffrey Sica (Red Hat), Amanda Katona (VMware) - -tl;dr [Registration is open](https://events.linuxfoundation.org/kubernetes-contributor-summit-europe/) and the [schedule is live](https://kcseu2020.sched.com/) so register now and we’ll see you in Amsterdam! - -## Kubernetes Contributor Summit - -**Sunday, March 29, 2020** - -- Evening Contributor Celebration: -[ZuidPool](https://www.zuid-pool.nl/en/) -- Address: [Europaplein 22, 1078 GZ Amsterdam, Netherlands](https://www.google.com/search?q=KubeCon+Amsterdam+2020&ie=UTF-8&ibp=htl;events&rciv=evn&sa=X&ved=2ahUKEwiZoLvQ0dvnAhVST6wKHScBBZ8Q5bwDMAB6BAgSEAE#) -- Time: 18:00 - 21:00 - -**Monday, March 30, 2020** - -- All Day Contributor Summit: -- [Amsterdam RAI](https://www.rai.nl/en/) -- Address: [Europaplein 24, 1078 GZ Amsterdam, Netherlands](https://www.google.com/search?q=kubecon+amsterdam+2020&oq=kubecon+amste&aqs=chrome.0.35i39j69i57j0l4j69i61l2.3957j1j4&sourceid=chrome&ie=UTF-8&ibp=htl;events&rciv=evn&sa=X&ved=2ahUKEwiZoLvQ0dvnAhVST6wKHScBBZ8Q5bwDMAB6BAgSEAE#) -- Time: 09:00 - 17:00 (Breakfast at 08:00) +**Authors:** Jeffrey Sica (Red Hat), Amanda Katona (VMware) ![Contributor Summit](/images/blog/2020-02-18-Contributor-Summit-Amsterdam-Schedule-Announced/contribsummit.jpg) @@ -32,14 +14,31 @@ Hello everyone and Happy 2020! It’s hard to believe that KubeCon EU 2020 is le * Community * Contributor Improvement * Sustainability -* In-depth Technical +* In-depth Technical On top of the presentations, there will be a dedicated Docs Sprint as well as the New Contributor Workshop 101 and 201 Sessions. 
All told, we will have five separate rooms of content throughout the day on Monday. Please **[see the full schedule](https://kcseu2020.sched.com/)** to see what sessions you’d be interested in. We hope between the content provided and the inevitable hallway track, everyone has a fun and enriching experience. Speaking of fun, the social Sunday night should be a blast! We’re hosting this summit’s social close to the conference center, at [ZuidPool](https://www.zuid-pool.nl/en/). There will be games, bingo, and unconference sign-up throughout the evening. It should be a relaxed way to kick off the week. -[Registration is open](https://events.linuxfoundation.org/kubernetes-contributor-summit-europe/)! Space is limited so it’s always a good idea to register early. +[~Registration is open~](https://events.linuxfoundation.org/kubernetes-contributor-summit-europe/)! Space is limited so it’s always a good idea to register early. If you have any questions, reach out to the [Amsterdam Team](https://github.com/kubernetes/community/tree/master/events/2020/03-contributor-summit#team) on Slack in the [#contributor-summit](https://kubernetes.slack.com/archives/C7J893413) channel. Hope to see you there! + +## Kubernetes Contributor Summit schedule + +**Sunday, March 29, 2020** + +- Evening Contributor Celebration: +[ZuidPool](https://www.zuid-pool.nl/en/) +- Address: [Europaplein 22, 1078 GZ Amsterdam, Netherlands](https://www.google.com/search?q=KubeCon+Amsterdam+2020&ie=UTF-8&ibp=htl;events&rciv=evn&sa=X&ved=2ahUKEwiZoLvQ0dvnAhVST6wKHScBBZ8Q5bwDMAB6BAgSEAE#) +- Time: 18:00 - 21:00 + +**Monday, March 30, 2020** + +- All Day Contributor Summit: +- [Amsterdam RAI](https://www.rai.nl/en/) +- Address: [Europaplein 24, 1078 GZ Amsterdam, Netherlands](https://www.google.com/search?q=kubecon+amsterdam+2020&oq=kubecon+amste&aqs=chrome.0.35i39j69i57j0l4j69i61l2.3957j1j4&sourceid=chrome&ie=UTF-8&ibp=htl;events&rciv=evn&sa=X&ved=2ahUKEwiZoLvQ0dvnAhVST6wKHScBBZ8Q5bwDMAB6BAgSEAE#) +- Time: 09:00 - 17:00 (Breakfast at 08:00) + diff --git a/content/en/blog/_posts/2020-03-04-Contributor-Summit-Delayed.md b/content/en/blog/_posts/2020-03-04-Contributor-Summit-Delayed.md index 1996b6201e339..10d96ec351883 100644 --- a/content/en/blog/_posts/2020-03-04-Contributor-Summit-Delayed.md +++ b/content/en/blog/_posts/2020-03-04-Contributor-Summit-Delayed.md @@ -3,13 +3,14 @@ layout: blog title: Contributor Summit Amsterdam Postponed date: 2020-03-04 slug: Contributor-Summit-Delayed +evergreen: true --- -**Authors:** Dawn Foster (VMware), Jorge Castro (VMware) +**Authors:** Dawn Foster (VMware), Jorge Castro (VMware) The CNCF has announced that [KubeCon + CloudNativeCon EU has been delayed](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/attend/novel-coronavirus-update/) until July/August of 2020. As a result the Contributor Summit planning team is weighing options for how to proceed. Here’s the current plan: - There will be an in-person Contributor Summit as planned when KubeCon + CloudNativeCon is rescheduled. -- We are looking at options for having additional virtual contributor activities in the meantime. +- We are looking at options for having additional virtual contributor activities in the meantime. We will communicate via this blog and the usual communications channels on the final plan. Please bear with us as we adapt when we get more information. Thank you for being patient as the team pivots to bring you a great Contributor Summit! 
\ No newline at end of file diff --git a/content/en/blog/_posts/2020-09-03-warnings/index.md b/content/en/blog/_posts/2020-09-03-warnings/index.md index 082aa72f6f03a..a5cfb9f710db7 100644 --- a/content/en/blog/_posts/2020-09-03-warnings/index.md +++ b/content/en/blog/_posts/2020-09-03-warnings/index.md @@ -60,7 +60,7 @@ so we added two administrator-facing tools to help track use of deprecated APIs Starting in Kubernetes v1.19, when a request is made to a deprecated REST API endpoint, an `apiserver_requested_deprecated_apis` gauge metric is set to `1` in the kube-apiserver process. This metric has labels for the API `group`, `version`, `resource`, and `subresource`, -and a `removed_version` label that indicates the Kubernetes release in which the API will no longer be served. +and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served. This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json), and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested @@ -169,7 +169,7 @@ You can also find that information through the following Prometheus query, which returns information about requests made to deprecated APIs which will be removed in v1.22: ```promql -apiserver_requested_deprecated_apis{removed_version="1.22"} * on(group,version,resource,subresource) +apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource) group_right() apiserver_request_total ``` diff --git a/content/en/blog/_posts/2021-04-22-gateway-api/index.md b/content/en/blog/_posts/2021-04-22-gateway-api/index.md index c22d45cdbbfaf..145554803d20a 100644 --- a/content/en/blog/_posts/2021-04-22-gateway-api/index.md +++ b/content/en/blog/_posts/2021-04-22-gateway-api/index.md @@ -30,9 +30,9 @@ This led to design principles that allow the Gateway API to improve upon Ingress The Gateway API introduces a few new resource types: -- **[GatewayClasses](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.GatewayClass)** are cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived from them. This is similar in concept to StorageClasses, but for networking data-planes. -- **[Gateways](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.Gateway)** are the deployed instances of GatewayClasses. They are the logical representation of the data-plane which performs routing, which may be in-cluster proxies, hardware LBs, or cloud LBs. -- **Routes** are not a single resource, but represent many different protocol-specific Route resources. The [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.HTTPRoute) has matching, filtering, and routing rules that get applied to Gateways that can process HTTP and HTTPS traffic. Similarly, there are [TCPRoutes](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.TCPRoute), [UDPRoutes](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.UDPRoute), and [TLSRoutes](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.TLSRoute) which also have protocol-specific semantics. This model also allows the Gateway API to incrementally expand its protocol support in the future. 
+- **[GatewayClasses](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gatewayclass)** are cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived from them. This is similar in concept to StorageClasses, but for networking data-planes. +- **[Gateways](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway)** are the deployed instances of GatewayClasses. They are the logical representation of the data-plane which performs routing, which may be in-cluster proxies, hardware LBs, or cloud LBs. +- **Routes** are not a single resource, but represent many different protocol-specific Route resources. The [HTTPRoute](https://gateway-api.sigs.k8s.io/concepts/api-overview/#httproute) has matching, filtering, and routing rules that get applied to Gateways that can process HTTP and HTTPS traffic. Similarly, there are [TCPRoutes](https://gateway-api.sigs.k8s.io/concepts/api-overview/#tcproute-and-udproute), [UDPRoutes](https://gateway-api.sigs.k8s.io/concepts/api-overview/#tcproute-and-udproute), and [TLSRoutes](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) which also have protocol-specific semantics. This model also allows the Gateway API to incrementally expand its protocol support in the future. ![The resources of the Gateway API](gateway-api-resources.png) diff --git a/content/en/blog/_posts/2021-05-14-using-finalizers-to-control-deletion.md b/content/en/blog/_posts/2021-05-14-using-finalizers-to-control-deletion.md index a361c4d0bea28..c868b1bd5cb70 100644 --- a/content/en/blog/_posts/2021-05-14-using-finalizers-to-control-deletion.md +++ b/content/en/blog/_posts/2021-05-14-using-finalizers-to-control-deletion.md @@ -108,7 +108,7 @@ metadata: uid: 93a37fed-23e3-45e8-b6ee-b2521db81638 ``` -In short, what’s happened is that the object was updated, not deleted. That’s because Kubernetes saw that the object contained finalizers and put it into a read-only state. The deletion timestamp signals that the object can only be read, with the exception of removing the finalizer key updates. In other words, the deletion will not be complete until we edit the object and remove the finalizer. +In short, what’s happened is that the object was updated, not deleted. That’s because Kubernetes saw that the object contained finalizers and blocked removal of the object from etcd. The deletion timestamp signals that deletion was requested, but the deletion will not be complete until we edit the object and remove the finalizer. Here's a demonstration of using the `patch` command to remove finalizers. If we want to delete an object, we can simply patch it on the command line to remove the finalizers. In this way, the deletion that was running in the background will complete and the object will be deleted. When we attempt to `get` that configmap, it will be gone. diff --git a/content/en/blog/_posts/2021-09-03-api-server-tracing.md b/content/en/blog/_posts/2021-09-03-api-server-tracing.md index fc98a68d23fa7..344ff8fa46ab8 100644 --- a/content/en/blog/_posts/2021-09-03-api-server-tracing.md +++ b/content/en/blog/_posts/2021-09-03-api-server-tracing.md @@ -42,7 +42,7 @@ samplingRatePerMillion: 10000 ### Enabling Etcd Tracing -Add `--experimental-enable-distributed-tracing`, `--experimental-distributed-tracing-address=0.0.0.0:4317`, `--experimental-distributed-tracing-service-name=etcd` flags to etcd to enable tracing. Note that this traces every request, so it will probably generate a lot of traces if you enable it. 
+Add `--experimental-enable-distributed-tracing`, `--experimental-distributed-tracing-address=0.0.0.0:4317`, `--experimental-distributed-tracing-service-name=etcd` flags to etcd to enable tracing. Note that this traces every request, so it will probably generate a lot of traces if you enable it. Required etcd version is [v3.5+](https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing). ### Example Trace: List Nodes diff --git a/content/en/blog/_posts/2021-09-13-read-write-once-pod-access-mode-alpha.md b/content/en/blog/_posts/2021-09-13-read-write-once-pod-access-mode-alpha.md index e569c7a015b9e..64d47ae6574fe 100644 --- a/content/en/blog/_posts/2021-09-13-read-write-once-pod-access-mode-alpha.md +++ b/content/en/blog/_posts/2021-09-13-read-write-once-pod-access-mode-alpha.md @@ -255,7 +255,7 @@ The minimum required versions are: ## What’s next? -As part of the beta graduation for this feature, SIG Storage plans to update the Kubenetes scheduler to support pod preemption in relation to ReadWriteOncePod storage. +As part of the beta graduation for this feature, SIG Storage plans to update the Kubernetes scheduler to support pod preemption in relation to ReadWriteOncePod storage. This means if two pods request a PersistentVolumeClaim with ReadWriteOncePod, the pod with highest priority will gain access to the PersistentVolumeClaim and any pod with lower priority will be preempted from the node and be unable to access the PersistentVolumeClaim. ## How can I learn more? diff --git a/content/en/blog/_posts/2022-02-17-updated-dockershim-faq.md b/content/en/blog/_posts/2022-02-17-updated-dockershim-faq.md index 50d66ec966ffa..74e61b117da72 100644 --- a/content/en/blog/_posts/2022-02-17-updated-dockershim-faq.md +++ b/content/en/blog/_posts/2022-02-17-updated-dockershim-faq.md @@ -175,7 +175,7 @@ runtime where possible. Another thing to look out for is anything expecting to run for system maintenance or nested inside a container when building images will no longer work. For the former, you can use the [`crictl`][cr] tool as a drop-in replacement (see -[mapping from docker cli to crictl](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl)) +[mapping from docker cli to crictl](https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/#mapping-from-docker-cli-to-crictl)) and for the latter you can use newer container build options like [img], [buildah], [kaniko], or [buildkit-cli-for-kubectl] that don’t require Docker. diff --git a/content/en/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md b/content/en/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md index 999c3c514ed24..7bc79b38d1e83 100644 --- a/content/en/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md +++ b/content/en/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md @@ -69,11 +69,11 @@ been deprecated. These removals have been superseded by newer, stable/generally * [Dynamic kubelet configuration](https://github.com/kubernetes/enhancements/issues/281): `DynamicKubeletConfig` is used to enable the dynamic configuration of the kubelet. The `DynamicKubeletConfig` flag was deprecated in Kubernetes 1.22. In v1.24, this feature gate will be removed from the kubelet. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/). Refer to the ["Dynamic kubelet config is removed" KEP](https://github.com/kubernetes/enhancements/issues/281) for more information. 
* [Dynamic log sanitization](https://github.com/kubernetes/kubernetes/pull/107207): The experimental dynamic log sanitization feature is deprecated and will be removed in v1.24. This feature introduced a logging filter that could be applied to all Kubernetes system components logs to prevent various types of sensitive information from leaking via logs. Refer to [KEP-1753: Kubernetes system components logs sanitization](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#deprecation) for more information and an [alternative approach](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#alternatives=). -* In-tree provisioner to CSI driver migration: This applies to a number of in-tree plugins, including [Portworx](https://github.com/kubernetes/enhancements/issues/2589). Refer to the [In-tree Storage Plugin to CSI Migration Design Doc](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/csi-migration.md#background-and-motivations) for more information. +* In-tree provisioner to CSI driver migration: This applies to a number of in-tree plugins, including [Portworx](https://github.com/kubernetes/enhancements/issues/2589). Refer to the [In-tree Storage Plugin to CSI Migration Design Doc](https://git.k8s.io/design-proposals-archive/storage/csi-migration.md#background-and-motivations) for more information. * [Removing Dockershim from kubelet](https://github.com/kubernetes/enhancements/issues/2221): the Container Runtime Interface (CRI) for Docker (i.e. Dockershim) is currently a built-in container runtime in the kubelet code base. It was deprecated in v1.20. As of v1.24, the kubelet will no longer have dockershim. Check out this blog on [what you need to do be ready for v1.24](/blog/2022/03/31/ready-for-dockershim-removal/). * [Storage capacity tracking for pod scheduling](https://github.com/kubernetes/enhancements/issues/1472): The CSIStorageCapacity API supports exposing currently available storage capacity via CSIStorageCapacity objects and enhances scheduling of pods that use CSI volumes with late binding. In v1.24, the CSIStorageCapacity API will be stable. The API graduating to stable initates the deprecation of the v1beta1 CSIStorageCapacity API. Refer to the [Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking) for more information. * [The `master` label is no longer present on kubeadm control plane nodes](https://github.com/kubernetes/kubernetes/pull/107533). For new clusters, the label 'node-role.kubernetes.io/master' will no longer be added to control plane nodes, only the label 'node-role.kubernetes.io/control-plane' will be added. For more information, refer to [KEP-2067: Rename the kubeadm "master" label and taint](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint). -* [VolumeSnapshot v1beta1 CRD will be removed](https://github.com/kubernetes/enhancements/issues/177). Volume snapshot and restore functionality for Kubernetes and the [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md) (CSI), which provides standardized APIs design (CRDs) and adds PV snapshot/restore support for CSI volume drivers, entered beta in v1.20. 
VolumeSnapshot v1beta1 was deprecated in v1.21 and is now unsupported. Refer to [KEP-177: CSI Snapshot](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot) and [kubernetes-csi/external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter/releases/tag/v4.1.0) for more information. +* [VolumeSnapshot v1beta1 CRD will be removed](https://github.com/kubernetes/enhancements/issues/177). Volume snapshot and restore functionality for Kubernetes and the [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md) (CSI), which provides standardized APIs design (CRDs) and adds PV snapshot/restore support for CSI volume drivers, moved to GA in v1.20. VolumeSnapshot v1beta1 was deprecated in v1.20 and will become unsupported with the v1.24 release. Refer to [KEP-177: CSI Snapshot](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot) and the [Volume Snapshot GA blog](/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/) blog article for more information. ## What to do diff --git a/content/en/blog/_posts/2022-05-03-kubernetes-release-1.24.md b/content/en/blog/_posts/2022-05-03-kubernetes-release-1.24.md index 39c574786427e..f29c6c92fe7a8 100644 --- a/content/en/blog/_posts/2022-05-03-kubernetes-release-1.24.md +++ b/content/en/blog/_posts/2022-05-03-kubernetes-release-1.24.md @@ -118,6 +118,13 @@ With containerd v1.6.0–v1.6.3, if you do not upgrade the CNI plugins and/or declare the CNI config version, you might encounter the following "Incompatible CNI versions" or "Failed to destroy network for sandbox" error conditions. +## CSI Snapshot + +_This information was added after initial publication._ + +[VolumeSnapshot v1beta1 CRD has been removed](https://github.com/kubernetes/enhancements/issues/177). +Volume snapshot and restore functionality for Kubernetes and the Container Storage Interface (CSI), which provides standardized APIs design (CRDs) and adds PV snapshot/restore support for CSI volume drivers, moved to GA in v1.20. VolumeSnapshot v1beta1 was deprecated in v1.20 and is now unsupported. Refer to [KEP-177: CSI Snapshot](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot) and [Volume Snapshot GA blog](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/) for more information. + ## Other Updates ### Graduations to Stable @@ -200,7 +207,7 @@ all the stargazers out there. ✨ ### Ecosystem Updates -* KubeCon + CloudNativeCon Europe 2022 will take place in Valencia, Spain, from 16 – 20 May 2022! You can find more information about the conference and registration on the [event site](https://events.linuxfoundation.org/archive/2021/kubecon-cloudnativecon-europe/). +* KubeCon + CloudNativeCon Europe 2022 will take place in Valencia, Spain, from 16 – 20 May 2022! You can find more information about the conference and registration on the [event site](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/). * In the [2021 Cloud Native Survey](https://www.cncf.io/announcements/2022/02/10/cncf-sees-record-kubernetes-and-container-adoption-in-2021-cloud-native-survey/), the CNCF saw record Kubernetes and container adoption. Take a look at the [results of the survey](https://www.cncf.io/reports/cncf-annual-survey-2021/). 
* The [Linux Foundation](https://www.linuxfoundation.org/) and [The Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) announced the availability of a new [Cloud Native Developer Bootcamp](https://training.linuxfoundation.org/training/cloudnativedev-bootcamp/?utm_source=lftraining&utm_medium=pr&utm_campaign=clouddevbc0322) to provide participants with the knowledge and skills to design, build, and deploy cloud native applications. Check out the [announcement](https://www.cncf.io/announcements/2022/03/15/new-cloud-native-developer-bootcamp-provides-a-clear-path-to-cloud-native-careers/) to learn more. @@ -211,7 +218,7 @@ aggregates a number of interesting data points related to the velocity of Kubern sub-projects. This includes everything from individual contributions to the number of companies that are contributing, and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem. -In the v1.24 release cycle, which [ran for 17 weeks](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24) (January 10 to May 3), we saw contributions from [1029 companies](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&var-period_name=v1.23.0%20-%20now&var-metric=contributions) and [1179 individuals](https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&var-period_name=v1.23.0%20-%20now&var-metric=contributions&var-repogroup_name=Kubernetes&var-country_name=All&var-companies=All&var-repo_name=kubernetes%2Fkubernetes). +In the v1.24 release cycle, which [ran for 17 weeks](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24) (January 10 to May 3), we saw contributions from [1029 companies](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&var-period_name=v1.23.0%20-%20v1.24.0&var-metric=contributions) and [1179 individuals](https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&var-period_name=v1.23.0%20-%20v1.24.0&var-metric=contributions&var-repogroup_name=Kubernetes&var-country_name=All&var-companies=All&var-repo_name=kubernetes%2Fkubernetes). ## Upcoming Release Webinar diff --git a/content/en/blog/_posts/2022-05-05-volume-expansion-ga.md b/content/en/blog/_posts/2022-05-05-volume-expansion-ga.md index c97d50ea3142c..c823ae8a2c251 100644 --- a/content/en/blog/_posts/2022-05-05-volume-expansion-ga.md +++ b/content/en/blog/_posts/2022-05-05-volume-expansion-ga.md @@ -49,7 +49,7 @@ Not every volume type however is expandable by default. Some volume types such a must have capability `EXPAND_VOLUME` in controller or node service (or both if appropriate). Please refer to documentation of your CSI driver, to find out if it supports volume expansion. -Please refer to volume expansion documentation for intree volume types which support volume expansion - [Expanding Persistent Volumes](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims) +Please refer to volume expansion documentation for intree volume types which support volume expansion - [Expanding Persistent Volumes](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims). In general to provide some degree of control over volumes that can be expanded, only dynamically provisioned PVCs whose storage class has `allowVolumeExpansion` parameter set to `true` are expandable. @@ -85,7 +85,7 @@ provider to find out - what mode of volume expansion it supports. 
When volume expansion was introduced as an alpha feature, Kubernetes only supported offline filesystem expansion on the node and hence required users to restart their pods for file system resizing to finish. -his behaviour has been changed and Kubernetes tries its best to fulfil any resize request regardless +This behaviour has been changed and Kubernetes tries its best to fulfil any resize request regardless of whether the underlying PersistentVolume volume is online or offline. If your storage provider supports online expansion then no Pod restart should be necessary for volume expansion to finish. diff --git a/content/en/blog/_posts/2022-05-06-storage-capacity-GA/index.md b/content/en/blog/_posts/2022-05-06-storage-capacity-GA/index.md index 35d6838f518d0..2bb85059e3682 100644 --- a/content/en/blog/_posts/2022-05-06-storage-capacity-GA/index.md +++ b/content/en/blog/_posts/2022-05-06-storage-capacity-GA/index.md @@ -1,6 +1,6 @@ --- layout: blog -title: "Storage Capacity Tracking reaches GA in Kubernetes 1.24" +title: "Kubernetes 1.24: Storage Capacity Tracking Now Generally Available" date: 2022-05-06 slug: storage-capacity-ga --- diff --git a/content/en/blog/_posts/2022-05-13-grpc-probes-in-beta.md b/content/en/blog/_posts/2022-05-13-grpc-probes-in-beta.md new file mode 100644 index 0000000000000..5ff495410b61d --- /dev/null +++ b/content/en/blog/_posts/2022-05-13-grpc-probes-in-beta.md @@ -0,0 +1,209 @@ +--- +layout: blog +title: "Kubernetes 1.24: gRPC container probes in beta" +date: 2022-05-13 +slug: grpc-probes-now-in-beta +--- + +**Author**: Sergey Kanzhelev (Google) + + +With Kubernetes 1.24 the gRPC probes functionality entered beta and is available by default. +Now you can configure startup, liveness, and readiness probes for your gRPC app +without exposing any HTTP endpoint, nor do you need an executable. Kubernetes can natively connect to your workload via gRPC and query its status. + +## Some history + +It's useful to let the system managing your workload check that the app is +healthy, has started OK, and whether the app considers itself good to accept +traffic. Before the gRPC support was added, Kubernetes already allowed you to +check for health based on running an executable from inside the container image, +by making an HTTP request, or by checking whether a TCP connection succeeded. + +For most apps, those checks are enough. If your app provides a gRPC endpoint +for a health (or readiness) check, it is easy +to repurpose the `exec` probe to use it for gRPC health checking. +In the blog article [Health checking gRPC servers on Kubernetes](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/), +Ahmet Alp Balkan described how you can do that — a mechanism that still works today. + +There is a commonly used tool to enable this that was [created](https://github.com/grpc-ecosystem/grpc-health-probe/commit/2df4478982e95c9a57d5fe3f555667f4365c025d) +on August 21, 2018, and with +the first release on [Sep 19, 2018](https://github.com/grpc-ecosystem/grpc-health-probe/releases/tag/v0.1.0-alpha.1). + +This approach to gRPC app health checking is very popular. There are [3,626 Dockerfiles](https://github.com/search?l=Dockerfile&q=grpc_health_probe&type=code) +with the `grpc_health_probe` and [6,621 yaml](https://github.com/search?l=YAML&q=grpc_health_probe&type=Code) files that are discovered with the +basic search on GitHub (at the moment of writing).
This is a good indication of the tool's popularity +and the need to support this natively. + +Kubernetes v1.23 introduced an alpha-quality implementation of native support for +querying a workload status using gRPC. Because it was an alpha feature, +this was disabled by default for the v1.23 release. + +## Using the feature + +We built gRPC health checking in a similar way to other probes and believe +it will be [easy to use](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe) +if you are familiar with other probe types in Kubernetes. +The natively supported health probe has many benefits over the workaround involving the `grpc_health_probe` executable. + +With the native gRPC support you don't need to download and carry `10MB` of an additional executable with your image. +Exec probes are generally slower than a gRPC call as they require instantiating a new process to run an executable. +It also makes the checks less reliable in edge cases when the pod is running at maximum resources and has trouble +instantiating new processes. + +There are a few limitations though. Since configuring a client certificate for probes is hard, +services that require client authentication are not supported. The built-in probes also +do not check server certificates and ignore related problems. + +Built-in checks also cannot be configured to ignore certain types of errors +(`grpc_health_probe` returns different exit codes for different errors), +and cannot be "chained" to run the health check on multiple services in a single probe. + +But all these limitations are quite standard for gRPC and there are easy workarounds +for those. + +## Try it for yourself + +### Cluster-level setup + +You can try this feature today. To try native gRPC probes, you can spin up a Kubernetes cluster +yourself with the `GRPCContainerProbe` feature gate enabled; there are many [tools available](/docs/tasks/tools/). + +Since the feature gate `GRPCContainerProbe` is enabled by default in 1.24, +many vendors will have this functionality working out of the box. +So you may just create a 1.24 cluster on the platform of your choice. Some vendors +allow you to enable alpha features on 1.23 clusters. + +For example, at the moment of writing, you can spin up a test cluster on GKE for a quick test. +Other vendors may also have similar capabilities, especially if you +are reading this blog post long after the Kubernetes 1.24 release. + +On GKE, use the following command (note that the version is `1.23` and that `enable-kubernetes-alpha` is specified). + +```shell +gcloud container clusters create test-grpc \ + --enable-kubernetes-alpha \ + --no-enable-autorepair \ + --no-enable-autoupgrade \ + --release-channel=rapid \ + --cluster-version=1.23 +``` + +You will also need to configure `kubectl` to access the cluster: + +```shell +gcloud container clusters get-credentials test-grpc +``` + +### Trying the feature out + +Let's create a pod to test how gRPC probes work. For this test we will use the `agnhost` image. +This is a Kubernetes-maintained image that can be used for all sorts of workload testing. +For example, it has a useful [grpc-health-checking](https://github.com/kubernetes/kubernetes/blob/b2c5bd2a278288b5ef19e25bf7413ecb872577a4/test/images/agnhost/README.md#grpc-health-checking) module +that exposes two ports: one serving the health checking service, +and another, an HTTP port, reacting to the commands `make-serving` and `make-not-serving`. + +Here is an example pod definition.
It starts the `grpc-health-checking` module, +exposes ports `5000` and `8080`, and configures a gRPC readiness probe: + +``` yaml +--- +apiVersion: v1 +kind: Pod +metadata: + name: test-grpc +spec: + containers: + - name: agnhost + image: k8s.gcr.io/e2e-test-images/agnhost:2.35 + command: ["/agnhost", "grpc-health-checking"] + ports: + - containerPort: 5000 + - containerPort: 8080 + readinessProbe: + grpc: + port: 5000 +``` + +If the file is called `test.yaml`, you can create the pod and check its status. +The pod will be in the ready state, as indicated by the snippet of the output. + +```shell +kubectl apply -f test.yaml +kubectl describe pod test-grpc +``` + +The output will contain something like this: + +``` +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True +``` + +Now let's change the health checking endpoint status to NOT_SERVING. +In order to call the HTTP port of the Pod, let's create a port forward: + +```shell +kubectl port-forward test-grpc 8080:8080 +``` + +You can use `curl` to call the command... + +```shell +curl http://localhost:8080/make-not-serving +``` + +... and in a few seconds the port status will switch to not ready. + +```shell +kubectl describe pod test-grpc +``` + +The output will now have: + +``` +Conditions: + Type Status + Initialized True + Ready False + ContainersReady False + PodScheduled True + +... + + Warning Unhealthy 2s (x6 over 42s) kubelet Readiness probe failed: service unhealthy (responded with "NOT_SERVING") +``` + +Once it is switched back, in about one second the Pod will get back to ready status: + +```shell +curl http://localhost:8080/make-serving +kubectl describe pod test-grpc +``` + +The output indicates that the Pod went back to being `Ready`: + +``` +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True +``` + +This new built-in gRPC health probing on Kubernetes makes implementing a health-check via gRPC +much easier than the older approach that relied on using a separate `exec` probe. Read through +the official +[documentation](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe) +to learn more and provide feedback before the feature is promoted to GA. + +## Summary + +Kubernetes is a popular workload orchestration platform and we add features based on feedback and demand. +Features like gRPC probe support are minor improvements that will make the lives of many app developers +easier and their apps more resilient. Try it today and give feedback, before the feature goes GA. diff --git a/content/en/blog/_posts/2022-05-16-volume-populators-beta.md b/content/en/blog/_posts/2022-05-16-volume-populators-beta.md index 8585014f7ad59..4558f07eae979 100644 --- a/content/en/blog/_posts/2022-05-16-volume-populators-beta.md +++ b/content/en/blog/_posts/2022-05-16-volume-populators-beta.md @@ -8,11 +8,11 @@ slug: volume-populators-beta **Author:** Ben Swartzlander (NetApp) -The volume populators feature is now two releases old and entering beta! The `AnyVolumeDataSouce` feature +The volume populators feature is now two releases old and entering beta! The `AnyVolumeDataSource` feature gate defaults to enabled in Kubernetes v1.24, which means that users can specify any custom resource as the data source of a PVC.
-An [earlier blog article](/blog/2021/08/30-volume-populators-redesigned/) detailed how the +An [earlier blog article](/blog/2021/08/30/volume-populators-redesigned/) detailed how the volume populators feature works. In short, a cluster administrator can install a CRD and associated populator controller in the cluster, and any user who can create instances of the CR can create pre-populated volumes by taking advantage of the populator. diff --git a/content/en/blog/_posts/2022-05-18-prevent-unauthorised-volume-mode-conversion.md b/content/en/blog/_posts/2022-05-18-prevent-unauthorised-volume-mode-conversion.md new file mode 100644 index 0000000000000..920d578d01688 --- /dev/null +++ b/content/en/blog/_posts/2022-05-18-prevent-unauthorised-volume-mode-conversion.md @@ -0,0 +1,117 @@ +--- +layout: blog +title: 'Kubernetes 1.24: Prevent unauthorised volume mode conversion' +date: 2022-05-18 +slug: prevent-unauthorised-volume-mode-conversion-alpha +--- + +**Author:** Raunak Pradip Shah (Mirantis) + +Kubernetes v1.24 introduces a new alpha-level feature that prevents unauthorised users +from modifying the volume mode of a [`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/) created from an +existing [`VolumeSnapshot`](/docs/concepts/storage/volume-snapshots/) in the Kubernetes cluster. + + + +### The problem + +The [Volume Mode](/docs/concepts/storage/persistent-volumes/#volume-mode) determines whether a volume +is formatted into a filesystem or presented as a raw block device. + +Users can leverage the `VolumeSnapshot` feature, which has been stable since Kubernetes v1.20, +to create a `PersistentVolumeClaim` (shortened as PVC) from an existing `VolumeSnapshot` in +the Kubernetes cluster. The PVC spec includes a `dataSource` field, which can point to an +existing `VolumeSnapshot` instance. +Visit [Create a PersistentVolumeClaim from a Volume Snapshot](/docs/concepts/storage/persistent-volumes/#create-persistent-volume-claim-from-volume-snapshot) for more details. + +When leveraging the above capability, there is no logic that validates whether the mode of the +original volume, whose snapshot was taken, matches the mode of the newly created volume. + +This presents a security gap that allows malicious users to potentially exploit an +as-yet-unknown vulnerability in the host operating system. + +Many popular storage backup vendors convert the volume mode during the course of a +backup operation, for efficiency purposes, which prevents Kubernetes from blocking +the operation completely and presents a challenge in distinguishing trusted +users from malicious ones. + +### Preventing unauthorised users from converting the volume mode + +In this context, an authorised user is one who has access rights to perform `Update` +or `Patch` operations on `VolumeSnapshotContents`, which is a cluster-level resource. +It is upto the cluster administrator to provide these rights only to trusted users +or applications, like backup vendors. + +If the alpha feature is [enabled](https://kubernetes-csi.github.io/docs/) in +`snapshot-controller`, `snapshot-validation-webhook` and `external-provisioner`, +then unauthorised users will not be allowed to modify the volume mode of a PVC +when it is being created from a `VolumeSnapshot`. + +To convert the volume mode, an authorised user must do the following: + +1. Identify the `VolumeSnapshot` that is to be used as the data source for a newly +created PVC in the given namespace. +2. Identify the `VolumeSnapshotContent` bound to the above `VolumeSnapshot`. 
+ + ```shell + kubectl get volumesnapshot -n + ``` + +3. Add the annotation [`snapshot.storage.kubernetes.io/allowVolumeModeChange`](/docs/reference/labels-annotations-taints/#snapshot-storage-kubernetes-io-allowvolumemodechange) +to the `VolumeSnapshotContent`. + +4. This annotation can be added either via software or manually by the authorised +user. The `VolumeSnapshotContent` annotation must look like following manifest fragment: + + ```yaml + kind: VolumeSnapshotContent + metadata: + annotations: + - snapshot.storage.kubernetes.io/allowVolumeModeChange: "true" + ... + ``` + +**Note**: For pre-provisioned `VolumeSnapshotContents`, you must take an extra +step of setting `spec.sourceVolumeMode` field to either `Filesystem` or `Block`, +depending on the mode of the volume from which this snapshot was taken. + +An example is shown below: + + ```yaml + apiVersion: snapshot.storage.k8s.io/v1 + kind: VolumeSnapshotContent + metadata: + annotations: + - snapshot.storage.kubernetes.io/allowVolumeModeChange: "true" + name: new-snapshot-content-test + spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 + sourceVolumeMode: Filesystem + volumeSnapshotRef: + name: new-snapshot-test + namespace: default + ``` + +Repeat steps 1 to 3 for all `VolumeSnapshotContents` whose volume mode needs to be +converted during a backup or restore operation. + +If the annotation shown in step 4 above is present on a `VolumeSnapshotContent` +object, Kubernetes will not prevent the volume mode from being converted. +Users should keep this in mind before they attempt to add the annotation +to any `VolumeSnapshotContent`. + + +### What's next + +[Enable this feature](https://kubernetes-csi.github.io/docs/) and let us know +what you think! + +We hope this feature causes no disruption to existing workflows while preventing +malicious users from exploiting security vulnerabilities in their clusters. + +For any queries or issues, join [Kubernetes on Slack](https://slack.k8s.io/) and +create a thread in the #sig-storage channel. Alternately, create an issue in the +CSI external-snapshotter [repository](https://github.com/kubernetes-csi/external-snapshotter). \ No newline at end of file diff --git a/content/en/blog/_posts/2022-05-20-non-graceful-node-shutdown.md b/content/en/blog/_posts/2022-05-20-non-graceful-node-shutdown.md new file mode 100644 index 0000000000000..f8f4876285bdd --- /dev/null +++ b/content/en/blog/_posts/2022-05-20-non-graceful-node-shutdown.md @@ -0,0 +1,96 @@ +--- +layout: blog +title: "Kubernetes 1.24: Introducing Non-Graceful Node Shutdown Alpha" +date: 2022-05-20 +slug: kubernetes-1-24-non-graceful-node-shutdown-alpha +--- + +**Authors** Xing Yang and Yassine Tijani (VMware) + +Kubernetes v1.24 introduces alpha support for [Non-Graceful Node Shutdown](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown). This feature allows stateful workloads to failover to a different node after the original node is shutdown or in a non-recoverable state such as hardware failure or broken OS. + +## How is this different from Graceful Node Shutdown + +You might have heard about the [Graceful Node Shutdown](/docs/concepts/architecture/nodes/#graceful-node-shutdown) capability of Kubernetes, +and are wondering how the Non-Graceful Node Shutdown feature is different from that. 
Graceful Node Shutdown
allows Kubernetes to detect when a node is shutting down cleanly, and handles that situation appropriately.
A Node Shutdown can be "graceful" only if the node shutdown action can be detected by the kubelet ahead
of the actual shutdown. However, there are cases where a node shutdown action may not be detected by
the kubelet. This could happen either because the shutdown command does not trigger the systemd inhibitor
locks mechanism that the kubelet relies upon, or because of a configuration error
(the `ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` are not configured properly).

Graceful node shutdown relies on Linux-specific support. The kubelet does not watch for upcoming
shutdowns on Windows nodes (this may change in a future Kubernetes release).

When a node is shut down without the kubelet detecting it, pods on that node
also shut down ungracefully. For stateless apps, that's often not a problem (a ReplicaSet adds a new pod once
the cluster detects that the affected node or pod has failed). For stateful apps, the story is more complicated.
If you use a StatefulSet and have a pod from that StatefulSet on a node that fails uncleanly, that affected pod
will be marked as terminating; the StatefulSet cannot create a replacement pod because the pod
still exists in the cluster.
As a result, the application running on the StatefulSet may be degraded or even offline. If the original, shut-down
node comes up again, the kubelet on that original node reports in, deletes the existing pods, and
the control plane makes a replacement pod for that StatefulSet on a different running node.
If the original node has failed and does not come up, those stateful pods would be stuck in a
terminating status on that failed node indefinitely.

```
$ kubectl get pod -o wide
NAME    READY   STATUS        RESTARTS   AGE    IP           NODE                      NOMINATED NODE   READINESS GATES
web-0   1/1     Running       0          100m   10.244.2.4   k8s-node-876-1639279816   <none>           <none>
web-1   1/1     Terminating   0          100m   10.244.1.3   k8s-node-433-1639279804   <none>           <none>
```

## Try out the new non-graceful shutdown handling

To use the non-graceful node shutdown handling, you must enable the `NodeOutOfServiceVolumeDetach`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the `kube-controller-manager`
component.

In the case of a node shutdown, you can manually taint that node as out of service. You should make certain that
the node is truly shut down (not in the middle of restarting) before you add that taint. You could add that
taint following a shutdown that the kubelet did not detect and handle in advance; another case where you
can use that taint is when the node is in a non-recoverable state due to a hardware failure or a broken OS.
The values you set for that taint can be `node.kubernetes.io/out-of-service=nodeshutdown:NoExecute`
or `node.kubernetes.io/out-of-service=nodeshutdown:NoSchedule`.
Provided you have enabled the feature gate mentioned earlier, setting the out-of-service taint on a Node
means that pods on the node will be deleted unless there are matching tolerations on the pods.
Persistent volumes attached to the shutdown node will be detached, and for StatefulSets, replacement pods will
be created successfully on a different running node.
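If some pods must survive that eviction (for example, a node-local diagnostic agent you still want scheduled there), they can declare a matching toleration. Here is a minimal, illustrative sketch; the Pod name and image are placeholders, not part of the feature:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-diagnostics   # placeholder name
spec:
  containers:
    - name: diagnostics
      image: k8s.gcr.io/pause:3.6   # placeholder image
  tolerations:
    # Matches the out-of-service taint described above, so this pod is not
    # deleted when the taint is applied with the NoExecute effect.
    - key: node.kubernetes.io/out-of-service
      operator: Equal
      value: nodeshutdown
      effect: NoExecute
```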
+ +``` +$ kubectl taint nodes node.kubernetes.io/out-of-service=nodeshutdown:NoExecute + +$ kubectl get pod -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +web-0 1/1 Running 0 150m 10.244.2.4 k8s-node-876-1639279816 +web-1 1/1 Running 0 10m 10.244.1.7 k8s-node-433-1639279804 +``` + +Note: Before applying the out-of-service taint, you **must** verify that a node is already in shutdown or power off state (not in the middle of restarting), either because the user intentionally shut it down or the node is down due to hardware failures, OS issues, etc. + +Once all the workload pods that are linked to the out-of-service node are moved to a new running node, and the shutdown node has been recovered, you should remove +that taint on the affected node after the node is recovered. +If you know that the node will not return to service, you could instead delete the node from the cluster. + +## What’s next? + +Depending on feedback and adoption, the Kubernetes team plans to push the Non-Graceful Node Shutdown implementation to Beta in either 1.25 or 1.26. + +This feature requires a user to manually add a taint to the node to trigger workloads failover and remove the taint after the node is recovered. In the future, we plan to find ways to automatically detect and fence nodes that are shutdown/failed and automatically failover workloads to another node. + +## How can I learn more? + +Check out the [documentation](/docs/concepts/architecture/nodes/#non-graceful-node-shutdown) +for non-graceful node shutdown. + +## How to get involved? + +This feature has a long story. Yassine Tijani ([yastij](https://github.com/yastij)) started the KEP more than two years ago. Xing Yang ([xing-yang](https://github.com/xing-yang)) continued to drive the effort. There were many discussions among SIG Storage, SIG Node, and API reviewers to nail down the design details. Ashutosh Kumar ([sonasingh46](https://github.com/sonasingh46)) did most of the implementation and brought it to Alpha in Kubernetes 1.24. + +We want to thank the following people for their insightful reviews: Tim Hockin ([thockin](https://github.com/thockin)) for his guidance on the design, Jing Xu ([jingxu97](https://github.com/jingxu97)), Hemant Kumar ([gnufied](https://github.com/gnufied)), and Michelle Au ([msau42](https://github.com/msau42)) for reviews from SIG Storage side, and Mrunal Patel ([mrunalp](https://github.com/mrunalp)), David Porter ([bobbypage](https://github.com/bobbypage)), Derek Carr ([derekwaynecarr](https://github.com/derekwaynecarr)), and Danielle Endocrimes ([endocrimes](https://github.com/endocrimes)) for reviews from SIG Node side. + +There are many people who have helped review the design and implementation along the way. We want to thank everyone who has contributed to this effort including the about 30 people who have reviewed the [KEP](https://github.com/kubernetes/enhancements/pull/1116) and implementation over the last couple of years. + +This feature is a collaboration between SIG Storage and SIG Node. For those interested in getting involved with the design and development of any part of the Kubernetes Storage system, join the [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG). 
For those interested in getting involved with the design and development of the components that support the controlled interactions between pods and host resources, join the [Kubernetes Node SIG](https://github.com/kubernetes/community/tree/master/sig-node). diff --git a/content/en/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md b/content/en/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md new file mode 100644 index 0000000000000..c17453a6def84 --- /dev/null +++ b/content/en/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md @@ -0,0 +1,137 @@ +--- +layout: blog +title: "Kubernetes 1.24: Avoid Collisions Assigning IP Addresses to Services" +date: 2022-05-23 +slug: service-ip-dynamic-and-static-allocation +--- + +**Author:** Antonio Ojea (Red Hat) + + +In Kubernetes, [Services](/docs/concepts/services-networking/service/) are an abstract way to expose +an application running on a set of Pods. Services +can have a cluster-scoped virtual IP address (using a Service of `type: ClusterIP`). +Clients can connect using that virtual IP address, and Kubernetes then load-balances traffic to that +Service across the different backing Pods. + +## How Service ClusterIPs are allocated? + +A Service `ClusterIP` can be assigned: + +_dynamically_ +: the cluster's control plane automatically picks a free IP address from within the configured IP range for `type: ClusterIP` Services. + +_statically_ +: you specify an IP address of your choice, from within the configured IP range for Services. + +Across your whole cluster, every Service `ClusterIP` must be unique. +Trying to create a Service with a specific `ClusterIP` that has already +been allocated will return an error. + +## Why do you need to reserve Service Cluster IPs? + +Sometimes you may want to have Services running in well-known IP addresses, so other components and +users in the cluster can use them. + +The best example is the DNS Service for the cluster. Some Kubernetes installers assign the 10th address from +the Service IP range to the DNS service. Assuming you configured your cluster with Service IP range +10.96.0.0/16 and you want your DNS Service IP to be 10.96.0.10, you'd have to create a Service like +this: + +```yaml +apiVersion: v1 +kind: Service +metadata: + labels: + k8s-app: kube-dns + kubernetes.io/cluster-service: "true" + kubernetes.io/name: CoreDNS + name: kube-dns + namespace: kube-system +spec: + clusterIP: 10.96.0.10 + ports: + - name: dns + port: 53 + protocol: UDP + targetPort: 53 + - name: dns-tcp + port: 53 + protocol: TCP + targetPort: 53 + selector: + k8s-app: kube-dns + type: ClusterIP +``` + +but as I explained before, the IP address 10.96.0.10 has not been reserved; if other Services are created +before or in parallel with dynamic allocation, there is a chance they can allocate this IP, hence, +you will not be able to create the DNS Service because it will fail with a conflict error. + +## How can you avoid Service ClusterIP conflicts? {#avoid-ClusterIP-conflict} + +In Kubernetes 1.24, you can enable a new feature gate `ServiceIPStaticSubrange`. +Turning this on allows you to use a different IP +allocation strategy for Services, reducing the risk of collision. + +The `ClusterIP` range will be divided, based on the formula `min(max(16, cidrSize / 16), 256)`, +described as _never less than 16 or more than 256 with a graduated step between them_. + +Dynamic IP assignment will use the upper band by default, once this has been exhausted it will +use the lower range. 
This will allow users to use static allocations on the lower band with a low
risk of collision.

Examples:

#### Service IP CIDR block: 10.96.0.0/24

Range Size: 2<sup>8</sup> - 2 = 254
Band Offset: `min(max(16, 256/16), 256)` = `min(16, 256)` = 16
Static band start: 10.96.0.1
Static band end: 10.96.0.16
Range end: 10.96.0.254

{{< mermaid >}}
pie showData
    title 10.96.0.0/24
    "Static" : 16
    "Dynamic" : 238
{{< /mermaid >}}

#### Service IP CIDR block: 10.96.0.0/20

Range Size: 2<sup>12</sup> - 2 = 4094
Band Offset: `min(max(16, 4096/16), 256)` = `min(256, 256)` = 256
Static band start: 10.96.0.1
Static band end: 10.96.1.0
Range end: 10.96.15.254

{{< mermaid >}}
pie showData
    title 10.96.0.0/20
    "Static" : 256
    "Dynamic" : 3838
{{< /mermaid >}}

#### Service IP CIDR block: 10.96.0.0/16

Range Size: 2<sup>16</sup> - 2 = 65534
Band Offset: `min(max(16, 65536/16), 256)` = `min(4096, 256)` = 256
Static band start: 10.96.0.1
Static band end: 10.96.1.0
Range end: 10.96.255.254

{{< mermaid >}}
pie showData
    title 10.96.0.0/16
    "Static" : 256
    "Dynamic" : 65278
{{< /mermaid >}}

## Get involved with SIG Network

The current SIG-Network [KEPs](https://github.com/orgs/kubernetes/projects/10) and [issues](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Asig%2Fnetwork) on GitHub illustrate the SIG’s areas of emphasis.

[SIG Network meetings](https://github.com/kubernetes/community/tree/master/sig-network) are a friendly, welcoming venue for you to connect with the community and share your ideas.
Looking forward to hearing from you!

diff --git a/content/en/blog/_posts/2022-05-25-contextual-logging/index.md b/content/en/blog/_posts/2022-05-25-contextual-logging/index.md
new file mode 100644
index 0000000000000..2d5ef5c4c7229
--- /dev/null
+++ b/content/en/blog/_posts/2022-05-25-contextual-logging/index.md
@@ -0,0 +1,251 @@
---
layout: blog
title: "Contextual Logging in Kubernetes 1.24"
date: 2022-05-25
slug: contextual-logging
canonicalUrl: https://kubernetes.dev/blog/2022/05/25/contextual-logging/
---

**Authors:** Patrick Ohly (Intel)

The [Structured Logging Working
Group](https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md)
has added new capabilities to the logging infrastructure in Kubernetes
1.24. This blog post explains how developers can take advantage of those to
make log output more useful and how they can get involved with improving Kubernetes.

## Structured logging

The goal of [structured
logging](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/1602-structured-logging/README.md)
is to replace C-style formatting and the resulting opaque log strings with log
entries that have a well-defined syntax for storing message and parameters
separately, for example as a JSON struct.

When using the traditional klog text output format for structured log calls,
strings were originally printed with `\n` escape sequences, except when
embedded inside a struct. For structs, log entries could still span multiple
lines, with no clean way to split the log stream into individual entries:

```
I1112 14:06:35.783529  328441 structured_logging.go:51] "using InfoS" longData={Name:long Data:Multiple
lines
with quite a bit
of text. internal:0}
I1112 14:06:35.783549  328441 structured_logging.go:52] "using InfoS with\nthe message across multiple lines" int=1 stringData="long: Multiple\nlines\nwith quite a bit\nof text."
str="another value" +``` + +Now, the `<` and `>` markers along with indentation are used to ensure that splitting at a +klog header at the start of a line is reliable and the resulting output is human-readable: + +``` +I1126 10:31:50.378204 121736 structured_logging.go:59] "using InfoS" longData=< + {Name:long Data:Multiple + lines + with quite a bit + of text. internal:0} + > +I1126 10:31:50.378228 121736 structured_logging.go:60] "using InfoS with\nthe message across multiple lines" int=1 stringData=< + long: Multiple + lines + with quite a bit + of text. + > str="another value" +``` + +Note that the log message itself is printed with quoting. It is meant to be a +fixed string that identifies a log entry, so newlines should be avoided there. + +Before Kubernetes 1.24, some log calls in kube-scheduler still used `klog.Info` +for multi-line strings to avoid the unreadable output. Now all log calls have +been updated to support structured logging. + +## Contextual logging + +[Contextual logging](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/3077-contextual-logging/README.md) +is based on the [go-logr API](https://github.com/go-logr/logr#a-minimal-logging-api-for-go). The key +idea is that libraries are passed a logger instance by their caller and use +that for logging instead of accessing a global logger. The binary decides about +the logging implementation, not the libraries. The go-logr API is designed +around structured logging and supports attaching additional information to a +logger. + +This enables additional use cases: + +- The caller can attach additional information to a logger: + - [`WithName`](https://pkg.go.dev/github.com/go-logr/logr#Logger.WithName) adds a prefix + - [`WithValues`](https://pkg.go.dev/github.com/go-logr/logr#Logger.WithValues) adds key/value pairs + + When passing this extended logger into a function and a function uses it + instead of the global logger, the additional information is + then included in all log entries, without having to modify the code that + generates the log entries. This is useful in highly parallel applications + where it can become hard to identify all log entries for a certain operation + because the output from different operations gets interleaved. + +- When running unit tests, log output can be associated with the current test. + Then when a test fails, only the log output of the failed test gets shown + by `go test`. That output can also be more verbose by default because it + will not get shown for successful tests. Tests can be run in parallel + without interleaving their output. + +One of the design decisions for contextual logging was to allow attaching a +logger as value to a `context.Context`. Since the logger encapsulates all +aspects of the intended logging for the call, it is *part* of the context and +not just *using* it. A practical advantage is that many APIs already have a +`ctx` parameter or adding one has additional advantages, like being able to get +rid of `context.TODO()` calls inside the functions. + +Another decision was to not break compatibility with klog v2: + +- Libraries that use the traditional klog logging calls in a binary that has + set up contextual logging will work and log through the logging backend + chosen by the binary. However, such log output will not include the + additional information and will not work well in unit tests, so libraries + should be modified to support contextual logging. 
The [migration guide](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md) + for structured logging has been extended to also cover contextual logging. + +- When a library supports contextual logging and retrieves a logger from its + context, it will still work in a binary that does not initialize contextual + logging because it will get a logger that logs through klog. + +In Kubernetes 1.24, contextual logging is a new alpha feature with +`ContextualLogging` as feature gate. When disabled (the default), the new klog +API calls for contextual logging (see below) become no-ops to avoid performance +or functional regressions. + +No Kubernetes component has been converted yet. An [example program](https://github.com/kubernetes/kubernetes/blob/v1.24.0-beta.0/staging/src/k8s.io/component-base/logs/example/cmd/logger.go) +in the Kubernetes repository demonstrates how to enable contextual logging in a +binary and how the output depends on the binary's parameters: + +```console +$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/ +$ go run . --help +... + --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are: + AllAlpha=true|false (ALPHA - default=false) + AllBeta=true|false (BETA - default=false) + ContextualLogging=true|false (ALPHA - default=false) +$ go run . --feature-gates ContextualLogging=true +... +I0404 18:00:02.916429 451895 logger.go:94] "example/myname: runtime" foo="bar" duration="1m0s" +I0404 18:00:02.916447 451895 logger.go:95] "example: another runtime" foo="bar" duration="1m0s" +``` + +The `example` prefix and `foo="bar"` were added by the caller of the function +which logs the `runtime` message and `duration="1m0s"` value. + +The sample code for klog includes an +[example](https://github.com/kubernetes/klog/blob/v2.60.1/ktesting/example/example_test.go) +for a unit test with per-test output. + +## klog enhancements + +### Contextual logging API + +The following calls manage the lookup of a logger: + +[`FromContext`](https://pkg.go.dev/k8s.io/klog/v2#FromContext) +: from a `context` parameter, with fallback to the global logger + +[`Background`](https://pkg.go.dev/k8s.io/klog/v2#Background) +: the global fallback, with no intention to support contextual logging + +[`TODO`](https://pkg.go.dev/k8s.io/klog/v2#TODO) +: the global fallback, but only as a temporary solution until the function gets extended to accept + a logger through its parameters + +[`SetLoggerWithOptions`](https://pkg.go.dev/k8s.io/klog/v2#SetLoggerWithOptions) +: changes the fallback logger; when called with [`ContextualLogger(true)`](https://pkg.go.dev/k8s.io/klog/v2#ContextualLogger), + the logger is ready to be called directly, in which case logging will be done + without going through klog + +To support the feature gate mechanism in Kubernetes, klog has wrapper calls for +the corresponding go-logr calls and a global boolean controlling their behavior: + +- [`LoggerWithName`](https://pkg.go.dev/k8s.io/klog/v2#LoggerWithName) +- [`LoggerWithValues`](https://pkg.go.dev/k8s.io/klog/v2#LoggerWithValues) +- [`NewContext`](https://pkg.go.dev/k8s.io/klog/v2#NewContext) +- [`EnableContextualLogging`](https://pkg.go.dev/k8s.io/klog/v2#EnableContextualLogging) + +Usage of those functions in Kubernetes code is enforced with a linter +check. 
The klog default for contextual logging is to enable the functionality +because it is considered stable in klog. It is only in Kubernetes binaries +where that default gets overridden and (in some binaries) controlled via the +`--feature-gate` parameter. + +### ktesting logger + +The new [ktesting](https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/ktesting) package +implements logging through `testing.T` using klog's text output format. It has +a [single API call](https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/ktesting#NewTestContext) for +instrumenting a test case and [support for command line flags](https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/ktesting/init). + +### klogr + +[`klog/klogr`](https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/klogr) continues to be +supported and it's default behavior is unchanged: it formats structured log +entries using its own, custom format and prints the result via klog. + +However, this usage is discouraged because that format is neither +machine-readable (in contrast to real JSON output as produced by zapr, the +go-logr implementation used by Kubernetes) nor human-friendly (in contrast to +the klog text format). + +Instead, a klogr instance should be created with +[`WithFormat(FormatKlog)`](https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/klogr#WithFormat) +which chooses the klog text format. A simpler construction method with the same +result is the new +[`klog.NewKlogr`](https://pkg.go.dev/k8s.io/klog/v2#NewKlogr). That is the +logger that klog returns as fallback when nothing else is configured. + +### Reusable output test + +A lot of go-logr implementations have very similar unit tests where they check +the result of certain log calls. If a developer didn't know about certain +caveats like for example a `String` function that panics when called, then it +is likely that both the handling of such caveats and the unit test are missing. + +[`klog.test`](https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/test) is a reusable set +of test cases that can be applied to a go-logr implementation. + +### Output flushing + +klog used to start a goroutine unconditionally during `init` which flushed +buffered data at a hard-coded interval. Now that goroutine is only started on +demand (i.e. when writing to files with buffering) and can be controlled with +[`StopFlushDaemon`](https://pkg.go.dev/k8s.io/klog/v2#StopFlushDaemon) and +[`StartFlushDaemon`](https://pkg.go.dev/k8s.io/klog/v2#StartFlushDaemon). + +When a go-logr implementation buffers data, flushing that data can be +integrated into [`klog.Flush`](https://pkg.go.dev/k8s.io/klog/v2#Flush) by +registering the logger with the +[`FlushLogger`](https://pkg.go.dev/k8s.io/klog/v2#FlushLogger) option. + +### Various other changes + +For a description of all other enhancements see in the [release notes](https://github.com/kubernetes/klog/releases). + +## logcheck + +Originally designed as a linter for structured log calls, the + [`logcheck`](https://github.com/kubernetes/klog/tree/788efcdee1e9be0bfbe5b076343d447314f2377e/hack/tools/logcheck) +tool has been enhanced to support also contextual logging and traditional klog +log calls. These enhanced checks already found bugs in Kubernetes, like calling +`klog.Info` instead of `klog.Infof` with a format string and parameters. + +It can be included as a plugin in a `golangci-lint` invocation, which is how +[Kubernetes uses it now](https://github.com/kubernetes/kubernetes/commit/17e3c555c5115f8c9176bae10ba45baa04d23a7b), +or get invoked stand-alone. 
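As a rough illustration of the class of bug mentioned above, the snippet below shows the kind of call that logcheck flags, next to the corrected variants; the pod name is a made-up placeholder:

```go
package main

import (
	"k8s.io/klog/v2"
)

func main() {
	podName := "nginx-0" // placeholder value

	// Bug of the kind logcheck reports: klog.Info does not interpret format
	// verbs, so this logs a literal "%s" plus the pod name as a separate argument.
	klog.Info("failed to sync pod %s", podName)

	// Correct C-style call:
	klog.Infof("failed to sync pod %s", podName)

	// Preferred structured call:
	klog.InfoS("Failed to sync pod", "pod", podName)

	klog.Flush()
}
```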
We are in the process of [moving the tool](https://github.com/kubernetes/klog/issues/312) into a new repository because it isn't
really related to klog and its releases should be tracked and tagged properly.

## Next steps

The [Structured Logging WG](https://github.com/kubernetes/community/tree/master/wg-structured-logging)
is always looking for new contributors. The migration
away from C-style logging is now going to target structured, contextual logging
in one step to reduce the overall code churn and number of PRs. Changing log
calls is a good first contribution to Kubernetes and an opportunity to get to
know code in various different areas.
diff --git a/content/en/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md b/content/en/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md
new file mode 100644
index 0000000000000..aa6257eb3ef6a
--- /dev/null
+++ b/content/en/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md
@@ -0,0 +1,148 @@
---
layout: blog
title: 'Kubernetes 1.24: Maximum Unavailable Replicas for StatefulSet'
date: 2022-05-27
slug: maxunavailable-for-statefulset
---

**Author:** Mayank Kumar (Salesforce)

Kubernetes [StatefulSets](/docs/concepts/workloads/controllers/statefulset/), since their introduction in
1.5 and becoming stable in 1.9, have been widely used to run stateful applications. They provide stable pod identity, persistent
per pod storage and ordered graceful deployment, scaling and rolling updates. You can think of StatefulSet as the atomic building
block for running complex stateful applications. As the use of Kubernetes has grown, so has the number of scenarios requiring
StatefulSets. Many of these scenarios require faster rolling updates than the currently supported one-pod-at-a-time updates, in the
case where you're using the `OrderedReady` Pod management policy for a StatefulSet.


Here are some examples:

- I am using a StatefulSet to orchestrate a multi-instance, cache-based application where the size of the cache is large. The cache
  starts cold and requires a significant amount of time before the container can start. There could be more initial startup tasks
  that are required. A RollingUpdate on this StatefulSet would take a lot of time before the application is fully updated. If the
  StatefulSet supported updating more than one pod at a time, it would result in a much faster update.

- My stateful application is composed of leaders and followers or one writer and multiple readers. I have multiple readers or
  followers and my application can tolerate multiple pods going down at the same time. I want to update this application more than
  one pod at a time so that I get the new updates rolled out quickly, especially if the number of instances of my application is
  large. Note that my application still requires a unique identity per pod.


In order to support such scenarios, Kubernetes 1.24 includes a new alpha feature to help. Before you can use the new feature you must
enable the `MaxUnavailableStatefulSet` feature gate. Once you enable that, you can specify a new field called `maxUnavailable`, part
of the `spec` for a StatefulSet.
For example: + +``` +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: web + namespace: default +spec: + podManagementPolicy: OrderedReady # you must set OrderedReady + replicas: 5 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - image: k8s.gcr.io/nginx-slim:0.8 + imagePullPolicy: IfNotPresent + name: nginx + updateStrategy: + rollingUpdate: + maxUnavailable: 2 # this is the new alpha field, whose default value is 1 + partition: 0 + type: RollingUpdate +``` + +If you enable the new feature and you don't specify a value for `maxUnavailable` in a StatefulSet, Kubernetes applies a default +`maxUnavailable: 1`. This matches the behavior you would see if you don't enable the new feature. + +I'll run through a scenario based on that example manifest to demonstrate how this feature works. I will deploy a StatefulSet that +has 5 replicas, with `maxUnavailable` set to 2 and `partition` set to 0. + +I can trigger a rolling update by changing the image to `k8s.gcr.io/nginx-slim:0.9`. Once I initiate the rolling update, I can +watch the pods update 2 at a time as the current value of maxUnavailable is 2. The below output shows a span of time and is not +complete. The maxUnavailable can be an absolute number (for example, 2) or a percentage of desired Pods (for example, 10%). The +absolute number is calculated from percentage by rounding down. +``` +kubectl get pods --watch +``` + +``` +NAME READY STATUS RESTARTS AGE +web-0 1/1 Running 0 85s +web-1 1/1 Running 0 2m6s +web-2 1/1 Running 0 106s +web-3 1/1 Running 0 2m47s +web-4 1/1 Running 0 2m27s +web-4 1/1 Terminating 0 5m43s ----> start terminating 4 +web-3 1/1 Terminating 0 6m3s ----> start terminating 3 +web-3 0/1 Terminating 0 6m7s +web-3 0/1 Pending 0 0s +web-3 0/1 Pending 0 0s +web-4 0/1 Terminating 0 5m48s +web-4 0/1 Terminating 0 5m48s +web-3 0/1 ContainerCreating 0 2s +web-3 1/1 Running 0 2s +web-4 0/1 Pending 0 0s +web-4 0/1 Pending 0 0s +web-4 0/1 ContainerCreating 0 0s +web-4 1/1 Running 0 1s +web-2 1/1 Terminating 0 5m46s ----> start terminating 2 (only after both 4 and 3 are running) +web-1 1/1 Terminating 0 6m6s ----> start terminating 1 +web-2 0/1 Terminating 0 5m47s +web-1 0/1 Terminating 0 6m7s +web-1 0/1 Pending 0 0s +web-1 0/1 Pending 0 0s +web-1 0/1 ContainerCreating 0 1s +web-1 1/1 Running 0 2s +web-2 0/1 Pending 0 0s +web-2 0/1 Pending 0 0s +web-2 0/1 ContainerCreating 0 0s +web-2 1/1 Running 0 1s +web-0 1/1 Terminating 0 6m6s ----> start terminating 0 (only after 2 and 1 are running) +web-0 0/1 Terminating 0 6m7s +web-0 0/1 Pending 0 0s +web-0 0/1 Pending 0 0s +web-0 0/1 ContainerCreating 0 0s +web-0 1/1 Running 0 1s +``` +Note that as soon as the rolling update starts, both 4 and 3 (the two highest ordinal pods) start terminating at the same time. Pods +with ordinal 4 and 3 may become ready at their own pace. As soon as both pods 4 and 3 are ready, pods 2 and 1 start terminating at the +same time. When pods 2 and 1 are both running and ready, pod 0 starts terminating. + +In Kubernetes, updates to StatefulSets follow a strict ordering when updating Pods. In this example, the update starts at replica 4, then +replica 3, then replica 2, and so on, one pod at a time. When going one pod at a time, its not possible for 3 to be running and ready +before 4. When `maxUnavailable` is more than 1 (in the example scenario I set `maxUnavailable` to 2), it is possible that replica 3 becomes +ready and running before replica 4 is ready—and that is ok. 
If you're a developer and you set `maxUnavailable` to more than 1, you should
know that this outcome is possible and you must ensure that your application is able to handle such ordering issues
if they occur. When you set `maxUnavailable` greater than 1, the ordering is guaranteed in between each batch of pods being updated. That guarantee
means that pods in update batch 2 (replicas 2 and 1) cannot start updating until the pods from batch 0 (replicas 4 and 3) are ready.

Although Kubernetes refers to these as _replicas_, your stateful application may have a different view and each pod of the StatefulSet may
be holding completely different data than other pods. The important thing here is that updates to StatefulSets happen in batches, and you can
now have a batch size larger than 1 (as an alpha feature).

Also note that the above behavior is with `podManagementPolicy: OrderedReady`. If you defined a StatefulSet with `podManagementPolicy: Parallel`,
not only are `maxUnavailable` replicas terminated at the same time, but `maxUnavailable` replicas also start in the `ContainerCreating`
phase at the same time. This is called bursting.

So, now you may have a lot of questions, such as:
- What is the behavior when you set `podManagementPolicy: Parallel`?
- What is the behavior when `partition` is set to a value other than `0`?

It might be better to try it and see for yourself. This is an alpha feature, and the Kubernetes contributors are looking for feedback on this feature. Did
this help you achieve your stateful scenarios? Did you find a bug, or do you think the behavior as implemented is not intuitive or can
break applications or catch them by surprise? Please [open an issue](https://github.com/kubernetes/kubernetes/issues) to let us know.

## Further reading and next steps {#next-steps}
- [Maximum unavailable Pods](/docs/concepts/workloads/controllers/statefulset/#maximum-unavailable-pods)
- [KEP for MaxUnavailable for StatefulSet](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/961-maxunavailable-for-statefulset)
- [Implementation](https://github.com/kubernetes/kubernetes/pull/82162/files)
- [Enhancement Tracking Issue](https://github.com/kubernetes/enhancements/issues/961)
diff --git a/content/en/blog/_posts/2022-06-01-annual-report-2021.md b/content/en/blog/_posts/2022-06-01-annual-report-2021.md
new file mode 100644
index 0000000000000..e0a41303573b0
--- /dev/null
+++ b/content/en/blog/_posts/2022-06-01-annual-report-2021.md
@@ -0,0 +1,19 @@
---
layout: blog
title: "Annual Report Summary 2021"
date: 2022-06-01
slug: annual-report-summary-2021
---

**Author:** Paris Pittman (Steering Committee)

Last year, we published our first [Annual Report Summary](/blog/2021/06/28/announcing-kubernetes-community-group-annual-reports/) for 2020 and it's already time for our second edition!

[2021 Annual Report Summary](https://www.cncf.io/reports/kubernetes-annual-report-2021/)

This summary reflects the work that has been done in 2021 and the initiatives on deck for the rest of 2022. Please forward to organizations and individuals participating in upstream activities, planning cloud native strategies, and/or those looking to help out. To find a specific community group's complete report, go to the [kubernetes/community repo](https://github.com/kubernetes/community) under the groups folder.
Example: [sig-api-machinery/annual-report-2021.md](https://github.com/kubernetes/community/blob/master/sig-api-machinery/annual-report-2021.md)

You’ll see that this report summary is a growth area in itself. It takes us roughly 6 months to prepare and execute, which isn’t helpful or valuable to anyone as a fast-moving project with short- and long-term needs. How can we make this better? Provide your feedback here: https://github.com/kubernetes/steering/issues/242

Reference:
[Annual Report Documentation](https://github.com/kubernetes/community/blob/master/committee-steering/governance/annual-reports.md)
diff --git a/content/en/community/code-of-conduct.md b/content/en/community/code-of-conduct.md
index a66b0572bffd9..84c95370a33f0 100644
--- a/content/en/community/code-of-conduct.md
+++ b/content/en/community/code-of-conduct.md
@@ -8,9 +8,9 @@ community_styles_migrated: true

Kubernetes follows the -CNCF Code of Conduct. +CNCF Code of Conduct. The text of the CNCF CoC is replicated below, as of -commit 214585e. +commit 71b12a2. If you notice that this is out of date, please file an issue.

diff --git a/content/en/community/static/cncf-code-of-conduct.md b/content/en/community/static/cncf-code-of-conduct.md index d07444c418368..fb3202b24a24f 100644 --- a/content/en/community/static/cncf-code-of-conduct.md +++ b/content/en/community/static/cncf-code-of-conduct.md @@ -1,45 +1,72 @@ -## CNCF Community Code of Conduct v1.0 + https://github.com/cncf/foundation/blob/main/code-of-conduct.md --> +## CNCF Community Code of Conduct v1.1 ### Contributor Code of Conduct -As contributors and maintainers of this project, and in the interest of fostering +As contributors and maintainers in the CNCF community, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. -We are committed to making participation in this project a harassment-free experience for -everyone, regardless of level of experience, gender, gender identity and expression, +We are committed to making participation in the CNCF community a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, or nationality. -Examples of unacceptable behavior by participants include: +## Scope -* The use of sexualized language or imagery -* Personal attacks -* Trolling or insulting/derogatory comments +This code of conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. + +### CNCF Events + +CNCF events, or events run by the Linux Foundation with professional events staff, are governed by the Linux Foundation [Events Code of Conduct](https://events.linuxfoundation.org/code-of-conduct/) available on the event page. This is designed to be used in conjunction with the CNCF Code of Conduct. + +## Our Standards + +Examples of behavior that contributes to a positive environment include: + +* Demonstrating empathy and kindness toward other people +* Being respectful of differing opinions, viewpoints, and experiences +* Giving and gracefully accepting constructive feedback +* Accepting responsibility and apologizing to those affected by our mistakes, + and learning from the experience +* Focusing on what is best not just for us as individuals, but for the + overall community + +Examples of unacceptable behavior include: + +* The use of sexualized language or imagery, and sexual attention or + advances of any kind +* Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment -* Publishing other's private information, such as physical or electronic addresses, - without explicit permission -* Other unethical or unprofessional conduct. - -Project maintainers have the right and responsibility to remove, edit, or reject -comments, commits, code, wiki edits, issues, and other contributions that are not -aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers -commit themselves to fairly and consistently applying these principles to every aspect -of managing this project. 
Project maintainers who do not follow or enforce the Code of +* Publishing others' private information, such as a physical or email + address, without their explicit permission +* Other conduct which could reasonably be considered inappropriate in a + professional setting + +Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. +By adopting this Code of Conduct, project maintainers commit themselves to fairly and consistently applying these principles to every aspect +of managing this project. +Project maintainers who do not follow or enforce the Code of Conduct may be permanently removed from the project team. -This code of conduct applies both within project spaces and in public spaces -when an individual is representing the project or its community. +## Reporting -Instances of abusive, harassing, or otherwise unacceptable behavior in Kubernetes may be reported by contacting the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) via . For other projects, please contact a CNCF project maintainer or our mediator, Mishi Choudhary . +For incidents occurring in the Kubernetes community, contact the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) via . You can expect a response within three business days. -This Code of Conduct is adapted from the Contributor Covenant -(https://contributor-covenant.org), version 1.2.0, available at -https://contributor-covenant.org/version/1/2/0/ +For other projects, please contact the CNCF staff via . You can expect a response within three business days. + +In matters that require an outside mediator, CNCF has retained Mishi Choudhary (mishi@linux.com). Use of an outside mediator can be requested when reporting or used at CNCF staff's discretion. In general, contacting directly is preferred. + + +## Enforcement -### CNCF Events Code of Conduct +The Kubernetes project's [Code of Conduct Committee](https://github.com/kubernetes/community/tree/master/committee-code-of-conduct) enforces code of conduct issues. For all other projects, the CNCF enforces code of conduct issues. -CNCF events are governed by the Linux Foundation [Code of Conduct](https://events.linuxfoundation.org/code-of-conduct/) available on the event page. This is designed to be compatible with the above policy and also includes more details on responding to incidents. +Both bodies try to resolve incidents without punishment, but may remove people from the project or CNCF communities at their discretion. + +## Acknowledgements + +This Code of Conduct is adapted from the Contributor Covenant +(http://contributor-covenant.org), version 2.0 available at +http://contributor-covenant.org/version/2/0/code_of_conduct/ \ No newline at end of file diff --git a/content/en/community/static/community-values.md b/content/en/community/static/community-values.md index f6469a3e61ad2..6fd1a1a06ccf2 100644 --- a/content/en/community/static/community-values.md +++ b/content/en/community/static/community-values.md @@ -3,26 +3,26 @@ # Kubernetes Community Values -Kubernetes Community culture is frequently cited as a substantial contributor to the meteoric rise of this Open Source project. Below are the distilled values which have evolved over the last many years in our community pushing our project and peers toward constant improvement. 
+Kubernetes Community culture contributes substantially to the project's success. The following values have evolved over time, pushing our project and peers toward constant improvement. ## Distribution is better than centralization -The scale of the Kubernetes project is only viable through high-trust and high-visibility distribution of work, which includes delegation of authority, decision making, technical design, code ownership, and documentation. Distributed asynchronous ownership, collaboration, communication and decision making are the cornerstone of our world-wide community. +The scale of the Kubernetes project is only viable through high-trust and high-visibility distribution of work, which includes delegation of authority, decision making, technical design, code ownership, and documentation. Distributed asynchronous ownership, collaboration, communication and decision making are the cornerstones of our world-wide community. ## Community over product or company -We are here as a community first, our allegiance is to the intentional stewardship of the Kubernetes project for the benefit of all its members and users everywhere. We support working together publicly for the common goal of a vibrant interoperable ecosystem providing an excellent experience for our users. Individuals gain status through work, companies gain status through their commitments to support this community and fund the resources necessary for the project to operate. +We are here as a community first. Our allegiance is to the intentional stewardship of the Kubernetes project for the benefit of all its members and users everywhere. We support working together publicly for the common goal of a vibrant interoperable ecosystem, providing an excellent experience for our users. Individuals gain status through work. Companies gain status through their commitments to support this community and fund the resources necessary for the project to operate. ## Automation over process -Large projects have a lot of less exciting, yet, hard work. We value time spent automating repetitive work more highly than toil. Where that work cannot be automated, it is our culture to recognize and reward all types of contributions. However, heroism is not sustainable. +Large projects have a lot of hard yet less exciting work. We value time spent automating repetitive work more highly than toil. Where work cannot be automated, our culture recognizes and rewards all types of contributions while recognizing that heroism is not sustainable. ## Inclusive is better than exclusive -Broadly successful and useful technology requires different perspectives and skill sets which can only be heard in a welcoming and respectful environment. Community membership is a privilege, not a right. Community Leadership is earned through effort, scope, quality, quantity, and duration of contributions. Our community shows respect for the time and effort put into a discussion regardless of where a contributor is on their growth path. +Broadly successful and useful technologies require different perspectives and skill sets, which can only be heard in a welcoming and respectful environment. Community membership is a privilege, not a right. Community members earn leadership through effort, scope, quality, quantity, and duration of contributions. Our community respects the time and effort put into a discussion, regardless of where a contributor is on their growth path. 
## Evolution is better than stagnation -Openness to new ideas and studied technological evolution make Kubernetes a stronger project. Continual improvement, servant leadership, mentorship and respect are the foundations of the Kubernetes project culture. It is the duty for leaders in the Kubernetes community to find, sponsor, and promote new community members. Leaders should expect to step aside. Community members should expect to step up. +Openness to new ideas and studied technological evolution make Kubernetes a stronger project. Continual improvement, servant leadership, mentorship, and respect are the foundations of Kubernetes culture. Kubernetes community leaders have a duty to find, sponsor, and promote new community members. Leaders should expect to step aside. Community members should expect to step up. **"Culture eats strategy for breakfast." --Peter Drucker** diff --git a/content/en/community/values.md b/content/en/community/values.md index 675e93c865b71..2974dc1e434b6 100644 --- a/content/en/community/values.md +++ b/content/en/community/values.md @@ -9,7 +9,15 @@ community_styles_migrated: true sitemap: priority: 0.1 --- +
+

+This page is a replicated version of +Kubernetes Community Values, as of +commit 5c64274. +If you notice that this is out of date, please +file an issue. +

{{< include "/static/community-values.md" >}}
diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md index a4814aab4b45e..df384800e9606 100644 --- a/content/en/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -2,7 +2,7 @@ reviewers: - dchen1107 - liggitt -title: Control Plane-Node Communication +title: Communication between Nodes and the Control Plane content_type: concept weight: 20 aliases: @@ -11,62 +11,109 @@ aliases: -This document catalogs the communication paths between the control plane (apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider). - - +This document catalogs the communication paths between the API server and the Kubernetes cluster. +The intent is to allow users to customize their installation to harden the network configuration +such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud +provider). ## Node to Control Plane -Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminates at the apiserver. None of the other control plane components are designed to expose remote services. The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled. -One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed. -Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates. +Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) +terminates at the API server. None of the other control plane components are designed to expose +remote services. The API server is configured to listen for remote connections on a secure HTTPS +port (typically 443) with one or more forms of client +[authentication](/docs/reference/access-authn-authz/authentication/) enabled. +One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be +enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) +or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) +are allowed. + +Nodes should be provisioned with the public root certificate for the cluster such that they can +connect securely to the API server along with valid client credentials. A good approach is that the +client credentials provided to the kubelet are in the form of a client certificate. 
See +[kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) +for automated provisioning of kubelet client certificates. -Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated. -The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver. +Pods that wish to connect to the API server can do so securely by leveraging a service account so +that Kubernetes will automatically inject the public root certificate and a valid bearer token +into the pod when it is instantiated. +The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is +redirected (via `kube-proxy`) to the HTTPS endpoint on the API server. -The control plane components also communicate with the cluster apiserver over the secure port. +The control plane components also communicate with the API server over the secure port. -As a result, the default operating mode for connections from the nodes and pods running on the nodes to the control plane is secured by default and can run over untrusted and/or public networks. +As a result, the default operating mode for connections from the nodes and pods running on the +nodes to the control plane is secured by default and can run over untrusted and/or public +networks. -## Control Plane to node +## Control plane to node -There are two primary communication paths from the control plane (apiserver) to the nodes. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver's proxy functionality. +There are two primary communication paths from the control plane (the API server) to the nodes. +The first is from the API server to the kubelet process which runs on each node in the cluster. +The second is from the API server to any node, pod, or service through the API server's _proxy_ +functionality. -### apiserver to kubelet +### API server to kubelet -The connections from the apiserver to the kubelet are used for: +The connections from the API server to the kubelet are used for: * Fetching logs for pods. -* Attaching (through kubectl) to running pods. +* Attaching (usually through `kubectl`) to running pods. * Providing the kubelet's port-forwarding functionality. -These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks and **unsafe** to run over untrusted and/or public networks. +These connections terminate at the kubelet's HTTPS endpoint. By default, the API server does not +verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle +attacks and **unsafe** to run over untrusted and/or public networks. -To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate. +To verify this connection, use the `--kubelet-certificate-authority` flag to provide the API +server with a root certificate bundle to use to verify the kubelet's serving certificate. 
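As an aside, a minimal sketch of how these flags might look on a kube-apiserver invocation; the certificate and key paths are assumptions for illustration, not defaults:

```shell
# Hypothetical API server flags: verify each kubelet's serving certificate against a
# trusted CA bundle, and present a client certificate when calling the kubelet API
# (all other required kube-apiserver flags are omitted here).
kube-apiserver \
  --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```

With a CA bundle configured this way, the API server-to-kubelet connection is verified end to end rather than relying on an unverified TLS connection.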
-If that is not possible, use [SSH tunneling](#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an +If that is not possible, use [SSH tunneling](#ssh-tunnels) between the API server and kubelet if +required to avoid connecting over an untrusted or public network. -Finally, [Kubelet authentication and/or authorization](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) should be enabled to secure the kubelet API. -### apiserver to nodes, pods, and services +Finally, [Kubelet authentication and/or authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/) +should be enabled to secure the kubelet API. -The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials. So while the connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted or public networks. +### API server to nodes, pods, and services + +The connections from the API server to a node, pod, or service default to plain HTTP connections +and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS +connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will +not validate the certificate provided by the HTTPS endpoint nor provide client credentials. So +while the connection will be encrypted, it will not provide any guarantees of integrity. These +connections **are not currently safe** to run over untrusted or public networks. ### SSH tunnels -Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the ssh server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel. -This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are running. +Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this +configuration, the API server initiates an SSH tunnel to each node in the cluster (connecting to +the SSH server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or +service through the tunnel. +This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are +running. -SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel. +{{< note >}} +SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you +are doing. The [Konnectivity service](#konnectivity-service) is a replacement for this +communication channel. +{{< /note >}} ### Konnectivity service {{< feature-state for_k8s_version="v1.18" state="beta" >}} -As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. 
The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. -After enabling the Konnectivity service, all control plane to nodes traffic goes through these connections. +As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the +control plane to cluster communication. The Konnectivity service consists of two parts: the +Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. +The Konnectivity agents initiate connections to the Konnectivity server and maintain the network +connections. +After enabling the Konnectivity service, all control plane to nodes traffic goes through these +connections. + +Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set +up the Konnectivity service in your cluster. -Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster. diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 39d229a8976a5..2321fc6474ff9 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -458,7 +458,7 @@ Message: Pod was terminated in response to imminent node shutdown. {{< feature-state state="alpha" for_k8s_version="v1.24" >}} -A node shutdown action may not be detected by kubelet's Node Shutdown Mananger, +A node shutdown action may not be detected by kubelet's Node Shutdown Manager, either because the command does not trigger the inhibitor locks mechanism used by kubelet or because of a user error, i.e., the ShutdownGracePeriod and ShutdownGracePeriodCriticalPods are not configured properly. Please refer to above @@ -654,7 +654,7 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its * Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node. * Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). -* Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) +* Read the [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) section of the architecture design document. * Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). 
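Referring back to the node-shutdown hunk above, here is a hedged sketch of the kubelet configuration fields it mentions; the durations are placeholders, not recommendations:

```yaml
# Hypothetical KubeletConfiguration fragment enabling graceful node shutdown.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s             # total time the node delays shutdown to terminate pods
shutdownGracePeriodCriticalPods: 10s # portion of that window reserved for critical pods
```

If these fields are unset or inconsistent, the kubelet's Node Shutdown Manager may not act on a shutdown, which is the misconfiguration the hunk above warns about.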
diff --git a/content/en/docs/concepts/cluster-administration/_index.md b/content/en/docs/concepts/cluster-administration/_index.md
index 7e5827a6f7a2c..ace5297b330cf 100644
--- a/content/en/docs/concepts/cluster-administration/_index.md
+++ b/content/en/docs/concepts/cluster-administration/_index.md
@@ -63,8 +63,8 @@ Before choosing a guide, here are some considerations:
### Securing the kubelet
* [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/)
- * [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
- * [Kubelet authentication/authorization](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)
+ * [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
+ * [Kubelet authentication/authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/)
## Optional Cluster Services
diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md
index 20626f2ff4a14..5f7df077c9b72 100644
--- a/content/en/docs/concepts/cluster-administration/addons.md
+++ b/content/en/docs/concepts/cluster-administration/addons.md
@@ -18,19 +18,19 @@ This page lists some of the available add-ons and links to their respective inst
* [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated container networking and network security with Cisco ACI.
* [Antrea](https://antrea.io/) operates at Layer 3/4 to provide networking and security services for Kubernetes, leveraging Open vSwitch as the networking data plane.
* [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
-* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
+* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported, and it can work on top of other CNI plugins.
-* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
+* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options. 
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads. * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes. * [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod. -* Multus is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes. +* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes. * [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), a virtual networking implementation that came out of the Open vSwitch (OVS) project. OVN-Kubernetes provides an overlay based networking implementation for Kubernetes, including an OVS based implementation of load balancing and network policy. -* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking -* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift. +* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking. +* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift. * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring. 
-* **Romana** is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize). +* [Romana](https://github.com/romana) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) API. * [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database. ## Service Discovery diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index 9e8f2a79238a7..ccf1eb5887f84 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -174,7 +174,7 @@ to balance progress between request flows. The queuing configuration allows tuning the fair queuing algorithm for a priority level. Details of the algorithm can be read in the -[enhancement proposal](#whats-next), but in short: +[enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1040-priority-and-fairness), but in short: * Increasing `queues` reduces the rate of collisions between different flows, at the cost of increased memory usage. A value of 1 here effectively disables the @@ -331,7 +331,7 @@ Thus, in a situation with a mixture of servers of different versions there may be thrashing as long as different servers have different opinions of the proper content of these objects. -Each `kube-apiserver` makes an inital maintenance pass over the +Each `kube-apiserver` makes an initial maintenance pass over the mandatory and suggested configuration objects, and after that does periodic maintenance (once per minute) of those objects. @@ -471,11 +471,15 @@ poorly-behaved workloads that may be harming system health. requests, broken down by the labels `phase` (which takes on the values `waiting` and `executing`) and `request_kind` (which takes on the values `mutating` and `readOnly`). The observations are made - periodically at a high rate. + periodically at a high rate. Each observed value is a ratio, + between 0 and 1, of a number of requests divided by the + corresponding limit on the number of requests (queue length limit + for waiting and concurrency limit for executing). * `apiserver_flowcontrol_read_vs_write_request_count_watermarks` is a histogram vector of high or low water marks of the number of - requests broken down by the labels `phase` (which takes on the + requests (divided by the corresponding limit to get a ratio in the + range 0 to 1) broken down by the labels `phase` (which takes on the values `waiting` and `executing`) and `request_kind` (which takes on the values `mutating` and `readOnly`); the label `mark` takes on values `high` and `low`. The water marks are accumulated over @@ -502,11 +506,15 @@ poorly-behaved workloads that may be harming system health. values `waiting` and `executing`) and `priority_level`. Each histogram gets observations taken periodically, up through the last activity of the relevant sort. The observations are made at a high - rate. + rate. 
Each observed value is a ratio, between 0 and 1, of a number + of requests divided by the corresponding limit on the number of + requests (queue length limit for waiting and concurrency limit for + executing). * `apiserver_flowcontrol_priority_level_request_count_watermarks` is a histogram vector of high or low water marks of the number of - requests broken down by the labels `phase` (which takes on the + requests (divided by the corresponding limit to get a ratio in the + range 0 to 1) broken down by the labels `phase` (which takes on the values `waiting` and `executing`) and `priority_level`; the label `mark` takes on values `high` and `low`. The water marks are accumulated over windows bounded by the times when an observation @@ -514,6 +522,31 @@ poorly-behaved workloads that may be harming system health. `apiserver_flowcontrol_priority_level_request_count_samples`. These water marks show the range of values that occurred between samples. +* `apiserver_flowcontrol_priority_level_seat_count_samples` is a + histogram vector of observations of the utilization of a priority + level's concurrency limit, broken down by `priority_level`. This + utilization is the fraction (number of seats occupied) / + (concurrency limit). This metric considers all stages of execution + (both normal and the extra delay at the end of a write to cover for + the corresponding notification work) of all requests except WATCHes; + for those it considers only the initial stage that delivers + notifications of pre-existing objects. Each histogram in the vector + is also labeled with `phase: executing` (there is no seat limit for + the waiting phase). Each histogram gets observations taken + periodically, up through the last activity of the relevant sort. + The observations + are made at a high rate. + +* `apiserver_flowcontrol_priority_level_seat_count_watermarks` is a + histogram vector of high or low water marks of the utilization of a + priority level's concurrency limit, broken down by `priority_level` + and `mark` (which takes on values `high` and `low`). Each histogram + in the vector is also labeled with `phase: executing` (there is no + seat limit for the waiting phase). The water marks are accumulated + over windows bounded by the times when an observation was added to + `apiserver_flowcontrol_priority_level_seat_count_samples`. These + water marks show the range of values that occurred between samples. + * `apiserver_flowcontrol_request_queue_length_after_enqueue` is a histogram vector of queue lengths for the queues, broken down by the labels `priority_level` and `flow_schema`, as sampled by the @@ -556,6 +589,22 @@ poorly-behaved workloads that may be harming system health. and `priority_level` (indicating the one to which the request was assigned). +* `apiserver_flowcontrol_watch_count_samples` is a histogram vector of + the number of active WATCH requests relevant to a given write, + broken down by `flow_schema` and `priority_level`. + +* `apiserver_flowcontrol_work_estimated_seats` is a histogram vector + of the number of estimated seats (maximum of initial and final stage + of execution) associated with requests, broken down by `flow_schema` + and `priority_level`. + +* `apiserver_flowcontrol_request_dispatch_no_accommodation_total` is a + counter vec of the number of events that in principle could have led + to a request being dispatched but did not, due to lack of available + concurrency, broken down by `flow_schema` and `priority_level`. 
The + relevant sorts of events are arrival of a request and completion of + a request. + ### Debug endpoints When you enable the API Priority and Fairness feature, the `kube-apiserver` diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index c09e59f1df879..c90715da09956 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -461,7 +461,7 @@ That's it! The Deployment will declaratively update the deployed nginx applicati ## {{% heading "whatsnext" %}} -- Learn about [how to use `kubectl` for application introspection and debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/). +- Learn about [how to use `kubectl` for application introspection and debugging](/docs/tasks/debug/debug-application/debug-running-pod/). - See [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/). diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md index 9fed36c2fd698..b780ef15ca43b 100644 --- a/content/en/docs/concepts/cluster-administration/networking.md +++ b/content/en/docs/concepts/cluster-administration/networking.md @@ -203,4 +203,4 @@ to run, and in both cases, the network provides one IP address per pod - as is s The early design of the networking model and its rationale, and some future plans are described in more detail in the -[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md). +[networking design document](https://git.k8s.io/design-proposals-archive/network/networking.md). diff --git a/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md index 713592cf989ec..87c4737ad765f 100644 --- a/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md +++ b/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md @@ -18,7 +18,7 @@ It does not mean that there is a file named `kubeconfig`. {{< /note >}} {{< warning >}} -Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure. +Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure. If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script. {{< /warning>}} @@ -53,7 +53,7 @@ clusters and namespaces. A *context* element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the `kubectl` command-line tool uses parameters from -the *current context* to communicate with the cluster. +the *current context* to communicate with the cluster. To choose the current context: ``` @@ -150,16 +150,16 @@ are stored absolutely. 
## Proxy -You can configure `kubectl` to use proxy by setting `proxy-url` in the kubeconfig file, like: +You can configure `kubectl` to use a proxy per cluster using `proxy-url` in your kubeconfig file, like this: ```yaml apiVersion: v1 kind: Config -proxy-url: https://proxy.host:3128 - clusters: - cluster: + proxy-url: http://proxy.example.org:3128 + server: https://k8s.example.org/k8s/clusters/c-xxyyzz name: development users: @@ -168,7 +168,6 @@ users: contexts: - context: name: development - ``` diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index d9611439a4566..f83372532fc5e 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -247,6 +247,8 @@ You can still [manually create](/docs/tasks/configure-pod-container/configure-se a service account token Secret; for example, if you need a token that never expires. However, using the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) subresource to obtain a token to access the API is recommended instead. +You can use the [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) +command to obtain a token from the `TokenRequest` API. {{< /note >}} #### Projection of Secret keys to specific paths @@ -886,15 +888,30 @@ In this case, `0` means you have created an empty Secret. ### Service account token Secrets A `kubernetes.io/service-account-token` type of Secret is used to store a -token that identifies a +token credential that identifies a {{< glossary_tooltip text="service account" term_id="service-account" >}}. + +Since 1.22, this type of Secret is no longer used to mount credentials into Pods, +and obtaining tokens via the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) +API is recommended instead of using service account token Secret objects. +Tokens obtained from the `TokenRequest` API are more secure than ones stored in Secret objects, +because they have a bounded lifetime and are not readable by other API clients. +You can use the [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) +command to obtain a token from the `TokenRequest` API. + +You should only create a service account token Secret object +if you can't use the `TokenRequest` API to obtain a token, +and the security exposure of persisting a non-expiring token credential +in a readable API object is acceptable to you. + When using this Secret type, you need to ensure that the `kubernetes.io/service-account.name` annotation is set to an existing -service account name. A Kubernetes -{{< glossary_tooltip text="controller" term_id="controller" >}} fills in some -other fields such as the `kubernetes.io/service-account.uid` annotation, and the -`token` key in the `data` field, which is set to contain an authentication -token. +service account name. If you are creating both the ServiceAccount and +the Secret objects, you should create the ServiceAccount object first. + +After the Secret is created, a Kubernetes {{< glossary_tooltip text="controller" term_id="controller" >}} +fills in some other fields such as the `kubernetes.io/service-account.uid` annotation, and the +`token` key in the `data` field, which is populated with an authentication token. 
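To illustrate the TokenRequest route recommended above, a minimal sketch; the ServiceAccount name `build-robot` is an assumption, and the `kubectl create token` subcommand requires a recent kubectl release:

```shell
# Request a short-lived, automatically expiring token for the (assumed) ServiceAccount
# "build-robot" instead of persisting a non-expiring token in a Secret object.
kubectl create token build-robot --duration=1h
```

Tokens obtained this way never appear as Secret objects, so they are not readable by other API clients.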
The following example configuration declares a service account token Secret: @@ -911,20 +928,14 @@ data: extra: YmFyCg== ``` -When creating a `Pod`, Kubernetes automatically finds or creates a service account -Secret and then automatically modifies your Pod to use this Secret. The service account -token Secret contains credentials for accessing the Kubernetes API. - -The automatic creation and use of API credentials can be disabled or -overridden if desired. However, if all you need to do is securely access the -API server, this is the recommended workflow. +After creating the Secret, wait for Kubernetes to populate the `token` key in the `data` field. See the [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) documentation for more information on how service accounts work. You can also check the `automountServiceAccountToken` field and the `serviceAccountName` field of the [`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) -for information on referencing service account from Pods. +for information on referencing service account credentials from within Pods. ### Docker config Secrets @@ -982,7 +993,7 @@ kubectl create secret docker-registry secret-tiger-docker \ ``` That command creates a Secret of type `kubernetes.io/dockerconfigjson`. -If you dump the `.data.dockercfgjson` field from that new Secret and then +If you dump the `.data.dockerconfigjson` field from that new Secret and then decode it from base64: ```shell @@ -1291,7 +1302,7 @@ on that node. - When deploying applications that interact with the Secret API, you should limit access using [authorization policies](/docs/reference/access-authn-authz/authorization/) such as - [RBAC]( /docs/reference/access-authn-authz/rbac/). + [RBAC](/docs/reference/access-authn-authz/rbac/). - In the Kubernetes API, `watch` and `list` requests for Secrets within a namespace are extremely powerful capabilities. Avoid granting this access where feasible, since listing Secrets allows the clients to inspect the values of every Secret in that @@ -1310,7 +1321,7 @@ have access to run a Pod that then exposes the Secret. - When deploying applications that interact with the Secret API, you should limit access using [authorization policies](/docs/reference/access-authn-authz/authorization/) such as - [RBAC]( /docs/reference/access-authn-authz/rbac/). + [RBAC](/docs/reference/access-authn-authz/rbac/). - In the API server, objects (including Secrets) are persisted into {{< glossary_tooltip term_id="etcd" >}}; therefore: - only allow cluster admistrators to access etcd (this includes read-only access); diff --git a/content/en/docs/concepts/configuration/windows-resource-management.md b/content/en/docs/concepts/configuration/windows-resource-management.md index 6593caa5fb09c..955fea194a4b1 100644 --- a/content/en/docs/concepts/configuration/windows-resource-management.md +++ b/content/en/docs/concepts/configuration/windows-resource-management.md @@ -32,52 +32,48 @@ host, and thus privileged containers are not available on Windows. Containers cannot assume an identity from the host because the Security Account Manager (SAM) is separate. -## Memory reservations {#resource-management-memory} +## Memory management {#resource-management-memory} Windows does not have an out-of-memory process killer as Linux does. Windows always treats all user-mode memory allocations as virtual, and pagefiles are mandatory. -Windows nodes do not overcommit memory for processes running in containers. 
The +Windows nodes do not overcommit memory for processes. The net effect is that Windows won't reach out of memory conditions the same way Linux does, and processes page to disk instead of being subject to out of memory (OOM) termination. If memory is over-provisioned and all physical memory is exhausted, then paging can slow down performance. -You can place bounds on memory use for workloads using the kubelet -parameters `--kubelet-reserve` and/or `--system-reserve`; these account -for memory usage on the node (outside of containers), and reduce -[NodeAllocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable). -As you deploy workloads, set resource limits on containers. This also subtracts from -`NodeAllocatable` and prevents the scheduler from adding more pods once a node is full. - -{{< note >}} -When you set memory resource limits for Windows containers, you should either set a -limit and leave the memory request unspecified, or set the request equal to the limit. -{{< /note >}} - -On Windows, good practice to avoid over-provisioning is to configure the kubelet -with a system reserved memory of at least 2GiB to account for Windows, Kubernetes -and container runtime overheads. - -## CPU reservations {#resource-management-cpu} - -To account for CPU use by the operating system, the container runtime, and by -Kubernetes host processes such as the kubelet, you can (and should) reserve a -percentage of total CPU. You should determine this CPU reservation taking account of -to the number of CPU cores available on the node. To decide on the CPU percentage to -reserve, identify the maximum pod density for each node and monitor the CPU usage of -the system services running there, then choose a value that meets your workload needs. +## CPU management {#resource-management-cpu} -You can place bounds on CPU usage for workloads using the -kubelet parameters `--kubelet-reserve` and/or `--system-reserve` to -account for CPU usage on the node (outside of containers). -This reduces `NodeAllocatable`. -The cluster-wide scheduler then takes this reservation into account when determining -pod placement. +Windows can limit the amount of CPU time allocated for different processes but cannot +guarantee a minimum amount of CPU time. -On Windows, the kubelet supports a command-line flag to set the priority of the +On Windows, the kubelet supports a command-line flag to set the +[scheduling priority](https://docs.microsoft.com/windows/win32/procthread/scheduling-priorities) of the kubelet process: `--windows-priorityclass`. This flag allows the kubelet process to get more CPU time slices when compared to other processes running on the Windows host. More information on the allowable values and their meaning is available at [Windows Priority Classes](https://docs.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities#priority-class). To ensure that running Pods do not starve the kubelet of CPU cycles, set this flag to `ABOVE_NORMAL_PRIORITY_CLASS` or above. + +## Resource reservation {#resource-reservation} + +To account for memory and CPU used by the operating system, the container runtime, and by +Kubernetes host processes such as the kubelet, you can (and should) reserve +memory and CPU resources with the `--kube-reserved` and/or `--system-reserved` kubelet flags. +On Windows these values are only used to calculate the node's +[allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) resources. 
+ +{{< caution >}} +As you deploy workloads, set resource memory and CPU limits on containers. +This also subtracts from `NodeAllocatable` and helps the cluster-wide scheduler in determining which pods to place on which nodes. + +Scheduling pods without limits may over-provision the Windows nodes and in extreme +cases can cause the nodes to become unhealthy. +{{< /caution >}} + +On Windows, a good practice is to reserve at least 2GiB of memory. + +To determine how much CPU to reserve, +identify the maximum pod density for each node and monitor the CPU usage of +the system services running there, then choose a value that meets your workload needs. diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index 38a74cf18680e..6366ee05519e1 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -97,7 +97,7 @@ spec: This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the `Failed` terminal [phase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase). Look for a -corresponding [event](/docs/tasks/debug-application-cluster/debug-application-introspection/) for an +corresponding [event](/docs/tasks/debug/debug-application/debug-running-pod/) for an error message. If no `runtimeClassName` is specified, the default RuntimeHandler will be used, which is equivalent diff --git a/content/en/docs/concepts/extend-kubernetes/_index.md b/content/en/docs/concepts/extend-kubernetes/_index.md index 3cf3eb1f7a676..5404f8c463b15 100644 --- a/content/en/docs/concepts/extend-kubernetes/_index.md +++ b/content/en/docs/concepts/extend-kubernetes/_index.md @@ -17,26 +17,29 @@ no_list: true -Kubernetes is highly configurable and extensible. As a result, -there is rarely a need to fork or submit patches to the Kubernetes -project code. +Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or +submit patches to the Kubernetes project code. -This guide describes the options for customizing a Kubernetes -cluster. It is aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} who want to -understand how to adapt their Kubernetes cluster to the needs of -their work environment. Developers who are prospective {{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} will also find it -useful as an introduction to what extension points and patterns -exist, and their trade-offs and limitations. +This guide describes the options for customizing a Kubernetes cluster. It is aimed at +{{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} who want to understand +how to adapt their Kubernetes cluster to the needs of their work environment. Developers who are +prospective {{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} or +Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} will also +find it useful as an introduction to what extension points and patterns exist, and their +trade-offs and limitations. 
## Overview -Customization approaches can be broadly divided into *configuration*, which only involves changing flags, local configuration files, or API resources; and *extensions*, which involve running additional programs or services. This document is primarily about extensions. +Customization approaches can be broadly divided into *configuration*, which only involves changing +flags, local configuration files, or API resources; and *extensions*, which involve running +additional programs or services. This document is primarily about extensions. ## Configuration -*Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary: +*Configuration files* and *flags* are documented in the Reference section of the online +documentation, under each binary: * [kubelet](/docs/reference/command-line-tools-reference/kubelet/) * [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) @@ -44,9 +47,22 @@ Customization approaches can be broadly divided into *configuration*, which only * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) * [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/). -Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options. - -*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/security/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable. +Flags and configuration files may not always be changeable in a hosted Kubernetes service or a +distribution with managed installation. When they are changeable, they are usually only changeable +by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and +setting them may require restarting processes. For those reasons, they should be used only when +there are no other options. + +*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), +[PodSecurityPolicies](/docs/concepts/security/pod-security-policy/), +[NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control +([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. +APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. 
+They are declarative and use the same conventions as other Kubernetes resources like pods, +so new cluster configuration can be repeatable and be managed the same way as applications. +And, where they are stable, they enjoy a +[defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. +For these reasons, they are preferred over *configuration files* and *flags* where suitable. ## Extensions @@ -70,10 +86,9 @@ There is a specific pattern for writing client programs that work well with Kubernetes called the *Controller* pattern. Controllers typically read an object's `.spec`, possibly do things, and then update the object's `.status`. -A controller is a client of Kubernetes. When Kubernetes is the client and -calls out to a remote service, it is called a *Webhook*. The remote service -is called a *Webhook Backend*. Like Controllers, Webhooks do add a point of -failure. +A controller is a client of Kubernetes. When Kubernetes is the client and calls out to a remote +service, it is called a *Webhook*. The remote service is called a *Webhook Backend*. Like +Controllers, Webhooks do add a point of failure. In the webhook model, Kubernetes makes a network request to a remote service. In the *Binary Plugin* model, Kubernetes executes a binary (program). @@ -95,15 +110,35 @@ This diagram shows the extension points in a Kubernetes system. ![Extension Points](/docs/concepts/extend-kubernetes/extension-points.png) -1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies. -2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](#api-access-extensions) section. -3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](#user-defined-types) section. Custom Resources are often used with API Access Extensions. -4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](#scheduler-extensions) section. -5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources. -6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](#network-plugins) allow for different implementations of pod networking. -7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](#storage-plugins). +1. Users often interact with the Kubernetes API using `kubectl`. + [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. + They only affect the individual user's local environment, and so cannot enforce site-wide policies. + +1. The API server handles all requests. 
Several types of extension points in the API server allow + authenticating requests, or blocking them based on their content, editing content, and handling + deletion. These are described in the [API Access Extensions](#api-access-extensions) section. + +1. The API server serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are + defined by the Kubernetes project and can't be changed. You can also add resources that you + define, or that other projects have defined, called *Custom Resources*, as explained in the + [Custom Resources](#user-defined-types) section. Custom Resources are often used with API access + extensions. + +1. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend + scheduling. These are described in the [Scheduler Extensions](#scheduler-extensions) section. + +1. Much of the behavior of Kubernetes is implemented by programs called Controllers which are + clients of the API server. Controllers are often used in conjunction with Custom Resources. + +1. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on + the cluster network. [Network Plugins](#network-plugins) allow for different implementations of + pod networking. -If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions. +1. The kubelet also mounts and unmounts volumes for containers. New types of storage can be + supported via [Storage Plugins](#storage-plugins). + +If you are unsure where to start, this flowchart can help. Note that some solutions may involve +several types of extensions. ![Flowchart for Extension](/docs/concepts/extend-kubernetes/flowchart.png) @@ -112,60 +147,86 @@ If you are unsure where to start, this flowchart can help. Note that some soluti ### User-Defined Types -Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such as `kubectl`. +Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application +configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such +as `kubectl`. Do not use a Custom Resource as data storage for application, user, or monitoring data. -For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/). +For more about Custom Resources, see the +[Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/). ### Combining New APIs with Automation -The combination of a custom resource API and a control loop is called the [Operator pattern](/docs/concepts/extend-kubernetes/operator/). The Operator pattern is used to manage specific, usually stateful, applications. These custom APIs and control loops can also be used to control other resources, such as storage or policies. +The combination of a custom resource API and a control loop is called the +[Operator pattern](/docs/concepts/extend-kubernetes/operator/). The Operator pattern is used to manage +specific, usually stateful, applications. These custom APIs and control loops can also be used to +control other resources, such as storage or policies. ### Changing Built-in Resources -When you extend the Kubernetes API by adding custom resources, the added resources always fall into a new API Groups. 
You cannot replace or change existing API groups.
-Adding an API does not directly let you affect the behavior of existing APIs (e.g. Pods), but API Access Extensions do.
+When you extend the Kubernetes API by adding custom resources, the added resources always fall
+into new API Groups. You cannot replace or change existing API groups.
+Adding an API does not directly let you affect the behavior of existing APIs (e.g. Pods), but API
+Access Extensions do.

### API Access Extensions

-When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then subject to various types of Admission Control. See [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/) for more on this flow.
+When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then
+subject to various types of Admission Control. See
+[Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/)
+for more on this flow.

Each of these steps offers extension points.

-Kubernetes has several built-in authentication methods that it supports. It can also sit behind an authenticating proxy, and it can send a token from an Authorization header to a remote service for verification (a webhook). All of these methods are covered in the [Authentication documentation](/docs/reference/access-authn-authz/authentication/).
+Kubernetes has several built-in authentication methods that it supports. It can also sit behind an
+authenticating proxy, and it can send a token from an Authorization header to a remote service for
+verification (a webhook). All of these methods are covered in the
+[Authentication documentation](/docs/reference/access-authn-authz/authentication/).

### Authentication

-[Authentication](/docs/reference/access-authn-authz/authentication/) maps headers or certificates in all requests to a username for the client making the request.
-
-Kubernetes provides several built-in authentication methods, and an [Authentication webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) method if those don't meet your needs.
+[Authentication](/docs/reference/access-authn-authz/authentication/) maps headers or certificates
+in all requests to a username for the client making the request.
+Kubernetes provides several built-in authentication methods, and an
+[Authentication webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
+method if those don't meet your needs.

### Authorization

-[Authorization](/docs/reference/access-authn-authz/authorization/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
-
+[Authorization](/docs/reference/access-authn-authz/authorization/) determines whether specific
+users can read, write, and do other operations on API resources. It works at the level of whole
+resources -- it doesn't discriminate based on arbitrary object fields. If the built-in
+authorization options don't meet your needs, [Authorization webhook](/docs/reference/access-authn-authz/webhook/)
+allows calling out to user-provided code to make an authorization decision. 
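As a non-authoritative illustration, a webhook authorizer is typically described by a kubeconfig-format file that the API server is pointed at; the endpoint and file paths below are assumptions:

```yaml
# Hypothetical webhook authorizer configuration, referenced from the API server via
# --authorization-mode=...,Webhook and --authorization-webhook-config-file=<this file>.
apiVersion: v1
kind: Config
clusters:
  - name: authz-service                                         # assumed name for the remote authorizer
    cluster:
      certificate-authority: /etc/kubernetes/pki/authz-ca.crt   # CA used to verify the remote service (assumed path)
      server: https://authz.example.org/authorize               # assumed endpoint
users:
  - name: kube-apiserver
    user:
      client-certificate: /etc/kubernetes/pki/authz-client.crt  # assumed path
      client-key: /etc/kubernetes/pki/authz-client.key          # assumed path
current-context: webhook
contexts:
  - context:
      cluster: authz-service
      user: kube-apiserver
    name: webhook
```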
### Dynamic Admission Control -After a request is authorized, if it is a write operation, it also goes through [Admission Control](/docs/reference/access-authn-authz/admission-controllers/) steps. In addition to the built-in steps, there are several extensions: +After a request is authorized, if it is a write operation, it also goes through +[Admission Control](/docs/reference/access-authn-authz/admission-controllers/) steps. +In addition to the built-in steps, there are several extensions: -* The [Image Policy webhook](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) restricts what images can be run in containers. -* To make arbitrary admission control decisions, a general [Admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) can be used. Admission Webhooks can reject creations or updates. +* The [Image Policy webhook](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) + restricts what images can be run in containers. +* To make arbitrary admission control decisions, a general + [Admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) + can be used. Admission Webhooks can reject creations or updates. ## Infrastructure Extensions ### Storage Plugins -[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md -) allow users to mount volume types without built-in support by having the -Kubelet call a Binary Plugin to mount the volume. - -FlexVolume is deprecated since Kubernetes v1.23. The Out-of-tree CSI driver is the recommended way to write volume drivers in Kubernetes. See [Kubernetes Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md#kubernetes-volume-plugin-faq-for-storage-vendors) for more information. +[Flex Volumes](https://git.k8s.io/design-proposals-archive/storage/flexvolume-deployment.md) +allow users to mount volume types without built-in support by having the kubelet call a binary +plugin to mount the volume. +FlexVolume is deprecated since Kubernetes v1.23. The out-of-tree CSI driver is the recommended way +to write volume drivers in Kubernetes. See +[Kubernetes Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md#kubernetes-volume-plugin-faq-for-storage-vendors) +for more information. ### Device Plugins @@ -173,7 +234,6 @@ Device plugins allow a node to discover new Node resources (in addition to the builtin ones like cpu and memory) via a [Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/). - ### Network Plugins Different networking fabrics can be supported via node-level @@ -191,7 +251,7 @@ This is a significant undertaking, and almost all Kubernetes users find they do not need to modify the scheduler. The scheduler also supports a -[webhook](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md) +[webhook](https://git.k8s.io/design-proposals-archive/scheduling/scheduler_extender.md) that permits a webhook backend (scheduler extension) to filter and prioritize the nodes chosen for a pod. 
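As a hedged sketch only, such a scheduler extension can be registered through the kube-scheduler configuration file; the extender URL is an assumption, and the exact `apiVersion` depends on the release:

```yaml
# Hypothetical KubeSchedulerConfiguration registering an external extender webhook.
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
extenders:
  - urlPrefix: "https://scheduler-extender.example.org/"  # assumed extender endpoint
    filterVerb: filter          # endpoint called to filter out unsuitable nodes
    prioritizeVerb: prioritize  # endpoint called to score the remaining nodes
    weight: 1                   # weight applied to the extender's scores
    enableHTTPS: true
```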
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index 6785dccdac334..6f265f57f34cc 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -8,7 +8,7 @@ weight: 20 {{< feature-state for_k8s_version="v1.10" state="beta" >}} -Kubernetes provides a [device plugin framework](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md) +Kubernetes provides a [device plugin framework](https://git.k8s.io/design-proposals-archive/resource-management/device-plugin.md) that you can use to advertise system hardware resources to the {{< glossary_tooltip term_id="kubelet" >}}. diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index dc3940d5e9719..647111b37555a 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -14,6 +14,8 @@ weight: 10 Kubernetes {{< skew currentVersion >}} supports [Container Network Interface](https://github.com/containernetworking/cni) (CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your cluster and that suits your needs. Different plugins are available (both open- and closed- source) in the wider Kubernetes ecosystem. +A CNI plugin is required to implement the [Kubernetes network model](/docs/concepts/services-networking/#the-kubernetes-network-model). + You must use a CNI plugin that is compatible with the [v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) or later releases of the CNI specification. The Kubernetes project recommends using a plugin that is @@ -24,26 +26,37 @@ CNI specification (plugins can be compatible with multiple spec versions). ## Installation -A CNI plugin is required to implement the [Kubernetes network model](/docs/concepts/services-networking/#the-kubernetes-network-model). The CRI manages its own CNI plugins. There are two Kubelet command line parameters to keep in mind when using plugins: +A Container Runtime, in the networking context, is a daemon on a node configured to provide CRI Services for kubelet. In particular, the Container Runtime must be configured to load the CNI plugins required to implement the Kubernetes network model. -* `cni-bin-dir`: Kubelet probes this directory for plugins on startup -* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is `cni`. +{{< note >}} +Prior to Kubernetes 1.24, the CNI plugins could also be managed by the kubelet using the `cni-bin-dir` and `network-plugin` command-line parameters. +These command-line parameters were removed in Kubernetes 1.24, with management of the CNI no longer in scope for kubelet. -## Network Plugin Requirements +See [Troubleshooting CNI plugin-related errors](/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/) +if you are facing issues following the removal of dockershim. 
+{{< /note >}} + +For specific information about how a Container Runtime manages the CNI plugins, see the documentation for that Container Runtime, for example: +- [containerd](https://github.com/containerd/containerd/blob/main/script/setup/install-cni) +- [CRI-O](https://github.com/cri-o/cri-o/blob/main/contrib/cni/README.md) -Besides providing the [`NetworkPlugin` interface](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy. +For specific information about how to install and manage a CNI plugin, see the documentation for that plugin or [networking provider](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model). -By default if no kubelet network plugin is specified, the `noop` plugin is used, which sets `net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like Docker with a bridge) work correctly with the iptables proxy. +## Network Plugin Requirements -### CNI +For plugin developers and users who regularly build or deploy Kubernetes, the plugin may also need specific configuration to support kube-proxy. +The iptables proxy depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. +For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. +If the plugin does not use a Linux bridge, but uses something like Open vSwitch or some other mechanism instead, it should ensure container traffic is appropriately routed for the proxy. -The CNI plugin is selected by passing Kubelet the `--network-plugin=cni` command-line option. Kubelet reads a file from `--cni-conf-dir` (default `/etc/cni/net.d`) and uses the CNI configuration from that file to set up each pod's network. The CNI configuration file must match the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration), and any required CNI plugins referenced by the configuration must be present in `--cni-bin-dir` (default `/opt/cni/bin`). +By default, if no kubelet network plugin is specified, the `noop` plugin is used, which sets `net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like Docker with a bridge) work correctly with the iptables proxy. -If there are multiple CNI configuration files in the directory, the kubelet uses the configuration file that comes first by name in lexicographic order. 
+### Loopback CNI -In addition to the CNI plugin specified by the configuration file, Kubernetes requires the standard CNI [`lo`](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go) plugin, at minimum version 0.2.0 +In addition to the CNI plugin installed on the nodes for implementing the Kubernetes network model, Kubernetes also requires the container runtimes to provide a loopback interface `lo`, which is used for each sandbox (pod sandboxes, vm sandboxes, ...). +Implementing the loopback interface can be accomplished by re-using the [CNI loopback plugin.](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go) or by developing your own code to achieve this (see [this example from CRI-O](https://github.com/cri-o/ocicni/blob/release-1.24/pkg/ocicni/util_linux.go#L91)). -#### Support hostPort +### Support hostPort The CNI networking plugin supports `hostPort`. You can use the official [portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap) plugin offered by the CNI plugin team or use your own plugin with portMapping functionality. @@ -80,7 +93,7 @@ For example: } ``` -#### Support traffic shaping +### Support traffic shaping **Experimental Feature** @@ -132,8 +145,4 @@ metadata: ... ``` -## Usage Summary - -* `--network-plugin=cni` specifies that we use the `cni` network plugin with actual CNI plugin binaries located in `--cni-bin-dir` (default `/opt/cni/bin`) and CNI plugin configuration located in `--cni-conf-dir` (default `/etc/cni/net.d`). - ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/concepts/extend-kubernetes/operator.md b/content/en/docs/concepts/extend-kubernetes/operator.md index 96a17ada96601..133814e98acb5 100644 --- a/content/en/docs/concepts/extend-kubernetes/operator.md +++ b/content/en/docs/concepts/extend-kubernetes/operator.md @@ -111,7 +111,9 @@ Operator. {{% thirdparty-content %}} * [Charmed Operator Framework](https://juju.is/) +* [Java Operator SDK](https://github.com/java-operator-sdk/java-operator-sdk) * [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework) +* [kube-rs](https://kube.rs/) (Rust) * [kubebuilder](https://book.kubebuilder.io/) * [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK) * [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator) diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md index 3c0bba3adb927..b72d1aab1fdd8 100644 --- a/content/en/docs/concepts/overview/kubernetes-api.md +++ b/content/en/docs/concepts/overview/kubernetes-api.md @@ -76,7 +76,7 @@ request headers as follows: Kubernetes implements an alternative Protobuf based serialization format that is primarily intended for intra-cluster communication. For more information -about this format, see the [Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) design proposal and the +about this format, see the [Kubernetes Protobuf serialization](https://git.k8s.io/design-proposals-archive/api-machinery/protobuf.md) design proposal and the Interface Definition Language (IDL) files for each schema located in the Go packages that define the API objects. 
diff --git a/content/en/docs/concepts/overview/working-with-objects/names.md b/content/en/docs/concepts/overview/working-with-objects/names.md index 9bafb1584c549..7b6b380e3566d 100644 --- a/content/en/docs/concepts/overview/working-with-objects/names.md +++ b/content/en/docs/concepts/overview/working-with-objects/names.md @@ -100,4 +100,4 @@ UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667. ## {{% heading "whatsnext" %}} * Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes. -* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) design document. +* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md) design document. diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md index 8158b78437516..f5a7e66cace4a 100644 --- a/content/en/docs/concepts/policy/limit-range.md +++ b/content/en/docs/concepts/policy/limit-range.md @@ -53,7 +53,7 @@ Neither contention nor changes to a LimitRange will affect already created resou ## {{% heading "whatsnext" %}} -Refer to the [LimitRanger design document](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information. +Refer to the [LimitRanger design document](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md) for more information. For examples on using limits, see: diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index 0e88ca34335a0..8d9490b8289eb 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -22,8 +22,7 @@ be consumed by resources in that namespace. Resource quotas work like this: -- Different teams work in different namespaces. Currently this is voluntary, but - support for making this mandatory via ACLs is planned. +- Different teams work in different namespaces. This can be enforced with [RBAC](/docs/reference/access-authn-authz/rbac/). - The administrator creates one ResourceQuota for each namespace. @@ -698,7 +697,7 @@ and it is to be created in a namespace other than `kube-system`. ## {{% heading "whatsnext" %}} -- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information. +- See [ResourceQuota design doc](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md) for more information. - See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/). -- Read [Quota support for priority class design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md). +- Read [Quota support for priority class design doc](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md). 
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index 067e3d60ed76b..db9f1d900d682 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -124,8 +124,8 @@ For example, consider the following Pod spec: In this example, the following rules apply: - * The node *must* have a label with the key `kubernetes.io/os` and - the value `linux`. + * The node *must* have a label with the key `topology.kubernetes.io/zone` and + the value of that label *must* be either `antarctica-east1` or `antarctica-west1`. * The node *preferably* has a label with the key `another-node-label-key` and the value `another-node-label-value`. @@ -302,9 +302,8 @@ the Pod onto a node that is in the same zone as one or more Pods with the label `topology.kubernetes.io/zone=R` label if there are other nodes in the same zone currently running Pods with the `Security=S2` Pod label. -See the -[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) -for many more examples of Pod affinity and anti-affinity. +To get yourself more familiar with the examples of Pod affinity and anti-affinity, +refer to the [design proposal](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md). You can use the `In`, `NotIn`, `Exists` and `DoesNotExist` values in the `operator` field for Pod affinity and anti-affinity. @@ -472,8 +471,8 @@ The above Pod will only run on the node `kube-01`. ## {{% heading "whatsnext" %}} * Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) . -* Read the design docs for [node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md) - and for [inter-pod affinity/anti-affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md). +* Read the design docs for [node affinity](https://git.k8s.io/design-proposals-archive/scheduling/nodeaffinity.md) + and for [inter-pod affinity/anti-affinity](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md). * Learn about how the [topology manager](/docs/tasks/administer-cluster/topology-manager/) takes part in node-level resource allocation decisions. * Learn how to use [nodeSelector](/docs/tasks/configure-pod-container/assign-pods-nodes/). diff --git a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md index 72b160653e1ae..2610ea80ad998 100644 --- a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md +++ b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md @@ -97,7 +97,7 @@ The output is: map[cpu:250m memory:120Mi] ``` -If a ResourceQuota is defined, the sum of container requests as well as the +If a [ResourceQuota](/docs/concepts/policy/resource-quotas/) is defined, the sum of container requests as well as the `overhead` field are counted. 
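As a hedged illustration of where an `overhead` value like the one quoted above (cpu: 250m, memory: 120Mi) comes from, the following sketch shows a RuntimeClass that declares a fixed per-Pod overhead; the class name and handler are assumptions for the example rather than values required by this change.

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-fc                # illustrative RuntimeClass name
handler: kata-fc               # must match a handler configured in the node's container runtime
overhead:
  podFixed:
    cpu: 250m                  # added to the sum of container CPU requests for scheduling and quota accounting
    memory: 120Mi              # added to the sum of container memory requests
```

A Pod that sets `runtimeClassName: kata-fc` then has this overhead added to its requests when the scheduler and any ResourceQuota evaluate it.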
When the kube-scheduler is deciding which node should run a new Pod, the scheduler considers that Pod's diff --git a/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md b/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md index a81d9904ac632..951d3f273de4e 100644 --- a/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md +++ b/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md @@ -3,70 +3,100 @@ reviewers: - bsalamat - k82cn - ahg-g -title: Resource Bin Packing for Extended Resources +title: Resource Bin Packing content_type: concept weight: 80 --- -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} - -The kube-scheduler can be configured to enable bin packing of resources along -with extended resources using `RequestedToCapacityRatioResourceAllocation` -priority function. Priority functions can be used to fine-tune the -kube-scheduler as per custom needs. +In the [scheduling-plugin](/docs/reference/scheduling/config/#scheduling-plugins) `NodeResourcesFit` of kube-scheduler, there are two +scoring strategies that support the bin packing of resources: `MostAllocated` and `RequestedToCapacityRatio`. -## Enabling Bin Packing using RequestedToCapacityRatioResourceAllocation +## Enabling bin packing using MostAllocated strategy +The `MostAllocated` strategy scores the nodes based on the utilization of resources, favoring the ones with higher allocation. +For each resource type, you can set a weight to modify its influence in the node score. + +To set the `MostAllocated` strategy for the `NodeResourcesFit` plugin, use a +[scheduler configuration](/docs/reference/scheduling/config) similar to the following: -Kubernetes allows the users to specify the resources along with weights for +```yaml +apiVersion: kubescheduler.config.k8s.io/v1beta3 +kind: KubeSchedulerConfiguration +profiles: +- pluginConfig: + - args: + scoringStrategy: + resources: + - name: cpu + weight: 1 + - name: memory + weight: 1 + - name: intel.com/foo + weight: 3 + - name: intel.com/bar + weight: 3 + type: MostAllocated + name: NodeResourcesFit +``` + +To learn more about other parameters and their default configuration, see the API documentation for +[`NodeResourcesFitArgs`](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs). + +## Enabling bin packing using RequestedToCapacityRatio + +The `RequestedToCapacityRatio` strategy allows the users to specify the resources along with weights for each resource to score nodes based on the request to capacity ratio. This allows users to bin pack extended resources by using appropriate parameters -and improves the utilization of scarce resources in large clusters. The -behavior of the `RequestedToCapacityRatioResourceAllocation` priority function -can be controlled by a configuration option called `RequestedToCapacityRatioArgs`. -This argument consists of two parameters `shape` and `resources`. The `shape` +to improve the utilization of scarce resources in large clusters. It favors nodes according to a +configured function of the allocated resources. The behavior of the `RequestedToCapacityRatio` in +the `NodeResourcesFit` score function can be controlled by the +[scoringStrategy](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) field. +Within the `scoringStrategy` field, you can configure two parameters: `requestedToCapacityRatioParam` and +`resources`. 
The `shape` in `requestedToCapacityRatioParam` parameter allows the user to tune the function as least requested or most requested based on `utilization` and `score` values. The `resources` parameter consists of `name` of the resource to be considered during scoring and `weight` specify the weight of each resource. Below is an example configuration that sets -`requestedToCapacityRatioArguments` to bin packing behavior for extended -resources `intel.com/foo` and `intel.com/bar`. +the bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar` +using the `requestedToCapacityRatio` field. ```yaml apiVersion: kubescheduler.config.k8s.io/v1beta3 kind: KubeSchedulerConfiguration profiles: -# ... - pluginConfig: - - name: RequestedToCapacityRatio - args: - shape: - - utilization: 0 - score: 10 - - utilization: 100 - score: 0 - resources: - - name: intel.com/foo - weight: 3 - - name: intel.com/bar - weight: 5 +- pluginConfig: + - args: + scoringStrategy: + resources: + - name: intel.com/foo + weight: 3 + - name: intel.com/bar + weight: 3 + requestedToCapacityRatioParam: + shape: + - utilization: 0 + score: 0 + - utilization: 100 + score: 10 + type: RequestedToCapacityRatio + name: NodeResourcesFit ``` Referencing the `KubeSchedulerConfiguration` file with the kube-scheduler flag `--config=/path/to/config/file` will pass the configuration to the scheduler. -**This feature is disabled by default** +To learn more about other parameters and their default configuration, see the API documentation for +[`NodeResourcesFitArgs`](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs). -### Tuning the Priority Function +### Tuning the score function -`shape` is used to specify the behavior of the -`RequestedToCapacityRatioPriority` function. +`shape` is used to specify the behavior of the `RequestedToCapacityRatio` function. ```yaml shape: diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md index 030f28e7d1887..ff7718a1f3fe4 100644 --- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -15,8 +15,7 @@ is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attrac a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a hard requirement). _Taints_ are the opposite -- they allow a node to repel a set of pods. -_Tolerations_ are applied to pods, and allow (but do not require) the pods to schedule -onto nodes with matching taints. +_Tolerations_ are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also [evaluates other parameters](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/) as part of its function. Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. 
One or more taints are applied to a node; this diff --git a/content/en/docs/concepts/security/controlling-access.md b/content/en/docs/concepts/security/controlling-access.md index e7ba78e1c4cff..04b13a82c5f29 100644 --- a/content/en/docs/concepts/security/controlling-access.md +++ b/content/en/docs/concepts/security/controlling-access.md @@ -4,6 +4,7 @@ reviewers: - lavalamp title: Controlling Access to the Kubernetes API content_type: concept +weight: 50 --- diff --git a/content/en/docs/concepts/security/multi-tenancy.md b/content/en/docs/concepts/security/multi-tenancy.md new file mode 100755 index 0000000000000..8db2ad87c1c5c --- /dev/null +++ b/content/en/docs/concepts/security/multi-tenancy.md @@ -0,0 +1,271 @@ +--- +title: Multi-tenancy +content_type: concept +weight: 70 +--- + + + +This page provides an overview of available configuration options and best practices for cluster multi-tenancy. + +Sharing clusters saves costs and simplifies administration. However, sharing clusters also presents challenges such as security, fairness, and managing _noisy neighbors_. + +Clusters can be shared in many ways. In some cases, different applications may run in the same cluster. In other cases, multiple instances of the same application may run in the same cluster, one for each end user. All these types of sharing are frequently described using the umbrella term _multi-tenancy_. + +While Kubernetes does not have first-class concepts of end users or tenants, it provides several features to help manage different tenancy requirements. These are discussed below. + + +## Use cases + +The first step to determining how to share your cluster is understanding your use case, so you can evaluate the patterns and tools available. In general, multi-tenancy in Kubernetes clusters falls into two broad categories, though many variations and hybrids are also possible. + +### Multiple teams + +A common form of multi-tenancy is to share a cluster between multiple teams within an organization, each of whom may operate one or more workloads. These workloads frequently need to communicate with each other, and with other workloads located on the same or different clusters. + +In this scenario, members of the teams often have direct access to Kubernetes resources via tools such as `kubectl`, or indirect access through GitOps controllers or other types of release automation tools. There is often some level of trust between members of different teams, but Kubernetes policies such as RBAC, quotas, and network policies are essential to safely and fairly share clusters. + +### Multiple customers + +The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor running multiple instances of a workload for customers. This business model is so strongly associated with this deployment style that many people call it "SaaS tenancy." However, a better term might be "multi-customer tenancy,” since SaaS vendors may also use other deployment models, and this deployment model can also be used outside of SaaS. + + +In this scenario, the customers do not have access to the cluster; Kubernetes is invisible from their perspective and is only used by the vendor to manage the workloads. Cost optimization is frequently a critical concern, and Kubernetes policies are used to ensure that the workloads are strongly isolated from each other. + + +## Terminology + +### Tenants + +When discussing multi-tenancy in Kubernetes, there is no single definition for a "tenant". 
Rather, the definition of a tenant will vary depending on whether multi-team or multi-customer tenancy is being discussed. + +In multi-team usage, a tenant is typically a team, where each team typically deploys a small number of workloads that scales with the complexity of the service. However, the definition of "team" may itself be fuzzy, as teams may be organized into higher-level divisions or subdivided into smaller teams. + + +By contrast, if each team deploys dedicated workloads for each new client, they are using a multi-customer model of tenancy. In this case, a "tenant" is simply a group of users who share a single workload. This may be as large as an entire company, or as small as a single team at that company. + +In many cases, the same organization may use both definitions of "tenants" in different contexts. For example, a platform team may offer shared services such as security tools and databases to multiple internal “customers” and a SaaS vendor may also have multiple teams sharing a development cluster. Finally, hybrid architectures are also possible, such as a SaaS provider using a combination of per-customer workloads for sensitive data, combined with multi-tenant shared services. + + +{{< figure src="/images/docs/multi-tenancy.png" title="A cluster showing coexisting tenancy models" class="diagram-large" >}} + + +### Isolation + +There are several ways to design and build multi-tenant solutions with Kubernetes. Each of these methods comes with its own set of tradeoffs that impact the isolation level, implementation effort, operational complexity, and cost of service. + + +A Kubernetes cluster consists of a control plane which runs Kubernetes software, and a data plane consisting of worker nodes where tenant workloads are executed as pods. Tenant isolation can be applied in both the control plane and the data plane based on organizational requirements. + +The level of isolation offered is sometimes described using terms like “hard” multi-tenancy, which implies strong isolation, and “soft” multi-tenancy, which implies weaker isolation. In particular, "hard" multi-tenancy is often used to describe cases where the tenants do not trust each other, often from security and resource sharing perspectives (e.g. guarding against attacks such as data exfiltration or DoS). Since data planes typically have much larger attack surfaces, "hard" multi-tenancy often requires extra attention to isolating the data-plane, though control plane isolation also remains critical. + +However, the terms "hard" and "soft" can often be confusing, as there is no single definition that will apply to all users. Rather, "hardness" or "softness" is better understood as a broad spectrum, with many different techniques that can be used to maintain different types of isolation in your clusters, based on your requirements. + + +In more extreme cases, it may be easier or necessary to forgo any cluster-level sharing at all and assign each tenant their dedicated cluster, possibly even running on dedicated hardware if VMs are not considered an adequate security boundary. This may be easier with managed Kubernetes clusters, where the overhead of creating and operating clusters is at least somewhat taken on by a cloud provider. The benefit of stronger tenant isolation must be evaluated against the cost and complexity of managing multiple clusters. The [Multi-cluster SIG](https://git.k8s.io/community/sig-multicluster/README.md) is responsible for addressing these types of use cases. 
+ + + +The remainder of this page focuses on isolation techniques used for shared Kubernetes clusters. However, even if you are considering dedicated clusters, it may be valuable to review these recommendations, as it will give you the flexibility to shift to shared clusters in the future if your needs or capabilities change. + + +## Control plane isolation + +Control plane isolation ensures that different tenants cannot access or affect each others' Kubernetes API resources. + +### Namespaces + +In Kubernetes, a {{< glossary_tooltip text="Namespace" term_id="namespace" >}} provides a mechanism for isolating groups of API resources within a single cluster. This isolation has two key dimensions: + +1. Object names within a namespace can overlap with names in other namespaces, similar to files in folders. This allows tenants to name their resources without having to consider what other tenants are doing. + +2. Many Kubernetes security policies are scoped to namespaces. For example, RBAC Roles and Network Policies are namespace-scoped resources. Using RBAC, Users and Service Accounts can be restricted to a namespace. + +In a multi-tenant environment, a Namespace helps segment a tenant's workload into a logical and distinct management unit. In fact, a common practice is to isolate every workload in its own namespace, even if multiple workloads are operated by the same tenant. This ensures that each workload has its own identity and can be configured with an appropriate security policy. + +The namespace isolation model requires configuration of several other Kubernetes resources, networking plugins, and adherence to security best practices to properly isolate tenant workloads. These considerations are discussed below. + +### Access controls + +The most important type of isolation for the control plane is authorization. If teams or their workloads can access or modify each others' API resources, they can change or disable all other types of policies thereby negating any protection those policies may offer. As a result, it is critical to ensure that each tenant has the appropriate access to only the namespaces they need, and no more. This is known as the "Principle of Least Privilege." + + +Role-based access control (RBAC) is commonly used to enforce authorization in the Kubernetes control plane, for both users and workloads (service accounts). [Roles](/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) and [role bindings](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) are Kubernetes objects that are used at a namespace level to enforce access control in your application; similar objects exist for authorizing access to cluster-level objects, though these are less useful for multi-tenant clusters. + +In a multi-team environment, RBAC must be used to restrict tenants' access to the appropriate namespaces, and ensure that cluster-wide resources can only be accessed or modified by privileged users such as cluster administrators. + +If a policy ends up granting a user more permissions than they need, this is likely a signal that the namespace containing the affected resources should be refactored into finer-grained namespaces. Namespace management tools may simplify the management of these finer-grained namespaces by applying common RBAC policies to different namespaces, while still allowing fine-grained policies where necessary. + +### Quotas + +Kubernetes workloads consume node resources, like CPU and memory. 
In a multi-tenant environment, you can use +[Resource Quotas](/docs/concepts/policy/resource-quotas/) to manage resource usage of tenant workloads. +For the multiple teams use case, where tenants have access to the Kubernetes API, you can use resource quotas +to limit the number of API resources (for example: the number of Pods, or the number of ConfigMaps) +that a tenant can create. Limits on object count ensure fairness and aim to avoid _noisy neighbor_ issues from +affecting other tenants that share a control plane. + +Resource quotas are namespaced objects. By mapping tenants to namespaces, cluster admins can use quotas to ensure that a tenant cannot monopolize a cluster's resources or overwhelm its control plane. Namespace management tools simplify the administration of quotas. In addition, while Kubernetes quotas only apply within a single namespace, some namespace management tools allow groups of namespaces to share quotas, giving administrators far more flexibility with less effort than built-in quotas. + +Quotas prevent a single tenant from consuming more than their allocated share of resources, hence minimizing the “noisy neighbor” issue, where one tenant negatively impacts the performance of other tenants' workloads. + +When you apply a quota to a namespace, Kubernetes requires you to also specify resource requests and limits for each container. Limits are the upper bound for the amount of resources that a container can consume. Containers that attempt to consume resources that exceed the configured limits will either be throttled or killed, based on the resource type. When resource requests are set lower than limits, each container is guaranteed the requested amount, but there may still be some potential for impact across workloads. + +Quotas cannot protect against all kinds of resource sharing, such as network traffic. Node isolation (described below) may be a better solution for this problem. + +## Data Plane Isolation + +Data plane isolation ensures that pods and workloads for different tenants are sufficiently isolated. + +### Network isolation + +By default, all pods in a Kubernetes cluster are allowed to communicate with each other, and all network traffic is unencrypted. This can lead to security vulnerabilities where traffic is accidentally or maliciously sent to an unintended destination, or is intercepted by a workload on a compromised node. + +Pod-to-pod communication can be controlled using [Network Policies](/docs/concepts/services-networking/network-policies/), which restrict communication between pods using namespace labels or IP address ranges. In a multi-tenant environment where strict network isolation between tenants is required, starting with a default policy that denies communication between pods is recommended, along with another rule that allows all pods to query the DNS server for name resolution. With such a default policy in place, you can begin adding more permissive rules that allow for communication within a namespace. This scheme can be further refined as required. Note that this only applies to pods within a single control plane; pods that belong to different virtual control planes cannot talk to each other via Kubernetes networking. + +Namespace management tools may simplify the creation of default or common network policies. In addition, some of these tools allow you to enforce a consistent set of namespace labels across your cluster, ensuring that they are a trusted basis for your policies.
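The default-deny starting point described above could look roughly like the sketch below. The namespace name is an assumption, and the second policy shows the kind of companion rule that keeps DNS lookups working once everything else is blocked.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-a           # illustrative tenant namespace
spec:
  podSelector: {}               # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:                    # no 'to' clause, so any destination is allowed on these ports
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

More permissive rules, such as one allowing traffic between pods in the same namespace, can then be layered on top of this baseline.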
+ +{{< warning >}} +Network policies require a [CNI plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni) that supports the implementation of network policies. Otherwise, NetworkPolicy resources will be ignored. +{{< /warning >}} + +More advanced network isolation may be provided by service meshes, which provide OSI Layer 7 policies based on workload identity, in addition to namespaces. These higher-level policies can make it easier to manage namespace-based multi-tenancy, especially when multiple namespaces are dedicated to a single tenant. They frequently also offer encryption using mutual TLS, protecting your data even in the presence of a compromised node, and work across dedicated or virtual clusters. However, they can be significantly more complex to manage and may not be appropriate for all users. + +### Storage isolation + +Kubernetes offers several types of volumes that can be used as persistent storage for workloads. For security and data-isolation, [dynamic volume provisioning](/docs/concepts/storage/dynamic-provisioning/) is recommended, and volume types that use node resources should be avoided. + +[StorageClasses](/docs/concepts/storage/storage-classes/) allow you to describe custom "classes" of storage offered by your cluster, based on quality-of-service levels, backup policies, or custom policies determined by the cluster administrators. + +Pods can request storage using a [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/). A PersistentVolumeClaim is a namespaced resource, which enables isolating portions of the storage system and dedicating it to tenants within the shared Kubernetes cluster. However, it is important to note that a PersistentVolume is a cluster-wide resource and has a lifecycle independent of workloads and namespaces. + +For example, you can configure a separate StorageClass for each tenant and use this to strengthen isolation. +If a StorageClass is shared, you should set a [reclaim policy of `Delete`](/docs/concepts/storage/storage-classes/#reclaim-policy) +to ensure that a PersistentVolume cannot be reused across different namespaces. + +### Sandboxing containers + +{{% thirdparty-content %}} + +Kubernetes pods are composed of one or more containers that execute on worker nodes. Containers utilize OS-level virtualization and hence offer a weaker isolation boundary than virtual machines that utilize hardware-based virtualization. + +In a shared environment, unpatched vulnerabilities in the application and system layers can be exploited by attackers for container breakouts and remote code execution that allow access to host resources. In some applications, like a Content Management System (CMS), customers may be allowed to upload and execute untrusted scripts or code. In either case, mechanisms to further isolate and protect workloads using strong isolation are desirable. + +Sandboxing provides a way to isolate workloads running in a shared cluster. It typically involves running each pod in a separate execution environment such as a virtual machine or a userspace kernel. Sandboxing is often recommended when you are running untrusted code, where workloads are assumed to be malicious. Part of the reason this type of isolation is necessary is because containers are processes running on a shared kernel; they mount file systems like /sys and /proc from the underlying host, making them less secure than an application that runs on a virtual machine which has its own kernel.
While controls such as seccomp, AppArmor, and SELinux can be used to strengthen the security of containers, it is hard to apply a universal set of rules to all workloads running in a shared cluster. Running workloads in a sandbox environment helps to insulate the host from container escapes, where an attacker exploits a vulnerability to gain access to the host system and all the processes/files running on that host. + +Virtual machines and userspace kernels are two popular approaches to sandboxing. The following sandboxing implementations are available: +* [gVisor](https://gvisor.dev/) intercepts syscalls from containers and runs them through a userspace kernel, written in Go, with limited access to the underlying host. +* [Kata Containers](https://katacontainers.io/) is an OCI compliant runtime that allows you to run containers in a VM. The hardware virtualization available in Kata offers an added layer of security for containers running untrusted code. + +### Node Isolation + +Node isolation is another technique that you can use to isolate tenant workloads from each other. With node isolation, a set of nodes is dedicated to running pods from a particular tenant and co-mingling of tenant pods is prohibited. This configuration reduces the noisy tenant issue, as all pods running on a node will belong to a single tenant. The risk of information disclosure is slightly lower with node isolation because an attacker that manages to escape from a container will only have access to the containers and volumes mounted to that node. + +Although workloads from different tenants are running on different nodes, it is important to be aware that the kubelet and (unless using virtual control planes) the API service are still shared services. A skilled attacker could use the permissions assigned to the kubelet or other pods running on the node to move laterally within the cluster and gain access to tenant workloads running on other nodes. If this is a major concern, consider implementing compensating controls such as seccomp, AppArmor, or SELinux, or explore using sandboxed containers or creating separate clusters for each tenant. + +Node isolation is a little easier to reason about from a billing standpoint than sandboxing containers since you can charge back per node rather than per pod. It also has fewer compatibility and performance issues and may be easier to implement than sandboxing containers. For example, nodes for each tenant can be configured with taints so that only pods with the corresponding toleration can run on them. A mutating webhook could then be used to automatically add tolerations and node affinities to pods deployed into tenant namespaces so that they run on a specific set of nodes designated for that tenant. + +Node isolation can be implemented using [pod node selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) or a [Virtual Kubelet](https://github.com/virtual-kubelet). + +## Additional Considerations + +This section discusses other Kubernetes constructs and patterns that are relevant for multi-tenancy. + +### API Priority and Fairness + +[API priority and fairness](/docs/concepts/cluster-administration/flow-control/) is a Kubernetes feature that allows you to assign a priority to certain pods running within the cluster. When an application calls the Kubernetes API, the API server evaluates the priority assigned to the pod. Calls from pods with higher priority are fulfilled before those with a lower priority.
When contention is high, lower priority calls can be queued until the server is less busy, or you can reject the requests. + +Using API priority and fairness will not be very common in SaaS environments unless you are allowing customers to run applications that interface with the Kubernetes API, e.g. a controller. + +### Quality-of-Service (QoS) {#qos} + +When you’re running a SaaS application, you may want the ability to offer different Quality-of-Service (QoS) tiers of service to different tenants. For example, you may have a freemium service that comes with fewer performance guarantees and features, and a for-fee service tier with specific performance guarantees. Fortunately, there are several Kubernetes constructs that can help you accomplish this within a shared cluster, including network QoS, storage classes, and pod priority and preemption. The idea with each of these is to provide tenants with the quality of service that they paid for. Let’s start by looking at networking QoS. + +Typically, all pods on a node share a network interface. Without network QoS, some pods may consume an unfair share of the available bandwidth at the expense of other pods. The Kubernetes [bandwidth plugin](https://www.cni.dev/plugins/current/meta/bandwidth/) creates an [extended resource](/docs/concepts/configuration/manage-resources-containers/#extended-resources) for networking that allows you to use Kubernetes resource constructs, i.e. requests/limits, to apply rate limits to pods by using Linux tc queues. Be aware that the plugin is considered experimental as per the [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping) documentation and should be thoroughly tested before use in production environments. + +For storage QoS, you will likely want to create different storage classes or profiles with different performance characteristics. Each storage profile can be associated with a different tier of service that is optimized for different workloads such as IO, redundancy, or throughput. Additional logic might be necessary to allow the tenant to associate the appropriate storage profile with their workload. + +Finally, there’s [pod priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) where you can assign priority values to pods. When scheduling pods, the scheduler will try evicting pods with lower priority when there are insufficient resources to schedule pods that are assigned a higher priority. If you have a use case where tenants have different service tiers in a shared cluster, e.g. free and paid, you may want to give higher priority to certain tiers using this feature. + +### DNS + +Kubernetes clusters include a Domain Name System (DNS) service to provide translations from names to IP addresses, for all Services and Pods. By default, the Kubernetes DNS service allows lookups across all namespaces in the cluster. + +In multi-tenant environments where tenants can access pods and other Kubernetes resources, or where +stronger isolation is required, it may be necessary to prevent pods from looking up services in other +Namespaces. +You can restrict cross-namespace DNS lookups by configuring security rules for the DNS service. +For example, CoreDNS (the default DNS service for Kubernetes) can leverage Kubernetes metadata +to restrict queries to Pods and Services within a namespace.
For more information, read an +[example](https://github.com/coredns/policy#kubernetes-metadata-multi-tenancy-policy) of configuring +this within the CoreDNS documentation. + +When a [Virtual Control Plane per tenant](#virtual-control-plane-per-tenant) model is used, a DNS service must be configured per tenant, or a multi-tenant DNS service must be used. Here is an example of a [customized version of CoreDNS](https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/tenant-dns.md) that supports multiple tenants. + +### Operators + +[Operators](/docs/concepts/extend-kubernetes/operator/) are Kubernetes controllers that manage applications. Operators can simplify the management of multiple instances of an application, like a database service, which makes them a common building block in the multi-consumer (SaaS) multi-tenancy use case. + +Operators used in a multi-tenant environment should follow a stricter set of guidelines. Specifically, the Operator should: +* Support creating resources within different tenant namespaces, rather than just in the namespace in which the Operator is deployed. +* Ensure that the Pods are configured with resource requests and limits, to ensure scheduling and fairness. +* Support configuration of Pods for data-plane isolation techniques such as node isolation and sandboxed containers. + +## Implementations + +{{% thirdparty-content %}} + +There are two primary ways to share a Kubernetes cluster for multi-tenancy: using Namespaces (i.e. a Namespace per tenant) or by virtualizing the control plane (i.e. Virtual control plane per tenant). + +In both cases, data plane isolation, and management of additional considerations such as API Priority and Fairness, is also recommended. + +Namespace isolation is well-supported by Kubernetes, has a negligible resource cost, and provides mechanisms to allow tenants to interact appropriately, such as by allowing service-to-service communication. However, it can be difficult to configure, and doesn't apply to Kubernetes resources that can't be namespaced, such as Custom Resource Definitions, Storage Classes, and Webhooks. + +Control plane virtualization allows for isolation of non-namespaced resources at the cost of somewhat higher resource usage and more difficult cross-tenant sharing. It is a good option when namespace isolation is insufficient but dedicated clusters are undesirable, due to the high cost of maintaining them (especially on-prem) or due to their higher overhead and lack of resource sharing. However, even within a virtualized control plane, you will likely see benefits by using namespaces as well. + +The two options are discussed in more detail in the following sections: + +### Namespace per tenant + +As previously mentioned, you should consider isolating each workload in its own namespace, even if you are using dedicated clusters or virtualized control planes. This ensures that each workload only has access to its own resources, such as Config Maps and Secrets, and allows you to tailor dedicated security policies for each workload. In addition, it is a best practice to give each namespace a name that is unique across your entire fleet (i.e., even if they are in separate clusters), as this gives you the flexibility to switch between dedicated and shared clusters in the future, or to use multi-cluster tooling such as service meshes.
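As a hedged sketch of the namespace-per-workload pattern described above, the manifests below create one namespace for a single tenant workload and grant that tenant's group admin rights only inside it. The namespace, label, and group names are illustrative assumptions.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a-billing          # one namespace per tenant workload (illustrative)
  labels:
    tenant: tenant-a              # label that tenant-wide policies can select on
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-admins
  namespace: tenant-a-billing
subjects:
  - kind: Group
    name: tenant-a-team           # assumed group supplied by your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                     # built-in aggregated role, granted only within this namespace
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is a RoleBinding rather than a ClusterRoleBinding, the `admin` permissions apply only inside `tenant-a-billing`, which keeps the tenant's access scoped to its own namespaces.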
+ +Conversely, there are also advantages to assigning namespaces at the tenant level, not just the workload level, since there are often policies that apply to all workloads owned by a single tenant. However, this raises its own problems. Firstly, this makes it difficult or impossible to customize policies to individual workloads, and secondly, it may be challenging to come up with a single level of "tenancy" that should be given a namespace. For example, an organization may have divisions, teams, and subteams - which should be assigned a namespace? + +To solve this, Kubernetes provides the [Hierarchical Namespace Controller (HNC)](https://github.com/kubernetes-sigs/hierarchical-namespaces), which allows you to organize your namespaces into hierarchies, and share certain policies and resources between them. It also helps you manage namespace labels, namespace lifecycles, and delegated management, and share resource quotas across related namespaces. These capabilities can be useful in both multi-team and multi-customer scenarios. + +Other projects that provide similar capabilities and aid in managing namespaced resources are listed below: + +#### Multi-team tenancy + +* [Capsule](https://github.com/clastix/capsule) +* [Kiosk](https://github.com/loft-sh/kiosk) + +#### Multi-customer tenancy + +* [Kubeplus](https://github.com/cloud-ark/kubeplus) + +#### Policy engines + +Policy engines provide features to validate and generate tenant configurations: + +* [Kyverno](https://kyverno.io/) +* [OPA/Gatekeeper](https://github.com/open-policy-agent/gatekeeper) + +### Virtual control plane per tenant + +Another form of control-plane isolation is to use Kubernetes extensions to provide each tenant a virtual control-plane that enables segmentation of cluster-wide API resources. [Data plane isolation](#data-plane-isolation) techniques can be used with this model to securely manage worker nodes across tenants. + +The virtual control plane based multi-tenancy model extends namespace-based multi-tenancy by providing each tenant with dedicated control plane components, and hence complete control over cluster-wide resources and add-on services. Worker nodes are shared across all tenants, and are managed by a Kubernetes cluster that is normally inaccessible to tenants. This cluster is often referred to as a _super-cluster_ (or sometimes as a _host-cluster_). Since a tenant’s control-plane is not directly associated with underlying compute resources, it is referred to as a _virtual control plane_. + +A virtual control plane typically consists of the Kubernetes API server, the controller manager, and the etcd data store. It interacts with the super-cluster via a metadata synchronization controller which coordinates changes across tenant control planes and the control plane of the super-cluster. + +By using per-tenant dedicated control planes, most of the isolation problems due to sharing one API server among all tenants are solved. Examples include noisy neighbors in the control plane, uncontrollable blast radius of policy misconfigurations, and conflicts between cluster scope objects such as webhooks and CRDs. Hence, the virtual control plane model is particularly suitable for cases where each tenant requires access to a Kubernetes API server and expects the full cluster manageability. + +The improved isolation comes at the cost of running and maintaining an individual virtual control plane per tenant.
In addition, per-tenant control planes do not solve isolation problems in the data plane, such as node-level noisy neighbors or security threats. These must still be addressed separately. + +The Kubernetes [Cluster API - Nested (CAPN)](https://github.com/kubernetes-sigs/cluster-api-provider-nested/tree/main/virtualcluster) project provides an implementation of virtual control planes. + +#### Other implementations +* [Kamaji](https://github.com/clastix/kamaji) +* [vcluster](https://github.com/loft-sh/vcluster) + diff --git a/content/en/docs/concepts/security/pod-security-admission.md b/content/en/docs/concepts/security/pod-security-admission.md index 1e452ea6af8be..60f87c04d9e09 100644 --- a/content/en/docs/concepts/security/pod-security-admission.md +++ b/content/en/docs/concepts/security/pod-security-admission.md @@ -37,8 +37,8 @@ To use this mechanism, your cluster must enforce Pod Security admission. ### Built-in Pod Security admission enforcement -In Kubernetes v{{< skew currentVersion >}}, the `PodSecurity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -is a beta feature and is enabled by default. You must have this feature gate enabled. +From Kubernetes v1.23, the `PodSecurity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is a beta feature and is enabled by default. +This page is part of the documentation for Kubernetes v{{< skew currentVersion >}}. If you are running a different version of Kubernetes, consult the documentation for that release. ### Alternative: installing the `PodSecurity` admission webhook {#webhook} @@ -102,7 +102,7 @@ For each mode, there are two labels that determine the policy used: pod-security.kubernetes.io/: # Optional: per-mode version label that can be used to pin the policy to the -# version that shipped with a given Kubernetes minor version (for example v{{< skew latestVersion >}}). +# version that shipped with a given Kubernetes minor version (for example v{{< skew currentVersion >}}). # # MODE must be one of `enforce`, `audit`, or `warn`. # VERSION must be a valid Kubernetes minor version, or `latest`. diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md index 393468ac745af..47e93d3e9ea89 100644 --- a/content/en/docs/concepts/security/pod-security-standards.md +++ b/content/en/docs/concepts/security/pod-security-standards.md @@ -29,10 +29,9 @@ This guide outlines the requirements of each policy. **The _Privileged_ policy is purposely-open, and entirely unrestricted.** This type of policy is typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users. -The Privileged policy is defined by an absence of restrictions. For allow-by-default enforcement -mechanisms (such as gatekeeper), the Privileged policy may be an absence of applied constraints -rather than an instantiated profile. In contrast, for a deny-by-default mechanism (such as Pod -Security Policy) the Privileged policy should enable all controls (disable all restrictions). +The Privileged policy is defined by an absence of restrictions. Allow-by-default +mechanisms (such as gatekeeper) may be Privileged by default. In contrast, for a deny-by-default mechanism (such as Pod +Security Policy) the Privileged policy should disable all restrictions. ### Baseline @@ -58,7 +57,7 @@ fail validation. HostProcess -

Windows pods offer the ability to run HostProcess containers which enables privileged access to the Windows node. Privileged access to the host is disallowed in the baseline policy. HostProcess pods are an alpha feature as of Kubernetes v1.22.

+

Windows pods offer the ability to run HostProcess containers, which enables privileged access to the Windows node. Privileged access to the host is disallowed in the baseline policy. {{< feature-state for_k8s_version="v1.23" state="beta" >}}

Restricted Fields

  • spec.securityContext.windowsOptions.hostProcess
  • @@ -458,6 +457,16 @@ of individual policies are not defined here. - {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}} - {{< example file="policy/restricted-psp.yaml" >}}Restricted{{< /example >}} +### Alternatives + +{{% thirdparty-content %}} + +Other alternatives for enforcing policies are being developed in the Kubernetes ecosystem, such as: +- [Kubewarden](https://github.com/kubewarden) +- [Kyverno](https://kyverno.io/policies/pod-security/) +- [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) + + ## FAQ ### Why isn't there a profile between privileged and baseline? @@ -481,14 +490,6 @@ as well as other related parameters outside the Security Context. As of July 202 [Pod Security Policies](/docs/concepts/security/pod-security-policy/) are deprecated in favor of the built-in [Pod Security Admission Controller](/docs/concepts/security/pod-security-admission/). -{{% thirdparty-content %}} - -Other alternatives for enforcing security profiles are being developed in the Kubernetes -ecosystem, such as: -- [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper). -- [Kubewarden](https://github.com/kubewarden). -- [Kyverno](https://kyverno.io/policies/pod-security/). - ### What profiles should I apply to my Windows Pods? Windows in Kubernetes has some limitations and differentiators from standard Linux-based diff --git a/content/en/docs/concepts/security/rbac-good-practices.md new file mode 100644 index 0000000000000..cfcc8b3cb93fb --- /dev/null +++ b/content/en/docs/concepts/security/rbac-good-practices.md @@ -0,0 +1,180 @@ +--- +reviewers: +title: Role Based Access Control Good Practices +description: > + Principles and practices for good RBAC design for cluster operators. +content_type: concept +weight: 60 +--- + + + +Kubernetes {{< glossary_tooltip text="RBAC" term_id="rbac" >}} is a key security control +to ensure that cluster users and workloads have only the access to resources required to +execute their roles. It is important to ensure that, when designing permissions for cluster +users, the cluster administrator understands the areas where privilege escalation could occur, +to reduce the risk of excessive access leading to security incidents. + +The good practices laid out here should be read in conjunction with the general [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update). + + + +## General good practice + +### Least privilege + +Ideally, minimal RBAC rights should be assigned to users and service accounts. Only permissions +explicitly required for their operation should be used. Whilst each cluster will be different, +some general rules that can be applied are: + + - Assign permissions at the namespace level where possible. Use RoleBindings as opposed to + ClusterRoleBindings to give users rights only within a specific namespace. + - Avoid providing wildcard permissions when possible, especially to all resources. + As Kubernetes is an extensible system, providing wildcard access gives rights + not just to all object types presently in the cluster, but also to any object types + which are created in the future. + - Administrators should not use `cluster-admin` accounts except where specifically needed.
+ Providing a low privileged account with [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation) + can avoid accidental modification of cluster resources. + - Avoid adding users to the `system:masters` group. Any user who is a member of this group + bypasses all RBAC rights checks and will always have unrestricted superuser access, which cannot be + revoked by removing RoleBindings or ClusterRoleBindings. As an aside, if a cluster is + using an authorization webhook, membership of this group also bypasses that webhook (requests + from users who are members of that group are never sent to the webhook). + +### Minimize distribution of privileged tokens + +Ideally, pods shouldn't be assigned service accounts that have been granted powerful permissions (for example, any of the rights listed under +[privilege escalation risks](#privilege-escalation-risks)). +In cases where a workload requires powerful permissions, consider the following practices: + + - Limit the number of nodes running powerful pods. Ensure that any DaemonSets you run + are necessary and are run with least privilege to limit the blast radius of container escapes. + - Avoid running powerful pods alongside untrusted or publicly-exposed ones. Consider using + [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/), [NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) to ensure + pods don't run alongside untrusted or less-trusted Pods. Pay special attention to + situations where less-trustworthy Pods are not meeting the **Restricted** Pod Security Standard. + +### Hardening + +Kubernetes defaults to providing access which may not be required in every cluster. Reviewing +the RBAC rights provided by default can provide opportunities for security hardening. +In general, changes should not be made to rights provided to `system:` accounts. Some options +to harden cluster rights exist: + +- Review bindings for the `system:unauthenticated` group and remove where possible, as this gives + access to anyone who can contact the API server at a network level. +- Avoid the default auto-mounting of service account tokens by setting + `automountServiceAccountToken: false`. For more details, see + [using default service account token](/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server). + Setting this value for a Pod will overwrite the service account setting; workloads + which require service account tokens can still mount them. + +### Periodic review + +It is vital to periodically review the Kubernetes RBAC settings for redundant entries and +possible privilege escalations. +If an attacker is able to create a user account with the same name as a deleted user, +they can automatically inherit all the rights previously assigned to that deleted user. + +## Kubernetes RBAC - privilege escalation risks {#privilege-escalation-risks} + +Within Kubernetes RBAC there are a number of privileges which, if granted, can allow a user or a service account +to escalate their privileges in the cluster or affect systems outside the cluster. + +This section is intended to provide visibility of the areas where cluster operators +should take care, to ensure that they do not inadvertently allow for more access to clusters than intended.
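Before moving on to the individual risk areas below, it can help to see what a tightly scoped grant looks like in practice. The following is a minimal sketch of a namespaced RoleBinding that follows the least-privilege guidance above; the binding name, namespace, and user are hypothetical:

```yaml
# Illustrative sketch only: the binding name, namespace, and user are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-read-only   # hypothetical binding name
  namespace: app-team-1      # rights apply only inside this namespace
subjects:
- kind: User
  name: jane@example.com     # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                 # built-in read-only role, granted only within the namespace
  apiGroup: rbac.authorization.k8s.io
```

Because this is a RoleBinding rather than a ClusterRoleBinding, the `view` permissions do not extend beyond the `app-team-1` namespace.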
+ +### Listing secrets + +It is generally clear that allowing `get` access on Secrets will allow a user to read their contents. +It is also important to note that `list` and `watch` access also effectively allow for users to reveal the Secret contents. +For example, when a List response is returned (for example, via `kubectl get secrets -A -o yaml`), the response +includes the contents of all Secrets. + +### Workload creation + +Users who are able to create workloads (either Pods, or +[workload resources](/docs/concepts/workloads/controllers/) that manage Pods) will +be able to gain access to the underlying node unless restrictions based on the Kubernetes +[Pod Security Standards](/docs/concepts/security/pod-security-standards/) are in place. + +Users who can run privileged Pods can use that access to gain node access and potentially to +further elevate their privileges. Where you do not fully trust a user or other principal +with the ability to create suitably secure and isolated Pods, you should enforce either the +**Baseline** or **Restricted** Pod Security Standard. +You can use [Pod Security admission](/docs/concepts/security/pod-security-admission/) +or other (third party) mechanisms to implement that enforcement. + +You can also use the deprecated [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) mechanism +to restrict users' abilities to create privileged Pods (N.B. PodSecurityPolicy is scheduled for removal +in version 1.25). + +Creating a workload in a namespace also grants indirect access to Secrets in that namespace. +Creating a pod in kube-system or a similarly privileged namespace can grant a user access to +Secrets they would not have through RBAC directly. + +### Persistent volume creation + +As noted in the [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems) documentation, access to create PersistentVolumes can allow for escalation of access to the underlying host. Where access to persistent storage is required trusted administrators should create +PersistentVolumes, and constrained users should use PersistentVolumeClaims to access that storage. + +### Access to `proxy` subresource of Nodes + +Users with access to the proxy sub-resource of node objects have rights to the Kubelet API, +which allows for command execution on every pod on the node(s) which they have rights to. +This access bypasses audit logging and admission control, so care should be taken before +granting rights to this resource. + +### Escalate verb + +Generally the RBAC system prevents users from creating clusterroles with more rights than +they possess. The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update), +users with this right can effectively escalate their privileges. + +### Bind verb + +Similar to the `escalate` verb, granting users this right allows for bypass of Kubernetes +in-built protections against privilege escalation, allowing users to create bindings to +roles with rights they do not already have. + +### Impersonate verb + +This verb allows users to impersonate and gain the rights of other users in the cluster. +Care should be taken when granting it, to ensure that excessive permissions cannot be gained +via one of the impersonated accounts. 
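To illustrate how narrowly the `impersonate` verb can be scoped when it genuinely is required, the sketch below (with a hypothetical role name and username) allows impersonating a single named user rather than every user in the cluster:

```yaml
# Illustrative sketch only: the role name and username are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonate-jane-only
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
  resourceNames: ["jane@example.com"]   # restrict impersonation to one username
```

Dropping `resourceNames`, or additionally granting impersonation on `groups` (for example `system:masters`), makes the grant far more powerful and is exactly the kind of configuration the caution above refers to.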
+ +### CSRs and certificate issuing + +The CSR API allows for users with `create` rights to CSRs and `update` rights on `certificatesigningrequests/approval` +where the signer is `kubernetes.io/kube-apiserver-client` to create new client certificates +which allow users to authenticate to the cluster. Those client certificates can have arbitrary +names including duplicates of Kubernetes system components. This will effectively allow for privilege escalation. + +### Token request + +Users with `create` rights on `serviceaccounts/token` can create TokenRequests to issue +tokens for existing service accounts. + +### Control admission webhooks + +Users with control over `validatingwebhookconfigurations` or `mutatingwebhookconfigurations` +can control webhooks that can read any object admitted to the cluster, and in the case of +mutating webhooks, also mutate admitted objects. + + +## Kubernetes RBAC - denial of service risks {#denial-of-service-risks} + +### Object creation denial-of-service {#object-creation-dos} +Users who have rights to create objects in a cluster may be able to create sufficient large +objects to create a denial of service condition either based on the size or number of objects, as discussed in +[etcd used by Kubernetes is vulnerable to OOM attack](https://github.com/kubernetes/kubernetes/issues/107325). This may be +specifically relevant in multi-tenant clusters if semi-trusted or untrusted users +are allowed limited access to a system. + +One option for mitigation of this issue would be to use [resource quotas](/docs/concepts/policy/resource-quotas/#object-count-quota) +to limit the quantity of objects which can be created. + +## {{% heading "whatsnext" %}} +* To learn more about RBAC, see the [RBAC documentation](/docs/reference/access-authn-authz/rbac/). diff --git a/content/en/docs/concepts/security/windows-security.md b/content/en/docs/concepts/security/windows-security.md index 1341f38c59e4a..8c0704ac2bad4 100644 --- a/content/en/docs/concepts/security/windows-security.md +++ b/content/en/docs/concepts/security/windows-security.md @@ -6,7 +6,7 @@ reviewers: - perithompson title: Security For Windows Nodes content_type: concept -weight: 75 +weight: 40 --- @@ -51,5 +51,5 @@ Windows containers can also run as Active Directory identities by utilizing [Gro Linux-specific pod security context mechanisms (such as SELinux, AppArmor, Seccomp, or custom POSIX capabilities) are not supported on Windows nodes. -Privileged containers are [not supported](#compatibility-v1-pod-spec-containers-securitycontext) on Windows. +Privileged containers are [not supported](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext) on Windows. Instead [HostProcess containers](/docs/tasks/configure-pod-container/create-hostprocess-pod) can be used on Windows to perform many of the tasks performed by privileged containers on Linux. diff --git a/content/en/docs/concepts/services-networking/_index.md b/content/en/docs/concepts/services-networking/_index.md index ab1b784658bb8..b4f78610753c9 100644 --- a/content/en/docs/concepts/services-networking/_index.md +++ b/content/en/docs/concepts/services-networking/_index.md @@ -7,26 +7,25 @@ description: > ## The Kubernetes network model -Every [`Pod`](/docs/concepts/workloads/pods/) gets its own IP address. +Every [`Pod`](/docs/concepts/workloads/pods/) in a cluster gets its own unique cluster-wide IP address. 
This means you do not need to explicitly create links between `Pods` and you almost never need to deal with mapping container ports to host ports. This creates a clean, backwards-compatible model where `Pods` can be treated much like VMs or physical hosts from the perspectives of port allocation, -naming, service discovery, [load balancing](/docs/concepts/services-networking/ingress/#load-balancing), application configuration, -and migration. +naming, service discovery, [load balancing](/docs/concepts/services-networking/ingress/#load-balancing), +application configuration, and migration. Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies): - * pods on a [node](/docs/concepts/architecture/nodes/) can communicate with all pods on all nodes without NAT + * pods can communicate with all other pods on any other [node](/docs/concepts/architecture/nodes/) + without NAT * agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node Note: For those platforms that support `Pods` running in the host network (e.g. -Linux): - - * pods in the host network of a node can communicate with all pods on all - nodes without NAT +Linux), when pods are attached to the host network of a node they can still communicate +with all pods on all nodes without NAT. This model is not only less complex overall, but it is principally compatible with the desire for Kubernetes to enable low-friction porting of apps from VMs diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index 9ca11a34571cb..939269f8ec2c7 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -8,8 +8,8 @@ weight: 20 --- -Kubernetes creates DNS records for services and pods. You can contact -services with consistent DNS names instead of IP addresses. +Kubernetes creates DNS records for Services and Pods. You can contact +Services with consistent DNS names instead of IP addresses. @@ -25,20 +25,20 @@ Pod's own namespace and the cluster's default domain. ### Namespaces of Services -A DNS query may return different results based on the namespace of the pod making -it. DNS queries that don't specify a namespace are limited to the pod's -namespace. Access services in other namespaces by specifying it in the DNS query. +A DNS query may return different results based on the namespace of the Pod making +it. DNS queries that don't specify a namespace are limited to the Pod's +namespace. Access Services in other namespaces by specifying it in the DNS query. -For example, consider a pod in a `test` namespace. A `data` service is in +For example, consider a Pod in a `test` namespace. A `data` Service is in the `prod` namespace. -A query for `data` returns no results, because it uses the pod's `test` namespace. +A query for `data` returns no results, because it uses the Pod's `test` namespace. A query for `data.prod` returns the intended result, because it specifies the namespace. -DNS queries may be expanded using the pod's `/etc/resolv.conf`. Kubelet -sets this file for each pod. For example, a query for just `data` may be +DNS queries may be expanded using the Pod's `/etc/resolv.conf`. Kubelet +sets this file for each Pod. For example, a query for just `data` may be expanded to `data.test.svc.cluster.local`. 
The values of the `search` option are used to expand queries. To learn more about DNS queries, see [the `resolv.conf` manual page.](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html) @@ -49,7 +49,7 @@ search .svc.cluster.local svc.cluster.local cluster.local options ndots:5 ``` -In summary, a pod in the _test_ namespace can successfully resolve either +In summary, a Pod in the _test_ namespace can successfully resolve either `data.prod` or `data.prod.svc.cluster.local`. ### DNS Records @@ -70,14 +70,14 @@ For more up-to-date specification, see ### A/AAAA records "Normal" (not headless) Services are assigned a DNS A or AAAA record, -depending on the IP family of the service, for a name of the form +depending on the IP family of the Service, for a name of the form `my-svc.my-namespace.svc.cluster-domain.example`. This resolves to the cluster IP of the Service. "Headless" (without a cluster IP) Services are also assigned a DNS A or AAAA record, -depending on the IP family of the service, for a name of the form +depending on the IP family of the Service, for a name of the form `my-svc.my-namespace.svc.cluster-domain.example`. Unlike normal -Services, this resolves to the set of IPs of the pods selected by the Service. +Services, this resolves to the set of IPs of the Pods selected by the Service. Clients are expected to consume the set or else use standard round-robin selection from the set. @@ -87,36 +87,36 @@ SRV Records are created for named ports that are part of normal or [Headless Services](/docs/concepts/services-networking/service/#headless-services). For each named port, the SRV record would have the form `_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example`. -For a regular service, this resolves to the port number and the domain name: +For a regular Service, this resolves to the port number and the domain name: `my-svc.my-namespace.svc.cluster-domain.example`. -For a headless service, this resolves to multiple answers, one for each pod -that is backing the service, and contains the port number and the domain name of the pod +For a headless Service, this resolves to multiple answers, one for each Pod +that is backing the Service, and contains the port number and the domain name of the Pod of the form `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example`. ## Pods ### A/AAAA records -In general a pod has the following DNS resolution: +In general a Pod has the following DNS resolution: `pod-ip-address.my-namespace.pod.cluster-domain.example`. -For example, if a pod in the `default` namespace has the IP address 172.17.0.3, +For example, if a Pod in the `default` namespace has the IP address 172.17.0.3, and the domain name for your cluster is `cluster.local`, then the Pod has a DNS name: `172-17-0-3.default.pod.cluster.local`. -Any pods exposed by a Service have the following DNS resolution available: +Any Pods exposed by a Service have the following DNS resolution available: `pod-ip-address.service-name.my-namespace.svc.cluster-domain.example`. ### Pod's hostname and subdomain fields -Currently when a pod is created, its hostname is the Pod's `metadata.name` value. +Currently when a Pod is created, its hostname is the Pod's `metadata.name` value. The Pod spec has an optional `hostname` field, which can be used to specify the Pod's hostname. When specified, it takes precedence over the Pod's name to be -the hostname of the pod. For example, given a Pod with `hostname` set to +the hostname of the Pod. 
For example, given a Pod with `hostname` set to "`my-host`", the Pod will have its hostname set to "`my-host`". The Pod spec also has an optional `subdomain` field which can be used to specify @@ -173,14 +173,14 @@ spec: name: busybox ``` -If there exists a headless service in the same namespace as the pod and with +If there exists a headless Service in the same namespace as the Pod and with the same name as the subdomain, the cluster's DNS Server also returns an A or AAAA record for the Pod's fully qualified hostname. For example, given a Pod with the hostname set to "`busybox-1`" and the subdomain set to "`default-subdomain`", and a headless Service named "`default-subdomain`" in -the same namespace, the pod will see its own FQDN as +the same namespace, the Pod will see its own FQDN as "`busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`". DNS serves an -A or AAAA record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and +A or AAAA record at that name, pointing to the Pod's IP. Both Pods "`busybox1`" and "`busybox2`" can have their distinct A or AAAA records. The Endpoints object can specify the `hostname` for any endpoint addresses, @@ -189,7 +189,7 @@ along with its IP. {{< note >}} Because A or AAAA records are not created for Pod names, `hostname` is required for the Pod's A or AAAA record to be created. A Pod with no `hostname` but with `subdomain` will only create the -A or AAAA record for the headless service (`default-subdomain.my-namespace.svc.cluster-domain.example`), +A or AAAA record for the headless Service (`default-subdomain.my-namespace.svc.cluster-domain.example`), pointing to the Pod's IP address. Also, Pod needs to become ready in order to have a record unless `publishNotReadyAddresses=True` is set on the Service. {{< /note >}} @@ -205,17 +205,17 @@ When you set `setHostnameAsFQDN: true` in the Pod spec, the kubelet writes the P {{< note >}} In Linux, the hostname field of the kernel (the `nodename` field of `struct utsname`) is limited to 64 characters. -If a Pod enables this feature and its FQDN is longer than 64 character, it will fail to start. The Pod will remain in `Pending` status (`ContainerCreating` as seen by `kubectl`) generating error events, such as Failed to construct FQDN from pod hostname and cluster domain, FQDN `long-FQDN` is too long (64 characters is the max, 70 characters requested). One way of improving user experience for this scenario is to create an [admission webhook controller](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) to control FQDN size when users create top level objects, for example, Deployment. +If a Pod enables this feature and its FQDN is longer than 64 character, it will fail to start. The Pod will remain in `Pending` status (`ContainerCreating` as seen by `kubectl`) generating error events, such as Failed to construct FQDN from Pod hostname and cluster domain, FQDN `long-FQDN` is too long (64 characters is the max, 70 characters requested). One way of improving user experience for this scenario is to create an [admission webhook controller](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) to control FQDN size when users create top level objects, for example, Deployment. {{< /note >}} ### Pod's DNS Policy -DNS policies can be set on a per-pod basis. Currently Kubernetes supports the -following pod-specific DNS policies. These policies are specified in the +DNS policies can be set on a per-Pod basis. 
Currently Kubernetes supports the +following Pod-specific DNS policies. These policies are specified in the `dnsPolicy` field of a Pod Spec. - "`Default`": The Pod inherits the name resolution configuration from the node - that the pods run on. + that the Pods run on. See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers) for more details. - "`ClusterFirst`": Any DNS query that does not match the configured cluster @@ -226,6 +226,7 @@ following pod-specific DNS policies. These policies are specified in the for details on how DNS queries are handled in those cases. - "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should explicitly set its DNS policy "`ClusterFirstWithHostNet`". + - Note: This is not supported on Windows. See [below](#dns-windows) for details - "`None`": It allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided using the `dnsConfig` field in the Pod Spec. @@ -306,7 +307,7 @@ For IPv6 setup, search path and name server should be setup like this: kubectl exec -it dns-example -- cat /etc/resolv.conf ``` The output is similar to this: -```shell +``` nameserver fd00:79:30::a search default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example options ndots:5 @@ -323,8 +324,25 @@ If the feature gate `ExpandedDNSConfig` is enabled for the kube-apiserver and the kubelet, it is allowed for Kubernetes to have at most 32 search domains and a list of search domains of up to 2048 characters. -## {{% heading "whatsnext" %}} +## DNS resolution on Windows nodes {#dns-windows} + +- ClusterFirstWithHostNet is not supported for Pods that run on Windows nodes. + Windows treats all names with a `.` as a FQDN and skips FQDN resolution. +- On Windows, there are multiple DNS resolvers that can be used. As these come with + slightly different behaviors, using the + [`Resolve-DNSName`](https://docs.microsoft.com/powershell/module/dnsclient/resolve-dnsname) + powershell cmdlet for name query resolutions is recommended. +- On Linux, you have a DNS suffix list, which is used after resolution of a name as fully + qualified has failed. + On Windows, you can only have 1 DNS suffix, which is the DNS suffix associated with that + Pod's namespace (example: `mydns.svc.cluster.local`). Windows can resolve FQDNs, Services, + or network name which can be resolved with this single suffix. For example, a Pod spawned + in the `default` namespace, will have the DNS suffix `default.svc.cluster.local`. + Inside a Windows Pod, you can resolve both `kubernetes.default.svc.cluster.local` + and `kubernetes`, but not the partially qualified names (`kubernetes.default` or + `kubernetes.default.svc`). 
+## {{% heading "whatsnext" %}} For guidance on administering DNS configurations, check [Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/) diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md index a120df4392784..921d69e8fbf94 100644 --- a/content/en/docs/concepts/services-networking/dual-stack.md +++ b/content/en/docs/concepts/services-networking/dual-stack.md @@ -1,16 +1,15 @@ --- -reviewers: -- lachie83 -- khenidak -- aramase -- bridgetkromhout title: IPv4/IPv6 dual-stack feature: title: IPv4/IPv6 dual-stack description: > Allocation of IPv4 and IPv6 addresses to Pods and Services - content_type: concept +reviewers: + - lachie83 + - khenidak + - aramase + - bridgetkromhout weight: 70 --- @@ -18,11 +17,11 @@ weight: 70 {{< feature-state for_k8s_version="v1.23" state="stable" >}} -IPv4/IPv6 dual-stack networking enables the allocation of both IPv4 and IPv6 addresses to {{< glossary_tooltip text="Pods" term_id="pod" >}} and {{< glossary_tooltip text="Services" term_id="service" >}}. - -IPv4/IPv6 dual-stack networking is enabled by default for your Kubernetes cluster starting in 1.21, allowing the simultaneous assignment of both IPv4 and IPv6 addresses. - +IPv4/IPv6 dual-stack networking enables the allocation of both IPv4 and IPv6 addresses to +{{< glossary_tooltip text="Pods" term_id="pod" >}} and {{< glossary_tooltip text="Services" term_id="service" >}}. +IPv4/IPv6 dual-stack networking is enabled by default for your Kubernetes cluster starting in +1.21, allowing the simultaneous assignment of both IPv4 and IPv6 addresses. @@ -30,68 +29,78 @@ IPv4/IPv6 dual-stack networking is enabled by default for your Kubernetes cluste IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features: - * Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod) - * IPv4 and IPv6 enabled Services - * Pod off-cluster egress routing (eg. the Internet) via both IPv4 and IPv6 interfaces +* Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod) +* IPv4 and IPv6 enabled Services +* Pod off-cluster egress routing (eg. the Internet) via both IPv4 and IPv6 interfaces ## Prerequisites The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters: - * Kubernetes 1.20 or later - For information about using dual-stack services with earlier - Kubernetes versions, refer to the documentation for that version - of Kubernetes. - * Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces) - * A [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports dual-stack networking. +* Kubernetes 1.20 or later + + For information about using dual-stack services with earlier + Kubernetes versions, refer to the documentation for that version + of Kubernetes. + +* Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide + Kubernetes nodes with routable IPv4/IPv6 network interfaces) +* A [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that + supports dual-stack networking. 
## Configure IPv4/IPv6 dual-stack To configure IPv4/IPv6 dual-stack, set dual-stack cluster network assignments: - * kube-apiserver: - * `--service-cluster-ip-range=,` - * kube-controller-manager: - * `--cluster-cidr=,` - * `--service-cluster-ip-range=,` - * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` defaults to /24 for IPv4 and /64 for IPv6 - * kube-proxy: - * `--cluster-cidr=,` - * kubelet: - * when there is no `--cloud-provider` the administrator can pass a comma-separated pair - of IP addresses via `--node-ip` to manually configure dual-stack `.status.addresses` - for that Node. - If a Pod runs on that node in HostNetwork mode, the Pod reports these IP addresses in its - `.status.podIPs` field. - All `podIPs` in a node match the IP family preference defined by the - `.status.addresses` field for that Node. +* kube-apiserver: + * `--service-cluster-ip-range=,` +* kube-controller-manager: + * `--cluster-cidr=,` + * `--service-cluster-ip-range=,` + * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` defaults to /24 for IPv4 and /64 for IPv6 +* kube-proxy: + * `--cluster-cidr=,` +* kubelet: + * when there is no `--cloud-provider` the administrator can pass a comma-separated pair of IP + addresses via `--node-ip` to manually configure dual-stack `.status.addresses` for that Node. + If a Pod runs on that node in HostNetwork mode, the Pod reports these IP addresses in its + `.status.podIPs` field. + All `podIPs` in a node match the IP family preference defined by the `.status.addresses` + field for that Node. {{< note >}} An example of an IPv4 CIDR: `10.244.0.0/16` (though you would supply your own address range) -An example of an IPv6 CIDR: `fdXY:IJKL:MNOP:15::/64` (this shows the format but is not a valid address - see [RFC 4193](https://tools.ietf.org/html/rfc4193)) - +An example of an IPv6 CIDR: `fdXY:IJKL:MNOP:15::/64` (this shows the format but is not a valid +address - see [RFC 4193](https://tools.ietf.org/html/rfc4193)) {{< /note >}} ## Services You can create {{< glossary_tooltip text="Services" term_id="service" >}} which can use IPv4, IPv6, or both. -The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver). +The address family of a Service defaults to the address family of the first service cluster IP +range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver). When you define a Service you can optionally configure it as dual stack. To specify the behavior you want, you set the `.spec.ipFamilyPolicy` field to one of the following values: -* `SingleStack`: Single-stack service. The control plane allocates a cluster IP for the Service, using the first configured service cluster IP range. +* `SingleStack`: Single-stack service. The control plane allocates a cluster IP for the Service, + using the first configured service cluster IP range. * `PreferDualStack`: * Allocates IPv4 and IPv6 cluster IPs for the Service. * `RequireDualStack`: Allocates Service `.spec.ClusterIPs` from both IPv4 and IPv6 address ranges. - * Selects the `.spec.ClusterIP` from the list of `.spec.ClusterIPs` based on the address family of the first element in the `.spec.ipFamilies` array. + * Selects the `.spec.ClusterIP` from the list of `.spec.ClusterIPs` based on the address family + of the first element in the `.spec.ipFamilies` array. 
-If you would like to define which IP family to use for single stack or define the order of IP families for dual-stack, you can choose the address families by setting an optional field, `.spec.ipFamilies`, on the Service. +If you would like to define which IP family to use for single stack or define the order of IP +families for dual-stack, you can choose the address families by setting an optional field, +`.spec.ipFamilies`, on the Service. {{< note >}} -The `.spec.ipFamilies` field is immutable because the `.spec.ClusterIP` cannot be reallocated on a Service that already exists. If you want to change `.spec.ipFamilies`, delete and recreate the Service. +The `.spec.ipFamilies` field is immutable because the `.spec.ClusterIP` cannot be reallocated on a +Service that already exists. If you want to change `.spec.ipFamilies`, delete and recreate the +Service. {{< /note >}} You can set `.spec.ipFamilies` to any of the following array values: @@ -109,139 +118,197 @@ These examples demonstrate the behavior of various dual-stack Service configurat #### Dual-stack options on new Services -1. This Service specification does not explicitly define `.spec.ipFamilyPolicy`. When you create this Service, Kubernetes assigns a cluster IP for the Service from the first configured `service-cluster-ip-range` and sets the `.spec.ipFamilyPolicy` to `SingleStack`. ([Services without selectors](/docs/concepts/services-networking/service/#services-without-selectors) and [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors will behave in this same way.) - -{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} - -1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the service. The control plane updates the `.spec` for the Service to record the IP address assignments. The field `.spec.ClusterIPs` is the primary field, and contains both assigned IP addresses; `.spec.ClusterIP` is a secondary field with its value calculated from `.spec.ClusterIPs`. +1. This Service specification does not explicitly define `.spec.ipFamilyPolicy`. When you create + this Service, Kubernetes assigns a cluster IP for the Service from the first configured + `service-cluster-ip-range` and sets the `.spec.ipFamilyPolicy` to `SingleStack`. ([Services + without selectors](/docs/concepts/services-networking/service/#services-without-selectors) and + [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors + will behave in this same way.) + + {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} + +1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When + you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 + addresses for the service. The control plane updates the `.spec` for the Service to record the IP + address assignments. The field `.spec.ClusterIPs` is the primary field, and contains both assigned + IP addresses; `.spec.ClusterIP` is a secondary field with its value calculated from + `.spec.ClusterIPs`. - * For the `.spec.ClusterIP` field, the control plane records the IP address that is from the same address family as the first service cluster IP range. - * On a single-stack cluster, the `.spec.ClusterIPs` and `.spec.ClusterIP` fields both only list one address. 
- * On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy` behaves the same as `PreferDualStack`. + * For the `.spec.ClusterIP` field, the control plane records the IP address that is from the + same address family as the first service cluster IP range. + * On a single-stack cluster, the `.spec.ClusterIPs` and `.spec.ClusterIP` fields both only list + one address. + * On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy` + behaves the same as `PreferDualStack`. -{{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}} + {{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}} -1. This Service specification explicitly defines `IPv6` and `IPv4` in `.spec.ipFamilies` as well as defining `PreferDualStack` in `.spec.ipFamilyPolicy`. When Kubernetes assigns an IPv6 and IPv4 address in `.spec.ClusterIPs`, `.spec.ClusterIP` is set to the IPv6 address because that is the first element in the `.spec.ClusterIPs` array, overriding the default. +1. This Service specification explicitly defines `IPv6` and `IPv4` in `.spec.ipFamilies` as well + as defining `PreferDualStack` in `.spec.ipFamilyPolicy`. When Kubernetes assigns an IPv6 and + IPv4 address in `.spec.ClusterIPs`, `.spec.ClusterIP` is set to the IPv6 address because that is + the first element in the `.spec.ClusterIPs` array, overriding the default. -{{< codenew file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" >}} + {{< codenew file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" >}} #### Dual-stack defaults on existing Services -These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.) +These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster +where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable +dual-stack.) -1. When dual-stack is enabled on a cluster, existing Services (whether `IPv4` or `IPv6`) are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the existing Service. The existing Service cluster IP will be stored in `.spec.ClusterIPs`. +1. When dual-stack is enabled on a cluster, existing Services (whether `IPv4` or `IPv6`) are + configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set + `.spec.ipFamilies` to the address family of the existing Service. The existing Service cluster IP + will be stored in `.spec.ClusterIPs`. -{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} + {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} You can validate this behavior by using kubectl to inspect an existing service. -```shell -kubectl get svc my-service -o yaml -``` - -```yaml -apiVersion: v1 -kind: Service -metadata: - labels: - app: MyApp - name: my-service -spec: - clusterIP: 10.0.197.123 - clusterIPs: - - 10.0.197.123 - ipFamilies: - - IPv4 - ipFamilyPolicy: SingleStack - ports: - - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: MyApp - type: ClusterIP -status: - loadBalancer: {} -``` - -1. 
When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`. - -{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} + ```shell + kubectl get svc my-service -o yaml + ``` + + ```yaml + apiVersion: v1 + kind: Service + metadata: + labels: + app: MyApp + name: my-service + spec: + clusterIP: 10.0.197.123 + clusterIPs: + - 10.0.197.123 + ipFamilies: + - IPv4 + ipFamilyPolicy: SingleStack + ports: + - port: 80 + protocol: TCP + targetPort: 80 + selector: + app: MyApp + type: ClusterIP + status: + loadBalancer: {} + ``` + +1. When dual-stack is enabled on a cluster, existing + [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are + configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set + `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the + `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to + `None`. + + {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} You can validate this behavior by using kubectl to inspect an existing headless service with selectors. -```shell -kubectl get svc my-service -o yaml -``` - -```yaml -apiVersion: v1 -kind: Service -metadata: - labels: - app: MyApp - name: my-service -spec: - clusterIP: None - clusterIPs: - - None - ipFamilies: - - IPv4 - ipFamilyPolicy: SingleStack - ports: - - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: MyApp -``` + ```shell + kubectl get svc my-service -o yaml + ``` + + ```yaml + apiVersion: v1 + kind: Service + metadata: + labels: + app: MyApp + name: my-service + spec: + clusterIP: None + clusterIPs: + - None + ipFamilies: + - IPv4 + ipFamilyPolicy: SingleStack + ports: + - port: 80 + protocol: TCP + targetPort: 80 + selector: + app: MyApp + ``` #### Switching Services between single-stack and dual-stack Services can be changed from single-stack to dual-stack and from dual-stack to single-stack. -1. To change a Service from single-stack to dual-stack, change `.spec.ipFamilyPolicy` from `SingleStack` to `PreferDualStack` or `RequireDualStack` as desired. When you change this Service from single-stack to dual-stack, Kubernetes assigns the missing address family so that the Service now has IPv4 and IPv6 addresses. +1. To change a Service from single-stack to dual-stack, change `.spec.ipFamilyPolicy` from + `SingleStack` to `PreferDualStack` or `RequireDualStack` as desired. When you change this + Service from single-stack to dual-stack, Kubernetes assigns the missing address family so that the + Service now has IPv4 and IPv6 addresses. Edit the Service specification updating the `.spec.ipFamilyPolicy` from `SingleStack` to `PreferDualStack`. -Before: -```yaml -spec: - ipFamilyPolicy: SingleStack -``` -After: -```yaml -spec: - ipFamilyPolicy: PreferDualStack -``` + Before: + + ```yaml + spec: + ipFamilyPolicy: SingleStack + ``` + + After: -1. To change a Service from dual-stack to single-stack, change `.spec.ipFamilyPolicy` from `PreferDualStack` or `RequireDualStack` to `SingleStack`. 
When you change this Service from dual-stack to single-stack, Kubernetes retains only the first element in the `.spec.ClusterIPs` array, and sets `.spec.ClusterIP` to that IP address and sets `.spec.ipFamilies` to the address family of `.spec.ClusterIPs`. + ```yaml + spec: + ipFamilyPolicy: PreferDualStack + ``` + +1. To change a Service from dual-stack to single-stack, change `.spec.ipFamilyPolicy` from + `PreferDualStack` or `RequireDualStack` to `SingleStack`. When you change this Service from + dual-stack to single-stack, Kubernetes retains only the first element in the `.spec.ClusterIPs` + array, and sets `.spec.ClusterIP` to that IP address and sets `.spec.ipFamilies` to the address + family of `.spec.ClusterIPs`. ### Headless Services without selector -For [Headless Services without selectors](/docs/concepts/services-networking/service/#without-selectors) and without `.spec.ipFamilyPolicy` explicitly set, the `.spec.ipFamilyPolicy` field defaults to `RequireDualStack`. +For [Headless Services without selectors](/docs/concepts/services-networking/service/#without-selectors) +and without `.spec.ipFamilyPolicy` explicitly set, the `.spec.ipFamilyPolicy` field defaults to +`RequireDualStack`. ### Service type LoadBalancer To provision a dual-stack load balancer for your Service: - * Set the `.spec.type` field to `LoadBalancer` - * Set `.spec.ipFamilyPolicy` field to `PreferDualStack` or `RequireDualStack` + +* Set the `.spec.type` field to `LoadBalancer` +* Set `.spec.ipFamilyPolicy` field to `PreferDualStack` or `RequireDualStack` {{< note >}} -To use a dual-stack `LoadBalancer` type Service, your cloud provider must support IPv4 and IPv6 load balancers. +To use a dual-stack `LoadBalancer` type Service, your cloud provider must support IPv4 and IPv6 +load balancers. {{< /note >}} ## Egress traffic -If you want to enable egress traffic in order to reach off-cluster destinations (eg. the public Internet) from a Pod that uses non-publicly routable IPv6 addresses, you need to enable the Pod to use a publicly routed IPv6 address via a mechanism such as transparent proxying or IP masquerading. The [ip-masq-agent](https://github.com/kubernetes-sigs/ip-masq-agent) project supports IP masquerading on dual-stack clusters. +If you want to enable egress traffic in order to reach off-cluster destinations (eg. the public +Internet) from a Pod that uses non-publicly routable IPv6 addresses, you need to enable the Pod to +use a publicly routed IPv6 address via a mechanism such as transparent proxying or IP +masquerading. The [ip-masq-agent](https://github.com/kubernetes-sigs/ip-masq-agent) project +supports IP masquerading on dual-stack clusters. {{< note >}} Ensure your {{< glossary_tooltip text="CNI" term_id="cni" >}} provider supports IPv6. {{< /note >}} -## {{% heading "whatsnext" %}} +## Windows support +Kubernetes on Windows does not support single-stack "IPv6-only" networking. However, +dual-stack IPv4/IPv6 networking for pods and nodes with single-family services +is supported. + +You can use IPv4/IPv6 dual-stack networking with `l2bridge` networks. + +{{< note >}} +Overlay (VXLAN) networks on Windows **do not** support dual-stack networking. +{{< /note >}} + +You can read more about the different network modes for Windows within the +[Networking on Windows](/docs/concepts/services-networking/windows-networking#network-modes) topic. 
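As a concrete illustration of the Service type LoadBalancer settings described above, here is a minimal sketch of a dual-stack `LoadBalancer` Service. The name, selector, and port are hypothetical, and whether both address families are actually provisioned still depends on your cloud provider:

```yaml
# Illustrative sketch only: metadata and selector values are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer              # ask the cloud provider for an external load balancer
  ipFamilyPolicy: PreferDualStack # request both IPv4 and IPv6 where the cluster supports it
  selector:
    app: MyApp
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
```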
+ +## {{% heading "whatsnext" %}} * [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking -* [Enable dual-stack networking using kubeadm -](/docs/setup/production-environment/tools/kubeadm/dual-stack-support/) +* [Enable dual-stack networking using kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support/) + diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index da3ad5c6b4d13..aebcc800c62b0 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -30,23 +30,8 @@ For clarity, this guide defines the following terms: Traffic routing is controlled by rules defined on the Ingress resource. Here is a simple example where an Ingress sends all its traffic to one Service: -{{< mermaid >}} -graph LR; - client([client])-. Ingress-managed
    load balancer .->ingress[Ingress]; - ingress-->|routing rule|service[Service]; - subgraph cluster - ingress; - service-->pod1[Pod]; - service-->pod2[Pod]; - end - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class ingress,service,pod1,pod2 k8s; - class client plain; - class cluster cluster; -{{}} +{{< figure src="/docs/images/ingress.svg" alt="ingress-diagram" class="diagram-large" caption="Figure. Ingress" link="https://mermaid.live/edit#pako:eNqNkstuwyAQRX8F4U0r2VHqPlSRKqt0UamLqlnaWWAYJygYLB59KMm_Fxcix-qmGwbuXA7DwAEzzQETXKutof0Ovb4vaoUQkwKUu6pi3FwXM_QSHGBt0VFFt8DRU2OWSGrKUUMlVQwMmhVLEV1Vcm9-aUksiuXRaO_CEhkv4WjBfAgG1TrGaLa-iaUw6a0DcwGI-WgOsF7zm-pN881fvRx1UDzeiFq7ghb1kgqFWiElyTjnuXVG74FkbdumefEpuNuRu_4rZ1pqQ7L5fL6YQPaPNiFuywcG9_-ihNyUkm6YSONWkjVNM8WUIyaeOJLO3clTB_KhL8NQDmVe-OJjxgZM5FhFiiFTK5zjDkxHBQ9_4zB4a-x20EGNSZhyaKmXrg7f5hSsvufUwTMXThtMWiot5Jh6p9ffimHijIezaSVoeN0uiqcfMJvf7w" >}} An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. @@ -398,25 +383,8 @@ A fanout configuration routes traffic from a single IP address to more than one based on the HTTP URI being requested. An Ingress allows you to keep the number of load balancers down to a minimum. For example, a setup like: -{{< mermaid >}} -graph LR; - client([client])-. Ingress-managed
    load balancer .->ingress[Ingress, 178.91.123.132]; - ingress-->|/foo|service1[Service service1:4200]; - ingress-->|/bar|service2[Service service2:8080]; - subgraph cluster - ingress; - service1-->pod1[Pod]; - service1-->pod2[Pod]; - service2-->pod3[Pod]; - service2-->pod4[Pod]; - end - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s; - class client plain; - class cluster cluster; -{{}} +{{< figure src="/docs/images/ingressFanOut.svg" alt="ingress-fanout-diagram" class="diagram-large" caption="Figure. Ingress Fan Out" link="https://mermaid.live/edit#pako:eNqNUslOwzAQ_RXLvYCUhMQpUFzUUzkgcUBwbHpw4klr4diR7bCo8O8k2FFbFomLPZq3jP00O1xpDpjijWHtFt09zAuFUCUFKHey8vf6NE7QrdoYsDZumGIb4Oi6NAskNeOoZJKpCgxK4oXwrFVgRyi7nCVXWZKRPMlysv5yD6Q4Xryf1Vq_WzDPooJs9egLNDbolKTpT03JzKgh3zWEztJZ0Niu9L-qZGcdmAMfj4cxvWmreba613z9C0B-AMQD-V_AdA-A4j5QZu0SatRKJhSqhZR0wjmPrDP6CeikrutQxy-Cuy2dtq9RpaU2dJKm6fzI5Glmg0VOLio4_5dLjx27hFSC015KJ2VZHtuQvY2fuHcaE43G0MaCREOow_FV5cMxHZ5-oPX75UM5avuXhXuOI9yAaZjg_aLuBl6B3RYaKDDtSw4166QrcKE-emrXcubghgunDaY1kxYizDqnH99UhakzHYykpWD9hjS--fEJoIELqQ" >}} + would require an Ingress such as: @@ -460,25 +428,7 @@ you are using, you may need to create a default-http-backend Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address. -{{< mermaid >}} -graph LR; - client([client])-. Ingress-managed
    load balancer .->ingress[Ingress, 178.91.123.132]; - ingress-->|Host: foo.bar.com|service1[Service service1:80]; - ingress-->|Host: bar.foo.com|service2[Service service2:80]; - subgraph cluster - ingress; - service1-->pod1[Pod]; - service1-->pod2[Pod]; - service2-->pod3[Pod]; - service2-->pod4[Pod]; - end - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s; - class client plain; - class cluster cluster; -{{}} +{{< figure src="/docs/images/ingressNameBased.svg" alt="ingress-namebase-diagram" class="diagram-large" caption="Figure. Ingress Name Based Virtual hosting" link="https://mermaid.live/edit#pako:eNqNkl9PwyAUxb8KYS-atM1Kp05m9qSJJj4Y97jugcLtRqTQAPVPdN_dVlq3qUt8gZt7zvkBN7xjbgRgiteW1Rt0_zjLNUJcSdD-ZBn21WmcoDu9tuBcXDHN1iDQVWHnSBkmUMEU0xwsSuK5DK5l745QejFNLtMkJVmSZmT1Re9NcTz_uDXOU1QakxTMJtxUHw7ss-SQLhehQEODTsdH4l20Q-zFyc84-Y67pghv5apxHuweMuj9eS2_NiJdPhix-kMgvwQShOyYMNkJoEUYM3PuGkpUKyY1KqVSdCSEiJy35gnoqCzLvo5fpPAbOqlfI26UsXQ0Ho9nB5CnqesRGTnncPYvSqsdUvqp9KRdlI6KojjEkB0mnLgjDRONhqENBYm6oXbLV5V1y6S7-l42_LowlIN2uFm_twqOcAW2YlK0H_i9c-bYb6CCHNO2FFCyRvkc53rbWptaMA83QnpjMS2ZchBh1nizeNMcU28bGEzXkrV_pArN7Sc0rBTu" >}} The following Ingress tells the backing load balancer to route requests based on diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md index dc4e2b76ca93a..9a97abfb5b1bf 100644 --- a/content/en/docs/concepts/services-networking/network-policies.md +++ b/content/en/docs/concepts/services-networking/network-policies.md @@ -54,7 +54,7 @@ POSTing this to the API server for your cluster will have no effect unless your __Mandatory Fields__: As with all other Kubernetes config, a NetworkPolicy needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see -[Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), +[Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), and [Object Management](/docs/concepts/overview/working-with-objects/object-management). __spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace. @@ -258,7 +258,7 @@ standardized label to target a specific namespace. ## What you can't do with network policies (at least, not yet) -As of Kubernetes {{< skew latestVersion >}}, the following functionality does not exist in the NetworkPolicy API, but you might be able to implement workarounds using Operating System components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress controllers, Service Mesh implementations) or admission controllers. In case you are new to network security in Kubernetes, its worth noting that the following User Stories cannot (yet) be implemented using the NetworkPolicy API. 
+As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the NetworkPolicy API, but you might be able to implement workarounds using Operating System components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress controllers, Service Mesh implementations) or admission controllers. In case you are new to network security in Kubernetes, its worth noting that the following User Stories cannot (yet) be implemented using the NetworkPolicy API. - Forcing internal cluster traffic to go through a common gateway (this might be best served with a service mesh or other proxy). - Anything TLS related (use a service mesh or ingress controller for this). diff --git a/content/en/docs/concepts/services-networking/service-traffic-policy.md b/content/en/docs/concepts/services-networking/service-traffic-policy.md index b7a367a4b7e19..b9abe34b3fc73 100644 --- a/content/en/docs/concepts/services-networking/service-traffic-policy.md +++ b/content/en/docs/concepts/services-networking/service-traffic-policy.md @@ -60,12 +60,6 @@ considered. When the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `ServiceInternalTrafficPolicy` is enabled, `spec.internalTrafficPolicy` defaults to "Cluster". -## Constraints - -* Service Internal Traffic Policy is not used when `externalTrafficPolicy` is set - to `Local` on a Service. It is possible to use both features in the same cluster - on different Services, just not on the same Service. - ## {{% heading "whatsnext" %}} * Read about [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints) diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 679f96b139d69..ad5af427ec3ed 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -122,7 +122,7 @@ metadata: spec: containers: - name: nginx - image: nginx:11.14.2 + image: nginx:stable ports: - containerPort: 80 name: http-web-svc @@ -192,6 +192,7 @@ where it's running, by adding an Endpoints object manually: apiVersion: v1 kind: Endpoints metadata: + # the name here should match the name of the Service name: my-service subsets: - addresses: @@ -203,6 +204,10 @@ subsets: The name of the Endpoints object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +When you create an [Endpoints](/docs/reference/kubernetes-api/service-resources/endpoints-v1/) +object for a Service, you set the name of the new object to be the same as that +of the Service. + {{< note >}} The endpoint IPs _must not_ be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6). @@ -394,6 +399,10 @@ You can also set the maximum session sticky time by setting `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` appropriately. (the default value is 10800, which works out to be 3 hours). +{{< note >}} +On Windows, setting the maximum session sticky time for Services is not supported. +{{< /note >}} + ## Multi-Port Services For some Services, you need to expose more than one port. @@ -447,7 +456,7 @@ server will return a 422 HTTP status code to indicate that there's a problem. You can set the `spec.externalTrafficPolicy` field to control how traffic from external sources is routed. Valid values are `Cluster` and `Local`. 
Set the field to `Cluster` to route external traffic to all ready endpoints -and `Local` to only route to ready node-local endpoints. If the traffic policy is `Local` and there are are no node-local +and `Local` to only route to ready node-local endpoints. If the traffic policy is `Local` and there are no node-local endpoints, the kube-proxy does not forward any traffic for the relevant Service. {{< note >}} @@ -853,6 +862,17 @@ metadata: [...] ``` +{{% /tab %}} +{{% tab name="OCI" %}} + +```yaml +[...] +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/oci-load-balancer-internal: true +[...] +``` {{% /tab %}} {{< /tabs >}} diff --git a/content/en/docs/concepts/services-networking/windows-networking.md b/content/en/docs/concepts/services-networking/windows-networking.md new file mode 100644 index 0000000000000..6aa79f0a0369a --- /dev/null +++ b/content/en/docs/concepts/services-networking/windows-networking.md @@ -0,0 +1,164 @@ +--- +reviewers: +- aravindhp +- jayunit100 +- jsturtevant +- marosset +title: Networking on Windows +content_type: concept +weight: 75 +--- + + + +Kubernetes supports running nodes on either Linux or Windows. You can mix both kinds of node +within a single cluster. +This page provides an overview to networking specific to the Windows operating system. + + +## Container networking on Windows {#networking} + +Networking for Windows containers is exposed through +[CNI plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). +Windows containers function similarly to virtual machines in regards to +networking. Each container has a virtual network adapter (vNIC) which is connected +to a Hyper-V virtual switch (vSwitch). The Host Networking Service (HNS) and the +Host Compute Service (HCS) work together to create containers and attach container +vNICs to networks. HCS is responsible for the management of containers whereas HNS +is responsible for the management of networking resources such as: + +* Virtual networks (including creation of vSwitches) +* Endpoints / vNICs +* Namespaces +* Policies including packet encapsulations, load-balancing rules, ACLs, and NAT rules. + +The Windows HNS and vSwitch implement namespacing and can +create virtual NICs as needed for a pod or container. However, many configurations such +as DNS, routes, and metrics are stored in the Windows registry database rather than as +files inside `/etc`, which is how Linux stores those configurations. The Windows registry for the container +is separate from that of the host, so concepts like mapping `/etc/resolv.conf` from +the host into a container don't have the same effect they would on Linux. These must +be configured using Windows APIs run in the context of that container. Therefore +CNI implementations need to call the HNS instead of relying on file mappings to pass +network details into the pod or container. + +## Network modes + +Windows supports five different networking drivers/modes: L2bridge, L2tunnel, +Overlay (Beta), Transparent, and NAT. In a heterogeneous cluster with Windows and Linux +worker nodes, you need to select a networking solution that is compatible on both +Windows and Linux. 
The following table lists the out-of-tree plugins are supported on Windows, +with recommendations on when to use each CNI: + +| Network Driver | Description | Container Packet Modifications | Network Plugins | Network Plugin Characteristics | +| -------------- | ----------- | ------------------------------ | --------------- | ------------------------------ | +| L2bridge | Containers are attached to an external vSwitch. Containers are attached to the underlay network, although the physical network doesn't need to learn the container MACs because they are rewritten on ingress/egress. | MAC is rewritten to host MAC, IP may be rewritten to host IP using HNS OutboundNAT policy. | [win-bridge](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-bridge), [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md), Flannel host-gateway uses win-bridge | win-bridge uses L2bridge network mode, connects containers to the underlay of hosts, offering best performance. Requires user-defined routes (UDR) for inter-node connectivity. | +| L2Tunnel | This is a special case of l2bridge, but only used on Azure. All packets are sent to the virtualization host where SDN policy is applied. | MAC rewritten, IP visible on the underlay network | [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) | Azure-CNI allows integration of containers with Azure vNET, and allows them to leverage the set of capabilities that [Azure Virtual Network provides](https://azure.microsoft.com/en-us/services/virtual-network/). For example, securely connect to Azure services or use Azure NSGs. See [azure-cni for some examples](https://docs.microsoft.com/azure/aks/concepts-network#azure-cni-advanced-networking) | +| Overlay | Containers are given a vNIC connected to an external vSwitch. Each overlay network gets its own IP subnet, defined by a custom IP prefix.The overlay network driver uses VXLAN encapsulation. | Encapsulated with an outer header. | [win-overlay](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-overlay), Flannel VXLAN (uses win-overlay) | win-overlay should be used when virtual container networks are desired to be isolated from underlay of hosts (e.g. for security reasons). Allows for IPs to be re-used for different overlay networks (which have different VNID tags) if you are restricted on IPs in your datacenter. This option requires [KB4489899](https://support.microsoft.com/help/4489899) on Windows Server 2019. | +| Transparent (special use case for [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)) | Requires an external vSwitch. Containers are attached to an external vSwitch which enables intra-pod communication via logical networks (logical switches and routers). | Packet is encapsulated either via [GENEVE](https://datatracker.ietf.org/doc/draft-gross-geneve/) or [STT](https://datatracker.ietf.org/doc/draft-davie-stt/) tunneling to reach pods which are not on the same host.
    Packets are forwarded or dropped via the tunnel metadata information supplied by the ovn network controller.
    NAT is done for north-south communication. | [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) | [Deploy via ansible](https://github.com/openvswitch/ovn-kubernetes/tree/master/contrib). Distributed ACLs can be applied via Kubernetes policies. IPAM support. Load-balancing can be achieved without kube-proxy. NATing is done without using iptables/netsh. | +| NAT (*not used in Kubernetes*) | Containers are given a vNIC connected to an internal vSwitch. DNS/DHCP is provided using an internal component called [WinNAT](https://techcommunity.microsoft.com/t5/virtualization/windows-nat-winnat-capabilities-and-limitations/ba-p/382303) | MAC and IP is rewritten to host MAC/IP. | [nat](https://github.com/Microsoft/windows-container-networking/tree/master/plugins/nat) | Included here for completeness | + +As outlined above, the [Flannel](https://github.com/coreos/flannel) +[CNI plugin](https://github.com/flannel-io/cni-plugin) +is also [supported](https://github.com/flannel-io/cni-plugin#windows-support-experimental) on Windows via the +[VXLAN network backend](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) (**Beta support** ; delegates to win-overlay) +and [host-gateway network backend](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) (stable support; delegates to win-bridge). + +This plugin supports delegating to one of the reference CNI plugins (win-overlay, +win-bridge), to work in conjunction with Flannel daemon on Windows (Flanneld) for +automatic node subnet lease assignment and HNS network creation. This plugin reads +in its own configuration file (cni.conf), and aggregates it with the environment +variables from the FlannelD generated subnet.env file. It then delegates to one of +the reference CNI plugins for network plumbing, and sends the correct configuration +containing the node-assigned subnet to the IPAM plugin (for example: `host-local`). + +For Node, Pod, and Service objects, the following network flows are supported for +TCP/UDP traffic: + +* Pod → Pod (IP) +* Pod → Pod (Name) +* Pod → Service (Cluster IP) +* Pod → Service (PQDN, but only if there are no ".") +* Pod → Service (FQDN) +* Pod → external (IP) +* Pod → external (DNS) +* Node → Pod +* Pod → Node + +## IP address management (IPAM) {#ipam} + +The following IPAM options are supported on Windows: + +* [host-local](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/host-local) +* [azure-vnet-ipam](https://github.com/Azure/azure-container-networking/blob/master/docs/ipam.md) (for azure-cni only) +* [Windows Server IPAM](https://docs.microsoft.com/windows-server/networking/technologies/ipam/ipam-top) (fallback option if no IPAM is set) + +## Load balancing and Services + +A Kubernetes {{< glossary_tooltip text="Service" term_id="service" >}} is an abstraction +that defines a logical set of Pods and a means to access them over a network. +In a cluster that includes Windows nodes, you can use the following types of Service: + +* `NodePort` +* `ClusterIP` +* `LoadBalancer` +* `ExternalName` + +Windows container networking differs in some important ways from Linux networking. +The [Microsoft documentation for Windows Container Networking](https://docs.microsoft.com/en-us/virtualization/windowscontainers/container-networking/architecture) +provides additional details and background. 
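To make the Service types listed above concrete, here is a minimal sketch of a `NodePort` Service fronting Windows pods; the name and the `app: win-webserver` label are hypothetical and must match the labels on your own workload:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: win-webserver        # hypothetical name
spec:
  type: NodePort             # one of the Service types listed above
  selector:
    app: win-webserver       # must match the labels on your Windows Pods
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```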
+ +On Windows, you can use the following settings to configure Services and load +balancing behavior: + +{{< table caption="Windows Service Settings" >}} +| Feature | Description | Minimum Supported Windows OS build | How to enable | +| ------- | ----------- | -------------------------- | ------------- | +| Session affinity | Ensures that connections from a particular client are passed to the same Pod each time. | Windows Server 2022 | Set `service.spec.sessionAffinity` to "ClientIP" | +| Direct Server Return (DSR) | Load balancing mode where the IP address fixups and the LBNAT occurs at the container vSwitch port directly; service traffic arrives with the source IP set as the originating pod IP. | Windows Server 2019 | Set the following flags in kube-proxy: `--feature-gates="WinDSR=true" --enable-dsr=true` | +| Preserve-Destination | Skips DNAT of service traffic, thereby preserving the virtual IP of the target service in packets reaching the backend Pod. Also disables node-node forwarding. | Windows Server, version 1903 | Set `"preserve-destination": "true"` in service annotations and enable DSR in kube-proxy. | +| IPv4/IPv6 dual-stack networking | Native IPv4-to-IPv4 in parallel with IPv6-to-IPv6 communications to, from, and within a cluster | Windows Server 2019 | See [IPv4/IPv6 dual-stack](#ipv4ipv6-dual-stack) | +| Client IP preservation | Ensures that source IP of incoming ingress traffic gets preserved. Also disables node-node forwarding. | Windows Server 2019 | Set `service.spec.externalTrafficPolicy` to "Local" and enable DSR in kube-proxy | +{{< /table >}} + +{{< warning >}} +There are known issue with NodePort Services on overlay networking, if the destination node is running Windows Server 2022. +To avoid the issue entirely, you can configure the service with `externalTrafficPolicy: Local`. + +There are known issues with Pod to Pod connectivity on l2bridge network on Windows Server 2022 with KB5005619 or higher installed. +To workaround the issue and restore Pod to Pod connectivity, you can disable the WinDSR feature in kube-proxy. + +These issues require OS fixes. +Please follow https://github.com/microsoft/Windows-Containers/issues/204 for updates. +{{< /warning >}} + +## Limitations + +The following networking functionality is _not_ supported on Windows nodes: + +* Host networking mode +* Local NodePort access from the node itself (works for other nodes or external clients) +* More than 64 backend pods (or unique destination addresses) for a single Service +* IPv6 communication between Windows pods connected to overlay networks +* Local Traffic Policy in non-DSR mode +* Outbound communication using the ICMP protocol via the `win-overlay`, `win-bridge`, or using the Azure-CNI plugin.\ + Specifically, the Windows data plane ([VFP](https://www.microsoft.com/research/project/azure-virtual-filtering-platform/)) + doesn't support ICMP packet transpositions, and this means: + * ICMP packets directed to destinations within the same network (such as pod to pod communication via ping) + work as expected; + * TCP/UDP packets work as expected; + * ICMP packets directed to pass through a remote network (e.g. pod to external internet communication via ping) + cannot be transposed and thus will not be routed back to their source; + * Since TCP/UDP packets can still be transposed, you can substitute `ping ` with + `curl ` when debugging connectivity with the outside world. 
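To illustrate the last point above: when checking outbound connectivity from a Windows pod, you can issue an HTTP request instead of an ICMP ping. This is only a sketch; the pod name is a placeholder and `curl.exe` must be present in the container image:

```bash
# HEAD request to an external site from inside a Windows pod (placeholder pod name)
kubectl exec <windows-pod-name> -- curl.exe -I https://www.example.com
```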
+ +Other limitations: + +* Windows reference network plugins win-bridge and win-overlay do not implement + [CNI spec](https://github.com/containernetworking/cni/blob/master/SPEC.md) v0.4.0, + due to a missing `CHECK` implementation. +* The Flannel VXLAN CNI plugin has the following limitations on Windows: + * Node-pod connectivity is only possible for local pods with Flannel v0.12.0 (or higher). + * Flannel is restricted to using VNI 4096 and UDP port 4789. See the official + [Flannel VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) + backend docs for more details on these parameters. diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index 16f0b8ed3b354..66aa8f7f8a860 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -540,6 +540,15 @@ In the CLI, the access modes are abbreviated to: * RWX - ReadWriteMany * RWOP - ReadWriteOncePod +{{< note >}} +Kubernetes uses volume access modes to match PersistentVolumeClaims and PersistentVolumes. +In some cases, the volume access modes also constrain where the PersistentVolume can be mounted. +Volume access modes do **not** enforce write protection once the storage has been mounted. +Even if the access modes are specified as ReadWriteOnce, ReadOnlyMany, or ReadWriteMany, they don't set any constraints on the volume. +For example, even if a PersistentVolume is created as ReadOnlyMany, it is no guarantee that it will be read-only. +If the access modes are specified as ReadWriteOncePod, the volume is constrained and can be mounted on only a single Pod. +{{< /note >}} + > __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time. @@ -673,7 +682,7 @@ Claims use [the same convention as volumes](#volume-mode) to indicate the consum ### Resources -Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) applies to both volumes and claims. +Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/design-proposals-archive/scheduling/resources.md) applies to both volumes and claims. ### Selector @@ -1012,7 +1021,7 @@ and need persistent storage, it is recommended that you use the following patter * Learn more about [Creating a PersistentVolume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume). * Learn more about [Creating a PersistentVolumeClaim](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim). -* Read the [Persistent Storage design document](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md). +* Read the [Persistent Storage design document](https://git.k8s.io/design-proposals-archive/storage/persistent-storage.md). 
### API references {#reference} diff --git a/content/en/docs/concepts/storage/projected-volumes.md b/content/en/docs/concepts/storage/projected-volumes.md index ed7c31db1475d..df67132cf599f 100644 --- a/content/en/docs/concepts/storage/projected-volumes.md +++ b/content/en/docs/concepts/storage/projected-volumes.md @@ -26,7 +26,7 @@ Currently, the following types of volume sources can be projected: * [`serviceAccountToken`](#serviceaccounttoken) All sources are required to be in the same namespace as the Pod. For more details, -see the [all-in-one volume](https://github.com/kubernetes/design-proposals-archive/blob/main/node/all-in-one-volume.md) design document. +see the [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md) design document. ### Example configuration with a secret, a downwardAPI, and a configMap {#example-configuration-secret-downwardapi-configmap} diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 53ee88a2e7071..8fda0b2ff3f3e 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -87,7 +87,7 @@ for provisioning PVs. This field must be specified. You are not restricted to specifying the "internal" provisioners listed here (whose names are prefixed with "kubernetes.io" and shipped alongside Kubernetes). You can also run and specify external provisioners, -which are independent programs that follow a [specification](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/volume-provisioning.md) +which are independent programs that follow a [specification](https://git.k8s.io/design-proposals-archive/storage/volume-provisioning.md) defined by Kubernetes. Authors of external provisioners have full discretion over where their code lives, how the provisioner is shipped, how it needs to be run, what volume plugin it uses (including Flex), etc. The repository diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index cc9a9565eb87a..fd994c97fa7fb 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -64,7 +64,9 @@ a different volume. Kubernetes supports several types of volumes. -### awsElasticBlockStore {#awselasticblockstore} +### awsElasticBlockStore (deprecated) {#awselasticblockstore} + +{{< feature-state for_k8s_version="v1.17" state="deprecated" >}} An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS volume](https://aws.amazon.com/ebs/) into your pod. Unlike @@ -115,7 +117,7 @@ spec: fsType: ext4 ``` -If the EBS volume is partitioned, you can supply the optional field `partition: ""` to specify which parition to mount on. +If the EBS volume is partitioned, you can supply the optional field `partition: ""` to specify which partition to mount on. #### AWS EBS CSI migration @@ -135,7 +137,9 @@ beta features must be enabled. To disable the `awsElasticBlockStore` storage plugin from being loaded by the controller manager and the kubelet, set the `InTreePluginAWSUnregister` flag to `true`. -### azureDisk {#azuredisk} +### azureDisk (deprecated) {#azuredisk} + +{{< feature-state for_k8s_version="v1.19" state="deprecated" >}} The `azureDisk` volume type mounts a Microsoft Azure [Data Disk](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers) into a pod. 
@@ -158,7 +162,9 @@ must be installed on the cluster and the `CSIMigration` feature must be enabled. To disable the `azureDisk` storage plugin from being loaded by the controller manager and the kubelet, set the `InTreePluginAzureDiskUnregister` flag to `true`. -### azureFile {#azurefile} +### azureFile (deprecated) {#azurefile} + +{{< feature-state for_k8s_version="v1.21" state="deprecated" >}} The `azureFile` volume type mounts a Microsoft Azure File volume (SMB 2.1 and 3.0) into a pod. @@ -201,7 +207,9 @@ You must have your own Ceph server running with the share exported before you ca See the [CephFS example](https://github.com/kubernetes/examples/tree/master/volumes/cephfs/) for more details. -### cinder +### cinder (deprecated) {#cinder} + +{{< feature-state for_k8s_version="v1.18" state="deprecated" >}} {{< note >}} Kubernetes must be configured with the OpenStack cloud provider. @@ -295,15 +303,17 @@ keyed with `log_level`. ### downwardAPI {#downwardapi} -A `downwardAPI` volume makes downward API data available to applications. -It mounts a directory and writes the requested data in plain text files. +A `downwardAPI` volume makes {{< glossary_tooltip term_id="downward-api" text="downward API" >}} +data available to applications. Within the volume, you can find the exposed +data as read-only files in plain text format. {{< note >}} -A container using the downward API as a [`subPath`](#using-subpath) volume mount will not -receive downward API updates. +A container using the downward API as a [`subPath`](#using-subpath) volume mount does not +receive updates when field values change. {{< /note >}} -See the [downward API example](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) for more details. +See [Expose Pod Information to Containers Through Files](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) +to learn more. ### emptyDir {#emptydir} @@ -390,7 +400,9 @@ You must have your own Flocker installation running before you can use it. See the [Flocker example](https://github.com/kubernetes/examples/tree/master/staging/volumes/flocker) for more details. -### gcePersistentDisk +### gcePersistentDisk (deprecated) {#gcepersistentdisk} + +{{< feature-state for_k8s_version="v1.17" state="deprecated" >}} A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [persistent disk](https://cloud.google.com/compute/docs/disks) (PD) into your Pod. @@ -1146,7 +1158,7 @@ to the [volume plugin FAQ](https://github.com/kubernetes/community/blob/master/s (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads. -Please read the [CSI design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) for more information. +Please read the [CSI design proposal](https://git.k8s.io/design-proposals-archive/storage/container-storage-interface.md) for more information. {{< note >}} Support for CSI spec versions 0.2 and 0.3 are deprecated in Kubernetes @@ -1166,8 +1178,8 @@ CSI driver. 
A `csi` volume can be used in a Pod in three different ways: * through a reference to a [PersistentVolumeClaim](#persistentvolumeclaim) -* with a [generic ephemeral volume](/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volume) -* with a [CSI ephemeral volume](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volume) +* with a [generic ephemeral volume](/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes) +* with a [CSI ephemeral volume](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes) if the driver supports that (beta feature) The following fields are available to storage administrators to configure a CSI @@ -1234,12 +1246,26 @@ You can set up your You can directly configure CSI volumes within the Pod specification. Volumes specified in this way are ephemeral and do not persist across pod restarts. See [Ephemeral -Volumes](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volume) +Volumes](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes) for more information. For more information on how to develop a CSI driver, refer to the [kubernetes-csi documentation](https://kubernetes-csi.github.io/docs/) +#### Windows CSI proxy + +{{< feature-state for_k8s_version="v1.22" state="stable" >}} + +CSI node plugins need to perform various privileged +operations like scanning of disk devices and mounting of file systems. These operations +differ for each host operating system. For Linux worker nodes, containerized CSI node +node plugins are typically deployed as privileged containers. For Windows worker nodes, +privileged operations for containerized CSI node plugins is supported using +[csi-proxy](https://github.com/kubernetes-csi/csi-proxy), a community-managed, +stand-alone binary that needs to be pre-installed on each Windows node. + +For more details, refer to the deployment guide of the CSI plugin you wish to deploy. + #### Migrating to CSI drivers from in-tree plugins {{< feature-state for_k8s_version="v1.17" state="beta" >}} @@ -1256,6 +1282,14 @@ provisioning/delete, attach/detach, mount/unmount and resizing of volumes. In-tree plugins that support `CSIMigration` and have a corresponding CSI driver implemented are listed in [Types of Volumes](#volume-types). +The following in-tree plugins support persistent storage on Windows nodes: + +* [`awsElasticBlockStore`](#awselasticblockstore) +* [`azureDisk`](#azuredisk) +* [`azureFile`](#azurefile) +* [`gcePersistentDisk`](#gcepersistentdisk) +* [`vsphereVolume`](#vspherevolume) + ### flexVolume {{< feature-state for_k8s_version="v1.23" state="deprecated" >}} @@ -1267,6 +1301,12 @@ volume plugin path on each node and in some cases the control plane nodes as wel Pods interact with FlexVolume drivers through the `flexVolume` in-tree volume plugin. For more details, see the FlexVolume [README](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md#readme) document. +The following FlexVolume [plugins](https://github.com/Microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows), +deployed as PowerShell scripts on the host, support Windows nodes: + +* [SMB](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~smb.cmd) +* [iSCSI](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~iscsi.cmd) + {{< note >}} FlexVolume is deprecated. 
Using an out-of-tree CSI driver is the recommended way to integrate external storage with Kubernetes.
diff --git a/content/en/docs/concepts/storage/windows-storage.md b/content/en/docs/concepts/storage/windows-storage.md
new file mode 100644
index 0000000000000..b8f40177ca005
--- /dev/null
+++ b/content/en/docs/concepts/storage/windows-storage.md
@@ -0,0 +1,71 @@
+---
+reviewers:
+- jingxu97
+- mauriciopoppe
+- jayunit100
+- jsturtevant
+- marosset
+- aravindhp
+title: Windows Storage
+content_type: concept
+---
+
+
+
+This page provides a storage overview specific to the Windows operating system.
+
+
+
+## Persistent storage {#storage}
+
+Windows has a layered filesystem driver to mount container layers and create a copy
+filesystem based on NTFS. All file paths in the container are resolved only within
+the context of that container.
+
+* With Docker, volume mounts can only target a directory in the container, and not
+  an individual file. This limitation does not apply to containerd.
+* Volume mounts cannot project files or directories back to the host filesystem.
+* Read-only filesystems are not supported because write access is always required
+  for the Windows registry and SAM database. However, read-only volumes are supported.
+* Volume user-masks and permissions are not available. Because the SAM is not shared
+  between the host & container, there's no mapping between them. All permissions are
+  resolved within the context of the container.
+
+As a result, the following storage functionality is not supported on Windows nodes:
+
+* Volume subpath mounts: only the entire volume can be mounted in a Windows container
+* Subpath volume mounting for Secrets
+* Host mount projection
+* Read-only root filesystem (mapped volumes still support `readOnly`)
+* Block device mapping
+* Memory as the storage medium (for example, `emptyDir.medium` set to `Memory`)
+* File system features like uid/gid; per-user Linux filesystem permissions
+* Setting [secret permissions with DefaultMode](/docs/concepts/configuration/secret/#secret-files-permissions) (due to UID/GID dependency)
+* NFS based storage/volume support
+* Expanding the mounted volume (resizefs)
+
+Kubernetes {{< glossary_tooltip text="volumes" term_id="volume" >}} enable complex
+applications, with data persistence and Pod volume sharing requirements, to be deployed
+on Kubernetes. Management of persistent volumes associated with a specific storage
+back-end or protocol includes actions such as provisioning/de-provisioning/resizing
+of volumes, attaching/detaching a volume to/from a Kubernetes node and
+mounting/dismounting a volume to/from individual containers in a pod that needs to
+persist data.
+
+Volume management components are shipped as Kubernetes volume
+[plugin](/docs/concepts/storage/volumes/#types-of-volumes).
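As a minimal sketch of how a Windows workload typically requests persistent storage through one of the volume plugins listed next, a PersistentVolumeClaim might look like the following; the claim name and the StorageClass name `windows-disk` are hypothetical and must map to a provisioner that is actually available in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win-data                    # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: windows-disk    # hypothetical StorageClass backed by a Windows-capable plugin
  resources:
    requests:
      storage: 10Gi
```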
+The following broad classes of Kubernetes volume plugins are supported on Windows: + +* [`FlexVolume plugins`](/docs/concepts/storage/volumes/#flexVolume) + * Please note that FlexVolumes have been deprecated as of 1.23 +* [`CSI Plugins`](/docs/concepts/storage/volumes/#csi) + +##### In-tree volume plugins + +The following in-tree plugins support persistent storage on Windows nodes: + +* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) +* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) +* [`azureFile`](/docs/concepts/storage/volumes/#azurefile) +* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk) +* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) diff --git a/content/en/docs/setup/production-environment/windows/_index.md b/content/en/docs/concepts/windows/_index.md similarity index 100% rename from content/en/docs/setup/production-environment/windows/_index.md rename to content/en/docs/concepts/windows/_index.md diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md new file mode 100644 index 0000000000000..91e9412759fe5 --- /dev/null +++ b/content/en/docs/concepts/windows/intro.md @@ -0,0 +1,387 @@ +--- +reviewers: +- jayunit100 +- jsturtevant +- marosset +- perithompson +title: Windows containers in Kubernetes +content_type: concept +weight: 65 +--- + + + +Windows applications constitute a large portion of the services and applications that +run in many organizations. [Windows containers](https://aka.ms/windowscontainers) +provide a way to encapsulate processes and package dependencies, making it easier +to use DevOps practices and follow cloud native patterns for Windows applications. + +Organizations with investments in Windows-based applications and Linux-based +applications don't have to look for separate orchestrators to manage their workloads, +leading to increased operational efficiencies across their deployments, regardless +of operating system. + + + +## Windows nodes in Kubernetes + +To enable the orchestration of Windows containers in Kubernetes, include Windows nodes +in your existing Linux cluster. Scheduling Windows containers in +{{< glossary_tooltip text="Pods" term_id="pod" >}} on Kubernetes is similar to +scheduling Linux-based containers. + +In order to run Windows containers, your Kubernetes cluster must include +multiple operating systems. +While you can only run the {{< glossary_tooltip text="control plane" term_id="control-plane" >}} on Linux, +you can deploy worker nodes running either Windows or Linux. + +Windows {{< glossary_tooltip text="nodes" term_id="node" >}} are +[supported](#windows-os-version-support) provided that the operating system is +Windows Server 2019. + +This document uses the term *Windows containers* to mean Windows containers with +process isolation. Kubernetes does not support running Windows containers with +[Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container). + +## Compatibility and limitations {#limitations} + +Some node features are only available if you use a specific +[container runtime](#container-runtime); others are not available on Windows nodes, +including: + +* HugePages: not supported for Windows containers +* Privileged containers: not supported for Windows containers. + [HostProcess Containers](/docs/tasks/configure-pod-container/create-hostprocess-pod/) offer similar functionality. 
+* TerminationGracePeriod: requires containerD
+
+Not all features of shared namespaces are supported. See [API compatibility](#api)
+for more details.
+
+See [Windows OS version compatibility](#windows-os-version-support) for details on
+the Windows versions that Kubernetes is tested against.
+
+From an API and kubectl perspective, Windows containers behave in much the same
+way as Linux-based containers. However, there are some notable differences in key
+functionality which are outlined in this section.
+
+### Comparison with Linux {#compatibility-linux-similarities}
+
+Key Kubernetes elements work the same way in Windows as they do in Linux. This
+section refers to several key workload abstractions and how they map to Windows.
+
+* [Pods](/docs/concepts/workloads/pods/)
+
+  A Pod is the basic building block of Kubernetes–the smallest and simplest unit in
+  the Kubernetes object model that you create or deploy. You may not deploy Windows and
+  Linux containers in the same Pod. All containers in a Pod are scheduled onto a single
+  Node where each Node represents a specific platform and architecture. The following
+  Pod capabilities, properties and events are supported with Windows containers:
+
+  * Single or multiple containers per Pod with process isolation and volume sharing
+  * Pod `status` fields
+  * Readiness, liveness, and startup probes
+  * postStart & preStop container lifecycle hooks
+  * ConfigMap, Secrets: as environment variables or volumes
+  * `emptyDir` volumes
+  * Named pipe host mounts
+  * Resource limits
+  * OS field:
+
+    The `.spec.os.name` field should be set to `windows` to indicate that the current Pod uses Windows containers.
+    The `IdentifyPodOS` feature gate needs to be enabled for this field to be recognized.
+
+    {{< note >}}
+    Starting from 1.24, the `IdentifyPodOS` feature gate is in Beta stage and is enabled by default.
+    {{< /note >}}
+
+    If the `IdentifyPodOS` feature gate is enabled and you set the `.spec.os.name` field to `windows`,
+    you must not set the following fields in the `.spec` of that Pod:
+
+    * `spec.hostPID`
+    * `spec.hostIPC`
+    * `spec.securityContext.seLinuxOptions`
+    * `spec.securityContext.seccompProfile`
+    * `spec.securityContext.fsGroup`
+    * `spec.securityContext.fsGroupChangePolicy`
+    * `spec.securityContext.sysctls`
+    * `spec.shareProcessNamespace`
+    * `spec.securityContext.runAsUser`
+    * `spec.securityContext.runAsGroup`
+    * `spec.securityContext.supplementalGroups`
+    * `spec.containers[*].securityContext.seLinuxOptions`
+    * `spec.containers[*].securityContext.seccompProfile`
+    * `spec.containers[*].securityContext.capabilities`
+    * `spec.containers[*].securityContext.readOnlyRootFilesystem`
+    * `spec.containers[*].securityContext.privileged`
+    * `spec.containers[*].securityContext.allowPrivilegeEscalation`
+    * `spec.containers[*].securityContext.procMount`
+    * `spec.containers[*].securityContext.runAsUser`
+    * `spec.containers[*].securityContext.runAsGroup`
+
+    In the above list, wildcards (`*`) indicate all elements in a list.
+    For example, `spec.containers[*].securityContext` refers to the SecurityContext object
+    for all containers. If any of these fields is specified, the Pod will
+    not be admitted by the API server.
+ +* [Workload resources](/docs/concepts/workloads/controllers/) including: + * ReplicaSet + * Deployment + * StatefulSet + * DaemonSet + * Job + * CronJob + * ReplicationController +* {{< glossary_tooltip text="Services" term_id="service" >}} + See [Load balancing and Services](#load-balancing-and-services) for more details. + +Pods, workload resources, and Services are critical elements to managing Windows +workloads on Kubernetes. However, on their own they are not enough to enable +the proper lifecycle management of Windows workloads in a dynamic cloud native +environment. + +* `kubectl exec` +* Pod and container metrics +* {{< glossary_tooltip text="Horizontal pod autoscaling" term_id="horizontal-pod-autoscaler" >}} +* {{< glossary_tooltip text="Resource quotas" term_id="resource-quota" >}} +* Scheduler preemption + +### Command line options for the kubelet {#kubelet-compatibility} + +Some kubelet command line options behave differently on Windows, as described below: + +* The `--windows-priorityclass` lets you set the scheduling priority of the kubelet process + (see [CPU resource management](/docs/concepts/configuration/windows-resource-management/#resource-management-cpu)) +* The `--kube-reserved`, `--system-reserved` , and `--eviction-hard` flags update + [NodeAllocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) +* Eviction by using `--enforce-node-allocable` is not implemented +* Eviction by using `--eviction-hard` and `--eviction-soft` are not implemented +* When running on a Windows node the kubelet does not have memory or CPU + restrictions. `--kube-reserved` and `--system-reserved` only subtract from `NodeAllocatable` + and do not guarantee resource provided for workloads. + See [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/#resource-reservation) + for more information. +* The `MemoryPressure` Condition is not implemented +* The kubelet does not take OOM eviction actions + +### API compatibility {#api} + +There are subtle differences in the way the Kubernetes APIs work for Windows due to the OS +and container runtime. Some workload properties were designed for Linux, and fail to run on Windows. + +At a high level, these OS concepts are different: + +* Identity - Linux uses userID (UID) and groupID (GID) which + are represented as integer types. User and group names + are not canonical - they are just an alias in `/etc/groups` + or `/etc/passwd` back to UID+GID. Windows uses a larger binary + [security identifier](https://docs.microsoft.com/en-us/windows/security/identity-protection/access-control/security-identifiers) (SID) + which is stored in the Windows Security Access Manager (SAM) database. This + database is not shared between the host and containers, or between containers. +* File permissions - Windows uses an access control list based on (SIDs), whereas + POSIX systems such as Linux use a bitmask based on object permissions and UID+GID, + plus _optional_ access control lists. +* File paths - the convention on Windows is to use `\` instead of `/`. The Go IO + libraries typically accept both and just make it work, but when you're setting a + path or command line that's interpreted inside a container, `\` may be needed. +* Signals - Windows interactive apps handle termination differently, and can + implement one or more of these: + * A UI thread handles well-defined messages including `WM_CLOSE`. + * Console apps handle Ctrl-C or Ctrl-break using a Control Handler. 
+ * Services register a Service Control Handler function that can accept + `SERVICE_CONTROL_STOP` control codes. + +Container exit codes follow the same convention where 0 is success, and nonzero is failure. +The specific error codes may differ across Windows and Linux. However, exit codes +passed from the Kubernetes components (kubelet, kube-proxy) are unchanged. + +#### Field compatibility for container specifications {#compatibility-v1-pod-spec-containers} + +The following list documents differences between how Pod container specifications +work between Windows and Linux: + +* Huge pages are not implemented in the Windows container + runtime, and are not available. They require [asserting a user + privilege](https://docs.microsoft.com/en-us/windows/desktop/Memory/large-page-support) + that's not configurable for containers. +* `requests.cpu` and `requests.memory` - requests are subtracted + from node available resources, so they can be used to avoid overprovisioning a + node. However, they cannot be used to guarantee resources in an overprovisioned + node. They should be applied to all containers as a best practice if the operator + wants to avoid overprovisioning entirely. +* `securityContext.allowPrivilegeEscalation` - + not possible on Windows; none of the capabilities are hooked up +* `securityContext.capabilities` - + POSIX capabilities are not implemented on Windows +* `securityContext.privileged` - + Windows doesn't support privileged containers +* `securityContext.procMount` - + Windows doesn't have a `/proc` filesystem +* `securityContext.readOnlyRootFilesystem` - + not possible on Windows; write access is required for registry & system + processes to run inside the container +* `securityContext.runAsGroup` - + not possible on Windows as there is no GID support +* `securityContext.runAsNonRoot` - + this setting will prevent containers from running as `ContainerAdministrator` + which is the closest equivalent to a root user on Windows. +* `securityContext.runAsUser` - + use [`runAsUserName`](/docs/tasks/configure-pod-container/configure-runasusername) + instead +* `securityContext.seLinuxOptions` - + not possible on Windows as SELinux is Linux-specific +* `terminationMessagePath` - + this has some limitations in that Windows doesn't support mapping single files. The + default value is `/dev/termination-log`, which does work because it does not + exist on Windows by default. + +#### Field compatibility for Pod specifications {#compatibility-v1-pod} + +The following list documents differences between how Pod specifications work between Windows and Linux: + +* `hostIPC` and `hostpid` - host namespace sharing is not possible on Windows +* `hostNetwork` - There is no Windows OS support to share the host network +* `dnsPolicy` - setting the Pod `dnsPolicy` to `ClusterFirstWithHostNet` is + not supported on Windows because host networking is not provided. Pods always + run with a container network. +* `podSecurityContext` (see below) +* `shareProcessNamespace` - this is a beta feature, and depends on Linux namespaces + which are not implemented on Windows. Windows cannot share process namespaces or + the container's root filesystem. Only the network can be shared. +* `terminationGracePeriodSeconds` - this is not fully implemented in Docker on Windows, + see the [GitHub issue](https://github.com/moby/moby/issues/25982). 
+ The behavior today is that the ENTRYPOINT process is sent CTRL_SHUTDOWN_EVENT, + then Windows waits 5 seconds by default, and finally shuts down + all processes using the normal Windows shutdown behavior. The 5 + second default is actually in the Windows registry + [inside the container](https://github.com/moby/moby/issues/25982#issuecomment-426441183), + so it can be overridden when the container is built. +* `volumeDevices` - this is a beta feature, and is not implemented on Windows. + Windows cannot attach raw block devices to pods. +* `volumes` + * If you define an `emptyDir` volume, you cannot set its volume source to `memory`. +* You cannot enable `mountPropagation` for volume mounts as this is not + supported on Windows. + +#### Field compatibility for Pod security context {#compatibility-v1-pod-spec-containers-securitycontext} + +None of the Pod [`securityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) fields work on Windows. + +## Node problem detector + +The node problem detector (see +[Monitor Node Health](/docs/tasks/debug/debug-cluster/monitor-node-health/)) +has preliminary support for Windows. +For more information, visit the project's [GitHub page](https://github.com/kubernetes/node-problem-detector#windows). + +## Pause container + +In a Kubernetes Pod, an infrastructure or “pause” container is first created +to host the container. In Linux, the cgroups and namespaces that make up a pod +need a process to maintain their continued existence; the pause process provides +this. Containers that belong to the same pod, including infrastructure and worker +containers, share a common network endpoint (same IPv4 and / or IPv6 address, same +network port spaces). Kubernetes uses pause containers to allow for worker containers +crashing or restarting without losing any of the networking configuration. + +Kubernetes maintains a multi-architecture image that includes support for Windows. +For Kubernetes v{{< skew currentVersion >}} the recommended pause image is `k8s.gcr.io/pause:3.6`. +The [source code](https://github.com/kubernetes/kubernetes/tree/master/build/pause) +is available on GitHub. + +Microsoft maintains a different multi-architecture image, with Linux and Windows +amd64 support, that you can find as `mcr.microsoft.com/oss/kubernetes/pause:3.6`. +This image is built from the same source as the Kubernetes maintained image but +all of the Windows binaries are [authenticode signed](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/authenticode) by Microsoft. +The Kubernetes project recommends using the Microsoft maintained image if you are +deploying to a production or production-like environment that requires signed +binaries. + +## Container runtimes {#container-runtime} + +You need to install a +{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}} +into each node in the cluster so that Pods can run there. + +The following container runtimes work with Windows: + +{{% thirdparty-content %}} + +### ContainerD + +{{< feature-state for_k8s_version="v1.20" state="stable" >}} + +You can use {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+ +as the container runtime for Kubernetes nodes that run Windows. + +Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#install-containerd). 
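If you want to confirm which container runtime (and version) each node in a mixed cluster is running, one simple check is shown below; this is a sketch, and the exact columns can vary with your kubectl version:

```bash
# The CONTAINER-RUNTIME column reports, for example, containerd://1.6.6 on a Windows node
kubectl get nodes -o wide
```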
+ +{{< note >}} +There is a [known limitation](/docs/tasks/configure-pod-container/configure-gmsa/#gmsa-limitations) +when using GMSA with containerd to access Windows network shares, which requires a +kernel patch. +{{< /note >}} + +### Mirantis Container Runtime {#mcr} + +[Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html) (MCR) +is available as a container runtime for all Windows Server 2019 and later versions. + +See [Install MCR on Windows Servers](https://docs.mirantis.com/mcr/20.10/install/mcr-windows.html) for more information. + +## Windows OS version compatibility {#windows-os-version-support} + +On Windows nodes, strict compatibility rules apply where the host OS version must +match the container base image OS version. Only Windows containers with a container +operating system of Windows Server 2019 are fully supported. + +For Kubernetes v{{< skew currentVersion >}}, operating system compatibility for Windows nodes (and Pods) +is as follows: + +Windows Server LTSC release +: Windows Server 2019 +: Windows Server 2022 + +Windows Server SAC release +: Windows Server version 20H2 + +The Kubernetes [version-skew policy](/docs/setup/release/version-skew-policy/) also applies. + +## Getting help and troubleshooting {#troubleshooting} + +Your main source of help for troubleshooting your Kubernetes cluster should start +with the [Troubleshooting](/docs/tasks/debug/) +page. + +Some additional, Windows-specific troubleshooting help is included +in this section. Logs are an important element of troubleshooting +issues in Kubernetes. Make sure to include them any time you seek +troubleshooting assistance from other contributors. Follow the +instructions in the +SIG Windows [contributing guide on gathering logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs). + +### Reporting issues and feature requests + +If you have what looks like a bug, or you would like to +make a feature request, please follow the [SIG Windows contributing guide](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#reporting-issues-and-feature-requests) to create a new issue. +You should first search the list of issues in case it was +reported previously and comment with your experience on the issue and add additional +logs. SIG Windows channel on the Kubernetes Slack is also a great avenue to get some initial support and +troubleshooting ideas prior to creating a ticket. + +## Deployment tools + +The kubeadm tool helps you to deploy a Kubernetes cluster, providing the control +plane to manage the cluster it, and nodes to run your workloads. +[Adding Windows nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) +explains how to deploy Windows nodes to your cluster using kubeadm. + +The Kubernetes [cluster API](https://cluster-api.sigs.k8s.io/) project also provides means to automate deployment of Windows nodes. + +## Windows distribution channels + +For a detailed explanation of Windows distribution channels see the +[Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/get-started-19/servicing-channels-19). + +Information on the different Windows Server servicing channels +including their support models can be found at +[Windows Server servicing channels](https://docs.microsoft.com/en-us/windows-server/get-started/servicing-channels-comparison). 
diff --git a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/en/docs/concepts/windows/user-guide.md similarity index 79% rename from content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md rename to content/en/docs/concepts/windows/user-guide.md index 7177a4aa05046..450a1bb5e0739 100644 --- a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/en/docs/concepts/windows/user-guide.md @@ -3,7 +3,6 @@ reviewers: - jayunit100 - jsturtevant - marosset -- perithompson title: Guide for scheduling Windows containers in Kubernetes content_type: concept weight: 75 @@ -11,31 +10,29 @@ weight: 75 -Windows applications constitute a large portion of the services and applications that run in many organizations. -This guide walks you through the steps to configure and deploy a Windows container in Kubernetes. - - +Windows applications constitute a large portion of the services and applications that run in many organizations. +This guide walks you through the steps to configure and deploy Windows containers in Kubernetes. ## Objectives * Configure an example deployment to run Windows containers on the Windows node -* (Optional) Configure an Active Directory Identity for your Pod using Group Managed Service Accounts (GMSA) +* Highlight Windows specific funcationality in Kubernetes ## Before you begin -* Create a Kubernetes cluster that includes a +* Create a Kubernetes cluster that includes a control plane and a [worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) -* It is important to note that creating and deploying services and workloads on Kubernetes -behaves in much the same way for Linux and Windows containers. -[Kubectl commands](/docs/reference/kubectl/) to interface with the cluster are identical. +* It is important to note that creating and deploying services and workloads on Kubernetes +behaves in much the same way for Linux and Windows containers. +[Kubectl commands](/docs/reference/kubectl/) to interface with the cluster are identical. The example in the section below is provided to jumpstart your experience with Windows containers. ## Getting Started: Deploying a Windows container -To deploy a Windows container on Kubernetes, you must first create an example application. -The example YAML file below creates a simple webserver application. +The example YAML file below deploys a simple webserver application running inside a Windows container. + Create a service spec named `win-webserver.yaml` with the contents below: ```yaml @@ -83,8 +80,8 @@ spec: ``` {{< note >}} -Port mapping is also supported, but for simplicity in this example -the container port 80 is exposed directly to the service. +Port mapping is also supported, but for simplicity this example exposes +port 80 of the container directly to the Service. {{< /note >}} 1. Check that all nodes are healthy: @@ -104,20 +101,19 @@ the container port 80 is exposed directly to the service. 1. Check that the deployment succeeded. 
To verify: - * Two containers per pod on the Windows node, use `docker ps` - * Two pods listed from the Linux control plane node, use `kubectl get pods` - * Node-to-pod communication across the network, `curl` port 80 of your pod IPs from the Linux control plane node + * Two pods listed from the Linux control plane node, use `kubectl get pods` + * Node-to-pod communication across the network, `curl` port 80 of your pod IPs from the Linux control plane node to check for a web server response - * Pod-to-pod communication, ping between pods (and across hosts, if you have more than one Windows node) + * Pod-to-pod communication, ping between pods (and across hosts, if you have more than one Windows node) using docker exec or kubectl exec - * Service-to-pod communication, `curl` the virtual service IP (seen under `kubectl get services`) + * Service-to-pod communication, `curl` the virtual service IP (seen under `kubectl get services`) from the Linux control plane node and from individual pods * Service discovery, `curl` the service name with the Kubernetes [default DNS suffix](/docs/concepts/services-networking/dns-pod-service/#services) * Inbound connectivity, `curl` the NodePort from the Linux control plane node or machines outside of the cluster * Outbound connectivity, `curl` external IPs from inside the pod using kubectl exec {{< note >}} -Windows container hosts are not able to access the IP of services scheduled on them due to current platform limitations of the Windows networking stack. +Windows container hosts are not able to access the IP of services scheduled on them due to current platform limitations of the Windows networking stack. Only Windows pods are able to access service IPs. {{< /note >}} @@ -125,43 +121,43 @@ Only Windows pods are able to access service IPs. ### Capturing logs from workloads -Logs are an important element of observability; they enable users to gain insights -into the operational aspect of workloads and are a key ingredient to troubleshooting issues. -Because Windows containers and workloads inside Windows containers behave differently from Linux containers, -users had a hard time collecting logs, limiting operational visibility. -Windows workloads for example are usually configured to log to ETW (Event Tracing for Windows) -or push entries to the application event log. -[LogMonitor](https://github.com/microsoft/windows-container-tools/tree/master/LogMonitor), an open source tool by Microsoft, -is the recommended way to monitor configured log sources inside a Windows container. -LogMonitor supports monitoring event logs, ETW providers, and custom application logs, +Logs are an important element of observability; they enable users to gain insights +into the operational aspect of workloads and are a key ingredient to troubleshooting issues. +Because Windows containers and workloads inside Windows containers behave differently from Linux containers, +users had a hard time collecting logs, limiting operational visibility. +Windows workloads for example are usually configured to log to ETW (Event Tracing for Windows) +or push entries to the application event log. +[LogMonitor](https://github.com/microsoft/windows-container-tools/tree/master/LogMonitor), an open source tool by Microsoft, +is the recommended way to monitor configured log sources inside a Windows container. +LogMonitor supports monitoring event logs, ETW providers, and custom application logs, piping them to STDOUT for consumption by `kubectl logs `. 
-Follow the instructions in the LogMonitor GitHub page to copy its binaries and configuration files +Follow the instructions in the LogMonitor GitHub page to copy its binaries and configuration files to all your containers and add the necessary entrypoints for LogMonitor to push your logs to STDOUT. -## Using configurable Container usernames +## Configuring container user -Starting with Kubernetes v1.16, Windows containers can be configured to run their entrypoints and processes -with different usernames than the image defaults. -The way this is achieved is a bit different from the way it is done for Linux containers. +### Using configurable Container usernames + +Windows containers can be configured to run their entrypoints and processes +with different usernames than the image defaults. Learn more about it [here](/docs/tasks/configure-pod-container/configure-runasusername/). -## Managing Workload Identity with Group Managed Service Accounts +### Managing Workload Identity with Group Managed Service Accounts -Starting with Kubernetes v1.14, Windows container workloads can be configured to use Group Managed Service Accounts (GMSA). -Group Managed Service Accounts are a specific type of Active Directory account that provides automatic password management, -simplified service principal name (SPN) management, and the ability to delegate the management to other administrators across multiple servers. -Containers configured with a GMSA can access external Active Directory Domain resources while carrying the identity configured with the GMSA. +Windows container workloads can be configured to use Group Managed Service Accounts (GMSA). +Group Managed Service Accounts are a specific type of Active Directory account that provide automatic password management, +simplified service principal name (SPN) management, and the ability to delegate the management to other administrators across multiple servers. +Containers configured with a GMSA can access external Active Directory Domain resources while carrying the identity configured with the GMSA. Learn more about configuring and using GMSA for Windows containers [here](/docs/tasks/configure-pod-container/configure-gmsa/). ## Taints and Tolerations -Users today need to use some combination of taints and node selectors in order to -keep Linux and Windows workloads on their respective OS-specific nodes. -This likely imposes a burden only on Windows users. The recommended approach is outlined below, +Users need to use some combination of taints and node selectors in order to +schedule Linux and Windows workloads to their respective OS-specific nodes. +The recommended approach is outlined below, with one of its main goals being that this approach should not break compatibility for existing Linux workloads. - If the `IdentifyPodOS` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled, you can (and should) set `.spec.os.name` for a Pod to indicate the operating system that the containers in that Pod are designed for. For Pods that run Linux containers, set @@ -184,26 +180,26 @@ so taints and tolerations and node selectors are still required ### Ensuring OS-specific workloads land on the appropriate container host -Users can ensure Windows containers can be scheduled on the appropriate host using Taints and Tolerations. +Users can ensure Windows containers can be scheduled on the appropriate host using Taints and Tolerations. 
All Kubernetes nodes today have the following default labels: * kubernetes.io/os = [windows|linux] * kubernetes.io/arch = [amd64|arm64|...] -If a Pod specification does not specify a nodeSelector like `"kubernetes.io/os": windows`, -it is possible the Pod can be scheduled on any host, Windows or Linux. -This can be problematic since a Windows container can only run on Windows and a Linux container can only run on Linux. +If a Pod specification does not specify a nodeSelector like `"kubernetes.io/os": windows`, +it is possible the Pod can be scheduled on any host, Windows or Linux. +This can be problematic since a Windows container can only run on Windows and a Linux container can only run on Linux. The best practice is to use a nodeSelector. -However, we understand that in many cases users have a pre-existing large number of deployments for Linux containers, -as well as an ecosystem of off-the-shelf configurations, such as community Helm charts, and programmatic Pod generation cases, such as with Operators. -In those situations, you may be hesitant to make the configuration change to add nodeSelectors. -The alternative is to use Taints. Because the kubelet can set Taints during registration, +However, we understand that in many cases users have a pre-existing large number of deployments for Linux containers, +as well as an ecosystem of off-the-shelf configurations, such as community Helm charts, and programmatic Pod generation cases, such as with Operators. +In those situations, you may be hesitant to make the configuration change to add nodeSelectors. +The alternative is to use Taints. Because the kubelet can set Taints during registration, it could easily be modified to automatically add a taint when running on Windows only. For example: `--register-with-taints='os=windows:NoSchedule'` -By adding a taint to all Windows nodes, nothing will be scheduled on them (that includes existing Linux Pods). +By adding a taint to all Windows nodes, nothing will be scheduled on them (that includes existing Linux Pods). In order for a Windows Pod to be scheduled on a Windows node, it would need both the nodeSelector and the appropriate matching toleration to choose Windows. @@ -223,26 +219,24 @@ tolerations: The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSelectors. -Kubernetes 1.17 automatically adds a new label `node.kubernetes.io/windows-build` to simplify this. +Kubernetes 1.17 automatically adds a new label `node.kubernetes.io/windows-build` to simplify this. If you're running an older version, then it's recommended to add this label manually to Windows nodes. -This label reflects the Windows major, minor, and build number that need to match for compatibility. +This label reflects the Windows major, minor, and build number that need to match for compatibility. Here are values used today for each Windows Server version. | Product Name | Build Number(s) | |--------------------------------------|------------------------| | Windows Server 2019 | 10.0.17763 | -| Windows Server version 1809 | 10.0.17763 | -| Windows Server version 1903 | 10.0.18362 | - +| Windows Server, Version 20H2 | 10.0.19042 | +| Windows Server 2022 | 10.0.20348 | ### Simplifying with RuntimeClass -[RuntimeClass] can be used to simplify the process of using taints and tolerations. +[RuntimeClass] can be used to simplify the process of using taints and tolerations. 
A cluster administrator can create a `RuntimeClass` object which is used to encapsulate these taints and tolerations. - -1. Save this file to `runtimeClasses.yml`. It includes the appropriate `nodeSelector` +1. Save this file to `runtimeClasses.yml`. It includes the appropriate `nodeSelector` for the Windows OS, architecture, and version. ```yaml @@ -313,7 +307,4 @@ spec: app: iis-2019 ``` - - - [RuntimeClass]: https://kubernetes.io/docs/concepts/containers/runtime-class/ diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md index 74a232831a45c..9d917505dee8a 100644 --- a/content/en/docs/concepts/workloads/controllers/job.md +++ b/content/en/docs/concepts/workloads/controllers/job.md @@ -51,13 +51,8 @@ job.batch/pi created Check on the status of the Job with `kubectl`: -```shell -kubectl describe jobs/pi -``` - -The output is similar to this: - -``` +{{< tabs name="Check status of Job" >}} +{{< tab name="kubectl describe job pi" codelang="bash" >}} Name: pi Namespace: default Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c @@ -91,7 +86,62 @@ Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7 -``` +{{< /tab >}} +{{< tab name="kubectl get job pi -o yaml" codelang="bash" >}} +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":{"spec":{"containers":[{"command":["perl","-Mbignum=bpi","-wle","print bpi(2000)"],"image":"perl","name":"pi"}],"restartPolicy":"Never"}}}} + creationTimestamp: "2022-06-15T08:40:15Z" + generation: 1 + labels: + controller-uid: 863452e6-270d-420e-9b94-53a54146c223 + job-name: pi + name: pi + namespace: default + resourceVersion: "987" + uid: 863452e6-270d-420e-9b94-53a54146c223 +spec: + backoffLimit: 4 + completionMode: NonIndexed + completions: 1 + parallelism: 1 + selector: + matchLabels: + controller-uid: 863452e6-270d-420e-9b94-53a54146c223 + suspend: false + template: + metadata: + creationTimestamp: null + labels: + controller-uid: 863452e6-270d-420e-9b94-53a54146c223 + job-name: pi + spec: + containers: + - command: + - perl + - -Mbignum=bpi + - -wle + - print bpi(2000) + image: perl + imagePullPolicy: Always + name: pi + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + dnsPolicy: ClusterFirst + restartPolicy: Never + schedulerName: default-scheduler + securityContext: {} + terminationGracePeriodSeconds: 30 +status: + active: 1 + ready: 1 + startTime: "2022-06-15T08:40:15Z" +{{< /tab >}} +{{< /tabs >}} To view completed Pods of a Job, use `kubectl get pods`. 
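For example (a sketch that relies on the `job-name` label the Job controller adds to its Pods, visible in the YAML output above):

```shell
# List the Pods created for the Job named "pi".
kubectl get pods --selector=job-name=pi

# Or capture their names for further use, such as fetching logs.
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
```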
@@ -119,7 +169,7 @@ kubectl logs $pods The output is similar to this: -```shell +``` 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 ``` @@ -253,9 +303,19 @@ due to a logical error in configuration etc. To do so, set `.spec.backoffLimit` to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an -exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The -back-off count is reset when a Job's Pod is deleted or successful without any -other Pods for the Job failing around that time. +exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. + +The number of retries is calculated in two ways: +- The number of Pods with `.status.phase = "Failed"`. +- When using `restartPolicy = "OnFailure"`, the number of retries in all the + containers of Pods with `.status.phase` equal to `Pending` or `Running`. + +If either of the calculations reaches the `.spec.backoffLimit`, the Job is +considered failed. + +When the [`JobTrackingWithFinalizers`](#job-tracking-with-finalizers) feature is +disabled, the number of failed Pods is only based on Pods that are still present +in the API. {{< note >}} If your job has `restartPolicy = "OnFailure"`, keep in mind that your Pod running the Job @@ -405,7 +465,7 @@ The pattern names are also links to examples and more detailed description. 
| ----------------------------------------- |:-----------------:|:---------------------------:|:-------------------:| | [Queue with Pod Per Work Item] | ✓ | | sometimes | | [Queue with Variable Pod Count] | ✓ | ✓ | | -| [Indexed Job with Static Work Assignment] | ✓ | | ✓ | +| [Indexed Job with Static Work Assignment] | ✓ | | ✓ | | [Job Template Expansion] | | | ✓ | When you specify completions with `.spec.completions`, each Pod created by the Job controller @@ -631,7 +691,6 @@ In order to use this behavior, you must enable the `JobTrackingWithFinalizers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/) and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/). -It is enabled by default. When enabled, the control plane tracks new Jobs using the behavior described below. Jobs created before the feature was enabled are unaffected. As a user, diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 8024e6246a9ef..470a5e5024150 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -78,7 +78,7 @@ kubectl describe rs/frontend And you will see output similar to: -```shell +``` Name: frontend Namespace: default Selector: tier=frontend @@ -130,7 +130,7 @@ kubectl get pods frontend-b2zdv -o yaml The output will look similar to this, with the frontend ReplicaSet's info set in the metadata's ownerReferences field: -```shell +```yaml apiVersion: v1 kind: Pod metadata: @@ -181,7 +181,7 @@ kubectl get pods The output shows that the new Pods are either already terminated, or in the process of being terminated: -```shell +``` NAME READY STATUS RESTARTS AGE frontend-b2zdv 1/1 Running 0 10m frontend-vcmts 1/1 Running 0 10m @@ -210,7 +210,7 @@ kubectl get pods ``` Will reveal in its output: -```shell +``` NAME READY STATUS RESTARTS AGE frontend-hmmj2 1/1 Running 0 9s pod1 1/1 Running 0 36s @@ -387,7 +387,7 @@ As such, it is recommended to use Deployments when you want ReplicaSets. ### Bare Pods -Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker). +Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node such as Kubelet. 
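To make that recommendation concrete, here is a minimal sketch of a ReplicaSet that supervises a single Pod; the name, labels, and image are illustrative:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: single-instance-app
spec:
  replicas: 1          # still worth a ReplicaSet: the Pod is recreated if it is deleted
  selector:
    matchLabels:
      app: single-instance-app
  template:
    metadata:
      labels:
        app: single-instance-app
    spec:
      containers:
      - name: app
        image: nginx
```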
### Job diff --git a/content/en/docs/concepts/workloads/pods/downward-api.md b/content/en/docs/concepts/workloads/pods/downward-api.md new file mode 100644 index 0000000000000..fcee383c45f44 --- /dev/null +++ b/content/en/docs/concepts/workloads/pods/downward-api.md @@ -0,0 +1,131 @@ +--- +title: Downward API +content_type: concept +description: > + There are two ways to expose Pod and container fields to a running container: + environment variables, and as files that are populated by a special volume type. + Together, these two ways of exposing Pod and container fields are called the downward API. +--- + + + +It is sometimes useful for a container to have information about itself, without +being overly coupled to Kubernetes. The _downward API_ allows containers to consume +information about themselves or the cluster without using the Kubernetes client +or API server. + +An example is an existing application that assumes a particular well-known +environment variable holds a unique identifier. One possibility is to wrap the +application, but that is tedious and error-prone, and it violates the goal of low +coupling. A better option would be to use the Pod's name as an identifier, and +inject the Pod's name into the well-known environment variable. + +In Kubernetes, there are two ways to expose Pod and container fields to a running container: + +* as [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api) +* as [files in a `downwardAPI` volume](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) + +Together, these two ways of exposing Pod and container fields are called the +_downward API_. + + + +## Available fields + +Only some Kubernetes API fields are available through the downward API. This +section lists which fields you can make available. + +You can pass information from available Pod-level fields using `fieldRef`. +At the API level, the `spec` for a Pod always defines at least one +[Container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container). +You can pass information from available Container-level fields using +`resourceFieldRef`. + +### Information available via `fieldRef` {#downwardapi-fieldRef} + +For most Pod-level fields, you can provide them to a container either as +an environment variable or using a `downwardAPI` volume. 
The fields available +via either mechanism are: + +`metadata.name` +: the pod's name + +`metadata.namespace` +: the pod's {{< glossary_tooltip text="namespace" term_id="namespace" >}} + +`metadata.uid` +: the pod's unique ID + +`metadata.annotations['']` +: the value of the pod's {{< glossary_tooltip text="annotation" term_id="annotation" >}} named `` (for example, `metadata.annotations['myannotation']`) + +`metadata.labels['']` +: the text value of the pod's {{< glossary_tooltip text="label" term_id="label" >}} named `` (for example, `metadata.labels['mylabel']`) + +`spec.serviceAccountName` +: the name of the pod's {{< glossary_tooltip text="service account" term_id="service-account" >}} + +`spec.nodeName` +: the name of the {{< glossary_tooltip term_id="node" text="node">}} where the Pod is executing + +`status.hostIP` +: the primary IP address of the node to which the Pod is assigned + +`status.podIP` +: the pod's primary IP address (usually, its IPv4 address) + +In addition, the following information is available through +a `downwardAPI` volume `fieldRef`, but **not as environment variables**: + +`metadata.labels` +: all of the pod's labels, formatted as `label-key="escaped-label-value"` with one label per line + +`metadata.annotations` +: all of the pod's annotations, formatted as `annotation-key="escaped-annotation-value"` with one annotation per line + +### Information available via `resourceFieldRef` {#downwardapi-resourceFieldRef} + +These container-level fields allow you to provide information about +[requests and limits](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) +for resources such as CPU and memory. + + +`resource: limits.cpu` +: A container's CPU limit + +`resource: requests.cpu` +: A container's CPU request + +`resource: limits.memory` +: A container's memory limit + +`resource: requests.memory` +: A container's memory request + +`resource: limits.hugepages-*` +: A container's hugepages limit (provided that the `DownwardAPIHugePages` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled) + +`resource: requests.hugepages-*` +: A container's hugepages request (provided that the `DownwardAPIHugePages` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled) + +`resource: limits.ephemeral-storage` +: A container's ephemeral-storage limit + +`resource: requests.ephemeral-storage` +: A container's ephemeral-storage request + +#### Fallback information for resource limits + +If CPU and memory limits are not specified for a container, and you use the +downward API to try to expose that information, then the +kubelet defaults to exposing the maximum allocatable value for CPU and memory +based on the [node allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) +calculation. + +## {{% heading "whatsnext" %}} + +You can read about [`downwardAPI` volumes](/docs/concepts/storage/volumes/#downwardapi). 
+ +You can try using the downward API to expose container- or Pod-level information: +* as [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api) +* as [files in `downwardAPI` volume](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md index 371ef58dac2df..61b90d17d0b05 100644 --- a/content/en/docs/concepts/workloads/pods/init-containers.md +++ b/content/en/docs/concepts/workloads/pods/init-containers.md @@ -334,4 +334,3 @@ Kubernetes, consult the documentation for the version you are using. * Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) * Learn how to [debug init containers](/docs/tasks/debug/debug-application/debug-init-containers/) - diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 34ca33d6b13cd..fe50cf7e2d4f8 100644 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -64,6 +64,7 @@ metadata: spec: topologySpreadConstraints: - maxSkew: + minDomains: topologyKey: whenUnsatisfiable: labelSelector: @@ -143,7 +144,7 @@ If we want an incoming Pod to be evenly spread with existing Pods across zones, `topologyKey: zone` implies the even distribution will only be applied to the nodes which have label pair "zone:<any value>" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod can't satisfy the constraint. -If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB": +If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed into "zoneB": {{}} graph BT @@ -188,7 +189,7 @@ graph BT You can tweak the Pod spec to meet various kinds of requirements: -- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed onto "zoneA" as well. +- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed into "zoneA" as well. - Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes instead of zones. In the above example, if `maxSkew` remains "1", the incoming Pod can only be placed onto "node4". - Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs are satisfied). However, it's preferred to be placed onto the topology domain which has fewer matching Pods. (Be aware that this preferability is jointly normalized with other internal scheduling priorities like resource usage ratio, etc.) 
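For reference, the fields discussed above come together in a Pod spec like the following sketch (the values are illustrative, and the optional `minDomains` field is omitted here):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: app
    image: nginx   # any image works for the purpose of the example
```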
@@ -219,7 +220,7 @@ You can use 2 TopologySpreadConstraints to control the Pods spreading on both zo

{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}

-In this case, to match the first constraint, the incoming Pod can only be placed onto "zoneB"; while in terms of the second constraint, the incoming Pod can only be placed onto "node4". Then the results of 2 constraints are ANDed, so the only viable option is to place on "node4".
+In this case, to match the first constraint, the incoming Pod can only be placed into "zoneB"; while in terms of the second constraint, the incoming Pod can only be placed onto "node4". Then the results of 2 constraints are ANDed, so the only viable option is to place on "node4".

Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:

@@ -242,7 +243,7 @@ graph BT
    class zoneA,zoneB cluster;
{{< /mermaid >}}

-If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put to "node2". Then a joint result of "zoneB" and "node2" returns nothing.
+If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be placed into "zoneB"; while in terms of the second constraint, "mypod" can only be placed onto "node2". Then a joint result of "zoneB" and "node2" returns nothing.

To overcome this situation, you can either increase the `maxSkew` or modify one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`.

@@ -286,7 +287,7 @@ class n5 k8s;
 class zoneC cluster;
{{< /mermaid >}}

-and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
+and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed into "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.

{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}

@@ -301,9 +302,9 @@ There are some implicit conventions worth noting here:

- The scheduler will bypass the nodes without `topologySpreadConstraints[*].topologyKey` present. This implies that:

  1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA".
-  2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
+  2. the incoming Pod has no chances to be scheduled onto such nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".

-- Be aware of what will happen if the incomingPod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed onto "zoneB" since the constraints are still satisfied.
However, after the placement, the degree of imbalance of the cluster remains unchanged - it's still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload's `topologySpreadConstraints[*].labelSelector` to match its own labels. +- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed into "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it's still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload's `topologySpreadConstraints[*].labelSelector` to match its own labels. ### Cluster-level default constraints diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md index 61a4e0a118588..b4c7d838fa704 100644 --- a/content/en/docs/contribute/_index.md +++ b/content/en/docs/contribute/_index.md @@ -18,8 +18,15 @@ card: {{< note >}} To learn more about contributing to Kubernetes in general, see the [contributor documentation](https://www.kubernetes.dev/docs/). + +You can also read the +{{< glossary_tooltip text="CNCF" term_id="cncf" >}} +[page](https://contribute.cncf.io/contributors/projects/#kubernetes) +about contributing to Kubernetes. {{< /note >}} +--- + This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs). Kubernetes documentation contributors: @@ -138,7 +145,7 @@ class first,second white {{}} Figure 2. Preparation for your first contribution. -- Read the [Contribution overview](/docs/contribute/new-content/overview/) to +- Read the [Contribution overview](/docs/contribute/new-content/) to learn about the different ways you can contribute. - Check [`kubernetes/website` issues list](https://github.com/kubernetes/website/issues/) for issues that make good entry points. diff --git a/content/en/docs/contribute/advanced.md b/content/en/docs/contribute/advanced.md index 9c37906f3d4e4..b825ad5e2c469 100644 --- a/content/en/docs/contribute/advanced.md +++ b/content/en/docs/contribute/advanced.md @@ -8,7 +8,7 @@ weight: 98 This page assumes that you understand how to -[contribute to new content](/docs/contribute/new-content/overview) and +[contribute to new content](/docs/contribute/new-content/) and [review others' work](/docs/contribute/review/reviewing-prs/), and are ready to learn about more ways to contribute. You need to use the Git command line client and other tools for some of these tasks. 
@@ -136,7 +136,7 @@ The role of co-chair is one of service: co-chairs build contributor capacity, ha Responsibilities include: - Keep SIG Docs focused on maximizing developer happiness through excellent documentation -- Exemplify the [community code of conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) and hold SIG members accountable to it +- Exemplify the [community code of conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md) and hold SIG members accountable to it - Learn and set best practices for the SIG by updating contribution guidelines - Schedule and run SIG meetings: weekly status updates, quarterly retro/planning sessions, and others as needed - Schedule and run doc sprints at KubeCon events and other conferences @@ -147,7 +147,7 @@ Responsibilities include: To schedule and run effective meetings, these guidelines show what to do, how to do it, and why. -**Uphold the [community code of conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)**: +**Uphold the [community code of conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)**: - Hold respectful, inclusive discussions with respectful, inclusive language. diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index 7630bdc7d9ebd..86ef49b2b914f 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -13,14 +13,19 @@ card: -This page shows you how to [localize](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/) the docs for a different language. +This page shows you how to +[localize](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/) the docs for a +different language. ## Contribute to an existing localization -You can help add or improve content to an existing localization. In [Kubernetes Slack](https://slack.k8s.io/) you'll find a channel for each localization. There is also a general [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations) where you can say hello. +You can help add or improve content to an existing localization. In [Kubernetes +Slack](https://slack.k8s.io/) you'll find a channel for each localization. There is also a general +[SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations) +where you can say hello. {{< note >}} If you want to work on a localization that already exists, check @@ -30,11 +35,14 @@ English original. You might see extra details there. ### Find your two-letter language code -First, consult the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) to find your localization's two-letter language code. For example, the two-letter code for Korean is `ko`. +First, consult the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) +to find your localization's two-letter language code. For example, the two-letter code for Korean +is `ko`. ### Fork and clone the repo -First, [create your own fork](/docs/contribute/new-content/open-a-pr/#fork-the-repo) of the [kubernetes/website](https://github.com/kubernetes/website) repository. +First, [create your own fork](/docs/contribute/new-content/open-a-pr/#fork-the-repo) of the +[kubernetes/website](https://github.com/kubernetes/website) repository. 
Then, clone your fork and `cd` into it: @@ -43,7 +51,8 @@ git clone https://github.com//website cd website ``` -The website content directory includes sub-directories for each language. The localization you want to help out with is inside `content/`. +The website content directory includes sub-directories for each language. The localization you +want to help out with is inside `content/`. ### Suggest changes @@ -57,8 +66,9 @@ equivalent fix by updating the localization you're working on. Please limit pull requests to a single localization, since pull requests that change content in multiple localizations could be difficult to review. -Follow [Suggesting Content Improvements](/docs/contribute/suggest-improvements/) to propose changes to -that localization. The process is very similar to proposing changes to the upstream (English) content. +Follow [Suggesting Content Improvements](/docs/contribute/suggesting-improvements/) +to propose changes to that localization. The process is very similar to proposing changes to the +upstream (English) content. ## Start a new localization @@ -86,60 +96,92 @@ can incrementally work towards that goal. ### Find community -Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/sig-docs) and the [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations). Other localization teams are happy to help you get started and answer any questions you have. +Let Kubernetes SIG Docs know you're interested in creating a localization! Join the +[SIG Docs Slack channel](https://kubernetes.slack.com/messages/sig-docs) and the +[SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations). +Other localization teams are happy to help you get started and answer any questions you have. -Please also consider participating in the [SIG Docs Localization Subgroup meeting](https://github.com/kubernetes/community/tree/master/sig-docs). The mission of the SIG Docs localization subgroup is to work across the SIG Docs localization teams to collaborate on defining and documenting the processes for creating localized contribution guides. In addition, the SIG Docs localization subgroup will look for opportunities for the creation and sharing of common tools across localization teams and also serve to identify new requirements to the SIG Docs Leadership team. If you have questions about this meeting, please inquire on the [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations). +Please also consider participating in the +[SIG Docs Localization Subgroup meeting](https://github.com/kubernetes/community/tree/master/sig-docs). +The mission of the SIG Docs localization subgroup is to work across the SIG Docs localization +teams to collaborate on defining and documenting the processes for creating localized contribution +guides. In addition, the SIG Docs localization subgroup will look for opportunities for the +creation and sharing of common tools across localization teams and also serve to identify new +requirements to the SIG Docs Leadership team. If you have questions about this meeting, please +inquire on the [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations). -You can also create a Slack channel for your localization in the `kubernetes/community` repository. 
For an example of adding a Slack channel, see the PR for [adding a channel for Persian](https://github.com/kubernetes/community/pull/4980). +You can also create a Slack channel for your localization in the `kubernetes/community` +repository. For an example of adding a Slack channel, see the PR for +[adding a channel for Persian](https://github.com/kubernetes/community/pull/4980). ### Join the Kubernetes GitHub organization -Once you've opened a localization PR, you can become members of the Kubernetes GitHub organization. Each person on the team needs to create their own [Organization Membership Request](https://github.com/kubernetes/org/issues/new/choose) in the `kubernetes/org` repository. +Once you've opened a localization PR, you can become members of the Kubernetes GitHub +organization. Each person on the team needs to create their own +[Organization Membership Request](https://github.com/kubernetes/org/issues/new/choose) +in the `kubernetes/org` repository. ### Add your localization team in GitHub -Next, add your Kubernetes localization team to [`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/main/config/kubernetes/sig-docs/teams.yaml). For an example of adding a localization team, see the PR to add the [Spanish localization team](https://github.com/kubernetes/org/pull/685). - -Members of `@kubernetes/sig-docs-**-owners` can approve PRs that change content within (and only within) your localization directory: `/content/**/`. - -For each localization, The `@kubernetes/sig-docs-**-reviews` team automates review assignment for new PRs. +Next, add your Kubernetes localization team to +[`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/main/config/kubernetes/sig-docs/teams.yaml). +For an example of adding a localization team, see the PR to add the +[Spanish localization team](https://github.com/kubernetes/org/pull/685). +Members of `@kubernetes/sig-docs-**-owners` can approve PRs that change content within (and only +within) your localization directory: `/content/**/`. +For each localization, The `@kubernetes/sig-docs-**-reviews` team automates review assignment for +new PRs. Members of `@kubernetes/website-maintainers` can create new localization branches to coordinate translation efforts. - -Members of `@kubernetes/website-milestone-maintainers` can use the `/milestone` [Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs. +Members of `@kubernetes/website-milestone-maintainers` can use the `/milestone` +[Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs. ### Configure the workflow -Next, add a GitHub label for your localization in the `kubernetes/test-infra` repository. A label lets you filter issues and pull requests for your specific language. - -For an example of adding a label, see the PR for adding the [Italian language label](https://github.com/kubernetes/test-infra/pull/11316). +Next, add a GitHub label for your localization in the `kubernetes/test-infra` repository. A label +lets you filter issues and pull requests for your specific language. +For an example of adding a label, see the PR for adding the +[Italian language label](https://github.com/kubernetes/test-infra/pull/11316). ### Modify the site configuration -The Kubernetes website uses Hugo as its web framework. 
The website's Hugo configuration resides in the [`config.toml`](https://github.com/kubernetes/website/tree/main/config.toml) file. To support a new localization, you'll need to modify `config.toml`. +The Kubernetes website uses Hugo as its web framework. The website's Hugo configuration resides in +the [`config.toml`](https://github.com/kubernetes/website/tree/main/config.toml) file. +To support a new localization, you'll need to modify `config.toml`. -Add a configuration block for the new language to `config.toml`, under the existing `[languages]` block. The German block, for example, looks like: +Add a configuration block for the new language to `config.toml`, under the existing `[languages]` block. +The German block, for example, looks like: ```toml [languages.de] title = "Kubernetes" description = "Produktionsreife Container-Verwaltung" languageName = "Deutsch (German)" -languageNameLatinScript = "German" +languageNameLatinScript = "Deutsch" contentDir = "content/de" weight = 8 ``` -The value for `languageName` will be listed in language selection bar. Assign "language name in native script (language name in latin script)" to `languageName`, for example, `languageName = "한국어 (Korean)"`. `languageNameLatinScript` can be used to access the language name in latin script and use it in the theme. Assign "language name in latin script" to `languageNameLatinScript`, for example, `languageNameLatinScript ="Korean"`. +The value for `languageName` will be listed in language selection bar. Assign "language name in +native script and language (English language name in latin script)" to `languageName`. +For example, `languageName = "한국어 (Korean)"` or `languageName = "Deutsch (German)"`. + +`languageNameLatinScript` can be used to access the language name in latin script and use it in +the theme. Assign "language name in latin script" to `languageNameLatinScript`. For example, +`languageNameLatinScript ="Korean"` or `languageNameLatinScript = "Deutsch"`. -When assigning a `weight` parameter for your block, find the language block with the highest weight and add 1 to that value. +When assigning a `weight` parameter for your block, find the language block with the highest +weight and add 1 to that value. -For more information about Hugo's multilingual support, see "[Multilingual Mode](https://gohugo.io/content-management/multilingual/)". +For more information about Hugo's multilingual support, see +"[Multilingual Mode](https://gohugo.io/content-management/multilingual/)". ### Add a new localization directory -Add a language-specific subdirectory to the [`content`](https://github.com/kubernetes/website/tree/main/content) folder in the repository. For example, the two-letter code for German is `de`: +Add a language-specific subdirectory to the +[`content`](https://github.com/kubernetes/website/tree/main/content) folder in the repository. +For example, the two-letter code for German is `de`: ```shell mkdir content/de @@ -149,28 +191,34 @@ You also need to create a directory inside `data/i18n` for [localized strings](#site-strings-in-i18n); look at existing localizations for an example. To use these new strings, you must also create a symbolic link from `i18n/.toml` to the actual string configuration in -`data/i18n//.toml` (remember to commit the symbolic -link). +`data/i18n//.toml` (remember to commit the symbolic link). For example, for German the strings live in `data/i18n/de/de.toml`, and `i18n/de.toml` is a symbolic link to `data/i18n/de/de.toml`. 
### Localize the community code of conduct -Open a PR against the [`cncf/foundation`](https://github.com/cncf/foundation/tree/master/code-of-conduct-languages) repository to add the code of conduct in your language. - +Open a PR against the [`cncf/foundation`](https://github.com/cncf/foundation/tree/main/code-of-conduct-languages) +repository to add the code of conduct in your language. ### Setting up the OWNERS files -To set the roles of each user contributing to the localization, create an `OWNERS` file inside the language-specific subdirectory with: +To set the roles of each user contributing to the localization, create an `OWNERS` file inside the +language-specific subdirectory with: -- **reviewers**: A list of kubernetes teams with reviewer roles, in this case, the `sig-docs-**-reviews` team created in [Add your localization team in GitHub](#add-your-localization-team-in-github). -- **approvers**: A list of kubernetes teams with approvers roles, in this case, the `sig-docs-**-owners` team created in [Add your localization team in GitHub](#add-your-localization-team-in-github). -- **labels**: A list of GitHub labels to automatically apply to a PR, in this case, the language label created in [Configure the workflow](#configure-the-workflow). +- **reviewers**: A list of kubernetes teams with reviewer roles, in this case, the + `sig-docs-**-reviews` team created in + [Add your localization team in GitHub](#add-your-localization-team-in-github). +- **approvers**: A list of kubernetes teams with approvers roles, in this case, the + `sig-docs-**-owners` team created in + [Add your localization team in GitHub](#add-your-localization-team-in-github). +- **labels**: A list of GitHub labels to automatically apply to a PR, in this case, the language + label created in [Configure the workflow](#configure-the-workflow). More information about the `OWNERS` file can be found at [go.k8s.io/owners](https://go.k8s.io/owners). -The [Spanish OWNERS file](https://git.k8s.io/website/content/es/OWNERS), with language code `es`, looks like: +The [Spanish OWNERS file](https://git.k8s.io/website/content/es/OWNERS), +with language code `es`, looks like: ```yaml # See the OWNERS docs at https://go.k8s.io/owners @@ -188,9 +236,13 @@ labels: - language/es ``` -After adding the language-specific `OWNERS` file, update the [root `OWNERS_ALIASES`](https://git.k8s.io/website/OWNERS_ALIASES) file with the new Kubernetes teams for the localization, `sig-docs-**-owners` and `sig-docs-**-reviews`. +After adding the language-specific `OWNERS` file, update the [root +`OWNERS_ALIASES`](https://git.k8s.io/website/OWNERS_ALIASES) file with the new Kubernetes teams +for the localization, `sig-docs-**-owners` and `sig-docs-**-reviews`. -For each team, add the list of GitHub users requested in [Add your localization team in GitHub](#add-your-localization-team-in-github), in alphabetical order. +For each team, add the list of GitHub users requested in +[Add your localization team in GitHub](#add-your-localization-team-in-github), +in alphabetical order. ```diff --- a/OWNERS_ALIASES @@ -214,33 +266,45 @@ For each team, add the list of GitHub users requested in [Add your localization ### Open a pull request -Next, [open a pull request](/docs/contribute/new-content/open-a-pr/#open-a-pr) (PR) to add a localization to the `kubernetes/website` repository. +Next, [open a pull request](/docs/contribute/new-content/open-a-pr/#open-a-pr) (PR) to add a +localization to the `kubernetes/website` repository. 
+The PR must include all of the [minimum required content](#minimum-required-content) before it can +be approved. -The PR must include all of the [minimum required content](#minimum-required-content) before it can be approved. - -For an example of adding a new localization, see the PR to enable [docs in French](https://github.com/kubernetes/website/pull/12548). +For an example of adding a new localization, see the PR to enable +[docs in French](https://github.com/kubernetes/website/pull/12548). ### Add a localized README file -To guide other localization contributors, add a new [`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of [k/website](https://github.com/kubernetes/website/), where `**` is the two-letter language code. For example, a German README file would be `README-de.md`. +To guide other localization contributors, add a new +[`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of +[k/website](https://github.com/kubernetes/website/), where `**` is the two-letter language code. +For example, a German README file would be `README-de.md`. -Provide guidance to localization contributors in the localized `README-**.md` file. Include the same information contained in `README.md` as well as: +Provide guidance to localization contributors in the localized `README-**.md` file. +Include the same information contained in `README.md` as well as: - A point of contact for the localization project - Any information specific to the localization -After you create the localized README, add a link to the file from the main English `README.md`, and include contact information in English. You can provide a GitHub ID, email address, [Slack channel](https://slack.com/), or other method of contact. You must also provide a link to your localized Community Code of Conduct. +After you create the localized README, add a link to the file from the main English `README.md`, +and include contact information in English. You can provide a GitHub ID, email address, +[Slack channel](https://slack.com/), or other method of contact. You must also provide a link to your +localized Community Code of Conduct. ### Launching your new localization Once a localization meets requirements for workflow and minimum output, SIG Docs will: - Enable language selection on the website -- Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/about/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/). +- Publicize the localization's availability through + [Cloud Native Computing Foundation](https://www.cncf.io/about/)(CNCF) channels, including the + [Kubernetes blog](https://kubernetes.io/blog/). ## Translating content -Localizing *all* of the Kubernetes documentation is an enormous task. It's okay to start small and expand over time. +Localizing *all* of the Kubernetes documentation is an enormous task. It's okay to start small and +expand over time. 
### Minimum required content

@@ -251,23 +315,29 @@ Description | URLs
Home | [All heading and subheading URLs](/docs/home/)
Setup | [All heading and subheading URLs](/docs/setup/)
Tutorials | [Kubernetes Basics](/docs/tutorials/kubernetes-basics/), [Hello Minikube](/docs/tutorials/hello-minikube/)
-Site strings | [All site strings](#Site-strings-in-i18n) in a new localized TOML file
+Site strings | [All site strings](#site-strings-in-i18n) in a new localized TOML file
Releases | [All heading and subheading URLs](/releases)

-Translated documents must reside in their own `content/**/` subdirectory, but otherwise follow the same URL path as the English source. For example, to prepare the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) tutorial for translation into German, create a subfolder under the `content/de/` folder and copy the English source:
+Translated documents must reside in their own `content/**/` subdirectory, but otherwise follow the
+same URL path as the English source. For example, to prepare the
+[Kubernetes Basics](/docs/tutorials/kubernetes-basics/) tutorial for translation into German,
+create a subfolder under the `content/de/` folder and copy the English source:

```shell
mkdir -p content/de/docs/tutorials
cp content/en/docs/tutorials/kubernetes-basics.md content/de/docs/tutorials/kubernetes-basics.md
```

-Translation tools can speed up the translation process. For example, some editors offers plugins to quickly translate text.
+Translation tools can speed up the translation process. For example, some editors offer plugins
+to quickly translate text.

{{< caution >}}
-Machine-generated translation is insufficient on its own. Localization requires extensive human review to meet minimum standards of quality.
+Machine-generated translation is insufficient on its own. Localization requires extensive human
+review to meet minimum standards of quality.
{{< /caution >}}

-To ensure accuracy in grammar and meaning, members of your localization team should carefully review all machine-generated translations before publishing.
+To ensure accuracy in grammar and meaning, members of your localization team should carefully
+review all machine-generated translations before publishing.

### Source files

@@ -278,17 +348,21 @@ To find source files for your target version:

1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website.
2. Select a branch for your target version from the following table:

- Target version | Branch
- -----|-----
- Latest version | [`main`](https://github.com/kubernetes/website/tree/main)
- Previous version | [`release-{{< skew prevMinorVersion >}}`](https://github.com/kubernetes/website/tree/release-{{< skew prevMinorVersion >}})
- Next version | [`dev-{{< skew nextMinorVersion >}}`](https://github.com/kubernetes/website/tree/dev-{{< skew nextMinorVersion >}})

-The `main` branch holds content for the current release `{{< latest-version >}}`. The release team will create a `{{< release-branch >}}` branch before the next release: v{{< skew nextMinorVersion >}}.
+ Target version | Branch
+ -----|-----
+ Latest version | [`main`](https://github.com/kubernetes/website/tree/main)
+ Previous version | [`release-{{< skew prevMinorVersion >}}`](https://github.com/kubernetes/website/tree/release-{{< skew prevMinorVersion >}})
+ Next version | [`dev-{{< skew nextMinorVersion >}}`](https://github.com/kubernetes/website/tree/dev-{{< skew nextMinorVersion >}})
+
+The `main` branch holds content for the current release `{{< latest-version >}}`. The release team
+will create a `{{< release-branch >}}` branch before the next release: v{{< skew nextMinorVersion >}}.

### Site strings in i18n

-Localizations must include the contents of [`data/i18n/en/en.toml`](https://github.com/kubernetes/website/blob/main/data/i18n/en/en.toml) in a new language-specific file. Using German as an example: `data/i18n/de/de.toml`.
+Localizations must include the contents of
+[`data/i18n/en/en.toml`](https://github.com/kubernetes/website/blob/main/data/i18n/en/en.toml)
+in a new language-specific file. Using German as an example: `data/i18n/de/de.toml`.

Add a new localization directory and file to `data/i18n/`. For example, with German (`de`):

@@ -306,17 +380,22 @@ placeholder text for the search form:
other = "Suchen"
```

-Localizing site strings lets you customize site-wide text and features: for example, the legal copyright text in the footer on each page.
+Localizing site strings lets you customize site-wide text and features: for example, the legal
+copyright text in the footer on each page.

### Language specific style guide and glossary

-Some language teams have their own language-specific style guide and glossary. For example, see the [Korean Localization Guide](/ko/docs/contribute/localization_ko/).
+Some language teams have their own language-specific style guide and glossary.
+For example, see the [Korean Localization Guide](/ko/docs/contribute/localization_ko/).

### Language specific Zoom meetings

-If the localization project needs a separate meeting time, contact a SIG Docs Co-Chair or Tech Lead to create a new reoccurring Zoom meeting and calendar invite. This is only needed when the the team is large enough to sustain and require a separate meeting.
+If the localization project needs a separate meeting time, contact a SIG Docs Co-Chair or Tech
+Lead to create a new recurring Zoom meeting and calendar invite. This is only needed when the
+team is large enough to sustain and require a separate meeting.

-Per CNCF policy, the localization teams must upload their meetings to the SIG Docs YouTube playlist. A SIG Docs Co-Chair or Tech Lead can help with the process until SIG Docs automates it.
+Per CNCF policy, the localization teams must upload their meetings to the SIG Docs YouTube
+playlist. A SIG Docs Co-Chair or Tech Lead can help with the process until SIG Docs automates it.

## Branching strategy

@@ -326,43 +405,66 @@ when starting out and the localization is not yet live.

To collaborate on a localization branch:

-1. A team member of [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) opens a localization branch from a source branch on https://github.com/kubernetes/website.
+1. A team member of
+   [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers)
+   opens a localization branch from a source branch on https://github.com/kubernetes/website.
- Your team approvers joined the `@kubernetes/website-maintainers` team when you [added your localization team](#add-your-localization-team-in-github) to the [`kubernetes/org`](https://github.com/kubernetes/org) repository. + Your team approvers joined the `@kubernetes/website-maintainers` team when you + [added your localization team](#add-your-localization-team-in-github) to the + [`kubernetes/org`](https://github.com/kubernetes/org) repository. - We recommend the following branch naming scheme: + We recommend the following branch naming scheme: - `dev--.` + `dev--.` - For example, an approver on a German localization team opens the localization branch `dev-1.12-de.1` directly against the k/website repository, based on the source branch for Kubernetes v1.12. + For example, an approver on a German localization team opens the localization branch + `dev-1.12-de.1` directly against the k/website repository, based on the source branch for + Kubernetes v1.12. 2. Individual contributors open feature branches based on the localization branch. - For example, a German contributor opens a pull request with changes to `kubernetes:dev-1.12-de.1` from `username:local-branch-name`. + For example, a German contributor opens a pull request with changes to + `kubernetes:dev-1.12-de.1` from `username:local-branch-name`. 3. Approvers review and merge feature branches into the localization branch. -4. Periodically, an approver merges the localization branch to its source branch by opening and approving a new pull request. Be sure to squash the commits before approving the pull request. +4. Periodically, an approver merges the localization branch to its source branch by opening and + approving a new pull request. Be sure to squash the commits before approving the pull request. -Repeat steps 1-4 as needed until the localization is complete. For example, subsequent German localization branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc. +Repeat steps 1-4 as needed until the localization is complete. For example, subsequent German +localization branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc. Teams must merge localized content into the same branch from which the content was sourced. - For example: + - a localization branch sourced from `main` must be merged into `main`. -- a localization branch sourced from `release-{{% skew "prevMinorVersion" %}}` must be merged into `release-{{% skew "prevMinorVersion" %}}`. +- a localization branch sourced from `release-{{% skew "prevMinorVersion" %}}` must be merged into + `release-{{% skew "prevMinorVersion" %}}`. {{< note >}} -If your localization branch was created from `main` branch but it is not merged into `main` before new release branch `{{< release-branch >}}` created, merge it into both `main` and new release branch `{{< release-branch >}}`. To merge your localization branch into new release branch `{{< release-branch >}}`, you need to switch upstream branch of your localization branch to `{{< release-branch >}}`. +If your localization branch was created from `main` branch but it is not merged into `main` before +new release branch `{{< release-branch >}}` created, merge it into both `main` and new release +branch `{{< release-branch >}}`. To merge your localization branch into new release branch +`{{< release-branch >}}`, you need to switch upstream branch of your localization branch to +`{{< release-branch >}}`. 
{{< /note >}} -At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous localization branch and the current localization branch. There are two scripts for comparing upstream changes. [`upstream_changes.py`](https://github.com/kubernetes/website/tree/main/scripts#upstream_changespy) is useful for checking the changes made to a specific file. And [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/main/scripts#diff_l10n_branchespy) is useful for creating a list of outdated files for a specific localization branch. +At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes +between the previous localization branch and the current localization branch. +There are two scripts for comparing upstream changes. + +- [`upstream_changes.py`](https://github.com/kubernetes/website/tree/main/scripts#upstream_changespy) + is useful for checking the changes made to a specific file. And +- [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/main/scripts#diff_l10n_branchespy) + is useful for creating a list of outdated files for a specific localization branch. -While only approvers can open a new localization branch and merge pull requests, anyone can open a pull request for a new localization branch. No special permissions are required. +While only approvers can open a new localization branch and merge pull requests, anyone can open a +pull request for a new localization branch. No special permissions are required. -For more information about working from forks or directly from the repository, see ["fork and clone the repo"](#fork-and-clone-the-repo). +For more information about working from forks or directly from the repository, see +["fork and clone the repo"](#fork-and-clone-the-repo). ## Upstream contributions SIG Docs welcomes upstream contributions and corrections to the English source. - + diff --git a/content/en/docs/contribute/new-content/blogs-case-studies.md b/content/en/docs/contribute/new-content/blogs-case-studies.md index 83b950105c8ad..3034f66ce8722 100644 --- a/content/en/docs/contribute/new-content/blogs-case-studies.md +++ b/content/en/docs/contribute/new-content/blogs-case-studies.md @@ -16,16 +16,17 @@ Case studies require extensive review before they're approved. ## The Kubernetes Blog -The Kubernetes blog is used by the project to communicate new features, community reports, and any news that might be relevant to the Kubernetes community. -This includes end users and developers. -Most of the blog's content is about things happening in the core project, but we encourage you to submit about things happening elsewhere in the ecosystem too! +The Kubernetes blog is used by the project to communicate new features, community reports, and any +news that might be relevant to the Kubernetes community. This includes end users and developers. +Most of the blog's content is about things happening in the core project, but we encourage you to +submit about things happening elsewhere in the ecosystem too! Anyone can write a blog post and submit it for review. ### Submit a Post -Blog posts should not be commercial in nature and should consist of original content that applies broadly to the Kubernetes community. -Appropriate blog content includes: +Blog posts should not be commercial in nature and should consist of original content that applies +broadly to the Kubernetes community. 
Appropriate blog content includes: - New Kubernetes capabilities - Kubernetes projects updates @@ -43,75 +44,138 @@ Unsuitable content includes: To submit a blog post, follow these steps: -1. [Sign the CLA](https://kubernetes.io/docs/contribute/start/#sign-the-cla) if you have not yet done so. -1. Have a look at the Markdown format for existing blog posts in the [website repository](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts). +1. [Sign the CLA](https://github.com/kubernetes/community/blob/master/CLA.md) + if you have not yet done so. + +1. Have a look at the Markdown format for existing blog posts in the + [website repository](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts). + 1. Write out your blog post in a text editor of your choice. -1. On the same link from step 2, click the Create new file button. Paste your content into the editor. Name the file to match the proposed title of the blog post, but don’t put the date in the file name. The blog reviewers will work with you on the final file name and the date the blog will be published. + +1. On the same link from step 2, click the Create new file button. Paste your content into the editor. + Name the file to match the proposed title of the blog post, but don’t put the date in the file name. + The blog reviewers will work with you on the final file name and the date the blog will be published. + 1. When you save the file, GitHub will walk you through the pull request process. -1. A blog post reviewer will review your submission and work with you on feedback and final details. When the blog post is approved, the blog will be scheduled for publication. + +1. A blog post reviewer will review your submission and work with you on feedback and final details. + When the blog post is approved, the blog will be scheduled for publication. ### Guidelines and expectations - Blog posts should not be vendor pitches. - - Articles must contain content that applies broadly to the Kubernetes community. For example, a submission should focus on upstream Kubernetes as opposed to vendor-specific configurations. Check the [Documentation style guide](/docs/contribute/style/content-guide/#what-s-allowed) for what is typically allowed on Kubernetes properties. - - Links should primarily be to the official Kubernetes documentation. When using external references, links should be diverse - For example a submission shouldn't contain only links back to a single company's blog. - - Sometimes this is a delicate balance. The [blog team](https://kubernetes.slack.com/messages/sig-docs-blog/) is there to give guidance on whether a post is appropriate for the Kubernetes blog, so don't hesitate to reach out. + + - Articles must contain content that applies broadly to the Kubernetes community. For example, a + submission should focus on upstream Kubernetes as opposed to vendor-specific configurations. + Check the [Documentation style guide](/docs/contribute/style/content-guide/#what-s-allowed) for + what is typically allowed on Kubernetes properties. + - Links should primarily be to the official Kubernetes documentation. When using external + references, links should be diverse - For example a submission shouldn't contain only links + back to a single company's blog. + - Sometimes this is a delicate balance. The [blog team](https://kubernetes.slack.com/messages/sig-docs-blog/) + is there to give guidance on whether a post is appropriate for the Kubernetes blog, so don't + hesitate to reach out. 
+ - Blog posts are not published on specific dates. - - Articles are reviewed by community volunteers. We'll try our best to accommodate specific timing, but we make no guarantees. - - Many core parts of the Kubernetes projects submit blog posts during release windows, delaying publication times. Consider submitting during a quieter period of the release cycle. - - If you are looking for greater coordination on post release dates, coordinating with [CNCF marketing](https://www.cncf.io/about/contact/) is a more appropriate choice than submitting a blog post. - - Sometimes reviews can get backed up. If you feel your review isn't getting the attention it needs, you can reach out to the blog team via [this slack channel](https://kubernetes.slack.com/messages/sig-docs-blog/) to ask in real time. + + - Articles are reviewed by community volunteers. We'll try our best to accommodate specific + timing, but we make no guarantees. + - Many core parts of the Kubernetes projects submit blog posts during release windows, delaying + publication times. Consider submitting during a quieter period of the release cycle. + - If you are looking for greater coordination on post release dates, coordinating with + [CNCF marketing](https://www.cncf.io/about/contact/) is a more appropriate choice than submitting a blog post. + - Sometimes reviews can get backed up. If you feel your review isn't getting the attention it needs, + you can reach out to the blog team via [this slack channel](https://kubernetes.slack.com/messages/sig-docs-blog/) + to ask in real time. + - Blog posts should be relevant to Kubernetes users. - - Topics related to participation in or results of Kubernetes SIGs activities are always on topic (see the work in the [Upstream Marketing Team](https://github.com/kubernetes/community/blob/master/communication/marketing-team/blog-guidelines.md#upstream-marketing-blog-guidelines) for support on these posts). - - The components of Kubernetes are purposely modular, so tools that use existing integration points like CNI and CSI are on topic. - - Posts about other CNCF projects may or may not be on topic. We recommend asking the blog team before submitting a draft. - - Many CNCF projects have their own blog. These are often a better choice for posts. There are times of major feature or milestone for a CNCF project that users would be interested in reading on the Kubernetes blog. - - Blog posts about contributing to the Kubernetes project should be in the [Kubernetes Contributors site](https://kubernetes.dev) + + - Topics related to participation in or results of Kubernetes SIGs activities are always on + topic (see the work in the [Upstream Marketing Team](https://github.com/kubernetes/community/blob/master/communication/marketing-team/blog-guidelines.md#upstream-marketing-blog-guidelines) + for support on these posts). + - The components of Kubernetes are purposely modular, so tools that use existing integration + points like CNI and CSI are on topic. + - Posts about other CNCF projects may or may not be on topic. We recommend asking the blog team + before submitting a draft. + - Many CNCF projects have their own blog. These are often a better choice for posts. There are + times of major feature or milestone for a CNCF project that users would be interested in + reading on the Kubernetes blog. 
+ - Blog posts about contributing to the Kubernetes project should be in the + [Kubernetes Contributors site](https://kubernetes.dev) + - Blog posts should be original content + - The official blog is not for repurposing existing content from a third party as new content. - - The [license](https://github.com/kubernetes/website/blob/main/LICENSE) for the blog allows commercial use of the content for commercial purposes, but not the other way around. + - The [license](https://github.com/kubernetes/website/blob/main/LICENSE) for the blog allows + commercial use of the content for commercial purposes, but not the other way around. + - Blog posts should aim to be future proof - - Given the development velocity of the project, we want evergreen content that won't require updates to stay accurate for the reader. - - It can be a better choice to add a tutorial or update official documentation than to write a high level overview as a blog post. - - Consider concentrating the long technical content as a call to action of the blog post, and focus on the problem space or why readers should care. + + - Given the development velocity of the project, we want evergreen content that won't require + updates to stay accurate for the reader. + - It can be a better choice to add a tutorial or update official documentation than to write a + high level overview as a blog post. + - Consider concentrating the long technical content as a call to action of the blog post, and + focus on the problem space or why readers should care. ### Technical Considerations for submitting a blog post -Submissions need to be in Markdown format to be used by the [Hugo](https://gohugo.io/) generator for the blog. There are [many resources available](https://gohugo.io/documentation/) on how to use this technology stack. +Submissions need to be in Markdown format to be used by the [Hugo](https://gohugo.io/) generator +for the blog. There are [many resources available](https://gohugo.io/documentation/) on how to use +this technology stack. -We recognize that this requirement makes the process more difficult for less-familiar folks to submit, and we're constantly looking at solutions to lower this bar. If you have ideas on how to lower the barrier, please volunteer to help out. +We recognize that this requirement makes the process more difficult for less-familiar folks to +submit, and we're constantly looking at solutions to lower this bar. If you have ideas on how to +lower the barrier, please volunteer to help out. -The SIG Docs [blog subproject](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject) manages the review process for blog posts. For more information, see [Submit a post](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject#submit-a-post). +The SIG Docs [blog subproject](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject) +manages the review process for blog posts. For more information, see +[Submit a post](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject#submit-a-post). To submit a blog post follow these directions: -- [Open a pull request](/docs/contribute/new-content/open-a-pr/#fork-the-repo) with a new blog post. New blog posts go under the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/main/content/en/blog/_posts) directory. +- [Open a pull request](/docs/contribute/new-content/open-a-pr/#fork-the-repo) with a new blog post. 
+ New blog posts go under the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/main/content/en/blog/_posts) + directory. -- Ensure that your blog post follows the correct naming conventions and the following frontmatter (metadata) information: +- Ensure that your blog post follows the correct naming conventions and the following frontmatter + (metadata) information: - - The Markdown file name must follow the format `YYYY-MM-DD-Your-Title-Here.md`. For example, `2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md`. - - Do **not** include dots in the filename. A name like `2020-01-01-whats-new-in-1.19.md` causes failures during a build. + - The Markdown file name must follow the format `YYYY-MM-DD-Your-Title-Here.md`. For example, + `2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md`. + - Do **not** include dots in the filename. A name like `2020-01-01-whats-new-in-1.19.md` causes + failures during a build. - The front matter must include the following: - ```yaml - --- - layout: blog - title: "Your Title Here" - date: YYYY-MM-DD - slug: text-for-URL-link-here-no-spaces - --- - ``` - - The first or initial commit message should be a short summary of the work being done and should stand alone as a description of the blog post. Please note that subsequent edits to your blog will be squashed into this main commit, so it should be as useful as possible. + ```yaml + --- + layout: blog + title: "Your Title Here" + date: YYYY-MM-DD + slug: text-for-URL-link-here-no-spaces + --- + ``` + + - The first or initial commit message should be a short summary of the work being done and + should stand alone as a description of the blog post. Please note that subsequent edits to + your blog will be squashed into this main commit, so it should be as useful as possible. + - Examples of a good commit message: - - _Add blog post on the foo kubernetes feature_ - - _blog: foobar announcement_ + - _Add blog post on the foo kubernetes feature_ + - _blog: foobar announcement_ - Examples of bad commit message: - _Add blog post_ - _._ - _initial commit_ - _draft post_ - - The blog team will then review your PR and give you comments on things you might need to fix. After that the bot will merge your PR and your blog post will be published. - - If the content of the blog post contains only content that is not expected to require updates to stay accurate for the reader, it can be marked as evergreen and exempted from the automatic warning about outdated content added to blog posts older than one year. + + - The blog team will then review your PR and give you comments on things you might need to fix. + After that the bot will merge your PR and your blog post will be published. + + - If the content of the blog post contains only content that is not expected to require updates + to stay accurate for the reader, it can be marked as evergreen and exempted from the automatic + warning about outdated content added to blog posts older than one year. + - To mark a blog post as evergreen, add this to the front matter: ```yaml @@ -121,13 +185,15 @@ To submit a blog post follow these directions: - **Tutorials** that only apply to specific releases or versions and not all future versions - References to pre-GA APIs or features - ## Submit a case study -Case studies highlight how organizations are using Kubernetes to solve -real-world problems. 
The Kubernetes marketing team and members of the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} collaborate with you on all case studies. +Case studies highlight how organizations are using Kubernetes to solve real-world problems. The +Kubernetes marketing team and members of the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} +collaborate with you on all case studies. Have a look at the source for the [existing case studies](https://github.com/kubernetes/website/tree/main/content/en/case-studies). -Refer to the [case study guidelines](https://github.com/cncf/foundation/blob/master/case-study-guidelines.md) and submit your request as outlined in the guidelines. +Refer to the [case study guidelines](https://github.com/cncf/foundation/blob/master/case-study-guidelines.md) +and submit your request as outlined in the guidelines. + diff --git a/content/en/docs/contribute/new-content/open-a-pr.md b/content/en/docs/contribute/new-content/open-a-pr.md index 548dbac5d01eb..666dfdce965ec 100644 --- a/content/en/docs/contribute/new-content/open-a-pr.md +++ b/content/en/docs/contribute/new-content/open-a-pr.md @@ -28,7 +28,7 @@ If your changes are large, read [Work from a local fork](#fork-the-repo) to lear ## Changes using GitHub If you're less experienced with git workflows, here's an easier method of -opening a pull request. The figure below outlines the steps and the details follow. +opening a pull request. Figure 1 outlines the steps and the details follow. @@ -61,7 +61,7 @@ class tasks,tasks2 white class id1 k8s {{}} -***Figure - Steps for opening a PR using GitHub*** +Figure 1. Steps for opening a PR using GitHub. 1. On the page where you see the issue, select the pencil icon at the top right. You can also scroll to the bottom of the page and select **Edit this page**. @@ -122,7 +122,7 @@ work from a local fork. Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed on your computer. You can also use a git UI application. -The figure below shows the steps to follow when you work from a local fork. The details for each step follow. +Figure 2 shows the steps to follow when you work from a local fork. The details for each step follow. @@ -151,7 +151,8 @@ class 1,2,3,3a,4,5,6 grey class S,T spacewhite class changes,changes2 white {{}} -***Figure - Working from a local fork to make your changes*** + +Figure 2. Working from a local fork to make your changes. ### Fork the kubernetes/website repository @@ -291,34 +292,24 @@ You can either build the website's container image or run Hugo locally. Building The commands below use Docker as default container engine. Set the `CONTAINER_ENGINE` environment variable to override this behaviour. {{< /note >}} -1. Build the image locally: - - ```bash - # Use docker (default) - make container-image - - ### OR ### +1. Build the container image locally + _You only need this step if you are testing a change to the Hugo tool itself_ + ```bash + # Run this in a terminal (if required) + make container-image + ``` - # Use podman - CONTAINER_ENGINE=podman make container-image - ``` +1. Start Hugo in a container: -2. After building the `kubernetes-hugo` image locally, build and serve the site: + ```bash + # Run this in a terminal + make container-serve + ``` - ```bash - # Use docker (default) - make container-serve - - ### OR ### - - # Use podman - CONTAINER_ENGINE=podman make container-serve - ``` - -3. In a web browser, navigate to `https://localhost:1313`. Hugo watches the +1. 
In a web browser, navigate to `https://localhost:1313`. Hugo watches the changes and rebuilds the site as needed. -4. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, +1. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, or close the terminal window. {{% /tab %}} @@ -353,7 +344,7 @@ Alternately, install and use the `hugo` command on your computer: ### Open a pull request from your fork to kubernetes/website {#open-a-pr} -The figure below shows the steps to open a PR from your fork to the K8s/website. The details follow. +Figure 3 shows the steps to open a PR from your fork to the K8s/website. The details follow. @@ -379,7 +370,8 @@ classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:b class 1,2,3,4,5,6,7,8 grey class first,second white {{}} -***Figure - Steps to open a PR from your fork to the K8s/website*** + +Figure 3. Steps to open a PR from your fork to the K8s/website. 1. In a web browser, go to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository. 2. Select **New Pull Request**. diff --git a/content/en/docs/contribute/participate/_index.md b/content/en/docs/contribute/participate/_index.md index ff4714f8034c4..b76423ededd89 100644 --- a/content/en/docs/contribute/participate/_index.md +++ b/content/en/docs/contribute/participate/_index.md @@ -115,6 +115,6 @@ SIG Docs approvers. Here's how it works. For more information about contributing to the Kubernetes documentation, see: -- [Contributing new content](/docs/contribute/new-content/overview/) +- [Contributing new content](/docs/contribute/new-content/) - [Reviewing content](/docs/contribute/review/reviewing-prs) - [Documentation style guide](/docs/contribute/style/) diff --git a/content/en/docs/contribute/participate/pr-wranglers.md b/content/en/docs/contribute/participate/pr-wranglers.md index 865af3580531b..ee553c64d68c8 100644 --- a/content/en/docs/contribute/participate/pr-wranglers.md +++ b/content/en/docs/contribute/participate/pr-wranglers.md @@ -84,7 +84,7 @@ To close a pull request, leave a `/close` comment on the PR. {{< note >}} -The [`fejta-bot`](https://github.com/fejta-bot) bot marks issues as stale after 90 days of inactivity. After 30 more days it marks issues as rotten and closes them. PR wranglers should close issues after 14-30 days of inactivity. +The [`k8s-triage-robot`](https://github.com/k8s-triage-robot) bot marks issues as stale after 90 days of inactivity. After 30 more days it marks issues as rotten and closes them. PR wranglers should close issues after 14-30 days of inactivity. {{< /note >}} @@ -100,4 +100,4 @@ In late 2021, SIG Docs introduced the PR Wrangler Shadow Program. The program wa - Others can reach out on the [#sig-docs Slack channel](https://kubernetes.slack.com/messages/sig-docs) for requesting to shadow an assigned PR Wrangler for a specific week. Feel free to reach out to Brad Topol (`@bradtopol`) or one of the [SIG Docs co-chairs/leads](https://github.com/kubernetes/community/tree/master/sig-docs#leadership). -- Once you've signed up to shadow a PR Wrangler, introduce yourself to the PR Wrangler on the [Kubernetes Slack](slack.k8s.io). \ No newline at end of file +- Once you've signed up to shadow a PR Wrangler, introduce yourself to the PR Wrangler on the [Kubernetes Slack](https://slack.k8s.io). 
diff --git a/content/en/docs/contribute/participate/roles-and-responsibilities.md b/content/en/docs/contribute/participate/roles-and-responsibilities.md index c577c3f8be8dc..10d6072ce3699 100644 --- a/content/en/docs/contribute/participate/roles-and-responsibilities.md +++ b/content/en/docs/contribute/participate/roles-and-responsibilities.md @@ -32,7 +32,7 @@ Anyone can: - Suggest improvements on [Slack](https://slack.k8s.io/) or the [SIG docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). -After [signing the CLA](/docs/contribute/new-content/overview/#sign-the-cla), anyone can also: +After [signing the CLA](https://github.com/kubernetes/community/blob/master/CLA.md), anyone can also: - Open a pull request to improve existing content, add new content, or write a blog post or case study - Create diagrams, graphics assets, and embeddable screencasts and videos diff --git a/content/en/docs/contribute/review/reviewing-prs.md b/content/en/docs/contribute/review/reviewing-prs.md index 3e71e9c43434a..54b209dc0e24e 100644 --- a/content/en/docs/contribute/review/reviewing-prs.md +++ b/content/en/docs/contribute/review/reviewing-prs.md @@ -7,7 +7,8 @@ weight: 10 -Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls) section in the Kubernetes website repository to see open pull requests. +Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls) +section in the Kubernetes website repository to see open pull requests. Reviewing documentation pull requests is a great way to introduce yourself to the Kubernetes community. @@ -27,7 +28,9 @@ Before reviewing, it's a good idea to: Before you start a review: -- Read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) and ensure that you abide by it at all times. + +- Read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md) + and ensure that you abide by it at all times. - Be polite, considerate, and helpful. - Comment on positive aspects of PRs as well as changes. - Be empathetic and mindful of how your review may be received. @@ -36,7 +39,8 @@ Before you start a review: ## Review process -In general, review pull requests for content and style in English. Figure 1 outlines the steps for the review process. The details for each step follow. +In general, review pull requests for content and style in English. Figure 1 outlines the steps for +the review process. The details for each step follow. @@ -69,33 +73,40 @@ class third,fourth white Figure 1. Review process steps. -1. Go to - [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). - You see a list of every open pull request against the Kubernetes website and - docs. - -2. Filter the open PRs using one or all of the following labels: - - `cncf-cla: yes` (Recommended): PRs submitted by contributors who have not signed the CLA cannot be merged. See [Sign the CLA](/docs/contribute/new-content/overview/#sign-the-cla) for more information. - - `language/en` (Recommended): Filters for english language PRs only. - - `size/`: filters for PRs of a certain size. If you're new, start with smaller PRs. - - Additionally, ensure the PR isn't marked as a work in progress. PRs using the `work in progress` label are not ready for review yet. - -3. 
Once you've selected a PR to review, understand the change by: - - Reading the PR description to understand the changes made, and read any linked issues - - Reading any comments by other reviewers - - Clicking the **Files changed** tab to see the files and lines changed - - Previewing the changes in the Netlify preview build by scrolling to the PR's build check section at the bottom of the **Conversation** tab. - Here's a screenshot (this shows GitHub's desktop site; if you're reviewing - on a tablet or smartphone device, the GitHub web UI is slightly different): - {{< figure src="/images/docs/github_netlify_deploy_preview.png" alt="GitHub pull request details including link to Netlify preview" >}} - To open the preview, click on the **Details** link of the **deploy/netlify** line in the list of checks. - -4. Go to the **Files changed** tab to start your review. - 1. Click on the `+` symbol beside the line you want to comment on. - 2. Fill in any comments you have about the line and click either **Add single comment** (if you have only one comment to make) or **Start a review** (if you have multiple comments to make). - 3. When finished, click **Review changes** at the top of the page. Here, you can add - add a summary of your review (and leave some positive comments for the contributor!), +1. Go to [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). + You see a list of every open pull request against the Kubernetes website and docs. + +2. Filter the open PRs using one or all of the following labels: + + - `cncf-cla: yes` (Recommended): PRs submitted by contributors who have not signed the CLA + cannot be merged. See [Sign the CLA](/docs/contribute/new-content/#sign-the-cla) + for more information. + - `language/en` (Recommended): Filters for english language PRs only. + - `size/`: filters for PRs of a certain size. If you're new, start with smaller PRs. + + Additionally, ensure the PR isn't marked as a work in progress. PRs using the `work in + progress` label are not ready for review yet. + +3. Once you've selected a PR to review, understand the change by: + + - Reading the PR description to understand the changes made, and read any linked issues + - Reading any comments by other reviewers + - Clicking the **Files changed** tab to see the files and lines changed + - Previewing the changes in the Netlify preview build by scrolling to the PR's build check + section at the bottom of the **Conversation** tab. + Here's a screenshot (this shows GitHub's desktop site; if you're reviewing + on a tablet or smartphone device, the GitHub web UI is slightly different): + {{< figure src="/images/docs/github_netlify_deploy_preview.png" alt="GitHub pull request details including link to Netlify preview" >}} + To open the preview, click on the **Details** link of the **deploy/netlify** line in the + list of checks. + +4. Go to the **Files changed** tab to start your review. + + 1. Click on the `+` symbol beside the line you want to comment on. + 1. Fill in any comments you have about the line and click either **Add single comment** (if you + have only one comment to make) or **Start a review** (if you have multiple comments to make). + 1. When finished, click **Review changes** at the top of the page. Here, you can add + a summary of your review (and leave some positive comments for the contributor!), approve the PR, comment or request changes as needed. New contributors should always choose **Comment**. 
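The label filters described in step 2 can also be combined into a single query in the search box on the
pull requests page. For example (a sketch only; `size/XS` is just one of the repository's size labels,
so adjust the labels to what you want to review):

```
is:pr is:open label:"cncf-cla: yes" label:language/en label:size/XS
```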
@@ -119,14 +130,22 @@ When reviewing, use the following as a starting point. ### Website -- Did this PR change or remove a page title, slug/alias or anchor link? If so, are there broken links as a result of this PR? Is there another option, like changing the page title without changing the slug? +- Did this PR change or remove a page title, slug/alias or anchor link? If so, are there broken + links as a result of this PR? Is there another option, like changing the page title without + changing the slug? + - Does the PR introduce a new page? If so: - - Is the page using the right [page content type](/docs/contribute/style/page-content-types/) and associated Hugo shortcodes? + + - Is the page using the right [page content type](/docs/contribute/style/page-content-types/) + and associated Hugo shortcodes? - Does the page appear correctly in the section's side navigation (or at all)? - Should the page appear on the [Docs Home](/docs/home/) listing? -- Do the changes show up in the Netlify preview? Be particularly vigilant about lists, code blocks, tables, notes and images. + +- Do the changes show up in the Netlify preview? Be particularly vigilant about lists, code + blocks, tables, notes and images. ### Other -For small issues with a PR, like typos or whitespace, prefix your comments with `nit:`. This lets the author know the issue is non-critical. +For small issues with a PR, like typos or whitespace, prefix your comments with `nit:`. +This lets the author know the issue is non-critical. diff --git a/content/en/docs/contribute/suggesting-improvements.md b/content/en/docs/contribute/suggesting-improvements.md index 9cab3f7a72bc4..d79df11476419 100644 --- a/content/en/docs/contribute/suggesting-improvements.md +++ b/content/en/docs/contribute/suggesting-improvements.md @@ -1,6 +1,5 @@ --- title: Suggesting content improvements -slug: suggest-improvements content_type: concept weight: 10 card: diff --git a/content/en/docs/home/supported-doc-versions.md b/content/en/docs/home/supported-doc-versions.md index b955f95f567a7..fd3559a4d3786 100644 --- a/content/en/docs/home/supported-doc-versions.md +++ b/content/en/docs/home/supported-doc-versions.md @@ -10,3 +10,8 @@ card: This website contains documentation for the current version of Kubernetes and the four previous versions of Kubernetes. + +The availability of documentation for a Kubernetes version is separate from whether +that release is currently supported. +Read [Support period](/releases/patch-releases/#support-period) to learn about +which versions of Kubernetes are officially supported, and for how long. \ No newline at end of file diff --git a/content/en/docs/images/ingress.svg b/content/en/docs/images/ingress.svg new file mode 100644 index 0000000000000..450a0aae9b4fa --- /dev/null +++ b/content/en/docs/images/ingress.svg @@ -0,0 +1 @@ +
[ingress.svg text labels: cluster, Ingress-managed load balancer, routing rule, Ingress, Service, Pod, client]
    \ No newline at end of file diff --git a/content/en/docs/images/ingressFanOut.svg b/content/en/docs/images/ingressFanOut.svg new file mode 100644 index 0000000000000..a6bf202635164 --- /dev/null +++ b/content/en/docs/images/ingressFanOut.svg @@ -0,0 +1 @@ +
[ingressFanOut.svg text labels: cluster, Ingress-managed load balancer, /foo, /bar, Ingress 178.91.123.132, Service service1:4200, Service service2:8080, Pod, client]
    \ No newline at end of file diff --git a/content/en/docs/images/ingressNameBased.svg b/content/en/docs/images/ingressNameBased.svg new file mode 100644 index 0000000000000..7e1d7be98c60f --- /dev/null +++ b/content/en/docs/images/ingressNameBased.svg @@ -0,0 +1 @@ +
[ingressNameBased.svg text labels: cluster, Ingress-managed load balancer, Host: foo.bar.com, Host: bar.foo.com, Ingress 178.91.123.132, Service service1:80, Service service2:80, Pod, client]
    \ No newline at end of file diff --git a/content/en/docs/images/tutor-service-nodePort-fig01.svg b/content/en/docs/images/tutor-service-nodePort-fig01.svg new file mode 100644 index 0000000000000..bb4d866f853f3 --- /dev/null +++ b/content/en/docs/images/tutor-service-nodePort-fig01.svg @@ -0,0 +1 @@ +
[tutor-service-nodePort-fig01.svg text labels: SNAT (×2), client, Node 2, Node 1, Endpoint]
    \ No newline at end of file diff --git a/content/en/docs/images/tutor-service-nodePort-fig02.svg b/content/en/docs/images/tutor-service-nodePort-fig02.svg new file mode 100644 index 0000000000000..1a891575e5f58 --- /dev/null +++ b/content/en/docs/images/tutor-service-nodePort-fig02.svg @@ -0,0 +1 @@ +
[tutor-service-nodePort-fig02.svg text labels: client, Node 1, Node 2, endpoint]
    \ No newline at end of file diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md index c4e217af2a497..0ebd59c40684f 100644 --- a/content/en/docs/reference/_index.md +++ b/content/en/docs/reference/_index.md @@ -77,6 +77,7 @@ operator to use or manage a cluster. * [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/) * [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/) * [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/) +* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1/) * [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) and [kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/) * [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/) @@ -88,6 +89,7 @@ operator to use or manage a cluster. * [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) and [Client authentication API (v1)](/docs/reference/config-api/client-authentication.v1/) * [WebhookAdmission configuration (v1)](/docs/reference/config-api/apiserver-webhookadmission.v1/) +* [ImagePolicy API (v1alpha1)](/docs/reference/config-api/imagepolicy.v1alpha1/) ## Config API for kubeadm @@ -97,6 +99,6 @@ operator to use or manage a cluster. ## Design Docs An archive of the design docs for Kubernetes functionality. Good starting points are -[Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) and -[Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals). +[Kubernetes Architecture](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) and +[Kubernetes Design Overview](https://git.k8s.io/design-proposals-archive). 
diff --git a/content/en/docs/reference/access-authn-authz/_index.md b/content/en/docs/reference/access-authn-authz/_index.md index 86d06488a8742..3677f79c57149 100644 --- a/content/en/docs/reference/access-authn-authz/_index.md +++ b/content/en/docs/reference/access-authn-authz/_index.md @@ -24,3 +24,5 @@ Reference documentation: - Service accounts - [Developer guide](/docs/tasks/configure-pod-container/configure-service-account/) - [Administration](/docs/reference/access-authn-authz/service-accounts-admin/) +- [Kubelet Authentication & Authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/) + - including kubelet [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index abc3ee968195f..f03b04f8e3908 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -94,7 +94,7 @@ kube-apiserver -h | grep enable-admission-plugins In the current version, the default ones are: ```shell -CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook +CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook ``` ## What does each admission controller do? @@ -139,7 +139,7 @@ requests with the `spec.signerName` requested on the CertificateSigningRequest r See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for more information on the permissions required to perform different actions on CertificateSigningRequest resources. -### CertificateSubjectRestrictions {#certificatesubjectrestrictions} +### CertificateSubjectRestriction {#certificatesubjectrestriction} This admission controller observes creation of CertificateSigningRequest resources that have a `spec.signerName` of `kubernetes.io/kube-apiserver-client`. It rejects any request that specifies a 'group' (or 'organization attribute') @@ -232,12 +232,10 @@ of it. This admission controller mitigates the problem where the API server gets flooded by event requests. The cluster admin can specify event rate limits by: - * Enabling the `EventRateLimit` admission controller; - * Referencing an `EventRateLimit` configuration file from the file provided to the API - server's command line flag `--admission-control-config-file`: +* Enabling the `EventRateLimit` admission controller; +* Referencing an `EventRateLimit` configuration file from the file provided to the API + server's command line flag `--admission-control-config-file`: -{{< tabs name="eventratelimit_example" >}} -{{% tab name="apiserver.config.k8s.io/v1" %}} ```yaml apiVersion: apiserver.config.k8s.io/v1 kind: AdmissionConfiguration @@ -246,19 +244,6 @@ plugins: path: eventconfig.yaml ... 
``` -{{% /tab %}} -{{% tab name="apiserver.k8s.io/v1alpha1" %}} -```yaml -# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 -apiVersion: apiserver.k8s.io/v1alpha1 -kind: AdmissionConfiguration -plugins: -- name: EventRateLimit - path: eventconfig.yaml -... -``` -{{% /tab %}} -{{< /tabs >}} There are four types of limits that can be specified in the configuration: @@ -283,7 +268,7 @@ limits: burst: 50 ``` -See the [EventRateLimit proposal](https://git.k8s.io/community/contributors/design-proposals/api-machinery/admission_control_event_rate_limit.md) +See the [EventRateLimit Config API (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/) for more details. ### ExtendedResourceToleration {#extendedresourcetoleration} @@ -319,8 +304,6 @@ imagePolicy: Reference the ImagePolicyWebhook configuration file from the file provided to the API server's command line flag `--admission-control-config-file`: -{{< tabs name="imagepolicywebhook_example1" >}} -{{% tab name="apiserver.config.k8s.io/v1" %}} ```yaml apiVersion: apiserver.config.k8s.io/v1 kind: AdmissionConfiguration @@ -329,24 +312,9 @@ plugins: path: imagepolicyconfig.yaml ... ``` -{{% /tab %}} -{{% tab name="apiserver.k8s.io/v1alpha1" %}} -```yaml -# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 -apiVersion: apiserver.k8s.io/v1alpha1 -kind: AdmissionConfiguration -plugins: -- name: ImagePolicyWebhook - path: imagepolicyconfig.yaml -... -``` -{{% /tab %}} -{{< /tabs >}} Alternatively, you can embed the configuration directly in the file: -{{< tabs name="imagepolicywebhook_example2" >}} -{{% tab name="apiserver.config.k8s.io/v1" %}} ```yaml apiVersion: apiserver.config.k8s.io/v1 kind: AdmissionConfiguration @@ -360,31 +328,14 @@ plugins: retryBackoff: 500 defaultAllow: true ``` -{{% /tab %}} -{{% tab name="apiserver.k8s.io/v1alpha1" %}} -```yaml -# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 -apiVersion: apiserver.k8s.io/v1alpha1 -kind: AdmissionConfiguration -plugins: -- name: ImagePolicyWebhook - configuration: - imagePolicy: - kubeConfigFile: - allowTTL: 50 - denyTTL: 50 - retryBackoff: 500 - defaultAllow: true -``` -{{% /tab %}} -{{< /tabs >}} The ImagePolicyWebhook config file must reference a [kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) formatted file which sets up the connection to the backend. It is required that the backend communicate over TLS. -The kubeconfig file's cluster field must point to the remote service, and the user field must contain the returned authorizer. +The kubeconfig file's `cluster` field must point to the remote service, and the `user` field +must contain the returned authorizer. ```yaml # clusters refers to the remote service. @@ -405,11 +356,21 @@ users: For additional HTTP configuration, refer to the [kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) documentation. -#### Request Payloads +#### Request payloads -When faced with an admission decision, the API Server POSTs a JSON serialized `imagepolicy.k8s.io/v1alpha1` `ImageReview` object describing the action. This object contains fields describing the containers being admitted, as well as any pod annotations that match `*.image-policy.k8s.io/*`. +When faced with an admission decision, the API Server POSTs a JSON serialized +`imagepolicy.k8s.io/v1alpha1` `ImageReview` object describing the action. 
+This object contains fields describing the containers being admitted, as well as
+any pod annotations that match `*.image-policy.k8s.io/*`.

-Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
+{{< note >}}
+The webhook API objects are subject to the same versioning compatibility rules
+as other Kubernetes API objects. Implementers should be aware of looser compatibility
+promises for alpha objects and check the `apiVersion` field of the request to
+ensure correct deserialization.
+Additionally, the API Server must enable the `imagepolicy.k8s.io/v1alpha1` API extensions
+group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
+{{< /note >}}

 An example request body:

@@ -434,7 +395,9 @@ An example request body:
 }
 ```

-The remote service is expected to fill the `ImageReviewStatus` field of the request and respond to either allow or disallow access. The response body's "spec" field is ignored and may be omitted. A permissive response would return:
+The remote service is expected to fill the `ImageReviewStatus` field of the request and
+respond to either allow or disallow access. The response body's `spec` field is ignored and
+may be omitted. A permissive response would return:

 ```json
 {
@@ -459,19 +422,23 @@ To disallow access, the service would return:
 }
 ```

-For further documentation refer to the `imagepolicy.v1alpha1` API objects and `plugin/pkg/admission/imagepolicy/admission.go`.
+For further documentation refer to the
+[`imagepolicy.v1alpha1` API](/docs/reference/config-api/imagepolicy.v1alpha1/).

 #### Extending with Annotations

-All annotations on a Pod that match `*.image-policy.k8s.io/*` are sent to the webhook. Sending annotations allows users who are aware of the image policy backend to send extra information to it, and for different backends implementations to accept different information.
+All annotations on a Pod that match `*.image-policy.k8s.io/*` are sent to the webhook.
+Sending annotations allows users who are aware of the image policy backend to
+send extra information to it, and for different backend implementations to
+accept different information.

 Examples of information you might put here are:

- * request to "break glass" to override a policy, in case of emergency.
- * a ticket number from a ticket system that documents the break-glass request
- * provide a hint to the policy server as to the imageID of the image being provided, to save it a lookup
+* request to "break glass" to override a policy, in case of emergency.
+* a ticket number from a ticket system that documents the break-glass request
+* provide a hint to the policy server as to the imageID of the image being provided, to save it a lookup

-In any case, the annotations are provided by the user and are not validated by Kubernetes in any way. In the future, if an annotation is determined to be widely useful, it may be promoted to a named field of `ImageReviewSpec`.
+In any case, the annotations are provided by the user and are not validated by Kubernetes in any way.
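For instance, building on the examples above, a minimal sketch of a Pod that passes a break-glass hint
to the backend might look like this (the annotation key and ticket value are purely illustrative;
Kubernetes only forwards any key matching `*.image-policy.k8s.io/*`, and the backend decides what it means):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emergency-fix
  annotations:
    # Hypothetical annotation key matching *.image-policy.k8s.io/*;
    # it is copied into the ImageReview object sent to the webhook backend.
    break-glass.image-policy.k8s.io/ticket: "OPS-1234"
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image reference
```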
### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology} @@ -480,14 +447,16 @@ This admission controller denies any pod that defines `AntiAffinity` topology ke ### LimitRanger {#limitranger} -This admission controller will observe the incoming request and ensure that it does not violate any of the constraints -enumerated in the `LimitRange` object in a `Namespace`. If you are using `LimitRange` objects in -your Kubernetes deployment, you MUST use this admission controller to enforce those constraints. LimitRanger can also -be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger -applies a 0.1 CPU requirement to all Pods in the `default` namespace. +This admission controller will observe the incoming request and ensure that it does not violate +any of the constraints enumerated in the `LimitRange` object in a `Namespace`. If you are using +`LimitRange` objects in your Kubernetes deployment, you MUST use this admission controller to +enforce those constraints. LimitRanger can also be used to apply default resource requests to Pods +that don't specify any; currently, the default LimitRanger applies a 0.1 CPU requirement to all +Pods in the `default` namespace. -See the [limitRange design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) -and the [example of Limit Range](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) for more details. +See the [LimitRange API reference](/docs/reference/kubernetes-api/policy-resources/limit-range-v1/) +and the [example of LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) +for more details. ### MutatingAdmissionWebhook {#mutatingadmissionwebhook} @@ -502,21 +471,20 @@ webhooks or validating admission controllers will permit the request to finish. If you disable the MutatingAdmissionWebhook, you must also disable the `MutatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1` -group/version via the `--runtime-config` flag (both are on by default in -versions >= 1.9). +group/version via the `--runtime-config` flag, both are on by default. #### Use caution when authoring and installing mutating webhooks - * Users may be confused when the objects they try to create are different from - what they get back. - * Built in control loops may break when the objects they try to create are - different when read back. - * Setting originally unset fields is less likely to cause problems than - overwriting fields set in the original request. Avoid doing the latter. - * Future changes to control loops for built-in resources or third-party resources - may break webhooks that work well today. Even when the webhook installation API - is finalized, not all possible webhook behaviors will be guaranteed to be supported - indefinitely. +* Users may be confused when the objects they try to create are different from + what they get back. +* Built in control loops may break when the objects they try to create are + different when read back. + * Setting originally unset fields is less likely to cause problems than + overwriting fields set in the original request. Avoid doing the latter. +* Future changes to control loops for built-in resources or third-party resources + may break webhooks that work well today. Even when the webhook installation API + is finalized, not all possible webhook behaviors will be guaranteed to be supported + indefinitely. 
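If you are authoring such a webhook, a minimal registration sketch is shown below for orientation
(every name, the namespace, and the service path are illustrative assumptions, not values defined by
this page):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-defaulter.example.com     # illustrative name
webhooks:
- name: pod-defaulter.example.com     # webhook names must be fully qualified
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  reinvocationPolicy: IfNeeded
  clientConfig:
    service:
      namespace: webhook-system       # illustrative namespace
      name: pod-defaulter             # illustrative Service name
      path: /mutate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```

Keeping the rules narrow and only setting fields that are unset, as recommended above, limits the
failure modes described in this section.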
### NamespaceAutoProvision {#namespaceautoprovision} @@ -533,26 +501,28 @@ If the namespace referenced from a request doesn't exist, the request is rejecte ### NamespaceLifecycle {#namespacelifecycle} -This admission controller enforces that a `Namespace` that is undergoing termination cannot have new objects created in it, -and ensures that requests in a non-existent `Namespace` are rejected. This admission controller also prevents deletion of -three system reserved namespaces `default`, `kube-system`, `kube-public`. +This admission controller enforces that a `Namespace` that is undergoing termination cannot have +new objects created in it, and ensures that requests in a non-existent `Namespace` are rejected. +This admission controller also prevents deletion of three system reserved namespaces `default`, +`kube-system`, `kube-public`. -A `Namespace` deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that -namespace. In order to enforce integrity of that process, we strongly recommend running this admission controller. +A `Namespace` deletion kicks off a sequence of operations that remove all objects (pods, services, +etc.) in that namespace. In order to enforce integrity of that process, we strongly recommend +running this admission controller. ### NodeRestriction {#noderestriction} This admission controller limits the `Node` and `Pod` objects a kubelet can modify. In order to be limited by this admission controller, kubelets must use credentials in the `system:nodes` group, with a username in the form `system:node:`. Such kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node. -In Kubernetes 1.11+, kubelets are not allowed to update or remove taints from their `Node` API object. +kubelets are not allowed to update or remove taints from their `Node` API object. -In Kubernetes 1.13+, the `NodeRestriction` admission plugin prevents kubelets from deleting their `Node` API object, +The `NodeRestriction` admission plugin prevents kubelets from deleting their `Node` API object, and enforces kubelet modification of labels under the `kubernetes.io/` or `k8s.io/` prefixes as follows: * **Prevents** kubelets from adding/removing/updating labels with a `node-restriction.kubernetes.io/` prefix. -This label prefix is reserved for administrators to label their `Node` objects for workload isolation purposes, -and kubelets will not be allowed to modify labels with that prefix. + This label prefix is reserved for administrators to label their `Node` objects for workload isolation purposes, + and kubelets will not be allowed to modify labels with that prefix. * **Allows** kubelets to add/remove/update these labels and label prefixes: * `kubernetes.io/hostname` * `kubernetes.io/arch` @@ -566,9 +536,11 @@ and kubelets will not be allowed to modify labels with that prefix. * `kubelet.kubernetes.io/`-prefixed labels * `node.kubernetes.io/`-prefixed labels -Use of any other labels under the `kubernetes.io` or `k8s.io` prefixes by kubelets is reserved, and may be disallowed or allowed by the `NodeRestriction` admission plugin in the future. +Use of any other labels under the `kubernetes.io` or `k8s.io` prefixes by kubelets is reserved, +and may be disallowed or allowed by the `NodeRestriction` admission plugin in the future. -Future versions may add additional restrictions to ensure kubelets have the minimal set of permissions required to operate correctly. 
+Future versions may add additional restrictions to ensure kubelets have the minimal set of +permissions required to operate correctly. ### OwnerReferencesPermissionEnforcement {#ownerreferencespermissionenforcement} @@ -582,7 +554,8 @@ subresource of the referenced *owner* can change it. {{< feature-state for_k8s_version="v1.24" state="stable" >}} -This admission controller implements additional validations for checking incoming `PersistentVolumeClaim` resize requests. +This admission controller implements additional validations for checking incoming +`PersistentVolumeClaim` resize requests. Enabling the `PersistentVolumeClaimResize` admission controller is recommended. This admission controller prevents resizing of all claims by default unless a claim's `StorageClass` @@ -624,9 +597,10 @@ Starting from 1.11, this admission controller is disabled by default. {{< feature-state for_k8s_version="v1.5" state="alpha" >}} -This admission controller defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration. +This admission controller defaults and limits what node selectors may be used within a namespace +by reading a namespace annotation and a global configuration. -#### Configuration File Format +#### Configuration file format `PodNodeSelector` uses a configuration file to set options for the behavior of the backend. Note that the configuration file format will move to a versioned file in a future release. @@ -639,10 +613,9 @@ podNodeSelectorPluginConfig: namespace2: name-of-node-selector ``` -Reference the `PodNodeSelector` configuration file from the file provided to the API server's command line flag `--admission-control-config-file`: +Reference the `PodNodeSelector` configuration file from the file provided to the API server's +command line flag `--admission-control-config-file`: -{{< tabs name="podnodeselector_example1" >}} -{{% tab name="apiserver.config.k8s.io/v1" %}} ```yaml apiVersion: apiserver.config.k8s.io/v1 kind: AdmissionConfiguration @@ -651,23 +624,11 @@ plugins: path: podnodeselector.yaml ... ``` -{{% /tab %}} -{{% tab name="apiserver.k8s.io/v1alpha1" %}} -```yaml -# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 -apiVersion: apiserver.k8s.io/v1alpha1 -kind: AdmissionConfiguration -plugins: -- name: PodNodeSelector - path: podnodeselector.yaml -... -``` -{{% /tab %}} -{{< /tabs >}} #### Configuration Annotation Format -`PodNodeSelector` uses the annotation key `scheduler.alpha.kubernetes.io/node-selector` to assign node selectors to namespaces. +`PodNodeSelector` uses the annotation key `scheduler.alpha.kubernetes.io/node-selector` to assign +node selectors to namespaces. ```yaml apiVersion: v1 @@ -682,13 +643,14 @@ metadata: This admission controller has the following behavior: -1. If the `Namespace` has an annotation with a key `scheduler.alpha.kubernetes.io/node-selector`, use its value as the -node selector. -2. If the namespace lacks such an annotation, use the `clusterDefaultNodeSelector` defined in the `PodNodeSelector` -plugin configuration file as the node selector. -3. Evaluate the pod's node selector against the namespace node selector for conflicts. Conflicts result in rejection. -4. Evaluate the pod's node selector against the namespace-specific allowed selector defined the plugin configuration file. -Conflicts result in rejection. +1. If the `Namespace` has an annotation with a key `scheduler.alpha.kubernetes.io/node-selector`, + use its value as the node selector. +2. 
If the namespace lacks such an annotation, use the `clusterDefaultNodeSelector` defined in the
+   `PodNodeSelector` plugin configuration file as the node selector.
+3. Evaluate the pod's node selector against the namespace node selector for conflicts. Conflicts
+   result in rejection.
+4. Evaluate the pod's node selector against the namespace-specific allowed selector defined in the
+   plugin configuration file. Conflicts result in rejection.

{{< note >}}
PodNodeSelector allows forcing pods to run on specifically labeled nodes. Also see the PodTolerationRestriction
@@ -721,7 +683,8 @@ for more information.

{{< feature-state for_k8s_version="v1.7" state="alpha" >}}

-The PodTolerationRestriction admission controller verifies any conflict between tolerations of a pod and the tolerations of its namespace.
+The PodTolerationRestriction admission controller verifies any conflict between tolerations of a
+pod and the tolerations of its namespace.
It rejects the pod request if there is a conflict.
It then merges the tolerations annotated on the namespace into the tolerations of the pod.
The resulting tolerations are checked against a list of allowed tolerations annotated to the namespace.
@@ -748,16 +711,18 @@ metadata:

### Priority {#priority}

-The priority admission controller uses the `priorityClassName` field and populates the integer value of the priority.
+The priority admission controller uses the `priorityClassName` field and populates the integer
+value of the priority.
If the priority class is not found, the Pod is rejected.

### ResourceQuota {#resourcequota}

-This admission controller will observe the incoming request and ensure that it does not violate any of the constraints
-enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
-objects in your Kubernetes deployment, you MUST use this admission controller to enforce quota constraints.
+This admission controller will observe the incoming request and ensure that it does not violate
+any of the constraints enumerated in the `ResourceQuota` object in a `Namespace`. If you are
+using `ResourceQuota` objects in your Kubernetes deployment, you MUST use this admission
+controller to enforce quota constraints.

-See the [resourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md)
+See the [ResourceQuota API reference](/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/)
and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for more details.

### RuntimeClass {#runtimeclass}
@@ -793,14 +758,15 @@ pod privileges.

This admission controller implements automation for [serviceAccounts](/docs/tasks/configure-pod-container/configure-service-account/).
-We strongly recommend using this admission controller if you intend to make use of Kubernetes `ServiceAccount` objects.
+We strongly recommend using this admission controller if you intend to make use of Kubernetes
+`ServiceAccount` objects.

### StorageObjectInUseProtection

The `StorageObjectInUseProtection` plugin adds the `kubernetes.io/pvc-protection` or `kubernetes.io/pv-protection`
finalizers to newly created Persistent Volume Claims (PVCs) or Persistent Volumes (PV).
In case a user deletes a PVC or PV the PVC or PV is not removed until the finalizer is removed
-from the PVC or PV by PVC or PV Protection Controller.
Refer to the [Storage Object in Use Protection](/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection) for more detailed information. @@ -809,7 +775,10 @@ for more detailed information. {{< feature-state for_k8s_version="v1.17" state="stable" >}} -This admission controller {{< glossary_tooltip text="taints" term_id="taint" >}} newly created Nodes as `NotReady` and `NoSchedule`. That tainting avoids a race condition that could cause Pods to be scheduled on new Nodes before their taints were updated to accurately reflect their reported conditions. +This admission controller {{< glossary_tooltip text="taints" term_id="taint" >}} newly created +Nodes as `NotReady` and `NoSchedule`. That tainting avoids a race condition that could cause Pods +to be scheduled on new Nodes before their taints were updated to accurately reflect their reported +conditions. ### ValidatingAdmissionWebhook {#validatingadmissionwebhook} diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index b33b2391992cc..1641cb8e54ad1 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -856,6 +856,14 @@ rules: resourceNames: ["06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b"] ``` +{{< note >}} +Impersonating a user or group allows you to perform any action as if you were that user or group; +for that reason, impersonation is not namespace scoped. +If you want to allow impersonation using Kubernetes RBAC, +this requires using a `ClusterRole` and a `ClusterRoleBinding`, +not a `Role` and `RoleBinding`. +{{< /note >}} + ## client-go credential plugins {{< feature-state for_k8s_version="v1.22" state="stable" >}} diff --git a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md index 7e743be63df58..74367d50c98e7 100644 --- a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md +++ b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -15,7 +15,7 @@ creating new clusters or joining new nodes to an existing cluster. It was built to support [kubeadm](/docs/reference/setup-tools/kubeadm/), but can be used in other contexts for users that wish to start clusters without `kubeadm`. It is also built to work, via RBAC policy, with the -[Kubelet TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) system. +[Kubelet TLS Bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) system. @@ -70,7 +70,7 @@ controller on the controller manager. Each valid token is backed by a secret in the `kube-system` namespace. You can find the full design doc -[here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md). +[here](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/bootstrap-discovery.md). Here is what the secret looks like. 
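As a reference while reading the bootstrap-tokens hunk above: a bootstrap token Secret typically has
the shape sketched below. The token ID, secret value, expiration, and group names here are
illustrative placeholders, not values taken from this PR:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # The name must follow the form "bootstrap-token-<token-id>"
  name: bootstrap-token-07401b
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "Example bootstrap token for new nodes (illustrative)."
  # The token is "<token-id>.<token-secret>"; both parts shown here are placeholders
  token-id: "07401b"
  token-secret: "f395accd246ae52d"
  expiration: "2023-03-10T03:22:11Z"
  # Allowed usages for this token
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Extra groups the token authenticates as, in addition to system:bootstrappers
  auth-extra-groups: "system:bootstrappers:worker,system:bootstrappers:ingress"
```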
diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md b/content/en/docs/reference/access-authn-authz/kubelet-authn-authz.md similarity index 100% rename from content/en/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md rename to content/en/docs/reference/access-authn-authz/kubelet-authn-authz.md diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md similarity index 100% rename from content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md rename to content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md diff --git a/content/en/docs/reference/access-authn-authz/node.md b/content/en/docs/reference/access-authn-authz/node.md index 6e7c538eb01c7..bc9863219f7d5 100644 --- a/content/en/docs/reference/access-authn-authz/node.md +++ b/content/en/docs/reference/access-authn-authz/node.md @@ -43,7 +43,7 @@ have the minimal set of permissions required to operate correctly. In order to be authorized by the Node authorizer, kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:`. This group and user name format match the identity created for each kubelet as part of -[kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/). +[kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/). The value of `` **must** match precisely the name of the node as registered by the kubelet. By default, this is the host name as provided by `hostname`, or overridden via the [kubelet option](/docs/reference/command-line-tools-reference/kubelet/) `--hostname-override`. However, when using the `--cloud-provider` kubelet option, the specific hostname may be determined by the cloud provider, ignoring the local `hostname` and the `--hostname-override` option. For specifics about how the kubelet determines the hostname, see the [kubelet options reference](/docs/reference/command-line-tools-reference/kubelet/). diff --git a/content/en/docs/reference/access-authn-authz/psp-to-pod-security-standards.md b/content/en/docs/reference/access-authn-authz/psp-to-pod-security-standards.md index 468579f982a37..6c820a6e99c1c 100644 --- a/content/en/docs/reference/access-authn-authz/psp-to-pod-security-standards.md +++ b/content/en/docs/reference/access-authn-authz/psp-to-pod-security-standards.md @@ -17,7 +17,7 @@ For each applicable parameter, the allowed values for the [Baseline](/docs/concepts/security/pod-security-standards/#baseline) and [Restricted](/docs/concepts/security/pod-security-standards/#restricted) profiles are listed. Anything outside the allowed values for those profiles would fall under the -[Privileged](/docs/concepts/security/pod-security-standards/#priveleged) profile. "No opinion" +[Privileged](/docs/concepts/security/pod-security-standards/#privileged) profile. "No opinion" means all values are allowed under all Pod Security Standards. 
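As background for the PodSecurityPolicy-to-Pod-Security-Standards mapping this page describes, the
Baseline and Restricted profiles are normally applied per namespace through Pod Security admission
labels. A minimal sketch follows; the namespace name is hypothetical and the chosen profiles are
only an example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-app   # hypothetical namespace
  labels:
    # Enforce the Baseline profile for Pods in this namespace
    pod-security.kubernetes.io/enforce: baseline
    # Additionally warn and audit against the stricter Restricted profile
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```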
For a step-by-step migration guide, see diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 57a074a29a47a..d085251e4337f 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -798,7 +798,7 @@ This is commonly used by add-on API servers for unified authentication and autho system:node-bootstrapper None Allows access to the resources required to perform -kubelet TLS bootstrapping. +kubelet TLS bootstrapping. system:node-problem-detector diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 8eb23b3a473eb..9d1e67b3c05ae 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -761,7 +761,7 @@ Each feature gate is designed for enabling/disabling a specific feature: Requires Portworx CSI driver to be installed and configured in the cluster. - `CSINodeInfo`: Enable all logic related to the CSINodeInfo API object in `csi.storage.k8s.io`. - `CSIPersistentVolume`: Enable discovering and mounting volumes provisioned through a - [CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) + [CSI (Container Storage Interface)](https://git.k8s.io/design-proposals-archive/storage/container-storage-interface.md) compatible volume plugin. - `CSIServiceAccountToken`: Enable CSI drivers to receive the pods' service account token that they mount volumes for. See @@ -1086,10 +1086,10 @@ Each feature gate is designed for enabling/disabling a specific feature: [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md) for more details. - `RotateKubeletClientCertificate`: Enable the rotation of the client TLS certificate on the kubelet. - See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) + See [kubelet configuration](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration) for more details. - `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet. - See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) + See [kubelet configuration](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration) for more details. - `RunAsGroup`: Enable control over the primary group ID set on the init processes of containers. diff --git a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md index d391dfc46b2c2..34ced450e0ca1 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md @@ -43,31 +43,31 @@ kube-proxy [flags] ---azure-container-registry-config string +--add_dir_header -

    Path to the file containing Azure container registry configuration information.

    +

    If true, adds the file directory to the header of the log messages

    ---bind-address string     Default: 0.0.0.0 +--alsologtostderr -

    The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces). This parameter is ignored if a config file is specified by --config.

    +

    log to standard error as well as files

    ---bind-address-hard-fail +--bind-address string     Default: 0.0.0.0 -

    If true kube-proxy will treat failure to bind to a port as fatal and exit

    +

    The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces). This parameter is ignored if a config file is specified by --config.

    ---boot-id-file string     Default: "/proc/sys/kernel/random/boot_id" +--bind-address-hard-fail -

    Comma-separated list of files to check for boot-id. Use the first one that exists.

    +

    If true kube-proxy will treat failure to bind to a port as fatal and exit

    @@ -84,20 +84,6 @@ kube-proxy [flags]

    If true cleanup iptables and ipvs rules and exit.

    - ---cloud-provider-gce-l7lb-src-cidrs cidrs     Default: 130.211.0.0/22,35.191.0.0/16 - - -

    CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks

    - - - ---cloud-provider-gce-lb-src-cidrs cidrs     Default: 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 - - -

    CIDRs opened in GCE firewall for L4 LB traffic proxy & health checks

    - - --cluster-cidr string @@ -147,20 +133,6 @@ kube-proxy [flags]

    Idle timeout for established TCP connections (0 to leave as-is)

    - ---default-not-ready-toleration-seconds int     Default: 300 - - -

    Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.

    - - - ---default-unreachable-toleration-seconds int     Default: 300 - - -

    Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.

    - - --detect-local-mode LocalMode @@ -172,7 +144,7 @@ kube-proxy [flags] --feature-gates <comma-separated 'key=True|False' pairs> -

    A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
    APIListChunking=true|false (BETA - default=true)
    APIPriorityAndFairness=true|false (BETA - default=true)
    APIResponseCompression=true|false (BETA - default=true)
    APIServerIdentity=true|false (ALPHA - default=false)
    APIServerTracing=true|false (ALPHA - default=false)
    AllAlpha=true|false (ALPHA - default=false)
    AllBeta=true|false (BETA - default=false)
    AnyVolumeDataSource=true|false (BETA - default=true)
    AppArmor=true|false (BETA - default=true)
    CPUManager=true|false (BETA - default=true)
    CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
    CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
    CPUManagerPolicyOptions=true|false (BETA - default=true)
    CSIInlineVolume=true|false (BETA - default=true)
    CSIMigration=true|false (BETA - default=true)
    CSIMigrationAWS=true|false (BETA - default=true)
    CSIMigrationAzureFile=true|false (BETA - default=true)
    CSIMigrationGCE=true|false (BETA - default=true)
    CSIMigrationPortworx=true|false (ALPHA - default=false)
    CSIMigrationRBD=true|false (ALPHA - default=false)
    CSIMigrationvSphere=true|false (BETA - default=false)
    CSIVolumeHealth=true|false (ALPHA - default=false)
    ContextualLogging=true|false (ALPHA - default=false)
    CronJobTimeZone=true|false (ALPHA - default=false)
    CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
    CustomResourceValidationExpressions=true|false (ALPHA - default=false)
    DaemonSetUpdateSurge=true|false (BETA - default=true)
    DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
    DevicePlugins=true|false (BETA - default=true)
    DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
    DisableCloudProviders=true|false (ALPHA - default=false)
    DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
    DownwardAPIHugePages=true|false (BETA - default=true)
    EndpointSliceTerminatingCondition=true|false (BETA - default=true)
    EphemeralContainers=true|false (BETA - default=true)
    ExpandedDNSConfig=true|false (ALPHA - default=false)
    ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
    GRPCContainerProbe=true|false (BETA - default=true)
    GracefulNodeShutdown=true|false (BETA - default=true)
    GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
    HPAContainerMetrics=true|false (ALPHA - default=false)
    HPAScaleToZero=true|false (ALPHA - default=false)
    HonorPVReclaimPolicy=true|false (ALPHA - default=false)
    IdentifyPodOS=true|false (BETA - default=true)
    InTreePluginAWSUnregister=true|false (ALPHA - default=false)
    InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
    InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
    InTreePluginGCEUnregister=true|false (ALPHA - default=false)
    InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
    InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
    InTreePluginRBDUnregister=true|false (ALPHA - default=false)
    InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
    JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
    JobReadyPods=true|false (BETA - default=true)
    JobTrackingWithFinalizers=true|false (BETA - default=false)
    KubeletCredentialProviders=true|false (BETA - default=true)
    KubeletInUserNamespace=true|false (ALPHA - default=false)
    KubeletPodResources=true|false (BETA - default=true)
    KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
    LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
    LocalStorageCapacityIsolation=true|false (BETA - default=true)
    LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
    LogarithmicScaleDown=true|false (BETA - default=true)
    MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
    MemoryManager=true|false (BETA - default=true)
    MemoryQoS=true|false (ALPHA - default=false)
    MinDomainsInPodTopologySpread=true|false (ALPHA - default=false)
    MixedProtocolLBService=true|false (BETA - default=true)
    NetworkPolicyEndPort=true|false (BETA - default=true)
    NetworkPolicyStatus=true|false (ALPHA - default=false)
    NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
    NodeSwap=true|false (ALPHA - default=false)
    OpenAPIEnums=true|false (BETA - default=true)
    OpenAPIV3=true|false (BETA - default=true)
    PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
    PodDeletionCost=true|false (BETA - default=true)
    PodSecurity=true|false (BETA - default=true)
    ProbeTerminationGracePeriod=true|false (BETA - default=false)
    ProcMountType=true|false (ALPHA - default=false)
    ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
    QOSReserved=true|false (ALPHA - default=false)
    ReadWriteOncePod=true|false (ALPHA - default=false)
    RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
    RemainingItemCount=true|false (BETA - default=true)
    RotateKubeletServerCertificate=true|false (BETA - default=true)
    SeccompDefault=true|false (ALPHA - default=false)
    ServerSideFieldValidation=true|false (ALPHA - default=false)
    ServiceIPStaticSubrange=true|false (ALPHA - default=false)
    ServiceInternalTrafficPolicy=true|false (BETA - default=true)
    SizeMemoryBackedVolumes=true|false (BETA - default=true)
    StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
    StatefulSetMinReadySeconds=true|false (BETA - default=true)
    StorageVersionAPI=true|false (ALPHA - default=false)
    StorageVersionHash=true|false (BETA - default=true)
    TopologyAwareHints=true|false (BETA - default=true)
    TopologyManager=true|false (BETA - default=true)
    VolumeCapacityPriority=true|false (ALPHA - default=false)
    WinDSR=true|false (ALPHA - default=false)
    WinOverlay=true|false (BETA - default=true)
    WindowsHostProcessContainers=true|false (BETA - default=true)This parameter is ignored if a config file is specified by --config.

    +

    A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
    APIListChunking=true|false (BETA - default=true)
    APIPriorityAndFairness=true|false (BETA - default=true)
    APIResponseCompression=true|false (BETA - default=true)
    APIServerIdentity=true|false (ALPHA - default=false)
    APIServerTracing=true|false (ALPHA - default=false)
    AllAlpha=true|false (ALPHA - default=false)
    AllBeta=true|false (BETA - default=false)
    AnyVolumeDataSource=true|false (BETA - default=true)
    AppArmor=true|false (BETA - default=true)
    CPUManager=true|false (BETA - default=true)
    CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
    CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
    CPUManagerPolicyOptions=true|false (BETA - default=true)
    CSIInlineVolume=true|false (BETA - default=true)
    CSIMigration=true|false (BETA - default=true)
    CSIMigrationAWS=true|false (BETA - default=true)
    CSIMigrationAzureFile=true|false (BETA - default=true)
    CSIMigrationGCE=true|false (BETA - default=true)
    CSIMigrationPortworx=true|false (ALPHA - default=false)
    CSIMigrationRBD=true|false (ALPHA - default=false)
    CSIMigrationvSphere=true|false (BETA - default=false)
    CSIVolumeHealth=true|false (ALPHA - default=false)
    CronJobTimeZone=true|false (ALPHA - default=false)
    CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
    CustomResourceValidationExpressions=true|false (ALPHA - default=false)
    DaemonSetUpdateSurge=true|false (BETA - default=true)
    DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
    DevicePlugins=true|false (BETA - default=true)
    DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
    DisableCloudProviders=true|false (ALPHA - default=false)
    DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
    DownwardAPIHugePages=true|false (BETA - default=true)
    EndpointSliceTerminatingCondition=true|false (BETA - default=true)
    EphemeralContainers=true|false (BETA - default=true)
    ExpandedDNSConfig=true|false (ALPHA - default=false)
    ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
    GRPCContainerProbe=true|false (BETA - default=true)
    GracefulNodeShutdown=true|false (BETA - default=true)
    GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
    HPAContainerMetrics=true|false (ALPHA - default=false)
    HPAScaleToZero=true|false (ALPHA - default=false)
    HonorPVReclaimPolicy=true|false (ALPHA - default=false)
    IdentifyPodOS=true|false (BETA - default=true)
    InTreePluginAWSUnregister=true|false (ALPHA - default=false)
    InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
    InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
    InTreePluginGCEUnregister=true|false (ALPHA - default=false)
    InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
    InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
    InTreePluginRBDUnregister=true|false (ALPHA - default=false)
    InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
    JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
    JobReadyPods=true|false (BETA - default=true)
    JobTrackingWithFinalizers=true|false (BETA - default=false)
    KubeletCredentialProviders=true|false (BETA - default=true)
    KubeletInUserNamespace=true|false (ALPHA - default=false)
    KubeletPodResources=true|false (BETA - default=true)
    KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
    LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
    LocalStorageCapacityIsolation=true|false (BETA - default=true)
    LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
    LogarithmicScaleDown=true|false (BETA - default=true)
    MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
    MemoryManager=true|false (BETA - default=true)
    MemoryQoS=true|false (ALPHA - default=false)
    MinDomainsInPodTopologySpread=true|false (ALPHA - default=false)
    MixedProtocolLBService=true|false (BETA - default=true)
    NetworkPolicyEndPort=true|false (BETA - default=true)
    NetworkPolicyStatus=true|false (ALPHA - default=false)
    NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
    NodeSwap=true|false (ALPHA - default=false)
    OpenAPIEnums=true|false (BETA - default=true)
    OpenAPIV3=true|false (BETA - default=true)
    PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
    PodDeletionCost=true|false (BETA - default=true)
    PodSecurity=true|false (BETA - default=true)
    ProbeTerminationGracePeriod=true|false (BETA - default=false)
    ProcMountType=true|false (ALPHA - default=false)
    ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
    QOSReserved=true|false (ALPHA - default=false)
    ReadWriteOncePod=true|false (ALPHA - default=false)
    RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
    RemainingItemCount=true|false (BETA - default=true)
    RotateKubeletServerCertificate=true|false (BETA - default=true)
    SeccompDefault=true|false (ALPHA - default=false)
    ServerSideFieldValidation=true|false (ALPHA - default=false)
    ServiceIPStaticSubrange=true|false (ALPHA - default=false)
    ServiceInternalTrafficPolicy=true|false (BETA - default=true)
    SizeMemoryBackedVolumes=true|false (BETA - default=true)
    StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
    StatefulSetMinReadySeconds=true|false (BETA - default=true)
    StorageVersionAPI=true|false (ALPHA - default=false)
    StorageVersionHash=true|false (BETA - default=true)
    TopologyAwareHints=true|false (BETA - default=true)
    TopologyManager=true|false (BETA - default=true)
    VolumeCapacityPriority=true|false (ALPHA - default=false)
    WinDSR=true|false (ALPHA - default=false)
    WinOverlay=true|false (BETA - default=true)
    WindowsHostProcessContainers=true|false (BETA - default=true)This parameter is ignored if a config file is specified by --config.

    @@ -302,10 +274,38 @@ kube-proxy [flags] ---machine-id-file string     Default: "/etc/machine-id,/var/lib/dbus/machine-id" +--log_backtrace_at <a string in the form 'file:N'>     Default: :0 -

    Comma-separated list of files to check for machine-id. Use the first one that exists.

    +

    when logging hits line file:N, emit a stack trace

    + + + +--log_dir string + + +

    If non-empty, write log files in this directory

    + + + +--log_file string + + +

    If non-empty, use this log file

    + + + +--log_file_max_size uint     Default: 1800 + + +

    Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.

    + + + +--logtostderr     Default: true + + +

    log to standard error instead of files

    @@ -343,6 +343,13 @@ kube-proxy [flags]

    A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses. This parameter is ignored if a config file is specified by --config.

    + +--one_output + + +

    If true, only write logs to their native severity level (vs also writing to each lower severity level)

    + + --oom-score-adj int32     Default: -999 @@ -392,6 +399,27 @@ kube-proxy [flags]

    The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.This parameter is ignored if a config file is specified by --config.

    + +--skip_headers + + +

    If true, avoid header prefixes in the log messages

    + + + +--skip_log_headers + + +

    If true, avoid headers when opening log files

    + + + +--stderrthreshold int     Default: 2 + + +

    logs at or above this threshold go to stderr

    + + --udp-timeout duration     Default: 250ms @@ -399,6 +427,13 @@ kube-proxy [flags]

    How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace

    + +-v, --v int + + +

    number for the log level verbosity

    + + --version version[=true] @@ -406,6 +441,13 @@ kube-proxy [flags]

    Print version information and quit

    + +--vmodule <comma-separated 'pattern=N' settings> + + +

    comma-separated list of pattern=N settings for file-filtered logging

    + + --write-config-to string diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index 555414a6ebcfb..b10e1b70574ea 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -44,70 +44,70 @@ kubelet [flags] --add-dir-header -If true, adds the file directory to the header of the log messages (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +If true, adds the file directory to the header of the log messages (DEPRECATED: will be removed in a future release, see here.) --address string     Default: 0.0.0.0 -The IP address for the Kubelet to serve on (set to 0.0.0.0 or :: for listening in all interfaces and IP families) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The IP address for the Kubelet to serve on (set to 0.0.0.0 or :: for listening in all interfaces and IP families) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --allowed-unsafe-sysctls strings -Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *). Use these at your own risk. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *). Use these at your own risk. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --alsologtostderr -Log to standard error as well as files (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +Log to standard error as well as files (DEPRECATED: will be removed in a future release, see here.) --anonymous-auth     Default: true -Enables anonymous requests to the Kubelet server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enables anonymous requests to the Kubelet server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --authentication-token-webhook -Use the TokenReview API to determine authentication for bearer tokens. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Use the TokenReview API to determine authentication for bearer tokens. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --authentication-token-webhook-cache-ttl duration     Default: 2m0s -The duration to cache responses from the webhook token authenticator. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The duration to cache responses from the webhook token authenticator. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --authorization-mode string     Default: AlwaysAllow -Authorization mode for Kubelet server. Valid options are AlwaysAllow or Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Authorization mode for Kubelet server. Valid options are AlwaysAllow or Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --authorization-webhook-cache-authorized-ttl duration     Default: 5m0s -The duration to cache 'authorized' responses from the webhook authorizer. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The duration to cache 'authorized' responses from the webhook authorizer. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --authorization-webhook-cache-unauthorized-ttl duration     Default: 30s -The duration to cache 'unauthorized' responses from the webhook authorizer. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The duration to cache 'unauthorized' responses from the webhook authorizer. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -135,28 +135,28 @@ kubelet [flags] --cgroup-driver string     Default: cgroupfs -Driver that the kubelet uses to manipulate cgroups on the host. Possible values: cgroupfs, systemd. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)/td> +Driver that the kubelet uses to manipulate cgroups on the host. Possible values: cgroupfs, systemd. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --cgroup-root string     Default: '' -Optional root cgroup to use for pods. This is handled by the container runtime on a best effort basis. 
Default: '', which means use the container runtime default. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Optional root cgroup to use for pods. This is handled by the container runtime on a best effort basis. Default: '', which means use the container runtime default. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --cgroups-per-qos     Default: true -Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --client-ca-file string -If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -177,14 +177,14 @@ kubelet [flags] --cluster-dns strings -Comma-separated list of DNS server IP address. This value is used for containers DNS server in case of Pods with "dnsPolicy=ClusterFirst".
    Note: all DNS servers appearing in the list MUST serve the same set of records otherwise name resolution within the cluster may not work correctly. There is no guarantee as to which DNS server may be contacted for name resolution. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Comma-separated list of DNS server IP address. This value is used for containers DNS server in case of Pods with "dnsPolicy=ClusterFirst".
    Note: all DNS servers appearing in the list MUST serve the same set of records otherwise name resolution within the cluster may not work correctly. There is no guarantee as to which DNS server may be contacted for name resolution. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --cluster-domain string -Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -219,14 +219,14 @@ kubelet [flags] --container-log-max-files int32     Default: 5 -<Warning: Beta feature> Set the maximum number of container log files that can be present for a container. The number must be >= 2. This flag can only be used with --container-runtime=remote. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Beta feature> Set the maximum number of container log files that can be present for a container. The number must be >= 2. This flag can only be used with --container-runtime=remote. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --container-log-max-size string     Default: 10Mi -<Warning: Beta feature> Set the maximum size (e.g. 10Mi) of container log file before it is rotated. This flag can only be used with --container-runtime=remote. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Beta feature> Set the maximum size (e.g. 10Mi) of container log file before it is rotated. This flag can only be used with --container-runtime=remote. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -248,42 +248,42 @@ kubelet [flags] --contention-profiling -Enable lock contention profiling, if profiling is enabled (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enable lock contention profiling, if profiling is enabled (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --cpu-cfs-quota     Default: true -Enable CPU CFS quota enforcement for containers that specify CPU limits (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) 
+Enable CPU CFS quota enforcement for containers that specify CPU limits (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --cpu-cfs-quota-period duration     Default: 100ms -Sets CPU CFS quota period value, cpu.cfs_period_us, defaults to Linux Kernel default. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Sets CPU CFS quota period value, cpu.cfs_period_us, defaults to Linux Kernel default. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --cpu-manager-policy string     Default: none -CPU Manager policy to use. Possible values: none, static. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +CPU Manager policy to use. Possible values: none, static. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --cpu-manager-policy-options mapStringString -Comma-separated list of options to fine-tune the behavior of the selected CPU Manager policy. If not supplied, keep the default behaviour. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Comma-separated list of options to fine-tune the behavior of the selected CPU Manager policy. If not supplied, keep the default behaviour. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --cpu-manager-reconcile-period duration     Default: 10s -<Warning: Alpha feature> CPU Manager reconciliation period. Examples: 10s, or 1m. If not supplied, defaults to node status update frequency. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Alpha feature> CPU Manager reconciliation period. Examples: 10s, or 1m. If not supplied, defaults to node status update frequency. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -304,84 +304,84 @@ kubelet [flags] --enable-controller-attach-detach     Default: true -Enables the Attach/Detach controller to manage attachment/detachment of volumes scheduled to this node, and disables kubelet from executing any attach/detach operations. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enables the Attach/Detach controller to manage attachment/detachment of volumes scheduled to this node, and disables kubelet from executing any attach/detach operations. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) 
--enable-debugging-handlers     Default: true -Enables server endpoints for log collection and local running of containers and commands. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enables server endpoints for log collection and local running of containers and commands. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --enable-server     Default: true -Enable the Kubelet's server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enable the Kubelet's server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --enforce-node-allocatable strings     Default: pods -A comma separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are none, pods, system-reserved, and kube-reserved. If the latter two options are specified, --system-reserved-cgroup and --kube-reserved-cgroup must also be set, respectively. If none is specified, no additional options should be set. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A comma separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are none, pods, system-reserved, and kube-reserved. If the latter two options are specified, --system-reserved-cgroup and --kube-reserved-cgroup must also be set, respectively. If none is specified, no additional options should be set. See here for more details. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --event-burst int32     Default: 10 -Maximum size of a bursty event records, temporarily allows event records to burst to this number, while still not exceeding --event-qps. The number must be >= 0. If 0 will use default burst (10). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Maximum size of a bursty event records, temporarily allows event records to burst to this number, while still not exceeding --event-qps. The number must be >= 0. If 0 will use default burst (10). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --event-qps int32     Default: 5 -QPS to limit event creations. The number must be >= 0. If 0 will use default QPS (5). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +QPS to limit event creations. The number must be >= 0. If 0 will use default QPS (5). 
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --eviction-hard mapStringString     Default: imagefs.available<15%,memory.available<100Mi,nodefs.available<10% -A set of eviction thresholds (e.g. memory.available<1Gi) that if met would trigger a pod eviction. On a Linux node, the default value also includes nodefs.inodesFree<5%. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of eviction thresholds (e.g. memory.available<1Gi) that if met would trigger a pod eviction. On a Linux node, the default value also includes nodefs.inodesFree<5%. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --eviction-max-pod-grace-period int32 - Maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. If negative, defer to pod specified value. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) + Maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. If negative, defer to pod specified value. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --eviction-minimum-reclaim mapStringString -A set of minimum reclaims (e.g. imagefs.available=2Gi) that describes the minimum amount of resource the kubelet will reclaim when performing a pod eviction if that resource is under pressure. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of minimum reclaims (e.g. imagefs.available=2Gi) that describes the minimum amount of resource the kubelet will reclaim when performing a pod eviction if that resource is under pressure. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --eviction-pressure-transition-period duration     Default: 5m0s -Duration for which the kubelet has to wait before transitioning out of an eviction pressure condition. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Duration for which the kubelet has to wait before transitioning out of an eviction pressure condition. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --eviction-soft mapStringString -A set of eviction thresholds (e.g. memory.available<1.5Gi) that if met over a corresponding grace period would trigger a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of eviction thresholds (e.g. 
memory.available<1.5Gi) that if met over a corresponding grace period would trigger a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --eviction-soft-grace-period mapStringString -A set of eviction grace periods (e.g. memory.available=1m30s) that correspond to how long a soft eviction threshold must hold before triggering a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of eviction grace periods (e.g. memory.available=1m30s) that correspond to how long a soft eviction threshold must hold before triggering a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -395,7 +395,7 @@ kubelet [flags] --experimental-allocatable-ignore-eviction     Default: false -When set to true, hard eviction thresholds will be ignored while calculating node allocatable. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. (DEPRECATED: will be removed in 1.24 or later) +When set to true, hard eviction thresholds will be ignored while calculating node allocatable. See here for more details. (DEPRECATED: will be removed in 1.24 or later) @@ -409,14 +409,14 @@ kubelet [flags] --experimental-kernel-memcg-notification -Use kernelMemcgNotification configuration, this flag will be removed in 1.24 or later. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Use kernelMemcgNotification configuration, this flag will be removed in 1.24 or later. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --experimental-log-sanitization bool -[Experimental] When enabled, prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +[Experimental] When enabled, prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -430,7 +430,7 @@ kubelet [flags] --fail-swap-on     Default: true -Makes the Kubelet fail to start if swap is enabled on the node. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Makes the Kubelet fail to start if swap is enabled on the node. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) 
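Most of the flags in this table are deprecated in favor of the kubelet configuration file referenced
in each description. As a rough sketch of how the equivalent settings are expressed there (field
names follow the KubeletConfiguration v1beta1 API; the threshold values are illustrative, not
recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# equivalent of --fail-swap-on
failSwapOn: true
# equivalent of --eviction-hard (illustrative thresholds)
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  imagefs.available: "15%"
# equivalent of --eviction-soft and --eviction-soft-grace-period
evictionSoft:
  memory.available: "1.5Gi"
evictionSoftGracePeriod:
  memory.available: "1m30s"
# equivalent of --eviction-max-pod-grace-period (seconds)
evictionMaxPodGracePeriod: 60
```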
@@ -548,35 +548,35 @@ WinDSR=true|false (ALPHA - default=false)
    WinOverlay=true|false (BETA - default=true)
    WindowsHostProcessContainers=true|false (BETA - default=true)
    csiMigrationRBD=true|false (ALPHA - default=false)
    -(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --file-check-frequency duration     Default: 20s -Duration between checking config files for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Duration between checking config files for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --hairpin-mode string     Default: promiscuous-bridge -How should the kubelet setup hairpin NAT. This allows endpoints of a Service to load balance back to themselves if they should try to access their own Service. Valid values are promiscuous-bridge, hairpin-veth and none. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +How should the kubelet setup hairpin NAT. This allows endpoints of a Service to load balance back to themselves if they should try to access their own Service. Valid values are promiscuous-bridge, hairpin-veth and none. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --healthz-bind-address string     Default: 127.0.0.1 -The IP address for the healthz server to serve on (set to 0.0.0.0 or :: for listening in all interfaces and IP families). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The IP address for the healthz server to serve on (set to 0.0.0.0 or :: for listening in all interfaces and IP families). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --healthz-port int32     Default: 10248 -The port of the localhost healthz endpoint (set to 0 to disable). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The port of the localhost healthz endpoint (set to 0 to disable). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -597,7 +597,7 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --http-check-frequency duration     Default: 20s -Duration between checking HTTP for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Duration between checking HTTP for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -618,14 +618,14 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --image-gc-high-threshold int32     Default: 85 -The percent of disk usage after which image garbage collection is always run. Values must be within the range [0, 100], To disable image garbage collection, set to 100. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The percent of disk usage after which image garbage collection is always run. Values must be within the range [0, 100], To disable image garbage collection, set to 100. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --image-gc-low-threshold int32     Default: 80 -The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and should not be larger than that of --image-gc-high-threshold. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and should not be larger than that of --image-gc-high-threshold. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -646,14 +646,14 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --iptables-drop-bit int32     Default: 15 -The bit of the fwmark space to mark packets for dropping. Must be within the range [0, 31]. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The bit of the fwmark space to mark packets for dropping. Must be within the range [0, 31]. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --iptables-masquerade-bit int32     Default: 14 -The bit of the fwmark space to mark packets for SNAT. Must be within the range [0, 31]. Please match this parameter with corresponding parameter in kube-proxy. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The bit of the fwmark space to mark packets for SNAT. Must be within the range [0, 31]. Please match this parameter with corresponding parameter in kube-proxy. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -667,42 +667,42 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --kernel-memcg-notification -If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --kube-api-burst int32     Default: 10 -Burst to use while talking with kubernetes API server. The number must be >= 0. If 0 will use default burst (10). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Burst to use while talking with kubernetes API server. The number must be >= 0. If 0 will use default burst (10). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --kube-api-content-type string     Default: application/vnd.kubernetes.protobuf -Content type of requests sent to apiserver. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Content type of requests sent to apiserver. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --kube-api-qps int32     Default: 5 -QPS to use while talking with kubernetes API server. The number must be >= 0. If 0 will use default QPS (5). Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +QPS to use while talking with kubernetes API server. The number must be >= 0. If 0 will use default QPS (5). Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --kube-reserved mapStringString     Default: <None> -A set of <resource name>=<resource quantity> (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100') pairs that describe resources reserved for kubernetes system components. Currently cpu, memory and local ephemeral-storage for root file system are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of <resource name>=<resource quantity> (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100') pairs that describe resources reserved for kubernetes system components. Currently cpu, memory and local ephemeral-storage for root file system are supported. See here for more detail. 
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --kube-reserved-cgroup string     Default: '' -Absolute name of the top level cgroup that is used to manage kubernetes components for which compute resources were reserved via --kube-reserved flag. Ex. /kube-reserved. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Absolute name of the top level cgroup that is used to manage kubernetes components for which compute resources were reserved via --kube-reserved flag. Ex. /kube-reserved. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -716,7 +716,7 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --kubelet-cgroups string -Optional absolute name of cgroups to create and run the Kubelet in. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Optional absolute name of cgroups to create and run the Kubelet in. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -730,28 +730,28 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --log-backtrace-at <A string of format 'file:line'>     Default: ":0" -When logging hits line :, emit a stack trace. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +When logging hits line :, emit a stack trace. (DEPRECATED: will be removed in a future release, see here.) --log-dir string -If non-empty, write log files in this directory. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +If non-empty, write log files in this directory. (DEPRECATED: will be removed in a future release, see here.) --log-file string -If non-empty, use this log file. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +If non-empty, use this log file. (DEPRECATED: will be removed in a future release, see here.) --log-file-max-size uint     Default: 1800 -Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (DEPRECATED: will be removed in a future release, see here.) @@ -765,49 +765,49 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --log-json-info-buffer-size string     Default: '0' -[Experimental] In JSON format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +[Experimental] In JSON format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --log-json-split-stream -[Experimental] In JSON format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +[Experimental] In JSON format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --logging-format string     Default: text -Sets the log format. Permitted formats: text, json.
    Non-default formats don't honor these flags: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log-dir, --log-file, --log-file-max-size, --logtostderr, --skip_headers, --skip_log_headers, --stderrthreshold, --log-flush-frequency.
    Non-default choices are currently alpha and subject to change without warning. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Sets the log format. Permitted formats: text, json.
    Non-default formats don't honor these flags: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log-dir, --log-file, --log-file-max-size, --logtostderr, --skip_headers, --skip_log_headers, --stderrthreshold, --log-flush-frequency.
    Non-default choices are currently alpha and subject to change without warning. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --logtostderr     Default: true -log to standard error instead of files. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +log to standard error instead of files. (DEPRECATED: will be removed in a future release, see here.) --make-iptables-util-chains     Default: true -If true, kubelet will ensure iptables utility rules are present on host. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +If true, kubelet will ensure iptables utility rules are present on host. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --manifest-url string -URL for accessing additional Pod specifications to run (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +URL for accessing additional Pod specifications to run (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --manifest-url-header string -Comma-separated list of HTTP headers to use when accessing the URL provided to --manifest-url. Multiple headers with the same name will be added in the same order provided. This flag can be repeatedly invoked. For example: --manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful' (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Comma-separated list of HTTP headers to use when accessing the URL provided to --manifest-url. Multiple headers with the same name will be added in the same order provided. This flag can be repeatedly invoked. For example: --manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful' (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -821,14 +821,14 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --max-open-files int     Default: 1000000 -Number of files that can be opened by Kubelet process. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Number of files that can be opened by Kubelet process. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --max-pods int32     Default: 110 -Number of Pods that can run on this Kubelet. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Number of Pods that can run on this Kubelet. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -849,7 +849,7 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --memory-manager-policy string     Default: None -Memory Manager policy to use. Possible values: 'None', 'Static'. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Memory Manager policy to use. Possible values: 'None', 'Static'. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -863,7 +863,7 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --minimum-image-ttl-duration duration     Default: 2m0s -Minimum age for an unused image before it is garbage collected. Examples: '300ms', '10s' or '2h45m'. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Minimum age for an unused image before it is garbage collected. Examples: '300ms', '10s' or '2h45m'. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -898,14 +898,14 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --node-status-max-images int32     Default: 50 -The maximum number of images to report in node.status.images. If -1 is specified, no cap will be applied. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The maximum number of images to report in node.status.images. If -1 is specified, no cap will be applied. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --node-status-update-frequency duration     Default: 10s -Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in Node controller. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in Node controller. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -919,21 +919,21 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --one-output -If true, only write logs to their native severity level (vs also writing to each lower severity level). (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +If true, only write logs to their native severity level (vs also writing to each lower severity level). (DEPRECATED: will be removed in a future release, see here.) --oom-score-adj int32     Default: -999 -The oom-score-adj value for kubelet process. Values must be within the range [-1000, 1000]. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The oom-score-adj value for kubelet process. Values must be within the range [-1000, 1000]. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --pod-cidr string -The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master. For IPv6, the maximum number of IP's allocated is 65536 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master. For IPv6, the maximum number of IP's allocated is 65536 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -947,56 +947,56 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --pod-manifest-path string -Path to the directory containing static pod files to run, or the path to a single static pod file. Files starting with dots will be ignored. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Path to the directory containing static pod files to run, or the path to a single static pod file. Files starting with dots will be ignored. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --pod-max-pids int     Default: -1 -Set the maximum number of processes per pod. If -1, the kubelet defaults to the node allocatable PID capacity. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Set the maximum number of processes per pod. If -1, the kubelet defaults to the node allocatable PID capacity. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --pods-per-core int32 -Number of Pods per core that can run on this kubelet. The total number of pods on this kubelet cannot exceed --max-pods, so --max-pods will be used if this calculation results in a larger number of pods allowed on the kubelet. A value of 0 disables this limit. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Number of Pods per core that can run on this kubelet. The total number of pods on this kubelet cannot exceed --max-pods, so --max-pods will be used if this calculation results in a larger number of pods allowed on the kubelet. A value of 0 disables this limit. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --port int32     Default: 10250 -The port for the kubelet to serve on. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The port for the kubelet to serve on. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's --config flag. See kubelet-config-file for more information.) --protect-kernel-defaults - Default kubelet behaviour for kernel tuning. If set, kubelet errors if any of kernel tunables is different than kubelet defaults. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) + Default kubelet behaviour for kernel tuning. If set, kubelet errors if any of kernel tunables is different than kubelet defaults. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --provider-id string -Unique identifier for identifying the node in a machine database, i.e cloud provider. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Unique identifier for identifying the node in a machine database, i.e cloud provider. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --qos-reserved mapStringString -<Warning: Alpha feature> A set of <resource name>=<percentage> (e.g. memory=50%) pairs that describe how pod resource requests are reserved at the QoS level. Currently only memory is supported. Requires the QOSReserved feature gate to be enabled. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Alpha feature> A set of <resource name>=<percentage> (e.g. memory=50%) pairs that describe how pod resource requests are reserved at the QoS level. Currently only memory is supported. Requires the QOSReserved feature gate to be enabled. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --read-only-port int32     Default: 10255 -The read-only port for the kubelet to serve on with no authentication/authorization (set to 0 to disable). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The read-only port for the kubelet to serve on with no authentication/authorization (set to 0 to disable). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -1010,7 +1010,7 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --register-node     Default: true -Register the node with the API server. If --kubeconfig is not provided, this flag is irrelevant, as the Kubelet won't have an API server to register with. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Register the node with the API server. If --kubeconfig is not provided, this flag is irrelevant, as the Kubelet won't have an API server to register with. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -1024,42 +1024,42 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --register-with-taints mapStringString -Register the node with the given list of taints (comma separated <key>=<value>:<effect>). No-op if --register-node is false. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Register the node with the given list of taints (comma separated <key>=<value>:<effect>). No-op if --register-node is false. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --registry-burst int32     Default: 10 -Maximum size of a bursty pulls, temporarily allows pulls to burst to this number, while still not exceeding --registry-qps. Only used if --registry-qps is greater than 0. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Maximum size of a bursty pulls, temporarily allows pulls to burst to this number, while still not exceeding --registry-qps. Only used if --registry-qps is greater than 0. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --registry-qps int32     Default: 5 -If > 0, limit registry pull QPS to this value. If 0, unlimited. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +If > 0, limit registry pull QPS to this value. If 0, unlimited. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --reserved-cpus string -A comma-separated list of CPUs or CPU ranges that are reserved for system and kubernetes usage. This specific list will supersede cpu counts in --system-reserved and --kube-reserved. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A comma-separated list of CPUs or CPU ranges that are reserved for system and kubernetes usage. This specific list will supersede cpu counts in --system-reserved and --kube-reserved. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --reserved-memory string -A comma-separated list of memory reservations for NUMA nodes. (e.g. --reserved-memory 0:memory=1Gi,hugepages-1M=2Gi --reserved-memory 1:memory=2Gi). The total sum for each memory type should be equal to the sum of --kube-reserved, --system-reserved and --eviction-threshold. See https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/#reserved-memory-flag for more details. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A comma-separated list of memory reservations for NUMA nodes. (e.g. --reserved-memory 0:memory=1Gi,hugepages-1M=2Gi --reserved-memory 1:memory=2Gi). The total sum for each memory type should be equal to the sum of --kube-reserved, --system-reserved and --eviction-threshold. 
See here for more details. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --resolv-conf string     Default: /etc/resolv.conf -Resolver configuration file used as the basis for the container DNS resolution configuration. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Resolver configuration file used as the basis for the container DNS resolution configuration. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -1073,21 +1073,21 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --rotate-certificates -<Warning: Beta feature> Auto rotate the kubelet client certificates by requesting new certificates from the kube-apiserver when the certificate expiration approaches. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Beta feature> Auto rotate the kubelet client certificates by requesting new certificates from the kube-apiserver when the certificate expiration approaches. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --rotate-server-certificates -Auto-request and rotate the kubelet serving certificates by requesting new certificates from the kube-apiserver when the certificate expiration approaches. Requires the RotateKubeletServerCertificate feature gate to be enabled, and approval of the submitted CertificateSigningRequest objects. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Auto-request and rotate the kubelet serving certificates by requesting new certificates from the kube-apiserver when the certificate expiration approaches. Requires the RotateKubeletServerCertificate feature gate to be enabled, and approval of the submitted CertificateSigningRequest objects. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --runonce -If true, exit after spawning pods from local manifests or remote urls. Exclusive with --enable-server (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +If true, exit after spawning pods from local manifests or remote urls. Exclusive with --enable-server (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -1101,7 +1101,7 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --runtime-request-timeout duration     Default: 2m0s -Timeout of all runtime requests except long running request - pull, logs, exec and attach. When timeout exceeded, kubelet will cancel the request, throw out an error and retry later. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Timeout of all runtime requests except long running request - pull, logs, exec and attach. When timeout exceeded, kubelet will cancel the request, throw out an error and retry later. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -1115,70 +1115,70 @@ csiMigrationRBD=true|false (ALPHA - default=false)
    --serialize-image-pulls     Default: true -Pull images one at a time. We recommend *not* changing the default value on nodes that run docker daemon with version < 1.9 or an aufs storage backend. Issue #10959 has more details. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Pull images one at a time. We recommend *not* changing the default value on nodes that run docker daemon with version < 1.9 or an aufs storage backend. Issue #10959 has more details. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --skip-headers -If true, avoid header prefixes in the log messages. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +If true, avoid header prefixes in the log messages. (DEPRECATED: will be removed in a future release, see here.) --skip-log-headers -If true, avoid headers when opening log files. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +If true, avoid headers when opening log files. (DEPRECATED: will be removed in a future release, see here.) --stderrthreshold int     Default: 2 -logs at or above this threshold go to stderr. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) +logs at or above this threshold go to stderr. (DEPRECATED: will be removed in a future release, see here.) --streaming-connection-idle-timeout duration     Default: 4h0m0s -Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: 5m. Note: All connections to the kubelet server have a maximum duration of 4 hours. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: 5m. Note: All connections to the kubelet server have a maximum duration of 4 hours. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --sync-frequency duration     Default: 1m0s -Max period between synchronizing running containers and config. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Max period between synchronizing running containers and config. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --system-cgroups string -Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under '/'. Empty for no container. Rolling back the flag requires a reboot. 
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under '/'. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --system-reserved mapStringString     Default: <none> -A set of <resource name>=<resource quantity> (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100') pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of <resource name>=<resource quantity> (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100') pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See here for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --system-reserved-cgroup string     Default: '' -Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via --system-reserved flag. Ex. /system-reserved. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via --system-reserved flag. Ex. /system-reserved. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --tls-cert-file string -File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) 
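The same pattern applies to the resource-reservation and TLS serving flags above (`--system-reserved`, `--kube-reserved`, `--tls-cert-file`, and related). A minimal sketch of the corresponding `KubeletConfiguration` fields, with placeholder quantities, cgroup names, and certificate paths:

```yaml
# Illustrative kubelet config file; quantities, cgroup names and paths are placeholders.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:                  # replaces --system-reserved
  cpu: "500m"
  memory: "1Gi"
kubeReserved:                    # replaces --kube-reserved
  cpu: "200m"
  memory: "500Mi"
systemReservedCgroup: "/system-reserved"   # replaces --system-reserved-cgroup
kubeReservedCgroup: "/kube-reserved"       # replaces --kube-reserved-cgroup
serializeImagePulls: true        # replaces --serialize-image-pulls
tlsCertFile: "/var/lib/kubelet/pki/kubelet.crt"        # replaces --tls-cert-file
tlsPrivateKeyFile: "/var/lib/kubelet/pki/kubelet.key"  # replaces --tls-private-key-file
```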
@@ -1190,21 +1190,21 @@ Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384
    Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. -(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --tls-min-version string -Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --tls-private-key-file string -File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -1212,14 +1212,14 @@ TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_E --topology-manager-policy string     Default: 'none' -Topology Manager policy to use. Possible values: 'none', 'best-effort', 'restricted', 'single-numa-node'. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Topology Manager policy to use. Possible values: 'none', 'best-effort', 'restricted', 'single-numa-node'. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --topology-manager-scope string     Default: container -Scope to which topology hints applied. Topology Manager collects hints from Hint Providers and applies them to defined scope to ensure the pod admission. Possible values: 'container', 'pod'. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Scope to which topology hints applied. Topology Manager collects hints from Hint Providers and applies them to defined scope to ensure the pod admission. Possible values: 'container', 'pod'. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -1247,14 +1247,14 @@ TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_E --volume-plugin-dir string     Default: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ -The full path of the directory in which to search for additional third party volume plugins. 
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The full path of the directory in which to search for additional third party volume plugins. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --volume-stats-agg-period duration     Default: 1m0s -Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to 0. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to 0. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) diff --git a/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md b/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md new file mode 100644 index 0000000000000..2189c4910d277 --- /dev/null +++ b/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md @@ -0,0 +1,121 @@ +--- +title: Event Rate Limit Configuration (v1alpha1) +content_type: tool-reference +package: eventratelimit.admission.k8s.io/v1alpha1 +auto_generated: true +--- + + +## Resource Types + + +- [Configuration](#eventratelimit-admission-k8s-io-v1alpha1-Configuration) + + + +## `Configuration` {#eventratelimit-admission-k8s-io-v1alpha1-Configuration} + + + +

    Configuration provides configuration for the EventRateLimit admission +controller.

    + + + + + + + + + + + + + + +
    FieldDescription
    apiVersion
    string
    eventratelimit.admission.k8s.io/v1alpha1
    kind
    string
    Configuration
    limits [Required]
    +[]Limit +
    +

    limits are the limits to place on event queries received. +Limits can be placed on events received server-wide, per namespace, +per user, and per source+object. +At least one limit is required.

    +
    + +## `Limit` {#eventratelimit-admission-k8s-io-v1alpha1-Limit} + + +**Appears in:** + +- [Configuration](#eventratelimit-admission-k8s-io-v1alpha1-Configuration) + + +

    Limit is the configuration for a particular limit type

    + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    type [Required]
    +LimitType +
    +

    type is the type of limit to which this configuration applies

    +
    qps [Required]
    +int32 +
    +

    qps is the number of event queries per second that are allowed for this +type of limit. The qps and burst fields are used together to determine if +a particular event query is accepted. The qps determines how many queries +are accepted once the burst amount of queries has been exhausted.

    +
    burst [Required]
    +int32 +
    +

    burst is the burst number of event queries that are allowed for this type +of limit. The qps and burst fields are used together to determine if a +particular event query is accepted. The burst determines the maximum size +of the allowance granted for a particular bucket. For example, if the burst +is 10 and the qps is 3, then the admission control will accept 10 queries +before blocking any queries. Every second, 3 more queries will be allowed. +If some of that allowance is not used, then it will roll over to the next +second, until the maximum allowance of 10 is reached.

    +
    cacheSize
    +int32 +
    +

    cacheSize is the size of the LRU cache for this type of limit. If a bucket +is evicted from the cache, then the allowance for that bucket is reset. If +more queries are later received for an evicted bucket, then that bucket +will re-enter the cache with a clean slate, giving that bucket a full +allowance of burst queries.

    +

    The default cache size is 4096.

    +

    If limitType is 'server', then cacheSize is ignored.

    +
    + +## `LimitType` {#eventratelimit-admission-k8s-io-v1alpha1-LimitType} + +(Alias of `string`) + +**Appears in:** + +- [Limit](#eventratelimit-admission-k8s-io-v1alpha1-Limit) + + +

    LimitType is the type of the limit (e.g., per-namespace)
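As a rough illustration of how the `type`, `qps`, `burst`, and `cacheSize` fields fit together, a possible configuration for the EventRateLimit admission plugin might look like the following; the limit types and numbers are examples, not defaults:

```yaml
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  # Per-namespace buckets: each namespace may burst to 100 event queries,
  # refilled at 50 queries per second; up to 2000 namespace buckets are cached.
  - type: Namespace
    qps: 50
    burst: 100
    cacheSize: 2000
  # A single server-wide bucket; cacheSize is ignored for the Server type.
  - type: Server
    qps: 10
    burst: 50
```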

    + + + + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md new file mode 100644 index 0000000000000..f420623559cfa --- /dev/null +++ b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md @@ -0,0 +1,168 @@ +--- +title: Image Policy API (v1alpha1) +content_type: tool-reference +package: imagepolicy.k8s.io/v1alpha1 +auto_generated: true +--- + + +## Resource Types + + +- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview) + + + +## `ImageReview` {#imagepolicy-k8s-io-v1alpha1-ImageReview} + + + +

    ImageReview checks if the set of images in a pod are allowed.

    + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    apiVersion
    string
    imagepolicy.k8s.io/v1alpha1
    kind
    string
    ImageReview
    metadata
    +meta/v1.ObjectMeta +
    +

    Standard object's metadata. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

    +Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec [Required]
    +ImageReviewSpec +
    +

    Spec holds information about the pod being evaluated

    +
    status
    +ImageReviewStatus +
    +

    Status is filled in by the backend and indicates whether the pod should be allowed.

    +
    + +## `ImageReviewContainerSpec` {#imagepolicy-k8s-io-v1alpha1-ImageReviewContainerSpec} + + +**Appears in:** + +- [ImageReviewSpec](#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec) + + +

    ImageReviewContainerSpec is a description of a container within the pod creation request.

    + + + + + + + + + + + +
    FieldDescription
    image
    +string +
    +

    This can be in the form image:tag or image@SHA:012345679abcdef.

    +
    + +## `ImageReviewSpec` {#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec} + + +**Appears in:** + +- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview) + + +

    ImageReviewSpec is a description of the pod creation request.

    + + + + + + + + + + + + + + + + + +
    FieldDescription
    containers
    +[]ImageReviewContainerSpec +
    +

    Containers is a list of a subset of the information in each container of the Pod being created.

    +
    annotations
    +map[string]string +
    +

    Annotations is a list of key-value pairs extracted from the Pod's annotations. +It only includes keys which match the pattern *.image-policy.k8s.io/*. +It is up to each webhook backend to determine how to interpret these annotations, if at all.

    +
    namespace
    +string +
    +

    Namespace is the namespace the pod is being created in.

    +
    + +## `ImageReviewStatus` {#imagepolicy-k8s-io-v1alpha1-ImageReviewStatus} + + +**Appears in:** + +- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview) + + +

    ImageReviewStatus is the result of the review for the pod creation request.

    + + + + + + + + + + + + + + + + + +
    FieldDescription
    allowed [Required]
    +bool +
    +

    Allowed indicates that all images were allowed to be run.

    +
    reason
    +string +
    +

    Reason should be empty unless Allowed is false in which case it +may contain a short description of what is wrong. Kubernetes +may truncate excessively long errors when displaying to the user.

    +
    auditAnnotations
    +map[string]string +
    +

    AuditAnnotations will be added to the attributes object of the +admission controller request using 'AddAnnotation'. The keys should +be prefix-less (i.e., the admission controller will add an +appropriate prefix).

    +
    + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md index 8874cf6a36342..377ac021b6792 100644 --- a/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md @@ -143,7 +143,7 @@ configuration types to be used during a kubeadm init run.

    criSocket: "/var/run/dockershim.sock" taints: - key: "kubeadmNode" - value: "master" + value: "someValue" effect: "NoSchedule" kubeletExtraArgs: v: 4 @@ -348,7 +348,7 @@ could be used for assigning a stable DNS to the control plane. string -

    mageRepository sets the container registry to pull images from. +

    imageRepository sets the container registry to pull images from. If empty, k8s.gcr.io will be used by default; in case of kubernetes version is a CI build (kubernetes version starts with ci/) gcr.io/k8s-staging-ci-images is used as a default for control plane components and for kube-proxy, while @@ -876,7 +876,9 @@ cluster information.

    tlsBootstrapToken is a token used for TLS bootstrapping. -If bootstrapToken is set, this field is defaulted to .bootstrapToken.token, but can be overridden. If file` is set, this field must be set in case the KubeConfigFile does not +If bootstrapToken is set, this field is defaulted to .bootstrapToken.token, +but can be overridden. +If file is set, this field must be set in case the KubeConfigFile does not contain any other authentication information.
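As a rough sketch, a `JoinConfiguration` that uses file-based discovery together with an explicit `tlsBootstrapToken` (needed when the discovery kubeconfig carries no credentials of its own) might look like this; the token value and file path are placeholders:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  # discovery kubeconfig that carries cluster information but no authentication data
  file:
    kubeConfigPath: /etc/kubernetes/discovery.conf
  # therefore the TLS bootstrap token must be provided explicitly
  tlsBootstrapToken: abcdef.0123456789abcdef
```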

    @@ -1080,7 +1082,7 @@ originated from the Kubernetes/Kubernetes release process

    string -

    mageRepository sets the container registry to pull images from. +

    imageRepository sets the container registry to pull images from. If not set, the imageRepository defined in ClusterConfiguration will be used.

    @@ -1267,7 +1269,7 @@ Defaults to the hostname of the node if not provided.

    string -

    `criSocket is used to retrieve container runtime information. This information will +

    criSocket is used to retrieve container runtime information. This information will be annotated to the Node API object, for later re-use.

    @@ -1276,9 +1278,9 @@ be annotated to the Node API object, for later re-use.

    taints specifies the taints the Node API object should be registered with. -If this field is unset, i.e. nil, in the kubeadm init process it will be defaulted to -'node-role.kubernetes.io/master=""'. If you don't want to taint your control-plane node, -set this field to an empty list, i.e. taints: [] in the YAML file. This field is +If this field is unset, i.e. nil, in the kubeadm init process it will be defaulted with +a control-plane taint for control-plane nodes. If you don't want to taint your control-plane +node, set this field to an empty list, i.e. taints: [], in the YAML file. This field is solely used for Node registration.
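For example, to register a control-plane node without the default control-plane taint, the field can be set to an explicitly empty list; a sketch against the v1beta2 API:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  # an explicitly empty list prevents kubeadm from applying the default taint;
  # leaving the field unset (nil) would apply the control-plane taint instead
  taints: []
```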

    diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md index ca7ef7c287502..75fc7c1ecfdfe 100644 --- a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md @@ -152,7 +152,7 @@ configuration types to be used during a kubeadm init run.

    criSocket: "/var/run/dockershim.sock" taints: - key: "kubeadmNode" - value: "master" + value: "someValue" effect: "NoSchedule" kubeletExtraArgs: v: 4 @@ -1160,9 +1160,9 @@ This information will be annotated to the Node API object, for later re-use

taints specifies the taints the Node API object should be registered with. -If this field is unset, i.e. nil, in the kubeadm init process it will be defaulted to -taints: ["node-role.kubernetes.io/master:""]. -If you don't want to taint your control-plane node, set this field to an empty slice, +If this field is unset, i.e. nil, in the kubeadm init process it will be defaulted +with a control-plane taint for control-plane nodes. +If you don't want to taint your control-plane node, set this field to an empty list, i.e. taints: [] in the YAML file. This field is solely used for Node registration.
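Conversely, the same field can register the node with a custom taint; a sketch against the v1beta3 API, with an illustrative key, value, and effect:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  taints:
  - key: "dedicated"        # illustrative key
    value: "control-plane"  # illustrative value
    effect: "NoSchedule"
```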

    diff --git a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md index ed68500044176..42755470cbb33 100644 --- a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md +++ b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md @@ -943,7 +943,7 @@ Default: ""

    systemReservedCgroup helps the kubelet identify absolute name of top level CGroup used to enforce systemReserved compute resource reservation for OS system daemons. -Refer to Node Allocatable +Refer to Node Allocatable doc for more information. Default: ""

    @@ -954,7 +954,7 @@ Default: ""

    kubeReservedCgroup helps the kubelet identify absolute name of top level CGroup used to enforce KubeReserved compute resource reservation for Kubernetes node system daemons. -Refer to Node Allocatable +Refer to Node Allocatable doc for more information. Default: ""

    @@ -970,7 +970,7 @@ If none is specified, no other options may be specified. When system-reserved is in the list, systemReservedCgroup must be specified. When kube-reserved is in the list, kubeReservedCgroup must be specified. This field is supported only when cgroupsPerQOS is set to true. -Refer to Node Allocatable +Refer to Node Allocatable for more information. Default: ["pods"]
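The three fields touched by the hunks above (`systemReservedCgroup`, `kubeReservedCgroup`, and `enforceNodeAllocatable`) typically appear together; a sketch of a KubeletConfiguration, where the cgroup names and reservation sizes are illustrative and must match cgroups that already exist on the node:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupsPerQOS: true
enforceNodeAllocatable:
- pods
- system-reserved                    # requires systemReservedCgroup below
- kube-reserved                      # requires kubeReservedCgroup below
systemReservedCgroup: /system.slice  # illustrative; must exist on the node
kubeReservedCgroup: /kubelet.slice   # illustrative; must exist on the node
systemReserved:
  cpu: 500m
  memory: 1Gi
kubeReserved:
  cpu: 500m
  memory: 1Gi
```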

    diff --git a/content/en/docs/reference/glossary/downward-api.md b/content/en/docs/reference/glossary/downward-api.md new file mode 100644 index 0000000000000..a3d9a46336174 --- /dev/null +++ b/content/en/docs/reference/glossary/downward-api.md @@ -0,0 +1,28 @@ +--- +title: Downward API +id: downward-api +date: 2022-03-21 +short_description: > + A mechanism to expose Pod and container field values to code running in a container. +aka: +full_link: /docs/concepts/workloads/pods/downward-api/ +tags: +- architecture +--- +Kubernetes' mechanism to expose Pod and container field values to code running in a container. + +It is sometimes useful for a container to have information about itself, without +needing to make changes to the container code that directly couple it to Kubernetes. + +The Kubernetes downward API allows containers to consume information about themselves +or their context in a Kubernetes cluster. Applications in containers can have +access to that information, without the application needing to act as a client of +the Kubernetes API. + +There are two ways to expose Pod and container fields to a running container: + +- using [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) +- using [a `downwardAPI` volume](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) + +Together, these two ways of exposing Pod and container fields are called the _downward API_. + diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index e1711ea542615..693c66d1fbe86 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -30,14 +30,14 @@ You can also use a shorthand alias for `kubectl` that also works with completion ```bash alias k=kubectl -complete -F __start_kubectl k +complete -o default -F __start_kubectl k ``` ### ZSH ```bash source <(kubectl completion zsh) # setup autocomplete in zsh into the current shell -echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # add autocomplete permanently to your zsh shell +echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # add autocomplete permanently to your zsh shell ``` ### A Note on --all-namespaces @@ -280,7 +280,7 @@ kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", " # Add a new element to a positional array kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]' -# Update a deployment's replicas count by patching it's scale subresource +# Update a deployment's replica count by patching its scale subresource kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}' ``` diff --git a/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md index dbcc8f889e0f0..d26dfdbaf9efc 100644 --- a/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md @@ -1879,7 +1879,7 @@ PodStatus represents information about the status of a pod. 
Status may trail the - **qosClass** (string) - The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md + The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://git.k8s.io/design-proposals-archive/node/resource-qos.md diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md index e68cb668d3081..4b74774ea7888 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -17,7 +17,7 @@ This document serves both as a reference to the values and as a coordination poi ### app.kubernetes.io/component -Example: `app.kubernetes.io/component=database` +Example: `app.kubernetes.io/component: "database"` Used on: All Objects @@ -27,7 +27,7 @@ One of the [recommended labels](/docs/concepts/overview/working-with-objects/com ### app.kubernetes.io/created-by -Example: `app.kubernetes.io/created-by=controller-manager` +Example: `app.kubernetes.io/created-by: "controller-manager"` Used on: All Objects @@ -37,7 +37,7 @@ One of the [recommended labels](/docs/concepts/overview/working-with-objects/com ### app.kubernetes.io/instance -Example: `app.kubernetes.io/instance=mysql-abcxzy` +Example: `app.kubernetes.io/instance: "mysql-abcxzy"` Used on: All Objects @@ -47,7 +47,7 @@ One of the [recommended labels](/docs/concepts/overview/working-with-objects/com ### app.kubernetes.io/managed-by -Example: `app.kubernetes.io/managed-by=helm` +Example: `app.kubernetes.io/managed-by: "helm"` Used on: All Objects @@ -57,7 +57,7 @@ One of the [recommended labels](/docs/concepts/overview/working-with-objects/com ### app.kubernetes.io/name -Example: `app.kubernetes.io/name=mysql` +Example: `app.kubernetes.io/name: "mysql"` Used on: All Objects @@ -67,7 +67,7 @@ One of the [recommended labels](/docs/concepts/overview/working-with-objects/com ### app.kubernetes.io/part-of -Example: `app.kubernetes.io/part-of=wordpress` +Example: `app.kubernetes.io/part-of: "wordpress"` Used on: All Objects @@ -77,7 +77,7 @@ One of the [recommended labels](/docs/concepts/overview/working-with-objects/com ### app.kubernetes.io/version -Example: `app.kubernetes.io/version="5.7.21"` +Example: `app.kubernetes.io/version: "5.7.21"` Used on: All Objects @@ -87,7 +87,7 @@ One of the [recommended labels](/docs/concepts/overview/working-with-objects/com ### kubernetes.io/arch -Example: `kubernetes.io/arch=amd64` +Example: `kubernetes.io/arch: "amd64"` Used on: Node @@ -95,7 +95,7 @@ The Kubelet populates this with `runtime.GOARCH` as defined by Go. This can be h ### kubernetes.io/os -Example: `kubernetes.io/os=linux` +Example: `kubernetes.io/os: "linux"` Used on: Node @@ -103,7 +103,7 @@ The Kubelet populates this with `runtime.GOOS` as defined by Go. This can be han ### kubernetes.io/metadata.name -Example: `kubernetes.io/metadata.name=mynamespace` +Example: `kubernetes.io/metadata.name: "mynamespace"` Used on: Namespaces @@ -124,7 +124,7 @@ This label has been deprecated. Please use `kubernetes.io/os` instead. 
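One common way the `kubernetes.io/arch` and `kubernetes.io/os` node labels described above are consumed is through a Pod `nodeSelector`; a minimal sketch, with a placeholder image name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-amd64-only
spec:
  nodeSelector:
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  containers:
  - name: app
    image: registry.example/app:1.0   # placeholder image
```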
### kubernetes.io/hostname {#kubernetesiohostname} -Example: `kubernetes.io/hostname=ip-172-20-114-199.ec2.internal` +Example: `kubernetes.io/hostname: "ip-172-20-114-199.ec2.internal"` Used on: Node @@ -135,7 +135,7 @@ This label is also used as part of the topology hierarchy. See [topology.kubern ### kubernetes.io/change-cause {#change-cause} -Example: `kubernetes.io/change-cause=kubectl edit --record deployment foo` +Example: `kubernetes.io/change-cause: "kubectl edit --record deployment foo"` Used on: All Objects @@ -161,7 +161,7 @@ The value for this annotation must be **true** to take effect. This annotation i ### controller.kubernetes.io/pod-deletion-cost {#pod-deletion-cost} -Example: `controller.kubernetes.io/pod-deletion-cost=10` +Example: `controller.kubernetes.io/pod-deletion-cost: "10"` Used on: Pod @@ -212,7 +212,7 @@ For example, `10M` means 10 megabits per second. ### node.kubernetes.io/instance-type {#nodekubernetesioinstance-type} -Example: `node.kubernetes.io/instance-type=m3.medium` +Example: `node.kubernetes.io/instance-type: "m3.medium"` Used on: Node @@ -237,7 +237,7 @@ See [topology.kubernetes.io/zone](#topologykubernetesiozone). Example: -`statefulset.kubernetes.io/pod-name=mystatefulset-7` +`statefulset.kubernetes.io/pod-name: "mystatefulset-7"` When a StatefulSet controller creates a Pod for the StatefulSet, the control plane sets this label on that Pod. The value of the label is the name of the Pod being created. @@ -249,7 +249,7 @@ StatefulSet topic for more details. Example: -`topology.kubernetes.io/region=us-east-1` +`topology.kubernetes.io/region: "us-east-1"` See [topology.kubernetes.io/zone](#topologykubernetesiozone). @@ -257,7 +257,7 @@ See [topology.kubernetes.io/zone](#topologykubernetesiozone). Example: -`topology.kubernetes.io/zone=us-east-1c` +`topology.kubernetes.io/zone: "us-east-1c"` Used on: Node, PersistentVolume @@ -286,7 +286,7 @@ adding the labels manually (or adding support for `PersistentVolumeLabel`). With ### volume.beta.kubernetes.io/storage-provisioner (deprecated) -Example: `volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath` +Example: `volume.beta.kubernetes.io/storage-provisioner: "k8s.io/minikube-hostpath"` Used on: PersistentVolumeClaim @@ -310,7 +310,7 @@ This annotation will be added to dynamic provisioning required PVC. ### node.kubernetes.io/windows-build {#nodekubernetesiowindows-build} -Example: `node.kubernetes.io/windows-build=10.0.17763` +Example: `node.kubernetes.io/windows-build: "10.0.17763"` Used on: Node @@ -320,7 +320,7 @@ The label's value is in the format "MajorVersion.MinorVersion.BuildNumber". ### service.kubernetes.io/headless {#servicekubernetesioheadless} -Example: `service.kubernetes.io/headless=""` +Example: `service.kubernetes.io/headless: ""` Used on: Service @@ -328,15 +328,33 @@ The control plane adds this label to an Endpoints object when the owning Service ### kubernetes.io/service-name {#kubernetesioservice-name} -Example: `kubernetes.io/service-name="nginx"` +Example: `kubernetes.io/service-name: "nginx"` Used on: Service Kubernetes uses this label to differentiate multiple Services. Used currently for `ELB`(Elastic Load Balancer) only. +### kubernetes.io/service-account.name + +Example: `kubernetes.io/service-account.name: "sa-name"` + +Used on: Secret + +This annotation records the {{< glossary_tooltip term_id="name" text="name">}} of the +ServiceAccount that the token (stored in the Secret of type `kubernetes.io/service-account-token`) represents. 
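For illustration, this annotation is what links a manually created long-lived token Secret back to its ServiceAccount; a sketch with placeholder names:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-token                            # placeholder name
  annotations:
    kubernetes.io/service-account.name: build-robot  # must match an existing ServiceAccount
type: kubernetes.io/service-account-token
```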
+ +### kubernetes.io/service-account.uid + +Example: `kubernetes.io/service-account.uid: da68f9c6-9d26-11e7-b84e-002dc52800da` + +Used on: Secret + +This annotation records the {{< glossary_tooltip term_id="uid" text="unique ID" >}} of the +ServiceAccount that the token (stored in the Secret of type `kubernetes.io/service-account-token`) represents. + ### endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by} -Example: `endpointslice.kubernetes.io/managed-by="controller"` +Example: `endpointslice.kubernetes.io/managed-by: "controller"` Used on: EndpointSlices @@ -344,7 +362,7 @@ The label is used to indicate the controller or entity that manages an EndpointS ### endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror} -Example: `endpointslice.kubernetes.io/skip-mirror="true"` +Example: `endpointslice.kubernetes.io/skip-mirror: "true"` Used on: Endpoints @@ -352,7 +370,7 @@ The label can be set to `"true"` on an Endpoints resource to indicate that the E ### service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name} -Example: `service.kubernetes.io/service-proxy-name="foo-bar"` +Example: `service.kubernetes.io/service-proxy-name: "foo-bar"` Used on: Service @@ -364,7 +382,7 @@ Example: `experimental.windows.kubernetes.io/isolation-type: "hyperv"` Used on: Pod -The annotation is used to run Windows containers with Hyper-V isolation. To use Hyper-V isolation feature and create a Hyper-V isolated container, the kubelet should be started with feature gates HyperVContainer=true and the Pod should include the annotation experimental.windows.kubernetes.io/isolation-type=hyperv. +The annotation is used to run Windows containers with Hyper-V isolation. To use Hyper-V isolation feature and create a Hyper-V isolated container, the kubelet should be started with feature gates HyperVContainer=true and the Pod should include the annotation `experimental.windows.kubernetes.io/isolation-type: hyperv`. {{< note >}} You can only set this annotation on Pods that have a single container. @@ -387,7 +405,7 @@ Starting in v1.18, this annotation is deprecated in favor of `spec.ingressClassN ### storageclass.kubernetes.io/is-default-class -Example: `storageclass.kubernetes.io/is-default-class=true` +Example: `storageclass.kubernetes.io/is-default-class: "true"` Used on: StorageClass @@ -449,43 +467,43 @@ Use [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-tolera ### node.kubernetes.io/not-ready -Example: `node.kubernetes.io/not-ready:NoExecute` +Example: `node.kubernetes.io/not-ready: "NoExecute"` The node controller detects whether a node is ready by monitoring its health and adds or removes this taint accordingly. ### node.kubernetes.io/unreachable -Example: `node.kubernetes.io/unreachable:NoExecute` +Example: `node.kubernetes.io/unreachable: "NoExecute"` The node controller adds the taint to a node corresponding to the [NodeCondition](/docs/concepts/architecture/nodes/#condition) `Ready` being `Unknown`. ### node.kubernetes.io/unschedulable -Example: `node.kubernetes.io/unschedulable:NoSchedule` +Example: `node.kubernetes.io/unschedulable: "NoSchedule"` The taint will be added to a node when initializing the node to avoid race condition. ### node.kubernetes.io/memory-pressure -Example: `node.kubernetes.io/memory-pressure:NoSchedule` +Example: `node.kubernetes.io/memory-pressure: "NoSchedule"` The kubelet detects memory pressure based on `memory.available` and `allocatableMemory.available` observed on a Node. 
The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine if the Node condition and taint should be added/removed. ### node.kubernetes.io/disk-pressure -Example: `node.kubernetes.io/disk-pressure:NoSchedule` +Example: `node.kubernetes.io/disk-pressure :"NoSchedule"` The kubelet detects disk pressure based on `imagefs.available`, `imagefs.inodesFree`, `nodefs.available` and `nodefs.inodesFree`(Linux only) observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine if the Node condition and taint should be added/removed. ### node.kubernetes.io/network-unavailable -Example: `node.kubernetes.io/network-unavailable:NoSchedule` +Example: `node.kubernetes.io/network-unavailable: "NoSchedule"` This is initially set by the kubelet when the cloud provider used indicates a requirement for additional network configuration. Only when the route on the cloud is configured properly will the taint be removed by the cloud provider. ### node.kubernetes.io/pid-pressure -Example: `node.kubernetes.io/pid-pressure:NoSchedule` +Example: `node.kubernetes.io/pid-pressure: "NoSchedule"` The kubelet checks D-value of the size of `/proc/sys/kernel/pid_max` and the PIDs consumed by Kubernetes on a node to get the number of available PIDs that referred to as the `pid.available` metric. The metric is then compared to the corresponding threshold that can be set on the kubelet to determine if the node condition and taint should be added/removed. @@ -506,19 +524,19 @@ for further details about when and how to use this taint. ### node.cloudprovider.kubernetes.io/uninitialized -Example: `node.cloudprovider.kubernetes.io/uninitialized:NoSchedule` +Example: `node.cloudprovider.kubernetes.io/uninitialized: "NoSchedule"` Sets this taint on a node to mark it as unusable, when kubelet is started with the "external" cloud provider, until a controller from the cloud-controller-manager initializes this node, and then removes the taint. ### node.cloudprovider.kubernetes.io/shutdown -Example: `node.cloudprovider.kubernetes.io/shutdown:NoSchedule` +Example: `node.cloudprovider.kubernetes.io/shutdown: "NoSchedule"` If a Node is in a cloud provider specified shutdown state, the Node gets tainted accordingly with `node.cloudprovider.kubernetes.io/shutdown` and the taint effect of `NoSchedule`. ### pod-security.kubernetes.io/enforce -Example: `pod-security.kubernetes.io/enforce: baseline` +Example: `pod-security.kubernetes.io/enforce: "baseline"` Used on: Namespace @@ -532,7 +550,7 @@ for more information. ### pod-security.kubernetes.io/enforce-version -Example: `pod-security.kubernetes.io/enforce-version: {{< skew latestVersion >}}` +Example: `pod-security.kubernetes.io/enforce-version: "{{< skew currentVersion >}}"` Used on: Namespace @@ -545,7 +563,7 @@ for more information. ### pod-security.kubernetes.io/audit -Example: `pod-security.kubernetes.io/audit: baseline` +Example: `pod-security.kubernetes.io/audit: "baseline"` Used on: Namespace @@ -559,7 +577,7 @@ for more information. ### pod-security.kubernetes.io/audit-version -Example: `pod-security.kubernetes.io/audit-version: {{< skew latestVersion >}}` +Example: `pod-security.kubernetes.io/audit-version: "{{< skew currentVersion >}}"` Used on: Namespace @@ -572,7 +590,7 @@ for more information. 
### pod-security.kubernetes.io/warn -Example: `pod-security.kubernetes.io/warn: baseline` +Example: `pod-security.kubernetes.io/warn: "baseline"` Used on: Namespace @@ -588,7 +606,7 @@ for more information. ### pod-security.kubernetes.io/warn-version -Example: `pod-security.kubernetes.io/warn-version: {{< skew latestVersion >}}` +Example: `pod-security.kubernetes.io/warn-version: "{{< skew currentVersion >}}"` Used on: Namespace diff --git a/content/en/docs/reference/ports-and-protocols.md b/content/en/docs/reference/ports-and-protocols.md index 91d6cba8e7665..8ca5bc0774444 100644 --- a/content/en/docs/reference/ports-and-protocols.md +++ b/content/en/docs/reference/ports-and-protocols.md @@ -7,7 +7,7 @@ weight: 50 When running Kubernetes in an environment with strict network boundaries, such as on-premises datacenter with physical network firewalls or Virtual Networks in Public Cloud, it is useful to be aware of the ports and protocols -used by Kubernetes components +used by Kubernetes components. ## Control plane diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md index 1abc7d9bac648..9cd82c43167ca 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md @@ -92,6 +92,3 @@ kubeadm certs generate-csr [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver.md index 7dc59c45d4c6b..b7708955007e0 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver.md @@ -87,6 +87,3 @@ kubeadm certs renew apiserver [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-healthcheck-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-healthcheck-client.md index 84d75bfd36d8a..0fde99368ceef 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-healthcheck-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-healthcheck-client.md @@ -87,6 +87,3 @@ kubeadm certs renew etcd-healthcheck-client [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-peer.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-peer.md index 60acaae1dbf20..214b353b00708 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-peer.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-peer.md @@ -87,6 +87,3 @@ kubeadm certs renew etcd-peer [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-server.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-server.md index 969157fe3e507..cd8b73908d277 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-server.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-server.md @@ -87,6 +87,3 @@ kubeadm certs renew etcd-server [flags] - - - diff --git 
a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images.md index 0f85b4fbc2183..7dd3a4f820f7c 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images.md @@ -67,6 +67,3 @@ kubeadm config images [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md index 74c870e3dba89..d46abeab03fb9 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md @@ -116,6 +116,3 @@ kubeadm config images list [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_join-defaults.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_join-defaults.md index 1c634871eb24a..ea7c83e14bd21 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_join-defaults.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_join-defaults.md @@ -79,6 +79,3 @@ kubeadm config print join-defaults [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon.md index 64777661d03ae..1ff70e5cb8464 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon.md @@ -60,6 +60,3 @@ kubeadm init phase addon [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md index 547601e364905..a47b8390be52b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md @@ -85,6 +85,3 @@ kubeadm init phase certs etcd-ca [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_sa.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_sa.md index a3df321d886fc..d6c0630075068 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_sa.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_sa.md @@ -69,6 +69,3 @@ kubeadm init phase certs sa [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md index f1ebdbcf12afd..1eb52e8287c2c 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md @@ -116,6 +116,3 @@ kubeadm init phase kubeconfig all [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md 
b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md index 70e4c634b027c..55a98fae965c1 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md @@ -81,6 +81,3 @@ kubeadm init phase kubelet-finalize all [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md index 11d2407499e86..a79f41f610422 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md @@ -88,6 +88,3 @@ kubeadm init phase kubelet-start [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md index 345621f7030a9..61e5a0c9468ed 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md @@ -81,6 +81,3 @@ kubeadm init phase preflight [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md index 515060a76c7bb..e242f4eedd84b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md @@ -95,6 +95,3 @@ kubeadm init phase upload-certs [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md index 3c087368a77e3..ba4e91fa89e0e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md @@ -74,6 +74,3 @@ kubeadm init phase upload-config all [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md index 13e561f486287..930b2d94a2b32 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md @@ -83,6 +83,3 @@ kubeadm init phase upload-config kubeadm [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md index 07768a16c6efb..f7ea8ea39aae0 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md @@ -67,6 +67,3 @@ kubeadm join phase control-plane-join [flags] - - - diff --git 
a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md index 7a3517652d7dc..496213ce90650 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md @@ -95,6 +95,3 @@ kubeadm join phase control-plane-join all [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md index c06ddaae40e4f..d127f67fd80db 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md @@ -9,7 +9,6 @@ guide. You can file document formatting bugs against the [reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project. --> - Add a new local etcd member ### Synopsis diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md index 6952dbca80ca4..3dc12615a9560 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md @@ -67,6 +67,3 @@ kubeadm join phase control-plane-prepare [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md index 661edf597deac..1d5351f3aff99 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md @@ -151,6 +151,3 @@ kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md index 647511594058c..81355a775e8a2 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md @@ -110,8 +110,6 @@ kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags] - - ### Options inherited from parent commands @@ -130,6 +128,3 @@ kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags]
    - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md index 5896b25337b15..cafb58658ecc6 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md @@ -123,6 +123,3 @@ kubeadm join phase kubelet-start [api-server-endpoint] [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md index e09b75f9283a0..ae869d7f476ad 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md @@ -117,6 +117,3 @@ kubeadm reset [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md index ceabd2045e96a..4120c0a97d137 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md @@ -74,6 +74,3 @@ kubeadm reset phase cleanup-node [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md index d2c1060ff4ac2..3fd91a98a6223 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md @@ -67,6 +67,3 @@ kubeadm reset phase remove-etcd-member [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md index 5384fc4d6cce2..025ab1efac99d 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md @@ -91,6 +91,3 @@ kubeadm token [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md index a2a217033c88b..4687fcbba1e5a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md @@ -9,7 +9,6 @@ guide. You can file document formatting bugs against the [reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project. --> - Create bootstrap tokens on the server ### Synopsis diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md index 2040bd3f94ac1..30b76787988bf 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md @@ -79,6 +79,3 @@ kubeadm token delete [token-value] ... 
- - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_diff.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_diff.md index eb5e3c4cace98..718257f8afc77 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_diff.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_diff.md @@ -102,6 +102,3 @@ kubeadm upgrade diff [version] [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md index a8a3138c887ac..12fbe5b8f390b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md @@ -117,6 +117,3 @@ kubeadm upgrade node [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase.md index 6b86c950548ec..ce5b6f842970b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase.md @@ -56,6 +56,3 @@ Use this command to invoke single phase of the node workflow - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_preflight.md index d82a193898a21..ff44d0d7dae3e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_preflight.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_preflight.md @@ -67,6 +67,3 @@ kubeadm upgrade node phase preflight [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md index b86c7259774d3..38cc27bee91b4 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md @@ -67,6 +67,3 @@ kubeadm version [flags] - - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md index 74428b914834e..4a9e125379ea6 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md +++ b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md @@ -219,7 +219,7 @@ Other API server flags that are set unconditionally are: - `--insecure-port=0` to avoid insecure connections to the api server - `--enable-bootstrap-token-auth=true` to enable the `BootstrapTokenAuthenticator` authentication module. - See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details + See [TLS Bootstrapping](/docs/reference/access-authn-authn/kubelet-tls-bootstrapping/) for more details - `--allow-privileged` to `true` (required e.g. 
by kube proxy) - `--requestheader-client-ca-file` to `front-proxy-ca.crt` - `--enable-admission-plugins` to: @@ -266,7 +266,7 @@ The static Pod manifest for the controller manager is affected by following para Other flags that are set unconditionally are: - `--controllers` enabling all the default controllers plus `BootstrapSigner` and `TokenCleaner` controllers for TLS bootstrap. - See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details + See [TLS Bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) for more details - `--use-service-account-credentials` to `true` - Flags for using certificates generated in previous steps: - `--root-ca-file` to `ca.crt` @@ -329,7 +329,7 @@ Please note that: ### Configure TLS-Bootstrapping for node joining Kubeadm uses [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) for joining new nodes to an -existing cluster; for more details see also [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md). +existing cluster; for more details see also [design proposal](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/bootstrap-discovery.md). `kubeadm init` ensures that everything is properly configured for this process, and this includes following steps as well as setting API server and controller flags as already described in previous paragraphs. @@ -420,7 +420,7 @@ Similarly to `kubeadm init`, also `kubeadm join` internal workflow consists of a This is split into discovery (having the Node trust the Kubernetes Master) and TLS bootstrap (having the Kubernetes Master trust the Node). -see [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) or the corresponding [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md). +see [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) or the corresponding [design proposal](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/bootstrap-discovery.md). ### Preflight checks diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index fc87e796c2ed7..fdb117c5d545b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -52,7 +52,7 @@ following steps: 1. Makes all the necessary configurations for allowing node joining with the [Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and - [TLS Bootstrap](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) + [TLS Bootstrap](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) mechanism: - Write a ConfigMap for making available all the information required @@ -242,6 +242,15 @@ where `config.yaml` contains the custom `imageRepository`, and/or `imageTag` for etcd and CoreDNS. * Pass the same `config.yaml` to `kubeadm init`. +#### Custom sandbox (pause) images {#custom-pause-image} + +To set a custom image for these you need to configure this in your +{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}} +to use the image. 
+Consult the documentation for your container runtime to find out how to change this setting; +for selected container runtimes, you can also find advice within the +[Container Runtimes]((/docs/setup/production-environment/container-runtimes/) topic. + ### Uploading control-plane certificates to the cluster By adding the flag `--upload-certs` to `kubeadm init` you can temporary upload diff --git a/content/en/docs/reference/using-api/_index.md b/content/en/docs/reference/using-api/_index.md index 5e335fb191c37..6592deb3c7155 100644 --- a/content/en/docs/reference/using-api/_index.md +++ b/content/en/docs/reference/using-api/_index.md @@ -39,7 +39,7 @@ The JSON and Protobuf serialization schemas follow the same guidelines for schema changes. The following descriptions cover both formats. The API versioning and software versioning are indirectly related. -The [API and release versioning proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) +The [API and release versioning proposal](https://git.k8s.io/design-proposals-archive/release/versioning.md) describes the relationship between API versioning and software versioning. Different API versions indicate different levels of stability and support. You @@ -83,7 +83,7 @@ Here's a summary of each level: ## API groups -[API groups](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md) +[API groups](https://git.k8s.io/design-proposals-archive/api-machinery/api-group.md) make it easier to extend the Kubernetes API. The API group is specified in a REST path and in the `apiVersion` field of a serialized object. @@ -124,4 +124,4 @@ Kubernetes stores its serialized state in terms of the API resources by writing - Learn more about [API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions) - Read the design documentation for - [aggregator](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md) + [aggregator](https://git.k8s.io/design-proposals-archive/api-machinery/aggregated-api-servers.md) diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index 2e4fb85df289e..1a722acffd229 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -15,8 +15,8 @@ primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE, GET). For some resources, the API includes additional subresources that allow -fine grained authorization (such as a separating viewing details for a Pod from -retrieving its logs), and can accept and serve those resources in different +fine grained authorization (such as separate views for Pod details and +log retrievals), and can accept and serve those resources in different representations for convenience or efficiency. Kubernetes supports efficient change notifications on resources via *watches*. diff --git a/content/en/docs/reference/using-api/client-libraries.md b/content/en/docs/reference/using-api/client-libraries.md index 8647198fe9dd7..145599cdd9203 100644 --- a/content/en/docs/reference/using-api/client-libraries.md +++ b/content/en/docs/reference/using-api/client-libraries.md @@ -28,14 +28,17 @@ The following client libraries are officially maintained by [Kubernetes SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery). 
-| Language | Client Library | Sample Programs | -|----------|----------------|-----------------| -| dotnet | [github.com/kubernetes-client/csharp](https://github.com/kubernetes-client/csharp) | [browse](https://github.com/kubernetes-client/csharp/tree/master/examples/simple) -| Go | [github.com/kubernetes/client-go/](https://github.com/kubernetes/client-go/) | [browse](https://github.com/kubernetes/client-go/tree/master/examples) -| Haskell | [github.com/kubernetes-client/haskell](https://github.com/kubernetes-client/haskell) | [browse](https://github.com/kubernetes-client/haskell/tree/master/kubernetes-client/example) -| Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [browse](https://github.com/kubernetes-client/java#installation) -| JavaScript | [github.com/kubernetes-client/javascript](https://github.com/kubernetes-client/javascript) | [browse](https://github.com/kubernetes-client/javascript/tree/master/examples) -| Python | [github.com/kubernetes-client/python/](https://github.com/kubernetes-client/python/) | [browse](https://github.com/kubernetes-client/python/tree/master/examples) +| Language | Client Library | Sample Programs | +|------------|----------------|-----------------| +| C | [github.com/kubernetes-client/c](https://github.com/kubernetes-client/c/) | [browse](https://github.com/kubernetes-client/c/tree/master/examples) +| dotnet | [github.com/kubernetes-client/csharp](https://github.com/kubernetes-client/csharp) | [browse](https://github.com/kubernetes-client/csharp/tree/master/examples/simple) +| Go | [github.com/kubernetes/client-go/](https://github.com/kubernetes/client-go/) | [browse](https://github.com/kubernetes/client-go/tree/master/examples) +| Haskell | [github.com/kubernetes-client/haskell](https://github.com/kubernetes-client/haskell) | [browse](https://github.com/kubernetes-client/haskell/tree/master/kubernetes-client/example) +| Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [browse](https://github.com/kubernetes-client/java#installation) +| JavaScript | [github.com/kubernetes-client/javascript](https://github.com/kubernetes-client/javascript) | [browse](https://github.com/kubernetes-client/javascript/tree/master/examples) +| Perl | [github.com/kubernetes-client/perl/](https://github.com/kubernetes-client/perl/) | [browse](https://github.com/kubernetes-client/perl/tree/master/examples) +| Python | [github.com/kubernetes-client/python/](https://github.com/kubernetes-client/python/) | [browse](https://github.com/kubernetes-client/python/tree/master/examples) +| Ruby | [github.com/kubernetes-client/ruby/](https://github.com/kubernetes-client/ruby/) | [browse](https://github.com/kubernetes-client/ruby/tree/master/examples) ## Community-maintained client libraries diff --git a/content/en/docs/reference/using-api/deprecation-guide.md b/content/en/docs/reference/using-api/deprecation-guide.md index 08601313d95aa..d448344504aa9 100644 --- a/content/en/docs/reference/using-api/deprecation-guide.md +++ b/content/en/docs/reference/using-api/deprecation-guide.md @@ -110,8 +110,10 @@ The **policy/v1beta1** API version of 
PodDisruptionBudget will no longer be serv PodSecurityPolicy in the **policy/v1beta1** API version will no longer be served in v1.25, and the PodSecurityPolicy admission controller will be removed. -PodSecurityPolicy replacements are still under discussion, but current use can be migrated to -[3rd-party admission webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/) now. +Migrate to [Pod Security Admission](/docs/concepts/security/pod-security-admission/) +or a [3rd party admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/). +For a migration guide, see [Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller](/docs/tasks/configure-pod-container/migrate-from-psp/). +For more information on the deprecation, see [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/). #### RuntimeClass {#runtimeclass-v125} @@ -174,7 +176,7 @@ The **authentication.k8s.io/v1beta1** API version of TokenReview is no longer se #### SubjectAccessReview resources {#subjectaccessreview-resources-v122} -The **authorization.k8s.io/v1beta1** API version of LocalSubjectAccessReview, SelfSubjectAccessReview, and SubjectAccessReview is no longer served as of v1.22. +The **authorization.k8s.io/v1beta1** API version of LocalSubjectAccessReview, SelfSubjectAccessReview, SubjectAccessReview, and SelfSubjectRulesReview is no longer served as of v1.22. * Migrate manifests and API clients to use the **authorization.k8s.io/v1** API version, available since v1.6. * Notable changes: diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md index 6b932278dc8c1..e9f951a76a8cb 100644 --- a/content/en/docs/reference/using-api/server-side-apply.md +++ b/content/en/docs/reference/using-api/server-side-apply.md @@ -125,7 +125,8 @@ this occurs, the applier has 3 options to resolve the conflicts: * **Overwrite value, become sole manager:** If overwriting the value was intentional (or if the applier is an automated process like a controller) the - applier should set the `force` query parameter to true and make the request + applier should set the `force` query parameter to true (in kubectl, it can be done by + using the `--force-conflicts` flag with the apply command) and make the request again. This forces the operation to succeed, changes the value of the field, and removes the field from all other managers' entries in managedFields. diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md index bb73375553d27..ba9b3c6785ee5 100644 --- a/content/en/docs/setup/_index.md +++ b/content/en/docs/setup/_index.md @@ -27,6 +27,13 @@ control, available resources, and expertise required to operate and manage a clu You can [download Kubernetes](/releases/download/) to deploy a Kubernetes cluster on a local machine, into the cloud, or for your own datacenter. +Several [Kubernetes components](/docs/concepts/overview/components/) such as `kube-apiserver` or `kube-proxy` can also be +deployed as [container images](/releases/download/#container-images) within the cluster. + +It is **recommended** to run Kubernetes components as container images wherever +that is possible, and to have Kubernetes manage those components. +Components that run containers - notably, the kubelet - can't be included in this category. 
+ If you don't want to manage a Kubernetes cluster yourself, you could pick a managed service, including [certified platforms](/docs/setup/production-environment/turnkey-solutions/). There are also other standardized and custom solutions across a wide range of cloud and @@ -60,4 +67,5 @@ for deploying Kubernetes is [kubeadm](/docs/setup/production-environment/tools/k Kubernetes is designed for its {{< glossary_tooltip term_id="control-plane" text="control plane" >}} to run on Linux. Within your cluster you can run applications on Linux or other operating systems, including Windows. -- Learn to [set up clusters with Windows nodes](/docs/setup/production-environment/windows/) + +- Learn to [set up clusters with Windows nodes](/docs/concepts/windows/) diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md index 6d6d576c39641..23e4ac8df70b4 100644 --- a/content/en/docs/setup/best-practices/certificates.md +++ b/content/en/docs/setup/best-practices/certificates.md @@ -22,7 +22,7 @@ This page explains the certificates that your cluster requires. Kubernetes requires PKI for the following operations: * Client certificates for the kubelet to authenticate to the API server -* Kubelet [server certificates](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates) +* Kubelet [server certificates](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#client-and-serving-certificates) for the API server to talk to the kubelets * Server certificate for the API server endpoint * Client certificates for administrators of the cluster to authenticate to the API server diff --git a/content/en/docs/setup/production-environment/_index.md b/content/en/docs/setup/production-environment/_index.md index 7d8200a6b3cab..32c91d4f752c9 100644 --- a/content/en/docs/setup/production-environment/_index.md +++ b/content/en/docs/setup/production-environment/_index.md @@ -28,29 +28,29 @@ on or hand to others, consider how your requirements for a Kubernetes cluster are influenced by the following issues: - *Availability*: A single-machine Kubernetes [learning environment](/docs/setup/#learning-environment) -has a single point of failure. Creating a highly available cluster means considering: + has a single point of failure. Creating a highly available cluster means considering: - Separating the control plane from the worker nodes. - Replicating the control plane components on multiple nodes. - Load balancing traffic to the cluster’s {{< glossary_tooltip term_id="kube-apiserver" text="API server" >}}. - Having enough worker nodes available, or able to quickly become available, as changing workloads warrant it. - *Scale*: If you expect your production Kubernetes environment to receive a stable amount of -demand, you might be able to set up for the capacity you need and be done. However, -if you expect demand to grow over time or change dramatically based on things like -season or special events, you need to plan how to scale to relieve increased -pressure from more requests to the control plane and worker nodes or scale down to reduce unused -resources. + demand, you might be able to set up for the capacity you need and be done. However, + if you expect demand to grow over time or change dramatically based on things like + season or special events, you need to plan how to scale to relieve increased + pressure from more requests to the control plane and worker nodes or scale down to reduce unused + resources. 
- *Security and access management*: You have full admin privileges on your own -Kubernetes learning cluster. But shared clusters with important workloads, and -more than one or two users, require a more refined approach to who and what can -access cluster resources. You can use role-based access control -([RBAC](/docs/reference/access-authn-authz/rbac/)) and other -security mechanisms to make sure that users and workloads can get access to the -resources they need, while keeping workloads, and the cluster itself, secure. -You can set limits on the resources that users and workloads can access -by managing [policies](/docs/concepts/policy/) and -[container resources](/docs/concepts/configuration/manage-resources-containers/). + Kubernetes learning cluster. But shared clusters with important workloads, and + more than one or two users, require a more refined approach to who and what can + access cluster resources. You can use role-based access control + ([RBAC](/docs/reference/access-authn-authz/rbac/)) and other + security mechanisms to make sure that users and workloads can get access to the + resources they need, while keeping workloads, and the cluster itself, secure. + You can set limits on the resources that users and workloads can access + by managing [policies](/docs/concepts/policy/) and + [container resources](/docs/concepts/configuration/manage-resources-containers/). Before building a Kubernetes production environment on your own, consider handing off some or all of this job to @@ -59,16 +59,16 @@ providers or other [Kubernetes Partners](https://kubernetes.io/partners/). Options include: - *Serverless*: Just run workloads on third-party equipment without managing -a cluster at all. You will be charged for things like CPU usage, memory, and -disk requests. + a cluster at all. You will be charged for things like CPU usage, memory, and + disk requests. - *Managed control plane*: Let the provider manage the scale and availability -of the cluster's control plane, as well as handle patches and upgrades. + of the cluster's control plane, as well as handle patches and upgrades. - *Managed worker nodes*: Configure pools of nodes to meet your needs, -then the provider makes sure those nodes are available and ready to implement -upgrades when needed. + then the provider makes sure those nodes are available and ready to implement + upgrades when needed. - *Integration*: There are providers that integrate Kubernetes with other -services you may need, such as storage, container registries, authentication -methods, and development tools. + services you may need, such as storage, container registries, authentication + methods, and development tools. Whether you build a production Kubernetes cluster yourself or work with partners, review the following sections to evaluate your needs as they relate @@ -99,52 +99,52 @@ and ensuring that it can be repaired if something goes wrong is important, consider these steps: - *Choose deployment tools*: You can deploy a control plane using tools such -as kubeadm, kops, and kubespray. See -[Installing Kubernetes with deployment tools](/docs/setup/production-environment/tools/) -to learn tips for production-quality deployments using each of those deployment -methods. Different [Container Runtimes](/docs/setup/production-environment/container-runtimes/) -are available to use with your deployments. + as kubeadm, kops, and kubespray. 
See + [Installing Kubernetes with deployment tools](/docs/setup/production-environment/tools/) + to learn tips for production-quality deployments using each of those deployment + methods. Different [Container Runtimes](/docs/setup/production-environment/container-runtimes/) + are available to use with your deployments. - *Manage certificates*: Secure communications between control plane services -are implemented using certificates. Certificates are automatically generated -during deployment or you can generate them using your own certificate authority. -See [PKI certificates and requirements](/docs/setup/best-practices/certificates/) for details. + are implemented using certificates. Certificates are automatically generated + during deployment or you can generate them using your own certificate authority. + See [PKI certificates and requirements](/docs/setup/best-practices/certificates/) for details. - *Configure load balancer for apiserver*: Configure a load balancer -to distribute external API requests to the apiserver service instances running on different nodes. See -[Create an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/) -for details. + to distribute external API requests to the apiserver service instances running on different nodes. See + [Create an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/) + for details. - *Separate and backup etcd service*: The etcd services can either run on the -same machines as other control plane services or run on separate machines, for -extra security and availability. Because etcd stores cluster configuration data, -backing up the etcd database should be done regularly to ensure that you can -repair that database if needed. -See the [etcd FAQ](https://etcd.io/docs/v3.4/faq/) for details on configuring and using etcd. -See [Operating etcd clusters for Kubernetes](/docs/tasks/administer-cluster/configure-upgrade-etcd/) -and [Set up a High Availability etcd cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) -for details. + same machines as other control plane services or run on separate machines, for + extra security and availability. Because etcd stores cluster configuration data, + backing up the etcd database should be done regularly to ensure that you can + repair that database if needed. + See the [etcd FAQ](https://etcd.io/docs/v3.4/faq/) for details on configuring and using etcd. + See [Operating etcd clusters for Kubernetes](/docs/tasks/administer-cluster/configure-upgrade-etcd/) + and [Set up a High Availability etcd cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) + for details. - *Create multiple control plane systems*: For high availability, the -control plane should not be limited to a single machine. If the control plane -services are run by an init service (such as systemd), each service should run on at -least three machines. However, running control plane services as pods in -Kubernetes ensures that the replicated number of services that you request -will always be available. -The scheduler should be fault tolerant, -but not highly available. Some deployment tools set up [Raft](https://raft.github.io/) -consensus algorithm to do leader election of Kubernetes services. If the -primary goes away, another service elects itself and take over. + control plane should not be limited to a single machine. 
If the control plane + services are run by an init service (such as systemd), each service should run on at + least three machines. However, running control plane services as pods in + Kubernetes ensures that the replicated number of services that you request + will always be available. + The scheduler should be fault tolerant, + but not highly available. Some deployment tools set up [Raft](https://raft.github.io/) + consensus algorithm to do leader election of Kubernetes services. If the + primary goes away, another service elects itself and take over. - *Span multiple zones*: If keeping your cluster available at all times is -critical, consider creating a cluster that runs across multiple data centers, -referred to as zones in cloud environments. Groups of zones are referred to as regions. -By spreading a cluster across -multiple zones in the same region, it can improve the chances that your -cluster will continue to function even if one zone becomes unavailable. -See [Running in multiple zones](/docs/setup/best-practices/multiple-zones/) for details. + critical, consider creating a cluster that runs across multiple data centers, + referred to as zones in cloud environments. Groups of zones are referred to as regions. + By spreading a cluster across + multiple zones in the same region, it can improve the chances that your + cluster will continue to function even if one zone becomes unavailable. + See [Running in multiple zones](/docs/setup/best-practices/multiple-zones/) for details. - *Manage on-going features*: If you plan to keep your cluster over time, -there are tasks you need to do to maintain its health and security. For example, -if you installed with kubeadm, there are instructions to help you with -[Certificate Management](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) -and [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/). -See [Administer a Cluster](/docs/tasks/administer-cluster/) -for a longer list of Kubernetes administrative tasks. + there are tasks you need to do to maintain its health and security. For example, + if you installed with kubeadm, there are instructions to help you with + [Certificate Management](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) + and [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/). + See [Administer a Cluster](/docs/tasks/administer-cluster/) + for a longer list of Kubernetes administrative tasks. To learn about available options when you run control plane services, see [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/), @@ -166,39 +166,36 @@ consider how you want to manage your worker nodes (also referred to simply as *nodes*). - *Configure nodes*: Nodes can be physical or virtual machines. If you want to -create and manage your own nodes, you can install a supported operating system, -then add and run the appropriate -[Node services](/docs/concepts/overview/components/#node-components). Consider: + create and manage your own nodes, you can install a supported operating system, + then add and run the appropriate + [Node services](/docs/concepts/overview/components/#node-components). Consider: - The demands of your workloads when you set up nodes by having appropriate memory, CPU, and disk speed and storage capacity available. - Whether generic computer systems will do or you have workloads that need GPU processors, Windows nodes, or VM isolation. 
- *Validate nodes*: See [Valid node setup](/docs/setup/best-practices/node-conformance/) -for information on how to ensure that a node meets the requirements to join -a Kubernetes cluster. + for information on how to ensure that a node meets the requirements to join + a Kubernetes cluster. - *Add nodes to the cluster*: If you are managing your own cluster you can -add nodes by setting up your own machines and either adding them manually or -having them register themselves to the cluster’s apiserver. See the -[Nodes](/docs/concepts/architecture/nodes/) section for information on how to set up Kubernetes to add nodes in these ways. -- *Add Windows nodes to the cluster*: Kubernetes offers support for Windows -worker nodes, allowing you to run workloads implemented in Windows containers. See -[Windows in Kubernetes](/docs/setup/production-environment/windows/) for details. + add nodes by setting up your own machines and either adding them manually or + having them register themselves to the cluster’s apiserver. See the + [Nodes](/docs/concepts/architecture/nodes/) section for information on how to set up Kubernetes to add nodes in these ways. - *Scale nodes*: Have a plan for expanding the capacity your cluster will -eventually need. See [Considerations for large clusters](/docs/setup/best-practices/cluster-large/) -to help determine how many nodes you need, based on the number of pods and -containers you need to run. If you are managing nodes yourself, this can mean -purchasing and installing your own physical equipment. + eventually need. See [Considerations for large clusters](/docs/setup/best-practices/cluster-large/) + to help determine how many nodes you need, based on the number of pods and + containers you need to run. If you are managing nodes yourself, this can mean + purchasing and installing your own physical equipment. - *Autoscale nodes*: Most cloud providers support -[Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) -to replace unhealthy nodes or grow and shrink the number of nodes as demand requires. See the -[Frequently Asked Questions](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md) -for how the autoscaler works and -[Deployment](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#deployment) -for how it is implemented by different cloud providers. For on-premises, there -are some virtualization platforms that can be scripted to spin up new nodes -based on demand. + [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) + to replace unhealthy nodes or grow and shrink the number of nodes as demand requires. See the + [Frequently Asked Questions](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md) + for how the autoscaler works and + [Deployment](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#deployment) + for how it is implemented by different cloud providers. For on-premises, there + are some virtualization platforms that can be scripted to spin up new nodes + based on demand. - *Set up node health checks*: For important workloads, you want to make sure -that the nodes and pods running on those nodes are healthy. Using the -[Node Problem Detector](/docs/tasks/debug/debug-cluster/monitor-node-health/) -daemon, you can ensure your nodes are healthy. + that the nodes and pods running on those nodes are healthy. 
Using the + [Node Problem Detector](/docs/tasks/debug/debug-cluster/monitor-node-health/) + daemon, you can ensure your nodes are healthy. ## Production user management @@ -215,39 +212,51 @@ cluster (authentication) and deciding if they have permissions to do what they are asking (authorization): - *Authentication*: The apiserver can authenticate users using client -certificates, bearer tokens, an authenticating proxy, or HTTP basic auth. -You can choose which authentication methods you want to use. -Using plugins, the apiserver can leverage your organization’s existing -authentication methods, such as LDAP or Kerberos. See -[Authentication](/docs/reference/access-authn-authz/authentication/) -for a description of these different methods of authenticating Kubernetes users. -- *Authorization*: When you set out to authorize your regular users, you will probably choose between RBAC and ABAC authorization. See [Authorization Overview](/docs/reference/access-authn-authz/authorization/) to review different modes for authorizing user accounts (as well as service account access to your cluster): - - *Role-based access control* ([RBAC](/docs/reference/access-authn-authz/rbac/)): Lets you assign access to your cluster by allowing specific sets of permissions to authenticated users. Permissions can be assigned for a specific namespace (Role) or across the entire cluster (ClusterRole). Then using RoleBindings and ClusterRoleBindings, those permissions can be attached to particular users. - - *Attribute-based access control* ([ABAC](/docs/reference/access-authn-authz/abac/)): Lets you create policies based on resource attributes in the cluster and will allow or deny access based on those attributes. Each line of a policy file identifies versioning properties (apiVersion and kind) and a map of spec properties to match the subject (user or group), resource property, non-resource property (/version or /apis), and readonly. See [Examples](/docs/reference/access-authn-authz/abac/#examples) for details. + certificates, bearer tokens, an authenticating proxy, or HTTP basic auth. + You can choose which authentication methods you want to use. + Using plugins, the apiserver can leverage your organization’s existing + authentication methods, such as LDAP or Kerberos. See + [Authentication](/docs/reference/access-authn-authz/authentication/) + for a description of these different methods of authenticating Kubernetes users. +- *Authorization*: When you set out to authorize your regular users, you will probably choose + between RBAC and ABAC authorization. See [Authorization Overview](/docs/reference/access-authn-authz/authorization/) + to review different modes for authorizing user accounts (as well as service account access to + your cluster): + - *Role-based access control* ([RBAC](/docs/reference/access-authn-authz/rbac/)): Lets you + assign access to your cluster by allowing specific sets of permissions to authenticated users. + Permissions can be assigned for a specific namespace (Role) or across the entire cluster + (ClusterRole). Then using RoleBindings and ClusterRoleBindings, those permissions can be attached + to particular users. + - *Attribute-based access control* ([ABAC](/docs/reference/access-authn-authz/abac/)): Lets you + create policies based on resource attributes in the cluster and will allow or deny access + based on those attributes. 
Each line of a policy file identifies versioning properties (apiVersion + and kind) and a map of spec properties to match the subject (user or group), resource property, + non-resource property (/version or /apis), and readonly. See + [Examples](/docs/reference/access-authn-authz/abac/#examples) for details. As someone setting up authentication and authorization on your production Kubernetes cluster, here are some things to consider: - *Set the authorization mode*: When the Kubernetes API server -([kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)) -starts, the supported authentication modes must be set using the *--authorization-mode* -flag. For example, that flag in the *kube-adminserver.yaml* file (in */etc/kubernetes/manifests*) -could be set to Node,RBAC. This would allow Node and RBAC authorization for authenticated requests. + ([kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)) + starts, the supported authentication modes must be set using the *--authorization-mode* + flag. For example, that flag in the *kube-adminserver.yaml* file (in */etc/kubernetes/manifests*) + could be set to Node,RBAC. This would allow Node and RBAC authorization for authenticated requests. - *Create user certificates and role bindings (RBAC)*: If you are using RBAC -authorization, users can create a CertificateSigningRequest (CSR) that can be -signed by the cluster CA. Then you can bind Roles and ClusterRoles to each user. -See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) -for details. + authorization, users can create a CertificateSigningRequest (CSR) that can be + signed by the cluster CA. Then you can bind Roles and ClusterRoles to each user. + See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) + for details. - *Create policies that combine attributes (ABAC)*: If you are using ABAC -authorization, you can assign combinations of attributes to form policies to -authorize selected users or groups to access particular resources (such as a -pod), namespace, or apiGroup. For more information, see -[Examples](/docs/reference/access-authn-authz/abac/#examples). + authorization, you can assign combinations of attributes to form policies to + authorize selected users or groups to access particular resources (such as a + pod), namespace, or apiGroup. For more information, see + [Examples](/docs/reference/access-authn-authz/abac/#examples). - *Consider Admission Controllers*: Additional forms of authorization for -requests that can come in through the API server include -[Webhook Token Authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication). -Webhooks and other special authorization types need to be enabled by adding -[Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) -to the API server. + requests that can come in through the API server include + [Webhook Token Authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication). + Webhooks and other special authorization types need to be enabled by adding + [Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) + to the API server. ## Set limits on workload resources @@ -256,38 +265,45 @@ of the Kubernetes control plane. Consider these items when setting up for the needs of your cluster's workloads: - *Set namespace limits*: Set per-namespace quotas on things like memory and CPU. 
See -[Manage Memory, CPU, and API Resources](/docs/tasks/administer-cluster/manage-resources/) -for details. You can also set -[Hierarchical Namespaces](/blog/2020/08/14/introducing-hierarchical-namespaces/) -for inheriting limits. + [Manage Memory, CPU, and API Resources](/docs/tasks/administer-cluster/manage-resources/) + for details. You can also set + [Hierarchical Namespaces](/blog/2020/08/14/introducing-hierarchical-namespaces/) + for inheriting limits. - *Prepare for DNS demand*: If you expect workloads to massively scale up, -your DNS service must be ready to scale up as well. See -[Autoscale the DNS service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/). + your DNS service must be ready to scale up as well. See + [Autoscale the DNS service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/). - *Create additional service accounts*: User accounts determine what users can -do on a cluster, while a service account defines pod access within a particular -namespace. By default, a pod takes on the default service account from its namespace. -See [Managing Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/) -for information on creating a new service account. For example, you might want to: - - Add secrets that a pod could use to pull images from a particular container registry. See [Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/) for an example. - - Assign RBAC permissions to a service account. See [ServiceAccount permissions](/docs/reference/access-authn-authz/rbac/#service-account-permissions) for details. + do on a cluster, while a service account defines pod access within a particular + namespace. By default, a pod takes on the default service account from its namespace. + See [Managing Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/) + for information on creating a new service account. For example, you might want to: + - Add secrets that a pod could use to pull images from a particular container registry. See + [Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/) + for an example. + - Assign RBAC permissions to a service account. See + [ServiceAccount permissions](/docs/reference/access-authn-authz/rbac/#service-account-permissions) + for details. ## {{% heading "whatsnext" %}} - Decide if you want to build your own production Kubernetes or obtain one from -available [Turnkey Cloud Solutions](/docs/setup/production-environment/turnkey-solutions/) -or [Kubernetes Partners](https://kubernetes.io/partners/). + available [Turnkey Cloud Solutions](/docs/setup/production-environment/turnkey-solutions/) + or [Kubernetes Partners](https://kubernetes.io/partners/). - If you choose to build your own cluster, plan how you want to -handle [certificates](/docs/setup/best-practices/certificates/) -and set up high availability for features such as -[etcd](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) -and the -[API server](/docs/setup/production-environment/tools/kubeadm/ha-topology/). -- Choose from [kubeadm](/docs/setup/production-environment/tools/kubeadm/), [kops](/docs/setup/production-environment/tools/kops/) or [Kubespray](/docs/setup/production-environment/tools/kubespray/) -deployment methods. 
+ handle [certificates](/docs/setup/best-practices/certificates/) + and set up high availability for features such as + [etcd](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) + and the + [API server](/docs/setup/production-environment/tools/kubeadm/ha-topology/). +- Choose from [kubeadm](/docs/setup/production-environment/tools/kubeadm/), + [kops](/docs/setup/production-environment/tools/kops/) or + [Kubespray](/docs/setup/production-environment/tools/kubespray/) + deployment methods. - Configure user management by determining your -[Authentication](/docs/reference/access-authn-authz/authentication/) and -[Authorization](/docs/reference/access-authn-authz/authorization/) methods. + [Authentication](/docs/reference/access-authn-authz/authentication/) and + [Authorization](/docs/reference/access-authn-authz/authorization/) methods. - Prepare for application workloads by setting up -[resource limits](/docs/tasks/administer-cluster/manage-resources/), -[DNS autoscaling](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/) -and [service accounts](/docs/reference/access-authn-authz/service-accounts-admin/). + [resource limits](/docs/tasks/administer-cluster/manage-resources/), + [DNS autoscaling](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/) + and [service accounts](/docs/reference/access-authn-authz/service-accounts-admin/). + diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index 1f658fa262b92..dd432afd3de57 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -36,8 +36,8 @@ part of Kubernetes (this removal was [announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation) as part of the v1.20 release). You can read -[Check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) to understand how this removal might -affect you. To learn about migrating from using dockershim, see +[Check whether Dockershim removal affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) +to understand how this removal might affect you. To learn about migrating from using dockershim, see [Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/). If you are running a version of Kubernetes other than v{{< skew currentVersion >}}, @@ -46,6 +46,41 @@ check the documentation for that version. +## Install and configure prerequisites + +The following steps apply common settings for Kubernetes nodes on Linux. + +You can skip a particular setting if you're certain you don't need it. + +For more information, see [Network Plugin Requirements](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements) or the documentation for your specific container runtime. + +### Forwarding IPv4 and letting iptables see bridged traffic + +Verify that the `br_netfilter` module is loaded by running `lsmod | grep br_netfilter`. + +To load it explicitly, run `sudo modprobe br_netfilter`. + +In order for a Linux node's iptables to correctly view bridged traffic, verify that `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config. 
For example: + +```bash +cat <}} +{{% tab name="Linux" %}} +You can find this file under the path `/etc/containerd/config.toml`. +{{% /tab %}} +{{< tab name="Windows" >}} +You can find this file under the path `C:\Program Files\containerd\config.toml`. +{{< /tab >}} +{{< /tabs >}} On Linux the default CRI socket for containerd is `/run/containerd/containerd.sock`. On Windows the default CRI endpoint is `npipe://./pipe/containerd-containerd`. @@ -185,6 +197,14 @@ To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] SystemdCgroup = true ``` +{{< note >}} +If you installed containerd from a package (for example, RPM or `.deb`), you may find +that the CRI integration plugin is disabled by default. + +You need CRI support enabled to use containerd with Kubernetes. Make sure that `cri` +is not included in the`disabled_plugins` list within `/etc/containerd/config.toml`; +if you made changes to that file, also restart `containerd`. +{{< /note >}} If you apply this change, make sure to restart containerd: @@ -193,7 +213,19 @@ sudo systemctl restart containerd ``` When using kubeadm, manually configure the -[cgroup driver for kubelet](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node). +[cgroup driver for kubelet](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver). + +#### Overriding the sandbox (pause) image {#override-pause-image-containerd} + +In your [containerd config](https://github.com/containerd/cri/blob/master/docs/config.md) you can overwrite the +sandbox image by setting the following config: + +```toml +[plugins."io.containerd.grpc.v1.cri"] + sandbox_image = "k8s.gcr.io/pause:3.2" +``` + +You might need to restart `containerd` as well once you've updated the config file: `systemctl restart containerd`. ### CRI-O @@ -221,6 +253,19 @@ in sync. For CRI-O, the CRI socket is `/var/run/crio/crio.sock` by default. +#### Overriding the sandbox (pause) image {#override-pause-image-cri-o} + +In your [CRI-O config](https://github.com/cri-o/cri-o/blob/main/docs/crio.conf.5.md) you can set the following +config value: + +```toml +[crio.image] +pause_image="registry.k8s.io/pause:3.6" +``` + +This config option supports live configuration reload to apply this change: `systemctl reload crio` or by sending +`SIGHUP` to the `crio` process. + ### Docker Engine {#docker} {{< note >}} @@ -237,6 +282,12 @@ Docker Engine with Kubernetes. For `cri-dockerd`, the CRI socket is `/run/cri-dockerd.sock` by default. +#### Overriding the sandbox (pause) image {#override-pause-image-cri-dockerd} + +The `cri-dockerd` adapter accepts a command line argument for +specifying which container image to use as the Pod infrastructure container (“pause image”). +The command line argument to use is `--pod-infra-container-image`. + ### Mirantis Container Runtime {#mcr} [Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html) (MCR) is a commercially @@ -251,6 +302,12 @@ visit [MCR Deployment Guide](https://docs.mirantis.com/mcr/20.10/install.html). Check the systemd unit named `cri-docker.socket` to find out the path to the CRI socket. 
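
As a quick sketch of that check, assuming the node uses systemd and has `cri-dockerd` installed alongside MCR (the exact unit contents will vary by installation):

```bash
# Print the systemd socket unit and pick out the listen path,
# which is the CRI socket the kubelet should be pointed at.
systemctl cat cri-docker.socket | grep ListenStream
```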
+#### Overriding the sandbox (pause) image {#override-pause-image-cri-dockerd-mcr} + +The `cri-dockerd` adapter accepts a command line argument for +specifying which container image to use as the Pod infrastructure container (“pause image”). +The command line argument to use is `--pod-infra-container-image`. + ## {{% heading "whatsnext" %}} As well as a container runtime, your cluster will need a working diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index d7897dfec5817..eedee3b5a304f 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -91,7 +91,8 @@ to not download the default container images which are hosted at `k8s.gcr.io`. Kubeadm has commands that can help you pre-pull the required images when creating a cluster without an internet connection on its nodes. -See [Running kubeadm without an internet connection](/docs/reference/setup-tools/kubeadm/kubeadm-init#without-internet-connection) for more details. +See [Running kubeadm without an internet connection](/docs/reference/setup-tools/kubeadm/kubeadm-init#without-internet-connection) +for more details. Kubeadm allows you to use a custom image repository for the required images. See [Using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images) @@ -365,7 +366,8 @@ The output is similar to this: 5didvk.d09sbcov8ph2amjw ``` -If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following command chain on the control-plane node: +If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the +following command chain on the control-plane node: ```bash openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \ @@ -506,7 +508,7 @@ options. * Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy) * See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) for details about upgrading your cluster using `kubeadm`. -* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm) +* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/) * Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/). * See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list of Pod network add-ons. @@ -544,8 +546,8 @@ field when using `--config`. This option will control the versions of kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy. Example: -* kubeadm is at {{< skew latestVersion >}} -* `kubernetesVersion` must be at {{< skew latestVersion >}} or {{< skew prevMinorVersion >}} +* kubeadm is at {{< skew currentVersion >}} +* `kubernetesVersion` must be at {{< skew currentVersion >}} or {{< skew currentVersionAddMinor -1 >}} ### kubeadm's skew against the kubelet @@ -553,8 +555,8 @@ Similarly to the Kubernetes version, kubeadm can be used with a kubelet version version as kubeadm or one version older. 
Example: -* kubeadm is at {{< skew latestVersion >}} -* kubelet on the host must be at {{< skew latestVersion >}} or {{< skew prevMinorVersion >}} +* kubeadm is at {{< skew currentVersion >}} +* kubelet on the host must be at {{< skew currentVersion >}} or {{< skew currentVersionAddMinor -1 >}} ### kubeadm's skew against kubeadm @@ -567,17 +569,17 @@ the same node with `kubeadm upgrade`. Similar rules apply to the rest of the kub with the exception of `kubeadm upgrade`. Example for `kubeadm join`: -* kubeadm version {{< skew latestVersion >}} was used to create a cluster with `kubeadm init` -* Joining nodes must use a kubeadm binary that is at version {{< skew latestVersion >}} +* kubeadm version {{< skew currentVersion >}} was used to create a cluster with `kubeadm init` +* Joining nodes must use a kubeadm binary that is at version {{< skew currentVersion >}} Nodes that are being upgraded must use a version of kubeadm that is the same MINOR version or one MINOR version newer than the version of kubeadm used for managing the node. Example for `kubeadm upgrade`: -* kubeadm version {{< skew prevMinorVersion >}} was used to create or upgrade the node -* The version of kubeadm used for upgrading the node must be at {{< skew prevMinorVersion >}} -or {{< skew latestVersion >}} +* kubeadm version {{< skew currentVersionAddMinor -1 >}} was used to create or upgrade the node +* The version of kubeadm used for upgrading the node must be at {{< skew currentVersionAddMinor -1 >}} +or {{< skew currentVersion >}} To learn more about the version skew between the different Kubernetes component see the [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/). @@ -603,7 +605,7 @@ Workarounds: kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x following the [multi-platform -proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multi-platform.md). +proposal](https://git.k8s.io/design-proposals-archive/multi-platform.md). Multiplatform container images for the control plane and addons are also supported since v1.12. @@ -613,4 +615,6 @@ supports your chosen platform. ## Troubleshooting {#troubleshooting} -If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/). +If you are running into difficulties with kubeadm, please consult our +[troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/). + diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 8ccc267224ef7..993ecf3878d24 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -45,26 +45,6 @@ may [fail](https://github.com/kubernetes/kubeadm/issues/31). If you have more than one network adapter, and your Kubernetes components are not reachable on the default route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter. -## Letting iptables see bridged traffic - -Make sure that the `br_netfilter` module is loaded. This can be done by running `lsmod | grep br_netfilter`. To load it explicitly call `sudo modprobe br_netfilter`. 
- -As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g. - -```bash -cat <}} -kubeadm contains all the necessary crytographic machinery to generate +kubeadm contains all the necessary cryptographic machinery to generate the certificates described below; no other cryptographic tooling is required for this example. {{< /note >}} diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md index ac8d89ee3a17b..7b195a34fd0a3 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md @@ -114,7 +114,7 @@ sudo kubeadm reset A possible solution is to restart the container runtime and then re-run `kubeadm reset`. You can also use `crictl` to debug the state of the container runtime. See -[Debugging Kubernetes nodes with crictl](/docs/tasks/debug-application-cluster/crictl/). +[Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/). ## Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state diff --git a/content/en/docs/setup/production-environment/tools/kubespray.md b/content/en/docs/setup/production-environment/tools/kubespray.md index fd594b92f8915..e0525562c6556 100644 --- a/content/en/docs/setup/production-environment/tools/kubespray.md +++ b/content/en/docs/setup/production-environment/tools/kubespray.md @@ -6,7 +6,7 @@ weight: 30 -This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray). +This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray). Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. 
Kubespray provides: @@ -46,7 +46,7 @@ Kubespray provides the following utilities to help provision your environment: * [Terraform](https://www.terraform.io/) scripts for the following cloud providers: * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws) * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack) - * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet) + * [Equinix Metal](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/metal) ### (2/5) Compose an inventory file diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md deleted file mode 100644 index 9c6ab896d7627..0000000000000 --- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ /dev/null @@ -1,917 +0,0 @@ ---- -reviewers: -- jayunit100 -- jsturtevant -- marosset -- perithompson -title: Windows containers in Kubernetes -content_type: concept -weight: 65 ---- - - - -Windows applications constitute a large portion of the services and applications that -run in many organizations. [Windows containers](https://aka.ms/windowscontainers) -provide a way to encapsulate processes and package dependencies, making it easier -to use DevOps practices and follow cloud native patterns for Windows applications. - -Organizations with investments in Windows-based applications and Linux-based -applications don't have to look for separate orchestrators to manage their workloads, -leading to increased operational efficiencies across their deployments, regardless -of operating system. - - - -## Windows nodes in Kubernetes - -To enable the orchestration of Windows containers in Kubernetes, include Windows nodes -in your existing Linux cluster. Scheduling Windows containers in -{{< glossary_tooltip text="Pods" term_id="pod" >}} on Kubernetes is similar to -scheduling Linux-based containers. - -In order to run Windows containers, your Kubernetes cluster must include -multiple operating systems. -While you can only run the {{< glossary_tooltip text="control plane" term_id="control-plane" >}} on Linux, you can deploy worker nodes running either Windows or Linux depending on your workload needs. - -Windows {{< glossary_tooltip text="nodes" term_id="node" >}} are -[supported](#windows-os-version-support) provided that the operating system is -Windows Server 2019. - -This document uses the term *Windows containers* to mean Windows containers with -process isolation. Kubernetes does not support running Windows containers with -[Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container). - -## Compatibility and limitations {#limitations} - -Some node features are only available if you use a specific -[container runtime](#container-runtime); others are not available on Windows nodes, -including: - -* HugePages: not supported for Windows containers -* Privileged containers: not supported for Windows containers -* TerminationGracePeriod: requires containerD - -Not all features of shared namespaces are supported. See [API compatibility](#api) -for more details. - -See [Windows OS version compatibility](#windows-os-version-support) for details on -the Windows versions that Kubernetes is tested against. 
- -From an API and kubectl perspective, Windows containers behave in much the same -way as Linux-based containers. However, there are some notable differences in key -functionality which are outlined in this section. - -### Comparison with Linux {#compatibility-linux-similarities} - -Key Kubernetes elements work the same way in Windows as they do in Linux. This -section refers to several key workload enablers and how they map to Windows. - -* [Pods](/docs/concepts/workloads/pods/) - - A Pod is the basic building block of Kubernetes–the smallest and simplest unit in - the Kubernetes object model that you create or deploy. You may not deploy Windows and - Linux containers in the same Pod. All containers in a Pod are scheduled onto a single - Node where each Node represents a specific platform and architecture. The following - Pod capabilities, properties and events are supported with Windows containers: - - * Single or multiple containers per Pod with process isolation and volume sharing - * Pod `status` fields - * Readiness and Liveness probes - * postStart & preStop container lifecycle events - * ConfigMap, Secrets: as environment variables or volumes - * `emptyDir` volumes - * Named pipe host mounts - * Resource limits - * OS field: - - The `.spec.os.name` field should be set to `windows` to indicate that the current Pod uses Windows containers. - The `IdentifyPodOS` feature gate needs to be enabled for this field to be recognized and used by control plane - components and kubelet. - - {{< note >}} - Starting from 1.24, the `IdentifyPodOS` feature gate is in Beta stage and defaults to be enabled. - {{< /note >}} - - If the `IdentifyPodOS` feature gate is enabled and you set the `.spec.os.name` field to `windows`, - you must not set the following fields in the `.spec` of that Pod: - - * `spec.hostPID` - * `spec.hostIPC` - * `spec.securityContext.seLinuxOptions` - * `spec.securityContext.seccompProfile` - * `spec.securityContext.fsGroup` - * `spec.securityContext.fsGroupChangePolicy` - * `spec.securityContext.sysctls` - * `spec.shareProcessNamespace` - * `spec.securityContext.runAsUser` - * `spec.securityContext.runAsGroup` - * `spec.securityContext.supplementalGroups` - * `spec.containers[*].securityContext.seLinuxOptions` - * `spec.containers[*].securityContext.seccompProfile` - * `spec.containers[*].securityContext.capabilities` - * `spec.containers[*].securityContext.readOnlyRootFilesystem` - * `spec.containers[*].securityContext.privileged` - * `spec.containers[*].securityContext.allowPrivilegeEscalation` - * `spec.containers[*].securityContext.procMount` - * `spec.containers[*].securityContext.runAsUser` - * `spec.containers[*].securityContext.runAsGroup` - - In the above list, wildcards (`*`) indicate all elements in a list. - For example, `spec.containers[*].securityContext` refers to the SecurityContext object - for all containers. If any of these fields is specified, the Pod will - not be admited by the API server. - -* [Workload resources](/docs/concepts/workloads/controllers/) including: - * ReplicaSet - * Deployments - * StatefulSets - * DaemonSet - * Job - * CronJob - * ReplicationController -* {{< glossary_tooltip text="Services" term_id="service" >}} - See [Load balancing and Services](#load-balancing-and-services) for more details. - -Pods, workload resources, and Services are critical elements to managing Windows -workloads on Kubernetes. 
However, on their own they are not enough to enable -the proper lifecycle management of Windows workloads in a dynamic cloud native -environment. Kubernetes also supports: - -* `kubectl exec` -* Pod and container metrics -* {{< glossary_tooltip text="Horizontal pod autoscaling" term_id="horizontal-pod-autoscaler" >}} -* {{< glossary_tooltip text="Resource quotas" term_id="resource-quota" >}} -* Scheduler preemption - - -### Networking on Windows nodes {#compatibility-networking} - -Networking for Windows containers is exposed through -[CNI plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). -Windows containers function similarly to virtual machines in regards to -networking. Each container has a virtual network adapter (vNIC) which is connected -to a Hyper-V virtual switch (vSwitch). The Host Networking Service (HNS) and the -Host Compute Service (HCS) work together to create containers and attach container -vNICs to networks. HCS is responsible for the management of containers whereas HNS -is responsible for the management of networking resources such as: - -* Virtual networks (including creation of vSwitches) -* Endpoints / vNICs -* Namespaces -* Policies including packet encapsulations, load-balancing rules, ACLs, and NAT rules. - -#### Container networking {#networking} - -The Windows HNS and vSwitch implement namespacing and can -create virtual NICs as needed for a pod or container. However, many configurations such -as DNS, routes, and metrics are stored in the Windows registry database rather than as -files inside `/etc`, which is how Linux stores those configurations. The Windows registry for the container -is separate from that of the host, so concepts like mapping `/etc/resolv.conf` from -the host into a container don't have the same effect they would on Linux. These must -be configured using Windows APIs run in the context of that container. Therefore -CNI implementations need to call the HNS instead of relying on file mappings to pass -network details into the pod or container. - -The following networking functionality is _not_ supported on Windows nodes: - -* Host networking mode -* Local NodePort access from the node itself (works for other nodes or external clients) -* More than 64 backend pods (or unique destination addresses) for a single Service -* IPv6 communication between Windows pods connected to overlay networks -* Local Traffic Policy in non-DSR mode -* Outbound communication using the ICMP protocol via the `win-overlay`, `win-bridge`, or using the Azure-CNI plugin.\ - Specifically, the Windows data plane ([VFP](https://www.microsoft.com/en-us/research/project/azure-virtual-filtering-platform/)) doesn't support ICMP packet transpositions, and this means: - * ICMP packets directed to destinations within the same network (such as pod to pod communication via ping) work as expected and without any limitations; - * TCP/UDP packets work as expected and without any limitations; - * ICMP packets directed to pass through a remote network (e.g. pod to external internet communication via ping) cannot be transposed and thus will not be routed back to their source; - * Since TCP/UDP packets can still be transposed, you can substitute `ping ` with `curl ` to get some debugging insight into connectivity with the outside world. - -Overlay networking support in kube-proxy is a beta feature. In addition, it requires -[KB4482887](https://support.microsoft.com/en-us/help/4482887/windows-10-update-kb4482887) -to be installed on Windows Server 2019. 
- -#### Network modes - -Windows supports five different networking drivers/modes: L2bridge, L2tunnel, -Overlay (beta), Transparent, and NAT. In a heterogeneous cluster with Windows and Linux -worker nodes, you need to select a networking solution that is compatible on both -Windows and Linux. The following out-of-tree plugins are supported on Windows, -with recommendations on when to use each CNI: - -| Network Driver | Description | Container Packet Modifications | Network Plugins | Network Plugin Characteristics | -| -------------- | ----------- | ------------------------------ | --------------- | ------------------------------ | -| L2bridge | Containers are attached to an external vSwitch. Containers are attached to the underlay network, although the physical network doesn't need to learn the container MACs because they are rewritten on ingress/egress. | MAC is rewritten to host MAC, IP may be rewritten to host IP using HNS OutboundNAT policy. | [win-bridge](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-bridge), [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md), Flannel host-gateway uses win-bridge | win-bridge uses L2bridge network mode, connects containers to the underlay of hosts, offering best performance. Requires user-defined routes (UDR) for inter-node connectivity. | -| L2Tunnel | This is a special case of l2bridge, but only used on Azure. All packets are sent to the virtualization host where SDN policy is applied. | MAC rewritten, IP visible on the underlay network | [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) | Azure-CNI allows integration of containers with Azure vNET, and allows them to leverage the set of capabilities that [Azure Virtual Network provides](https://azure.microsoft.com/en-us/services/virtual-network/). For example, securely connect to Azure services or use Azure NSGs. See [azure-cni for some examples](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking) | -| Overlay (Overlay networking for Windows in Kubernetes is in *alpha* stage) | Containers are given a vNIC connected to an external vSwitch. Each overlay network gets its own IP subnet, defined by a custom IP prefix.The overlay network driver uses VXLAN encapsulation. | Encapsulated with an outer header. | [win-overlay](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-overlay), Flannel VXLAN (uses win-overlay) | win-overlay should be used when virtual container networks are desired to be isolated from underlay of hosts (e.g. for security reasons). Allows for IPs to be re-used for different overlay networks (which have different VNID tags) if you are restricted on IPs in your datacenter. This option requires [KB4489899](https://support.microsoft.com/help/4489899) on Windows Server 2019. | -| Transparent (special use case for [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)) | Requires an external vSwitch. Containers are attached to an external vSwitch which enables intra-pod communication via logical networks (logical switches and routers). | Packet is encapsulated either via [GENEVE](https://datatracker.ietf.org/doc/draft-gross-geneve/) or [STT](https://datatracker.ietf.org/doc/draft-davie-stt/) tunneling to reach pods which are not on the same host.
    Packets are forwarded or dropped based on the tunnel metadata supplied by the OVN network controller.
    NAT is done for north-south communication. | [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) | [Deploy via ansible](https://github.com/openvswitch/ovn-kubernetes/tree/master/contrib). Distributed ACLs can be applied via Kubernetes policies. IPAM support. Load-balancing can be achieved without kube-proxy. NATing is done without using iptables/netsh. | -| NAT (*not used in Kubernetes*) | Containers are given a vNIC connected to an internal vSwitch. DNS/DHCP is provided using an internal component called [WinNAT](https://techcommunity.microsoft.com/t5/virtualization/windows-nat-winnat-capabilities-and-limitations/ba-p/382303) | MAC and IP is rewritten to host MAC/IP. | [nat](https://github.com/Microsoft/windows-container-networking/tree/master/plugins/nat) | Included here for completeness | - -As outlined above, the [Flannel](https://github.com/coreos/flannel) -CNI [meta plugin](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel) -is also [supported](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel#windows-support-experimental) on Windows via the -[VXLAN network backend](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) (**alpha support** ; delegates to win-overlay) -and [host-gateway network backend](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) (stable support; delegates to win-bridge). - -This plugin supports delegating to one of the reference CNI plugins (win-overlay, -win-bridge), to work in conjunction with Flannel daemon on Windows (Flanneld) for -automatic node subnet lease assignment and HNS network creation. This plugin reads -in its own configuration file (cni.conf), and aggregates it with the environment -variables from the FlannelD generated subnet.env file. It then delegates to one of -the reference CNI plugins for network plumbing, and sends the correct configuration -containing the node-assigned subnet to the IPAM plugin (for example: `host-local`). - -For Node, Pod, and Service objects, the following network flows are supported for -TCP/UDP traffic: - -* Pod → Pod (IP) -* Pod → Pod (Name) -* Pod → Service (Cluster IP) -* Pod → Service (PQDN, but only if there are no ".") -* Pod → Service (FQDN) -* Pod → external (IP) -* Pod → external (DNS) -* Node → Pod -* Pod → Node - -#### CNI plugin limitations - -* Windows reference network plugins win-bridge and win-overlay do not implement - [CNI spec](https://github.com/containernetworking/cni/blob/master/SPEC.md) v0.4.0, - due to a missing `CHECK` implementation. -* The Flannel VXLAN CNI plugin has the following limitations on Windows: - -1. Node-pod connectivity isn't possible by design. It's only possible for local pods with Flannel v0.12.0 (or higher). -2. Flannel is restricted to using VNI 4096 and UDP port 4789. See the official - [Flannel VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) - backend docs for more details on these parameters. 
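As an illustration of those constraints, a flannel VXLAN backend configuration (`net-conf.json`) for a cluster with Windows nodes might look like the following sketch; the Pod network CIDR shown is only an example:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan",
    "VNI": 4096,
    "Port": 4789
  }
}
```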
- -#### IP address management (IPAM) {#ipam} - -The following IPAM options are supported on Windows: - -* [host-local](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/host-local) -* HNS IPAM (Inbox platform IPAM, this is a fallback when no IPAM is set) -* [azure-vnet-ipam](https://github.com/Azure/azure-container-networking/blob/master/docs/ipam.md) (for azure-cni only) - -#### Load balancing and Services - -A Kubernetes {{< glossary_tooltip text="Service" term_id="service" >}} is an abstraction -that defines a logical set of Pods and a means to access them over a network. -In a cluster that includes Windows nodes, you can use the following types of Service: - - * `NodePort` - * `ClusterIP` - * `LoadBalancer` - * `ExternalName` - -{{< warning >}} -There are known issue with NodePort services on overlay networking, if the target destination node is running Windows Server 2022. -To avoid the issue entirely, you can configure the service with `externalTrafficPolicy: Local`. - -There are known issues with pod to pod connectivity on l2bridge network on Windows Server 2022 with KB5005619 or higher installed. -To workaround the issue and restore pod-pod connectivity, you can disable the WinDSR feature in kube-proxy. - -These issues require OS fixes. -Please follow https://github.com/microsoft/Windows-Containers/issues/204 for updates. -{{< /warning >}} - -Windows container networking differs in some important ways from Linux networking. -The [Microsoft documentation for Windows Container Networking](https://docs.microsoft.com/en-us/virtualization/windowscontainers/container-networking/architecture) provides -additional details and background. - -On Windows, you can use the following settings to configure Services and load -balancing behavior: - -{{< table caption="Windows Service Settings" >}} -| Feature | Description | Supported Kubernetes version | Supported Windows OS build | How to enable | -| ------- | ----------- | ----------------------------- | -------------------------- | ------------- | -| Session affinity | Ensures that connections from a particular client are passed to the same Pod each time. | v1.20+ | [Windows Server vNext Insider Preview Build 19551](https://blogs.windows.com/windowsexperience/2020/01/28/announcing-windows-server-vnext-insider-preview-build-19551/) (or higher) | Set `service.spec.sessionAffinity` to "ClientIP" | -| Direct Server Return (DSR) | Load balancing mode where the IP address fixups and the LBNAT occurs at the container vSwitch port directly; service traffic arrives with the source IP set as the originating pod IP. | v1.20+ | Windows Server 2019 | Set the following flags in kube-proxy: `--feature-gates="WinDSR=true" --enable-dsr=true` | -| Preserve-Destination | Skips DNAT of service traffic, thereby preserving the virtual IP of the target service in packets reaching the backend Pod. Also disables node-node forwarding. | v1.20+ | Windows Server, version 1903 (or higher) | Set `"preserve-destination": "true"` in service annotations and enable DSR in kube-proxy. | -| IPv4/IPv6 dual-stack networking | Native IPv4-to-IPv4 in parallel with IPv6-to-IPv6 communications to, from, and within a cluster | v1.19+ | Windows Server, version 2019 | See [IPv4/IPv6 dual-stack](#ipv4ipv6-dual-stack) | -| Client IP preservation | Ensures that source IP of incoming ingress traffic gets preserved. Also disables node-node forwarding. 
| v1.20+ | Windows Server, version 2019 | Set `service.spec.externalTrafficPolicy` to "Local" and enable DSR in kube-proxy | -{{< /table >}} - -##### Session affinity - -Setting the maximum session sticky time for Windows services using -`service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` is not supported. - -#### DNS {#dns-limitations} - -* ClusterFirstWithHostNet is not supported for DNS. Windows treats all names with a - `.` as a FQDN and skips FQDN resolution -* On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On - Windows, you can only have 1 DNS suffix, which is the DNS suffix associated with that - pod's namespace (mydns.svc.cluster.local for example). Windows can resolve FQDNs - and services or names resolvable with just that suffix. For example, a pod spawned - in the default namespace, will have the DNS suffix **default.svc.cluster.local**. - Inside a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** - and **kubernetes**, but not the in-betweens, like **kubernetes.default** or - **kubernetes.default.svc**. -* On Windows, there are multiple DNS resolvers that can be used. As these come with - slightly different behaviors, using the `Resolve-DNSName` utility for name query - resolutions is recommended. - -#### IPv6 networking - -Kubernetes on Windows does not support single-stack "IPv6-only" networking. However, -dual-stack IPv4/IPv6 networking for pods and nodes with single-family services -is supported. - -You can use IPv4/IPv6 dual-stack networking with `l2bridge` networks. See [configure IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#configure-ipv4-ipv6-dual-stack) for more details. - -{{< note >}} -Overlay (VXLAN) networks on Windows do not support dual-stack networking. -{{< /note >}} - -### Persistent storage {#compatibility-storage} - -Windows has a layered filesystem driver to mount container layers and create a copy -filesystem based on NTFS. All file paths in the container are resolved only within -the context of that container. - -* With Docker, volume mounts can only target a directory in the container, and not - an individual file. This limitation does not exist with CRI-containerD runtime. -* Volume mounts cannot project files or directories back to the host filesystem. -* Read-only filesystems are not supported because write access is always required - for the Windows registry and SAM database. However, read-only volumes are supported. -* Volume user-masks and permissions are not available. Because the SAM is not shared - between the host & container, there's no mapping between them. All permissions are - resolved within the context of the container. - -As a result, the following storage functionality is not supported on Windows nodes: - -* Volume subpath mounts: only the entire volume can be mounted in a Windows container -* Subpath volume mounting for Secrets -* Host mount projection -* Read-only root filesystem (mapped volumes still support `readOnly`) -* Block device mapping -* Memory as the storage medium (for example, `emptyDir.medium` set to `Memory`) -* File system features like uid/gid; per-user Linux filesystem permissions -* DefaultMode (due to UID/GID dependency) -* NFS based storage/volume support -* Expanding the mounted volume (resizefs) - -Kubernetes {{< glossary_tooltip text="volumes" term_id="volume" >}} enable complex -applications, with data persistence and Pod volume sharing requirements, to be deployed -on Kubernetes. 
Management of persistent volumes associated with a specific storage -back-end or protocol includes actions such as provisioning/de-provisioning/resizing -of volumes, attaching/detaching a volume to/from a Kubernetes node and -mounting/dismounting a volume to/from individual containers in a pod that needs to -persist data. - -The code implementing these volume management actions for a specific storage back-end -or protocol is shipped in the form of a Kubernetes volume -[plugin](/docs/concepts/storage/volumes/#types-of-volumes). -The following broad classes of Kubernetes volume plugins are supported on Windows: - -##### In-tree volume plugins - -Code associated with in-tree volume plugins ship as part of the core Kubernetes code -base. Deployment of in-tree volume plugins do not require installation of additional -scripts or deployment of separate containerized plugin components. These plugins can -handle provisioning/de-provisioning and resizing of volumes in the storage backend, -attaching/detaching of volumes to/from a Kubernetes node and mounting/dismounting a -volume to/from individual containers in a pod. The following in-tree plugins support -persistent storage on Windows nodes: - -* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) -* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) -* [`azureFile`](/docs/concepts/storage/volumes/#azurefile) -* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk) -* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) - -#### FlexVolume plugins - -Code associated with [FlexVolume](/docs/concepts/storage/volumes/#flexVolume) -plugins ship as out-of-tree scripts or binaries that need to be deployed directly -on the host. FlexVolume plugins handle attaching/detaching of volumes to/from a -Kubernetes node and mounting/dismounting a volume to/from individual containers -in a pod. Provisioning/De-provisioning of persistent volumes associated -with FlexVolume plugins may be handled through an external provisioner that -is typically separate from the FlexVolume plugins. The following FlexVolume -[plugins](https://github.com/Microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows), -deployed as PowerShell scripts on the host, support Windows nodes: - -* [SMB](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~smb.cmd) -* [iSCSI](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~iscsi.cmd) - -#### CSI plugins - -{{< feature-state for_k8s_version="v1.19" state="beta" >}} - -Code associated with {{< glossary_tooltip text="CSI" term_id="csi" >}} plugins ship -as out-of-tree scripts and binaries that are typically distributed as container -images and deployed using standard Kubernetes constructs like DaemonSets and -StatefulSets. -CSI plugins handle a wide range of volume management actions in Kubernetes: -provisioning/de-provisioning/resizing of volumes, attaching/detaching of volumes -to/from a Kubernetes node and mounting/dismounting a volume to/from individual -containers in a pod, backup/restore of persistent data using snapshots and cloning. -CSI plugins typically consist of node plugins (that run on each node as a DaemonSet) -and controller plugins. 
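To inspect which CSI drivers are registered in a cluster and what each node, including Windows nodes, advertises for them, you can query the storage API objects; a minimal sketch:

```shell
# List registered CSI drivers and the per-node CSI information
kubectl get csidrivers
kubectl get csinodes
```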
- -CSI node plugins (especially those associated with persistent volumes exposed as -either block devices or over a shared file-system) need to perform various privileged -operations like scanning of disk devices, mounting of file systems, etc. These -operations differ for each host operating system. For Linux worker nodes, containerized -CSI node plugins are typically deployed as privileged containers. For Windows worker -nodes, privileged operations for containerized CSI node plugins is supported using -[csi-proxy](https://github.com/kubernetes-csi/csi-proxy), a community-managed, -stand-alone binary that needs to be pre-installed on each Windows node. - -For more details, refer to the deployment guide of the CSI plugin you wish to deploy. - -### Command line options for the kubelet {#kubelet-compatibility} - -The behavior of some kubelet command line options behave differently on Windows, as described below: - -* The `--windows-priorityclass` lets you set the scheduling priority of the kubelet process (see [CPU resource management](/docs/concepts/configuration/windows-resource-management/#resource-management-cpu)) -* The `--kubelet-reserve`, `--system-reserve` , and `--eviction-hard` flags update [NodeAllocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) -* Eviction by using `--enforce-node-allocable` is not implemented -* Eviction by using `--eviction-hard` and `--eviction-soft` are not implemented -* A kubelet running on a Windows node does not have memory - restrictions. `--kubelet-reserve` and `--system-reserve` do not set limits on - kubelet or processes running on the host. This means kubelet or a process on the host - could cause memory resource starvation outside the node-allocatable and scheduler. -* The `MemoryPressure` Condition is not implemented -* The kubelet does not take OOM eviction actions - -### API compatibility {#api} - -There are no differences in how most of the Kubernetes APIs work for Windows. The -subtleties around what's different come down to differences in the OS and container -runtime. In certain situations, some properties on workload resources were designed -under the assumption that they would be implemented on Linux, and fail to run on Windows. - -At a high level, these OS concepts are different: - -* Identity - Linux uses userID (UID) and groupID (GID) which - are represented as integer types. User and group names - are not canonical - they are just an alias in `/etc/groups` - or `/etc/passwd` back to UID+GID. Windows uses a larger binary - [security identifier](https://docs.microsoft.com/en-us/windows/security/identity-protection/access-control/security-identifiers) (SID) - which is stored in the Windows Security Access Manager (SAM) database. This - database is not shared between the host and containers, or between containers. -* File permissions - Windows uses an access control list based on (SIDs), whereas - POSIX systems such as Linux use a bitmask based on object permissions and UID+GID, - plus _optional_ access control lists. -* File paths - the convention on Windows is to use `\` instead of `/`. The Go IO - libraries typically accept both and just make it work, but when you're setting a - path or command line that's interpreted inside a container, `\` may be needed. -* Signals - Windows interactive apps handle termination differently, and can - implement one or more of these: - * A UI thread handles well-defined messages including `WM_CLOSE`. 
- * Console apps handle Ctrl-C or Ctrl-break using a Control Handler. - * Services register a Service Control Handler function that can accept - `SERVICE_CONTROL_STOP` control codes. - -Container exit codes follow the same convention where 0 is success, and nonzero is failure. -The specific error codes may differ across Windows and Linux. However, exit codes -passed from the Kubernetes components (kubelet, kube-proxy) are unchanged. - -##### Field compatibility for container specifications {#compatibility-v1-pod-spec-containers} - -The following list documents differences between how Pod container specifications -work between Windows and Linux: - -* Huge pages are not implemented in the Windows container - runtime, and are not available. They require [asserting a user - privilege](https://docs.microsoft.com/en-us/windows/desktop/Memory/large-page-support) - that's not configurable for containers. -* `requests.cpu` and `requests.memory` - requests are subtracted - from node available resources, so they can be used to avoid overprovisioning a - node. However, they cannot be used to guarantee resources in an overprovisioned - node. They should be applied to all containers as a best practice if the operator - wants to avoid overprovisioning entirely. -* `securityContext.allowPrivilegeEscalation` - - not possible on Windows; none of the capabilities are hooked up -* `securityContext.capabilities` - - POSIX capabilities are not implemented on Windows -* `securityContext.privileged` - - Windows doesn't support privileged containers -* `securityContext.procMount` - - Windows doesn't have a `/proc` filesystem -* `securityContext.readOnlyRootFilesystem` - - not possible on Windows; write access is required for registry & system - processes to run inside the container -* `securityContext.runAsGroup` - - not possible on Windows as there is no GID support -* `securityContext.runAsNonRoot` - - this setting will prevent containers from running as `ContainerAdministrator` - which is the closest equivalent to a root user on Windows. -* `securityContext.runAsUser` - - use [`runAsUserName`](/docs/tasks/configure-pod-container/configure-runasusername) - instead -* `securityContext.seLinuxOptions` - - not possible on Windows as SELinux is Linux-specific -* `terminationMessagePath` - - this has some limitations in that Windows doesn't support mapping single files. The - default value is `/dev/termination-log`, which does work because it does not - exist on Windows by default. - -##### Field compatibility for Pod specifications {#compatibility-v1-pod} - -The following list documents differences between how Pod specifications work between Windows and Linux: - -* `hostIPC` and `hostpid` - host namespace sharing is not possible on Windows -* `hostNetwork` - There is no Windows OS support to share the host network -* `dnsPolicy` - setting the Pod `dnsPolicy` to `ClusterFirstWithHostNet` is - not supported on Windows because host networking is not provided. Pods always - run with a container network. -* `podSecurityContext` (see below) -* `shareProcessNamespace` - this is a beta feature, and depends on Linux namespaces - which are not implemented on Windows. Windows cannot share process namespaces or - the container's root filesystem. Only the network can be shared. -* `terminationGracePeriodSeconds` - this is not fully implemented in Docker on Windows, - see the [GitHub issue](https://github.com/moby/moby/issues/25982). 
- The behavior today is that the ENTRYPOINT process is sent CTRL_SHUTDOWN_EVENT, - then Windows waits 5 seconds by default, and finally shuts down - all processes using the normal Windows shutdown behavior. The 5 - second default is actually in the Windows registry - [inside the container](https://github.com/moby/moby/issues/25982#issuecomment-426441183), - so it can be overridden when the container is built. -* `volumeDevices` - this is a beta feature, and is not implemented on Windows. - Windows cannot attach raw block devices to pods. -* `volumes` - * If you define an `emptyDir` volume, you cannot set its volume source to `memory`. -* You cannot enable `mountPropagation` for volume mounts as this is not - supported on Windows. - -##### Field compatibility for Pod security context {#compatibility-v1-pod-spec-containers-securitycontext} - -None of the Pod [`securityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) fields work on Windows. - -### Node problem detector - -The node problem detector (see -[Monitor Node Health](/docs/tasks/debug/debug-cluster/monitor-node-health/)) -is not compatible with Windows. - -### Pause container - -In a Kubernetes Pod, an infrastructure or “pause” container is first created -to host the container. In Linux, the cgroups and namespaces that make up a pod -need a process to maintain their continued existence; the pause process provides -this. Containers that belong to the same pod, including infrastructure and worker -containers, share a common network endpoint (same IPv4 and / or IPv6 address, same -network port spaces). Kubernetes uses pause containers to allow for worker containers -crashing or restarting without losing any of the networking configuration. - -Kubernetes maintains a multi-architecture image that includes support for Windows. -For Kubernetes v{{< skew currentVersion >}} the recommended pause image is `k8s.gcr.io/pause:3.6`. -The [source code](https://github.com/kubernetes/kubernetes/tree/master/build/pause) -is available on GitHub. - -Microsoft maintains a different multi-architecture image, with Linux and Windows -amd64 support, that you can find as `mcr.microsoft.com/oss/kubernetes/pause:3.6`. -This image is built from the same source as the Kubernetes maintained image but -all of the Windows binaries are [authenticode signed](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/authenticode) by Microsoft. -The Kubernetes project recommends using the Microsoft maintained image if you are -deploying to a production or production-like environment that requires signed -binaries. - -### Container runtimes {#container-runtime} - -You need to install a -{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}} -into each node in the cluster so that Pods can run there. - -The following container runtimes work with Windows: - -{{% thirdparty-content %}} - -#### cri-containerd - -{{< feature-state for_k8s_version="v1.20" state="stable" >}} - -You can use {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+ -as the container runtime for Kubernetes nodes that run Windows. - -Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#install-containerd). - -{{< note >}} -There is a [known limitation](/docs/tasks/configure-pod-container/configure-gmsa/#gmsa-limitations) -when using GMSA with containerd to access Windows network shares, which requires a -kernel patch. 
-{{< /note >}} - -#### Mirantis Container Runtime {#mcr} - -[Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html) (MCR) is available as a container runtime for all Windows Server 2019 and later versions. - -See [Install MCR on Windows Servers](https://docs.mirantis.com/mcr/20.10/install/mcr-windows.html) for more information. - -## Windows OS version compatibility {#windows-os-version-support} - -On Windows nodes, strict compatibility rules apply where the host OS version must -match the container base image OS version. Only Windows containers with a container -operating system of Windows Server 2019 are fully supported. - -For Kubernetes v{{< skew currentVersion >}}, operating system compatibility for Windows nodes (and Pods) -is as follows: - -Windows Server LTSC release -: Windows Server 2019 -: Windows Server 2022 - -Windows Server SAC release -: Windows Server version 20H2 - -The Kubernetes [version-skew policy](/docs/setup/release/version-skew-policy/) also applies. - -## Getting help and troubleshooting {#troubleshooting} - -Your main source of help for troubleshooting your Kubernetes cluster should start -with the [Troubleshooting](/docs/tasks/debug/debug-cluster/) -page. - -Some additional, Windows-specific troubleshooting help is included -in this section. Logs are an important element of troubleshooting -issues in Kubernetes. Make sure to include them any time you seek -troubleshooting assistance from other contributors. Follow the -instructions in the -SIG Windows [contributing guide on gathering logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs). - -### Node-level troubleshooting {#troubleshooting-node} - -1. How do I know `start.ps1` completed successfully? - - You should see kubelet, kube-proxy, and (if you chose Flannel as your networking - solution) flanneld host-agent processes running on your node, with running logs - being displayed in separate PowerShell windows. In addition to this, your Windows - node should be listed as "Ready" in your Kubernetes cluster. - -1. Can I configure the Kubernetes node processes to run in the background as services? - - The kubelet and kube-proxy are already configured to run as native Windows Services, - offering resiliency by re-starting the services automatically in the event of - failure (for example a process crash). You have two options for configuring these - node components as services. - - 1. As native Windows Services - - You can run the kubelet and kube-proxy as native Windows Services using `sc.exe`. - - ```powershell - # Create the services for kubelet and kube-proxy in two separate commands - sc.exe create binPath= " --service " - - # Please note that if the arguments contain spaces, they must be escaped. - sc.exe create kubelet binPath= "C:\kubelet.exe --service --hostname-override 'minion' " - - # Start the services - Start-Service kubelet - Start-Service kube-proxy - - # Stop the service - Stop-Service kubelet (-Force) - Stop-Service kube-proxy (-Force) - - # Query the service status - Get-Service kubelet - Get-Service kube-proxy - ``` - - 1. Using `nssm.exe` - - You can also always use alternative service managers like - [nssm.exe](https://nssm.cc/) to run these processes (flanneld, - kubelet & kube-proxy) in the background for you. 
You can use this - [sample script](https://github.com/Microsoft/SDN/tree/master/Kubernetes/flannel/register-svc.ps1), - leveraging nssm.exe to register kubelet, kube-proxy, and flanneld.exe to run - as Windows services in the background. - - ```powershell - register-svc.ps1 -NetworkMode -ManagementIP -ClusterCIDR -KubeDnsServiceIP -LogDir - - # NetworkMode = The network mode l2bridge (flannel host-gw, also the default value) or overlay (flannel vxlan) chosen as a network solution - # ManagementIP = The IP address assigned to the Windows node. You can use ipconfig to find this - # ClusterCIDR = The cluster subnet range. (Default value 10.244.0.0/16) - # KubeDnsServiceIP = The Kubernetes DNS service IP (Default value 10.96.0.10) - # LogDir = The directory where kubelet and kube-proxy logs are redirected into their respective output files (Default value C:\k) - ``` - - If the above referenced script is not suitable, you can manually configure - `nssm.exe` using the following examples. - - ```powershell - # Register flanneld.exe - nssm install flanneld C:\flannel\flanneld.exe - nssm set flanneld AppParameters --kubeconfig-file=c:\k\config --iface= --ip-masq=1 --kube-subnet-mgr=1 - nssm set flanneld AppEnvironmentExtra NODE_NAME= - nssm set flanneld AppDirectory C:\flannel - nssm start flanneld - - # Register kubelet.exe - # Microsoft releases the pause infrastructure container at mcr.microsoft.com/oss/kubernetes/pause:3.6 - nssm install kubelet C:\k\kubelet.exe - nssm set kubelet AppParameters --hostname-override= --v=6 --pod-infra-container-image=mcr.microsoft.com/oss/kubernetes/pause:3.6 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns= --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir= --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config - nssm set kubelet AppDirectory C:\k - nssm start kubelet - - # Register kube-proxy.exe (l2bridge / host-gw) - nssm install kube-proxy C:\k\kube-proxy.exe - nssm set kube-proxy AppDirectory c:\k - nssm set kube-proxy AppParameters --v=4 --proxy-mode=kernelspace --hostname-override=--kubeconfig=c:\k\config --enable-dsr=false --log-dir= --logtostderr=false - nssm.exe set kube-proxy AppEnvironmentExtra KUBE_NETWORK=cbr0 - nssm set kube-proxy DependOnService kubelet - nssm start kube-proxy - - # Register kube-proxy.exe (overlay / vxlan) - nssm install kube-proxy C:\k\kube-proxy.exe - nssm set kube-proxy AppDirectory c:\k - nssm set kube-proxy AppParameters --v=4 --proxy-mode=kernelspace --feature-gates="WinOverlay=true" --hostname-override= --kubeconfig=c:\k\config --network-name=vxlan0 --source-vip= --enable-dsr=false --log-dir= --logtostderr=false - nssm set kube-proxy DependOnService kubelet - nssm start kube-proxy - ``` - - For initial troubleshooting, you can use the following flags in [nssm.exe](https://nssm.cc/) to redirect stdout and stderr to a output file: - - ```powershell - nssm set AppStdout C:\k\mysvc.log - nssm set AppStderr C:\k\mysvc.log - ``` - - For additional details, see [NSSM - the Non-Sucking Service Manager](https://nssm.cc/usage). - -1. My Pods are stuck at "Container Creating" or restarting over and over - - Check that your pause image is compatible with your OS version. 
The - [instructions](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/deploying-resources) - assume that both the OS and the containers are version 1803. If you have a later - version of Windows, such as an Insider build, you need to adjust the images - accordingly. See [Pause container](#pause-container) for more details. - -### Network troubleshooting {#troubleshooting-network} - -1. My Windows Pods do not have network connectivity - - If you are using virtual machines, ensure that MAC spoofing is **enabled** on all - the VM network adapter(s). - -1. My Windows Pods cannot ping external resources - - Windows Pods do not have outbound rules programmed for the ICMP protocol. However, - TCP/UDP is supported. When trying to demonstrate connectivity to resources - outside of the cluster, substitute `ping ` with corresponding - `curl ` commands. - - If you are still facing problems, most likely your network configuration in - [cni.conf](https://github.com/Microsoft/SDN/blob/master/Kubernetes/flannel/l2bridge/cni/config/cni.conf) - deserves some extra attention. You can always edit this static file. The - configuration update will apply to any new Kubernetes resources. - - One of the Kubernetes networking requirements - (see [Kubernetes model](/docs/concepts/cluster-administration/networking/)) is - for cluster communication to occur without - NAT internally. To honor this requirement, there is an - [ExceptionList](https://github.com/Microsoft/SDN/blob/master/Kubernetes/flannel/l2bridge/cni/config/cni.conf#L20) - for all the communication where you do not want outbound NAT to occur. However, - this also means that you need to exclude the external IP you are trying to query - from the `ExceptionList`. Only then will the traffic originating from your Windows - pods be SNAT'ed correctly to receive a response from the outside world. In this - regard, your `ExceptionList` in `cni.conf` should look as follows: - - ```conf - "ExceptionList": [ - "10.244.0.0/16", # Cluster subnet - "10.96.0.0/12", # Service subnet - "10.127.130.0/24" # Management (host) subnet - ] - ``` - -1. My Windows node cannot access `NodePort` type Services - - Local NodePort access from the node itself fails. This is a known - limitation. NodePort access works from other nodes or external clients. - -1. vNICs and HNS endpoints of containers are being deleted - - This issue can be caused when the `hostname-override` parameter is not passed to - [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/). To resolve - it, users need to pass the hostname to kube-proxy as follows: - - ```powershell - C:\k\kube-proxy.exe --hostname-override=$(hostname) - ``` - -1. With flannel, my nodes are having issues after rejoining a cluster - - Whenever a previously deleted node is being re-joined to the cluster, flannelD - tries to assign a new pod subnet to the node. Users should remove the old pod - subnet configuration files in the following paths: - - ```powershell - Remove-Item C:\k\SourceVip.json - Remove-Item C:\k\SourceVipRequest.json - ``` - -1. After launching `start.ps1`, flanneld is stuck in "Waiting for the Network to be created" - - There are numerous reports of this [issue](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. 
A workaround is to relaunch `start.ps1` or relaunch it manually as follows: - - ```powershell - [Environment]::SetEnvironmentVariable("NODE_NAME", "") - C:\flannel\flanneld.exe --kubeconfig-file=c:\k\config --iface= --ip-masq=1 --kube-subnet-mgr=1 - ``` - -1. My Windows Pods cannot launch because of missing `/run/flannel/subnet.env` - - This indicates that Flannel didn't launch correctly. You can either try - to restart `flanneld.exe` or you can copy the files over manually from - `/run/flannel/subnet.env` on the Kubernetes master to `C:\run\flannel\subnet.env` - on the Windows worker node and modify the `FLANNEL_SUBNET` row to a different - number. For example, if node subnet 10.244.4.1/24 is desired: - - ```env - FLANNEL_NETWORK=10.244.0.0/16 - FLANNEL_SUBNET=10.244.4.1/24 - FLANNEL_MTU=1500 - FLANNEL_IPMASQ=true - ``` - -1. My Windows node cannot access my services using the service IP - - This is a known limitation of the networking stack on Windows. However, Windows Pods can access the Service IP. - -1. No network adapter is found when starting the kubelet - - The Windows networking stack needs a virtual adapter for Kubernetes networking to work. If the following commands return no results (in an admin shell), virtual network creation — a necessary prerequisite for the kubelet to work — has failed: - - ```powershell - Get-HnsNetwork | ? Name -ieq "cbr0" - Get-NetAdapter | ? Name -Like "vEthernet (Ethernet*" - ``` - - Often it is worthwhile to modify the [InterfaceName](https://github.com/microsoft/SDN/blob/master/Kubernetes/flannel/start.ps1#L7) parameter of the start.ps1 script, in cases where the host's network adapter isn't "Ethernet". Otherwise, consult the output of the `start-kubelet.ps1` script to see if there are errors during virtual network creation. - -1. DNS resolution is not properly working - - Check the DNS limitations for Windows in this [section](#dns-limitations). - -1. `kubectl port-forward` fails with "unable to do port forwarding: wincat not found" - - This was implemented in Kubernetes 1.15 by including `wincat.exe` in the pause infrastructure container `mcr.microsoft.com/oss/kubernetes/pause:3.6`. Be sure to use a supported version of Kubernetes. - If you would like to build your own pause infrastructure container be sure to include [wincat](https://github.com/kubernetes/kubernetes/tree/master/build/pause/windows/wincat). - -1. 
My Kubernetes installation is failing because my Windows Server node is behind a proxy - - If you are behind a proxy, the following PowerShell environment variables must be defined: - - ```PowerShell - [Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://proxy.example.com:80/", [EnvironmentVariableTarget]::Machine) - [Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://proxy.example.com:443/", [EnvironmentVariableTarget]::Machine) - ``` - -### Further investigation - -If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through: - -* StackOverflow [Windows Server Container](https://stackoverflow.com/questions/tagged/windows-server-container) topic -* Kubernetes Official Forum [discuss.kubernetes.io](https://discuss.kubernetes.io/) -* Kubernetes Slack [#SIG-Windows Channel](https://kubernetes.slack.com/messages/sig-windows) - -### Reporting issues and feature requests - -If you have what looks like a bug, or you would like to -make a feature request, please use the -[GitHub issue tracking system](https://github.com/kubernetes/kubernetes/issues). -You can open issues on -[GitHub](https://github.com/kubernetes/kubernetes/issues/new/choose) and assign -them to SIG-Windows. You should first search the list of issues in case it was -reported previously and comment with your experience on the issue and add additional -logs. SIG-Windows Slack is also a great avenue to get some initial support and -troubleshooting ideas prior to creating a ticket. - -If filing a bug, please include detailed information about how to reproduce the problem, such as: - -* Kubernetes version: output from `kubectl version` -* Environment details: Cloud provider, OS distro, networking choice and configuration, and Docker version -* Detailed steps to reproduce the problem -* [Relevant logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs) - -It helps if you tag the issue as **sig/windows**, by commenting on the issue with `/sig windows`. This helps to bring -the issue to a SIG Windows member's attention - - -## {{% heading "whatsnext" %}} - -### Deployment tools - -The kubeadm tool helps you to deploy a Kubernetes cluster, providing the control -plane to manage the cluster it, and nodes to run your workloads. -[Adding Windows nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) -explains how to deploy Windows nodes to your cluster using kubeadm. - -The Kubernetes [cluster API](https://cluster-api.sigs.k8s.io/) project also provides means to automate deployment of Windows nodes. - -### Windows distribution channels - -For a detailed explanation of Windows distribution channels see the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/get-started-19/servicing-channels-19). - -Information on the different Windows Server servicing channels -including their support models can be found at -[Windows Server servicing channels](https://docs.microsoft.com/en-us/windows-server/get-started/servicing-channels-comparison). 
diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster-services.md b/content/en/docs/tasks/access-application-cluster/access-cluster-services.md index 262071094ce5c..456662692ee25 100644 --- a/content/en/docs/tasks/access-application-cluster/access-cluster-services.md +++ b/content/en/docs/tasks/access-application-cluster/access-cluster-services.md @@ -64,17 +64,17 @@ kubectl cluster-info The output is similar to this: ``` -Kubernetes master is running at https://104.197.5.247 -elasticsearch-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy -kibana-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kibana-logging/proxy -kube-dns is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kube-dns/proxy -grafana is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy -heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy +Kubernetes master is running at https://192.0.2.1 +elasticsearch-logging is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy +kibana-logging is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/kibana-logging/proxy +kube-dns is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/kube-dns/proxy +grafana is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy +heapster is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy ``` This shows the proxy-verb URL for accessing each service. For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached -at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` if suitable credentials are passed, or through a kubectl proxy at, for example: +at `https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` if suitable credentials are passed, or through a kubectl proxy at, for example: `http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`. 
{{< note >}} @@ -104,13 +104,13 @@ The supported formats for the `` segment of the URL are: * To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: ``` - http://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy + http://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy ``` * To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: ``` - https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true + https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true ``` The health information is similar to this: @@ -133,7 +133,7 @@ The supported formats for the `` segment of the URL are: * To access the *https* Elasticsearch service health information `_cluster/health?pretty=true`, you would use: ``` - https://104.197.5.247/api/v1/namespaces/kube-system/services/https:elasticsearch-logging/proxy/_cluster/health?pretty=true + https://192.0.2.1/api/v1/namespaces/kube-system/services/https:elasticsearch-logging:/proxy/_cluster/health?pretty=true ``` #### Using web browsers to access services running on the cluster diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md index aae96d3e96bcc..f20fe407e8717 100644 --- a/content/en/docs/tasks/access-application-cluster/access-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md @@ -233,7 +233,7 @@ There are several different proxies you may encounter when using Kubernetes: - locates apiserver - adds authentication headers -1. The [apiserver proxy](#discovering-builtin-services): +1. The [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster-services/#discovering-builtin-services): - is a bastion built into the apiserver - connects a user outside of the cluster to cluster IPs which otherwise might not be reachable diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index 8b79d7042f48c..0c3d05d62fc32 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -22,7 +22,8 @@ It does not mean that there is a file named `kubeconfig`. {{< warning >}} -Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure. +Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig +file could result in malicious code execution or file exposure. If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script. {{< /warning>}} @@ -50,7 +51,7 @@ to the scratch cluster requires authentication by username and password. Create a directory named `config-exercise`. 
In your `config-exercise` directory, create a file named `config-demo` with this content: -```shell +```yaml apiVersion: v1 kind: Config preferences: {} @@ -115,7 +116,7 @@ kubectl config --kubeconfig=config-demo view The output shows the two clusters, two users, and three contexts: -```shell +```yaml apiVersion: v1 clusters: - cluster: @@ -271,7 +272,7 @@ For example: ### Linux ```shell -export KUBECONFIG_SAVED=$KUBECONFIG +export KUBECONFIG_SAVED="$KUBECONFIG" ``` ### Windows PowerShell @@ -290,7 +291,7 @@ Temporarily append two paths to your `KUBECONFIG` environment variable. For exam ### Linux ```shell -export KUBECONFIG=$KUBECONFIG:config-demo:config-demo-2 +export KUBECONFIG="${KUBECONFIG}:config-demo:config-demo-2" ``` ### Windows PowerShell @@ -356,7 +357,7 @@ For example: ### Linux ```shell -export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config +export KUBECONFIG="${KUBECONFIG}:${HOME}/.kube/config" ``` ### Windows Powershell @@ -379,7 +380,7 @@ Return your `KUBECONFIG` environment variable to its original value. For example ### Linux ```shell -export KUBECONFIG=$KUBECONFIG_SAVED +export KUBECONFIG="$KUBECONFIG_SAVED" ``` ### Windows PowerShell diff --git a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md index ba8f7b1244c8c..3b2648f9437e7 100644 --- a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md @@ -11,180 +11,169 @@ This page shows how to use `kubectl port-forward` to connect to a MongoDB server running in a Kubernetes cluster. This type of connection can be useful for database debugging. - - - ## {{% heading "prerequisites" %}} - * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - * Install [MongoDB Shell](https://www.mongodb.com/try/download/shell). - - - ## Creating MongoDB deployment and service 1. Create a Deployment that runs MongoDB: - ```shell - kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-deployment.yaml - ``` - - The output of a successful command verifies that the deployment was created: + ```shell + kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-deployment.yaml + ``` - ``` - deployment.apps/mongo created - ``` + The output of a successful command verifies that the deployment was created: - View the pod status to check that it is ready: + ``` + deployment.apps/mongo created + ``` - ```shell - kubectl get pods - ``` + View the pod status to check that it is ready: - The output displays the pod created: + ```shell + kubectl get pods + ``` - ``` - NAME READY STATUS RESTARTS AGE - mongo-75f59d57f4-4nd6q 1/1 Running 0 2m4s - ``` + The output displays the pod created: - View the Deployment's status: + ``` + NAME READY STATUS RESTARTS AGE + mongo-75f59d57f4-4nd6q 1/1 Running 0 2m4s + ``` - ```shell - kubectl get deployment - ``` + View the Deployment's status: - The output displays that the Deployment was created: + ```shell + kubectl get deployment + ``` - ``` - NAME READY UP-TO-DATE AVAILABLE AGE - mongo 1/1 1 1 2m21s - ``` + The output displays that the Deployment was created: - The Deployment automatically manages a ReplicaSet. - View the ReplicaSet status using: + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + mongo 1/1 1 1 2m21s + ``` - ```shell - kubectl get replicaset - ``` + The Deployment automatically manages a ReplicaSet. 
+ View the ReplicaSet status using: - The output displays that the ReplicaSet was created: + ```shell + kubectl get replicaset + ``` - ``` - NAME DESIRED CURRENT READY AGE - mongo-75f59d57f4 1 1 1 3m12s - ``` + The output displays that the ReplicaSet was created: + ``` + NAME DESIRED CURRENT READY AGE + mongo-75f59d57f4 1 1 1 3m12s + ``` 2. Create a Service to expose MongoDB on the network: - ```shell - kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-service.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-service.yaml + ``` - The output of a successful command verifies that the Service was created: + The output of a successful command verifies that the Service was created: - ``` - service/mongo created - ``` + ``` + service/mongo created + ``` - Check the Service created: + Check the Service created: - ```shell - kubectl get service mongo - ``` + ```shell + kubectl get service mongo + ``` - The output displays the service created: + The output displays the service created: - ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - mongo ClusterIP 10.96.41.183 27017/TCP 11s - ``` + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + mongo ClusterIP 10.96.41.183 27017/TCP 11s + ``` 3. Verify that the MongoDB server is running in the Pod, and listening on port 27017: - ```shell - # Change mongo-75f59d57f4-4nd6q to the name of the Pod - kubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' - ``` + ```shell + # Change mongo-75f59d57f4-4nd6q to the name of the Pod + kubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' + ``` - The output displays the port for MongoDB in that Pod: + The output displays the port for MongoDB in that Pod: - ``` - 27017 - ``` + ``` + 27017 + ``` - (this is the TCP port allocated to MongoDB on the internet). + 27017 is the TCP port allocated to MongoDB on the internet. ## Forward a local port to a port on the Pod -1. `kubectl port-forward` allows using resource name, such as a pod name, to select a matching pod to port forward to. +1. `kubectl port-forward` allows using resource name, such as a pod name, to select a matching pod to port forward to. - ```shell - # Change mongo-75f59d57f4-4nd6q to the name of the Pod - kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017 - ``` + ```shell + # Change mongo-75f59d57f4-4nd6q to the name of the Pod + kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017 + ``` - which is the same as + which is the same as - ```shell - kubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017 - ``` + ```shell + kubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017 + ``` - or + or - ```shell - kubectl port-forward deployment/mongo 28015:27017 - ``` + ```shell + kubectl port-forward deployment/mongo 28015:27017 + ``` - or + or - ```shell - kubectl port-forward replicaset/mongo-75f59d57f4 28015:27017 - ``` + ```shell + kubectl port-forward replicaset/mongo-75f59d57f4 28015:27017 + ``` - or + or - ```shell - kubectl port-forward service/mongo 28015:27017 - ``` + ```shell + kubectl port-forward service/mongo 28015:27017 + ``` - Any of the above commands works. The output is similar to this: + Any of the above commands works. 
The output is similar to this: - ``` - Forwarding from 127.0.0.1:28015 -> 27017 - Forwarding from [::1]:28015 -> 27017 - ``` + ``` + Forwarding from 127.0.0.1:28015 -> 27017 + Forwarding from [::1]:28015 -> 27017 + ``` -{{< note >}} - -`kubectl port-forward` does not return. To continue with the exercises, you will need to open another terminal. + {{< note >}} + `kubectl port-forward` does not return. To continue with the exercises, you will need to open another terminal. + {{< /note >}} -{{< /note >}} +2. Start the MongoDB command line interface: -2. Start the MongoDB command line interface: + ```shell + mongosh --port 28015 + ``` - ```shell - mongosh --port 28015 - ``` +3. At the MongoDB command line prompt, enter the `ping` command: -3. At the MongoDB command line prompt, enter the `ping` command: + ``` + db.runCommand( { ping: 1 } ) + ``` - ``` - db.runCommand( { ping: 1 } ) - ``` + A successful ping request returns: - A successful ping request returns: - - ``` - { ok: 1 } - ``` + ``` + { ok: 1 } + ``` ### Optionally let _kubectl_ choose the local port {#let-kubectl-choose-local-port} @@ -204,7 +193,6 @@ Forwarding from 127.0.0.1:63753 -> 27017 Forwarding from [::1]:63753 -> 27017 ``` - ## Discussion @@ -219,9 +207,7 @@ The support for UDP protocol is tracked in [issue 47862](https://github.com/kubernetes/kubernetes/issues/47862). {{< /note >}} - - - ## {{% heading "whatsnext" %}} Learn more about [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward). + diff --git a/content/en/docs/tasks/administer-cluster/certificates.md b/content/en/docs/tasks/administer-cluster/certificates.md index 2338b0cdc76fe..44effe93403d3 100644 --- a/content/en/docs/tasks/administer-cluster/certificates.md +++ b/content/en/docs/tasks/administer-cluster/certificates.md @@ -1,5 +1,5 @@ --- -title: Certificates +title: Generate Certificates Manually content_type: task weight: 20 --- diff --git a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md index 6e9dc302c445c..e3f9595d26bf1 100644 --- a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md @@ -21,8 +21,8 @@ At a high level, the steps you perform are: ## {{% heading "prerequisites" %}} You must have an existing cluster. This page is about upgrading from Kubernetes -{{< skew prevMinorVersion >}} to Kubernetes {{< skew latestVersion >}}. If your cluster -is not currently running Kubernetes {{< skew prevMinorVersion >}} then please check +{{< skew currentVersionAddMinor -1 >}} to Kubernetes {{< skew currentVersion >}}. If your cluster +is not currently running Kubernetes {{< skew currentVersionAddMinor -1 >}} then please check the documentation for the version of Kubernetes that you plan to upgrade to. ## Upgrade approaches @@ -55,7 +55,7 @@ At this point you should [install the latest version of `kubectl`](/docs/tasks/tools/). For each node in your cluster, [drain](/docs/tasks/administer-cluster/safely-drain-node/) -that node and then either replace it with a new node that uses the {{< skew latestVersion >}} +that node and then either replace it with a new node that uses the {{< skew currentVersion >}} kubelet, or upgrade the kubelet on that node and bring the node back into service. 
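A sketch of that per-node sequence using kubectl (the node name is a placeholder, and you may need additional flags such as `--delete-emptydir-data` depending on your workloads):

```shell
# Safely evict Pods from the node before upgrading or replacing it
kubectl drain <node-name> --ignore-daemonsets

# ...upgrade the kubelet on the node, or replace the node entirely...

# Mark the node schedulable again once it is back in service
kubectl uncordon <node-name>
```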
### Other deployments {#upgrade-other} diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md index bf5ddd8f5fb71..be77074dc14e7 100644 --- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -150,7 +150,7 @@ access to clients with the certificate `k8sclient.cert`. Once etcd is configured correctly, only clients with valid certificates can access it. To give Kubernetes API servers the access, configure them with the -flags `--etcd-certfile=k8sclient.cert`,`--etcd-keyfile=k8sclient.key` and +flags `--etcd-certfile=k8sclient.cert`, `--etcd-keyfile=k8sclient.key` and `--etcd-cafile=ca.cert`. {{< note >}} @@ -319,7 +319,7 @@ employed to recover the data of a failed cluster. Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining -[data directory]( https://etcd.io/docs/current/op-guide/configuration/#--data-dir). +[data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir). Here is an example: ```shell diff --git a/content/en/docs/tasks/administer-cluster/encrypt-data.md b/content/en/docs/tasks/administer-cluster/encrypt-data.md index c48f9ee2dab2f..d510caff81a98 100644 --- a/content/en/docs/tasks/administer-cluster/encrypt-data.md +++ b/content/en/docs/tasks/administer-cluster/encrypt-data.md @@ -88,8 +88,8 @@ Name | Encryption | Strength | Speed | Key Length | Other Considerations `identity` | None | N/A | N/A | N/A | Resources written as-is without encryption. When set as the first provider, the resource will be decrypted as new values are written. `secretbox` | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard and may not be considered acceptable in environments that require high levels of review. `aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented. -`aescbc` | AES-CBC with PKCS#7 padding | Weak | Fast | 32-byte | Not recommended due to CBC's vulnerability to padding oracle attacks. -`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with PKCS#7 padding, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/) +`aescbc` | AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding | Weak | Fast | 32-byte | Not recommended due to CBC's vulnerability to padding oracle attacks. +`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. 
[Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/) Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider is the first provider, the first key is used for encryption. diff --git a/content/en/docs/tasks/administer-cluster/kms-provider.md b/content/en/docs/tasks/administer-cluster/kms-provider.md index 15bc1290ff97c..d2ea73d761c30 100644 --- a/content/en/docs/tasks/administer-cluster/kms-provider.md +++ b/content/en/docs/tasks/administer-cluster/kms-provider.md @@ -19,35 +19,50 @@ This page shows how to configure a Key Management Service (KMS) provider and plu -The KMS encryption provider uses an envelope encryption scheme to encrypt data in etcd. The data is encrypted using a data encryption key (DEK); a new DEK is generated for each encryption. The DEKs are encrypted with a key encryption key (KEK) that is stored and managed in a remote KMS. The KMS provider uses gRPC to communicate with a specific KMS -plugin. The KMS plugin, which is implemented as a gRPC server and deployed on the same host(s) as the Kubernetes master(s), is responsible for all communication with the remote KMS. +The KMS encryption provider uses an envelope encryption scheme to encrypt data in etcd. +The data is encrypted using a data encryption key (DEK); a new DEK is generated for each encryption. +The DEKs are encrypted with a key encryption key (KEK) that is stored and managed in a remote KMS. +The KMS provider uses gRPC to communicate with a specific KMS plugin. +The KMS plugin, which is implemented as a gRPC server and deployed on the same host(s) +as the Kubernetes control plane, is responsible for all communication with the remote KMS. ## Configuring the KMS provider -To configure a KMS provider on the API server, include a provider of type ```kms``` in the providers array in the encryption configuration file and set the following properties: +To configure a KMS provider on the API server, include a provider of type `kms` in the +`providers` array in the encryption configuration file and set the following properties: * `name`: Display name of the KMS plugin. * `endpoint`: Listen address of the gRPC server (KMS plugin). The endpoint is a UNIX domain socket. * `cachesize`: Number of data encryption keys (DEKs) to be cached in the clear. When cached, DEKs can be used without another call to the KMS; whereas DEKs that are not cached require a call to the KMS to unwrap. -* `timeout`: How long should kube-apiserver wait for kms-plugin to respond before returning an error (default is 3 seconds). +* `timeout`: How long should `kube-apiserver` wait for kms-plugin to respond before + returning an error (default is 3 seconds). -See [Understanding the encryption at rest configuration.](/docs/tasks/administer-cluster/encrypt-data) +See [Understanding the encryption at rest configuration](/docs/tasks/administer-cluster/encrypt-data). ## Implementing a KMS plugin -To implement a KMS plugin, you can develop a new plugin gRPC server or enable a KMS plugin already provided by your cloud provider. You then integrate the plugin with the remote KMS and deploy it on the Kubernetes master. +To implement a KMS plugin, you can develop a new plugin gRPC server or enable a KMS plugin +already provided by your cloud provider. +You then integrate the plugin with the remote KMS and deploy it on the Kubernetes master. ### Enabling the KMS supported by your cloud provider + Refer to your cloud provider for instructions on enabling the cloud provider-specific KMS plugin. 
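For orientation, a minimal sketch of a `kms` provider entry using the properties described above might look like this (the plugin name and socket path are illustrative placeholders; the full procedure appears under "Encrypting your data with the KMS provider" below):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: myKmsPlugin                       # display name of the KMS plugin (placeholder)
          endpoint: unix:///tmp/socketfile.sock   # UNIX domain socket the gRPC plugin listens on
          cachesize: 100                          # number of DEKs cached in the clear
          timeout: 3s                             # how long kube-apiserver waits for the plugin
      - identity: {}                              # fallback so existing unencrypted data stays readable
```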
### Developing a KMS plugin gRPC server -You can develop a KMS plugin gRPC server using a stub file available for Go. For other languages, you use a proto file to create a stub file that you can use to develop the gRPC server code. -* Using Go: Use the functions and data structures in the stub file: [service.pb.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/v1beta1/service.pb.go) to develop the gRPC server code +You can develop a KMS plugin gRPC server using a stub file available for Go. For other languages, +you use a proto file to create a stub file that you can use to develop the gRPC server code. + +* Using Go: Use the functions and data structures in the stub file: + [service.pb.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/v1beta1/service.pb.go) + to develop the gRPC server code -* Using languages other than Go: Use the protoc compiler with the proto file: [service.proto](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/v1beta1/service.proto) to generate a stub file for the specific language +* Using languages other than Go: Use the protoc compiler with the proto file: + [service.proto](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/v1beta1/service.proto) + to generate a stub file for the specific language Then use the functions and data structures in the stub file to develop the server code. @@ -55,7 +70,7 @@ Then use the functions and data structures in the stub file to develop the serve * kms plugin version: `v1beta1` - In response to procedure call Version, a compatible KMS plugin should return v1beta1 as VersionResponse.version. + In response to procedure call Version, a compatible KMS plugin should return `v1beta1` as `VersionResponse.version`. * message version: `v1beta1` @@ -69,12 +84,15 @@ Then use the functions and data structures in the stub file to develop the serve The KMS plugin can communicate with the remote KMS using any protocol supported by the KMS. All configuration data, including authentication credentials the KMS plugin uses to communicate with the remote KMS, -are stored and managed by the KMS plugin independently. The KMS plugin can encode the ciphertext with additional metadata that may be required before sending it to the KMS for decryption. +are stored and managed by the KMS plugin independently. +The KMS plugin can encode the ciphertext with additional metadata that may be required before sending it to the KMS for decryption. ### Deploying the KMS plugin + Ensure that the KMS plugin runs on the same host(s) as the Kubernetes master(s). ## Encrypting your data with the KMS provider + To encrypt the data: 1. Create a new encryption configuration file using the appropriate properties for the `kms` provider: @@ -94,32 +112,43 @@ To encrypt the data: - identity: {} ``` -1. Set the `--encryption-provider-config` flag on the kube-apiserver to point to the location of the configuration file. +1. Set the `--encryption-provider-config` flag on the kube-apiserver to point to + the location of the configuration file. 1. Restart your API server. +For details about the `EncryptionConfiguration` format, please check the +[API server encryption API reference](/docs/reference/config-api/apiserver-encryption.v1/). 
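As a hedged sketch of the flag-setting step above: on a kubeadm-style control plane the API server runs as a static Pod, so the flag is added to its manifest and the kubelet recreates the Pod automatically. Both file paths below are assumptions; adjust them for your environment.

```shell
# Illustrative only: edit the kube-apiserver static Pod manifest on the control plane node
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml

# add the flag under the kube-apiserver command, pointing at your configuration file:
#   - --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
```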
+ ## Verifying that the data is encrypted Data is encrypted when written to etcd. After restarting your `kube-apiserver`, any newly created or updated secret should be encrypted when stored. To verify, you can use the `etcdctl` command line program to retrieve the contents of your secret. -1. Create a new secret called secret1 in the default namespace: - ``` +1. Create a new secret called `secret1` in the `default` namespace: + + ```shell kubectl create secret generic secret1 -n default --from-literal=mykey=mydata ``` -1. Using the etcdctl command line, read that secret out of etcd: - ``` + +1. Using the `etcdctl` command line, read that secret out of etcd: + + ```shell ETCDCTL_API=3 etcdctl get /kubernetes.io/secrets/default/secret1 [...] | hexdump -C ``` - where `[...]` must be the additional arguments for connecting to the etcd server. -1. Verify the stored secret is prefixed with `k8s:enc:kms:v1:`, which indicates that the `kms` provider has encrypted the resulting data. + where `[...]` contains the additional arguments for connecting to the etcd server. + +1. Verify the stored secret is prefixed with `k8s:enc:kms:v1:`, which indicates that + the `kms` provider has encrypted the resulting data. 1. Verify that the secret is correctly decrypted when retrieved via the API: - ``` + + ```shell kubectl describe secret secret1 -n default ``` - should match `mykey: mydata` + + The Secret should contain `mykey: mydata` ## Ensuring all secrets are encrypted @@ -129,7 +158,7 @@ The following command reads all secrets and then updates them to apply server si If an error occurs due to a conflicting write, retry the command. For larger clusters, you may wish to subdivide the secrets by namespace or script an update. -``` +```shell kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` @@ -156,12 +185,12 @@ To switch from a local encryption provider to the `kms` provider and re-encrypt secret: ``` -1. Restart all kube-apiserver processes. +1. Restart all `kube-apiserver` processes. 1. Run the following command to force all secrets to be re-encrypted using the `kms` provider. - ``` - kubectl get secrets --all-namespaces -o json| kubectl replace -f - + ```shell + kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` ## Disabling encryption at rest @@ -183,9 +212,12 @@ To disable encryption at rest: endpoint: unix:///tmp/socketfile.sock cachesize: 100 ``` -1. Restart all kube-apiserver processes. + +1. Restart all `kube-apiserver` processes. + 1. Run the following command to force all secrets to be decrypted. - ``` + + ```shell kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md deleted file mode 100644 index 7009eb2d2a18e..0000000000000 --- a/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md +++ /dev/null @@ -1,256 +0,0 @@ ---- -reviewers: -- jayunit100 -- jsturtevant -- marosset -- perithompson -title: Adding Windows nodes -min-kubernetes-server-version: 1.17 -content_type: tutorial -weight: 30 ---- - - - -{{< feature-state for_k8s_version="v1.18" state="beta" >}} - -You can use Kubernetes to run a mixture of Linux and Windows nodes, so you can mix Pods that run on Linux on with Pods that run on Windows. This page shows how to register Windows nodes to your cluster. 
- - -## {{% heading "prerequisites" %}} - {{< version-check >}} - -* Obtain a [Windows Server 2019 license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) -(or higher) in order to configure the Windows node that hosts Windows containers. -If you are using VXLAN/Overlay networking you must have also have [KB4489899](https://support.microsoft.com/help/4489899) installed. - -* A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)). - - - - -## {{% heading "objectives" %}} - - -* Register a Windows node to the cluster -* Configure networking so Pods and Services on Linux and Windows can communicate with each other - - - - - - -## Getting Started: Adding a Windows Node to Your Cluster - -### Networking Configuration - -Once you have a Linux-based Kubernetes control-plane node you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity. - -#### Configuring Flannel - -1. Prepare Kubernetes control plane for Flannel - - Some minor preparation is recommended on the Kubernetes control plane in our cluster. It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. The following command must be run on all Linux nodes: - - ```bash - sudo sysctl net.bridge.bridge-nf-call-iptables=1 - ``` - -1. Download & configure Flannel for Linux - - Download the most recent Flannel manifest: - - ```bash - wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml - ``` - - Modify the `net-conf.json` section of the flannel manifest in order to set the VNI to 4096 and the Port to 4789. It should look as follows: - - ```json - net-conf.json: | - { - "Network": "10.244.0.0/16", - "Backend": { - "Type": "vxlan", - "VNI": 4096, - "Port": 4789 - } - } - ``` - - {{< note >}}The VNI must be set to 4096 and port 4789 for Flannel on Linux to interoperate with Flannel on Windows. See the [VXLAN documentation](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan). - for an explanation of these fields.{{< /note >}} - - {{< note >}}To use L2Bridge/Host-gateway mode instead change the value of `Type` to `"host-gw"` and omit `VNI` and `Port`.{{< /note >}} - -1. Apply the Flannel manifest and validate - - Let's apply the Flannel configuration: - - ```bash - kubectl apply -f kube-flannel.yml - ``` - - After a few minutes, you should see all the pods as running if the Flannel pod network was deployed. - - ```bash - kubectl get pods -n kube-system - ``` - - The output should include the Linux flannel DaemonSet as running: - - ``` - NAMESPACE NAME READY STATUS RESTARTS AGE - ... - kube-system kube-flannel-ds-54954 1/1 Running 0 1m - ``` - -1. Add Windows Flannel and kube-proxy DaemonSets - - Now you can add Windows-compatible versions of Flannel and kube-proxy. In order - to ensure that you get a compatible version of kube-proxy, you'll need to substitute - the tag of the image. The following example shows usage for Kubernetes {{< param "fullversion" >}}, - but you should adjust the version for your own deployment. 
- - ```bash - curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/{{< param "fullversion" >}}/g' | kubectl apply -f - - kubectl apply -f https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml - ``` - {{< note >}} - If you're using host-gateway use https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-host-gw.yml instead - {{< /note >}} - - {{< note >}} -If you're using a different interface rather than Ethernet (i.e. "Ethernet0 2") on the Windows nodes, you have to modify the line: - -```powershell -wins cli process run --path /k/flannel/setup.exe --args "--mode=overlay --interface=Ethernet" -``` - -in the `flannel-host-gw.yml` or `flannel-overlay.yml` file and specify your interface accordingly. - -```bash -# Example -curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml | sed 's/Ethernet/Ethernet0 2/g' | kubectl apply -f - -``` - {{< /note >}} - - - -### Joining a Windows worker node - -{{< note >}} -All code snippets in Windows sections are to be run in a PowerShell environment -with elevated permissions (Administrator) on the Windows worker node. -{{< /note >}} - -{{< tabs name="tab-windows-kubeadm-runtime-installation" >}} - -{{% tab name="CRI-containerD" %}} - -#### Install containerD - -```powershell -curl.exe -LO https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/Install-Containerd.ps1 -.\Install-Containerd.ps1 -``` - -{{< note >}} -To install a specific version of containerD specify the version with -ContainerDVersion. - -```powershell -# Example -.\Install-Containerd.ps1 -ContainerDVersion 1.4.1 -``` - -If you're using a different interface rather than Ethernet (i.e. "Ethernet0 2") on the Windows nodes, specify the name with `-netAdapterName`. - -```powershell -# Example -.\Install-Containerd.ps1 -netAdapterName "Ethernet0 2" -``` - -{{< /note >}} - -#### Install wins, kubelet, and kubeadm - -```PowerShell -curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/kubeadm/scripts/PrepareNode.ps1 -.\PrepareNode.ps1 -KubernetesVersion {{< param "fullversion" >}} -ContainerRuntime containerD -``` - -[Install `crictl` from the cri-tools package](https://github.com/kubernetes-sigs/cri-tools) -which is required so that kubeadm can talk to the CRI endpoint. - -#### Run `kubeadm` to join the node - -Use the command that was given to you when you ran `kubeadm init` on a control plane host. -If you no longer have this command, or the token has expired, you can run `kubeadm token create --print-join-command` -(on a control plane host) to generate a new token and join command. - -{{% /tab %}} - -{{% tab name="Docker Engine" %}} - -#### Install Docker Engine - -Install the `Containers` feature - -```powershell -Install-WindowsFeature -Name containers -``` - -Install Docker -Instructions to do so are available at [Install Docker Engine - Enterprise on Windows Servers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=Windows-Server#install-docker). - -[Install cri-dockerd](https://github.com/Mirantis/cri-dockerd) which is required so that the kubelet -can communicate with Docker on a CRI compatible endpoint. 
- -{{< note >}} -Docker Engine does not implement the [CRI](/docs/concepts/architecture/cri/) -which is a requirement for a container runtime to work with Kubernetes. -For that reason, an additional service [cri-dockerd](https://github.com/Mirantis/cri-dockerd) -has to be installed. cri-dockerd is a project based on the legacy built-in -Docker Engine support that was [removed](/dockershim) from the kubelet in version 1.24. -{{< /note >}} - -Install `crictl` from the [cri-tools project](https://github.com/kubernetes-sigs/cri-tools) -which is required so that kubeadm can talk to the CRI endpoint. - -#### Install wins, kubelet, and kubeadm - -```PowerShell -curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/kubeadm/scripts/PrepareNode.ps1 -.\PrepareNode.ps1 -KubernetesVersion {{< param "fullversion" >}} -``` - -#### Run `kubeadm` to join the node - -Use the command that was given to you when you ran `kubeadm init` on a control plane host. -If you no longer have this command, or the token has expired, you can run `kubeadm token create --print-join-command` -(on a control plane host) to generate a new token and join command. - -{{% /tab %}} - -{{< /tabs >}} - -### Verifying your installation - -You should now be able to view the Windows node in your cluster by running: - -```bash -kubectl get nodes -o wide -``` - -If your new node is in the `NotReady` state it is likely because the flannel image is still downloading. -You can check the progress as before by checking on the flannel pods in the `kube-system` namespace: - -```shell -kubectl -n kube-system get pods -l app=flannel -``` - -Once the flannel Pod is running, your node should enter the `Ready` state and then be available to handle workloads. - -## {{% heading "whatsnext" %}} - -- [Upgrading Windows kubeadm nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md b/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md index d9df7fb38c31b..1b2a2e4cbe7dc 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md @@ -68,14 +68,12 @@ and passing it to the local node kubelet. ## Using the `cgroupfs` driver -As this guide explains using the `cgroupfs` driver with kubeadm is not recommended. - -To continue using `cgroupfs` and to prevent `kubeadm upgrade` from modifying the +To use `cgroupfs` and to prevent `kubeadm upgrade` from modifying the `KubeletConfiguration` cgroup driver on existing setups, you must be explicit about its value. This applies to a case where you do not wish future versions of kubeadm to apply the `systemd` driver by default. -See the below section on "Modify the kubelet ConfigMap" for details on +See the below section on "[Modify the kubelet ConfigMap](#modify-the-kubelet-configmap)" for details on how to be explicit about the value. 
If you wish to configure a container runtime to use the `cgroupfs` driver, diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index 696a69ba828c8..18032fc4b3989 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -244,7 +244,7 @@ serverTLSBootstrap: true ``` If you have already created the cluster you must adapt it by doing the following: - - Find and edit the `kubelet-config-{{< skew latestVersion >}}` ConfigMap in the `kube-system` namespace. + - Find and edit the `kubelet-config-{{< skew currentVersion >}}` ConfigMap in the `kube-system` namespace. In that ConfigMap, the `kubelet` key has a [KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) document as its value. Edit the KubeletConfiguration document to set `serverTLSBootstrap: true`. @@ -276,7 +276,7 @@ By default, these serving certificate will expire after one year. Kubeadm sets t `KubeletConfiguration` field `rotateCertificates` to `true`, which means that close to expiration a new set of CSRs for the serving certificates will be created and must be approved to complete the rotation. To understand more see -[Certificate Rotation](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation). +[Certificate Rotation](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#certificate-rotation). If you are looking for a solution for automatic approval of these CSRs it is recommended that you contact your cloud provider and ask if they have a CSR signer that verifies diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 8040e1185f884..783b107927186 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -29,7 +29,7 @@ The upgrade workflow at high level is the following: ## {{% heading "prerequisites" %}} -- Make sure you read the [release notes]({{< latest-release-notes >}}) carefully. +- Make sure you read the [release notes](https://git.k8s.io/kubernetes/CHANGELOG) carefully. - The cluster should use a static control plane and etcd pods or external etcd. - Make sure to back up any important components, such as app-level state stored in a database. `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice. @@ -79,83 +79,87 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc **For the first control plane node** -- Upgrade kubeadm: +- Upgrade kubeadm: -{{< tabs name="k8s_install_kubeadm_first_cp" >}} -{{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version - apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \ - apt-mark hold kubeadm -{{% /tab %}} -{{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version - yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes -{{% /tab %}} -{{< /tabs >}} -
    + {{< tabs name="k8s_install_kubeadm_first_cp" >}} + {{% tab name="Ubuntu, Debian or HypriotOS" %}} + ```shell + # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version + apt-mark unhold kubeadm && \ + apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \ + apt-mark hold kubeadm + ``` + {{% /tab %}} + {{% tab name="CentOS, RHEL or Fedora" %}} + ```shell + # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version + yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes + ``` + {{% /tab %}} + {{< /tabs >}} +
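If you are not sure which patch release to substitute for `x`, one way to list the versions available from the package repositories (assuming the Kubernetes package repositories are already configured on the node) is:

```shell
# Debian/Ubuntu: list the kubeadm versions available in the configured repositories
apt-cache madison kubeadm

# CentOS/RHEL/Fedora equivalent
yum list --showduplicates kubeadm --disableexcludes=kubernetes
```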
    -- Verify that the download works and has the expected version: +- Verify that the download works and has the expected version: - ```shell - kubeadm version - ``` + ```shell + kubeadm version + ``` -- Verify the upgrade plan: +- Verify the upgrade plan: - ```shell - kubeadm upgrade plan - ``` + ```shell + kubeadm upgrade plan + ``` - This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. - It also shows a table with the component config version states. + This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. + It also shows a table with the component config version states. -{{< note >}} -`kubeadm upgrade` also automatically renews the certificates that it manages on this node. -To opt-out of certificate renewal the flag `--certificate-renewal=false` can be used. -For more information see the [certificate management guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs). -{{}} + {{< note >}} + `kubeadm upgrade` also automatically renews the certificates that it manages on this node. + To opt-out of certificate renewal the flag `--certificate-renewal=false` can be used. + For more information see the [certificate management guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs). + {{}} + + {{< note >}} + If `kubeadm upgrade plan` shows any component configs that require manual upgrade, users must provide + a config file with replacement configs to `kubeadm upgrade apply` via the `--config` command line flag. + Failing to do so will cause `kubeadm upgrade apply` to exit with an error and not perform an upgrade. + {{}} -{{< note >}} -If `kubeadm upgrade plan` shows any component configs that require manual upgrade, users must provide -a config file with replacement configs to `kubeadm upgrade apply` via the `--config` command line flag. -Failing to do so will cause `kubeadm upgrade apply` to exit with an error and not perform an upgrade. -{{}} +- Choose a version to upgrade to, and run the appropriate command. For example: -- Choose a version to upgrade to, and run the appropriate command. For example: + ```shell + # replace x with the patch version you picked for this upgrade + sudo kubeadm upgrade apply v{{< skew currentVersion >}}.x + ``` - ```shell - # replace x with the patch version you picked for this upgrade - sudo kubeadm upgrade apply v{{< skew currentVersion >}}.x - ``` + Once the command finishes you should see: - Once the command finishes you should see: + ``` + [upgrade/successful] SUCCESS! Your cluster was upgraded to "v{{< skew currentVersion >}}.x". Enjoy! - ``` - [upgrade/successful] SUCCESS! Your cluster was upgraded to "v{{< skew currentVersion >}}.x". Enjoy! + [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. + ``` - [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. - ``` +- Manually upgrade your CNI provider plugin. -- Manually upgrade your CNI provider plugin. + Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. + Check the [addons](/docs/concepts/cluster-administration/addons/) page to + find your CNI provider and see whether additional upgrade steps are required. - Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. 
- Check the [addons](/docs/concepts/cluster-administration/addons/) page to - find your CNI provider and see whether additional upgrade steps are required. - - This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet. + This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet. **For the other control plane nodes** Same as the first control plane node but use: -``` +```shell sudo kubeadm upgrade node ``` instead of: -``` +```shell sudo kubeadm upgrade apply ``` @@ -163,46 +167,50 @@ Also calling `kubeadm upgrade plan` and upgrading the CNI provider plugin is no ### Drain the node -- Prepare the node for maintenance by marking it unschedulable and evicting the workloads: +- Prepare the node for maintenance by marking it unschedulable and evicting the workloads: - ```shell - # replace with the name of your node you are draining - kubectl drain --ignore-daemonsets - ``` + ```shell + # replace with the name of your node you are draining + kubectl drain --ignore-daemonsets + ``` ### Upgrade kubelet and kubectl -- Upgrade the kubelet and kubectl: - -{{< tabs name="k8s_install_kubelet" >}} -{{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version - apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \ - apt-mark hold kubelet kubectl -{{% /tab %}} -{{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version - yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes -{{% /tab %}} -{{< /tabs >}} -
    - -- Restart the kubelet: - - ```shell - sudo systemctl daemon-reload - sudo systemctl restart kubelet - ``` +- Upgrade the kubelet and kubectl: + + {{< tabs name="k8s_install_kubelet" >}} + {{% tab name="Ubuntu, Debian or HypriotOS" %}} + ```shell + # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version + apt-mark unhold kubelet kubectl && \ + apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \ + apt-mark hold kubelet kubectl + ``` + {{% /tab %}} + {{% tab name="CentOS, RHEL or Fedora" %}} + ```shell + # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version + yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes + ``` + {{% /tab %}} + {{< /tabs >}} +
    + +- Restart the kubelet: + + ```shell + sudo systemctl daemon-reload + sudo systemctl restart kubelet + ``` ### Uncordon the node -- Bring the node back online by marking it schedulable: +- Bring the node back online by marking it schedulable: - ```shell - # replace with the name of your node - kubectl uncordon - ``` + ```shell + # replace with the name of your node + kubectl uncordon + ``` ## Upgrade worker nodes @@ -211,76 +219,83 @@ without compromising the minimum required capacity for running your workloads. ### Upgrade kubeadm -- Upgrade kubeadm: - -{{< tabs name="k8s_install_kubeadm_worker_nodes" >}} -{{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version - apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \ - apt-mark hold kubeadm -{{% /tab %}} -{{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version - yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes -{{% /tab %}} -{{< /tabs >}} +- Upgrade kubeadm: + + {{< tabs name="k8s_install_kubeadm_worker_nodes" >}} + {{% tab name="Ubuntu, Debian or HypriotOS" %}} + ```shell + # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version + apt-mark unhold kubeadm && \ + apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \ + apt-mark hold kubeadm + ``` + {{% /tab %}} + {{% tab name="CentOS, RHEL or Fedora" %}} + ```shell + # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version + yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes + ``` + {{% /tab %}} + {{< /tabs >}} ### Call "kubeadm upgrade" -- For worker nodes this upgrades the local kubelet configuration: +- For worker nodes this upgrades the local kubelet configuration: - ```shell - sudo kubeadm upgrade node - ``` + ```shell + sudo kubeadm upgrade node + ``` ### Drain the node -- Prepare the node for maintenance by marking it unschedulable and evicting the workloads: +- Prepare the node for maintenance by marking it unschedulable and evicting the workloads: - ```shell - # replace with the name of your node you are draining - kubectl drain --ignore-daemonsets - ``` + ```shell + # replace with the name of your node you are draining + kubectl drain --ignore-daemonsets + ``` ### Upgrade kubelet and kubectl -- Upgrade the kubelet and kubectl: - -{{< tabs name="k8s_kubelet_and_kubectl" >}} -{{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version - apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \ - apt-mark hold kubelet kubectl -{{% /tab %}} -{{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version - yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes -{{% /tab %}} -{{< /tabs >}} -
    - -- Restart the kubelet: - - ```shell - sudo systemctl daemon-reload - sudo systemctl restart kubelet - ``` +- Upgrade the kubelet and kubectl: + + {{< tabs name="k8s_kubelet_and_kubectl" >}} + {{% tab name="Ubuntu, Debian or HypriotOS" %}} + ```shell + # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version + apt-mark unhold kubelet kubectl && \ + apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \ + apt-mark hold kubelet kubectl + {{% /tab %}} + {{% tab name="CentOS, RHEL or Fedora" %}} + ```shell + # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version + yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes + ``` + {{% /tab %}} + {{< /tabs >}} +
    + +- Restart the kubelet: + + ```shell + sudo systemctl daemon-reload + sudo systemctl restart kubelet + ``` ### Uncordon the node -- Bring the node back online by marking it schedulable: +- Bring the node back online by marking it schedulable: - ```shell - # replace with the name of your node - kubectl uncordon - ``` + ```shell + # replace with the name of your node + kubectl uncordon + ``` ## Verify the status of the cluster -After the kubelet is upgraded on all nodes verify that all nodes are available again by running the following command -from anywhere kubectl can access the cluster: +After the kubelet is upgraded on all nodes verify that all nodes are available again by running +the following command from anywhere kubectl can access the cluster: ```shell kubectl get nodes @@ -296,6 +311,7 @@ This command is idempotent and eventually makes sure that the actual state is th To recover from a bad state, you can also run `kubeadm upgrade apply --force` without changing the version that your cluster is running. During upgrade kubeadm writes the following backup folders under `/etc/kubernetes/tmp`: + - `kubeadm-backup-etcd--
    diff --git a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md index ec6edb9cc7037..d95e752208fec 100644 --- a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md +++ b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md @@ -8,7 +8,7 @@ content_type: tutorial -This page provides a real world example of how to configure Redis using a ConfigMap and builds upon the [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) task. +This page provides a real world example of how to configure Redis using a ConfigMap and builds upon the [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) task. @@ -27,7 +27,7 @@ This page provides a real world example of how to configure Redis using a Config {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * The example shown on this page works with `kubectl` 1.14 and above. -* Understand [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/). +* Understand [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/). @@ -78,7 +78,7 @@ kubectl get pod/redis configmap/example-redis-config You should see the following output: -```shell +``` NAME READY STATUS RESTARTS AGE pod/redis 1/1 Running 0 8s diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html index 5301c6b7a14ef..2649ce4f9476b 100644 --- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -11,7 +11,7 @@ - +{{< katacoda-tutorial >}}
    diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html index 915304f9127e5..d8a525a6d515d 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -11,7 +11,7 @@ - +{{< katacoda-tutorial >}}
    diff --git a/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html index ad79ec5d7f60c..82b5f4bb332b5 100644 --- a/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -11,7 +11,7 @@ - +{{< katacoda-tutorial >}}
    diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html index 2b5d3aa365f37..ce5eeb455f966 100644 --- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -11,7 +11,7 @@ - +{{< katacoda-tutorial >}}
    diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html index 1996859e2e4f1..b362669c05287 100644 --- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -66,7 +66,7 @@

    Services and Labels

-            A Service routes traffic across a set of Pods. Services are the abstraction that allow pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) is handled by Kubernetes Services.
+            A Service routes traffic across a set of Pods. Services are the abstraction that allows pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) are handled by Kubernetes Services.

    Services match a set of Pods using labels and selectors, a grouping primitive that allows logical operation on objects in Kubernetes. Labels are key/value pairs attached to objects and can be used in any number of ways:

    • Designate objects for development, test, and production
    • diff --git a/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html index 3fedf79782a5b..ad01e64c02cb1 100644 --- a/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -11,7 +11,7 @@ - +{{< katacoda-tutorial >}}
      diff --git a/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html b/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html index 2e70d61d7412e..99184ddb3e130 100644 --- a/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html @@ -11,7 +11,7 @@ - +{{< katacoda-tutorial >}}
      diff --git a/content/en/docs/tutorials/services/source-ip.md b/content/en/docs/tutorials/services/source-ip.md index bb9a622c98d7f..9eab7538d4336 100644 --- a/content/en/docs/tutorials/services/source-ip.md +++ b/content/en/docs/tutorials/services/source-ip.md @@ -206,19 +206,8 @@ Note that these are not the correct client IPs, they're cluster internal IPs. Th Visually: -{{< mermaid >}} -graph LR; - client(client)-->node2[Node 2]; - node2-->client; - node2-. SNAT .->node1[Node 1]; - node1-. SNAT .->node2; - node1-->endpoint(Endpoint); - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - class node1,node2,endpoint k8s; - class client plain; -{{}} +{{< figure src="/docs/images/tutor-service-nodePort-fig01.svg" alt="source IP nodeport figure 01" class="diagram-large" caption="Figure. Source IP Type=NodePort using SNAT" link="https://mermaid.live/edit#pako:eNqNkV9rwyAUxb-K3LysYEqS_WFYKAzat9GHdW9zDxKvi9RoMIZtlH732ZjSbE970cu5v3s86hFqJxEYfHjRNeT5ZcUtIbXRaMNN2hZ5vrYRqt52cSXV-4iMSuwkZiYtyX739EqWaahMQ-V1qPxDVLNOvkYrO6fj2dupWMR2iiT6foOKdEZoS5Q2hmVSStoH7w7IMqXUVOefWoaG3XVftHbGeZYVRbH6ZXJ47CeL2-qhxvt_ucTe1SUlpuMN6CX12XeGpLdJiaMMFFr0rdAyvvfxjHEIDbbIgcVSohKDCRy4PUV06KQIuJU6OA9MCdMjBTEEt_-2NbDgB7xAGy3i97VJPP0ABRmcqg" >}} + To avoid this, Kubernetes has a feature to [preserve the client source IP](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip). @@ -262,20 +251,8 @@ This is what happens: Visually: -{{< mermaid >}} -graph TD; - client --> node1[Node 1]; - client(client) --x node2[Node 2]; - node1 --> endpoint(endpoint); - endpoint --> node1; - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - class node1,node2,endpoint k8s; - class client plain; -{{}} - +{{< figure src="/docs/images/tutor-service-nodePort-fig02.svg" alt="source IP nodeport figure 02" class="diagram-large" caption="Figure. Source IP Type=NodePort preserves client source IP address" link="" >}} ## Source IP for Services with `Type=LoadBalancer` diff --git a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index 480316a09c80c..d3c426d7894da 100644 --- a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -16,7 +16,7 @@ This tutorial shows you how to deploy a WordPress site and a MySQL database usin A [PersistentVolume](/docs/concepts/storage/persistent-volumes/) (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a [StorageClass](/docs/concepts/storage/storage-classes). A [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC) is a request for storage by a user that can be fulfilled by a PV. PersistentVolumes and PersistentVolumeClaims are independent from Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods. {{< warning >}} -This deployment is not suitable for production use cases, as it uses single instance WordPress and MySQL Pods. Consider using [WordPress Helm Chart](https://github.com/kubernetes/charts/tree/master/stable/wordpress) to deploy WordPress in production. 
+This deployment is not suitable for production use cases, as it uses single instance WordPress and MySQL Pods. Consider using [WordPress Helm Chart](https://github.com/bitnami/charts/tree/master/bitnami/wordpress) to deploy WordPress in production. {{< /warning >}} {{< note >}} @@ -236,7 +236,7 @@ Do not leave your WordPress installation on this page. If another user finds it, ## {{% heading "whatsnext" %}} -* Learn more about [Introspection and Debugging](/docs/tasks/debug/debug-application) +* Learn more about [Introspection and Debugging](/docs/tasks/debug/debug-application/debug-running-pod/) * Learn more about [Jobs](/docs/concepts/workloads/controllers/job/) * Learn more about [Port Forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) * Learn how to [Get a Shell to a Container](/docs/tasks/debug/debug-application/get-shell-running-container/) diff --git a/content/en/examples/application/mysql/mysql-configmap.yaml b/content/en/examples/application/mysql/mysql-configmap.yaml index 6aa5bfe4e5dc6..9adb0344a31a6 100644 --- a/content/en/examples/application/mysql/mysql-configmap.yaml +++ b/content/en/examples/application/mysql/mysql-configmap.yaml @@ -4,15 +4,14 @@ metadata: name: mysql labels: app: mysql + app.kubernetes.io/name: mysql data: primary.cnf: | # Apply this config only on the primary. [mysqld] log-bin - datadir=/var/lib/mysql/mysql replica.cnf: | # Apply this config only on replicas. [mysqld] super-read-only - datadir=/var/lib/mysql/mysql diff --git a/content/en/examples/application/mysql/mysql-services.yaml b/content/en/examples/application/mysql/mysql-services.yaml index 6743cf707ad4a..bc015066780c3 100644 --- a/content/en/examples/application/mysql/mysql-services.yaml +++ b/content/en/examples/application/mysql/mysql-services.yaml @@ -5,6 +5,7 @@ metadata: name: mysql labels: app: mysql + app.kubernetes.io/name: mysql spec: ports: - name: mysql @@ -21,6 +22,8 @@ metadata: name: mysql-read labels: app: mysql + app.kubernetes.io/name: mysql + readonly: "true" spec: ports: - name: mysql diff --git a/content/en/examples/application/mysql/mysql-statefulset.yaml b/content/en/examples/application/mysql/mysql-statefulset.yaml index bb61537fcf338..85563a2abc879 100644 --- a/content/en/examples/application/mysql/mysql-statefulset.yaml +++ b/content/en/examples/application/mysql/mysql-statefulset.yaml @@ -6,12 +6,14 @@ spec: selector: matchLabels: app: mysql + app.kubernetes.io/name: mysql serviceName: mysql replicas: 3 template: metadata: labels: app: mysql + app.kubernetes.io/name: mysql spec: initContainers: - name: init-mysql diff --git a/content/en/examples/controllers/job.yaml b/content/en/examples/controllers/job.yaml index b448f2eb81daf..a6e40bc778d6d 100644 --- a/content/en/examples/controllers/job.yaml +++ b/content/en/examples/controllers/job.yaml @@ -7,7 +7,7 @@ spec: spec: containers: - name: pi - image: perl + image: perl:5.34 command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never backoffLimit: 4 diff --git a/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml b/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml index a7d14b2d6f755..5dcc7693b67c8 100644 --- a/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml +++ b/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml @@ -8,10 +8,11 @@ spec: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - - key: kubernetes.io/os + - key: topology.kubernetes.io/zone operator: In 
values: - - linux + - antarctica-east1 + - antarctica-west1 preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: @@ -29,4 +30,4 @@ spec: - key-2 containers: - name: with-node-affinity - image: k8s.gcr.io/pause:2.0 \ No newline at end of file + image: k8s.gcr.io/pause:2.0 diff --git a/content/en/releases/_index.md b/content/en/releases/_index.md index d374f6eb5ab6a..689402cadf5cc 100644 --- a/content/en/releases/_index.md +++ b/content/en/releases/_index.md @@ -7,7 +7,7 @@ type: docs -The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). Kubernetes 1.18 and older received approximately 9 months of patch support. +The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}}, {{< skew currentVersionAddMinor -2 >}}). Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). Kubernetes 1.18 and older received approximately 9 months of patch support. Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology. @@ -22,6 +22,6 @@ More information in the [version skew policy](/releases/version-skew-policy/) do ## Upcoming Release -Check out the [schedule](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}}) for the upcoming **{{< skew nextMinorVersion >}}** Kubernetes release! +Check out the [schedule](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew currentVersionAddMinor 1 >}}) for the upcoming **{{< skew currentVersionAddMinor 1 >}}** Kubernetes release! ## Helpful Resources diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md index 144e74676b53b..91bdcc58bb91e 100644 --- a/content/en/releases/patch-releases.md +++ b/content/en/releases/patch-releases.md @@ -78,10 +78,10 @@ releases may also occur in between these. | Monthly Patch Release | Cherry Pick Deadline | Target date | | --------------------- | -------------------- | ----------- | -| May 2022 | 2022-05-20 | 2022-05-24 | -| June 2022 | 2022-06-10 | 2022-06-15 | | July 2022 | 2022-07-08 | 2022-07-13 | -| August 2022 | 2022-08-12 | 2022-08-16 | +| August 2022 | 2022-08-12 | 2022-08-17 | +| September 2022 | 2022-09-09 | 2022-09-14 | +| October 2022 | 2022-10-07 | 2022-10-12 | ## Detailed Release History for Active Branches @@ -93,6 +93,8 @@ End of Life for **1.24** is **2023-09-29** | PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE | |---------------|----------------------|-------------|------| +| 1.24.3 | 2022-07-08 | 2022-07-13 | | +| 1.24.2 | 2022-06-10 | 2022-06-15 | | | 1.24.1 | 2022-05-20 | 2022-05-24 | | ### 1.23 @@ -103,11 +105,13 @@ End of Life for **1.23** is **2023-02-28**. 
| Patch Release | Cherry Pick Deadline | Target Date | Note | |---------------|----------------------|-------------|------| +| 1.23.9 | 2022-07-08 | 2022-07-13 | | +| 1.23.8 | 2022-06-10 | 2022-06-15 | | | 1.23.7 | 2022-05-20 | 2022-05-24 | | | 1.23.6 | 2022-04-08 | 2022-04-13 | | | 1.23.5 | 2022-03-11 | 2022-03-16 | | | 1.23.4 | 2022-02-11 | 2022-02-16 | | -| 1.23.3 | 2022-01-24 | 2022-01-25 | [Out-of-Band Release](https://groups.google.com/u/2/a/kubernetes.io/g/dev/c/Xl1sm-CItaY) | +| 1.23.3 | 2022-01-24 | 2022-01-25 | [Out-of-Band Release](https://groups.google.com/a/kubernetes.io/g/dev/c/Xl1sm-CItaY) | | 1.23.2 | 2022-01-14 | 2022-01-19 | | | 1.23.1 | 2021-12-14 | 2021-12-16 | | @@ -119,6 +123,8 @@ End of Life for **1.22** is **2022-10-28** | Patch Release | Cherry Pick Deadline | Target Date | Note | |---------------|----------------------|-------------|------| +| 1.22.12 | 2022-07-08 | 2022-07-13 | | +| 1.22.11 | 2022-06-10 | 2022-06-15 | | | 1.22.10 | 2022-05-20 | 2022-05-24 | | | 1.22.9 | 2022-04-08 | 2022-04-13 | | | 1.22.8 | 2022-03-11 | 2022-03-16 | | @@ -130,34 +136,13 @@ End of Life for **1.22** is **2022-10-28** | 1.22.2 | 2021-09-10 | 2021-09-15 | | | 1.22.1 | 2021-08-16 | 2021-08-19 | | -### 1.21 - -**1.21** enters maintenance mode on **2022-04-28** - -End of Life for **1.21** is **2022-06-28** - -| Patch Release | Cherry Pick Deadline | Target Date | Note | -| ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- | -| 1.21.13 | 2022-05-20 | 2022-05-24 | | -| 1.21.12 | 2022-04-08 | 2022-04-13 | | -| 1.21.11 | 2022-03-11 | 2022-03-16 | | -| 1.21.10 | 2022-02-11 | 2022-02-16 | | -| 1.21.9 | 2022-01-14 | 2022-01-19 | | -| 1.21.8 | 2021-12-10 | 2021-12-15 | | -| 1.21.7 | 2021-11-12 | 2021-11-17 | | -| 1.21.6 | 2021-10-22 | 2021-10-27 | | -| 1.21.5 | 2021-09-10 | 2021-09-15 | | -| 1.21.4 | 2021-08-07 | 2021-08-11 | | -| 1.21.3 | 2021-07-10 | 2021-07-14 | | -| 1.21.2 | 2021-06-12 | 2021-06-16 | | -| 1.21.1 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) | - ## Non-Active Branch History These releases are no longer supported. | Minor Version | Final Patch Release | EOL Date | Note | | ------------- | ------------------- | ---------- | ---------------------------------------------------------------------- | +| 1.21 | 1.21.14 | 2022-06-28 | | | 1.20 | 1.20.15 | 2022-02-28 | | | 1.19 | 1.19.16 | 2021-10-28 | | | 1.18 | 1.18.20 | 2021-06-18 | Created to resolve regression introduced in 1.18.19 | diff --git a/content/en/releases/release-cycle.jpg b/content/en/releases/release-cycle.jpg new file mode 100644 index 0000000000000..8519fca6bd82d Binary files /dev/null and b/content/en/releases/release-cycle.jpg differ diff --git a/content/en/releases/release-lifecycle.jpg b/content/en/releases/release-lifecycle.jpg new file mode 100644 index 0000000000000..ac30788ba322a Binary files /dev/null and b/content/en/releases/release-lifecycle.jpg differ diff --git a/content/en/releases/release.md b/content/en/releases/release.md index 5542b412024e3..f69424b7f9f71 100644 --- a/content/en/releases/release.md +++ b/content/en/releases/release.md @@ -17,9 +17,9 @@ create an enhancement, issue, or pull request which targets a specific release milestone. 
- [TL;DR](#tldr) - - [Normal Dev (Weeks 1-8)](#normal-dev-weeks-1-8) - - [Code Freeze (Weeks 9-11)](#code-freeze-weeks-9-11) - - [Post-Release (Weeks 11+)](#post-release-weeks-11) + - [Normal Dev (Weeks 1-11)](#normal-dev-weeks-1-11) + - [Code Freeze (Weeks 12-14)](#code-freeze-weeks-12-14) + - [Post-Release (Weeks 14+)](#post-release-weeks-14+) - [Definitions](#definitions) - [The Release Cycle](#the-release-cycle) - [Removal Of Items From The Milestone](#removal-of-items-from-the-milestone) @@ -54,14 +54,14 @@ requirements exist when the target milestone is a prior release (see If you want your PR to get merged, it needs the following required labels and milestones, represented here by the Prow /commands it would take to add them: -### Normal Dev (Weeks 1-8) +### Normal Dev (Weeks 1-11) - /sig {name} - /kind {type} - /lgtm - /approved -### [Code Freeze][code-freeze] (Weeks 9-11) +### [Code Freeze][code-freeze] (Weeks 12-14) - /milestone {v1.y} - /sig {name} @@ -69,7 +69,7 @@ milestones, represented here by the Prow /commands it would take to add them: - /lgtm - /approved -### Post-Release (Weeks 11+) +### Post-Release (Weeks 14+) Return to 'Normal Dev' phase requirements: @@ -90,43 +90,43 @@ The general labeling process should be consistent across artifact types. ## Definitions -- _issue owners_: Creator, assignees, and user who moved the issue into a +- *issue owners*: Creator, assignees, and user who moved the issue into a release milestone -- _Release Team_: Each Kubernetes release has a team doing project management +- *Release Team*: Each Kubernetes release has a team doing project management tasks described [here][release-team]. The contact info for the team associated with any given release can be found [here](https://git.k8s.io/sig-release/releases/). -- _Y days_: Refers to business days +- *Y days*: Refers to business days -- _enhancement_: see "[Is My Thing an Enhancement?](https://git.k8s.io/enhancements/README.md#is-my-thing-an-enhancement)" +- *enhancement*: see "[Is My Thing an Enhancement?](https://git.k8s.io/enhancements/README.md#is-my-thing-an-enhancement)" -- _[Enhancements Freeze][enhancements-freeze]_: +- *[Enhancements Freeze][enhancements-freeze]*: the deadline by which [KEPs][keps] have to be completed in order for enhancements to be part of the current release -- _[Exception Request][exceptions]_: +- *[Exception Request][exceptions]*: The process of requesting an extension on the deadline for a particular Enhancement -- _[Code Freeze][code-freeze]_: +- *[Code Freeze][code-freeze]*: The period of ~4 weeks before the final release date, during which only critical bug fixes are merged into the release. -- _[Pruning](https://git.k8s.io/sig-release/releases/release_phases.md#pruning)_: +- *[Pruning](https://git.k8s.io/sig-release/releases/release_phases.md#pruning)*: The process of removing an Enhancement from a release milestone if it is not fully implemented or is otherwise considered not stable. -- _release milestone_: semantic version string or +- *release milestone*: semantic version string or [GitHub milestone](https://help.github.com/en/github/managing-your-work-on-github/associating-milestones-with-issues-and-pull-requests) referring to a release MAJOR.MINOR `vX.Y` version. See also - [release versioning](/contributors/design-proposals/release/versioning.md). + [release versioning](https://git.k8s.io/design-proposals-archive/release/versioning.md). -- _release branch_: Git branch `release-X.Y` created for the `vX.Y` milestone. 
+- *release branch*: Git branch `release-X.Y` created for the `vX.Y` milestone. Created at the time of the `vX.Y-rc.0` release and maintained after the release for approximately 12 months with `vX.Y.Z` patch releases. @@ -136,9 +136,9 @@ The general labeling process should be consistent across artifact types. ## The Release Cycle -![Image of one Kubernetes release cycle](release-cycle.png) +![Image of one Kubernetes release cycle](release-cycle.jpg) -Kubernetes releases currently happen approximately four times per year. +Kubernetes releases currently happen approximately three times per year. The release process can be thought of as having three main phases: @@ -161,7 +161,7 @@ conjunction with the Release Team's [Enhancements Lead](https://git.k8s.io/sig-r After Enhancements Freeze, tracking milestones on PRs and issues is important. Items within the milestone are used as a punchdown list to complete the -release. _On issues_, milestones must be applied correctly, via triage by the +release. *On issues*, milestones must be applied correctly, via triage by the SIG, so that [Release Team][release-team] can track bugs and enhancements (any enhancement-related issue needs a milestone). @@ -189,7 +189,7 @@ under that automation umbrella should be have a milestone applied. Implementation and bug fixing is ongoing across the cycle, but culminates in a code freeze period. -**[Code Freeze][code-freeze]** starts in week ~10 and continues for ~2 weeks. +**[Code Freeze][code-freeze]** starts in week ~12 and continues for ~2 weeks. Only critical bug fixes are accepted into the release codebase during this time. @@ -204,7 +204,7 @@ back to the release branch. The release is built from the release branch. Each release is part of a broader Kubernetes lifecycle: -![Image of Kubernetes release lifecycle spanning three releases](release-lifecycle.png) +![Image of Kubernetes release lifecycle spanning three releases](release-lifecycle.jpg) ## Removal Of Items From The Milestone @@ -355,11 +355,11 @@ issue kind labels must be set: - `kind/feature`: New functionality. - `kind/flake`: CI test case is showing intermittent failures. -[cherry-picks]: /community/blob/master/contributors/devel/sig-release/cherry-picks.md +[cherry-picks]: /contributors/devel/sig-release/cherry-picks.md [code-freeze]: https://git.k8s.io/sig-release/releases/release_phases.md#code-freeze [enhancements-freeze]: https://git.k8s.io/sig-release/releases/release_phases.md#enhancements-freeze [exceptions]: https://git.k8s.io/sig-release/releases/release_phases.md#exceptions [keps]: https://git.k8s.io/enhancements/keps -[release-managers]: https://git.k8s.io/sig-release/release-managers.md +[release-managers]: https://kubernetes.io/releases/release-managers/ [release-team]: https://git.k8s.io/sig-release/release-team -[sig-list]: /sig-list.md +[sig-list]: /sig-list.md \ No newline at end of file diff --git a/content/en/releases/version-skew-policy.md b/content/en/releases/version-skew-policy.md index 87f6cf2c6278a..730f892c80cc9 100644 --- a/content/en/releases/version-skew-policy.md +++ b/content/en/releases/version-skew-policy.md @@ -21,12 +21,12 @@ Specific cluster deployment tools may place additional restrictions on version s ## Supported versions Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology. 
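As a small illustration of the **x.y.z** convention (a sketch, not part of the policy text itself), the following shell snippet splits a version string into its components; the sample value is hypothetical:

```bash
# Illustrative only: decompose a Kubernetes version string into its
# MAJOR.MINOR.PATCH parts, following the Semantic Versioning convention.
v="1.23.7"                          # hypothetical example version
IFS=. read -r major minor patch <<<"${v#v}"   # also tolerates a leading "v"
echo "major=${major} minor=${minor} patch=${patch}"
```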
-For more information, see [Kubernetes Release Versioning](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning). +For more information, see [Kubernetes Release Versioning](https://git.k8s.io/design-proposals-archive/release/versioning.md#kubernetes-release-versioning). -The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support. +The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}}, {{< skew currentVersionAddMinor -2 >}}). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support. Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility. -Patch releases are cut from those branches at a [regular cadence](https://git.k8s.io/sig-release/releases/patch-releases.md#cadence), plus additional urgent releases, when required. +Patch releases are cut from those branches at a [regular cadence](https://kubernetes.io/releases/patch-releases/#cadence), plus additional urgent releases, when required. The [Release Managers](/releases/release-managers/) group owns this decision. @@ -40,8 +40,8 @@ In [highly-available (HA) clusters](/docs/setup/production-environment/tools/kub Example: -* newest `kube-apiserver` is at **{{< skew latestVersion >}}** -* other `kube-apiserver` instances are supported at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** +* newest `kube-apiserver` is at **{{< skew currentVersion >}}** +* other `kube-apiserver` instances are supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** ### kubelet @@ -49,8 +49,8 @@ Example: Example: -* `kube-apiserver` is at **{{< skew latestVersion >}}** -* `kubelet` is supported at **{{< skew latestVersion >}}**, **{{< skew prevMinorVersion >}}**, and **{{< skew oldestMinorVersion >}}** +* `kube-apiserver` is at **{{< skew currentVersion >}}** +* `kubelet` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, and **{{< skew currentVersionAddMinor -2 >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the allowed `kubelet` versions. 
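When auditing an existing cluster against this skew policy, one rough, illustrative check is to list each node's kubelet version next to the API server version (assumes `kubectl` access and `jq`; not an official verification tool):

```bash
# Show the API server version, then each node's kubelet version,
# so the skew between them can be compared by hand.
kubectl version -o json | jq -r '.serverVersion.gitVersion'
kubectl get nodes -o custom-columns='NODE:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion'
```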
@@ -58,8 +58,8 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: -* `kube-apiserver` instances are at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** -* `kubelet` is supported at **{{< skew prevMinorVersion >}}**, and **{{< skew oldestMinorVersion >}}** (**{{< skew latestVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew prevMinorVersion >}}**) +* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** +* `kubelet` is supported at **{{< skew currentVersionAddMinor -1 >}}**, and **{{< skew currentVersionAddMinor -2 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) ### kube-controller-manager, kube-scheduler, and cloud-controller-manager @@ -67,8 +67,8 @@ Example: Example: -* `kube-apiserver` is at **{{< skew latestVersion >}}** -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** +* `kube-apiserver` is at **{{< skew currentVersion >}}** +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, and these components can communicate with any `kube-apiserver` instance in the cluster (for example, via a load balancer), this narrows the allowed versions of these components. @@ -76,9 +76,9 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, and Example: -* `kube-apiserver` instances are at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** +* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** * `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer that can route to any `kube-apiserver` instance -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew prevMinorVersion >}}** (**{{< skew latestVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew prevMinorVersion >}}**) +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew currentVersionAddMinor -1 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) ### kubectl @@ -86,8 +86,8 @@ Example: Example: -* `kube-apiserver` is at **{{< skew latestVersion >}}** -* `kubectl` is supported at **{{< skew nextMinorVersion >}}**, **{{< skew latestVersion >}}**, and **{{< skew prevMinorVersion >}}** +* `kube-apiserver` is at **{{< skew currentVersion >}}** +* `kubectl` is supported at **{{< skew currentVersionAddMinor 1 >}}**, **{{< skew currentVersion >}}**, and **{{< skew currentVersionAddMinor -1 >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the supported `kubectl` versions. 
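For clusters where these components run as static Pods in `kube-system` (a kubeadm-style layout, which is an assumption here), their versions can be inferred from the container images; a hedged sketch:

```bash
# Illustrative check of control plane component versions; assumes the
# components run as Pods in the kube-system namespace.
kubectl -n kube-system get pods \
  -o custom-columns='POD:.metadata.name,IMAGE:.spec.containers[0].image' \
  | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler'
```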
@@ -95,27 +95,27 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: -* `kube-apiserver` instances are at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** -* `kubectl` is supported at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** (other versions would be more than one minor version skewed from one of the `kube-apiserver` components) +* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** +* `kubectl` is supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** (other versions would be more than one minor version skewed from one of the `kube-apiserver` components) ## Supported component upgrade order The supported version skew between components has implications on the order in which components must be upgraded. -This section describes the order in which components must be upgraded to transition an existing cluster from version **{{< skew prevMinorVersion >}}** to version **{{< skew latestVersion >}}**. +This section describes the order in which components must be upgraded to transition an existing cluster from version **{{< skew currentVersionAddMinor -1 >}}** to version **{{< skew currentVersion >}}**. ### kube-apiserver Pre-requisites: -* In a single-instance cluster, the existing `kube-apiserver` instance is **{{< skew prevMinorVersion >}}** -* In an HA cluster, all `kube-apiserver` instances are at **{{< skew prevMinorVersion >}}** or **{{< skew latestVersion >}}** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance) -* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that communicate with this server are at version **{{< skew prevMinorVersion >}}** (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version) -* `kubelet` instances on all nodes are at version **{{< skew prevMinorVersion >}}** or **{{< skew oldestMinorVersion >}}** (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version) +* In a single-instance cluster, the existing `kube-apiserver` instance is **{{< skew currentVersionAddMinor -1 >}}** +* In an HA cluster, all `kube-apiserver` instances are at **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersion >}}** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance) +* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that communicate with this server are at version **{{< skew currentVersionAddMinor -1 >}}** (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version) +* `kubelet` instances on all nodes are at version **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersionAddMinor -2 >}}** (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version) * Registered admission webhooks are able to handle the data the new `kube-apiserver` instance will send them: - * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include any new versions of REST resources added in **{{< skew latestVersion >}}** (or use the [`matchPolicy: Equivalent` 
option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+) - * The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added to existing versions in **{{< skew latestVersion >}}** + * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include any new versions of REST resources added in **{{< skew currentVersion >}}** (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+) + * The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added to existing versions in **{{< skew currentVersion >}}** -Upgrade `kube-apiserver` to **{{< skew latestVersion >}}** +Upgrade `kube-apiserver` to **{{< skew currentVersion >}}** {{< note >}} Project policies for [API deprecation](/docs/reference/using-api/deprecation-policy/) and @@ -127,17 +127,17 @@ require `kube-apiserver` to not skip minor versions when upgrading, even in sing Pre-requisites: -* The `kube-apiserver` instances these components communicate with are at **{{< skew latestVersion >}}** (in HA clusters in which these control plane components can communicate with any `kube-apiserver` instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components) +* The `kube-apiserver` instances these components communicate with are at **{{< skew currentVersion >}}** (in HA clusters in which these control plane components can communicate with any `kube-apiserver` instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components) -Upgrade `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` to **{{< skew latestVersion >}}** +Upgrade `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` to **{{< skew currentVersion >}}** ### kubelet Pre-requisites: -* The `kube-apiserver` instances the `kubelet` communicates with are at **{{< skew latestVersion >}}** +* The `kube-apiserver` instances the `kubelet` communicates with are at **{{< skew currentVersion >}}** -Optionally upgrade `kubelet` instances to **{{< skew latestVersion >}}** (or they can be left at **{{< skew prevMinorVersion >}}** or **{{< skew oldestMinorVersion >}}**) +Optionally upgrade `kubelet` instances to **{{< skew currentVersion >}}** (or they can be left at **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersionAddMinor -2 >}}**) {{< note >}} Before performing a minor version `kubelet` upgrade, [drain](/docs/tasks/administer-cluster/safely-drain-node/) pods from that node. @@ -159,7 +159,7 @@ Running a cluster with `kubelet` instances that are persistently two minor versi Example: -If `kube-proxy` version is **{{< skew oldestMinorVersion >}}**: +If `kube-proxy` version is **{{< skew currentVersionAddMinor -2 >}}**: -* `kubelet` version must be at the same minor version as **{{< skew oldestMinorVersion >}}**. -* `kube-apiserver` version must be between **{{< skew oldestMinorVersion >}}** and **{{< skew latestVersion >}}**, inclusive. +* `kubelet` version must be at the same minor version as **{{< skew currentVersionAddMinor -2 >}}**. +* `kube-apiserver` version must be between **{{< skew currentVersionAddMinor -2 >}}** and **{{< skew currentVersion >}}**, inclusive. 
diff --git a/content/es/docs/_index.md b/content/es/docs/_index.md index a5cbf56e30d44..caf9a58d7a9f0 100644 --- a/content/es/docs/_index.md +++ b/content/es/docs/_index.md @@ -16,7 +16,7 @@ Como podrá comprobar, la mayor parte de la documentación aún está disponible -Si quiere participar, puede entrar al canal de Slack [#kubernets-docs-es](http://slack.kubernetes.io/) y formar parte del equipo detrás de la localización. +Si quiere participar, puede entrar al canal de Slack [#kubernetes-docs-es](http://slack.kubernetes.io/) y formar parte del equipo detrás de la localización. También puede pasar por el canal para solicitar la traducción de alguna página en concreto o reportar algún error que se haya podido encontrar. ¡Cualquier aportación será bien recibida! diff --git a/content/es/docs/concepts/configuration/secret.md b/content/es/docs/concepts/configuration/secret.md index 7120f0476bb9d..8fd08c0285486 100644 --- a/content/es/docs/concepts/configuration/secret.md +++ b/content/es/docs/concepts/configuration/secret.md @@ -862,7 +862,7 @@ tu debes usar `ls -la` para verlos al enumerar los contenidos del directorio. ### Caso de uso: Secret visible para un contenedor en un pod -Considere un programa que necesita manejar solicitudes HTTP, hacer una lógicca empresarial compleja y luego firmar algunos mensajes con un HMAC. Debido a que tiene una lógica de aplicación compleja, puede haber una vulnerabilidad de lectura remota de archivos inadvertida en el servidor, lo que podría exponer la clave privada a un atacante. +Considere un programa que necesita manejar solicitudes HTTP, hacer una lógica empresarial compleja y luego firmar algunos mensajes con un HMAC. Debido a que tiene una lógica de aplicación compleja, puede haber una vulnerabilidad de lectura remota de archivos inadvertida en el servidor, lo que podría exponer la clave privada a un atacante. Esto podría dividirse en dos procesos en dos contenedores: un contenedor de frontend que maneja la interacción del usuario y la lógica empresarial. pero que no puede ver la clave privada; y un contenedor de firmante que puede ver la clave privada, y responde a solicitudes de firma simples del frontend (ejemplo, a través de redes de localhost). diff --git a/content/es/docs/concepts/workloads/pods/init-containers.md b/content/es/docs/concepts/workloads/pods/init-containers.md index fad0220899ea1..fafb6ae2f608e 100644 --- a/content/es/docs/concepts/workloads/pods/init-containers.md +++ b/content/es/docs/concepts/workloads/pods/init-containers.md @@ -338,4 +338,4 @@ Kubernetes, consulta la documentación de la versión que estás utilizando. 
## {{% heading "whatsnext" %}} * Lee acerca de [creando un Pod que tiene un contenedor de inicialización](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) -* Aprende cómo [depurar contenedores de inicialización](/docs/tasks/debug-application-cluster/debug-init-containers/) +* Aprende cómo [depurar contenedores de inicialización](/docs/tasks/debug/debug-application/debug-init-containers/) diff --git a/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md b/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md index fd470c26332e8..feefb194edd26 100644 --- a/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md +++ b/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md @@ -44,7 +44,7 @@ Si tiene un alias para kubectl, puede extender el completado del shell para trab ```bash echo 'alias k=kubectl' >>~/.bashrc -echo 'complete -F __start_kubectl k' >>~/.bashrc +echo 'complete -o default -F __start_kubectl k' >>~/.bashrc ``` {{< note >}} diff --git a/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md index 5c6dd3d8e63b8..00437e7a67377 100644 --- a/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md +++ b/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md @@ -76,7 +76,7 @@ Ahora debe asegurarse de que el script de completado de kubectl se obtenga en to ```bash echo 'alias k=kubectl' >>~/.bash_profile - echo 'complete -F __start_kubectl k' >>~/.bash_profile + echo 'complete -o default -F __start_kubectl k' >>~/.bash_profile ``` - Si instaló kubectl con Homebrew (como se explica [aquí](/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)), entonces el script de completado de kubectl ya debería estar en `/usr/local/etc/bash_completion.d/kubectl`. En ese caso, no necesita hacer nada. diff --git a/content/fr/docs/concepts/workloads/pods/init-containers.md b/content/fr/docs/concepts/workloads/pods/init-containers.md index fb4b6f3270174..2af4306b0ba93 100644 --- a/content/fr/docs/concepts/workloads/pods/init-containers.md +++ b/content/fr/docs/concepts/workloads/pods/init-containers.md @@ -325,6 +325,6 @@ redémarrage du conteneur d'application. * Lire à propos de la [création d'un Pod ayant un init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container) -* Apprendre à [debugger les init containers](/docs/tasks/debug-application-cluster/debug-init-containers/) +* Apprendre à [debugger les init containers](/docs/tasks/debug/debug-application/debug-init-containers/) diff --git a/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md index 284024b425a07..297cbd700ea21 100644 --- a/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -212,7 +212,7 @@ vous pouvez utiliser efficacement les ressources CPU disponibles sur les Nœuds En gardant une demande faible de CPU de pod, vous donnez au Pod une bonne chance d'être ordonnancé. En ayant une limite CPU supérieure à la demande de CPU, vous accomplissez deux choses : -* Le Pod peut avoir des pics d'activité où il utilise les ressources CPU qui se sont déjà disponible. 
+* Le Pod peut avoir des pics d'activité où il utilise les ressources CPU qui sont déjà disponibles. * La quantité de ressources CPU qu'un Pod peut utiliser pendant une pic d'activité est limitée à une quantité raisonnable. ## Nettoyage diff --git a/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 5902ca926d3c6..2aa904144f2e7 100644 --- a/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -37,7 +37,7 @@ Le champ `periodSeconds` spécifie que le Kubelet doit effectuer un check de liv Au démarrage, le conteneur exécute cette commande : ```shell -/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600" +/bin/sh -c "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600" ``` Pour les 30 premières secondes de la vie du conteneur, il y a un fichier `/tmp/healthy`. diff --git a/content/fr/docs/tasks/configure-pod-container/configure-pod-initialization.md b/content/fr/docs/tasks/configure-pod-container/configure-pod-initialization.md index 6d1ca96b3173e..eeb33a0d4b8c4 100644 --- a/content/fr/docs/tasks/configure-pod-container/configure-pod-initialization.md +++ b/content/fr/docs/tasks/configure-pod-container/configure-pod-initialization.md @@ -81,7 +81,7 @@ La sortie montre que nginx sert la page web qui a été écrite par le conteneur [communiquer entre conteneurs fonctionnant dans le même Pod](/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/). * Pour en savoir plus sur [Init Conteneurs](/docs/concepts/workloads/pods/init-containers/). * Pour en savoir plus sur [Volumes](/docs/concepts/storage/volumes/). -* Pour en savoir plus sur [Débogage des Init Conteneurs](/docs/tasks/debug-application-cluster/debug-init-containers/) +* Pour en savoir plus sur [Débogage des Init Conteneurs](/docs/tasks/debug/debug-application/debug-init-containers/) diff --git a/content/fr/examples/pods/probe/exec-liveness.yaml b/content/fr/examples/pods/probe/exec-liveness.yaml index 07bf75f85c6f3..6a9c9b3213718 100644 --- a/content/fr/examples/pods/probe/exec-liveness.yaml +++ b/content/fr/examples/pods/probe/exec-liveness.yaml @@ -11,7 +11,7 @@ spec: args: - /bin/sh - -c - - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 + - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600 livenessProbe: exec: command: diff --git a/content/fr/includes/default-storage-class-prereqs.md b/content/fr/includes/default-storage-class-prereqs.md index c0a21bec17dc0..eefd9fcf90ed2 100644 --- a/content/fr/includes/default-storage-class-prereqs.md +++ b/content/fr/includes/default-storage-class-prereqs.md @@ -1 +1 @@ -Vous devez disposer soit d'un fournisseur PersistentVolume dynamique avec une valeur par défaut [StorageClass](/docs/concepts/storage/storage-classes/), soit préparer [un PersistentVolumes statique](/docs/user-guide/persistent-volumes/#provisioning) pour satisfaire les [PersistentVolumeClaims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) utilisés ici. 
+Vous devez disposer soit d'un fournisseur PersistentVolume dynamique avec une valeur par défaut [StorageClass](/docs/concepts/storage/storage-classes/), soit préparer [un PersistentVolumes statique](/docs/concepts/storage/persistent-volumes/#provisioning) pour satisfaire les [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) utilisés ici. diff --git a/content/id/docs/concepts/workloads/pods/disruptions.md b/content/id/docs/concepts/workloads/pods/disruptions.md index 7a09eed3a502f..f466bc6300ac5 100644 --- a/content/id/docs/concepts/workloads/pods/disruptions.md +++ b/content/id/docs/concepts/workloads/pods/disruptions.md @@ -67,7 +67,7 @@ Kubernetes menawarkan fitur-fitur untuk membantu menjalankan aplikasi-aplikasi d Pemilik aplikasi dapat membuat objek `PodDisruptionBudget` (PDB) untuk setiap aplikasi. Sebuah PDB membatasi jumlah Pod yang boleh mati secara bersamaan pada aplikasi yang direplikasi dikarenakan disrupsi yang disengaja. Misalnya, sebuah aplikasi yang bekerja secara _quorum_ mau memastikan bahwa jumlah replika yang berjalan tidak jatuh ke bawah yang dibutuhkan untuk membentuk sebuah _quorum_. Contoh lainnya, sebuah _front-end_ web mungkin perlu memastikan bahwa jumlah replika yang melayani trafik tidak pernah turun ke total persentase yang telah ditentukan. -Administrator klaster dan penyedia layanan Kubernetes sebaiknya menggunakan alat-alat yang menghormati PDB dengan cara berkomunikasi dengan [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api) dari pada menghapus Pod atau Deployment secara langsung. Contohnya adalah perintah `kubectl drain` dan skrip pembaruan Kubernets-on-GCE (`cluster/gce/upgrade.sh`) +Administrator klaster dan penyedia layanan Kubernetes sebaiknya menggunakan alat-alat yang menghormati PDB dengan cara berkomunikasi dengan [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api) dari pada menghapus Pod atau Deployment secara langsung. Contohnya adalah perintah `kubectl drain` dan skrip pembaruan Kubernetes-on-GCE (`cluster/gce/upgrade.sh`) Saat seorang administrator klaster ingin melakukan _drain_ terhadap sebuah node, ia akan menggunakan perintah `kubectl drain`. Alat tersebut mencoba untuk "mengusir" semua Pod di node tersebut. Permintaan untuk mengusir Pod tersebut mungkin ditolak untuk sementara, dan alat tersebut akan mencoba ulang permintaannya secara periodik hingga semua Pod dihapus, atau hingga batas waktu yang ditentukan telah dicapai. diff --git a/content/id/docs/contribute/participate/_index.md b/content/id/docs/contribute/participate/_index.md index 72c561432b5b9..cd47d6230971d 100644 --- a/content/id/docs/contribute/participate/_index.md +++ b/content/id/docs/contribute/participate/_index.md @@ -71,8 +71,8 @@ dua buah [prow _plugin_](https://github.com/kubernetes/test-infra/tree/master/pr - approve Kedua _plugin_ menggunakan berkas -[OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) dan -[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS_ALIASES) +[OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) dan +[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) dalam level teratas dari repositori GitHub `kubernetes/website` untuk mengontrol bagaimana prow bekerja di dalam repositori. 
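Returning to the `kubectl drain` workflow mentioned in the disruptions page above, here is a minimal sketch of draining and restoring a node in a way that honors PodDisruptionBudgets, since evictions go through the Eviction API (the node name is hypothetical):

```bash
# Illustrative drain/uncordon cycle; "node-1" is a placeholder node name.
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data --timeout=5m
# ... perform maintenance on the node ...
kubectl uncordon node-1
```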
diff --git a/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md index aeeeab1001d84..67b451db52b34 100644 --- a/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md +++ b/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -160,7 +160,7 @@ Nama layanan adalah `kube-dns` baik untuk CoreDNS maupun kube-dns. {{< /note >}} Jika kamu telah membuat Service atau seharusnya Service telah dibuat secara bawaan namun ternyata tidak muncul, lihat -[_debugging_ Service](/docs/tasks/debug-application-cluster/debug-service/) untuk informasi lebih lanjut. +[_debugging_ Service](/docs/tasks/debug/debug-application/debug-service/) untuk informasi lebih lanjut. ### Apakah endpoint DNS telah ekspos? @@ -175,7 +175,7 @@ kube-dns 10.180.3.17:53,10.180.3.17:53 1h ``` Jika kamu tidak melihat _endpoint_, lihat bagian _endpoint_ pada dokumentasi -[_debugging_ Service](/docs/tasks/debug-application-cluster/debug-service/). +[_debugging_ Service](/docs/tasks/debug/debug-application/debug-service/). Untuk tambahan contoh Kubernetes DNS, lihat [contoh cluster-dns](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns) pada repositori Kubernetes GitHub. diff --git a/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 934f6178cd9e5..96b08d1aca0b7 100644 --- a/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -59,7 +59,7 @@ kode selain 0, maka kubelet akan mematikan Container dan mengulangnya kembali. Saat dimulai, Container akan menjalankan perintah berikut: ```shell -/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600" +/bin/sh -c "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600" ``` Container memiliki berkas `/tmp/healthy` pada saat 30 detik pertama setelah dijalankan. diff --git a/content/id/docs/tasks/configure-pod-container/static-pod.md b/content/id/docs/tasks/configure-pod-container/static-pod.md index 6088b458db837..b13c95608b60b 100644 --- a/content/id/docs/tasks/configure-pod-container/static-pod.md +++ b/content/id/docs/tasks/configure-pod-container/static-pod.md @@ -33,7 +33,7 @@ sebuah {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}. {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -Laman ini mengasumsikan kamu menggunakan {{< glossary_tooltip term_id="docker" >}} +Laman ini mengasumsikan kamu menggunakan {{< glossary_tooltip term_id="cri-o" >}} untuk menjalankan Pod, dan Node kamu berjalan menggunakan sistem operasi Fedora. Instruksi untuk distribusi lain atau instalasi Kubernetes mungkin berbeda. @@ -90,23 +90,23 @@ Sebagai contoh, ini cara untuk memulai server web sederhana sebagai Pod statis: 3. Atur kubelet pada Node untuk menggunakan direktori ini dengan menjalankannya menggunakan argumen `--pod-manifest-path=/etc/kubelet.d/`. Pada Fedora, ubah berkas `/etc/kubernetes/kubelet` dengan menambahkan baris berikut: - ``` - KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/" - ``` - atau tambahkan _field_ `staticPodPath: ` pada [berkas konfigurasi kubelet](/docs/tasks/administer-cluster/kubelet-config-file). 
+ ``` + KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/" + ``` + atau tambahkan _field_ `staticPodPath: ` pada [berkas konfigurasi kubelet](/docs/tasks/administer-cluster/kubelet-config-file). 4. Jalankan ulang kubelet. Pada Fedora, kamu dapat menjalankan: - ```shell - # Jalankan perintah berikut pada Node tempat kubelet berjalan - systemctl restart kubelet - ``` + ```shell + # Jalankan perintah berikut pada Node tempat kubelet berjalan + systemctl restart kubelet + ``` ### Manifes Pod statis pada Web {#konfigurasi-melalui-http} Berkas yang ditentukan pada argumen `--manifest-url=` akan diunduh oleh kubelet secara berkala dan kubelet akan menginterpretasinya sebagai sebuah berkas JSON/YAML yang berisikan definisi Pod. -Mirip dengan cara kerja [manifes pada _filesystem_](##konfigurasi-melalui-berkas-sistem), +Mirip dengan cara kerja [manifes pada _filesystem_](#konfigurasi-melalui-berkas-sistem), kubelet akan mengambil manifes berdasarkan jadwal. Jika ada perubahan pada daftar Pod statis, maka kubelet akan menerapkannya. @@ -153,12 +153,12 @@ akan dijalankan. Kamu dapat melihat Container yang berjalan (termasuk Pod statis) dengan menjalankan (pada Node): ```shell # Jalankan perintah ini pada Node tempat kubelet berjalan -docker ps +crictl ps ``` Keluarannya kira-kira seperti berikut: -``` +```console CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c ``` @@ -169,8 +169,8 @@ Kamu dapat melihat Pod _mirror_ tersebut pada API server: kubectl get pods ``` ``` -NAME READY STATUS RESTARTS AGE -static-web-my-node1 1/1 Running 0 2m +NAME READY STATUS RESTARTS AGE +static-web 1/1 Running 0 2m ``` {{< note >}} @@ -189,18 +189,18 @@ Kamu dapat mencoba untuk menggunakan kubelet untuk menghapus Pod _mirror_ terseb namun kubelet tidak akan menghapus Pod statis: ```shell -kubectl delete pod static-web-my-node1 +kubectl delete pod static-web ``` ``` -pod "static-web-my-node1" deleted +pod "static-web" deleted ``` Kamu akan melihat bahwa Pod tersebut tetap berjalan: ```shell kubectl get pods ``` ``` -NAME READY STATUS RESTARTS AGE -static-web-my-node1 1/1 Running 0 12s +NAME READY STATUS RESTARTS AGE +static-web 1/1 Running 0 4s ``` Kembali ke Node tempat kubelet berjalan, kamu dapat mencoba menghentikan Container @@ -210,13 +210,13 @@ secara otomatis: ```shell # Jalankan perintah ini pada Node tempat kubelet berjalan -docker stop f6d05272b57e # ganti dengan ID pada Container-mu +crictl stop 129fd7d382018 # ganti dengan ID pada Container-mu sleep 20 -docker ps +crictl ps ``` -``` -CONTAINER ID IMAGE COMMAND CREATED ... -5b920cbaf8b1 nginx:latest "nginx -g 'daemon of 2 seconds ago ... +```console +CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID +89db4553e1eeb docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106 ``` ## Penambahan dan pengurangan secara dinamis pada Pod statis @@ -231,13 +231,13 @@ Pod sesuai dengan penambahan/pengurangan berkas pada direktori tersebut. # mv /etc/kubelet.d/static-web.yaml /tmp sleep 20 -docker ps +crictl ps # Kamu mendapatkan bahwa tidak ada Container nginx yang berjalan mv /tmp/static-web.yaml /etc/kubelet.d/ sleep 20 -docker ps -``` +crictl ps ``` -CONTAINER ID IMAGE COMMAND CREATED ... 
-e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago +```console +CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID +f427638871c35 docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106 ``` diff --git a/content/id/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/id/docs/tasks/run-application/horizontal-pod-autoscale.md index cd82ded0b18df..c9b74657baa67 100644 --- a/content/id/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/id/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -281,9 +281,9 @@ mengakses API ini, administrator klaster harus memastikan bahwa: `false` untuk mengubah ke *autoscaling* berdasarkan Heapster, dimana ini sudah tidak didukung lagi. Untuk informasi lebih lanjut mengenai metrik-metrik ini dan bagaimana perbedaan setiap metrik, perhatikan proposal -desain untuk [HPA V2](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/hpa-v2.md), -[custom.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md) -dan [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md). +desain untuk [HPA V2](https://github.com/kubernetes/design-proposals-archive/blob/main/autoscaling/hpa-v2.md), +[custom.metrics.k8s.io](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/custom-metrics-api.md) +dan [external.metrics.k8s.io](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/external-metrics-api.md). Untuk contoh bagaimana menggunakan metrik-metrik ini, perhatikan [panduan penggunaan metrik khusus](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics) dan [panduan penggunaan metrik eksternal](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects). diff --git a/content/id/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/id/docs/tutorials/kubernetes-basics/explore/explore-intro.html index a6381b3275490..1b3b494b1afdf 100644 --- a/content/id/docs/tutorials/kubernetes-basics/explore/explore-intro.html +++ b/content/id/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -76,9 +76,9 @@

      Ikhtisar Pod

      Node

      Sebuah Pod selalu berjalan dalam sebuah Node. Node merupakan sebuah mesin pekerja (worker) di Kubernetes dan mungkin merupakan mesin virtual ataupun fisik, tergantung dari klaster. Tiap Node dikelola oleh control plane. Satu Node dapat memiliki beberapa Pod, dan control plane Kubernetes yang otomatis menangani penjadwalan pod seluruh Node-Node dalam klaster. Penjadwalan otomatis oleh control plane memperhitungkan tersedianya sumber daya tiap Node.

-     Tiap Node Kuberbetes menjalankan setidaknya:
+     Tiap Node Kubernetes menjalankan setidaknya:

-     • Kubelet, satu proses yang bertanggung jawab untuk berkomunikasi antara control plane Kuberneter dan Node; ini juga mengelola Pod-Pod dan kontainer-kontainer yang berjalan di sebuah mesin.
+     • Kubelet, satu proses yang bertanggung jawab untuk berkomunikasi antara control plane Kubernetes dan Node; ini juga mengelola Pod-Pod dan kontainer-kontainer yang berjalan di sebuah mesin.
      • Satu container runtime, seperti Docker, bertanggung jawab untuk menarik image kontainer dari register, membuka kontainer, dan menjalankan aplikasi.
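A hedged way to see which container runtime each Node in a cluster reports (illustrative only; assumes `kubectl` access to a running cluster):

```bash
# List every Node together with the container runtime version it reports.
kubectl get nodes -o custom-columns='NODE:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion'
```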
      diff --git a/content/id/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml b/content/id/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml index 87bd198cfdab7..ac19efe4a2350 100644 --- a/content/id/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml +++ b/content/id/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: count - image: busybox + image: busybox:1.28 args: - /bin/sh - -c @@ -22,14 +22,14 @@ spec: - name: varlog mountPath: /var/log - name: count-log-1 - image: busybox - args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log'] + image: busybox:1.28 + args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log'] volumeMounts: - name: varlog mountPath: /var/log - name: count-log-2 - image: busybox - args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log'] + image: busybox:1.28 + args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log'] volumeMounts: - name: varlog mountPath: /var/log diff --git a/content/id/examples/pods/probe/exec-liveness.yaml b/content/id/examples/pods/probe/exec-liveness.yaml index 07bf75f85c6f3..6a9c9b3213718 100644 --- a/content/id/examples/pods/probe/exec-liveness.yaml +++ b/content/id/examples/pods/probe/exec-liveness.yaml @@ -11,7 +11,7 @@ spec: args: - /bin/sh - -c - - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 + - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600 livenessProbe: exec: command: diff --git a/content/it/docs/concepts/overview/components.md b/content/it/docs/concepts/overview/components.md index 906d1f701336c..2734b38674639 100644 --- a/content/it/docs/concepts/overview/components.md +++ b/content/it/docs/concepts/overview/components.md @@ -1,5 +1,7 @@ --- title: I componenti di Kubernetes +description: > + Un cluster di Kubernetes è costituito da un insieme di componenti che sono, come minimo, un Control Plane e uno o più sistemi di elaborazione, detti nodi. content_type: concept weight: 20 card: diff --git a/content/it/docs/concepts/overview/kubernetes-api.md b/content/it/docs/concepts/overview/kubernetes-api.md index 5214bea53a330..78022336e84d5 100644 --- a/content/it/docs/concepts/overview/kubernetes-api.md +++ b/content/it/docs/concepts/overview/kubernetes-api.md @@ -1,5 +1,7 @@ --- -title: Le API di Kubernetes +title: Le API di Kubernetes +description: > + Le API di Kubernetes ti permettono di interrogare e manipolare lo stato degli oggetti in Kubernetes. Il cuore del Control Plane di Kubernetes è l'API server e le API HTTP che esso espone. Ogni entità o componente che si interfaccia con il cluster (gli utenti, le singole parti del tuo cluster, i componenti esterni), comunica attraverso l'API server. content_type: concept weight: 30 card: diff --git a/content/it/training/_index.html b/content/it/training/_index.html index a8eee4e468d43..051fc87557dbc 100644 --- a/content/it/training/_index.html +++ b/content/it/training/_index.html @@ -95,7 +95,7 @@
      Certified Kubernetes Administrator (CKA)

-     Il programma "Certified Kubernetes Administrator" assicura che la persona ha le capacità, conoscenze, e competenze per operare come amministratore di Kubernets.
+     Il programma "Certified Kubernetes Administrator" assicura che la persona ha le capacità, conoscenze, e competenze per operare come amministratore di Kubernetes.

      Vai alla certificazione diff --git a/content/ja/docs/concepts/cluster-administration/networking.md b/content/ja/docs/concepts/cluster-administration/networking.md index 444e7ef3d1eb5..37ce7102bce2e 100644 --- a/content/ja/docs/concepts/cluster-administration/networking.md +++ b/content/ja/docs/concepts/cluster-administration/networking.md @@ -88,9 +88,9 @@ Details on how the AOS system works can be accessed here: https://www.apstra.com さらに、このCNIは[ネットワークポリシーの適用のためにCalico](https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/calico.html)と一緒に実行できます。AWS VPC CNIプロジェクトは、[GitHubのドキュメント](https://github.com/aws/amazon-vpc-cni-k8s)とともにオープンソースで公開されています。 ### Azure CNI for Kubernetes -[Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview) is an [open source](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) plugin that integrates Kubernetes Pods with an Azure Virtual Network (also known as VNet) providing network performance at par with VMs. Pods can connect to peered VNet and to on-premises over Express Route or site-to-site VPN and are also directly reachable from these networks. Pods can access Azure services, such as storage and SQL, that are protected by Service Endpoints or Private Link. You can use VNet security policies and routing to filter Pod traffic. The plugin assigns VNet IPs to Pods by utilizing a pool of secondary IPs pre-configured on the Network Interface of a Kubernetes node. +[Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview)は、Kubernetes PodをAzure仮想ネットワーク(VNetとも呼ばれます)と統合する[オープンソース](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md)プラグインで、VMと同等のネットワーク パフォーマンスを提供します。Pod は、ExpressRouteまたはサイト間VPN経由でピアリングされたVNetおよびオンプレミスに接続でき、これらのネットワークから直接アクセスすることもできます。Podは、サービスエンドポイントまたはプライベートリンクによって保護されているストレージやSQLなどのAzureサービスにアクセスできます。VNetセキュリティポリシーとルーティングを使用して、Podトラフィックをフィルター処理できます。プラグインは、Kubernetesノードのネットワークインターフェイスで事前に構成されたセカンダリIPのプールを利用して、VNet IPをPodに割り当てます。 -Azure CNI is available natively in the [Azure Kubernetes Service (AKS)] (https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni). 
+Azure CNIは、[Azure Kubernetes Service (AKS)] (https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni)でネイティブに利用できます。 ### Big Cloud Fabric from Big Switch Networks diff --git a/content/ja/docs/concepts/configuration/manage-resources-containers.md b/content/ja/docs/concepts/configuration/manage-resources-containers.md index 6f699bb4779e7..d9e5a13b4190e 100644 --- a/content/ja/docs/concepts/configuration/manage-resources-containers.md +++ b/content/ja/docs/concepts/configuration/manage-resources-containers.md @@ -17,7 +17,7 @@ Pod内のコンテナのリソース*要求*を指定すると、スケジュー -## 要求と制限 +## 要求と制限 {#requests-and-limits} Podが動作しているNodeに利用可能なリソースが十分にある場合、そのリソースの`要求`が指定するよりも多くのリソースをコンテナが使用することが許可されます ただし、コンテナはそのリソースの`制限`を超えて使用することはできません。 @@ -33,7 +33,7 @@ Podが動作しているNodeに利用可能なリソースが十分にある場 コンテナが自身のメモリー制限を指定しているが、メモリー要求を指定していない場合、Kubernetesは制限に一致するメモリー要求を自動的に割り当てます。同様に、コンテナが自身のCPU制限を指定しているが、CPU要求を指定していない場合、Kubernetesは制限に一致するCPU要求を自動的に割り当てます。 {{< /note >}} -## リソースタイプ +## リソースタイプ {#resource-types} *CPU*と*メモリー*はいずれも*リソースタイプ*です。リソースタイプには基本単位があります。 CPUは計算処理を表し、[Kubernetes CPUs](#meaning-of-cpu)の単位で指定されます。 @@ -54,7 +54,7 @@ CPUとメモリーは、まとめて*コンピュートリソース*または単 それらは[API resources](/ja/docs/concepts/overview/kubernetes-api/)とは異なります。 Podや[Services](/ja/docs/concepts/services-networking/service/)などのAPIリソースは、Kubernetes APIサーバーを介して読み取りおよび変更できるオブジェクトです。 -## Podとコンテナのリソース要求と制限 +## Podとコンテナのリソース要求と制限 {#resource-requests-and-limits-of-pod-and-container} Podの各コンテナは、次の1つ以上を指定できます。 @@ -68,9 +68,9 @@ Podの各コンテナは、次の1つ以上を指定できます。 要求と制限はそれぞれのコンテナでのみ指定できますが、このPodリソースの要求と制限の関係性について理解すると便利です。 特定のリソースタイプの*Podリソース要求/制限*は、Pod内の各コンテナに対するそのタイプのリソース要求/制限の合計です。 -## Kubernetesにおけるリソースの単位 +## Kubernetesにおけるリソースの単位 {#resource-units-in-kubernetes} -### CPUの意味 +### CPUの意味 {#meaning-of-cpu} CPUリソースの制限と要求は、*cpu*単位で測定されます。 Kuberenetesにおける1つのCPUは、クラウドプロバイダーの**1 vCPU/コア**およびベアメタルのインテルプロセッサーの**1 ハイパースレッド**に相当します。 @@ -85,7 +85,7 @@ Kuberenetesにおける1つのCPUは、クラウドプロバイダーの**1 vCPU CPUは常に相対量としてではなく、絶対量として要求されます。 0.1は、シングルコア、デュアルコア、あるいは48コアマシンのどのCPUに対してでも、同一の量を要求します。 -### メモリーの意味 +### メモリーの意味 {#meaning-of-memory} `メモリー`の制限と要求はバイト単位で測定されます。 E、P、T、G、M、Kのいずれかのサフィックスを使用して、メモリーを整数または固定小数点数として表すことができます。 @@ -128,7 +128,7 @@ spec: cpu: "500m" ``` -## リソース要求を含むPodがどのようにスケジュールされるか +## リソース要求を含むPodがどのようにスケジュールされるか {#how-pods-with-resource-requests-are-scheduled} Podを作成すると、KubernetesスケジューラーはPodを実行するNodeを選択します。 各Nodeには、リソースタイプごとに最大容量があります。それは、Podに提供できるCPUとメモリの量です。 @@ -136,7 +136,7 @@ Podを作成すると、KubernetesスケジューラーはPodを実行するNode Node上の実際のメモリーまたはCPUリソースの使用率は非常に低いですが、容量チェックが失敗した場合、スケジューラーはNodeにPodを配置しないことに注意してください。 これにより、例えば日々のリソース要求のピーク時など、リソース利用が増加したときに、Nodeのリソース不足から保護されます。 -## リソース制限のあるPodがどのように実行されるか +## リソース制限のあるPodがどのように実行されるか {#how-pods-with-resource-limits-are-run} kubeletがPodのコンテナを開始すると、CPUとメモリーの制限がコンテナランタイムに渡されます。 @@ -166,13 +166,13 @@ Dockerを使用する場合: コンテナをスケジュールできないか、リソース制限が原因で強制終了されているかどうかを確認するには、[トラブルシューティング](#troubleshooting)のセクションを参照してください。 -### コンピュートリソースとメモリーリソースの使用量を監視する +### コンピュートリソースとメモリーリソースの使用量を監視する {#monitoring-compute-memory-resource-usage} Podのリソース使用量は、Podのステータスの一部として報告されます。 オプションの[監視ツール](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)がクラスターにおいて利用可能な場合、Podのリソース使用量は[メトリクスAPI](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)から直接、もしくは監視ツールから取得できます。 -## ローカルのエフェメラルストレージ +## ローカルのエフェメラルストレージ {#local-ephemeral-storage} {{< feature-state for_k8s_version="v1.10" state="beta" >}} @@ -192,7 +192,7 @@ Nodeに障害が発生すると、そのエフェメラルストレージ内の ベータ版の機能として、Kubernetesでは、Podが消費するローカルのエフェメラルストレージの量を追跡、予約、制限することができます。 -### ローカルエフェメラルストレージの設定 +### 
ローカルエフェメラルストレージの設定 {#configurations-for-local-ephemeral-storage} Kubernetesは、Node上のローカルエフェメラルストレージを構成する2つの方法をサポートしています。 {{< tabs name="local_storage_configurations" >}} @@ -235,7 +235,7 @@ kubeletは、ローカルストレージの使用量を測定できます。 kubeletは、`tmpfs`のemptyDirボリュームをローカルのエフェメラルストレージとしてではなく、コンテナメモリーとして追跡します。 {{< /note >}} -### ローカルのエフェメラルストレージの要求と制限設定 +### ローカルのエフェメラルストレージの要求と制限設定 {#setting-requests-and-limits-for-local-ephemeral-storage} ローカルのエフェメラルストレージを管理するためには _ephemeral-storage_ パラメーターを利用することができます。 Podの各コンテナは、次の1つ以上を指定できます。 @@ -288,7 +288,7 @@ spec: emptyDir: {} ``` -### エフェメラルストレージを要求するPodのスケジュール方法 +### エフェメラルストレージを要求するPodのスケジュール方法 {#how-pods-with-ephemeral-storage-requests-are-scheduled} Podを作成すると、KubernetesスケジューラーはPodを実行するNodeを選択します。 各Nodeには、Podに提供できるローカルのエフェメラルストレージの上限があります。 @@ -375,7 +375,7 @@ Kubernetesが使用しないようにする必要があります。 {{% /tab %}} {{< /tabs >}} -## 拡張リソース +## 拡張リソース {#extended-resources} 拡張リソースは`kubernetes.io`ドメインの外で完全に修飾されたリソース名です。 これにより、クラスタオペレータはKubernetesに組み込まれていないリソースをアドバタイズし、ユーザはそれを利用することができるようになります。 @@ -384,16 +384,16 @@ Kubernetesが使用しないようにする必要があります。 第一に、クラスタオペレーターは拡張リソースをアドバタイズする必要があります。 第二に、ユーザーはPodで拡張リソースを要求する必要があります。 -### 拡張リソースの管理 +### 拡張リソースの管理 {#managing-extended-resources} -#### Nodeレベルの拡張リソース +#### Nodeレベルの拡張リソース {#node-level-extended-resources} Nodeレベルの拡張リソースはNodeに関連付けられています。 -##### デバイスプラグイン管理のリソース +##### デバイスプラグイン管理のリソース {#device-plugin-managed-resources} 各Nodeにデバイスプラグインで管理されているリソースをアドバタイズする方法については、[デバイスプラグイン](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)を参照してください。 -##### その他のリソース +##### その他のリソース {#other-resources} 新しいNodeレベルの拡張リソースをアドバタイズするには、クラスタオペレータはAPIサーバに`PATCH`HTTPリクエストを送信し、クラスタ内のNodeの`status.capacity`に利用可能な量を指定します。 この操作の後、ノードの`status.capacity`には新しいリソースが含まれます。 `status.allocatable`フィールドは、kubeletによって非同期的に新しいリソースで自動的に更新されます。 @@ -416,7 +416,7 @@ JSON-Patchの操作パス値は、JSON-Pointerとして解釈されます。 詳細については、[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3)を参照してください。 {{< /note >}} -#### クラスターレベルの拡張リソース +#### クラスターレベルの拡張リソース {#cluster-level-extended-resources} クラスターレベルの拡張リソースはノードに関連付けられていません。 これらは通常、リソース消費とリソースクォータを処理するスケジューラー拡張機能によって管理されます。 @@ -449,7 +449,7 @@ JSON-Patchの操作パス値は、JSON-Pointerとして解釈されます。 } ``` -### 拡張リソースの消費 +### 拡張リソースの消費 {#consuming-extended-resources} ユーザーは、CPUやメモリのようにPodのスペックで拡張されたリソースを消費できます。 利用可能な量以上のリソースが同時にPodに割り当てられないように、スケジューラーがリソースアカウンティングを行います。 @@ -493,9 +493,9 @@ spec: example.com/foo: 1 ``` -## トラブルシューティング +## トラブルシューティング {#troubleshooting} -### failedSchedulingイベントメッセージが表示され、Podが保留中になる +### failedSchedulingイベントメッセージが表示され、Podが保留中になる {#my-pods-are-pending-with-event-message-failedscheduling} スケジューラーがPodが収容されるNodeを見つけられない場合、場所が見つかるまでPodはスケジュールされないままになります。 スケジューラーがPodの場所を見つけられないたびに、次のようなイベントが生成されます。 @@ -562,7 +562,7 @@ Allocated resources: [リソースクォータ](/docs/concepts/policy/resource-quotas/)機能は、消費できるリソースの総量を制限するように設定することができます。 名前空間と組み合わせて使用すると、1つのチームがすべてのリソースを占有するのを防ぐことができます。 -### コンテナが終了した +### コンテナが終了した {#my-container-is-terminated} コンテナはリソース不足のため、終了する可能性があります。 コンテナがリソース制限に達したために強制終了されているかどうかを確認するには、対象のPodで`kubectl describe pod`を呼び出します。 diff --git a/content/ja/docs/concepts/extend-kubernetes/operator.md b/content/ja/docs/concepts/extend-kubernetes/operator.md index c45fe88f63d48..b7fae76b35c3a 100644 --- a/content/ja/docs/concepts/extend-kubernetes/operator.md +++ b/content/ja/docs/concepts/extend-kubernetes/operator.md @@ -89,6 +89,7 @@ kubectl edit SampleDB/example-database # 手動でいくつかの設定を変更 * ユースケースに合わせた、既製のオペレーターを[OperatorHub.io](https://operatorhub.io/)から見つけます * 自前のオペレーターを書くために既存のツールを使います、例: * [Charmed Operator 
Framework](https://juju.is/) + * [Java Operator SDK](https://github.com/java-operator-sdk/java-operator-sdk) * [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework) * [KUDO](https://kudo.dev/)(Kubernetes Universal Declarative Operator)を使います * [kubebuilder](https://book.kubebuilder.io/)を使います diff --git a/content/ja/docs/concepts/scheduling-eviction/api-eviction.md b/content/ja/docs/concepts/scheduling-eviction/api-eviction.md new file mode 100644 index 0000000000000..5092c96b19b45 --- /dev/null +++ b/content/ja/docs/concepts/scheduling-eviction/api-eviction.md @@ -0,0 +1,92 @@ +--- +title: APIを起点とした退避 +content_type: concept +weight: 70 +--- + +{{< glossary_definition term_id="api-eviction" length="short" >}}
      + +Eviction APIを直接呼び出すか、`kubectl drain`コマンドのように{{}}のクライアントを使って退避を要求することが可能です。これにより、`Eviction`オブジェクトを作成し、APIサーバーにPodを終了させます。 + +APIを起点とした退避は[`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)と[`terminationGracePeriodSeconds`](/ja/docs/concepts/workloads/pods/pod-lifecycle#pod-termination)の設定を優先します。 + +APIを使用してPodのEvictionオブジェクトを作成することは、Podに対してポリシー制御された[`DELETE`操作](/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod)を実行することに似ています。 + +## Eviction APIの実行 {#calling-the-eviction-api} + +Kubernetes APIへアクセスして`Eviction`オブジェクトを作るために[Kubernetesのプログラミング言語のクライアント](/docs/tasks/administer-cluster/access-cluster-api/#programmatic-access-to-the-api)を使用できます。 +そのためには、次の例のようなデータをPOSTすることで操作を試みることができます。 + +{{< tabs name="Eviction_example" >}} +{{% tab name="policy/v1" %}} +{{< note >}} +`policy/v1`においてEvictionはv1.22以上で利用可能です。それ以前のリリースでは、`policy/v1beta1`を使用してください。 +{{< /note >}} + +```json +{ + "apiVersion": "policy/v1", + "kind": "Eviction", + "metadata": { + "name": "quux", + "namespace": "default" + } +} +``` +{{% /tab %}} +{{% tab name="policy/v1beta1" %}} +{{< note >}} +v1.22で非推奨となり、`policy/v1`が採用されました。 +{{< /note >}} + +```json +{ + "apiVersion": "policy/v1beta1", + "kind": "Eviction", + "metadata": { + "name": "quux", + "namespace": "default" + } +} +``` +{{% /tab %}} +{{< /tabs >}} + +また、以下の例のように`curl`や`wget`を使ってAPIにアクセスすることで、操作を試みることもできます。 + +```bash +curl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json +``` + +## APIを起点とした退避の仕組み {#how-api-initiated-eviction-works} + +APIを使用して退去を要求した場合、APIサーバーはアドミッションチェックを行い、以下のいずれかを返します。 + +* `200 OK`:この場合、退去が許可されると`Eviction`サブリソースが作成され、PodのURLに`DELETE`リクエストを送るのと同じように、Podが削除されます。 +* `429 Too Many Requests`:{{}}の設定により、現在退去が許可されていないことを示します。しばらく時間を空けてみてください。また、APIのレート制限のため、このようなレスポンスが表示されることもあります。 +* `500 Internal Server Error`:複数のPodDisruptionBudgetが同じPodを参照している場合など、設定に誤りがあり退去が許可されないことを示します。 + +退去させたいPodがPodDisruptionBudgetを持つワークロードの一部でない場合、APIサーバーは常に`200 OK`を返して退去を許可します。 + +APIサーバーが退去を許可した場合、以下の流れでPodが削除されます。 + +1. APIサーバーの`Pod`リソースの削除タイムスタンプが更新され、APIサーバーは`Pod`リソースが終了したと見なします。また`Pod`リソースは、設定された猶予期間が設けられます。 +1. ローカルのPodが動作しているNodeの{{}}は、`Pod`リソースが終了するようにマークされていることに気付き、Podの適切なシャットダウンを開始します。 +1. kubeletがPodをシャットダウンしている間、コントロールプレーンは{{}}オブジェクトからPodを削除します。その結果、コントローラーはPodを有効なオブジェクトと見なさないようになります。 +1. Podの猶予期間が終了すると、kubeletはローカルPodを強制的に終了します。 +1. kubeletはAPIサーバーに`Pod`リソースを削除するように指示します。 +1. 
APIサーバーは`Pod`リソースを削除します。 + +## トラブルシューティング {#troubleshooting-stuck-evictions} + +場合によっては、アプリケーションが壊れた状態になり、対処しない限りEviction APIが`429`または`500`レスポンスを返すだけとなることがあります。例えば、ReplicaSetがアプリケーション用のPodを作成しても、新しいPodが`Ready`状態にならない場合などです。また、最後に退去したPodの終了猶予期間が長い場合にも、この事象が見られます。 + +退去が進まない場合は、以下の解決策を試してみてください。 + +* 問題を引き起こしている自動化された操作を中止または一時停止し、操作を再開する前に、スタックしているアプリケーションを調査を行ってください。 +* しばらく待ってから、Eviction APIを使用する代わりに、クラスターのコントロールプレーンから直接Podを削除してください。 + +## {{% heading "whatsnext" %}} +* [Pod Disruption Budget](/docs/tasks/run-application/configure-pdb/)でアプリケーションを保護する方法について学ぶ +* [Node不足による退避](/docs/concepts/scheduling-eviction/node-pressure-eviction/)について学ぶ +* [Podの優先度とプリエンプション](/docs/concepts/scheduling-eviction/pod-priority-preemption/)について学ぶ diff --git a/content/ja/docs/concepts/security/overview.md b/content/ja/docs/concepts/security/overview.md index 3cb67943f63de..f1bb1fed433cb 100644 --- a/content/ja/docs/concepts/security/overview.md +++ b/content/ja/docs/concepts/security/overview.md @@ -125,7 +125,7 @@ TLS経由のアクセスのみ | コードがTCP通信を必要とする場合 関連するKubernetesセキュリティについて学びます。 * [Podセキュリティの標準](/ja/docs/concepts/security/pod-security-standards/) -* [Podのネットワークポリシー]](/ja/docs/concepts/services-networking/network-policies/) +* [Podのネットワークポリシー](/ja/docs/concepts/services-networking/network-policies/) * [Kubernetes APIへのアクセスを制御する](/docs/concepts/security/controlling-access) * [クラスターの保護](/docs/tasks/administer-cluster/securing-a-cluster/) * コントロールプレーンとの[通信時のデータ暗号化](/docs/tasks/tls/managing-tls-in-a-cluster/) diff --git a/content/ja/docs/concepts/workloads/controllers/daemonset.md b/content/ja/docs/concepts/workloads/controllers/daemonset.md index 92ab055a5994a..42647228e6180 100644 --- a/content/ja/docs/concepts/workloads/controllers/daemonset.md +++ b/content/ja/docs/concepts/workloads/controllers/daemonset.md @@ -72,8 +72,7 @@ Kubernetes1.8のように、ユーザーは`.spec.template`のラベルにマッ ### 選択したNode上でPodを稼働させる -もしユーザーが`.spec.template.spec.nodeSelector`を指定したとき、DaemonSetコントローラーは、その[node -selector](/ja/docs/concepts/scheduling-eviction/assign-pod-node/)にマッチするPodをNode上に作成します。同様に、もし`.spec.template.spec.affinity`を指定したとき、DaemonSetコントローラーは[node affinity](/ja/docs/concepts/scheduling-eviction/assign-pod-node/)マッチするPodをNode上に作成します。 +もしユーザーが`.spec.template.spec.nodeSelector`を指定したとき、DaemonSetコントローラーは、その[node selector](/ja/docs/concepts/scheduling-eviction/assign-pod-node/)にマッチするNode上にPodを作成します。同様に、もし`.spec.template.spec.affinity`を指定したとき、DaemonSetコントローラーは[node affinity](/ja/docs/concepts/scheduling-eviction/assign-pod-node/)にマッチするNode上にPodを作成します。 もしユーザーがどちらも指定しないとき、DaemonSetコントローラーは全てのNode上にPodを作成します。 ## Daemon Podがどのようにスケジューリングされるか diff --git a/content/ja/docs/concepts/workloads/controllers/deployment.md b/content/ja/docs/concepts/workloads/controllers/deployment.md index c106a6d71e8d9..b62151fc54583 100644 --- a/content/ja/docs/concepts/workloads/controllers/deployment.md +++ b/content/ja/docs/concepts/workloads/controllers/deployment.md @@ -68,11 +68,6 @@ Deploymentによって作成されたReplicaSetを管理しないでください kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml ``` - {{< note >}} - 実行したコマンドを`kubernetes.io/change-cause`というアノテーションに記録するために`--record`フラグを指定できます。 - これは将来的な問題の調査のために有効です。例えば、各Deploymentのリビジョンにおいて実行されたコマンドを見るときに便利です。 - {{< /note >}} - 2. Deploymentが作成されたことを確認するために、`kubectl get deployments`を実行してください。 @@ -158,12 +153,12 @@ Deploymentを更新するには以下のステップに従ってください。 1. 
nginxのPodで、`nginx:1.14.2`イメージの代わりに`nginx:1.16.1`を使うように更新します。 ```shell - kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 ``` または単に次のコマンドを使用します。 ```shell - kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record + kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 ``` 実行結果は以下のとおりです。 @@ -235,7 +230,7 @@ Deploymentを更新するには以下のステップに従ってください。 次にPodを更新させたいときは、DeploymentのPodテンプレートを再度更新するだけです。 - Deploymentは、Podが更新されている間に特定の数のPodのみ停止状態になることを保証します。デフォルトでは、目標とするPod数の少なくとも25%が停止状態になることを保証します(25% max unavailable)。 + Deploymentは、Podが更新されている間に特定の数のPodのみ停止状態になることを保証します。デフォルトでは、目標とするPod数の少なくとも75%が稼働状態であることを保証します(25% max unavailable)。 また、DeploymentはPodが更新されている間に、目標とするPod数を特定の数まで超えてPodを稼働させることを保証します。デフォルトでは、目標とするPod数に対して最大でも125%を超えてPodを稼働させることを保証します(25% max surge)。 @@ -317,7 +312,7 @@ Deploymentのリビジョンは、Deploymentのロールアウトがトリガー * `nginx:1.16.1`の代わりに`nginx:1.161`というイメージに更新して、Deploymentの更新中にタイプミスをしたと仮定します。 ```shell - kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true + kubectl set image deployment/nginx-deployment nginx=nginx:1.161 ``` 実行結果は以下のとおりです。 @@ -431,15 +426,14 @@ Deploymentのリビジョンは、Deploymentのロールアウトがトリガー ``` deployments "nginx-deployment" REVISION CHANGE-CAUSE - 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true - 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true - 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true + 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml + 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 + 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 ``` `CHANGE-CAUSE`はリビジョンの作成時にDeploymentの`kubernetes.io/change-cause`アノテーションからリビジョンにコピーされます。以下の方法により`CHANGE-CAUSE`メッセージを指定できます。 * `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"`の実行によりアノテーションを追加します。 - * リソースの変更時に`kubectl`コマンドの内容を記録するために`--record`フラグを追加します。 * リソースのマニフェストを手動で編集します。 2. 
各リビジョンの詳細を確認するためには以下のコマンドを実行してください。 @@ -452,7 +446,7 @@ Deploymentのリビジョンは、Deploymentのロールアウトがトリガー deployments "nginx-deployment" revision 2 Labels: app=nginx pod-template-hash=1159050644 - Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true + Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 Containers: nginx: Image: nginx:1.16.1 @@ -512,7 +506,7 @@ Deploymentのリビジョンは、Deploymentのロールアウトがトリガー CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 Labels: app=nginx Annotations: deployment.kubernetes.io/revision=4 - kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true + kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 Selector: app=nginx Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable StrategyType: RollingUpdate diff --git a/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md b/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md index beb92b3b882cd..b99d193308a50 100644 --- a/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md @@ -42,7 +42,7 @@ weight: 80 エフェメラルコンテナを利用する場合には、他のコンテナ内のプロセスにアクセスできるように、[プロセス名前空間の共有](/ja/docs/tasks/configure-pod-container/share-process-namespace/)を有効にすると便利です。 -エフェメラルコンテナを利用してトラブルシューティングを行う例については、[デバッグ用のエフェメラルコンテナを使用してデバッグする](/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container)を参照してください。 +エフェメラルコンテナを利用してトラブルシューティングを行う例については、[デバッグ用のエフェメラルコンテナを使用してデバッグする](/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container)を参照してください。 ## Ephemeral containers API diff --git a/content/ja/docs/contribute/review/for-approvers.md b/content/ja/docs/contribute/review/for-approvers.md index 3a96595ea82a5..ea17bba99ecf1 100644 --- a/content/ja/docs/contribute/review/for-approvers.md +++ b/content/ja/docs/contribute/review/for-approvers.md @@ -59,7 +59,7 @@ reviewerとapproverが最もよく使うprowコマンドには、以下のよう {{< table caption="Prow commands for reviewing" >}} Prowコマンド | Roleの制限 | 説明 :------------|:------------------|:----------- -`/lgtm` | 誰でも。ただし、オートメーションがトリガされるのはReviewerまたはApproverが使用したときのみ。 | PRのレビューが完了し、変更に納得したことを知らせる。 +`/lgtm` | Organizationメンバー | PRのレビューが完了し、変更に納得したことを知らせる。 `/approve` | Approver | PRをマージすることを承認する。 `/assign` | ReviewerまたはApprover | PRのレビューまたは承認するひとを割り当てる。 `/close` | ReviewerまたはApprover | issueまたはPRをcloseする。 @@ -93,7 +93,7 @@ PRで利用できるすべてのコマンド一覧を確認するには、[Prow `priority/important-longterm` | 6ヶ月以内に取り組む。 `priority/backlog` | 無期限に延期可能。リソースに余裕がある時に取り組む。 `priority/awaiting-more-evidence` | よいissueの可能性があるissueを見失わないようにするためのプレースホルダー。 - `help`または`good first issue` | KubernetesまたはSIG Docsでほとんど経験がない人に適したissue。より詳しい情報は、[Help WantedとGood First Issueラベル](https://github.com/kubernetes/community/blob/master/contributors/guide/help-wanted.md)を読んでください。 + `help`または`good first issue` | KubernetesまたはSIG Docsでほとんど経験がない人に適したissue。より詳しい情報は、[Help WantedとGood First Issueラベル](https://kubernetes.dev/docs/guide/help-wanted/)を読んでください。 {{< /table >}} あなたの裁量で、issueのオーナーシップを取り、issueに対するPRを提出してください(簡単なissueや、自分がすでに行った作業に関連するissueである場合は特に)。 diff --git a/content/ja/docs/reference/glossary/api-eviction.md b/content/ja/docs/reference/glossary/api-eviction.md new file mode 100644 index 0000000000000..25677032b9ad8 --- /dev/null +++ b/content/ja/docs/reference/glossary/api-eviction.md @@ -0,0 +1,23 @@ 
+--- +title: APIを起点とした退避 +id: api-eviction +date: 2021-04-27 +full_link: /ja/docs/concepts/scheduling-eviction/api-eviction/ +short_description: > + APIを起点とした退避は、Eviction APIを使用してEvictionオブジェクトを作成し、Podの正常終了を起動させるプロセスです。 +aka: +tags: +- operation +--- +APIを起点とした退避は、[Eviction API](/docs/reference/generated/kubernetes-api/{{}}/#create-eviction-pod-v1-core)を使用して退避オブジェクトを作成し、Podの正常終了を起動させるプロセスです。 + + + + +`kubectl drain`コマンドのようなkube-apiserverのクライアントを使用し、Eviction APIを直接呼び出すことで、退避を要求することができます。`Eviction`オブジェクトが生成された時、APIサーバーは対象のPodを終了させます。 + +APIを起点とした退避は[`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)と[`terminationGracePeriodSeconds`](/ja/docs/concepts/workloads/pods/pod-lifecycle#pod-termination)の設定を優先します。 + +APIを起点とした退避は、[Node不足による退避](/docs/concepts/scheduling-eviction/eviction/#kubelet-eviction)とは異なります。 + +* 詳しくは[APIを起点とした退避](/ja/docs/concepts/scheduling-eviction/api-eviction/)をご覧ください。 diff --git a/content/ja/docs/reference/glossary/cloud-controller-manager.md b/content/ja/docs/reference/glossary/cloud-controller-manager.md index 6e9c9104daa8f..d0e705c7c18aa 100644 --- a/content/ja/docs/reference/glossary/cloud-controller-manager.md +++ b/content/ja/docs/reference/glossary/cloud-controller-manager.md @@ -4,7 +4,7 @@ id: cloud-controller-manager date: 2018-04-12 full_link: /ja/docs/concepts/architecture/cloud-controller/ short_description: > - サードパーティクラウドプロバイダーにKubernetewを結合するコントロールプレーンコンポーネント + サードパーティクラウドプロバイダーにKubernetesを結合するコントロールプレーンコンポーネント aka: tags: - core-object diff --git a/content/ja/docs/setup/best-practices/certificates.md b/content/ja/docs/setup/best-practices/certificates.md index b1e5448be7a39..3ec8f9c0042ae 100644 --- a/content/ja/docs/setup/best-practices/certificates.md +++ b/content/ja/docs/setup/best-practices/certificates.md @@ -74,7 +74,7 @@ CAの秘密鍵をクラスターにコピーしたくない場合、自身で全 [1]: クラスターに接続するIPおよびDNS名( [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)を使用する場合と同様、ロードバランサーのIPおよびDNS名、`kubernetes`、`kubernetes.default`、`kubernetes.default.svc`、`kubernetes.default.svc.cluster`、`kubernetes.default.svc.cluster.local`) -`kind`は下記の[x509の鍵用途](https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage)のタイプにマッピングされます: +`kind`は下記の[x509の鍵用途](https://pkg.go.dev/k8s.io/api/certificates/v1beta1#KeyUsage)のタイプにマッピングされます: | 種類 | 鍵の用途     | |--------|---------------------------------------------------------------------------------| diff --git a/content/ja/docs/setup/release/version-skew-policy.md b/content/ja/docs/setup/release/version-skew-policy.md index eb0764bcc3a4d..3e58503462ea9 100644 --- a/content/ja/docs/setup/release/version-skew-policy.md +++ b/content/ja/docs/setup/release/version-skew-policy.md @@ -16,7 +16,7 @@ Kubernetesのバージョンは**x.y.z**の形式で表現され、**x**はメ Kubernetesプロジェクトでは、最新の3つのマイナーリリースについてリリースブランチを管理しています ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}})。 -セキュリティフィックスを含む適用可能な修正は、重大度や実行可能性によってはこれら3つのリリースブランチにバックポートされることもあります。パッチリリースは、これらのブランチから [定期的に](https://git.k8s.io/sig-release/releases/patch-releases.md#cadence) 切り出され、必要に応じて追加の緊急リリースも行われます。 +セキュリティフィックスを含む適用可能な修正は、重大度や実行可能性によってはこれら3つのリリースブランチにバックポートされることもあります。パッチリリースは、これらのブランチから [定期的に](https://kubernetes.io/releases/patch-releases/#cadence) 切り出され、必要に応じて追加の緊急リリースも行われます。 [リリースマネージャー](https://git.k8s.io/sig-release/release-managers.md)グループがこれを決定しています。 diff --git a/content/ja/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/ja/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index bb06ebe5db49f..1bac7b00183fd 100644 --- 
a/content/ja/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/ja/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -158,7 +158,7 @@ Kubernetesの認証局は、そのままでは機能しません。 ビルトインサイナーを有効にするには、`--cluster-signing-cert-file`と`--cluster-signing-key-file`フラグを渡す必要があります。 -新しいクラスターを作成する場合は、kubeadm[設定ファイル](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3)を使用します。 +新しいクラスターを作成する場合は、kubeadm[設定ファイル](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3)を使用します。 ```yaml apiVersion: kubeadm.k8s.io/v1beta3 diff --git a/content/ja/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md b/content/ja/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md new file mode 100644 index 0000000000000..8266904358b1f --- /dev/null +++ b/content/ja/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md @@ -0,0 +1,197 @@ +--- +title: ネームスペースのデフォルトのメモリー要求と制限を設定する +content_type: task +weight: 10 +description: >- + ネームスペースのデフォルトのメモリーリソース制限を定義して、そのネームスペース内のすべての新しいPodにメモリーリソース制限が設定されるようにします。 +--- + + + +このページでは、{{< glossary_tooltip text="ネームスペース" term_id="namespace" >}}のデフォルトのメモリー要求と制限を設定する方法を説明します。 + +Kubernetesクラスターはネームスペースに分割することができます。デフォルトのメモリー[制限](/ja/docs/concepts/configuration/manage-resources-containers/#requests-and-limits)を持つネームスペースがあり、独自のメモリー制限を指定しないコンテナでPodを作成しようとすると、{{< glossary_tooltip text="コントロールプレーン" term_id="control-plane" >}}はそのコンテナにデフォルトのメモリー制限を割り当てます。 + +Kubernetesは、このトピックで後ほど説明する特定の条件下で、デフォルトのメモリー要求を割り当てます。 + + + +## {{% heading "prerequisites" %}} + + +{{< include "task-tutorial-prereqs.md" >}} + +クラスターにネームスペースを作成するには、アクセス権が必要です。 + +クラスターの各ノードには、最低でも2GiBのメモリーが必要です。 + + + + + +## ネームスペースの作成 + +この演習で作成したリソースがクラスターの他の部分から分離されるように、ネームスペースを作成します。 + +```shell +kubectl create namespace default-mem-example +``` + +## LimitRangeとPodの作成 + +以下は、{{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}のマニフェストの例です。このマニフェストでは、デフォルトのメモリー要求とデフォルトのメモリー制限を指定しています。 + +{{< codenew file="admin/resource/memory-defaults.yaml" >}} + +default-mem-exampleネームスペースにLimitRangeを作成します: + +```shell +kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example +``` + +default-mem-exampleネームスペースでPodを作成し、そのPod内のコンテナがメモリー要求とメモリー制限の値を独自に指定しない場合、{{< glossary_tooltip text="コントロールプレーン" term_id="control-plane" >}}はデフォルト値のメモリー要求256MiBとメモリー制限512MiBを適用します。 + +以下は、コンテナを1つ持つPodのマニフェストの例です。コンテナは、メモリー要求とメモリー制限を指定していません。 + +{{< codenew file="admin/resource/memory-defaults-pod.yaml" >}} + +Podを作成します: + +```shell +kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod.yaml --namespace=default-mem-example +``` + +Podの詳細情報を表示します: + +```shell +kubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example +``` + +この出力は、Podのコンテナのメモリー要求が256MiBで、メモリー制限が512MiBであることを示しています。 +これらはLimitRangeで指定されたデフォルト値です。 + +```shell +containers: +- image: nginx + imagePullPolicy: Always + name: default-mem-demo-ctr + resources: + limits: + memory: 512Mi + requests: + memory: 256Mi +``` + +Podを削除します: + +```shell +kubectl delete pod default-mem-demo --namespace=default-mem-example +``` + +## コンテナの制限を指定し、要求を指定しない場合 + +以下は1つのコンテナを持つPodのマニフェストです。コンテナはメモリー制限を指定しますが、メモリー要求は指定しません。 + +{{< codenew file="admin/resource/memory-defaults-pod-2.yaml" >}} + +Podを作成します: + + +```shell +kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-2.yaml --namespace=default-mem-example +``` + +Podの詳細情報を表示します: + +```shell +kubectl get pod default-mem-demo-2 --output=yaml 
--namespace=default-mem-example +``` + +この出力は、コンテナのメモリー要求がそのメモリー制限に一致するように設定されていることを示しています。 +コンテナにはデフォルトのメモリー要求値である256Miが割り当てられていないことに注意してください。 + +``` +resources: + limits: + memory: 1Gi + requests: + memory: 1Gi +``` + +## コンテナの要求を指定し、制限を指定しない場合 + +1つのコンテナを持つPodのマニフェストです。コンテナはメモリー要求を指定しますが、メモリー制限は指定しません。 + +{{< codenew file="admin/resource/memory-defaults-pod-3.yaml" >}} + +Podを作成します: + +```shell +kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-3.yaml --namespace=default-mem-example +``` + +Podの詳細情報を表示します: + +```shell +kubectl get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example +``` + +この出力は、コンテナのメモリー要求が、コンテナのマニフェストで指定された値に設定されていることを示しています。 +コンテナは512MiB以下のメモリーを使用するように制限されていて、これはネームスペースのデフォルトのメモリー制限と一致します。 + +``` +resources: + limits: + memory: 512Mi + requests: + memory: 128Mi +``` + +## デフォルトのメモリー制限と要求の動機 + +ネームスペースにメモリー{{< glossary_tooltip text="リソースクォータ" term_id="resource-quota" >}}が設定されている場合、メモリー制限のデフォルト値を設定しておくと便利です。 + +以下はリソースクォータがネームスペースに課す制限のうちの2つです。 + +* ネームスペースで実行されるすべてのPodについて、Podとその各コンテナにメモリー制限を設ける必要があります(Pod内のすべてのコンテナに対してメモリー制限を指定すると、Kubernetesはそのコンテナの制限を合計することでPodレベルのメモリー制限を推測することができます)。 +* メモリー制限は、当該Podがスケジュールされているノードのリソース予約を適用します。ネームスペース内のすべてのPodに対して予約されるメモリーの総量は、指定された制限を超えてはなりません。 +* また、ネームスペース内のすべてのPodが実際に使用するメモリーの総量も、指定された制限を超えてはなりません。 + +LimitRangeの追加時: + +コンテナを含む、そのネームスペース内のいずれかのPodが独自のメモリー制限を指定していない場合、コントロールプレーンはそのコンテナにデフォルトのメモリー制限を適用し、メモリーのResourceQuotaによって制限されているネームスペース内でPodを実行できるようにします。 + +## クリーンアップ + +ネームスペースを削除します: + +```shell +kubectl delete namespace default-mem-example +``` + + + +## {{% heading "whatsnext" %}} + + +### クラスター管理者向け + +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) + +* [Namespaceに対する最小および最大メモリー制約の構成](ja/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) + +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) + +* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) + +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) + +* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) + +### アプリケーション開発者向け + +* [コンテナおよびPodへのメモリーリソースの割り当て](ja/docs/tasks/configure-pod-container/assign-memory-resource/) + +* [コンテナおよびPodへのCPUリソースの割り当て](ja/docs/tasks/configure-pod-container/assign-cpu-resource/) + +* [PodにQuality of Serviceを設定する](ja/docs/tasks/configure-pod-container/quality-service-pod/) diff --git a/content/ja/docs/tasks/administer-cluster/nodelocaldns.md b/content/ja/docs/tasks/administer-cluster/nodelocaldns.md index 4a5e59a2cae91..e7f97ea7f678c 100644 --- a/content/ja/docs/tasks/administer-cluster/nodelocaldns.md +++ b/content/ja/docs/tasks/administer-cluster/nodelocaldns.md @@ -45,7 +45,7 @@ NodeLocal DNSキャッシュは、クラスターノード上でDNSキャッシ {{< figure src="/images/docs/nodelocaldns.svg" alt="NodeLocal DNSCache flow" title="Nodelocal DNSCacheのフロー" caption="この図は、NodeLocal DNSキャッシュがDNSクエリーをどう扱うかを表したものです。" >}} ## 設定 -{{< note >}} NodeLocal DNSキャッシュ用のローカルに待ち受けているIPアドレスは、169.254.20.0/16の範囲のIPか、既存のIPと衝突しないことが保証されている他のIPとなります。このドキュメントでは例として169.254.10を使用します。 +{{< note >}} NodeLocal DNSキャッシュのローカルリッスン用のIPアドレスは、クラスタ内の既存のIPと衝突しないことが保証できるものであれば、どのようなアドレスでもかまいません。例えば、IPv4のリンクローカル範囲169.254.0.0/16やIPv6のユニークローカルアドレス範囲fd00::/8から、ローカルスコープのアドレスを使用することが推奨されています。 {{< /note >}} 
この機能は、下記の手順により有効化できます。 diff --git a/content/ja/docs/tasks/administer-cluster/securing-a-cluster.md b/content/ja/docs/tasks/administer-cluster/securing-a-cluster.md index d1a852efa2b63..221090019397a 100644 --- a/content/ja/docs/tasks/administer-cluster/securing-a-cluster.md +++ b/content/ja/docs/tasks/administer-cluster/securing-a-cluster.md @@ -154,7 +154,7 @@ API用のetcdバックエンドへの書き込みアクセスは、クラスタ ### 監査ログの有効 -[audit logger](/docs/tasks/debug-application-cluster/audit/)はベータ版の機能で、APIによって行われたアクションを記録し、侵害があった場合に後から分析できるようにするものです。 +[audit logger](/docs/tasks/debug/debug-cluster/audit/)はベータ版の機能で、APIによって行われたアクションを記録し、侵害があった場合に後から分析できるようにするものです。 監査ログを有効にして、ログファイルを安全なサーバーにアーカイブすることをお勧めします。 diff --git a/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 08e48b9fa7e87..2ac539bf06c34 100644 --- a/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -51,7 +51,7 @@ Probeの動作としては、kubeletは`cat /tmp/healthy`を対象のコンテ このコンテナは、起動すると次のコマンドを実行します: ```shell -/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600" +/bin/sh -c "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600" ``` コンテナが起動してから初めの30秒間は`/tmp/healthy`ファイルがコンテナ内に存在します。 diff --git a/content/ja/docs/tasks/debug-application-cluster/debug-running-pod.md b/content/ja/docs/tasks/debug-application-cluster/debug-running-pod.md new file mode 100644 index 0000000000000..3e2c1c42f0b41 --- /dev/null +++ b/content/ja/docs/tasks/debug-application-cluster/debug-running-pod.md @@ -0,0 +1,279 @@ +--- +title: 実行中のPodのデバッグ +content_type: task +--- + + + +このページでは、ノード上で動作している(またはクラッシュしている)Podをデバッグする方法について説明します。 + + +## {{% heading "prerequisites" %}} + + +* あなたの{{< glossary_tooltip text="Pod" term_id="pod" >}}は既にスケジュールされ、実行されているはずです。Pod がまだ実行されていない場合は、[Troubleshoot Applications](/docs/tasks/debug-application-cluster/debug-application/) から始めてください。 + +* いくつかの高度なデバッグ手順では、Podがどのノードで動作しているかを知り、そのノードでコマンドを実行するためのシェルアクセス権を持っていることが必要です。`kubectl` を使用する標準的なデバッグ手順の実行には、そのようなアクセスは必要ではありません。 + + + + + +## Podログを調べます {#examine-pod-logs} + +まず、影響を受けるコンテナのログを見ます。 + +```shell +kubectl logs ${POD_NAME} ${CONTAINER_NAME} +``` + +コンテナが以前にクラッシュしたことがある場合、以前のコンテナのクラッシュログにアクセスすることができます。 + +```shell +kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} +``` + +## container execによるデバッグ {#container-exec} + +もし{{< glossary_tooltip text="container image" term_id="image" >}}がデバッグユーティリティを含んでいれば、LinuxやWindows OSのベースイメージからビルドしたイメージのように、`kubectl exec` で特定のコンテナ内でコマンドを実行することが可能です。 + +```shell +kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... 
${ARGN} +``` + +{{< note >}} +`-c ${CONTAINER_NAME}`は省略可能です。コンテナを1つだけ含むPodの場合は省略できます。 +{{< /note >}} + +例として、実行中のCassandra Podからログを見るには、次のように実行します。 + +```shell +kubectl exec cassandra -- cat /var/log/cassandra/system.log +``` + +例えば`kubectl exec`の`-i`と`-t`引数を使って、端末に接続されたシェルを実行することができます。 + +```shell +kubectl exec -it cassandra -- sh +``` + +詳しくは、[実行中のコンテナのシェルを取得する](/docs/tasks/debug-application-cluster/get-shell-running-container/)を参照してください。 + +## エフェメラルコンテナによるデバッグ {#ephemeral-container} + +{{< feature-state state="beta" for_k8s_version="v1.23" >}} + +{{< glossary_tooltip text="エフェメラルコンテナ" term_id="ephemeral-container" >}}は、コンテナがクラッシュしたり、コンテナイメージにデバッグユーティリティが含まれていないなどの理由で`kubectl exec`が不十分な場合に、対話的にトラブルシューティングを行うのに便利です([ディストロ・イメージ]( +https://github.com/GoogleContainerTools/distroless)の場合など)。 + +### エフェメラルコンテナを使用したデバッグ例 {#ephemeral-container-example} + +実行中のPodにエフェメラルコンテナを追加するには、`kubectl debug`コマンドを使用することができます。 +まず、サンプル用のPodを作成します。 + +```shell +kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never +``` + +このセクションの例では、デバッグユーティリティが含まれていない`pause`コンテナイメージを使用していますが、この方法はすべてのコンテナイメージで動作します。 + +もし、`kubectl exec`を使用してシェルを作成しようとすると、このコンテナイメージにはシェルが存在しないため、エラーが表示されます。 + +```shell +kubectl exec -it ephemeral-demo -- sh +``` + +``` +OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown +``` + +代わりに、`kubectl debug` を使ってデバッグ用のコンテナを追加することができます。 +引数に`-i`/`--interactive`を指定すると、`kubectl`は自動的にエフェメラルコンテナのコンソールにアタッチされます。 + +```shell +kubectl debug -it ephemeral-demo --image=busybox --target=ephemeral-demo +``` + +``` +Defaulting debug container name to debugger-8xzrl. +If you don't see a command prompt, try pressing enter. +/ # +``` + +このコマンドは新しいbusyboxコンテナを追加し、それにアタッチします。`target`パラメーターは、他のコンテナのプロセス名前空間をターゲットにします。これは`kubectl run`が作成するPodで[process namespace sharing](/docs/tasks/configure-pod-container/share-process-namespace/)を有効にしないため、指定する必要があります。 + +{{< note >}} +`target` パラメーターは {{< glossary_tooltip text="Container Runtime" term_id="container-runtime" >}} でサポートされている必要があります。サポートされていない場合、エフェメラルコンテナは起動されないか、`ps`が他のコンテナ内のプロセスを表示しないように孤立したプロセス名前空間を使用して起動されます。 +{{< /note >}} + +新しく作成されたエフェメラルコンテナの状態は`kubectl describe`を使って見ることができます。 + +```shell +kubectl describe pod ephemeral-demo +``` + +``` +... +Ephemeral Containers: + debugger-8xzrl: + Container ID: docker://b888f9adfd15bd5739fefaa39e1df4dd3c617b9902082b1cfdc29c4028ffb2eb + Image: busybox + Image ID: docker-pullable://busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084 + Port: + Host Port: + State: Running + Started: Wed, 12 Feb 2020 14:25:42 +0100 + Ready: False + Restart Count: 0 + Environment: + Mounts: +... 
+``` + +終了したら`kubectl delete`を使ってPodを削除してください。 + +```shell +kubectl delete pod ephemeral-demo +``` + +## Podのコピーを使ったデバッグ + +Podの設定オプションによって、特定の状況でのトラブルシューティングが困難になることがあります。 +例えば、コンテナイメージにシェルが含まれていない場合、またはアプリケーションが起動時にクラッシュした場合は、`kubectl exec`を実行してトラブルシューティングを行うことができません。 +このような状況では、`kubectl debug` を使用してデバッグを支援するために設定値を変更したPodのコピーです。 + +### 新しいコンテナを追加しながらPodをコピーします + +新しいコンテナを追加することは、アプリケーションが動作しているが期待通りの動作をせず、トラブルシューティングユーティリティをPodに追加したい場合に便利な場合があります。 +例えば、アプリケーションのコンテナイメージは`busybox`上にビルドされているが、`busybox`に含まれていないデバッグユーティリティが必要な場合があります。このシナリオは `kubectl run` を使ってシミュレーションすることができます。 + +```shell +kubectl run myapp --image=busybox --restart=Never -- sleep 1d +``` + +このコマンドを実行すると、`myapp`のコピーに`myapp-debug`という名前が付き、デバッグ用の新しいUbuntuコンテナが追加されます。 + +```shell +kubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug +``` + +``` +Defaulting debug container name to debugger-w7xmf. +If you don't see a command prompt, try pressing enter. +root@myapp-debug:/# +``` + +{{< note >}} +* `kubectl debug`は`--container`フラグでコンテナ名を選択しない場合、自動的にコンテナ名を生成します。 + +* `i`フラグを指定すると、デフォルトで`kubectl debug`が新しいコンテナにアタッチされます。これを防ぐには、`--attach=false`を指定します。セッションが切断された場合は、`kubectl attach`を使用して再接続することができます。 + +* `share-processes` を指定すると、Pod 内のコンテナからプロセスを参照することができます。この仕組みについて詳しくは、[Share Process Namespace between Containers in a Pod](/docs/tasks/configure-pod-container/share-process-namespace/)を参照してください。 +{{< /note >}} + +デバッグが終わったら、Podの後始末をするのを忘れないでください。 + +```shell +kubectl delete pod myapp myapp-debug +``` + +### Podのコマンドを変更しながらコピーします + +デバッグフラグを追加するためや、アプリケーションがクラッシュするためなど、コンテナのコマンドを変更すると便利な場合があります。 +アプリケーションのクラッシュをシミュレートするには、`kubectl run`を使用して、すぐに終了するコンテナを作成します。 + +``` +kubectl run --image=busybox myapp -- false +``` + +`kubectl describe pod myapp` を使用すると、このコンテナがクラッシュしていることがわかります。 + +``` +Containers: + myapp: + Image: busybox + ... + Args: + false + State: Waiting + Reason: CrashLoopBackOff + Last State: Terminated + Reason: Error + Exit Code: 1 +``` + +`kubectl debug`を使うと、コマンドをインタラクティブシェルに変更したこのPodのコピーを作成することができます。 + +``` +kubectl debug myapp -it --copy-to=myapp-debug --container=myapp -- sh +``` + +``` +If you don't see a command prompt, try pressing enter. +/ # +``` + +これで、ファイルシステムのパスのチェックやコンテナコマンドの手動実行などのタスクを実行するために使用できる対話型シェルが完成しました。 + +{{< note >}} +* 特定のコンテナのコマンドを変更するには、そのコンテナ名を`--container`で指定する必要があり、そうしないと`kubectl debug`が代わりに指定したコマンドを実行する新しいコンテナを作成します。 + +* ` i` フラグは、デフォルトで `kubectl debug` がコンテナにアタッチされるようにします。これを防ぐには、`--attach=false`を指定します。セッションが切断された場合は、`kubectl attach` を使用して再接続することができます。 +{{< /note >}} + +デバッグが終わったら、Podの後始末をするのを忘れないでください。 + +```shell +kubectl delete pod myapp myapp-debug +``` + +### コンテナイメージを変更してPodをコピーします + +状況によっては、動作不良のPodを通常のプロダクション用のイメージから、デバッグ・ビルドや追加ユーティリティを含むイメージに変更したい場合があります。 + +例として、`kubectl run`を使用してPodを作成します。 + +``` +kubectl run myapp --image=busybox --restart=Never -- sleep 1d +``` + +ここで、`kubectl debug`を使用してコピーを作成し、そのコンテナイメージを`ubuntu`に変更します。 + +``` +kubectl debug myapp --copy-to=myapp-debug --set-image=*=ubuntu +``` + +`set-image`の構文は、`kubectl set image`と同じ`container_name=image`の構文を使用します。`*=ubuntu`は、全てのコンテナのイメージを`ubuntu`に変更することを意味します。 + +デバッグが終わったら、Podの後始末をするのを忘れないでください。 + +```shell +kubectl delete pod myapp myapp-debug +``` + +## ノード上のシェルによるデバッグ {#node-shell-session} + +いずれの方法でもうまくいかない場合は、Podが動作しているノードを探し出し、ホストの名前空間で動作する特権Podを作成します。 +ノード上で `kubectl debug` を使って対話型のシェルを作成するには、以下を実行します。 + +```shell +kubectl debug node/mynode -it --image=ubuntu +``` + +``` +Creating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode. 
+If you don't see a command prompt, try pressing enter. +root@ek8s:/# +``` + +ノードでデバッグセッションを作成する場合、以下の点に注意してください: + +* `kubectl debug`はノードの名前に基づいて新しい Pod の名前を自動的に生成します。 +* コンテナはホストのIPC、Network、PIDネームスペースで実行されます。 +* ノードのルートファイルシステムは`/host`にマウントされます。 + +デバッグが終わったら、Podの後始末をするのを忘れないでください。 + +```shell +kubectl delete pod node-debugger-mynode-pdx84 +``` diff --git a/content/ja/docs/tasks/debug-application-cluster/monitor-node-health.md b/content/ja/docs/tasks/debug-application-cluster/monitor-node-health.md new file mode 100644 index 0000000000000..44d627a04c0f0 --- /dev/null +++ b/content/ja/docs/tasks/debug-application-cluster/monitor-node-health.md @@ -0,0 +1,151 @@ +--- +title: ノードの健全性を監視します +content_type: task +reviewers: +- ptux +--- + + + +*Node Problem Detector*は、ノードの健全性を監視し、報告するためのデーモンです。 +`Node Problem Detector`は`DaemonSet`として、あるいはスタンドアロンデーモンとして実行することができます。 + +`Node Problem Detector`は様々なデーモンからノードの問題に関する情報を収集し、これらの状態を[NodeCondition](/ja/docs/concepts/architecture/nodes/#condition)および[Event](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core)としてAPIサーバーにレポートします。 +`Node Problem Detector`のインストール方法と使用方法については、[Node Problem Detectorプロジェクトドキュメント](https://github.com/kubernetes/node-problem-detector)を参照してください。 + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + + +## 制限事項 + +* Node Problem Detectorは、ファイルベースのカーネルログのみをサポートします。 + `journald`のようなログツールはサポートされていません。 + +* Node Problem Detectorは、カーネルの問題を報告するためにカーネルログフォーマットを使用します。 + カーネルログフォーマットを拡張する方法については、[Add support for another log format](#support-other-log-format) を参照してください。 + +## ノード問題検出の有効化 + +クラウドプロバイダーによっては、`Node Problem Detector`を{{< glossary_tooltip text="Addon" term_id="addons" >}}として有効にしている場合があります。 +また、`kubectl`を使って`Node Problem Detector`を有効にするか、`Addon pod`を作成することで有効にできます。 + +### kubectlを使用してNode Problem Detectorを有効化します {#using-kubectl} + +`kubectl`は`Node Problem Detector`を最も柔軟に管理することができます。 +デフォルトの設定を上書きして自分の環境に合わせたり、カスタマイズしたノードの問題を検出したりすることができます。 +例えば: + +1. `node-problem-detector.yaml`のような`Node Problem Detector`の設定を作成します: + + {{< codenew file="debug/node-problem-detector.yaml" >}} + + {{< note >}} + システムログのディレクトリが、お使いのOSのディストリビューションに合っていることを確認する必要があります。 + {{< /note >}} + +1. `Node Problem Detector`を`kubectl`で起動します。 + + ```shell + kubectl apply -f https://k8s.io/examples/debug/node-problem-detector.yaml + ``` + +### Addon podを使用してNode Problem Detectorを有効化します {#using-addon-pod} + +カスタムのクラスターブートストラップソリューションを使用していて、デフォルトの設定を上書きする必要がない場合は、`Addon Pod`を利用してデプロイをさらに自動化できます。 +`node-problem-detector.yaml`を作成し、制御プレーンノードの`Addon Pod`のディレクトリ`/etc/kubernetes/addons/node-problem-detector`に設定を保存します。 + +## コンフィギュレーションを上書きします + +`Node Problem Detector`の Dockerイメージをビルドする際に、[default configuration](https://github.com/kubernetes/node-problem-detector/tree/v0.1/config)が埋め込まれます。 + +[`ConfigMap`](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/) を使用することで設定を上書きすることができます。 + + +1. `config/` にある設定ファイルを変更します +1. `ConfigMap` `node-problem-detector-config`を作成します。 + + ```shell + kubectl create configmap node-problem-detector-config --from-file=config/ + ``` + +1. `node-problem-detector.yaml`を変更して、`ConfigMap`を使用するようにします。 + + {{< codenew file="debug/node-problem-detector-configmap.yaml" >}} + +1. 
新しい設定ファイルで`Node Problem Detector`を再作成します。 + + ```shell + # If you have a node-problem-detector running, delete before recreating + kubectl delete -f https://k8s.io/examples/debug/node-problem-detector.yaml + kubectl apply -f https://k8s.io/examples/debug/node-problem-detector-configmap.yaml + ``` + +{{< note >}} +この方法は `kubectl` で起動された Node Problem Detector にのみ適用されます。 +{{< /note >}} + +ノード問題検出装置がクラスターアドオンとして実行されている場合、設定の上書きはサポートされていません。 +`Addon Manager`は、`ConfigMap`をサポートしていません。 + +## Kernel Monitor + +*Kernel Monitor*は`Node Problem Detector`でサポートされるシステムログ監視デーモンです。 +*Kernel Monitor*はカーネルログを監視し、事前に定義されたルールに従って既知のカーネル問題を検出します。 +*Kernel Monitor*は[`config/kernel-monitor.json`](https://github.com/kubernetes/node-problem-detector/blob/v0.1/config/kernel-monitor.json)にある一連の定義済みルールリストに従ってカーネルの問題を照合します。 +ルールリストは拡張可能です。設定を上書きすることで、ルールリストを拡張することができます。 + +### 新しいNodeConditionsの追加 + +新しい`NodeCondition`をサポートするには、例えば`config/kernel-monitor.json`の`conditions`フィールド内に条件定義を作成します。 + +```json +{ + "type": "NodeConditionType", + "reason": "CamelCaseDefaultNodeConditionReason", + "message": "arbitrary default node condition message" +} +``` + +### 新たな問題の発見 + +新しい問題を検出するために、`config/kernel-monitor.json`の`rules`フィールドを新しいルール定義で拡張することができます。 + +```json +{ + "type": "temporary/permanent", + "condition": "NodeConditionOfPermanentIssue", + "reason": "CamelCaseShortReason", + "message": "regexp matching the issue in the kernel log" +} +``` + +### カーネルログデバイスのパスの設定 {#kernel-log-device-path} + +ご使用のオペレーティングシステム(OS)ディストリビューションのカーネルログパスをご確認ください。 +Linuxカーネルの[ログデバイス](https://www.kernel.org/doc/Documentation/ABI/testing/dev-kmsg)は通常`/dev/kmsg`として表示されます。 +しかし、OSのディストリビューションによって、ログパスの位置は異なります。 +`config/kernel-monitor.json`の`log`フィールドは、コンテナ内のログパスを表します。 +`log`フィールドは、`Node Problem Detector`で見たデバイスパスと一致するように設定することができます。 + +### 別のログ形式をサポートします {#support-other-log-format} + +Kernel monitorは[`Translator`](https://github.com/kubernetes/node-problem-detector/blob/v0.1/pkg/kernelmonitor/translator/translator.go)プラグインを使用して、カーネルログの内部データ構造を変換します。 +新しいログフォーマット用に新しいトランスレータを実装することができます。 + + + +## 推奨・制限事項 + +ノードの健全性を監視するために、クラスターでNode Problem Detectorを実行することが推奨されます。 +`Node Problem Detector`を実行する場合、各ノードで余分なリソースのオーバーヘッドが発生することが予想されます。 + +通常これは問題ありません。 + +* カーネルログは比較的ゆっくりと成長します。 +* Node Problem Detector にはリソース制限が設定されています。 +* 高負荷時であっても、リソースの使用は許容範囲内です。 + +詳細は`Node Problem Detector`[ベンチマーク結果](https://github.com/kubernetes/node-problem-detector/issues/2#issuecomment-220255629)を参照してください。 diff --git a/content/ja/docs/tasks/debug-application-cluster/troubleshooting.md b/content/ja/docs/tasks/debug-application-cluster/troubleshooting.md new file mode 100644 index 0000000000000..436afc14eaf61 --- /dev/null +++ b/content/ja/docs/tasks/debug-application-cluster/troubleshooting.md @@ -0,0 +1,88 @@ +--- +content_type: concept +title: トラブルシューティング +--- + + + +時には物事がうまくいかないこともあります。このガイドは、それらを正すことを目的としています。 + + 2つのセクションから構成されています: + +* [Troubleshooting your application](/docs/tasks/debug-application-cluster/debug-application/) - Kubernetesにコードをデプロイしていて、なぜ動かないのか不思議に思っているユーザーに便利です。 +* [Troubleshooting your cluster](/docs/tasks/debug-application-cluster/debug-cluster/) - クラスター管理者やKubernetesクラスターに不満がある人に有用です。 + +また、使用している[リリース](https://github.com/kubernetes/kubernetes/releases)の既知の問題を確認する必要があります。 + + + +## ヘルプを受けます + +もしあなたの問題が上記のどのガイドでも解決されない場合は、Kubernetesコミュニティから助けを得るための様々な方法があります。 + +### ご質問 + +本サイトのドキュメントは、様々な疑問に対する答えを提供するために構成されています。 + 
+[Concepts](/docs/concepts/)では、Kubernetesのアーキテクチャーと各コンポーネントの動作について説明し、[Setup](/docs/setup/)では、使い始めるための実用的な手順を提供しています。 +[Tasks](/docs/tasks/) は、よく使われるタスクの実行方法を示し、 [Tutorials](/docs/tutorials/)は、実世界の業界特有、またはエンドツーエンドの開発シナリオ、より包括的なウォークスルーとなります。 +[Reference](/docs/reference/)セクションでは、[Kubernetes API(/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)と`kubectl`](/docs/reference/kubectl/overview/)などのコマンドラインインターフェース(CLI)に関する詳しいドキュメントが提供されています。 + +## ヘルプ!私の質問はカバーされていません!今すぐ助けてください! + +### Stack Overflow + +コミュニティの誰かがすでに同じような質問をしている可能性があり、あなたの問題を解決できるかもしれません。 +Kubernetesチームも[Kubernetesタグが付けられた投稿](https://stackoverflow.com/questions/tagged/kubernetes)を監視しています。 +もし役立つ既存の質問がない場合は、[新しく質問してみてください](https://stackoverflow.com/questions/ask?tags=kubernetes)。 + + +### Slack + +Kubernetesコミュニティの多くの人々は、Kubernetes Slackの`#kubernetes-users`チャンネルに集まっています。 +Slackは登録が必要です。[招待をリクエストする](https://slack.kubernetes.io)ことができ、登録は誰でも可能です)。 +お気軽にお越しいただき、何でも質問してください。 +登録が完了したら、WebブラウザまたはSlackの専用アプリから[Kubernetes organization in Slack](https://kubernetes.slack.com)にアクセスします。 + +登録が完了したら、増え続けるチャンネルリストを見て、興味のある様々なテーマについて調べてみましょう。 +たとえば、Kubernetesの初心者は、[`#kubernetes-novice`](https://kubernetes.slack.com/messages/kubernetes-novice)に参加してみるのもよいでしょう。 +別の例として、開発者は[`#kubernetes-dev`](https://kubernetes.slack.com/messages/kubernetes-dev)チャンネルに参加するとよいでしょう。 + +また、多くの国別/言語別チャンネルがあります。これらのチャンネルに参加すれば、地域特有のサポートや情報を得ることができます。 + +{{< table caption="Country / language specific Slack channels" >}} +Country | Channels +:---------|:------------ +中国 | [`#cn-users`](https://kubernetes.slack.com/messages/cn-users), [`#cn-events`](https://kubernetes.slack.com/messages/cn-events) +フィンランド | [`#fi-users`](https://kubernetes.slack.com/messages/fi-users) +フランス | [`#fr-users`](https://kubernetes.slack.com/messages/fr-users), [`#fr-events`](https://kubernetes.slack.com/messages/fr-events) +ドイツ | [`#de-users`](https://kubernetes.slack.com/messages/de-users), [`#de-events`](https://kubernetes.slack.com/messages/de-events) +インド | [`#in-users`](https://kubernetes.slack.com/messages/in-users), [`#in-events`](https://kubernetes.slack.com/messages/in-events) +イタリア | [`#it-users`](https://kubernetes.slack.com/messages/it-users), [`#it-events`](https://kubernetes.slack.com/messages/it-events) +日本 | [`#jp-users`](https://kubernetes.slack.com/messages/jp-users), [`#jp-events`](https://kubernetes.slack.com/messages/jp-events) +韓国 | [`#kr-users`](https://kubernetes.slack.com/messages/kr-users) +オランダ | [`#nl-users`](https://kubernetes.slack.com/messages/nl-users) +ノルウェー | [`#norw-users`](https://kubernetes.slack.com/messages/norw-users) +ポーランド | [`#pl-users`](https://kubernetes.slack.com/messages/pl-users) +ロシア | [`#ru-users`](https://kubernetes.slack.com/messages/ru-users) +スペイン | [`#es-users`](https://kubernetes.slack.com/messages/es-users) +スウェーデン | [`#se-users`](https://kubernetes.slack.com/messages/se-users) +トルコ | [`#tr-users`](https://kubernetes.slack.com/messages/tr-users), [`#tr-events`](https://kubernetes.slack.com/messages/tr-events) +{{< /table >}} + +### フォーラム + +Kubernetesの公式フォーラムへの参加は大歓迎です[discuss.kubernetes.io](https://discuss.kubernetes.io)。 + +### バグと機能の要望 + +バグらしきものを発見した場合、または機能要望を出したい場合、[GitHub課題追跡システム](https://github.com/kubernetes/kubernetes/issues)をご利用ください。 +課題を提出する前に、既存の課題を検索して、あなたの課題が解決されているかどうかを確認してください。 + +バグを報告する場合は、そのバグを再現するための詳細な情報を含めてください。 + +* Kubernetes のバージョン: `kubectl version` +* クラウドプロバイダー、OSディストリビューション、ネットワーク構成、Dockerバージョン +* 問題を再現するための手順 + + diff --git 
a/content/ja/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/ja/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index 5e7b77443d190..a55a093c2bd34 100644 --- a/content/ja/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/ja/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -234,7 +234,7 @@ WordPressのインストールをこのページのまま放置してはいけ ## {{% heading "whatsnext" %}} -* [イントロスペクションとデバッグ](/docs/tasks/debug-application-cluster/debug-application-introspection/)についてさらに学ぶ +* [イントロスペクションとデバッグ](/docs/tasks/debug/debug-application)についてさらに学ぶ * [Job](/docs/concepts/workloads/controllers/job/)についてさらに学ぶ * [Portフォワーディング](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)についてさらに学ぶ * [コンテナへのシェルを取得する](/ja/docs/tasks/debug-application-cluster/get-shell-running-container/)方法について学ぶ diff --git a/content/ja/examples/admin/resource/cpu-constraints-pod-3.yaml b/content/ja/examples/admin/resource/cpu-constraints-pod-3.yaml deleted file mode 100644 index 896d98ec2f7d4..0000000000000 --- a/content/ja/examples/admin/resource/cpu-constraints-pod-3.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: constraints-cpu-demo-4 -spec: - containers: - - name: constraints-cpu-demo-4-ctr - image: nginx - resources: - limits: - cpu: "800m" - requests: - cpu: "100m" diff --git a/content/ja/examples/pods/probe/exec-liveness.yaml b/content/ja/examples/pods/probe/exec-liveness.yaml index 07bf75f85c6f3..6a9c9b3213718 100644 --- a/content/ja/examples/pods/probe/exec-liveness.yaml +++ b/content/ja/examples/pods/probe/exec-liveness.yaml @@ -11,7 +11,7 @@ spec: args: - /bin/sh - -c - - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 + - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600 livenessProbe: exec: command: diff --git a/content/ja/includes/default-storage-class-prereqs.md b/content/ja/includes/default-storage-class-prereqs.md index 52d100d6ef791..648bbfc1acdb7 100644 --- a/content/ja/includes/default-storage-class-prereqs.md +++ b/content/ja/includes/default-storage-class-prereqs.md @@ -1 +1 @@ -ここで使用されている[PersistentVolumeClaims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims)の要件を満たすには、デフォルトの[StorageClass](/docs/concepts/storage/storage-classes/)を使用して動的PersistentVolumeプロビジョナーを作成するか、[PersistentVolumesを静的にプロビジョニングする](/docs/user-guide/persistent-volumes/#provisioning)必要があります。 +ここで使用されている[PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)の要件を満たすには、デフォルトの[StorageClass](/docs/concepts/storage/storage-classes/)を使用して動的PersistentVolumeプロビジョナーを作成するか、[PersistentVolumesを静的にプロビジョニングする](/docs/concepts/storage/persistent-volumes/#provisioning)必要があります。 diff --git a/content/ko/docs/contribute/participate/pr-wranglers.md b/content/ko/docs/contribute/participate/pr-wranglers.md index 424f9b0227ea8..833b006f1a4b9 100644 --- a/content/ko/docs/contribute/participate/pr-wranglers.md +++ b/content/ko/docs/contribute/participate/pr-wranglers.md @@ -85,6 +85,6 @@ PR 랭글러는 일주일 간 매일 다음의 일을 해야 한다. {{< note >}} -[`fejta-bot`](https://github.com/fejta-bot)이라는 봇은 90일 동안 활동이 없으면 이슈를 오래된 것(stale)으로 표시한다. 30일이 더 지나면 rotten으로 표시하고 종료한다. PR 랭글러는 14-30일 동안 활동이 없으면 이슈를 닫아야 한다. +[`k8s-triage-robot`](https://github.com/k8s-triage-robot)이라는 봇은 90일 동안 활동이 없으면 이슈를 오래된 것(stale)으로 표시한다. 30일이 더 지나면 rotten으로 표시하고 종료한다. PR 랭글러는 14-30일 동안 활동이 없으면 이슈를 닫아야 한다. 
{{< /note >}} diff --git a/content/pl/releases/_index.md b/content/pl/releases/_index.md index 46c2a7659f5ab..a5cb3fdc52ca6 100644 --- a/content/pl/releases/_index.md +++ b/content/pl/releases/_index.md @@ -7,7 +7,7 @@ type: docs -Projekt Kubernetes zapewnia wsparcie dla trzech ostatnich wydań _minor_ ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Poprawki do wydania 1.19 i nowszych będą publikowane przez około rok. Kuberetes w wersji 1.18 i wcześniejszych będzie otrzymywał poprawki przez 9 miesięcy. +Projekt Kubernetes zapewnia wsparcie dla trzech ostatnich wydań _minor_ ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Poprawki do wydania 1.19 i nowszych będą publikowane przez około rok. Kubernetes w wersji 1.18 i wcześniejszych będzie otrzymywał poprawki przez 9 miesięcy. Wersje Kubernetesa oznaczane są jako **x.y.z**, gdzie **x** jest oznaczeniem wersji głównej (_major_), **y** — podwersji (_minor_), a **z** — numer poprawki (_patch_), zgodnie z terminologią [Semantic Versioning](https://semver.org/). diff --git a/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md b/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md index bf92c1fedce11..77e1ce0e6f1b8 100644 --- a/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md +++ b/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md @@ -15,7 +15,7 @@ como parte do lançamento do Kubernetes v1.20. Para obter mais detalhes sobre o que isso significa, confira a postagem do blog [Não entre em pânico: Kubernetes e Docker](/pt-br/blog/2020/12/02/dont-panic-kubernetes-and-docker/). -Além disso, você pode ler [verifique se a remoção do dockershim afeta você](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) +Além disso, você pode ler [verifique se a remoção do dockershim afeta você](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) para determinar qual impacto a remoção do _dockershim_ teria para você ou para sua organização. @@ -170,7 +170,7 @@ contêiner assim que possível. Outro aspecto a ser observado é que ferramentas para manutenção do sistema ou execuções dentro de um contêiner no momento da criação de imagens podem não funcionar mais. Para o primeiro, a ferramenta [`crictl`][cr] pode ser utilizada como um substituto natural (veja -[migrando do docker cli para o crictl](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl)) +[migrando do docker cli para o crictl](https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/#mapping-from-docker-cli-to-crictl)) e para o último, você pode usar novas opções de construções de contêiner, como [img], [buildah], [kaniko], ou [buildkit-cli-for-kubectl] que não requerem Docker. diff --git a/content/pt-br/docs/concepts/cluster-administration/addons.md b/content/pt-br/docs/concepts/cluster-administration/addons.md index f3a00ae26d03e..73551e9e657ab 100644 --- a/content/pt-br/docs/concepts/cluster-administration/addons.md +++ b/content/pt-br/docs/concepts/cluster-administration/addons.md @@ -21,7 +21,7 @@ Esta página lista alguns dos complementos disponíveis e links com suas respect * [Canal](https://github.com/tigera/canal/tree/master/k8s-install) une Flannel e Calico, fornecendo rede e política de rede. 
* [Cilium](https://github.com/cilium/cilium) é um plug-in de rede de camada 3 e de políticas de rede que pode aplicar políticas HTTP/API/camada 7 de forma transparente. Tanto o modo de roteamento quanto o de sobreposição/encapsulamento são suportados. Este plug-in também consegue operar no topo de outros plug-ins CNI. * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) permite que o Kubernetes se conecte facilmente a uma variedade de plug-ins CNI, como Calico, Canal, Flannel, Romana ou Weave. -* [Contiv](http://contiv.github.io) oferece serviços de rede configuráveis para diferentes casos de uso (camada 3 nativa usando BGP, _overlay_ (sobreposição) usando vxlan, camada 2 clássica e Cisco-SDN/ACI) e também um _framework_ rico de políticas de rede. O projeto Contiv é totalmente [open source](http://github.com/contiv). O [instalador](http://github.com/contiv/install) fornece opções de instalação com ou sem kubeadm. +* [Contiv](https://contivpp.io/) oferece serviços de rede configuráveis para diferentes casos de uso (camada 3 nativa usando BGP, _overlay_ (sobreposição) usando vxlan, camada 2 clássica e Cisco-SDN/ACI) e também um _framework_ rico de políticas de rede. O projeto Contiv é totalmente [open source](http://github.com/contiv). O [instalador](http://github.com/contiv/install) fornece opções de instalação com ou sem kubeadm. * [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/) é uma plataforma open source baseada no [Tungsten Fabric](https://tungsten.io) que oferece virtualização de rede multi-nuvem e gerenciamento de políticas de rede. O Contrail e o Tungsten Fabric são integrados a sistemas de orquestração de contêineres, como Kubernetes, OpenShift, OpenStack e Mesos, e fornecem modos de isolamento para cargas de trabalho executando em máquinas virtuais, contêineres/pods e servidores físicos. * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) é um provedor de redes _overlay_ (sobrepostas) que pode ser usado com o Kubernetes. * [Knitter](https://github.com/ZTE/Knitter/) é um plug-in para suporte de múltiplas interfaces de rede em Pods do Kubernetes. @@ -30,7 +30,7 @@ Esta página lista alguns dos complementos disponíveis e links com suas respect * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) é uma plataforma de rede definida por software que fornece serviços de rede baseados em políticas entre os Pods do Kubernetes e os ambientes não-Kubernetes, com visibilidade e monitoramento de segurança. * [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) é um provedor de rede para o Kubernetes baseado no [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), uma implementação de redes virtuais que surgiu através do projeto Open vSwitch (OVS). O OVN-Kubernetes fornece uma implementação de rede baseada em _overlay_ (sobreposição) para o Kubernetes, incluindo uma implementação baseada em OVS para serviços de balanceamento de carga e políticas de rede. 
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) é um plug-in controlador CNI baseado no OVN (Open Virtual Network) que fornece serviços de rede _cloud native_, como _Service Function Chaining_ (SFC), redes _overlay_ (sobrepostas) OVN múltiplas, criação dinâmica de subredes, criação dinâmica de redes virtuais, provedor de rede VLAN e provedor de rede direto, e é plugável a outros plug-ins multi-rede. Ideal para cargas de trabalho que utilizam computação de borda _cloud native_ em redes multi-cluster. -* [Romana](http://romana.io) é uma solução de rede de camada 3 para redes de pods que também suporta a [API NetworkPolicy](/docs/concepts/services-networking/network-policies/). Detalhes da instalação do complemento Kubeadm disponíveis [aqui](https://github.com/romana/romana/tree/master/containerize). +* [Romana](https://github.com/romana/romana) é uma solução de rede de camada 3 para redes de pods que também suporta a [API NetworkPolicy](/pt-br/docs/concepts/services-networking/network-policies/). Detalhes da instalação do complemento Kubeadm disponíveis [aqui](https://github.com/romana/romana/tree/master/containerize). * [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) fornece rede e política de rede, funciona em ambos os lados de uma partição de rede e não requer um banco de dados externo. ## Descoberta de Serviço diff --git a/content/pt-br/docs/concepts/configuration/configmap.md b/content/pt-br/docs/concepts/configuration/configmap.md index e3666a8542484..3fe111a078858 100644 --- a/content/pt-br/docs/concepts/configuration/configmap.md +++ b/content/pt-br/docs/concepts/configuration/configmap.md @@ -45,7 +45,7 @@ são opcionais. O campo `data` foi pensado para conter sequências de bytes UTF- foi planejado para conter dados binários em forma de strings codificadas em base64. É obrigatório que o nome de um ConfigMap seja um -[subdomínio DNS válido](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +[subdomínio DNS válido](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). Cada chave sob as seções `data` ou `binaryData` pode conter quaisquer caracteres alfanuméricos, `-`, `_` e `.`. As chaves armazenadas na seção `data` não podem colidir com as chaves armazenadas diff --git a/content/pt-br/docs/concepts/configuration/secret.md b/content/pt-br/docs/concepts/configuration/secret.md index bd0cd7315dcad..0fc63bc2e2d19 100644 --- a/content/pt-br/docs/concepts/configuration/secret.md +++ b/content/pt-br/docs/concepts/configuration/secret.md @@ -65,7 +65,7 @@ A camada de gerenciamento do Kubernetes também utiliza Secrets. Por exemplo, os [Secrets de tokens de autoinicialização](#bootstrap-token-secrets) são um mecanismo que auxilia a automação do registro de nós. -O nome de um Secret deve ser um [subdomínio DNS válido](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +O nome de um Secret deve ser um [subdomínio DNS válido](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). Você pode especificar o campo `data` e/ou o campo `stringData` na criação de um arquivo de configuração de um Secret. Ambos os campos `data` e `stringData` são opcionais. 
Os valores das chaves no campo `data` devem ser strings codificadas diff --git a/content/pt-br/docs/concepts/containers/runtime-class.md b/content/pt-br/docs/concepts/containers/runtime-class.md index 8f6e33aeeee90..ee090beedcc3b 100644 --- a/content/pt-br/docs/concepts/containers/runtime-class.md +++ b/content/pt-br/docs/concepts/containers/runtime-class.md @@ -66,7 +66,7 @@ handler: myconfiguration # Nome da configuração CRI correspondente ``` O nome de um objeto RuntimeClass deve ser um -[nome de subdomínio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. +[nome de subdomínio DNS](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. {{< note >}} É recomendado que operações de escrita no objeto RuntimeClass (criar/atualizar/patch/apagar) diff --git a/content/pt-br/docs/concepts/overview/working-with-objects/names.md b/content/pt-br/docs/concepts/overview/working-with-objects/names.md index 16556d127afae..bbe5a4987165b 100644 --- a/content/pt-br/docs/concepts/overview/working-with-objects/names.md +++ b/content/pt-br/docs/concepts/overview/working-with-objects/names.md @@ -1,30 +1,83 @@ --- -title: Nomes +title: Nomes de objetos e IDs content_type: concept weight: 20 --- -Cada objeto em um cluster possui um Nome que é único para aquele tipo de recurso. -Todo objeto do Kubernetes também possui um UID que é único para todo o cluster. +Cada objeto em seu cluster possui um [_Nome_](#names) que é único para aquele +tipo de recurso. +Todo objeto do Kubernetes também possui um [_UID_](#uids) que é único para todo +o cluster. -Por exemplo, você pode ter apenas um Pod chamado "myapp-1234", porém você pode ter um Pod -e um Deployment ambos com o nome "myapp-1234". +Por exemplo, você pode ter apenas um Pod chamado `myapp-1234` dentro de um +[namespace](/pt-br/docs/concepts/overview/working-with-objects/namespaces/), porém +você pode ter um Pod e um Deployment ambos com o nome `myapp-1234`. -Para atributos não únicos providenciados por usuário, Kubernetes providencia [labels](/docs/concepts/overview/working-with-objects/labels/) e [annotations](/docs/concepts/overview/working-with-objects/annotations/). +Para atributos não-únicos definidos pelo usuário, o Kubernetes fornece +[labels](/docs/concepts/overview/working-with-objects/labels/) e +[annotations](/docs/concepts/overview/working-with-objects/annotations/). + +## Nomes {#names} - +{{< glossary_definition term_id="name" length="all" >}} + +{{< note >}} +Em casos em que objetos representam uma entidade física, como no caso de um Nó +representando um host físico, caso o host seja recriado com o mesmo nome mas o +objeto Nó não seja recriado, o Kubernetes trata o novo host como o host antigo, +o que pode causar inconsistências. +{{< /note >}} + +Abaixo estão descritos quatro tipos de restrições de nomes comumente utilizadas +para recursos. + +### Nomes de subdomínio DNS {#dns-subdomain-names} + +A maior parte dos recursos do Kubernetes requerem um nome que possa ser +utilizado como um nome de subdomínio DNS, conforme definido na +[RFC 1123](https://tools.ietf.org/html/rfc1123). +Isso significa que o nome deve: + +- conter no máximo 253 caracteres +- conter somente caracteres alfanuméricos em caixa baixa, traço ('-') ou ponto + ('.'). 
+- iniciar com um caractere alfanumérico +- terminar com um caractere alfanumérico + +### Nomes de rótulos da RFC 1123 {#dns-label-names} + +Alguns tipos de recurso requerem que seus nomes sigam o padrão de rótulos DNS +definido na [RFC 1123](https://tools.ietf.org/html/rfc1123). +Isso significa que o nome deve: + +- conter no máximo 63 caracteres +- conter somente caracteres alfanuméricos em caixa baixa ou traço ('-') +- iniciar com um caractere alfanumérico +- terminar com um caractere alfanumérico + +### Nomes de rótulo da RFC 1035 + +Alguns tipos de recurso requerem que seus nomes sigam o padrão de rótulos DNS +definido na [RFC 1035](https://tools.ietf.org/html/rfc1035). +Isso significa que o nome deve: -## Nomes +- conter no máximo 63 caracteres +- conter somente caracteres alfanuméricos em caixa baixa ou traço ('-') +- iniciar com um caractere alfanumérico +- terminar com um caractere alfanumérico +### Nomes de segmentos de caminhos -Recursos Kubernetes podem ter nomes com até 253 caracteres. Os caracteres permitidos em nomes são: dígitos (0-9), letras minúsculas (a-z), `-`, e `.`. +Alguns tipos de recurso requerem que seus nomes possam ser seguramente +codificados como um segmento de caminho, ou seja, o nome não pode ser "." ou +".." e não pode conter "/" ou "%". -A seguir, um exemplo para um Pod chamado `nginx-demo`. +Exemplo de um manifesto para um Pod chamado `nginx-demo`. ```yaml apiVersion: v1 @@ -45,13 +98,15 @@ Alguns tipos de recursos possuem restrições adicionais em seus nomes. ## UIDs +{{< glossary_definition term_id="uid" length="all" >}} -Kubernetes UIDs são identificadores únicos universais (também chamados de UUIDs). -UUIDs utilizam padrões ISO/IEC 9834-8 e ITU-T X.667. +UIDs no Kubernetes são identificadores únicos universais (também conhecidos como +UUIDs). +UUIDs seguem os padrões ISO/IEC 9834-8 e ITU-T X.667. ## {{% heading "whatsnext" %}} -* Leia sobre [labels](/docs/concepts/overview/working-with-objects/labels/) em Kubernetes. -* Consulte o documento de design [Identificadores e Nomes em Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md). +* Leia sobre [labels](/docs/concepts/overview/working-with-objects/labels/) no Kubernetes. +* Consulte o documento de design [Identifiers and Names in Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md). diff --git a/content/pt-br/docs/concepts/policy/_index.md b/content/pt-br/docs/concepts/policy/_index.md new file mode 100644 index 0000000000000..1b5e70aa9b225 --- /dev/null +++ b/content/pt-br/docs/concepts/policy/_index.md @@ -0,0 +1,6 @@ +--- +title: "Políticas" +weight: 90 +description: > + Políticas que você pode configurar e que afetam grupos de recursos. +--- diff --git a/content/pt-br/docs/concepts/policy/limit-range.md b/content/pt-br/docs/concepts/policy/limit-range.md index 929a760c2e532..33db813a6048a 100644 --- a/content/pt-br/docs/concepts/policy/limit-range.md +++ b/content/pt-br/docs/concepts/policy/limit-range.md @@ -23,7 +23,7 @@ O suporte ao _LimitRange_ foi ativado por padrão desde o Kubernetes 1.10. Um _LimitRange_ é aplicado em um _namespace_ específico quando há um objeto _LimitRange_ nesse _namespace_. -O nome de um objeto _LimitRange_ deve ser um [nome de subdomínio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. +O nome de um objeto _LimitRange_ deve ser um [nome de subdomínio DNS](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. 
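A título de ilustração, um esboço mínimo de um objeto _LimitRange_ que define limites padrão de memória para os contêineres de um namespace (o nome e os valores abaixo são apenas hipotéticos):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limite-memoria-exemplo   # nome hipotético; deve ser um subdomínio DNS válido
spec:
  limits:
  - type: Container
    default:            # limite aplicado quando o contêiner não define 'limits'
      memory: 512Mi
    defaultRequest:     # requisição aplicada quando o contêiner não define 'requests'
      memory: 256Mi
```

Um objeto como esse só tem efeito no namespace em que é criado.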
### Visão geral do Limit Range diff --git a/content/pt-br/docs/concepts/policy/resource-quotas.md b/content/pt-br/docs/concepts/policy/resource-quotas.md index b20baffb83ea7..6468858348532 100644 --- a/content/pt-br/docs/concepts/policy/resource-quotas.md +++ b/content/pt-br/docs/concepts/policy/resource-quotas.md @@ -34,7 +34,7 @@ As cotas de recursos funcionam assim: Veja o [passo a passo](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) para um exemplo de como evitar este problema. -O nome de um objeto `ResourceQuota` deve ser um [nome do subdomínio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. +O nome de um objeto `ResourceQuota` deve ser um [nome do subdomínio DNS](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. Exemplos de políticas que podem ser criadas usando _namespaces_ e cotas são: diff --git a/content/pt-br/docs/concepts/security/overview.md b/content/pt-br/docs/concepts/security/overview.md index cfef9135f7bf5..0cbbbf29612b2 100644 --- a/content/pt-br/docs/concepts/security/overview.md +++ b/content/pt-br/docs/concepts/security/overview.md @@ -103,7 +103,7 @@ vulnerável a um ataque de exaustão de recursos e, por consequência, o risco d Autorização RBAC (acesso à API Kubernetes) | https://kubernetes.io/docs/reference/access-authn-authz/rbac/ Autenticação | https://kubernetes.io/docs/concepts/security/controlling-access/ Gerenciamento de segredos na aplicação (e encriptando-os no etcd em repouso) | https://kubernetes.io/docs/concepts/configuration/secret/
      https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/ -Políticas de segurança do Pod | https://kubernetes.io/docs/concepts/policy/pod-security-policy/ +Garantir que os Pods atendem aos padrões de segurança do Pod | https://kubernetes.io/docs/concepts/security/pod-security-standards/#policy-instantiation Qualidade de serviço (e gerenciamento de recursos de cluster) | https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/ Políticas de Rede | https://kubernetes.io/docs/concepts/services-networking/network-policies/ TLS para Kubernetes Ingress | https://kubernetes.io/docs/concepts/services-networking/ingress/#tls diff --git a/content/pt-br/docs/concepts/storage/_index.md b/content/pt-br/docs/concepts/storage/_index.md new file mode 100644 index 0000000000000..41cfbb35075d7 --- /dev/null +++ b/content/pt-br/docs/concepts/storage/_index.md @@ -0,0 +1,8 @@ +--- +title: "Armazenamento" +weight: 70 +description: > + Formas de fornecer armazenamento temporário e de longa duração a Pods em seu + cluster. +--- + diff --git a/content/pt-br/docs/concepts/storage/persistent-volumes.md b/content/pt-br/docs/concepts/storage/persistent-volumes.md index 65396a3c37b18..12ff2700fe412 100644 --- a/content/pt-br/docs/concepts/storage/persistent-volumes.md +++ b/content/pt-br/docs/concepts/storage/persistent-volumes.md @@ -1,15 +1,9 @@ --- -reviewers: -- jsafrane -- saad-ali -- thockin -- msau42 -- xing-yang title: Volumes Persistentes feature: title: Orquestração de Armazenamento description: > - Montar automaticamente o armazenamento de sua escolha, seja de um armazenamento local, de um provedor de cloud pública, como GCP ou AWS, ou um armazenameto de rede, como NFS, iSCSI, Gluster, Ceph, Cinder ou Flocker. + Monte automaticamente o armazenamento de sua escolha, seja de um armazenamento local, de um provedor de cloud pública, como GCP ou AWS, ou um armazenamento de rede, como NFS, iSCSI, Gluster, Ceph, Cinder ou Flocker. content_type: conceito weight: 20 @@ -28,7 +22,7 @@ O gerenciamento de armazenamento é uma questão bem diferente do gerenciamento Um _PersistentVolume_ (PV) é uma parte do armazenamento dentro do cluster que tenha sido provisionada por um administrador, ou dinamicamente utilizando [Classes de Armazenamento](/docs/concepts/storage/storage-classes/). Isso é um recurso dentro do cluster da mesma forma que um nó também é. PVs são plugins de volume da mesma forma que Volumes, porém eles têm um ciclo de vida independente de qualquer Pod que utilize um PV. Essa API tem por objetivo mostrar os detalhes da implementação do armazenamento, seja ele NFS, iSCSI, ou um armazenamento específico de um provedor de cloud pública. -Uma_PersistentVolumeClaim_ (PVC) é uma requisição para armazenamento por um usuário. É similar a um Pod. Pods utilizam recursos do nó e PVCs utilizam recursos do PV. Pods podem solicitar níveis específicos de recursos (CPU e Memória). Claims podem solicitar tamanho e modos de acesso específicos (exemplo: montagem como ReadWriteOnce, ReadOnlyMany ou ReadWriteMany, veja [Modos de Acesso](#modos-de-acesso)). +Uma _PersistentVolumeClaim_ (PVC) é uma requisição para armazenamento por um usuário. É similar a um Pod. Pods utilizam recursos do nó e PVCs utilizam recursos do PV. Pods podem solicitar níveis específicos de recursos (CPU e Memória). Claims podem solicitar tamanho e modos de acesso específicos (exemplo: montagem como ReadWriteOnce, ReadOnlyMany ou ReadWriteMany, veja [Modos de Acesso](#modos-de-acesso)). 
Enquanto as PersistentVolumeClaims permitem que um usuário utilize recursos de armazenamento de forma limitada, é comum que usuários precisem de PersistentVolumes com diversas propriedades, como desempenho, para problemas diversos. Os administradores de cluster precisam estar aptos a oferecer uma variedade de PersistentVolumes que difiram em tamanho e modo de acesso, sem expor os usuários a detalhes de como esses volumes são implementados. Para necessidades como essas, temos o recurso de _StorageClass_. @@ -314,7 +308,7 @@ Tipos de PersistentVolume são implementados como plugins. Atualmente o Kubernet ## Volumes Persistentes -Cada PV contém uma `spec` e um status, que é a especificação e o status do volume. O nome do PersistentVolume deve ser um [DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. +Cada PV contém uma `spec` e um status, que é a especificação e o status do volume. O nome do PersistentVolume deve ser um [DNS](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. ```yaml apiVersion: v1 @@ -468,7 +462,7 @@ A CLI mostrará o nome do PV que foi atrelado à PVC ## PersistentVolumeClaims -Cada PVC contém uma `spec` e um status, que é a especificação e estado de uma requisição. O nome de um objeto PersistentVolumeClaim precisa ser um [DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. +Cada PVC contém uma `spec` e um status, que é a especificação e estado de uma requisição. O nome de um objeto PersistentVolumeClaim precisa ser um [DNS](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. ```yaml apiVersion: v1 diff --git a/content/pt-br/docs/concepts/storage/volumes.md b/content/pt-br/docs/concepts/storage/volumes.md new file mode 100644 index 0000000000000..d2c42b832e76e --- /dev/null +++ b/content/pt-br/docs/concepts/storage/volumes.md @@ -0,0 +1,978 @@ +--- +reviewers: +- jsafrane +- saad-ali +- thockin +- msau42 +title: Volumes +content_type: conceito +weight: 10 +--- + + + +Os arquivos em disco em um contêiner são efêmeros, o que apresenta alguns problemas para +aplicações não triviais quando executadas em contêineres. Um problema é a perda de arquivos +quando um contêiner quebra. O kubelet reinicia o contêiner, mas em um estado limpo. Um segundo +problema ocorre ao compartilhar arquivos entre contêineres que são executados juntos em +um `Pod`. A abstração de {{< glossary_tooltip text="volume" term_id="volume" >}} +do Kubernetes resolve ambos os problemas. Sugere-se familiaridade com [Pods](/docs/concepts/workloads/pods/) . + + +## Contexto + +Docker tem um conceito de [volumes](https://docs.docker.com/storage/), embora seja um pouco mais +simples e menos gerenciado. Um volume Docker é um diretório em disco ou em outro contêiner. +O Docker oferece drivers de volume, mas a funcionalidade é um pouco limitada. + +O Kubernetes suporta muitos tipos de volumes. Um {{< glossary_tooltip term_id="pod" text="Pod" >}} é capaz de utilizar qualquer quantidade de tipos de volumes simultaneamente. Os tipos de volume efêmeros têm a mesma vida útil do pod, mas os volumes persistentes existem além da vida útil de um pod. Quando um pod deixa de existir, o Kubernetes destrói volumes efêmeros; no entanto, o Kubernetes não destrói volumes persistentes. Para qualquer tipo de volume em um determinado pod, os dados são preservados entre as reinicializações do contêiner. 
+ +Em sua essência, um volume é um diretório, eventualmente com alguns dados dentro dele, que é acessível aos contêineres de um Pod. Como esse diretório vem a ser, o meio que o suporta e o conteúdo do mesmo são determinados pelo tipo particular de volume utilizado. + +Para utilizar um volume, especifique os volumes que serão disponibilizados para o Pod em `.spec.volumes` e declare onde montar esses volumes dentro dos contêineres em `.spec.containers[*].volumeMounts`. Um processo em um contêiner enxerga uma visualização do sistema de arquivos composta pelo conteúdo inicial da {{< glossary_tooltip text="imagem do contêiner" term_id="image" >}} mais os volumes (se definidos) montados dentro do contêiner. O processo enxerga um sistema de arquivos raiz que inicialmente corresponde ao conteúdo da imagem do contêiner. Qualquer gravação dentro dessa hierarquia do sistema de arquivos, se permitida, afetará o que esse processo enxerga quando ele executa um acesso subsequente ao sistema de arquivos. Os volumes são montados nos [caminhos especificados](#using-subpath) dentro da imagem. Para cada contêiner definido em um Pod, você deve especificar independentemente onde montar cada volume utilizado pelo contêiner. + +Volumes não podem ser montados dentro de outros volumes (mas você pode consultar [Utilizando subPath](#using-subpath) para um mecanismo relacionado). Além disso, um volume não pode conter um link físico para qualquer outro dado em um volume diferente. + +## Tipos de Volumes {#volume-types} + +Kubernetes suporta vários tipos de volumes. + +### awsElasticBlockStore {#awselasticblockstore} + +Um volume `awsElasticBlockStore` monta um [volume EBS](https://aws.amazon.com/ebs/) da Amazon Web Services (AWS) em seu pod. Ao contrário do `emptyDir` que é apagado quando um pod é removido, o conteúdo de um volume EBS é preservado e o volume é desmontado. Isto significa que um volume EBS pode ser previamente populado com dados e que os dados podem ser compartilhados entre Pods. + +{{< note >}} +Você precisa criar um volume EBS usando `aws ec2 create-volume` ou pela API da AWS antes que você consiga utilizá-lo. +{{< /note >}} + +Existem algumas restrições ao utilizar um volume `awsElasticBlockStore`: + +* Os nós nos quais os Pods estão sendo executados devem ser instâncias AWS EC2 +* Estas instâncias devem estar na mesma região e na mesma zona de disponibilidade que o volume EBS +* O EBS suporta montar um volume em apenas uma única instância EC2 + +#### Criando um volume AWS EBS + +Antes de poder utilizar um volume EBS com um pod, você precisa criá-lo. + +```shell +aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2 +``` + +Certifique-se de que a zona corresponde à mesma zona em que criou o cluster. Verifique se o tamanho e o tipo de volume EBS são adequados para a sua utilização. + +#### Exemplo de configuração do AWS EBS + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-ebs +spec: + containers: + - image: k8s.gcr.io/test-webserver + name: test-container + volumeMounts: + - mountPath: /test-ebs + name: test-volume + volumes: + - name: test-volume + # Esse volume AWS EBS já deve existir. + awsElasticBlockStore: + volumeID: "" + fsType: ext4 +``` + +Se o volume EBS estiver particionado, é possível informar o campo opcional `partition: ""` para especificar em que partição deve ser montado. 
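+Por exemplo, um esboço (com um ID de volume hipotético) montando a primeira partição de um volume EBS:
+
+```yaml
+volumes:
+  - name: test-volume
+    awsElasticBlockStore:
+      # ID hipotético; substitua pelo ID real do seu volume EBS.
+      volumeID: "vol-0123456789abcdef0"
+      fsType: ext4
+      # Monta a primeira partição do volume.
+      partition: 1
+```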
+ +#### Migração de CSI do AWS EBS + +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +Quando o recurso `CSIMigration` para `awsElasticBlockStore` está habilitado, todas as operações de plugin do tipo in-tree são redirecionadas para o driver Container Storage Interface (CSI) `ebs.csi.aws.com`. Para usar esse recurso, o [driver CSI AWS EBS](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) deve estar instalado no cluster e os recursos beta `CSIMigration` e `CSIMigrationAWS` devem estar ativados. + +#### Migração CSI AWS EBS concluída + +{{< feature-state for_k8s_version="v1.17" state="alpha" >}} + +Para desabilitar o carregamento do plugin de armazenamento `awsElasticBlockStore` pelo gerenciador de controladores e pelo kubelet, defina a flag `InTreePluginAWSUnregister` como `true`. + +### azureDisk {#azuredisk} + +O tipo de volume `azureDisk` monta um [Disco de Dados](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers) Microsoft Azure em um pod. + +Para obter mais detalhes, consulte [plugin de volume `azureDisk`](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_disk/README.md). + +#### Migração de CSI do azureDisk + +{{< feature-state for_k8s_version="v1.19" state="beta" >}} + +Quando o recurso `CSIMigration` para `azureDisk` está habilitado, todas as operações de plugin do tipo in-tree são redirecionadas para o Driver de Container Storage Interface (CSI) `disk.csi.azure.com`. Para utilizar este recurso, o [Driver CSI Azure Disk](https://github.com/kubernetes-sigs/azuredisk-csi-driver) deve estar instalado no cluster e os recursos `CSIMigration` e `CSIMigrationAzureDisk` devem estar ativados. + +#### Migração CSI azureDisk concluída + +{{< feature-state for_k8s_version="v1.21" state="alpha" >}} + +Para desabilitar o carregamento do plugin de armazenamento `azureDisk` pelo gerenciador de controladores e pelo kubelet, defina a flag `InTreePluginAzureDiskUnregister` como `true`. + +### azureFile {#azurefile} + +O tipo de volume `azureFile` monta um volume de arquivo Microsoft Azure (SMB 2.1 e 3.0) em um pod. + +Para obter mais detalhes, consulte [plugin de volume `azureFile`](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_file/README.md). + +#### Migração de CSI azureFile + +{{< feature-state for_k8s_version="v1.21" state="beta" >}} + +Quando o recurso `CSIMigration` para `azureFile` está habilitado, todas as operações de plugin do tipo in-tree são redirecionadas para o Driver de Container Storage Interface (CSI) `file.csi.azure.com`. Para utilizar este recurso, o [Driver CSI do Azure File](https://github.com/kubernetes-sigs/azurefile-csi-driver) deve estar instalado no cluster e as [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) `CSIMigration` e `CSIMigrationAzureFile` devem estar habilitadas. + +O driver CSI do Azure File não oferece suporte ao uso do mesmo volume por fsgroups diferentes. Portanto, se a migração de CSI do azureFile estiver habilitada, o uso do mesmo volume por fsgroups diferentes não será suportado. + +#### Migração do CSI azureFile concluída + +{{< feature-state for_k8s_version="v1.21" state="alpha" >}} + +Para desabilitar o carregamento do plugin de armazenamento `azureFile` pelo gerenciador de controladores e pelo kubelet, defina a flag `InTreePluginAzureFileUnregister` como `true`. + +### cephfs + +Um volume `cephfs` permite que um volume CephFS existente seja montado no seu Pod. 
Ao contrário do `emptyDir` que é apagado quando um pod é removido, o conteúdo de um volume `cephfs` é preservado e o volume é simplesmente desmontado. Isto significa que um volume `cephfs` pode ser previamente populado com dados e que os dados podem ser compartilhados entre os Pods. O volume `cephfs` pode ser montado por vários gravadores simultaneamente. + +{{< note >}} Você deve ter seu próprio servidor Ceph funcionando com o compartilhamento acessível antes de poder utilizá-lo. {{< /note >}} + +Consulte o [exemplo CephFS](https://github.com/kubernetes/examples/tree/master/volumes/cephfs/) para mais detalhes. + +### cinder + +{{< note >}} O Kubernetes deve ser configurado com o provedor de nuvem OpenStack. {{< /note >}} + +O tipo de volume `cinder` é utilizado para montar o volume do OpenStack Cinder no seu pod. + +#### Exemplo de configuração de volume Cinder + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-cinder +spec: + containers: + - image: k8s.gcr.io/test-webserver + name: test-cinder-container + volumeMounts: + - mountPath: /test-cinder + name: test-volume + volumes: + - name: test-volume + # Esse volume OpenStack já deve existir. + cinder: + volumeID: "" + fsType: ext4 +``` + +#### Migração de CSI OpenStack + +{{< feature-state for_k8s_version="v1.21" state="beta" >}} + +O recurso `CSIMigration` para o Cinder é ativado por padrão no Kubernetes 1.21. Ele redireciona todas as operações de plugin do tipo in-tree para o Driver de Container Storage Interface (CSI) `cinder.csi.openstack.org`. O [Driver CSI OpenStack Cinder](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) tem de estar instalado no cluster. Você pode desativar a migração Cinder CSI para o seu cluster definindo a [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `CSIMigrationOpenStack` como `false`. Se você desativar o recurso `CSIMigrationOpenStack`, o plugin de volume in-tree do Cinder assume a responsabilidade por todos os aspectos do gerenciamento de armazenamento de volume do Cinder. + +### configMap + +Um [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) oferece uma forma de injetar dados de configuração em Pods. Os dados armazenados em um ConfigMap podem ser referenciados em um volume de tipo `configMap` e depois consumidos por aplicações conteinerizadas executadas em um pod. + +Ao referenciar um ConfigMap, você informa o nome do ConfigMap no volume. Pode personalizar o caminho utilizado para uma entrada específica no ConfigMap. A seguinte configuração mostra como montar o ConfigMap `log-config` em um Pod chamado `configmap-pod`: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: configmap-pod +spec: + containers: + - name: test + image: busybox:1.28 + volumeMounts: + - name: config-vol + mountPath: /etc/config + volumes: + - name: config-vol + configMap: + name: log-config + items: + - key: log_level + path: log_level +``` + +O ConfigMap `log-config` é montado como um volume e todos os conteúdos armazenados em sua entrada `log_level` são montados no Pod através do caminho `/etc/config/log_level`. Observe que esse caminho é derivado do `mountPath` do volume e do `path` configurado com `log_level`. + +{{< note >}} + +* É preciso criar um [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) antes de usá-lo. 
+ +* Um contêiner que utiliza ConfigMap através de um ponto de montagem com a propriedade [`subPath`](#using-subpath) não receberá atualizações deste ConfigMap. + +* Os dados de texto são expostos como arquivos utilizando a codificação de caracteres UTF-8. Para outras codificações de caracteres, use `binaryData`. {{< /note >}} + +### downwardAPI {#downwardapi} + +Um volume `downwardAPI` disponibiliza dados da downward API para as aplicações. Ele monta um diretório e grava os dados solicitados em arquivos de texto sem formatação. + +{{< note >}} Um contêiner que utiliza downward API através de um ponto de montagem com a propriedade [`subPath`](#using-subpath) não receberá atualizações desta downward API. {{< /note >}} + +Consulte [o exemplo de downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) para obter mais detalhes. + +### emptyDir {#emptydir} + +Um volume `emptyDir` é criado pela primeira vez quando um Pod é atribuído a um nó e existe enquanto esse Pod estiver sendo executado nesse nó. Como o nome diz, o volume `emptyDir` está inicialmente vazio. Todos os contêineres no Pod podem ler e gravar os mesmos arquivos no volume `emptyDir`, embora esse volume possa ser montado no mesmo caminho ou em caminhos diferentes em cada contêiner. Quando um Pod é removido de um nó por qualquer motivo, os dados no `emptyDir` são eliminados permanentemente. + +{{< note >}} A falha de um contêiner *não* remove um Pod de um nó. Os dados em um volume `emptyDir` são mantidos em caso de falha do contêiner. {{< /note >}} + +Alguns usos para um `emptyDir` são: + +* espaço temporário, como para uma merge sort baseado em disco +* ponto de verificação de um processamento longo para recuperação de falhas +* manter arquivos que um contêiner gerenciador de conteúdo busca enquanto um contêiner de webserver entrega os dados + +Dependendo do seu ambiente, os volumes `emptyDir` são armazenados em qualquer mídia que componha o nó, como disco ou SSD, ou armazenamento de rede. No entanto, se você definir o campo `emptyDir.medium` como `"Memory"`, o Kubernetes monta um tmpfs (sistema de arquivos com suporte de RAM) para você. Embora o tmpfs seja muito rápido, tenha em atenção que, ao contrário dos discos, o tmpfs é limpo na reinicialização do nó e quaisquer arquivos que grave consomem o limite de memória do seu contêiner. + +{{< note >}} Se a [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `SizeMemoryBackedVolumes` estiver habilitada, é possível especificar um tamanho para volumes mantidos em memória. Se nenhum tamanho for especificado, os volumes mantidos em memória são dimensionados para 50% da memória em um host Linux. {{< /note>}} + +#### Exemplo de configuração emptyDir + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-pd +spec: + containers: + - image: k8s.gcr.io/test-webserver + name: test-container + volumeMounts: + - mountPath: /cache + name: cache-volume + volumes: + - name: cache-volume + emptyDir: {} +``` + +### fc (fibre channel) {#fc} + +Um tipo de volume `fc` permite que um volume de armazenamento de fibre channel existente seja montado em um Pod. Você pode especificar um ou vários WWNs usando o parâmetro `targetWWNs` em sua configuração de volume. Se forem especificados vários WWNs, o targetWWNs espera que esses WWNs sejam de conexões multipath. + +{{< note >}} Para que os hosts Kubernetes possam acessá-los, é necessário configurar o zoneamento FC SAN para alocar e mascarar essas LUNs (volumes) para os WWNs de destino. 
{{< /note >}} + +Consulte [o exemplo de fibre channel](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel) para obter mais detalhes. + +### flocker (descontinuado) {#flocker} + +[Flocker](https://github.com/ClusterHQ/flocker) é um gerenciador de volumes de dados de contêineres em cluster de código aberto. O Flocker oferece gerenciamento e orquestração de volumes de dados suportados por uma variedade de backends de armazenamento. + +Um volume `flocker` permite que um conjunto de dados Flocker seja montado em um Pod. Se o conjunto de dados ainda não existir no Flocker, ele precisará ser criado primeiro com o CLI do Flocker ou usando a API do Flocker. Se o conjunto de dados já existir, ele será anexado pelo Flocker ao nó que o pod está escalonado. Isto significa que os dados podem ser compartilhados entre os Pods, conforme necessário. + +{{< note >}} Antes de poder utilizá-lo, é necessário ter a sua própria instalação do Flocker em execução. {{< /note >}} + +Consulte [exemplo do Flocker](https://github.com/kubernetes/examples/tree/master/staging/volumes/flocker) para obter mais detalhes. + +### gcePersistentDisk + +Um volume `gcePersistentDisk` monta um [disco persistente](https://cloud.google.com/compute/docs/disks) (PD) do Google Compute Engine (GCE) no seu Pod. Ao contrário do `emptyDir` que é apagado quando um pod é removido, o conteúdo de um PD é preservado e o volume é simplesmente desmontado. Isto significa que um PD pode ser previamente populado com dados e que os dados podem ser compartilhados entre os Pods. + +{{< note >}} Você dever criar um PD utilizando `gcloud`, ou via GCE API ou via UI antes de poder utilizá-lo. {{< /note >}} + +Existem algumas restrições ao utilizar um `gcePersistentDisk`: + +* Os nós nos quais os Pods estão sendo executados devem ser VMs GCE +* Essas VMs precisam estar no mesmo projeto e zona GCE que o disco persistente + +Uma característica do disco persistente GCE é o acesso simultâneo somente leitura a um disco persistente. Um volume `gcePersistentDisk` permite que vários consumidores montem simultaneamente um disco persistente como somente leitura. Isto significa que é possível alimentar previamente um PD com o seu conjunto de dados e, em seguida, disponibilizá-lo em paralelo a quantos Pods necessitar. Infelizmente, os PDs só podem ser montados por um único consumidor no modo de leitura e escrita. Não são permitidos gravadores simultâneos. + +O uso de um disco persistente GCE com um Pod controlado por um ReplicaSet falhará, a menos que o PD seja somente leitura ou a contagem de réplica seja 0 ou 1. + +#### Criando um disco persistente GCE {#gce-create-persistent-disk} + +Antes de poder utilizar um disco persistente GCE com um Pod, é necessário criá-lo. + +```shell +gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk +``` + +#### Exemplo de configuração de disco persistente GCE + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-pd +spec: + containers: + - image: k8s.gcr.io/test-webserver + name: test-container + volumeMounts: + - mountPath: /test-pd + name: test-volume + volumes: + - name: test-volume + # Esse Disco Persistente (PD) GCE já deve existir. + gcePersistentDisk: + pdName: my-data-disk + fsType: ext4 +``` + +#### Discos persistentes regionais + +O recurso de [Discos persistentes regionais](https://cloud.google.com/compute/docs/disks/#repds) permite a criação de discos persistentes que estão disponíveis em duas zonas dentro da mesma região. 
Para usar esse recurso, o volume deve ser provisionado como PersistentVolume; referenciar o volume diretamente a partir de um pod não é uma configuração suportada. + +#### Provisionar manualmente um PersistentVolume PD Regional + +O provisionamento dinâmico é possível usando [uma StorageClass para GCE PD](/docs/concepts/storage/storage-classes/#gce). Antes de criar um PersistentVolume, você deve criar o disco persistente: + +```shell +gcloud compute disks create --size=500GB my-data-disk \ + --region us-central1 \ + --replica-zones us-central1-a,us-central1-b +``` + +#### Exemplo de configuração de disco persistente regional + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: test-volume +spec: + capacity: + storage: 400Gi + accessModes: + - ReadWriteOnce + gcePersistentDisk: + pdName: my-data-disk + fsType: ext4 + nodeAffinity: + required: + nodeSelectorTerms: + - matchExpressions: + # failure-domain.beta.kubernetes.io/zone deve ser usado para versões anteriores à 1.21 + - key: topology.kubernetes.io/zone + operator: In + values: + - us-central1-a + - us-central1-b +``` + +#### Migração do CSI GCE + +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +Quando o recurso `CSIMigration` para o GCE PD é habilitado, todas as operações do plugin in-tree existente são redirecionadas para o Driver de Container Storage Interface (CSI) `pd.csi.storage.gke.io`. Para utilizar este recurso, o [Driver CSI GCE PD](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) deve ser instalado no cluster e os recursos beta `CSIMigration` e `CSIMigrationGCE` devem estar habilitados. + +#### Migração de CSI GCE concluída + +{{< feature-state for_k8s_version="v1.21" state="alpha" >}} + +Para desabilitar o carregamento do plugin de armazenamento `gcePersistentDisk` pelo gerenciador de controladores e pelo kubelet, defina a flag `InTreePluginGCEUnregister` como `true`. + +### gitRepo (descontinuado) {#gitrepo} + +{{< warning >}}O tipo de volume `gitRepo` foi descontinuado. Para provisionar um contêiner com um repositório git, monte um [EmptyDir](#emptydir) em um InitContainer que clone o repositório usando git, depois monte [o EmptyDir](#emptydir) no contêiner do Pod. {{< /warning >}} + +Um volume `gitRepo` é um exemplo de um plugin de volume. Este plugin monta um diretório vazio e clona um repositório git neste diretório para que seu Pod utilize. + +Aqui está um exemplo de um volume `gitRepo`: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: server +spec: + containers: + - image: nginx + name: nginx + volumeMounts: + - mountPath: /mypath + name: git-volume + volumes: + - name: git-volume + gitRepo: + repository: "git@somewhere:me/my-git-repository.git" + revision: "22f1d8406d464b0c0874075539c1f2e96c253775" +``` + +### glusterfs + +Um volume `glusterfs` permite que um volume [Glusterfs](https://www.gluster.org) (um sistema de arquivos em rede de código aberto) seja montado no seu Pod. Ao contrário do `emptyDir` que é apagado quando um Pod é removido, o conteúdo de um volume `glusterfs` é preservado e o volume é simplesmente desmontado. Isto significa que um volume glusterfs pode ser previamente populado com dados e que os dados podem ser compartilhados entre Pods. O GlusterFS pode ser montado para escrita por vários pods simultaneamente. + +{{< note >}} Para poder utilizá-lo, é necessário ter a sua própria instalação do GlusterFS em execução. 
{{< /note >}} + +Consulte o [exemplo do GlusterFS](https://github.com/kubernetes/examples/tree/master/volumes/glusterfs) para obter mais detalhes. + +### hostPath {#hostpath} + +{{< warning >}} Os volumes HostPath apresentam muitos riscos de segurança e é uma prática recomendada evitar o uso de HostPaths quando possível. Quando um volume HostPath precisa ser usado, ele deve ser definido com escopo apenas para o arquivo ou diretório necessário e montado como ReadOnly. + +Se você restringir o acesso do HostPath a diretórios específicos através da AdmissionPolicy, a propriedade `volumeMounts` DEVE obrigatoriamente usar pontos de montagem `readOnly` para que a política seja eficaz. {{< /warning >}} + +Um volume `hostPath` monta um arquivo ou diretório do sistema de arquivos do nó do host em seu Pod. Isto não é algo de que a maioria dos Pods irá precisar, mas oferece uma poderosa alternativa de escape para algumas aplicações. + +Por exemplo, alguns usos para um `hostPath` são: + +* Executar um contêiner que necessita de acesso aos documentos internos do Docker; utilizar um `hostPath` apontando para `/var/lib/docker` +* Executando o cAdvisor em um contêiner; use um `hostPath` apontando para `/sys` +* Permitir que um Pod especifique se um dado `hostPath` deve existir antes de o Pod ser executado, se deve ser criado e como deve existir + +Além da propriedade obrigatória `path` , você pode opcionalmente definir um `type` para um volume `hostPath`. + +Os valores suportados para o campo `type` são: + +| Valor| Comportamento| +|:----------|:----------| +| | A string vazia (padrão) é para compatibilidade com versões anteriores, o que significa que nenhuma verificação será executada antes de montar o volume hostPath.| +| `DirectoryOrCreate`| Se nada existir no caminho indicado, um diretório vazio será criado lá, conforme necessário, com permissão definida para 0755, tendo o mesmo grupo e propriedade com a Kubelet.| +| `Directory`| Um diretório deve existir no caminho indicado| +| `FileOrCreate`| Se não houver nada no caminho indicado, um arquivo vazio será criado lá, conforme necessário, com permissão definida para 0644, tendo o mesmo grupo e propriedade com Kubelet.| +| `File`| Um arquivo deve existir no caminho indicado| +| `Socket`| Um socket UNIX deve existir no caminho indicado| +| `CharDevice`| Deve existir um dispositivo de caracteres no caminho indicado| +| `BlockDevice`| Deve existir um dispositivo de bloco no caminho indicado| + +Tenha cuidado ao utilizar este tipo de volume, porque: + +* Os HostPaths podem expor as credenciais privilegiadas do sistema (como para o Kubelet) ou APIs privilegiadas (como o container runtime socket), que podem ser usadas para o explorar vulnerabilidades de escape do contêiner ou para atacar outras partes do cluster. +* Os Pods com configuração idêntica (como criado a partir de um PodTemplate) podem se comportar de forma diferente em nós diferentes devido a arquivos diferentes nos nós +* Os arquivos ou diretórios criados nos hosts subjacentes são graváveis apenas pelo root. 
Você precisa executar seu processo como root em um [contêiner privilegiado](/docs/tasks/configure-pod-container/security-context/) ou modificar as permissões de arquivo no host para poder gravar em um volume `hostPath` + +#### Exemplo de configuração do hostPath + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-pd +spec: + containers: + - image: k8s.gcr.io/test-webserver + name: test-container + volumeMounts: + - mountPath: /test-pd + name: test-volume + volumes: + - name: test-volume + hostPath: + # localização do diretório no host + path: /data + # este campo é opcional + type: Directory +``` + +{{< caution >}} O modo `FileOrCreate` não cria o diretório onde ficará arquivo. Se o caminho de diretório do arquivo montado não existir, o pod não será iniciado. Para garantir que esse modo funcione, você pode tentar montar diretórios e arquivos separadamente, como mostrado em [configuração `FileOrCreate`](#hostpath-fileorcreate-example). {{< /caution >}} + +#### Exemplo de configuração FileOrCreate do hostPath {#hostpath-fileorcreate-example} + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-webserver +spec: + containers: + - name: test-webserver + image: k8s.gcr.io/test-webserver:latest + volumeMounts: + - mountPath: /var/local/aaa + name: mydir + - mountPath: /var/local/aaa/1.txt + name: myfile + volumes: + - name: mydir + hostPath: + # Certifique-se de que o diretório foi criado. + path: /var/local/aaa + type: DirectoryOrCreate + - name: myfile + hostPath: + path: /var/local/aaa/1.txt + type: FileOrCreate +``` + +### iscsi + +Um volume `iscsi` permite que um volume iSCSI (SCSI sobre IP) existente seja montado no seu Pod. Ao contrário do `emptyDir` que é apagado quando um Pod é removido, o conteúdo de um volume `iscsi` é preservado e o volume é simplesmente desmontado. Isto significa que um volume iscsi pode ser previamente populado com dados e que os dados podem ser compartilhados entre os Pods. + +{{< note >}} Você deve ter seu próprio servidor iSCSI rodando com o volume criado antes de poder utilizá-lo. {{< /note >}} + +Uma característica do iSCSI é que ele pode ser montado como somente leitura por vários consumidores simultaneamente. Isto significa que um volume pode ser previamente populado com seu conjunto de dados e, em seguida, ser disponibilizado em paralelo para tantos Pods quanto necessitar. Infelizmente, os volumes iSCSI só podem ser montados por um único consumidor no modo de leitura-escrita. Não são permitidos gravadores simultâneos. + +Consulte o [exemplo iSCSI](https://github.com/kubernetes/examples/tree/master/volumes/iscsi) para obter mais detalhes. + +### local + +Um volume `local` representa um dispositivo de armazenamento local montado, como um disco, partição ou diretório. + +Os volumes locais só podem ser usados como um PersistentVolume criado estaticamente. O provisionamento dinâmico não é suportado. + +Em comparação com volumes `hostPath`, os volumes `local` são usados de forma durável e portátil, sem escalonamento manual dos Pods para os nós. O sistema está ciente das restrições de nós do volume, observando a afinidade do nó com o PersistentVolume. + +No entanto, os volumes `local` estão sujeitos à disponibilidade do nó que o comporta e não são adequados para todas as aplicações. Se um nó não está íntegro, então o volume `local` torna-se inacessível pelo pod. O pod que utiliza este volume não consegue ser executado. 
Os aplicativos que usam volumes `local` devem ser capazes de tolerar essa disponibilidade reduzida, bem como uma possível perda de dados, dependendo das características de durabilidade do disco subjacente. + +O exemplo a seguir mostra um PersistentVolume usando um volume `local` e `nodeAffinity`: + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: example-pv +spec: + capacity: + storage: 100Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Delete + storageClassName: local-storage + local: + path: /mnt/disks/ssd1 + nodeAffinity: + required: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/hostname + operator: In + values: + - example-node +``` + +É preciso definir a propriedade `nodeAffinity` do PersistentVolume ao utilizar volumes `local`. O escalonador do Kubernetes usa a propriedade `nodeAffinity` do PersistentVolume para escalonar esses pods para o nó correto. + +A propriedade `volumeMode` do PersistentVolume pode ser definida como "Block" (ao invés do valor padrão "Filesystem") para expor o volume local como um dispositivo de bloco bruto. + +Ao usar volumes locais, é recomendável criar uma StorageClass com a propriedade `volumeBindingMode` definida como `WaitForFirstConsumer`. Para obter mais detalhes, consulte o exemplo local [StorageClass](/docs/concepts/storage/storage-classes/#local). A postergação da vinculação do volume garante que a decisão de vinculação da PersistentVolumeClaim também será avaliada com quaisquer outras restrições de nós que o Pod possa ter, tais como requisitos de recursos de nós, seletores de nós, afinidade do Pod e antiafinidade do Pod. + +Um provisionador estático externo pode ser executado separadamente para uma melhor gestão do ciclo de vida do volume local. Observe que este provisionador ainda não suporta o provisionamento dinâmico. Para um exemplo sobre como executar um provisionador local externo, veja o [manual do usuário do provisionador local do volume](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner). + +{{< note >}} O PersistentVolume local exige que o usuário faça limpeza e remoção manual se o provisionador estático externo não for utilizado para gerenciar o ciclo de vida do volume. {{< /note >}} + +### nfs + +Um volume `nfs` permite que um compartilhamento NFS (Network File System) existente seja montado em um Pod. Ao contrário do `emptyDir` que é apagado quando um Pod é removido, o conteúdo de um volume `nfs` é preservado e o volume é simplesmente desmontado. Isto significa que um volume NFS pode ser previamente populado com dados e que os dados podem ser compartilhados entre os Pods. O NFS pode ser montado por vários gravadores simultaneamente. + +{{< note >}} Você deve ter seu próprio servidor NFS rodando com o compartilhamento acessível antes de poder utilizá-lo. {{< /note >}} + +Consulte o [exemplo NFS](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs) para obter mais detalhes. + +### persistentVolumeClaim {#persistentvolumeclaim} + +Um volume `persistentVolumeClaim` é usado para montar um [PersistentVolume](/pt-br/docs/concepts/storage/persistent-volumes/) em um Pod. PersistentVolumeClaims são uma forma de os usuários "solicitarem" armazenamento durável (como um GCE PersistentDisk ou um volume iSCSI) sem conhecerem os detalhes do ambiente de nuvem em particular. + +Consulte as informações sobre [PersistentVolumes](/pt-br/docs/concepts/storage/persistent-volumes/) para obter mais detalhes. 
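+A título de ilustração, um esboço (com nomes hipotéticos) de um Pod que monta um volume a partir de uma PersistentVolumeClaim existente:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-pvc-pod
+spec:
+  containers:
+    - name: app
+      image: nginx
+      volumeMounts:
+        - mountPath: /usr/share/nginx/html
+          name: site-data
+  volumes:
+    - name: site-data
+      persistentVolumeClaim:
+        # A PVC `minha-pvc` já deve existir no mesmo namespace do Pod.
+        claimName: minha-pvc
+```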
+ +### portworxVolume {#portworxvolume} + +Um `portworxVolume` é uma camada de armazenamento em bloco extensível que funciona hiperconvergente com Kubernetes. O [Portworx](https://portworx.com/use-case/kubernetes-storage/) tira as impressões digitais de um armazenamento em um servidor, organiza com base nas capacidades e agrega capacidade em múltiplos servidores. Portworx funciona em máquinas virtuais ou em nós Linux bare-metal. + +Um `portworxVolume` pode ser criado dinamicamente através do Kubernetes ou também pode ser previamente provisionado e referenciado dentro de um Pod. Aqui está um exemplo de um Pod referenciando um volume Portworx pré-provisionado: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-portworx-volume-pod +spec: + containers: + - image: k8s.gcr.io/test-webserver + name: test-container + volumeMounts: + - mountPath: /mnt + name: pxvol + volumes: + - name: pxvol + # Este volume Portworx já deve existir. + portworxVolume: + volumeID: "pxvol" + fsType: "" +``` + +{{< note >}} Certifique-se de ter um PortworxVolume com o nome `pxvol` antes de usá-lo no Pod. {{< /note >}} + +Para obter mais detalhes, consulte os exemplos de [volume Portworx](https://github.com/kubernetes/examples/tree/master/staging/volumes/portworx/README.md) . + +### projetado + +Um volume projetado mapeia várias fontes de volume existentes dentro do mesmo diretório. Para obter mais detalhes, consulte [Volumes projetados](/docs/concepts/storage/projected-volumes/). + +### quobyte (descontinuado) {#quobyte} + +Um Volume `quobyte` permite que um volume [Quobyte](https://www.quobyte.com) existente seja montado no seu Pod. + +{{< note >}} Você deve ter seu próprio Quobyte configurado e funcionando com os volumes criados antes de poder utilizá-lo. {{< /note >}} + +Quobyte oferece suporte para o {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}}. CSI é o plugin recomendado para usar volumes Quobyte dentro de Kubernetes. O projeto GitHub da Quobyte tem [instruções](https://github.com/quobyte/quobyte-csi#quobyte-csi) para implantar o Quobyte usando o CSI, acompanhado de exemplos. + +### rbd + +Um volume `rbd` permite que um volume [Rados Block Device](https://docs.ceph.com/en/latest/rbd/) (RBD) seja montado em seu Pod. Ao contrário do `emptyDir` que é apagado quando um pod é removido, o conteúdo de um volume `rbd` é preservado e o volume é desmontado. Isto significa que um volume RBD pode ser previamente populado com dados e que os dados podem ser compartilhados entre os Pods. + +{{< note >}} Você deve ter uma instalação Ceph em funcionamento antes de poder usar o RBD. {{< /note >}} + +Uma caraterística do RBD é que ele pode ser montado como somente leitura por vários consumidores simultaneamente. Isto significa que um volume pode ser previamente populado com seu conjunto de dados e, em seguida, ser disponibilizado em paralelo para tantos pods quanto necessitar. Infelizmente, os volumes RBD só podem ser montados por um único consumidor no modo de leitura-escrita. Não são permitidos gravadores simultâneos. + +Consulte o [exemplo RBD](https://github.com/kubernetes/examples/tree/master/volumes/rbd) para obter mais detalhes. + +#### Migração de CSI RBD {#rbd-csi-migration} + +{{< feature-state for_k8s_version="v1.23" state="alpha" >}} + +Quando o recurso `CSIMigration` do `RBD` está ativado, redireciona todas as operações do plugin in-tree existente para o driver {{< glossary_tooltip text="CSI" term_id="csi" >}} `rbd.csi.ceph.com`. 
Para utilizar este recurso, o [driver Ceph CSI](https://github.com/ceph/ceph-csi) deve estar instalado no cluster e as [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) `CSIMigration` e `csiMigrationRBD` devem estar habilitadas. + +{{< note >}} + +Como operador do cluster Kubernetes que administra o armazenamento, aqui estão os pré-requisitos que você deve atender antes de tentar a migração para o driver CSI RBD: + +* Você deve instalar o driver Ceph CSI (`rbd.csi.ceph.com`), v3.5.0 ou superior, no cluster Kubernetes. +* Considerando que o campo `clusterID` é um parâmetro necessário para o driver CSI e sua operação , mas o campo in-tree StorageClass tem o parâmetro obrigatório `monitors`, um administrador de armazenamento Kubernetes precisa criar um clusterID baseado no hash dos monitores (ex.:`#echo -n '' | md5sum`) no mapa de configuração do CSI e manter os monitores sob esta configuração de clusterID. +* Além disso, se o valor de `adminId` no Storageclass in-tree for diferente de `admin`, o `adminSecretName` mencionado no Storageclass in-tree tem que ser corrigido com o valor base64 do valor do parâmetro `adminId`, caso contrário esta etapa pode ser ignorada. {{< /note >}} + +### secret + +Um volume `secret` é usado para passar informações sensíveis, tais como senhas, para Pods. Você pode armazenar segredos na API Kubernetes e montá-los como arquivos para serem usados por pods sem necessidade de vinculação direta ao Kubernetes. Volumes `secret` são mantidos pelo tmpfs (um sistema de arquivos com baseado em memória RAM) para que nunca sejam gravados em armazenamento não volátil. + +{{< note >}}Você deve criar um Secret na API Kubernetes antes de poder utilizá-lo. {{< /note >}} + +{{< note >}} Um contêiner que utiliza um Secret como ponto de montagem para a propriedade [`subPath`](#using-subpath) não receberá atualizações deste Secret. {{< /note >}} + +Para obter mais detalhes, consulte [Configurando Secrets](/pt-br/docs/concepts/configuration/secret/). + +### storageOS (descontinuado) {#storageos} + +Um volume `storageos` permite que um volume [StorageOS](https://www.storageos.com) existente seja montado em seu Pod. + +O StorageOS funciona como um contêiner dentro de seu ambiente Kubernetes, tornando o armazenamento local ou anexado acessível a partir de qualquer nó dentro do cluster Kubernetes. Os dados podem ser replicados para a proteção contra falhas do nó. O provisionamento e a compressão podem melhorar a utilização e reduzir os custos. + +Em sua essência, o StorageOS fornece armazenamento em bloco para containers, acessível a partir de um sistema de arquivo. + +O Conteiner StorageOS requer Linux de 64 bits e não possui dependências adicionais. Uma licença para desenvolvedores está disponível gratuitamente. + +{{< caution >}} Você deve executar o container StorageOS em cada nó que deseja acessar os volumes do StorageOS ou que contribuirá com a capacidade de armazenamento para o pool. Para obter instruções de instalação, consulte a [documentação do StorageOS](https://docs.storageos.com). 
{{< /caution >}} + +O exemplo a seguir é uma configuração do Pod com StorageOS: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + labels: + name: redis + role: master + name: test-storageos-redis +spec: + containers: + - name: master + image: kubernetes/redis:v1 + env: + - name: MASTER + value: "true" + ports: + - containerPort: 6379 + volumeMounts: + - mountPath: /redis-master-data + name: redis-data + volumes: + - name: redis-data + storageos: + # O volume `redis-vol01` já deve existir dentro do StorageOS no namespace `default`. + volumeName: redis-vol01 + fsType: ext4 +``` + +Para obter mais informações sobre StorageOS, provisionamento dinâmico e PersistentVolumeClaims, consulte os [exemplos do StorageOS](https://github.com/kubernetes/examples/blob/master/volumes/storageos). + +### vsphereVolume {#vspherevolume} + +{{< note >}} Você deve configurar o Kubernetes vSphere Cloud Provider. Para obter informações sobre a configuração do cloudprovider, consulte o [Guia Introdutório do vSphere](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/). {{< /note >}} + +Um `vsphereVolume` é usado para montar um volume VMDK do vSphere em seu Pod. O conteúdo de um volume é preservado quando é desmontado. Ele suporta sistemas de armazenamento de dados tanto do tipo VMFS quanto do tipo VSAN. + +{{< note >}} Você deve criar o volume do VMDK vSphere usando um dos métodos a seguir antes de usar com um Pod. {{< /note >}} + +#### Criar um volume VMDK {#creating-vmdk-volume} + +Escolha um dos seguintes métodos para criar um VMDK. + +{{< tabs name="tabs_volumes" >}} +{{% tab name="Criar usando vmkfstools" %}} +Primeiro acesse o ESX via ssh, depois use o seguinte comando para criar um VMDK: + +```shell +vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk +``` + +{{% /tab %}} +{{% tab name="Criar usando vmware-vdiskmanager" %}} +Utilize o seguinte comando para criar um VMDK: + +```shell +vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk +``` + +{{% /tab %}} + +{{< /tabs >}} + +#### Exemplo de configuração do VMDK no vSphere {#vsphere-vmdk-configuration} + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-vmdk +spec: + containers: + - image: k8s.gcr.io/test-webserver + name: test-container + volumeMounts: + - mountPath: /test-vmdk + name: test-volume + volumes: + - name: test-volume + # This VMDK volume must already exist. + vsphereVolume: + volumePath: "[DatastoreName] volumes/myDisk" + fsType: ext4 +``` + +Para obter mais informações, consulte os exemplos de [volume do vSphere](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere) . + +#### Migração de CSI vSphere {#vsphere-csi-migration} + +{{< feature-state for_k8s_version="v1.19" state="beta" >}} + +Quando o recurso `CSIMigration` do `vsphereVolume` está ativado, redireciona todas as operações do plugin in-tree existente para o driver {{< glossary_tooltip text="CSI" term_id="csi" >}} `csi.vsphere.vmware.com`. Para usar esse recurso, o [driver CSI do vSphere](https://github.com/kubernetes-sigs/vsphere-csi-driver) deve estar instalado no cluster e as [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) `CSIMigration` e `CSIMigrationvSphere` devem estar habilitadas. + +Isso também requer que a versão mínima do vSphere vCenter/ESXi seja 7.0u1 e a versão mínima do hardware seja a VM versão 15. 
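+Um esboço de como essas feature gates poderiam ser habilitadas, assumindo que o kubelet e o kube-controller-manager são configurados por meio de flags de linha de comando (as demais flags foram omitidas):
+
+```shell
+# Exemplo hipotético: habilita a migração CSI do vSphere nos dois componentes.
+kube-controller-manager --feature-gates=CSIMigration=true,CSIMigrationvSphere=true ...
+kubelet --feature-gates=CSIMigration=true,CSIMigrationvSphere=true ...
+```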
+ +{{< note >}} Os seguintes parâmetros da StorageClass do plugin integrado `vsphereVolume` não são suportados pelo driver CSI do vSphere: + +* `diskformat` +* `hostfailurestotolerate` +* `forceprovisioning` +* `cachereservation` +* `diskstripes` +* `objectspacereservation` +* `iopslimit` + +Os volumes existentes criados usando esses parâmetros serão migrados para o driver CSI do vSphere, mas novos volumes criados pelo driver de CSI do vSphere não respeitarão esses parâmetros. {{< /note >}} + +#### Migração do CSI do vSphere foi concluída {#vsphere-csi-migration-complete} + +{{< feature-state for_k8s_version="v1.19" state="beta" >}} + +Para desativar o carregamento do plugin de armazenamento `vsphereVolume` pelo gerenciador de controladores e pelo kubelet, defina a flag `InTreePluginvSphereUnregister` como `true`. Você precisa instalar o driver `csi.vsphere.vmware.com` {{< glossary_tooltip text="CSI" term_id="csi" >}} em todos os nós de processamento. + +#### Migração de driver CSI do Portworx + +{{< feature-state for_k8s_version="v1.23" state="alpha" >}} + +O recurso `CSIMigration` para Portworx foi adicionado, mas desativado por padrão no Kubernetes 1.23 visto que está no estado alfa. Ele redireciona todas as operações de plugin do tipo in-tree para o Driver de Container Storage Interface (CSI) `pxd.portworx.com`. [O driver CSI Portworx](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/csi/) deve ser instalado no cluster. Para ativar o recurso, defina `CSIMigrationPortworx=true` no kube-controller-manager e no kubelet. + +## Utilizando subPath {#using-subpath} + +Às vezes, é útil compartilhar um volume para múltiplos usos em um único pod. A propriedade `volumeMounts.subPath` especifica um subcaminho dentro do volume referenciado em vez de sua raiz. + +O exemplo a seguir mostra como configurar um Pod com um ambiente LAMP (Linux, Apache, MySQL e PHP) usando um único volume compartilhado. Este exemplo de configuração `subPath` não é recomendado para uso em produção. + +O código e os ativos da aplicação PHP mapeiam para a pasta do volume `html` e o banco de dados MySQL é armazenado na pasta do volume `mysql`. Por exemplo: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: my-lamp-site +spec: + containers: + - name: mysql + image: mysql + env: + - name: MYSQL_ROOT_PASSWORD + value: "rootpasswd" + volumeMounts: + - mountPath: /var/lib/mysql + name: site-data + subPath: mysql + - name: php + image: php:7.0-apache + volumeMounts: + - mountPath: /var/www/html + name: site-data + subPath: html + volumes: + - name: site-data + persistentVolumeClaim: + claimName: my-lamp-site-data +``` + +### Usando subPath com variáveis de ambiente expandidas {#using-subpath-expanded-environment} + +{{< feature-state for_k8s_version="v1.17" state="stable" >}} + +Use o campo `subPathExpr` para construir nomes de diretório `subPath` a partir de variáveis de ambiente da downward API. As propriedades `subPath` e `subPathExpr` são mutuamente exclusivas. + +Neste exemplo, um `Pod` usa `subPathExpr` para criar um diretório `pod1` dentro do volume `hostPath` `/var/log/pods`. O volume `hostPath` recebe o nome do Pod a partir da `downwardAPI`. O diretório `/var/log/pods/pod1` do host é montado em `/logs` no contêiner. 
+ +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pod1 +spec: + containers: + - name: container1 + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + image: busybox:1.28 + command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ] + volumeMounts: + - name: workdir1 + mountPath: /logs + # A expansão de variáveis usa parênteses (não chaves). + subPathExpr: $(POD_NAME) + restartPolicy: Never + volumes: + - name: workdir1 + hostPath: + path: /var/log/pods +``` + +## Recursos + +A mídia de armazenamento (como disco ou SSD) de um volume `emptyDir` é determinada por meio do sistema de arquivos que mantém o diretório raiz do kubelet (normalmente `/var/lib/kubelet`). Não há limite para quanto espaço um volume `emptyDir` ou `hostPath` pode consumir, e não há isolamento entre contêineres ou entre pods. + +Para saber mais sobre como solicitar espaço usando uma especificação de recursos, consulte [como gerenciar recursos](/pt-br/docs/concepts/configuration/manage-resources-containers/). + +## Plugins de volume out-of-tree + +Os plugins de volume out-of-tree incluem o {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} (CSI) e também o FlexVolume (que foi descontinuado). Esses plugins permitem que os fornecedores de armazenamento criem plugins de armazenamento personalizados sem adicionar seu código-fonte do plugin ao repositório Kubernetes. + +Anteriormente, todos os plugins de volume eram "in-tree". Os plugins "in-tree" eram construídos, vinculados, compilados e distribuídos com o código principal dos binários do Kubernetes. Isto significava que a adição de um novo sistema de armazenamento ao Kubernetes (um plugin de volume) exigia uma validação do código no repositório central de código Kubernetes. + +Tanto o CSI quanto o FlexVolume permitem que os plugins de volume sejam desenvolvidos independentemente da base de código Kubernetes e implantados (instalados) nos clusters Kubernetes como extensões. + +Para fornecedores de armazenamento que procuram criar um plugin de volume out-of-tree, consulte as [Perguntas mais frequentes sobre plugins de volume](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md). + +### csi + +O [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md) (CSI) define uma interface padrão para sistemas de orquestração de contêineres (como Kubernetes) para expor sistemas de armazenamento arbitrários a suas cargas de trabalho de contêiner. + +Leia a [proposta de design CSI](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) para obter mais informações. + +{{< note >}} O suporte para as versões 0.2 e 0.3 da especificação CSI foi descontinuado no Kubernetes v1.13 e será removido em uma versão futura. {{< /note >}} + +{{< note >}} Os controladores CSI podem não ser compatíveis em todas as versões do Kubernetes. Consulte a documentação específica do driver CSI para ver as etapas de implantação suportadas para cada versão do Kubernetes e uma matriz de compatibilidade. {{< /note >}} + +Uma vez que um driver de volume compatível com CSI seja implantado em um cluster Kubernetes, os usuários podem usar o tipo de volume `csi` para anexar ou montar os volumes expostos pelo driver CSI. 
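+A título de ilustração, um esboço de PersistentVolume que usa um driver CSI (o nome do driver e o identificador do volume são hipotéticos); os campos disponíveis são descritos mais adiante:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: exemplo-pv-csi
+spec:
+  capacity:
+    storage: 10Gi
+  accessModes:
+    - ReadWriteOnce
+  csi:
+    # Nome hipotético de um driver CSI instalado no cluster.
+    driver: csi.exemplo.com
+    # Identificador único do volume, conforme retornado pelo driver.
+    volumeHandle: vol-exemplo-0001
+    fsType: ext4
+```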
+ +Um volume `csi` pode ser utilizado em um Pod de três formas diferentes: + +* Através de uma referência a [PersistentVolumeClaim](#persistentvolumeclaim) +* com um [volume efêmero genérico](/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volume) (recurso alfa) +* com [volume efêmero de CSI](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volume) se o driver suportar esse (recurso beta) + +Os seguintes campos estão disponíveis para administradores de armazenamento configurarem um volume persistente de CSI: + +* `driver`: Um valor do tipo string que especifica o nome do driver de volume a ser usado. Este valor deve corresponder ao valor retornado no `GetPluginInfoResponse` pelo driver CSI, conforme definido na [especificação CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md#getplugininfo). Ele é usado pelo Kubernetes para identificar qual driver CSI chamar, e pelos componentes do driver CSI para identificar quais objetos PV pertencem ao driver CSI. +* `volumeHandle`: Um valor do tipo string que identifica exclusivamente o volume. Este valor deve corresponder ao valor retornado no campo `volume.id` em `CreateVolumeResponse` pelo driver CSI, conforme definido na [especificação CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume). O valor é passado como `volume_id` em todas as chamadas para o driver de volume CSI quando se faz referência ao volume. +* `readOnly`: Um valor booleano opcional que indica se o volume deve ser "ControllerPublished" (anexado) como somente leitura. O valor padrão é false. Este valor é passado para o driver CSI através do campo `readonly` em `ControllerPublishVolumeRequest`. +* `fsType`: Se o `VolumeMode` do PV for `Filesystem` então este campo pode ser usado para especificar o sistema de arquivos que deve ser usado para montar o volume. Se o volume não tiver sido formatado e a formatação for suportada, este valor será utilizado para formatar o volume. Este valor é passado para o driver CSI através do campo `VolumeCapability` nas propriedades `ControllerPublishVolumeRequest`, `NodeStageVolumeRequest` e `NodePublishVolumeRequest`. +* `volumeAttributes`: Um mapa de valores do tipo string para string que especifica propriedades estáticas de um volume. Este mapa deve corresponder ao mapa retornado no campo `volume.attributes` do `CreateVolumeResponse` pelo driver CSI, conforme definido na [especificação CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume). O mapa é passado para o driver CSI através do campo `volume_context` nas propriedades `ControllerPublishVolumeRequest`, `NodeStageVolumeRequest`, e `NodePublishVolumeRequest`. +* `controllerPublishSecretRef`: Uma referência ao objeto Secret que contém informações confidenciais para passar ao driver CSI para completar as chamadas CSI `ControllerPublishVolume` e `ControllerUnpublishVolume`. Este campo é opcional e pode estar vazio se não for necessário nenhum segredo. Se o Secret contiver mais de um segredo, todos os segredos serão passados. +* `nodeStageSecretRef`: Uma referência ao objeto Secret que contém informações confidenciais para passar ao driver de CSI para completar a chamada de CSI do `NodeStageVolume`. Este campo é opcional e pode estar vazio se não for necessário nenhum segredo. Se o Secret contiver mais de um segredo, todos os segredos serão passados. 
+* `nodePublishSecretRef`: Uma referência ao objeto Secret que contém informações confidenciais para passar ao driver de CSI para completar a chamada de CSI do `NodePublishVolume`. Este campo é opcional e pode estar vazio se não for necessário nenhum segredo. Se o objeto Secret contiver mais de um segredo, todos os segredos serão passados. + +#### Suporte CSI para volume de bloco bruto + +{{< feature-state for_k8s_version="v1.18" state="stable" >}} + +Os fornecedores com drivers CSI externos podem implementar o suporte de volume de blocos brutos nas cargas de trabalho Kubernetes. + +Você pode configurar o [PersistentVolume/PersistentVolumeClaim com suporte de volume de bloco bruto](/pt-br/docs/concepts/storage/persistent-volumes/#suporte-a-volume-de-bloco-bruto), como habitualmente, sem quaisquer alterações específicas de CSI. + +#### Volumes efêmeros de CSI + +{{< feature-state for_k8s_version="v1.16" state="beta" >}} + +É possível configurar diretamente volumes CSI dentro da especificação do Pod. Os volumes especificados desta forma são efêmeros e não persistem nas reinicializações do pod. Consulte [Volumes efêmeros](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volume) para obter mais informações. + +Para obter mais informações sobre como desenvolver um driver CSI, consulte a [documentação kubernetes-csi](https://kubernetes-csi.github.io/docs/). + +#### Migrando para drivers CSI a partir de plugins in-tree + +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +Quando o recurso `CSIMigration` está habilitado, ele direciona operações relacionadas a plugins in-tree existentes para plugins CSI correspondentes (que devem ser instalados e configurados). Como resultado, os operadores não precisam fazer nenhuma alteração de configuração para Storage Classes, PersistentVolumes ou PersistentVolumeClaims existentes (referindo-se aos plugins in-tree) ao fazer a transição para um driver CSI que substitui um plugin in-tree. + +As operações e características que são suportadas incluem: provisionamento/exclusão, anexação/remoção, montagem/desmontagem e redimensionamento de volumes. + +Plugins in-tree que suportam `CSIMigration` e têm um driver CSI correspondente implementado são listados [em tipos de volumes](#volume-types). + +### flexVolume + +{{< feature-state for_k8s_version="v1.23" state="deprecated" >}} + +O FlexVolume é uma interface de plugin out-of-tree que usa um modelo baseado em execução para fazer interface com drivers de armazenamento. Os binários do driver FlexVolume devem ser instalados em um caminho de plugin de volume predefinido em cada nó e, em alguns casos, também nos nós da camada de gerenciamento. + +Os Pods interagem com os drivers do FlexVolume através do plugin de volume in-tree `flexVolume`. Para obter mais detalhes, consulte o documento [README](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md#readme) do FlexVolume. + +{{< note >}} O FlexVolume foi descontinuado. Usar um driver CSI out-of-tree é a maneira recomendada de integrar o armazenamento externo com Kubernetes. + +Os mantenedores do driver FlexVolume devem implementar um driver CSI e ajudar a migrar usuários de drivers FlexVolume para CSI. Os usuários do FlexVolume devem mover suas cargas de trabalho para usar o driver CSI equivalente. {{< /note >}} + +## Propagação de montagem + +A propagação de montagem permite compartilhar volumes montados por um contêiner para outros contêineres no mesmo pod, ou mesmo para outros pods no mesmo nó.
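A minimal, hypothetical sketch of where this setting lives in a Pod manifest, before the individual values are described below. The image, paths and the chosen `HostToContainer` value are illustrative assumptions only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-example   # placeholder name
spec:
  containers:
    - name: main
      image: busybox:1.28
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: host-data
          mountPath: /mnt/host-data
          # mountPropagation is set per volumeMount, not per volume
          mountPropagation: HostToContainer
  volumes:
    - name: host-data
      hostPath:
        path: /mnt/data
```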
+ +A propagação de montagem de um volume é controlada pelo campo `mountPropagation` na propriedade `Container.volumeMounts`. Os seus valores são: + +* `None` - Este volume de montagem não receberá do host nenhuma montagem posterior que seja montada para este volume ou qualquer um de seus subdiretórios. De forma semelhante, nenhum ponto de montagem criado pelo contêiner será visível no host. Este é o modo padrão. + + Este modo é igual à propagação de montagem `private` conforme descrito na [documentação do kernel Linux](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt) + +* `HostToContainer` - Este volume de montagem receberá todas as montagens posteriores que forem montadas para este volume ou qualquer um de seus subdiretórios. + + Em outras palavras, se o host montar qualquer coisa dentro do volume de montagem, o contêiner o visualizará montado ali. + + Da mesma forma, se qualquer Pod com propagação de montagem `Bidirectional` para o mesmo volume montar qualquer coisa lá, o contêiner com propagação de montagem `HostToContainer` o reconhecerá. + + Este modo é igual à propagação de montagem `rslave` conforme descrito na [documentação do kernel Linux](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt) + +* `Bidirectional` - Esta montagem de volume se comporta da mesma forma que a montagem de volume `HostToContainer`. Além disso, todas as montagens de volume criadas pelo contêiner serão propagadas de volta ao host e a todos os contêineres de todos os pods que utilizam o mesmo volume. + + Um caso de uso típico para este modo é um Pod com um driver FlexVolume ou CSI ou um Pod que precisa montar algo no host utilizando um volume `hostPath`. + + Este modo é igual à propagação de montagem `rshared` conforme descrito na [documentação do kernel Linux](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt) + + {{< warning >}} A propagação de montagem `Bidirectional` pode ser perigosa. Ela pode danificar o sistema operacional do host e, portanto, ela só é permitida em contêineres privilegiados. A familiaridade com o comportamento do kernel Linux é fortemente recomendada. Além disso, quaisquer montagens de volume criadas por contêineres em pods devem ser destruídas (desmontadas) pelos contêineres ao final. {{< /warning >}} + +### Configuração + +Antes que a propagação da montagem possa funcionar corretamente em algumas distribuições (CoreOS, RedHat/Centos, Ubuntu), o compartilhamento de montagem deve ser configurado corretamente no Docker como mostrado abaixo. + +Edite seu arquivo de serviços `systemd` do Docker. Configure a propriedade `MountFlags` da seguinte forma: + +```shell +MountFlags=shared +``` + +Ou, se a propriedade `MountFlags=slave` existir, remova-a. Em seguida, reinicie o daemon Docker: + +```shell +sudo systemctl daemon-reload +sudo systemctl restart docker +``` + +## {{% heading "whatsnext" %}} + +Siga um exemplo de [implantação do WordPress e MySQL com volumes persistentes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/).
diff --git a/content/pt-br/docs/concepts/workloads/controllers/cron-jobs.md b/content/pt-br/docs/concepts/workloads/controllers/cron-jobs.md index 19c7cd8604fe0..564b626162931 100644 --- a/content/pt-br/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/pt-br/docs/concepts/workloads/controllers/cron-jobs.md @@ -20,7 +20,7 @@ Se a camada de gerenciamento do cluster executa o kube-controller-manager em Pod {{< /caution >}} -Ao criar o manifesto para um objeto CronJob, verifique se o nome que você forneceu é um [nome de subdomínio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. +Ao criar o manifesto para um objeto CronJob, verifique se o nome que você forneceu é um [nome de subdomínio DNS](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. O nome não pode ter mais que 52 caracteres. Esta limitação existe porque o controlador do CronJob adicionará automaticamente 11 caracteres ao final do nome escolhido para a tarefa, e o tamanho máximo de um nome de tarefa não pode ultrapassar 63 caracteres. diff --git a/content/pt-br/docs/reference/glossary/application-developer.md b/content/pt-br/docs/reference/glossary/application-developer.md new file mode 100644 index 0000000000000..037a9413ad78d --- /dev/null +++ b/content/pt-br/docs/reference/glossary/application-developer.md @@ -0,0 +1,17 @@ +--- +title: Desenvolvedor de Aplicativos +id: application-developer +date: 2018-04-12 +full_link: +short_description: > + Uma pessoa que escreve um aplicativo que é executado em um cluster Kubernetes. + +aka: +tags: +- user-type +--- + Uma pessoa que escreve um aplicativo que é executado em um cluster Kubernetes. + + + +Um desenvolvedor de aplicativos se concentra em uma parte da aplicação. O seu foco pode variar significativamente em tamanho. diff --git a/content/pt-br/docs/reference/glossary/applications.md b/content/pt-br/docs/reference/glossary/applications.md new file mode 100644 index 0000000000000..a00ca0ec6cbaf --- /dev/null +++ b/content/pt-br/docs/reference/glossary/applications.md @@ -0,0 +1,12 @@ +--- +title: Aplicações +id: applications +date: 2019-05-12 +full_link: +short_description: > + A camada onde vários aplicativos em contêiner são executados. +aka: +tags: +- fundamental +--- + A camada onde vários aplicativos em contêiner são executados. diff --git a/content/pt-br/docs/reference/glossary/certificate.md b/content/pt-br/docs/reference/glossary/certificate.md new file mode 100644 index 0000000000000..d43ead1f32cfd --- /dev/null +++ b/content/pt-br/docs/reference/glossary/certificate.md @@ -0,0 +1,17 @@ +--- +title: Certificado +id: certificate +date: 2018-04-12 +full_link: /docs/tasks/tls/managing-tls-in-a-cluster/ +short_description: > + Um arquivo criptograficamente seguro usado para validar o acesso ao cluster Kubernetes. + +aka: +tags: +- security +--- + Um arquivo criptograficamente seguro usado para validar o acesso ao cluster Kubernetes. + + + +Os certificados permitem que aplicativos dentro de um cluster Kubernetes acessem a API do Kubernetes com segurança. Os certificados validam que os clientes têm permissão para acessar a API. 
\ No newline at end of file diff --git a/content/pt-br/docs/reference/glossary/cidr.md b/content/pt-br/docs/reference/glossary/cidr.md new file mode 100644 index 0000000000000..3073e6560c2d2 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/cidr.md @@ -0,0 +1,17 @@ +--- +title: CIDR +id: cidr +date: 2019-11-12 +full_link: +short_description: > + CIDR é uma notação para descrever blocos de endereços IP e é muito usada em várias configurações de rede. + +aka: +tags: +- networking +--- +CIDR (em inglês - Classless Inter-Domain Routing) é uma notação para descrever blocos de endereços IP e é muito usada em várias configurações de rede. + + + +No contexto do Kubernetes, cada {{< glossary_tooltip text="Nó" term_id="node" >}} recebe um intervalo de endereços IP através do endereço inicial e uma máscara de sub-rede usando CIDR. Isso permite que os Nodes atribuam a cada {{< glossary_tooltip text="Pod" term_id="pod" >}} um endereço IP exclusivo. Embora originalmente seja um conceito para IPv4, o CIDR também foi expandido para incluir IPv6. \ No newline at end of file diff --git a/content/pt-br/docs/reference/glossary/cla.md b/content/pt-br/docs/reference/glossary/cla.md new file mode 100644 index 0000000000000..5a282a9606497 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/cla.md @@ -0,0 +1,17 @@ +--- +title: CLA (Contrato de Licença de Colaborador) +id: cla +date: 2018-04-12 +full_link: https://github.com/kubernetes/community/blob/master/CLA.md +short_description: > + Termos sob os quais um colaborador concede uma licença a um projeto de código aberto por suas contribuições. + +aka: +tags: +- community +--- + Termos sob os quais um {{< glossary_tooltip text="colaborador" term_id="contributor" >}} concede uma licença a um projeto de código aberto por suas contribuições. + + + +Os CLAs ajudam a resolver disputas legais envolvendo material contribuído e propriedade intelectual. diff --git a/content/pt-br/docs/reference/glossary/cluster-architect.md b/content/pt-br/docs/reference/glossary/cluster-architect.md new file mode 100644 index 0000000000000..3aeb95e084eef --- /dev/null +++ b/content/pt-br/docs/reference/glossary/cluster-architect.md @@ -0,0 +1,17 @@ +--- +title: Arquiteto de Cluster +id: cluster-architect +date: 2018-04-12 +full_link: +short_description: > + Uma pessoa que projeta infraestrutura que envolve um ou mais clusters Kubernetes. + +aka: +tags: +- user-type +--- + Uma pessoa que projeta infraestrutura que envolve um ou mais clusters Kubernetes. + + + +Os arquitetos de clusters estão preocupados com as melhores práticas para sistemas distribuídos, por exemplo: alta disponibilidade e segurança. \ No newline at end of file diff --git a/content/pt-br/docs/reference/glossary/cluster-infrastructure.md b/content/pt-br/docs/reference/glossary/cluster-infrastructure.md new file mode 100644 index 0000000000000..a65f04a346f69 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/cluster-infrastructure.md @@ -0,0 +1,13 @@ +--- +title: Infraestrutura de Cluster +id: cluster-infrastructure +date: 2019-05-12 +full_link: +short_description: > + A camada de infraestrutura fornece e mantém máquinas virtuais, redes, grupos de segurança e outros. + +aka: +tags: +- operation +--- + A camada de infraestrutura fornece e mantém máquinas virtuais, redes, grupos de segurança e outros. 
diff --git a/content/pt-br/docs/reference/glossary/data-plane.md b/content/pt-br/docs/reference/glossary/data-plane.md new file mode 100644 index 0000000000000..2e7c9946f98d6 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/data-plane.md @@ -0,0 +1,13 @@ +--- +title: Plano de Dados +id: data-plane +date: 2019-05-12 +full_link: +short_description: > + A camada que fornece capacidade, tais como CPU, memória, rede e armazenamento, para que os contêineres possam ser executados e conectados a uma rede. + +aka: +tags: +- fundamental +--- + A camada que fornece capacidade, tais como CPU, memória, rede e armazenamento, para que os contêineres possam ser executados e conectados a uma rede. diff --git a/content/pt-br/docs/reference/glossary/ingress.md b/content/pt-br/docs/reference/glossary/ingress.md new file mode 100644 index 0000000000000..889b62a24469a --- /dev/null +++ b/content/pt-br/docs/reference/glossary/ingress.md @@ -0,0 +1,20 @@ +--- +title: Ingress +id: ingress +date: 2018-04-12 +full_link: /docs/concepts/services-networking/ingress/ +short_description: > + Um objeto da API que gerencia o acesso externo aos serviços em um cluster, normalmente HTTP. + +aka: +tags: +- networking +- architecture +- extension +--- + Um objeto da API (do inglês "Application Programming Interface") que gerencia o acesso externo aos serviços em um cluster, normalmente HTTP. + + + +Um Ingress pode fornecer as capacidades de um proxy reverso para as aplicações. + diff --git a/content/pt-br/docs/reference/glossary/name.md b/content/pt-br/docs/reference/glossary/name.md new file mode 100644 index 0000000000000..292f1de3e6dc9 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/name.md @@ -0,0 +1,20 @@ +--- +title: Nome +id: name +date: 2018-04-12 +full_link: /pt-br/docs/concepts/overview/working-with-objects/names +short_description: > + Uma string fornecida pelo cliente que referencia um objeto em uma URL de + recurso, como por exemplo `/api/v1/pods/qualquer-nome`. + +aka: +tags: +- fundamental +--- + Uma string fornecida pelo cliente que referencia um objeto em uma URL de + recurso, como por exemplo `/api/v1/pods/qualquer-nome`. + + + +Somente um objeto de um dado tipo pode ter um certo nome por vez. No entanto, +se você remover o objeto, você poderá criar um novo objeto com o mesmo nome. diff --git a/content/pt-br/docs/reference/glossary/reviewer.md b/content/pt-br/docs/reference/glossary/reviewer.md new file mode 100644 index 0000000000000..ff367087ee375 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/reviewer.md @@ -0,0 +1,17 @@ +--- +title: Revisor +id: reviewer +date: 2018-04-12 +full_link: +short_description: > + Uma pessoa que revisa o código quanto à qualidade e correção em alguma parte do projeto. + +aka: +tags: +- community +--- + Uma pessoa que revisa o código quanto à qualidade e correção em alguma parte do projeto. + + + +Os revisores têm conhecimento sobre o código base e os princípios de engenharia de software. O estado do revisor é atribuído a uma parte do código. \ No newline at end of file diff --git a/content/pt-br/docs/reference/glossary/sysctl.md b/content/pt-br/docs/reference/glossary/sysctl.md new file mode 100644 index 0000000000000..715d01f7c26cd --- /dev/null +++ b/content/pt-br/docs/reference/glossary/sysctl.md @@ -0,0 +1,19 @@ +--- +title: sysctl +id: sysctl +date: 2019-02-12 +full_link: /docs/tasks/administer-cluster/sysctl-cluster/ +short_description: > + Uma interface para obter e definir parâmetros do kernel Unix. 
+ +aka: +tags: +- tool +--- + `sysctl` é uma interface semi-padronizada para ler ou alterar os atributos do kernel Unix em execução. + + + +Em sistemas do tipo Unix, `sysctl` é tanto o nome da ferramenta que os administradores usam para visualizar e modificar essas configurações, quanto a chamada do sistema que a ferramenta usa. + +Os {{< glossary_tooltip text="contêineres" term_id="container" >}} em execução e os plugins de rede podem depender dos valores definidos do `sysctl`. \ No newline at end of file diff --git a/content/pt-br/docs/reference/glossary/uid.md b/content/pt-br/docs/reference/glossary/uid.md index c5e34fd185862..25d1032572083 100644 --- a/content/pt-br/docs/reference/glossary/uid.md +++ b/content/pt-br/docs/reference/glossary/uid.md @@ -2,12 +2,20 @@ title: UID id: uid date: 2021-03-16 -full_link: +full_link: /pt-br/docs/concepts/overview/working-with-objects/names short_description: > - Um identificador exclusivo (UID) é uma sequência numérica ou alfanumérica associada a uma única entidade em um determinado sistema. + Uma string gerada pelos sistemas do Kubernetes para identificar objetos de + forma única. aka: tags: -- authentication +- fundamental --- -Um identificador exclusivo (UID) é uma sequência numérica ou alfanumérica associada a uma única entidade em um determinado sistema. Os UIDs tornam possível endereçar essa entidade para que ela possa ser acessada e interagida. Cada usuário é identificado no sistema por seu UID e os nomes de usuário geralmente são usados apenas como uma interface para humanos. \ No newline at end of file + Uma string gerada pelos sistemas do Kubernetes para identificar objetos de + forma única. + + + +Cada objeto criado durante todo o ciclo de vida do cluster do Kubernetes possui +um UID distinto. O objetivo deste identificador é distinguir ocorrências +históricas de entidades semelhantes. diff --git a/content/pt-br/docs/reference/glossary/volume-plugin.md b/content/pt-br/docs/reference/glossary/volume-plugin.md new file mode 100644 index 0000000000000..1936fdf3f2b20 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/volume-plugin.md @@ -0,0 +1,18 @@ +--- +title: Plugin de Volume +id: volumeplugin +date: 2018-04-12 +full_link: +short_description: > + Um plugin de volume permite a integração do armazenamento dentro de um Pod. + +aka: +tags: +- core-object +- storage +--- + Um plugin de volume permite a integração do armazenamento dentro de um {{< glossary_tooltip text="Pod" term_id="pod" >}}. + + + +Um plugin de volume permite anexar e montar volumes de armazenamento para uso por um {{< glossary_tooltip text="Pod" term_id="pod" >}}. Os plugins de volume podem estar _dentro_ ou _fora da árvore_. _Na árvore_, os plugins fazem parte do repositório de código Kubernetes e seguem seu ciclo de lançamento. Os plugins _fora da árvore_ são desenvolvidos de forma independente. \ No newline at end of file diff --git a/content/pt-br/docs/setup/production-environment/turnkey-solutions.md b/content/pt-br/docs/setup/production-environment/turnkey-solutions.md new file mode 100644 index 0000000000000..d60a54a754ca6 --- /dev/null +++ b/content/pt-br/docs/setup/production-environment/turnkey-solutions.md @@ -0,0 +1,12 @@ +--- +title: Soluções de Nuvem Prontas para uso +content_type: concept +weight: 30 +--- + + +Essa página fornece uma lista de provedores de soluções certificadas do Kubernetes. Na página de cada provedor, você pode aprender como instalar e configurar clusters prontos para produção. 
+ + + +{{< cncf-landscape helpers=true category="certified-kubernetes-hosted" >}} diff --git a/content/pt-br/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/pt-br/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 0bac8410fa3ce..cefd54838399c 100644 --- a/content/pt-br/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/pt-br/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -59,7 +59,7 @@ data: ``` Perceba que o nome do objeto Secret precisa ser um -[nome de subdomínio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-name) válido. +[nome de subdomínio DNS](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. {{< note >}} Os valores serializados dos dados JSON e YAML de um Secret são codificados em strings diff --git a/content/ru/docs/concepts/architecture/garbage-collection.md b/content/ru/docs/concepts/architecture/garbage-collection.md index e94da8fc623cb..5c7a83c39294b 100644 --- a/content/ru/docs/concepts/architecture/garbage-collection.md +++ b/content/ru/docs/concepts/architecture/garbage-collection.md @@ -12,7 +12,7 @@ weight: 50 * [Объекты без ссылок на владельца Objects](#owners-dependents) * [Не используемые контейнеры и образы контейнеров](#containers-images) * [Dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete](/docs/concepts/storage/persistent-volumes/#delete) - * [Устаревшие или просроченные запросы подписания сертификатов (CSR)](/reference/access-authn-authz/certificate-signing-requests/#request-signing-process) + * [Устаревшие или просроченные запросы подписания сертификатов (CSR)](/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process) * {{}} удалено в следующих сценариях: * В облаке, когда кластер использует [диспетчер облачных контроллеров](/docs/concepts/architecture/cloud-controller/) * Локально когда кластер использует дополнение, аналогичное диспетчер облачных контроллеров diff --git a/content/ru/docs/contribute/advanced.md b/content/ru/docs/contribute/advanced.md index bb4ff9a35f97a..7ee4af13cf8a4 100644 --- a/content/ru/docs/contribute/advanced.md +++ b/content/ru/docs/contribute/advanced.md @@ -63,7 +63,7 @@ weight: 30 {{< note >}} -Бот [`fejta-bot`](https://github.com/fejta-bot) автоматически помечает заявки как устаревшие после 90 дней отсутствия активности, а затем закрывает их после ещё 30 дней простоя, когда они становятся тухлыми. Дежурные по PR должны закрывать заявки после 14-30 дней бездействия. +Бот [`k8s-ci-robot`](https://github.com/k8s-ci-robot) автоматически помечает заявки как устаревшие после 90 дней отсутствия активности, а затем закрывает их после ещё 30 дней простоя, когда они становятся тухлыми. Дежурные по PR должны закрывать заявки после 14-30 дней бездействия. {{< /note >}} diff --git a/content/ru/docs/setup/learning-environment/minikube.md b/content/ru/docs/setup/learning-environment/minikube.md index 500d171abecbb..fea7e142528c9 100644 --- a/content/ru/docs/setup/learning-environment/minikube.md +++ b/content/ru/docs/setup/learning-environment/minikube.md @@ -525,6 +525,4 @@ Minikube использует [libmachine](https://github.com/docker/machine/tre ## Сообщество -Помощь, вопросы и комментарии приветствуются и поощряются! Разработчики Minikube проводят время на [Slack](https://kubernetes.slack.com) в канале #minikube (получить приглашение можно [здесь](http://slack.kubernetes.io/)). 
У нас также есть [список рассылки kubernetes-dev на Google Groups](https://groups.google.com/forum/#!forum/kubernetes-dev). Если вы отправляете сообщение в список, пожалуйста, начните вашу тему с "minikube: ". - - +Помощь, вопросы и комментарии приветствуются и поощряются! Разработчики Minikube проводят время на [Slack](https://kubernetes.slack.com) в канале #minikube (получить приглашение можно [здесь](http://slack.kubernetes.io/)). У нас также есть [список рассылки dev@kubernetes на Google Groups](https://groups.google.com/a/kubernetes.io/g/dev/). Если вы отправляете сообщение в список, пожалуйста, начните вашу тему с "minikube: ". diff --git a/content/ru/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/ru/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index c55819902a74f..fd99c78cd6733 100644 --- a/content/ru/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/ru/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -59,7 +59,7 @@ kubelet исполняет команду `cat /tmp/healthy` в целевом Когда контейнер запускается, он исполняет команду ```shell -/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600" +/bin/sh -c "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600" ``` Для первых 30 секунд жизни контейнера существует файл `/tmp/healthy`. diff --git a/content/ru/docs/tutorials/kubernetes-basics/_index.html b/content/ru/docs/tutorials/kubernetes-basics/_index.html index e6548931c38a9..ccdaefe8b8696 100644 --- a/content/ru/docs/tutorials/kubernetes-basics/_index.html +++ b/content/ru/docs/tutorials/kubernetes-basics/_index.html @@ -99,7 +99,7 @@

      Учебные модули по основам Kubernetes

      diff --git a/content/ru/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/ru/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 502beb23bd35a..738d2b0bc6786 100644 --- a/content/ru/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/ru/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -28,7 +28,7 @@

      Темы

      Развёртывания Kubernetes

      -

      Как только вы запустили кластер Kubernetes, вы можете развернуть свои контейнеризированные приложения в него. Для этого вам нужно создать конфигурацию развёртывания (Deployment) в Kubernetes. Развёртывание сообщает Kubernetes, как создавать и обновлять экземпляры вашего приложения. После создания развёртывания ведущий узел Kubernetes планирует запустить экземпляры приложения на отдельных узлах в кластере.

      +

      Как только вы запустили кластер Kubernetes, вы можете развернуть на нём свои контейнеризированные приложения. Для этого вам нужно создать конфигурацию развёртывания (Deployment) в Kubernetes. Развёртывание сообщает Kubernetes, как создавать и обновлять экземпляры вашего приложения. После создания развёртывания ведущий узел Kubernetes планирует запустить экземпляры приложения на отдельных узлах в кластере.
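A hedged illustration of such a Deployment configuration; the name, image and replica count below are placeholder assumptions, not values used by this tutorial.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment    # placeholder name
spec:
  replicas: 2                 # how many application instances Kubernetes should keep running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```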

      Когда экземпляры приложения были созданы, контроллер развёртывания Kubernetes непрерывно отслеживает их. Если узел, на котором размещен экземпляр, вышёл из строя или был удалён, контроллер развёртывания вместо этого экземпляра использует экземпляр на другом узле в кластере. Этот процесс представляет собой механизм самовосстановления, обеспечивающий работу кластера в случае возникновения аппаратных неисправностей либо технических работ. diff --git a/content/ru/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/ru/docs/tutorials/kubernetes-basics/expose/expose-intro.html index 07cb3cd2df051..1a59057eb88ea 100644 --- a/content/ru/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/ru/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -29,7 +29,7 @@

      Темы

      Обзор сервисов Kubernetes

      Под — это расходный материал в Kubernetes. У подов есть жизненный цикл. Когда рабочий узел завершается, запущенные поды в узле также уничтожаются. После этого ReplicaSet попытается автоматически вернуть кластер обратно в требуемое состояние, создавая новые поды, чтобы поддержать работоспособность приложения. Другой пример — бэкенд для обработки изображений с 3 репликами. Поскольку это взаимозаменяемые реплики, то они не влияют на фронтенд-часть, даже если под был уничтожен и пересоздан. Тем не менее, каждый под в кластере Kubernetes имеет уникальный IP-адрес, даже под на одном и том же узле, поэтому должен быть способ автоматической координации изменений между подами, чтобы приложения продолжали функционировать.

      -

      Сервис в Kubernetes — это абстрактный объект, который определяет логический набор подов и политику доступа к ним. Сервисы создают слабую связь между подами, которые от них зависят. Сервис создаётся в формате YAML (рекомендуемый формат) или JSON, как и все остальные объекты в Kubernetes. Как правило, набор подов для сервиса определяется LabelSelector (ниже описано, в каких случаях понадобиться сервис без указания selector в спецификации).

      +

      Сервис в Kubernetes — это абстрактный объект, который определяет логический набор подов и политику доступа к ним. Сервисы создают слабую связь между подами, которые от них зависят. Сервис создаётся в формате YAML (рекомендуемый формат) или JSON, как и все остальные объекты в Kubernetes. Как правило, набор подов для сервиса определяется LabelSelector (ниже описано, в каких случаях понадобится сервис без указания selector в спецификации).

      Хотя у каждого пода есть уникальный IP-адрес, эти IP-адреса не доступны за пределами кластера без использования сервиса. Сервисы позволяют приложениям принимать трафик. Сервисы могут быть по-разному открыты, в зависимости от указанного поля type в ServiceSpec:
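A minimal sketch of the `type` field mentioned above; the names, labels and ports are illustrative assumptions, not part of the tutorial.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service    # placeholder name
spec:
  selector:                # the LabelSelector that picks the backing Pods
    app: example-app
  ports:
    - port: 80             # port exposed by the Service
      targetPort: 8080     # port the Pods listen on
  type: NodePort           # one of the ServiceSpec "type" values; the default is ClusterIP
```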

        diff --git a/content/ru/examples/pods/probe/exec-liveness.yaml b/content/ru/examples/pods/probe/exec-liveness.yaml index 07bf75f85c6f3..6a9c9b3213718 100644 --- a/content/ru/examples/pods/probe/exec-liveness.yaml +++ b/content/ru/examples/pods/probe/exec-liveness.yaml @@ -11,7 +11,7 @@ spec: args: - /bin/sh - -c - - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 + - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600 livenessProbe: exec: command: diff --git a/content/uk/docs/concepts/overview/what-is-kubernetes.md b/content/uk/docs/concepts/overview/what-is-kubernetes.md index 4bc2cb6a006af..0b720f4aca705 100644 --- a/content/uk/docs/concepts/overview/what-is-kubernetes.md +++ b/content/uk/docs/concepts/overview/what-is-kubernetes.md @@ -60,7 +60,7 @@ Each VM is a full machine running all the components, including its own operatin -**Ера розгортання контейнерів:** Контейнери схожі на VM, але мають спрощений варіант ізоляції і використовують спільну операційну систему для усіх застосунків. Саму тому контейнери вважаються легковісними. Подібно до VM, контейнер має власну файлову систему, ЦПУ, пам'ять, простір процесів тощо. Оскільки контейнери вивільнені від підпорядкованої інфраструктури, їх можна легко переміщати між хмарними провайдерами чи дистрибутивами операційних систем. +**Ера розгортання контейнерів:** Контейнери схожі на VM, але мають спрощений варіант ізоляції і використовують спільну операційну систему для усіх застосунків. Саме тому контейнери вважаються "легкими", в порівнянні з віртуалками. Подібно до VM, контейнер має власну файлову систему, ЦПУ, пам'ять, простір процесів тощо. Оскільки контейнери вивільнені від підпорядкованої інфраструктури, їх можна легко переміщати між хмарними провайдерами чи дистрибутивами операційних систем. 
@@ -93,7 +93,7 @@ Containers have become popular because they provide extra benefits, such as: -## Чому вам потрібен Kebernetes і що він може робити +## Чому вам потрібен Kubernetes і що він може робити 有关为博客提供内容的信息,请参见 -https://kubernetes.io/zh/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post +https://kubernetes.io/zh-cn/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post {{< /comment >}} \ No newline at end of file diff --git a/content/zh/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md b/content/zh-cn/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md similarity index 93% rename from content/zh/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md rename to content/zh-cn/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md index 90dd2a13176a1..f085117d1cdbd 100644 --- a/content/zh/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md +++ b/content/zh-cn/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md @@ -1,18 +1,13 @@ --- - title: " Kubernetes 采集视频 " date: 2015-03-23 slug: kubernetes-gathering-videos --- - - [使用 Vitess 和 Kubernetes 在云中扩展 MySQL](http://googlecloudplatform.blogspot.com/2015/03/scaling-MySQL-in-the-cloud-with-Vitess-and-Kubernetes.html) -- [虚拟机上的容器群集](http://googlecloudplatform.blogspot.com/2015/02/container-clusters-on-vms.html) +- [虚拟机上的容器集群](http://googlecloudplatform.blogspot.com/2015/02/container-clusters-on-vms.html) - [想知道的关于 kubernetes 的一切,却又不敢问](http://googlecloudplatform.blogspot.com/2015/01/everything-you-wanted-to-know-about-Kubernetes-but-were-afraid-to-ask.html) - [什么构成容器集群?](http://googlecloudplatform.blogspot.com/2015/01/what-makes-a-container-cluster.html) - [将 OpenStack 和 Kubernetes 与 Murano 集成](https://www.mirantis.com/blog/integrating-openstack-and-kubernetes-with-murano/) diff --git a/content/zh/blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md b/content/zh-cn/blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md similarity index 97% rename from content/zh/blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md rename to content/zh-cn/blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md index 41af82357b16a..9f766237f3fda 100644 --- a/content/zh/blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md +++ b/content/zh-cn/blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md @@ -2,16 +2,14 @@ title: "Borg: Kubernetes 的前身" date: 2015-04-23 slug: borg-predecessor-to-kubernetes -url: /zh/blog/2015/04/Borg-Predecessor-To-Kubernetes --- + @@ -114,6 +112,6 @@ Thanks to the advent of software-defined overlay networks such as [flannel](http With the growing popularity of container-based microservice architectures, the lessons Google has learned from running such systems internally have become of increasing interest to the external DevOps community. By revealing some of the inner workings of our cluster manager Borg, and building our next-generation cluster manager as both an open-source project (Kubernetes) and a publicly available hosted service ([Google Container Engine](http://cloud.google.com/container-engine)), we hope these lessons can benefit the broader community outside of Google and advance the state-of-the-art in container scheduling and cluster management. 
--> 随着基于容器的微服务架构的日益普及,Google 从内部运行此类系统所汲取的经验教训已引起外部 DevOps 社区越来越多的兴趣。 -通过揭示集群管理器 Borg 的一些内部工作原理,并将下一代集群管理器构建为一个开源项目(Kubernetes)和一个公开可用的托管服务([Google Container Engine](http://cloud.google.com/container-engine)) ,我们希望这些课程可以使 Google 之外的广大社区受益,并推动容器调度和集群管理方面的最新技术发展。 +通过揭示集群管理器 Borg 的一些内部工作原理,并将下一代集群管理器构建为一个开源项目(Kubernetes)和一个公开可用的托管服务([Google Container Engine](http://cloud.google.com/container-engine)),我们希望这些课程可以使 Google 之外的广大社区受益,并推动容器调度和集群管理方面的最新技术发展。 diff --git a/content/zh/blog/_posts/2015-04-00-Kubernetes-Release-0150.md b/content/zh-cn/blog/_posts/2015-04-00-Kubernetes-Release-0150.md similarity index 100% rename from content/zh/blog/_posts/2015-04-00-Kubernetes-Release-0150.md rename to content/zh-cn/blog/_posts/2015-04-00-Kubernetes-Release-0150.md diff --git a/content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md b/content/zh-cn/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md similarity index 97% rename from content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md rename to content/zh-cn/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md index a95a116b4956a..323cdad934f3d 100644 --- a/content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md +++ b/content/zh-cn/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md @@ -1,17 +1,15 @@ --- -title: " 每周 Kubernetes 社区例会笔记 - 2015年4月3日 " +title: " 每周 Kubernetes 社区例会笔记 - 2015 年 4 月 3 日 " date: 2015-04-04 slug: weekly-kubernetes-community-hangout -url: /zh/blog/2015/04/Weekly-Kubernetes-Community-Hangout --- + diff --git a/content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_17.md b/content/zh-cn/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_17.md similarity index 100% rename from content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_17.md rename to content/zh-cn/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_17.md diff --git a/content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md b/content/zh-cn/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md similarity index 100% rename from content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md rename to content/zh-cn/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md diff --git a/content/zh/blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md b/content/zh-cn/blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md similarity index 97% rename from content/zh/blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md rename to content/zh-cn/blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md index 275cbead77d3d..c715afe8b21e4 100644 --- a/content/zh/blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md +++ b/content/zh-cn/blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md @@ -1,17 +1,15 @@ - ---- -title: " 通过 RKT 对 Kubernetes 的 AppC 支持 " + + diff --git a/content/zh/blog/_posts/2015-05-00-Kubernetes-On-Openstack.md b/content/zh-cn/blog/_posts/2015-05-00-Kubernetes-On-Openstack.md similarity index 98% rename from content/zh/blog/_posts/2015-05-00-Kubernetes-On-Openstack.md rename to content/zh-cn/blog/_posts/2015-05-00-Kubernetes-On-Openstack.md index 68a50e93c8b97..0ac0f14276861 100644 --- a/content/zh/blog/_posts/2015-05-00-Kubernetes-On-Openstack.md +++ b/content/zh-cn/blog/_posts/2015-05-00-Kubernetes-On-Openstack.md @@ -3,14 +3,11 @@ title: " OpenStack 上的 Kubernetes " date: 2015-05-19 slug: kubernetes-on-openstack --- - 
[![](https://3.bp.blogspot.com/-EOrCHChZJZE/VVZzq43g6CI/AAAAAAAAF-E/JUilRHk369E/s400/Untitled%2Bdrawing.jpg)](https://3.bp.blogspot.com/-EOrCHChZJZE/VVZzq43g6CI/AAAAAAAAF-E/JUilRHk369E/s1600/Untitled%2Bdrawing.jpg) diff --git a/content/zh/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md b/content/zh-cn/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md similarity index 97% rename from content/zh/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md rename to content/zh-cn/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md index e7d01a5a6e0ff..39f88f204c30d 100644 --- a/content/zh/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md +++ b/content/zh-cn/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md @@ -3,14 +3,11 @@ title: " Kubernetes 社区每周聚会笔记- 2015年5月1日 " date: 2015-05-11 slug: weekly-kubernetes-community-hangout --- - +url: /blog/2015/07/Announcing-First-Kubernetes-Enterprise +--> 在谷歌,我们依赖 Linux 容器应用程序去运行我们的核心基础架构。所有服务,从搜索引擎到Gmail服务,都运行在容器中。事实上,我们非常喜欢容器,甚至我们的谷歌云计算引擎虚拟机也运行在容器上!由于容器对于我们的业务至关重要,我们已经与社区合作开发许多基本的容器技术(从 cgroups 到 Docker 的 LibContainer),甚至决定去构建谷歌的下一代开源容器调度技术,Kubernetes。 diff --git a/content/zh/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md b/content/zh-cn/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md similarity index 95% rename from content/zh/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md rename to content/zh-cn/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md index 7e2bc2442368f..54f648fd5c972 100644 --- a/content/zh/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md +++ b/content/zh-cn/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md @@ -1,18 +1,14 @@ - - --- -title: " Kubernetes社区每周环聊笔记-2015年7月31日 " + + 自从 Kubernetes 1.0 在七月发布以来,我们已经看到大量公司采用建立分布式系统来管理其容器集群。 我们也对帮助 Kubernetes 社区变得更好,迅速发展的人感到钦佩。 @@ -90,7 +88,7 @@ Today, we’re also proud to mark the inaugural Kubernetes conference, [KubeCon] -我们想强调几个使 Kuberbetes 变得更好的众多合作伙伴中的几位: +我们想强调几个使 Kubernetes 变得更好的众多合作伙伴中的几位: - -title: "Kubernetes 和 Docker 简单的 leader election" + +#### 概述 Kubernetes 简化了集群上运行的服务的部署和操作管理。但是,它也简化了这些服务的发展。在本文中,我们将看到如何使用 Kubernetes 在分布式应用程序中轻松地执行 leader election。分布式应用程序通常为了可靠性和可伸缩性而复制服务的任务,但通常需要指定其中一个副本作为负责所有副本之间协调的负责人。 diff --git a/content/zh/blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md b/content/zh-cn/blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md similarity index 99% rename from content/zh/blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md rename to content/zh-cn/blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md index ca8dfc8da44f7..649ff90059961 100644 --- a/content/zh/blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md +++ b/content/zh-cn/blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md @@ -4,12 +4,12 @@ date: 2016-01-14 slug: why-kubernetes-doesnt-use-libnetwork --- - +url: /blog/2016/01/Why-Kubernetes-Doesnt-Use-Libnetwork +--> diff --git a/content/zh/blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md b/content/zh-cn/blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md similarity index 93% rename from content/zh/blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md rename to content/zh-cn/blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md index 09b2880192b25..95677cb77747f 100644 --- a/content/zh/blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md +++ 
b/content/zh-cn/blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md @@ -5,12 +5,10 @@ slug: kubecon-eu-2016-kubernetes-community-in --- -会场地址:CodeNode * 英国伦敦南广场 10 号 -酒店住宿:[酒店](https://skillsmatter.com/contact-us) -网站:[kubecon.io] (https://www.kubecon.io/) +会场地址:CodeNode * 英国伦敦南广场 10 号 +酒店住宿:[酒店](https://skillsmatter.com/contact-us) +网站:[kubecon.io] (https://www.kubecon.io/) 推特:[@KubeConio] (https://twitter.com/kubeconio) 谷歌是 KubeCon EU 2016 的钻石赞助商。下个月 3 月 10 - 11 号来伦敦,参观 13 号展位,了解 Kubernetes,Google Container Engine(GKE),Google Cloud Platform 的所有信息! @@ -78,6 +76,6 @@ _KubeCon is organized by KubeAcademy, LLC, a community-driven group of developer -* Sarah Novotny, Kubernetes Community Manager, Google --> -_KubeCon 是由 KubeAcademy、LLC 组织的,这是一个由社区驱动的开发者团体,专注于开发人员的教育和 kubernet.com 的推广 +_KubeCon 是由 KubeAcademy、LLC 组织的,这是一个由社区驱动的开发者团体,专注于开发人员的教育和 kubernetes 的推广 -* Sarah Novotny, 谷歌的 Kubernetes 社区经理 diff --git a/content/zh/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md b/content/zh-cn/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md similarity index 86% rename from content/zh/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md rename to content/zh-cn/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md index a9e0f10e4e7a5..8f5a80679f0b0 100644 --- a/content/zh/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md +++ b/content/zh-cn/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md @@ -4,17 +4,16 @@ date: 2016-02-09 slug: kubernetes-community-meeting-notes --- + -#### 2月4日 - rkt演示(祝贺 1.0 版本, CoreOS!), eBay 将 k8s 放在 Openstack 上并认为 Openstack 在 k8s, SIG 和片状测试激增方面取得了进展。 +#### 2 月 4 日 - rkt 演示(祝贺 1.0 版本,CoreOS!),eBay 将 k8s 放在 Openstack 上并认为 Openstack 在 k8s,SIG 和片状测试激增方面取得了进展。 * 书记员:Rob Hirschfeld -* 演示视频(20分钟):CoreOS rkt + Kubernetes[Shaya Potter] - * 期待在未来几个月内看到与rkt和k8s的整合(“rkt-netes”)。 还没有集成到 v1.2版本中。 +* 演示视频(20 分钟):CoreOS rkt + Kubernetes [Shaya Potter] + * 期待在未来几个月内看到与rkt和k8s的整合(“rkt-netes”)。 还没有集成到 v1.2 版本中。 * Shaya 做了一个演示(8分钟的会议视频参考) - * rkt的CLI显示了旋转容器 + * rkt 的 CLI 显示了旋转容器 * [注意:音频在点数上是乱码] * 关于 k8s&rkt 整合的讨论 - * 下周 rkt 社区同步:https://groups.google.com/forum/#!topic/rkt-dev/FlwZVIEJGbY + * 下周 rkt 社区同步: https://groups.google.com/forum/#!topic/rkt-dev/FlwZVIEJGbY * Dawn Chen: * 将 rkt 与 kubernetes 集成的其余问题:1)cadivsor 2) DNS 3)与日志记录相关的错误 * 但是需要在 e2e 测试套件上做更多的工作 @@ -103,13 +102,16 @@ Kubernetes 贡献社区在每周四 10:00 PT 开会,通过视频会议讨论项 -要参与 Kubernetes 社区,请考虑加入我们的[Slack 频道][2],查看 GitHub上的 [Kubernetes 项目][3],或加入[Kubernetes-dev Google 小组][4]。如果你真的很兴奋,你可以完成上述所有工作并加入我们的下一次社区对话-2016年2月11日。请将您自己或您想要了解的主题添加到[议程][5]并通过加入[此组][6]来获取日历邀请。 +要参与 Kubernetes 社区,请考虑加入我们的 [Slack 频道][2],查看 GitHub 上的 +[Kubernetes 项目][3],或加入 [Kubernetes-dev Google 小组][4]。 +如果你真的很兴奋,你可以完成上述所有工作并加入我们的下一次社区对话 - 2016 年 2 月 11 日。 +请将你自己或你想要了解的主题添加到[议程][5]并通过加入[此组][6]来获取日历邀请。 "https://youtu.be/IScpP8Cj0hw?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ" [1]: https://github.com/kubernetes/kubernetes/pull/19714 -[2]: http://slack.k8s.io/ +[2]: https://slack.k8s.io/ [3]: https://github.com/kubernetes/ [4]: https://groups.google.com/forum/#!forum/kubernetes-dev [5]: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit# diff --git a/content/zh/blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md b/content/zh-cn/blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md similarity index 98% rename from content/zh/blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md rename to 
content/zh-cn/blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md index 1dc62626dff69..ce29a5681e236 100644 --- a/content/zh/blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md +++ b/content/zh-cn/blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md @@ -1,17 +1,15 @@ --- -title: " 容器世界现状,2016年1月 " +title: " 容器世界现状,2016 年 1 月 " date: 2016-02-01 slug: state-of-container-world-january-2016 -url: /zh/blog/2016/02/State-Of-Container-World-January-2016 --- + diff --git a/content/zh/blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md b/content/zh-cn/blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md similarity index 96% rename from content/zh/blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md rename to content/zh-cn/blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md index 10420cf73fca0..983495f152f89 100644 --- a/content/zh/blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md +++ b/content/zh-cn/blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md @@ -2,7 +2,6 @@ title: "Kubernetes 社区会议记录 - 20160218" date: 2016-02-23 slug: kubernetes-community-meeting-notes_23 -url: /zh/blog/2016/02/kubernetes-community-meeting-notes_23 --- diff --git a/content/zh/blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md b/content/zh-cn/blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md similarity index 98% rename from content/zh/blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md rename to content/zh-cn/blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md index c34940bcb2a70..00548b1585cdd 100644 --- a/content/zh/blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md +++ b/content/zh-cn/blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md @@ -2,15 +2,12 @@ title: " 在 Rancher 中添加对 Kuernetes 的支持 " date: 2016-04-08 slug: adding-support-for-kubernetes-in-rancher -url: /zh/blog/2016/04/Adding-Support-For-Kubernetes-In-Rancher --- diff --git a/content/zh/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md b/content/zh-cn/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md similarity index 99% rename from content/zh/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md rename to content/zh-cn/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md index 36eee4a09b718..83a6c5e4b99e1 100644 --- a/content/zh/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md +++ b/content/zh-cn/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md @@ -2,16 +2,14 @@ title: “SIG-Networking:1.3 版本引入 Kubernetes 网络策略 API” date: 2016-04-18 slug: kubernetes-network-policy-apis -url: /zh/blog/2016/04/Kubernetes-Network-Policy-APIs --- + diff --git a/content/zh/blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md b/content/zh-cn/blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md similarity index 96% rename from content/zh/blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md rename to content/zh-cn/blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md index 0a24f270581ef..c2b7969abc3b5 100644 --- a/content/zh/blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md +++ b/content/zh-cn/blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md @@ -2,16 +2,14 @@ title: " SIG-ClusterOps: 
提升 Kubernetes 集群的可操作性和互操作性 " date: 2016-04-19 slug: sig-clusterops-promote-operability-and-interoperability-of-k8s-clusters -url: /zh/blog/2016/04/Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters --- + diff --git a/content/zh/blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md b/content/zh-cn/blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md similarity index 98% rename from content/zh/blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md rename to content/zh-cn/blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md index 4b3b83ebea575..55a3f40613685 100644 --- a/content/zh/blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md +++ b/content/zh-cn/blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md @@ -2,16 +2,14 @@ title: " CoreOS Fest 2016: CoreOS 和 Kubernetes 在柏林(和旧金山)社区见面会 " date: 2016-05-03 slug: coreosfest2016-kubernetes-community -url: /zh/blog/2016/05/Coreosfest2016-Kubernetes-Community --- + diff --git a/content/zh/blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md b/content/zh-cn/blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md similarity index 97% rename from content/zh/blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md rename to content/zh-cn/blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md index 2231431ccb712..411640e700acc 100644 --- a/content/zh/blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md +++ b/content/zh-cn/blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md @@ -1,16 +1,13 @@ --- -题目: " 将端到端的 Kubernetes 测试引入 Azure (第二部分) " -日期: 2016-07-18 +title: " 将端到端的 Kubernetes 测试引入 Azure (第二部分) " +date: 2016-07-18 slug: bringing-end-to-end-kubernetes-testing-to-azure-2 -url: /zh/blog/2016/07/Bringing-End-To-End-Kubernetes-Testing-To-Azure-2 --- -Dashboard UI 现在处理所有工作负载资源。这意味着无论您运行什么工作负载类型,它都在 web 界面中可见,并且您可以对其进行操作更改。例如,可以使用[Pet Sets](/docs/user-guide/petset/)修改有状态的 mysql 安装,使用部署对 web 服务器进行滚动更新,或使用守护程序安装群集监视。 +Dashboard UI 现在处理所有工作负载资源。这意味着无论您运行什么工作负载类型,它都在 web 界面中可见,并且您可以对其进行操作更改。例如,可以使用[Pet Sets](/docs/user-guide/petset/)修改有状态的 mysql 安装,使用部署对 web 服务器进行滚动更新,或使用守护程序安装集群监视。 @@ -105,9 +103,9 @@ Here is a list of our focus areas for the following months: 以下是我们接下来几个月的重点领域: -- [Handle more Kubernetes resources](https://github.com/kubernetes/dashboard/issues/961) - 显示群集用户可能与之交互的所有资源。一旦完成,dashboard 就可以完全替代cli。 +- [Handle more Kubernetes resources](https://github.com/kubernetes/dashboard/issues/961) - 显示集群用户可能与之交互的所有资源。一旦完成,dashboard 就可以完全替代cli。 - [Monitoring and troubleshooting](https://github.com/kubernetes/dashboard/issues/962) - 将资源使用统计信息/图表添加到 Dashboard 中显示的对象。这个重点领域将允许对云应用程序进行可操作的调试和故障排除。 -- [Security, auth and logging in](https://github.com/kubernetes/dashboard/issues/964) - 使仪表板可从群集外部的网络访问,并使用自定义身份验证系统。 +- [Security, auth and logging in](https://github.com/kubernetes/dashboard/issues/964) - 使仪表板可从集群外部的网络访问,并使用自定义身份验证系统。 -祝贺 Kubernetes 社区发布了另一个[有价值的版本](https://kubernetes.io/blog/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads)。 +祝贺 Kubernetes 社区发布了另一个[有价值的版本](https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/)。 专注于有状态应用程序和联邦集群是我对 1.3 如此兴奋的两个原因。 Kubernetes 对有状态应用程序(例如 Cassandra、Kafka 和 MongoDB)的支持至关重要。 重要服务依赖于数据库、键值存储、消息队列等。 @@ -40,7 +39,7 @@ Diamanti 正在加速在生产中使用有状态应用程序的容器-在这方 **应用程序不仅仅需要牛** 
除了诸如Web服务器之类的无状态容器(因为它们是可互换的,因此被称为“牛”)之外,用户越来越多地使用容器来部署有状态工作负载,以从“一次构建,随处运行”中受益并提高裸机效率/利用率。 这些“宠物”(之所以称为“宠物”,是因为每个宠物都需要特殊的处理)带来了新的要求,包括更长的生命周期,配置依赖项,有状态故障转移以及性能敏感性。 @@ -56,10 +55,10 @@ Pet Set 还利用普遍存在的 DNS SRV 记录简化了服务发现,DNS SRV -Diamanti 对 Kubernete s的 [FlexVolume 贡献](https://github.com/kubernetes/kubernetes/pull/13840) 通过为持久卷提供低延迟存储并保证性能来实现有状态工作负载,包括从容器到媒体的强制服务质量。 +Diamanti 对 Kubernetes 的 [FlexVolume 贡献](https://github.com/kubernetes/kubernetes/pull/13840) 通过为持久卷提供低延迟存储并保证性能来实现有状态工作负载,包括从容器到媒体的强制服务质量。 **联邦主义者** @@ -83,7 +82,7 @@ It’s easy to imagine powerful multi-cluster use cases with cross-cluster feder 很容易想象在将来的版本中具有跨集群联邦服务的强大多集群用例。 一个示例是根据治理,安全性和性能要求调度容器。 Diamanti 的调度程序扩展是在考虑了这一概念的基础上开发的。 -我们的[第一个实现](https://github.com/kubernetes/kubernetes/pull/13580)使 Kubernetes 调度程序意识到每个群集节点本地的网络和存储资源。 +我们的[第一个实现](https://github.com/kubernetes/kubernetes/pull/13580)使 Kubernetes 调度程序意识到每个集群节点本地的网络和存储资源。 将来,类似的概念可以应用于跨集群联邦服务的更广泛的放置控件。 + + + +**编者注**:这篇文章由 Kubernetes SIG-Apps 团队撰写,分享他们如何关注在 Kubernetes +中运行应用的开发者和 devops 经验。 + +Kubernetes 是容器化应用程序的出色管理者。因此,[众多](https://kubernetes.io/blog/2016/02/sharethis-kubernetes-in-production) +[公司](https://blog.box.com/blog/kubernetes-box-microservices-maximum-velocity/) +[已经](http://techblog.yahoo.co.jp/infrastructure/os_n_k8s/) +[开始](http://www.nextplatform.com/2015/11/12/inside-ebays-shift-to-kubernetes-and-containers-atop-openstack/) 在 Kubernetes 中运行应用程序。 + +Kubernetes 特殊兴趣小组 ([SIGs](https://github.com/kubernetes/community/blob/master/README.md#special-interest-groups-sig)) +自 1.0 版本开始就一直致力于支持开发者和运营商社区。围绕网络、存储、扩展和其他运营领域组织的人员。 + +随着 Kubernetes 的兴起,对工具、最佳实践以及围绕构建和运营云原生应用程序的讨论的需求也随之增加。为了满足这一需求, +Kubernetes [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps) 应运而生。 + +SIG Apps 为公司和个人提供以下支持: + + + +- 查看和分享正在构建的、为应用操作人员赋能的工具的演示 +- 了解和讨论应用运营人员的需求 +- 组织各方努力改善体验 + + + +自从 SIG Apps 成立以来,我们已经进行了项目演示,例如 [KubeFuse](https://github.com/opencredo/kubefuse)、 +[KPM](https://github.com/kubespray/kpm),和 [StackSmith](https://stacksmith.bitnami.com/)。 +我们还对那些负责 Kubernetes 中应用运维的人进行了调查。 + +从调查结果中,我们学到了很多东西,包括: + + + +- 81% 的受访者希望采用某种形式的自动扩缩 +- 为了存储秘密信息,47% 的受访者使用内置 Secret。目前这些资料并未实现静态加密。 + (如果你需要关于加密的帮助,请参见[问题](https://github.com/kubernetes/kubernetes/issues/10439)。) +- 响应最多的问题与第三方工具和调试有关 +- 对于管理应用程序的第三方工具,没有明确的赢家。有各种各样的做法 +- 总体上对缺乏有用文件有较多抱怨。(请在[此处](https://github.com/kubernetes/kubernetes.github.io)帮助提交文档。) +- 数据量很大。很多回答是可选的,所以我们很惊讶所有候选人的所有问题中有 935 个都被填写了。 + 如果你想亲自查看数据,可以[在线](https://docs.google.com/spreadsheets/d/15SUL7QTpR4Flrp5eJ5TR8A5ZAFwbchfX2QL4MEoJFQ8/edit?usp=sharing)查看。 + + + +就应用运维而言,仍然有很多东西需要解决和共享。如果你对运行应用程序有看法或者有改善体验的工具, +或者只是想潜伏并了解状况,请加入我们。 + + + +- 在 SIG-Apps [Slack 频道](https://kubernetes.slack.com/messages/sig-apps)与我们聊天 +- 发送邮件到 SIG-Apps [邮件列表](https://groups.google.com/forum/#!forum/kubernetes-sig-apps) +- 参加我们的公开会议:太平洋时间每周三上午 9 点,[详情点击此处](https://github.com/kubernetes/community/blob/master/sig-apps/README.md#meeting) + + +_--Matt Farina ,Hewlett Packard Enterprise 首席工程师_ \ No newline at end of file diff --git a/content/zh/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md b/content/zh-cn/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md similarity index 99% rename from content/zh/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md rename to content/zh-cn/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md index 682435db84fde..49c4facdeb60c 100644 --- 
a/content/zh/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md +++ b/content/zh-cn/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md @@ -2,15 +2,12 @@ title: " 使用 Kubernetes Pet Sets 和 Datera Elastic Data Fabric 的 FlexVolume 扩展有状态的应用程序 " date: 2016-08-29 slug: stateful-applications-using-kubernetes-datera -url: /zh/blog/2016/08/Stateful-Applications-Using-Kubernetes-Datera --- + diff --git a/content/zh/blog/_posts/2017-11-00-Autoscaling-In-Kubernetes.md b/content/zh-cn/blog/_posts/2017-11-00-Autoscaling-In-Kubernetes.md similarity index 97% rename from content/zh/blog/_posts/2017-11-00-Autoscaling-In-Kubernetes.md rename to content/zh-cn/blog/_posts/2017-11-00-Autoscaling-In-Kubernetes.md index cab6c2b331509..c30f22aa35624 100644 --- a/content/zh/blog/_posts/2017-11-00-Autoscaling-In-Kubernetes.md +++ b/content/zh-cn/blog/_posts/2017-11-00-Autoscaling-In-Kubernetes.md @@ -3,14 +3,11 @@ title: " Kubernetes 中自动缩放 " date: 2017-11-17 slug: autoscaling-in-kubernetes --- - -推特: 
 +推特: 博客: [http://www.ofbizian.com][5] 领英: diff --git a/content/zh/blog/_posts/2018-04-25-open-source-charts-2017.md b/content/zh-cn/blog/_posts/2018-04-25-open-source-charts-2017.md similarity index 100% rename from content/zh/blog/_posts/2018-04-25-open-source-charts-2017.md rename to content/zh-cn/blog/_posts/2018-04-25-open-source-charts-2017.md diff --git a/content/zh/blog/_posts/2018-05-01-developing-on-kubernetes.md b/content/zh-cn/blog/_posts/2018-05-01-developing-on-kubernetes.md similarity index 97% rename from content/zh/blog/_posts/2018-05-01-developing-on-kubernetes.md rename to content/zh-cn/blog/_posts/2018-05-01-developing-on-kubernetes.md index 5d31031ec7f10..0e94e5ffbfe71 100644 --- a/content/zh/blog/_posts/2018-05-01-developing-on-kubernetes.md +++ b/content/zh-cn/blog/_posts/2018-05-01-developing-on-kubernetes.md @@ -12,20 +12,22 @@ slug: developing-on-kubernetes --- --> - + **作者**: [Michael Hausenblas](https://twitter.com/mhausenblas) (Red Hat), [Ilya Dmitrichenko](https://twitter.com/errordeveloper) (Weaveworks) -您将如何开发一个 Kubernates 应用?也就是说,您如何编写并测试一个要在 Kubernates 上运行的应用程序?本文将重点介绍在独自开发或者团队协作中,您可能希望了解到的为了成功编写 Kubernetes 应用程序而需面临的挑战,工具和方法。 +您将如何开发一个 Kubernetes 应用?也就是说,您如何编写并测试一个要在 Kubernetes 上运行的应用程序?本文将重点介绍在独自开发或者团队协作中,您可能希望了解到的为了成功编写 Kubernetes 应用程序而需面临的挑战,工具和方法。 -我们假定您是一位开发人员,有您钟爱的编程语言,编辑器/IDE(集成开发环境),以及可用的测试框架。在针对 Kubernates 开发应用时,最重要的目标是减少对当前工作流程的影响,改变越少越好,尽量做到最小。举个例子,如果您是 Node.js 开发人员,习惯于那种热重载的环境 - 也就是说您在编辑器里一做保存,正在运行的程序就会自动更新 - 那么跟容器、容器镜像或者镜像仓库打交道,又或是跟 Kubernetes 部署、triggers 以及更多头疼东西打交道,不仅会让人难以招架也真的会让开发过程完全失去乐趣。 +我们假定您是一位开发人员,有您钟爱的编程语言,编辑器/IDE(集成开发环境),以及可用的测试框架。在针对 Kubernetes 开发应用时,最重要的目标是减少对当前工作流程的影响,改变越少越好,尽量做到最小。举个例子,如果您是 Node.js 开发人员,习惯于那种热重载的环境 - 也就是说您在编辑器里一做保存,正在运行的程序就会自动更新 - 那么跟容器、容器镜像或者镜像仓库打交道,又或是跟 Kubernetes 部署、triggers 以及更多头疼东西打交道,不仅会让人难以招架也真的会让开发过程完全失去乐趣。 -许多工具支持纯 offline 开发,包括 Minikube、Docker(Mac 版/Windows 版)、Minishift 以及下文中我们将详细讨论的几种。有时,比如说在一个微服务系统中,已经有若干微服务在运行,proxied 模式(通过转发把数据流传进传出集群)就非常合适,Telepresence 就是此类工具的一个实例。live 模式,本质上是您基于一个远程集群进行构建和部署。最后,纯 online 模式意味着您的开发环境和运行集群都是远程的,典型的例子是 [Eclipse Che](https://www.eclipse.org/che/docs/kubernetes-single-user.html) 或者 [Cloud 9](https://github.com/errordeveloper/k9c)。现在让我们仔细看看离线开发的基础:在本地运行 Kubernetes。 +许多工具支持纯 offline 开发,包括 Minikube、Docker(Mac 版/Windows 版)、Minishift 以及下文中我们将详细讨论的几种。有时,比如说在一个微服务系统中,已经有若干微服务在运行,proxied 模式(通过转发把数据流传进传出集群)就非常合适,Telepresence 就是此类工具的一个实例。live 模式,本质上是您基于一个远程集群进行构建和部署。最后,纯 online 模式意味着您的开发环境和运行集群都是远程的,典型的例子是 [Eclipse Che](https://www.eclipse.org/che/docs/che-7/introduction-to-eclipse-che/) 或者 [Cloud 9](https://github.com/errordeveloper/k9c)。现在让我们仔细看看离线开发的基础:在本地运行 Kubernetes。 -* 它允许开发人员使用本地或者远程的 Kubernates 集群 +* 它允许开发人员使用本地或者远程的 Kubernetes 集群 * 如何部署到生产环境取决于用户, Draft 的作者推荐了他们的另一个项目 - Brigade * 可以代替 Skaffold, 并且可以和 Squash 一起使用 @@ -255,7 +257,7 @@ More info: 更多信息: * [Squash: A Debugger for Kubernetes Apps](https://www.youtube.com/watch?v=5TrV3qzXlgI) -* [Getting Started Guide](https://github.com/solo-io/squash/blob/master/docs/getting-started.md) +* [Getting Started Guide](https://squash.solo.io/overview/) ### Telepresence @@ -397,10 +399,10 @@ Note that for the target Kubernetes cluster we’ve been using Minikube locally, 请注意,我们一直使用 Minikube 的本地 Kubernetes 集群,但是您也可以使用 ksync 和 Skaffold 的远程集群跟随练习。 -### 实践演练:ksync +## 实践演练:ksync -一旦两个部署建好并且 pod 开始运行,我们转发 `stock-con` 服务以供本地读取(另开一个终端窗口): +一旦两个部署建好并且 pod 开始运行,我们转发 `stock-con` 服务以供本地读取(另开一个终端窗口)并检查 `healthz` 端点的响应: ``` $ kubectl get -n dok po --selector=app=stock-con \ diff --git 
a/content/zh/blog/_posts/2018-05-30-say-hello-to-discuss-kubernetes.md b/content/zh-cn/blog/_posts/2018-05-30-say-hello-to-discuss-kubernetes.md similarity index 97% rename from content/zh/blog/_posts/2018-05-30-say-hello-to-discuss-kubernetes.md rename to content/zh-cn/blog/_posts/2018-05-30-say-hello-to-discuss-kubernetes.md index eb1aba85d2b46..2003493013199 100644 --- a/content/zh/blog/_posts/2018-05-30-say-hello-to-discuss-kubernetes.md +++ b/content/zh-cn/blog/_posts/2018-05-30-say-hello-to-discuss-kubernetes.md @@ -1,17 +1,13 @@ --- -title: 'Kubernetes 1 11:向 discuss kubernetes 问好' layout: blog +title: 向 Discuss Kubernetes 问好 date: 2018-05-30 +slug: say-hello-to-discuss-kubernetes --- - + + +**作者**:Joe Beda(Heptio 首席技术官兼创始人) + +2014 年 6 月 6 日,我检查了 Kubernetes 公共代码库的[第一次 commit](https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56) 。许多人会认为这是故事开始的地方。这难道不是一切开始的地方吗?但这的确不能把整个过程说清楚。 + +![k8s_first_commit](/images/blog/2018-06-06-4-years-of-k8s/k8s-first-commit.png) + + + +第一次 commit 涉及的人员众多,自那以后 Kubernetes 的成功归功于更大的开发者阵容。 + +Kubernetes 建立在过去十年曾经在 Google 的 Borg 集群管理系统中验证过的思路之上。而 Borg 本身也是 Google 和其他公司早期努力的结果。 + +具体而言,Kubernetes 最初是从 Brendan Burns 的一些原型开始,结合我和 Craig McLuckie 正在进行的工作,以更好地将 Google 内部实践与 Google Cloud 的经验相结合。 Brendan,Craig 和我真的希望人们使用它,所以我们建议将这个原型构建为一个开源项目,将 Borg 的最佳创意带给大家。 + +在我们所有人同意后,就开始着手构建这个系统了。我们采用了 Brendan 的原型(Java 语言),用 Go 语言重写了它,并且以上述核心思想去构建该系统。到这个时候,团队已经成长为包括 Ville Aikas,Tim Hockin,Brian Grant,Dawn Chen 和 Daniel Smith。一旦我们有了一些工作需求,有人必须承担一些脱敏的工作,以便为公开发布做好准备。这个角色最终由我承担。当时,我不知道这件事情的重要性,我创建了一个新的仓库,把代码搬过来,然后进行了检查。所以在我第一次提交 public commit 之前,就有工作已经启动了。 + +那时 Kubernetes 的版本只是现在版本的简单雏形。核心概念已经有了,但非常原始。例如,Pods 被称为 Tasks,这在我们推广前一天就被替换。2014年6月10日 Eric Brewe 在第一届 DockerCon 上的演讲中正式发布了 Kubernetes。你可以在此处观看该视频: + +
        + + + +但是,无论多么原始,这小小的一步足以激起一个开始强大而且变得更强大的社区的兴趣。在过去的四年里,Kubernetes 已经超出了我们所有人的期望。我们对 Kubernetes 社区的所有人员表示感谢。该项目所取得的成功不仅基于代码和技术,还基于一群出色的人聚集在一起所做的有意义的事情。Sarah Novotny 策划的一套 [Kubernetes 价值观](https://github.com/kubernetes/steering/blob/master/values.md)是以上最好的表现形式。 + +让我们一起期待下一个 4 年!🎉🎉🎉 diff --git a/content/zh/blog/_posts/2018-06-07-dynamic-ingress-kubernetes.md b/content/zh-cn/blog/_posts/2018-06-07-dynamic-ingress-kubernetes.md similarity index 98% rename from content/zh/blog/_posts/2018-06-07-dynamic-ingress-kubernetes.md rename to content/zh-cn/blog/_posts/2018-06-07-dynamic-ingress-kubernetes.md index bb8d7d848c1fd..02f8851c7341e 100644 --- a/content/zh/blog/_posts/2018-06-07-dynamic-ingress-kubernetes.md +++ b/content/zh-cn/blog/_posts/2018-06-07-dynamic-ingress-kubernetes.md @@ -1,16 +1,16 @@ --- -title: 'Kubernetes 内的动态 Ingress' +title: 'Kubernetes 的动态 Ingress' +date: 2018-06-07 layout: blog +Author: Richard Li (Datawire) +slug: dynamic-ingress-in-kubernetes --- - -作者: Richard Li (Datawire) - + + +作者: Daniel Imberman (Bloomberg LP) + + +## 介绍 + +作为 Bloomberg [持续致力于开发 Kubernetes 生态系统](https://www.techatbloomberg.com/blog/bloomberg-awarded-first-cncf-end-user-award-contributions-kubernetes/)的一部分, +我们很高兴能够宣布 Kubernetes Airflow Operator 的发布; +[Apache Airflow](https://airflow.apache.org/)的一种机制,一种流行的工作流程编排框架, +使用 Kubernetes API 可以在本机启动任意的 Kubernetes Pod。 + + +## 什么是 Airflow? + +Apache Airflow 是“配置即代码”的 DevOps 理念的一种实现。 +Airflow 允许用户使用简单的 Python 对象 DAG(有向无环图)启动多步骤流水线。 +你可以在易于阅读的 UI 中定义依赖关系,以编程方式构建复杂的工作流,并监视调度的作业。 + +Airflow DAGs +Airflow UI + + +## 为什么在 Kubernetes 上使用 Airflow? + +自成立以来,Airflow 的最大优势在于其灵活性。 +Airflow 提供广泛的服务集成,包括Spark和HBase,以及各种云提供商的服务。 +Airflow 还通过其插件框架提供轻松的可扩展性。 +但是,该项目的一个限制是 Airflow 用户仅限于执行时 Airflow 站点上存在的框架和客户端。 +单个组织可以拥有各种 Airflow 工作流程,范围从数据科学流到应用程序部署。 +用例中的这种差异会在依赖关系管理中产生问题,因为两个团队可能会在其工作流程使用截然不同的库。 + +为了解决这个问题,我们使 Kubernetes 允许用户启动任意 Kubernetes Pod 和配置。 +Airflow 用户现在可以在其运行时环境,资源和机密上拥有全部权限,基本上将 Airflow 转变为“你想要的任何工作”工作流程协调器。 + + +## Kubernetes Operator + +在进一步讨论之前,我们应该澄清 Airflow 中的 [Operator](https://airflow.apache.org/concepts.html#operators) 是一个任务定义。 +当用户创建 DAG 时,他们将使用像 “SparkSubmitOperator” 或 “PythonOperator” 这样的 Operator 分别提交/监视 Spark 作业或 Python 函数。 +Airflow 附带了 Apache Spark,BigQuery,Hive 和 EMR 等框架的内置运算符。 +它还提供了一个插件入口点,允许DevOps工程师开发自己的连接器。 + +Airflow 用户一直在寻找更易于管理部署和 ETL 流的方法。 +在增加监控的同时,任何解耦流程的机会都可以减少未来的停机等问题。 +以下是 Airflow Kubernetes Operator 提供的好处: + + + * **提高部署灵活性:** +Airflow 的插件 API一直为希望在其 DAG 中测试新功能的工程师提供了重要的福利。 +不利的一面是,每当开发人员想要创建一个新的 Operator 时,他们就必须开发一个全新的插件。 +现在,任何可以在 Docker 容器中运行的任务都可以通过完全相同的运算符访问,而无需维护额外的 Airflow 代码。 + + + * **配置和依赖的灵活性:** + +对于在静态 Airflow 工作程序中运行的 Operator,依赖关系管理可能变得非常困难。 +如果开发人员想要运行一个需要 [SciPy](https://www.scipy.org) 的任务和另一个需要 [NumPy](http://www.numpy.org) 的任务, +开发人员必须维护所有 Airflow 节点中的依赖关系或将任务卸载到其他计算机(如果外部计算机以未跟踪的方式更改,则可能导致错误)。 +自定义 Docker 镜像允许用户确保任务环境,配置和依赖关系完全是幂等的。 + + + * **使用kubernetes Secret以增加安全性:** +处理敏感数据是任何开发工程师的核心职责。Airflow 用户总有机会在严格条款的基础上隔离任何API密钥,数据库密码和登录凭据。 +使用 Kubernetes 运算符,用户可以利用 Kubernetes Vault 技术存储所有敏感数据。 +这意味着 Airflow 工作人员将永远无法访问此信息,并且可以容易地请求仅使用他们需要的密码信息构建 Pod。 + + +# 架构 + +Airflow Architecture + +Kubernetes Operator 使用 [Kubernetes Python客户端](https://github.com/kubernetes-client/Python)生成由 APIServer 处理的请求(1)。 +然后,Kubernetes将使用你定义的需求启动你的 Pod(2)。 +镜像文件中将加载环境变量,Secret 和依赖项,执行单个命令。 +一旦启动作业,Operator 只需要监视跟踪日志的状况(3)。 +用户可以选择将日志本地收集到调度程序或当前位于其 Kubernetes 集群中的任何分布式日志记录服务。 + + +# 使用 Kubernetes Operator +## 一个基本的例子 + +以下 DAG 可能是我们可以编写的最简单的示例,以显示 Kubernetes Operator 的工作原理。 +这个 DAG 在 Kubernetes 上创建了两个 Pod:一个带有 
Python 的 Linux 发行版和一个没有它的基本 Ubuntu 发行版。 +Python Pod 将正确运行 Python 请求,而没有 Python 的那个将向用户报告失败。 +如果 Operator 正常工作,则应该完成 “passing-task” Pod,而“ falling-task” Pod 则向 Airflow 网络服务器返回失败。 + +```Python +from airflow import DAG +from datetime import datetime, timedelta +from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator +from airflow.operators.dummy_operator import DummyOperator + + +default_args = { + 'owner': 'airflow', + 'depends_on_past': False, + 'start_date': datetime.utcnow(), + 'email': ['airflow@example.com'], + 'email_on_failure': False, + 'email_on_retry': False, + 'retries': 1, + 'retry_delay': timedelta(minutes=5) +} + +dag = DAG( + 'kubernetes_sample', default_args=default_args, schedule_interval=timedelta(minutes=10)) + + +start = DummyOperator(task_id='run_this_first', dag=dag) + +passing = KubernetesPodOperator(namespace='default', + image="Python:3.6", + cmds=["Python","-c"], + arguments=["print('hello world')"], + labels={"foo": "bar"}, + name="passing-test", + task_id="passing-task", + get_logs=True, + dag=dag + ) + +failing = KubernetesPodOperator(namespace='default', + image="ubuntu:1604", + cmds=["Python","-c"], + arguments=["print('hello world')"], + labels={"foo": "bar"}, + name="fail", + task_id="failing-task", + get_logs=True, + dag=dag + ) + +passing.set_upstream(start) +failing.set_upstream(start) +``` +Basic DAG Run + + +## 但这与我的工作流程有什么关系? + +虽然这个例子只使用基本映像,但 Docker 的神奇之处在于,这个相同的 DAG 可以用于你想要的任何图像/命令配对。 +以下是推荐的 CI/CD 管道,用于在 Airflow DAG 上运行生产就绪代码。 + +### 1:github 中的 PR + +使用Travis或Jenkins运行单元和集成测试,请你的朋友PR你的代码,并合并到主分支以触发自动CI构建。 + +### 2:CI/CD 构建 Jenkins - > Docker 镜像 + +[在 Jenkins 构建中生成 Docker 镜像和更新版本](https://getintodevops.com/blog/building-your-first-Docker-image-with-jenkins-2-guide-for-developers)。 + +### 3:Airflow 启动任务 + +最后,更新你的 DAG 以反映新版本,你应该准备好了! + +```Python +production_task = KubernetesPodOperator(namespace='default', + # image="my-production-job:release-1.0.1", <-- old release + image="my-production-job:release-1.0.2", + cmds=["Python","-c"], + arguments=["print('hello world')"], + name="fail", + task_id="failing-task", + get_logs=True, + dag=dag + ) +``` + + +# 启动测试部署 + +由于 Kubernetes Operator 尚未发布,我们尚未发布官方 +[helm](https://helm.sh/) 图表或 Operator(但两者目前都在进行中)。 +但是,我们在下面列出了基本部署的说明,并且正在积极寻找测试人员来尝试这一新功能。 +要试用此系统,请按以下步骤操作: + +## 步骤1:将 kubeconfig 设置为指向 kubernetes 集群 + +## 步骤2:克隆 Airflow 仓库: + +运行 `git clone https://github.com/apache/incubator-airflow.git` 来克隆官方 Airflow 仓库。 + +## 步骤3:运行 + +为了运行这个基本 Deployment,我们正在选择我们目前用于 Kubernetes Executor 的集成测试脚本(将在本系列的下一篇文章中对此进行解释)。 +要启动此部署,请运行以下三个命令: + +``` +sed -ie "s/KubernetesExecutor/LocalExecutor/g" scripts/ci/kubernetes/kube/configmaps.yaml +./scripts/ci/kubernetes/Docker/build.sh +./scripts/ci/kubernetes/kube/deploy.sh +``` + + +在我们继续之前,让我们讨论这些命令正在做什么: + +### sed -ie "s/KubernetesExecutor/LocalExecutor/g" scripts/ci/kubernetes/kube/configmaps.yaml + +Kubernetes Executor 是另一种 Airflow 功能,允许动态分配任务已解决幂等 Pod 的问题。 +我们将其切换到 LocalExecutor 的原因只是一次引入一个功能。 +如果你想尝试 Kubernetes Executor,欢迎你跳过此步骤,但我们将在以后的文章中详细介绍。 + +### ./scripts/ci/kubernetes/Docker/build.sh + +此脚本将对Airflow主分支代码进行打包,以根据Airflow的发行文件构建Docker容器 + +### ./scripts/ci/kubernetes/kube/deploy.sh + +最后,我们在你的集群上创建完整的Airflow部署。这包括 Airflow 配置,postgres 后端,web 服务器和调度程序以及之间的所有必要服务。 +需要注意的一点是,提供的角色绑定是集群管理员,因此如果你没有该集群的权限级别,可以在 scripts/ci/kubernetes/kube/airflow.yaml 中进行修改。 + +## 步骤4:登录你的网络服务器 + +现在你的 Airflow 实例正在运行,让我们来看看 UI! 
+用户界面位于 Airflow Pod的 8080 端口,因此只需运行即可: + +``` +WEB=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep "airflow" | head -1) +kubectl port-forward $WEB 8080:8080 +``` + + +现在,Airflow UI 将存在于 http://localhost:8080上。 +要登录,只需输入`airflow`/`airflow`,你就可以完全访问 Airflow Web UI。 + +## 步骤5:上传测试文档 + +要修改/添加自己的 DAG,可以使用 `kubectl cp` 将本地文件上传到 Airflow 调度程序的 DAG 文件夹中。 +然后,Airflow 将读取新的 DAG 并自动将其上传到其系统。以下命令将任何本地文件上载到正确的目录中: + +`kubectl cp /:/root/airflow/dags -c scheduler` + + +## 步骤6:使用它! +# 那么我什么时候可以使用它? + +虽然此功能仍处于早期阶段,但我们希望在未来几个月内发布该功能以进行广泛发布。 + +# 参与其中 + +此功能只是将 Apache Airflow 集成到 Kubernetes 中的多项主要工作的开始。 +Kubernetes Operator 已合并到 [Airflow 的 1.10 发布分支](https://github.com/apache/incubator-airflow/tree/v1-10-test)(实验模式中的执行模块), +以及完整的 k8s 本地调度程序称为 Kubernetes Executor(即将发布文章)。 +这些功能仍处于早期采用者/贡献者可能对这些功能的未来产生巨大影响的阶段。 + +对于有兴趣加入这些工作的人,我建议按照以下步骤: + + * 加入 airflow-dev 邮件列表 dev@airflow.apache.org。 + * 在 [Apache Airflow JIRA](https://issues.apache.org/jira/projects/AIRFLOW/issues/)中提出问题 + * 周三上午 10点 太平洋标准时间加入我们的 SIG-BigData 会议。 + * 在 kubernetes.slack.com 上的 #sig-big-data 找到我们。 + +特别感谢 Apache Airflow 和 Kubernetes 社区,特别是 Grant Nicholas,Ben Goldberg,Anirudh Ramanathan,Fokko Dreisprong 和 Bolke de Bruin, +感谢你对这些功能的巨大帮助以及我们未来的努力。 diff --git a/content/zh/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md b/content/zh-cn/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md similarity index 99% rename from content/zh/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md rename to content/zh-cn/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md index 1d6ed8025e3ee..7484f6a372271 100644 --- a/content/zh/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md +++ b/content/zh-cn/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md @@ -1,11 +1,14 @@ --- -title: 基于IPVS的集群内部负载均衡 -cn-approvers: -- congfairy layout: blog -title: 'IPVS-Based In-Cluster Load Balancing Deep Dive' +title: '基于 IPVS 的集群内部负载均衡' date: 2018-07-09 +slug: ipvs-based-in-cluster-load-balancing-deep-dive --- + ## 一些特殊功能 -标准的 CoreDNS Kubernetes 配置旨在与以前的 kube-dns 在行为上向后兼容。但是,通过进行一些配置更改,CoreDNS 允许您修改 DNS 服务发现在群集中的工作方式。这些功能中的许多功能仍要符合 [Kubernetes DNS规范](https://github.com/kubernetes/dns/blob/master/docs/specification.md);它们在增强了功能的同时保持向后兼容。由于 CoreDNS 并非 *仅* 用于 Kubernetes,而是通用的 DNS 服务器,因此您可以做很多超出该规范的事情。 +标准的 CoreDNS Kubernetes 配置旨在与以前的 kube-dns 在行为上向后兼容。但是,通过进行一些配置更改,CoreDNS 允许您修改 DNS 服务发现在集群中的工作方式。这些功能中的许多功能仍要符合 [Kubernetes DNS规范](https://github.com/kubernetes/dns/blob/master/docs/specification.md);它们在增强了功能的同时保持向后兼容。由于 CoreDNS 并非 *仅* 用于 Kubernetes,而是通用的 DNS 服务器,因此您可以做很多超出该规范的事情。 -Kubernetes v1.10 使得可以通过 Beta 版本的[配置文件](/zh/docs/tasks/administer-cluster/kubelet-config-file/) +Kubernetes v1.10 使得可以通过 Beta 版本的[配置文件](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) API 配置 kubelet。 Kubernetes 已经提供了用于在 API 服务器中存储任意文件数据的 ConfigMap 抽象。 @@ -93,7 +92,7 @@ Dynamic Kubelet configuration provides the following core features: -要使用动态 Kubelet 配置功能,群集管理员或服务提供商将首先发布包含所需配置的 ConfigMap, +要使用动态 Kubelet 配置功能,集群管理员或服务提供商将首先发布包含所需配置的 ConfigMap, 然后设置每个 Node.Spec.ConfigSource.ConfigMap 引用以指向新的 ConfigMap。 运营商可以以他们喜欢的速率更新这些参考,从而使他们能够执行新配置的受控部署。 diff --git a/content/zh/blog/_posts/2018-08-02-dynamically-expand-volume-csi.md b/content/zh-cn/blog/_posts/2018-08-02-dynamically-expand-volume-csi.md similarity index 98% rename from content/zh/blog/_posts/2018-08-02-dynamically-expand-volume-csi.md rename to content/zh-cn/blog/_posts/2018-08-02-dynamically-expand-volume-csi.md index 
755239e9eb9af..51217af127ba9 100644 --- a/content/zh/blog/_posts/2018-08-02-dynamically-expand-volume-csi.md +++ b/content/zh-cn/blog/_posts/2018-08-02-dynamically-expand-volume-csi.md @@ -2,14 +2,12 @@ layout: blog title: '使用 CSI 和 Kubernetes 实现卷的动态扩容' date: 2018-08-02 +slug: dynamically-expand-volume-with-csi-and-kubernetes --- - -更多详细信息,请访问:https://github.com/container-storage-interface/spec/blob/master/spec.md +更多详细信息,请访问: https://github.com/container-storage-interface/spec/blob/master/spec.md ---- -layout: blog -title: '机器可以完成这项工作,一个关于 kubernetes 测试、CI 和自动化贡献者体验的故事' -date: 2019-08-29 ---- **更新(2021 年 12 月):** “Kubernetes 从 v1.23 开始具有内置 gRPC 健康探测。 -了解更多信息,请参阅[配置存活探针、就绪探针和启动探针](/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)。 +了解更多信息,请参阅[配置存活探针、就绪探针和启动探针](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)。 本文最初是为有关实现相同任务的外部工具所写。” RuntimeClass 资源是将运行时属性显示到控制平面的重要基础。 -例如,要对具有支持不同运行时间的异构节点的群集实施调度程序支持,我们可以在 RuntimeClass 定义中添加 -[NodeAffinity](/zh/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)条件。 +例如,要对具有支持不同运行时间的异构节点的集群实施调度程序支持,我们可以在 RuntimeClass 定义中添加 +[NodeAffinity](/zh-cn/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)条件。 另一个需要解决的领域是管理可变资源需求以运行不同运行时的 Pod。 [Pod Overhead 提案](https://docs.google.com/document/d/1EJKT4gyl58-kzt2bnwkv08MIUZ6lkDpXcxkHqCvvAp4/preview) 是一项较早的尝试,与 RuntimeClass 设计非常吻合,并且可能会进一步推广。 @@ -107,7 +106,7 @@ Many other RuntimeClass extensions have also been proposed, and will be revisite - 提供运行时支持的可选功能,并更好地查看由不兼容功能导致的错误。 - 自动运行时或功能发现,支持无需手动配置的调度决策。 - 标准化或一致的 RuntimeClass 名称,用于定义一组具有相同名称的 RuntimeClass 的集群应支持的属性。 -- 动态注册附加的运行时,因此用户可以在不停机的情况下在现有群集上安装新的运行时。 +- 动态注册附加的运行时,因此用户可以在不停机的情况下在现有集群上安装新的运行时。 - 根据 Pod 的要求“匹配” RuntimeClass。 例如,指定运行时属性并使系统与适当的 RuntimeClass 匹配,而不是通过名称显式分配 RuntimeClass。 @@ -129,7 +128,7 @@ RuntimeClass will be under active development at least through 2019, and we’re --> - 试试吧! 
作为Alpha功能,还有一些其他设置步骤可以使用RuntimeClass。 - 有关如何使其运行,请参考 [RuntimeClass文档](/zh/docs/concepts/containers/runtime-class/#runtime-class) 。 + 有关如何使其运行,请参考 [RuntimeClass文档](/zh-cn/docs/concepts/containers/runtime-class/#runtime-class) 。 - 查看 [RuntimeClass Kubernetes 增强建议](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md) 以获取更多细节设计细节。 - [沙盒隔离级别决策](https://docs.google.com/document/d/1fe7lQUjYKR0cijRmSbH_y0_l3CYPkwtQa5ViywuNo8Q/preview) 记录了最初使 RuntimeClass 成为 Pod 级别选项的思考过程。 diff --git a/content/zh/blog/_posts/2018-10-11-topology-aware-volume-provisioning.md b/content/zh-cn/blog/_posts/2018-10-11-topology-aware-volume-provisioning.md similarity index 82% rename from content/zh/blog/_posts/2018-10-11-topology-aware-volume-provisioning.md rename to content/zh-cn/blog/_posts/2018-10-11-topology-aware-volume-provisioning.md index 3aefd018ab019..940996da208e2 100644 --- a/content/zh/blog/_posts/2018-10-11-topology-aware-volume-provisioning.md +++ b/content/zh-cn/blog/_posts/2018-10-11-topology-aware-volume-provisioning.md @@ -2,13 +2,12 @@ layout: blog title: 'Kubernetes 中的拓扑感知数据卷供应' date: 2018-10-11 +slug: topology-aware-volume-provisioning-in-kubernetes --- -通过提供拓扑感知动态卷供应功能,具有持久卷的多区域集群体验在 Kubernetes 1.12 中得到了改进。此功能使得 Kubernetes 在动态供应卷时能做出明智的决策,方法是从调度器获得为 Pod 提供数据卷的最佳位置。在多区域集群环境,这意味着数据卷能够在满足你的 Pod 运行需要的合适的区域被供应,从而允许您跨故障域轻松部署和扩展有状态工作负载,从而提供高可用性和容错能力。 +通过提供拓扑感知动态卷供应功能,具有持久卷的多区域集群体验在 Kubernetes 1.12 +中得到了改进。此功能使得 Kubernetes 在动态供应卷时能做出明智的决策,方法是从调度器获得为 +Pod 提供数据卷的最佳位置。在多区域集群环境,这意味着数据卷能够在满足你的 Pod +运行需要的合适的区域被供应,从而允许你跨故障域轻松部署和扩展有状态工作负载,从而提供高可用性和容错能力。 -在此功能被提供之前,在多区域集群中使用区域化的持久磁盘(例如 AWS ElasticBlockStore,Azure Disk,GCE PersistentDisk)运行有状态工作负载存在许多挑战。动态供应独立于 Pod 调度处理,这意味着只要您创建了一个 PersistentVolumeClaim(PVC),一个卷就会被供应。这也意味着供应者不知道哪些 Pod 正在使用该卷,也不清楚任何可能影响调度的 Pod 约束。 +在此功能被提供之前,在多区域集群中使用区域化的持久磁盘(例如 AWS ElasticBlockStore、 +Azure Disk、GCE PersistentDisk)运行有状态工作负载存在许多挑战。动态供应独立于 Pod +调度处理,这意味着只要你创建了一个 PersistentVolumeClaim(PVC),一个卷就会被供应。 +这也意味着供应者不知道哪些 Pod 正在使用该卷,也不清楚任何可能影响调度的 Pod 约束。 * AWS EBS * Azure Disk -* GCE PD (包括 Regional PD) +* GCE PD(包括 Regional PD) * CSI(alpha) - 目前只有 GCE PD CSI 驱动实现了拓扑支持 虽然最初支持的插件集都是基于区域的,但我们设计此功能时遵循 Kubernetes 跨环境可移植性的原则。 拓扑规范是通用的,并使用类似于基于标签的规范,如 Pod nodeSelectors 和 nodeAffinity。 -该机制允许您定义自己的拓扑边界,例如内部部署集群中的机架,而无需修改调度程序以了解这些自定义拓扑。 +该机制允许你定义自己的拓扑边界,例如内部部署集群中的机架,而无需修改调度程序以了解这些自定义拓扑。 此外,拓扑信息是从 Pod 规范中抽象出来的,因此 Pod 不需要了解底层存储系统的拓扑特征。 -这意味着您可以在多个集群、环境和存储系统中使用相同的 Pod 规范。 +这意味着你可以在多个集群、环境和存储系统中使用相同的 Pod 规范。 -要启用此功能,您需要做的就是创建一个将 `volumeBindingMode` 设置为 `WaitForFirstConsumer` 的 StorageClass: +要启用此功能,你需要做的就是创建一个将 `volumeBindingMode` 设置为 `WaitForFirstConsumer` 的 StorageClass: ``` kind: StorageClass @@ -210,7 +215,7 @@ spec: -之后,您可以看到根据 Pod 设置的策略在区域中配置卷: +之后,你可以看到根据 Pod 设置的策略在区域中配置卷: ``` $ kubectl get pv -o=jsonpath='{range .items[*]}{.spec.claimRef.name}{"\t"}{.metadata.labels.failure\-domain\.beta\.kubernetes\.io/zone}{"\n"}{end}' @@ -228,12 +233,13 @@ logs-web-1 us-central1-a -有关拓扑感知动态供应功能的官方文档可在此处获取:https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode +有关拓扑感知动态供应功能的官方文档可在此处获取: +https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode -有关 CSI 驱动程序的文档,请访问:https://kubernetes-csi.github.io/docs/ +有关 CSI 驱动程序的文档,请访问: https://kubernetes-csi.github.io/docs/ -如果您对此功能有反馈意见或有兴趣参与设计和开发,请加入 [Kubernetes 存储特别兴趣小组](https://github.com/kubernetes/community/tree/master/sig-storage)(SIG)。我们正在快速成长,并始终欢迎新的贡献者。 +如果你对此功能有反馈意见或有兴趣参与设计和开发,请加入 +[Kubernetes 
存储特别兴趣小组](https://github.com/kubernetes/community/tree/master/sig-storage)(SIG)。 +我们正在快速成长,并始终欢迎新的贡献者。 -特别感谢帮助推出此功能的所有贡献者,包括 Cheng Xing ([verult](https://github.com/verult))、Chuqiang Li ([lichuqiang](https://github.com/lichuqiang))、David Zhu ([davidz627](https://github.com/davidz627))、Deep Debroy ([ddebroy](https://github.com/ddebroy))、Jan Šafránek ([jsafrane](https://github.com/jsafrane))、Jordan Liggitt ([liggitt](https://github.com/liggitt))、Michelle Au ([msau42](https://github.com/msau42))、Pengfei Ni ([feiskyer](https://github.com/feiskyer))、Saad Ali ([saad-ali](https://github.com/saad-ali))、Tim Hockin ([thockin](https://github.com/thockin)),以及 Yecheng Fu ([cofyc](https://github.com/cofyc))。 +特别感谢帮助推出此功能的所有贡献者,包括 Cheng Xing ([verult](https://github.com/verult))、 +Chuqiang Li ([lichuqiang](https://github.com/lichuqiang))、David Zhu ([davidz627](https://github.com/davidz627))、 +Deep Debroy ([ddebroy](https://github.com/ddebroy))、Jan Šafránek ([jsafrane](https://github.com/jsafrane))、 +Jordan Liggitt ([liggitt](https://github.com/liggitt))、Michelle Au ([msau42](https://github.com/msau42))、 +Pengfei Ni ([feiskyer](https://github.com/feiskyer))、Saad Ali ([saad-ali](https://github.com/saad-ali))、 +Tim Hockin ([thockin](https://github.com/thockin)),以及 Yecheng Fu ([cofyc](https://github.com/cofyc))。 diff --git a/content/zh/blog/_posts/2018-10-15-steering-election-results.md b/content/zh-cn/blog/_posts/2018-10-15-steering-election-results.md similarity index 98% rename from content/zh/blog/_posts/2018-10-15-steering-election-results.md rename to content/zh-cn/blog/_posts/2018-10-15-steering-election-results.md index 8e7d5edf7e569..73d2845fe50a3 100644 --- a/content/zh/blog/_posts/2018-10-15-steering-election-results.md +++ b/content/zh-cn/blog/_posts/2018-10-15-steering-election-results.md @@ -2,13 +2,12 @@ layout: blog title: '2018 年督导委员会选举结果' date: 2018-10-15 +slug: 2018-steering-committee-election-results --- diff --git a/content/zh/blog/_posts/2018-10-16-kubernetes-2018-north-american-contributor-summit.md b/content/zh-cn/blog/_posts/2018-10-16-kubernetes-2018-north-american-contributor-summit.md similarity index 98% rename from content/zh/blog/_posts/2018-10-16-kubernetes-2018-north-american-contributor-summit.md rename to content/zh-cn/blog/_posts/2018-10-16-kubernetes-2018-north-american-contributor-summit.md index b504a5c6b6082..af6b93d768f73 100644 --- a/content/zh/blog/_posts/2018-10-16-kubernetes-2018-north-american-contributor-summit.md +++ b/content/zh-cn/blog/_posts/2018-10-16-kubernetes-2018-north-american-contributor-summit.md @@ -2,14 +2,14 @@ layout: "Blog" title: "Kubernetes 2018 年北美贡献者峰会" date: 2018-10-16 +slug: kubernetes-2018-north-american-contributor-summit --- + diff --git a/content/zh/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md b/content/zh-cn/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md similarity index 99% rename from content/zh/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md rename to content/zh-cn/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md index cf6d5e58cbd64..90831b02b030f 100644 --- a/content/zh/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md +++ b/content/zh-cn/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md @@ -2,13 +2,12 @@ layout: blog title: 'Kubernetes 文档更新,国际版' date: 2018-11-08 +slug: kubernetes-docs-updates-international-edition 
--- diff --git a/content/zh/blog/_posts/2018-12-05-new-contributor-shanghai.md b/content/zh-cn/blog/_posts/2018-12-05-new-contributor-shanghai.md similarity index 99% rename from content/zh/blog/_posts/2018-12-05-new-contributor-shanghai.md rename to content/zh-cn/blog/_posts/2018-12-05-new-contributor-shanghai.md index c3c74e429878e..5b8ddff2de21d 100644 --- a/content/zh/blog/_posts/2018-12-05-new-contributor-shanghai.md +++ b/content/zh-cn/blog/_posts/2018-12-05-new-contributor-shanghai.md @@ -2,14 +2,12 @@ layout: blog title: '新贡献者工作坊上海站' date: 2018-12-05 +slug: new-contributor-workshop-shanghai --- - + + +**作者: Zach Corleissen(Linux 基金会)** + +去年我们对 Kubernetes 网站进行了优化,加入了[多语言内容的支持](https://kubernetes.io/blog/2018/11/08/kubernetes-docs-updates-international-edition/)。贡献者们踊跃响应,加入了多种新的本地化内容:截至 2019 年 4 月,Kubernetes 文档有了 9 个不同语言的未完成版本,其中有 6 个是 2019 年加入的。在每个 Kubernetes 文档页面的上方,读者都可以看到一个语言选择器,其中列出了所有可用语言。 + +不论是完成度最高的[中文版 v1.12](https://v1-12.docs.kubernetes.io/zh-cn/),还是最新加入的[葡萄牙文版 v1.14](https://kubernetes.io/pt/),各语言的本地化内容还未完成,这是一个进行中的项目。如果读者有兴趣对现有本地化工作提供支持,请继续阅读。 + + +## 什么是本地化 + +翻译是以词表意的问题。而本地化在此基础之上,还包含了过程和设计方面的工作。 + +本地化和翻译很像,但是包含更多内容。除了进行翻译之外,本地化还要为编写和发布过程的框架进行优化。例如,Kubernetes.io 多数的站点浏览功能(按钮文字)都保存在[单独的文件](https://github.com/kubernetes/website/tree/master/i18n)之中。所以启动新本地化的过程中,需要包含加入对特定文件中字符串进行翻译的工作。 + +本地化很重要,能够有效的降低 Kubernetes 的采纳和支持门槛。如果能用母语阅读 Kubernetes 文档,就能更轻松的开始使用 Kubernetes,并对其发展作出贡献。 + + +## 如何启动本地化工作 + +不同语言的本地化工作都是单独的功能——和其它 Kubernetes 功能一致,贡献者们在一个 SIG 中进行本地化工作,分享出来进行评审,并加入项目。 + +贡献者们在团队中进行内容的本地化工作。因为自己不能批准自己的 PR,所以一个本地化团队至少应该有两个人——例如意大利文的本地化团队有两个人。这个团队规模可能很大:中文团队有几十个成员。 + +每个团队都有自己的工作流。有些团队手工完成所有的内容翻译;有些会使用带有翻译插件的编译器,并使用评审机来提供正确性的保障。SIG Docs 专注于输出的标准;这就给了本地化团队采用适合自己工作情况的工作流。这样一来,团队可以根据最佳实践进行协作,并以 Kubernetes 的社区精神进行分享。 + + +## 为本地化工作添砖加瓦 + +如果你有兴趣为 Kubernetes 文档加入新语种的本地化内容,[Kubernetes contribution guide](https://kubernetes.io/docs/contribute/localization/) 中包含了这方面的相关内容。 + +已经启动的的本地化工作同样需要支持。如果有兴趣为现存项目做出贡献,可以加入本地化团队的 Slack 频道,去做个自我介绍。各团队的成员会帮助你开始工作。 + +|语种|Slack 频道| +|---|---| +|中文|[#kubernetes-docs-zh](https://kubernetes.slack.com/messages/CE3LNFYJ1/)| +|英文|[#sig-docs](https://kubernetes.slack.com/messages/C1J0BPD2M/)| +|法文|[#kubernetes-docs-fr](https://kubernetes.slack.com/messages/CG838BFT9/)| +|德文|[#kubernetes-docs-de](https://kubernetes.slack.com/messages/CH4UJ2BAL/)| +|印地|[#kubernetes-docs-hi](https://kubernetes.slack.com/messages/CJ14B9BDJ/)| +|印度尼西亚文|[#kubernetes-docs-id](https://kubernetes.slack.com/messages/CJ1LUCUHM/)| +|意大利文|[#kubernetes-docs-it](https://kubernetes.slack.com/messages/CGB1MCK7X/)| +|日文|[#kubernetes-docs-ja](https://kubernetes.slack.com/messages/CAG2M83S8/)| +|韩文|[#kubernetes-docs-ko](https://kubernetes.slack.com/messages/CA1MMR86S/)| +|葡萄牙文|[#kubernetes-docs-pt](https://kubernetes.slack.com/messages/CJ21AS0NA/)| +|西班牙文|[#kubernetes-docs-es](https://kubernetes.slack.com/messages/CH7GB2E3B/)| + + + +## 下一步? + +最新的[印地文本地化](https://kubernetes.slack.com/messages/CJ14B9BDJ/)工作正在启动。为什么不加入你的语言? 
+ +身为 SIG Docs 的主席,我甚至希望本地化工作跳出文档范畴,直接为 Kubernetes 组件提供本地化支持。有什么组件是你希望支持不同语言的么?可以提交一个 [Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/tree/master/keps) 来促成这一进步。 \ No newline at end of file diff --git a/content/zh/blog/_posts/2019-05-14-expanding-our-contributor-workshops.md b/content/zh-cn/blog/_posts/2019-05-14-expanding-our-contributor-workshops.md similarity index 100% rename from content/zh/blog/_posts/2019-05-14-expanding-our-contributor-workshops.md rename to content/zh-cn/blog/_posts/2019-05-14-expanding-our-contributor-workshops.md diff --git a/content/zh/blog/_posts/2019-06-12-contributor-summit-shanghai.md b/content/zh-cn/blog/_posts/2019-06-12-contributor-summit-shanghai.md similarity index 99% rename from content/zh/blog/_posts/2019-06-12-contributor-summit-shanghai.md rename to content/zh-cn/blog/_posts/2019-06-12-contributor-summit-shanghai.md index 1d136352bdb72..ae932956ae689 100644 --- a/content/zh/blog/_posts/2019-06-12-contributor-summit-shanghai.md +++ b/content/zh-cn/blog/_posts/2019-06-12-contributor-summit-shanghai.md @@ -2,6 +2,7 @@ layout: blog title: '欢迎参加在上海举行的贡献者峰会' date: 2019-06-11 +slug: join-us-at-the-contributor-summit-in-shanghai --- -在接收请求被持久化为 Kubernetes 中的对象之前,Kubernetes 允许通过 [admission controller webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) 将策略决策与 API 服务器分离,从而拦截这些请求。[Gatekeeper](https://github.com/open-policy-agent/gatekeeper) 创建的目的是使用户能够通过配置(而不是代码)自定义控制许可,并使用户了解群集的状态,而不仅仅是针对评估状态的单个对象,在这些对象准许加入的时候。Gatekeeper 是 Kubernetes 的一个可定制的许可 webhook ,它由 [Open Policy Agent (OPA)](https://www.openpolicyagent.org) 强制执行, OPA 是 Cloud Native 环境下的策略引擎,由 CNCF 主办。 +在接收请求被持久化为 Kubernetes 中的对象之前,Kubernetes 允许通过 [admission controller webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) 将策略决策与 API 服务器分离,从而拦截这些请求。[Gatekeeper](https://github.com/open-policy-agent/gatekeeper) 创建的目的是使用户能够通过配置(而不是代码)自定义控制许可,并使用户了解集群的状态,而不仅仅是针对评估状态的单个对象,在这些对象准许加入的时候。Gatekeeper 是 Kubernetes 的一个可定制的许可 webhook ,它由 [Open Policy Agent (OPA)](https://www.openpolicyagent.org) 强制执行, OPA 是 Cloud Native 环境下的策略引擎,由 CNCF 主办。 ### 审核 -根据群集中强制执行的 Constraint,审核功能可定期评估复制的资源,并检测先前存在的错误配置。Gatekeeper 将审核结果存储为 `violations`,在相关 Constraint 的 `status` 字段中列出。 +根据集群中强制执行的 Constraint,审核功能可定期评估复制的资源,并检测先前存在的错误配置。Gatekeeper 将审核结果存储为 `violations`,在相关 Constraint 的 `status` 字段中列出。 ```yaml apiVersion: constraints.gatekeeper.sh/v1beta1 diff --git a/content/zh/blog/_posts/2019-09-24-san-diego-contributor-summit.md b/content/zh-cn/blog/_posts/2019-09-24-san-diego-contributor-summit.md similarity index 100% rename from content/zh/blog/_posts/2019-09-24-san-diego-contributor-summit.md rename to content/zh-cn/blog/_posts/2019-09-24-san-diego-contributor-summit.md diff --git a/content/zh/blog/_posts/2019-10-03-2019-Steering-Committee-Election-Results.md b/content/zh-cn/blog/_posts/2019-10-03-2019-Steering-Committee-Election-Results.md similarity index 100% rename from content/zh/blog/_posts/2019-10-03-2019-Steering-Committee-Election-Results.md rename to content/zh-cn/blog/_posts/2019-10-03-2019-Steering-Committee-Election-Results.md diff --git a/content/zh/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md b/content/zh-cn/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md similarity index 100% rename from content/zh/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md rename to 
content/zh-cn/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md diff --git a/content/zh/blog/_posts/2019-10-29-2019-sig-docs-survey.md b/content/zh-cn/blog/_posts/2019-10-29-2019-sig-docs-survey.md similarity index 100% rename from content/zh/blog/_posts/2019-10-29-2019-sig-docs-survey.md rename to content/zh-cn/blog/_posts/2019-10-29-2019-sig-docs-survey.md diff --git a/content/zh/blog/_posts/2019-11-05-kubernetes-with-microk8s.md b/content/zh-cn/blog/_posts/2019-11-05-kubernetes-with-microk8s.md similarity index 99% rename from content/zh/blog/_posts/2019-11-05-kubernetes-with-microk8s.md rename to content/zh-cn/blog/_posts/2019-11-05-kubernetes-with-microk8s.md index 2e789dbcac8ca..c01a3af556d56 100644 --- a/content/zh/blog/_posts/2019-11-05-kubernetes-with-microk8s.md +++ b/content/zh-cn/blog/_posts/2019-11-05-kubernetes-with-microk8s.md @@ -1,14 +1,14 @@ --- +layout: blog title: '使用 Microk8s 在 Linux 上本地运行 Kubernetes' - date: 2019-11-26 +slug: running-kubernetes-locally-on-linux-with-microk8s --- + diff --git a/content/zh/blog/_posts/2019-11-26-cloud-native-java-controller-sdk.md b/content/zh-cn/blog/_posts/2019-11-26-cloud-native-java-controller-sdk.md similarity index 100% rename from content/zh/blog/_posts/2019-11-26-cloud-native-java-controller-sdk.md rename to content/zh-cn/blog/_posts/2019-11-26-cloud-native-java-controller-sdk.md diff --git a/content/zh/blog/_posts/2019-12-09-kubernetes-1.17-release-announcement.md b/content/zh-cn/blog/_posts/2019-12-09-kubernetes-1.17-release-announcement.md similarity index 95% rename from content/zh/blog/_posts/2019-12-09-kubernetes-1.17-release-announcement.md rename to content/zh-cn/blog/_posts/2019-12-09-kubernetes-1.17-release-announcement.md index 8792bd7c10efd..31554dbe91703 100644 --- a/content/zh/blog/_posts/2019-12-09-kubernetes-1.17-release-announcement.md +++ b/content/zh-cn/blog/_posts/2019-12-09-kubernetes-1.17-release-announcement.md @@ -69,9 +69,9 @@ Standard labels are used by Kubernetes components to support some features. For The labels are reaching general availability in this release. Kubernetes components have been updated to populate the GA and beta labels and to react to both. However, if you are using the beta labels in your pod specs for features such as node affinity, or in your custom controllers, we recommend that you start migrating them to the new GA labels. You can find the documentation for the new labels here: --> -- [实例类型](/zh/docs/reference/labels-annotations-taints/#nodekubernetesioinstance-type) -- [地区](/zh/docs/reference/labels-annotations-taints/#topologykubernetesioregion) -- [区域](/zh/docs/reference/labels-annotations-taints/#topologykubernetesiozone) +- [实例类型](/zh-cn/docs/reference/labels-annotations-taints/#nodekubernetesioinstance-type) +- [地区](/zh-cn/docs/reference/labels-annotations-taints/#topologykubernetesioregion) +- [区域](/zh-cn/docs/reference/labels-annotations-taints/#topologykubernetesiozone) ### 卷快照是什么? - + 许多的存储系统(如谷歌云持久化磁盘,亚马逊弹性块存储和许多的内部存储系统)支持为持久卷创建快照。快照代表卷在一个时间点的复制。它可用于配置新卷(使用快照数据提前填充)或恢复卷到一个之前的状态(用快照表示)。 -支持所有这些特性是Kubernets负载可移植的目标:Kubernetes旨在分布式系统应用和底层集群之间创建一个抽象层,使得应用可以不感知其运行集群的具体信息并且部署也不需特定集群的知识。 +支持所有这些特性是Kubernetes负载可移植的目标:Kubernetes旨在分布式系统应用和底层集群之间创建一个抽象层,使得应用可以不感知其运行集群的具体信息并且部署也不需特定集群的知识。 @@ -145,7 +147,8 @@ Prior to CSI, Kubernetes provided a powerful volume plugin system. These volume 随着更多容器存储接口驱动变成生产环境可用,我们希望所有的Kubernetes用户从容器存储接口模型中获益。然而,我们不希望强制用户以破坏现有基本可用的存储接口的方式去改变负载和配置。道路很明确,我们将不得不用CSI替换树内插件接口。什么是容器存储接口迁移? 
在容器存储接口迁移上所做的努力使得替换现有的树内存储插件,如`kubernetes.io/gce-pd`或`kubernetes.io/aws-ebs`,为相应的容器存储接口驱动成为可能。如果容器存储接口迁移正常工作,Kubernetes终端用户不会注意到任何差别。迁移过后,Kubernetes用户可以继续使用现有接口来依赖树内存储插件的功能。 @@ -165,7 +168,8 @@ The Kubernetes team has worked hard to ensure the stability of storage APIs and 你可以在这篇博客中阅读更多关于[容器存储接口迁移成为公开测试版](https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/). +You can read more in the blog entry about [CSI migration going to beta](https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/). +--> ## 其它更新 ### 可用性 Kubernetes 1.17 可以[在GitHub下载](https://github.com/kubernetes/kubernetes/releases/tag/v1.17.0)。开始使用Kubernetes,看看这些[交互教学](https://kubernetes.io/docs/tutorials/)。你可以非常容易使用[kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/)安装1.17。 ### 发布团队 -我们很高兴宣布 Kubernetes 1.18 版本的交付,这是我们 2020 年的第一版! Kubernetes 1.18 包含 38 个增强功能:15 项增强功能已转为稳定版,11 项增强功能处于 beta 阶段,12 项增强功能处于 alpha 阶段。 +我们很高兴宣布 Kubernetes 1.18 版本的交付,这是我们 2020 年的第一版!Kubernetes +1.18 包含 38 个增强功能:15 项增强功能已转为稳定版,11 项增强功能处于 beta +阶段,12 项增强功能处于 alpha 阶段。 -Kubernetes 1.18 是一个近乎 “完美” 的版本。 为了改善 beta 和稳定的特性,已进行了大量工作,以确保用户获得更好的体验。 我们在增强现有功能的同时也增加了令人兴奋的新特性,这些有望进一步增强用户体验。 +Kubernetes 1.18 是一个近乎 “完美” 的版本。为了改善 beta 和稳定的特性,已进行了大量工作, +以确保用户获得更好的体验。我们在增强现有功能的同时也增加了令人兴奋的新特性,这些有望进一步增强用户体验。 + -对 alpha,beta 和稳定版进行几乎同等程度的增强是一项伟大的成就。 它展现了社区在提高 Kubernetes 的可靠性以及继续扩展其现有功能方面所做的巨大努力。 +对 alpha、beta 和稳定版进行几乎同等程度的增强是一项伟大的成就。它展现了社区在提高 +Kubernetes 的可靠性以及继续扩展其现有功能方面所做的巨大努力。 -Kubernetes 在 1.18 版中的 Beta 阶段功能 [拓扑管理器特性](https://github.com/nolancon/website/blob/f4200307260ea3234540ef13ed80de325e1a7267/content/en/docs/tasks/administer-cluster/topology-manager.md) 启用 CPU 和设备(例如 SR-IOV VF)的 NUMA 对齐,这将使您的工作负载在针对低延迟而优化的环境中运行。在引入拓扑管理器之前,CPU 和设备管理器将做出彼此独立的资源分配决策。 这可能会导致在多处理器系统上非预期的资源分配结果,从而导致对延迟敏感的应用程序的性能下降。 +Kubernetes 在 1.18 版中的 Beta 阶段功能[拓扑管理器特性](https://github.com/nolancon/website/blob/f4200307260ea3234540ef13ed80de325e1a7267/content/en/docs/tasks/administer-cluster/topology-manager.md)启用 +CPU 和设备(例如 SR-IOV VF)的 NUMA 对齐,这将使你的工作负载在针对低延迟而优化的环境中运行。 +在引入拓扑管理器之前,CPU 和设备管理器将做出彼此独立的资源分配决策。 +这可能会导致在多处理器系统上非预期的资源分配结果,从而导致对延迟敏感的应用程序的性能下降。 -### Serverside Apply 推出Beta 2 +### Serverside Apply 推出 Beta 2 -Serverside Apply 在1.16 中进入 Beta 阶段,但现在在 1.18 中进入了第二个 Beta 阶段。 这个新版本将跟踪和管理所有新 Kubernetes 对象的字段更改,从而使您知道什么更改了资源以及何时发生了更改。 +Serverside Apply 在1.16 中进入 Beta 阶段,但现在在 1.18 中进入了第二个 Beta 阶段。 +这个新版本将跟踪和管理所有新 Kubernetes 对象的字段更改,从而使你知道什么更改了资源以及何时发生了更改。 -在 Kubernetes 1.18 中,Ingress 有两个重要的补充:一个新的 `pathType` 字段和一个新的 `IngressClass` 资源。`pathType` 字段允许指定路径的匹配方式。 除了默认的`ImplementationSpecific`类型外,还有新的 `Exact`和`Prefix` 路径类型。 +在 Kubernetes 1.18 中,Ingress 有两个重要的补充:一个新的 `pathType` 字段和一个新的 +`IngressClass` 资源。`pathType` 字段允许指定路径的匹配方式。除了默认的 +`ImplementationSpecific` 类型外,还有新的 `Exact` 和 `Prefix` 路径类型。 -`IngressClass` 资源用于描述 Kubernetes 集群中 Ingress 的类型。 Ingress 对象可以通过在Ingress 资源类型上使用新的`ingressClassName` 字段来指定与它们关联的类。 这个新的资源和字段替换了不再建议使用的 `kubernetes.io/ingress.class` 注解。 +`IngressClass` 资源用于描述 Kubernetes 集群中 Ingress 的类型。Ingress 对象可以通过在 +Ingress 资源类型上使用新的 `ingressClassName` 字段来指定与它们关联的类。 +这个新的资源和字段替换了不再建议使用的 `kubernetes.io/ingress.class` 注解。 -SIG-CLI 一直在争论着调试工具的必要性。随着 [临时容器](https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/) 的发展,我们如何使用基于 `kubectl exec` 的工具来支持开发人员的必要性变得越来越明显。 [`kubectl alpha debug` 命令](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/20190805-kubectl-debug.md) 的增加,(由于是 alpha 阶段,非常欢迎您反馈意见),使开发人员可以轻松地在集群中调试 Pod。我们认为这个功能的价值非常高。 此命令允许创建一个临时容器,该容器在要尝试检查的 Pod 
旁边运行,并且还附加到控制台以进行交互式故障排除。 +SIG-CLI 一直在争论着调试工具的必要性。随着[临时容器](https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/)的发展, +我们如何使用基于 `kubectl exec` 的工具来支持开发人员的必要性变得越来越明显。 +[`kubectl alpha debug` 命令](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/20190805-kubectl-debug.md)的增加, +(由于是 alpha 阶段,非常欢迎你反馈意见),使开发人员可以轻松地在集群中调试 Pod。 +我们认为这个功能的价值非常高。此命令允许创建一个临时容器,该容器在要尝试检查的 +Pod 旁边运行,并且还附加到控制台以进行交互式故障排除。 -用于 Windows 的 CSI 代理的 Alpha 版本随 Kubernetes 1.18 一起发布。 CSI 代理通过允许Windows 中的容器执行特权存储操作来启用 Windows 上的 CSI 驱动程序。 +用于 Windows 的 CSI 代理的 Alpha 版本随 Kubernetes 1.18 一起发布。CSI 代理通过允许 +Windows 中的容器执行特权存储操作来启用 Windows 上的 CSI 驱动程序。 -在我们的 [发布文档](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md)中查看 Kubernetes 1.18 发行版的完整详细信息。 +在我们的[发布文档](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md)中查看 +Kubernetes 1.18 发行版的完整详细信息。 -Kubernetes 1.18 可以在 [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.0) 上下载。 要开始使用Kubernetes,请查看这些 [交互教程](https://kubernetes.io/docs/tutorials/) 或通过[kind](https://kind.sigs.k8s.io/) 使用 Docker 容器运行本地 kubernetes 集群。您还可以使用[kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/)轻松安装 1.18。 +Kubernetes 1.18 可以在 [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.0) +上下载。要开始使用 Kubernetes,请查看这些[交互教程](https://kubernetes.io/docs/tutorials/)或通过 +[kind](https://kind.sigs.k8s.io/) 使用 Docker 容器运行本地 kubernetes 集群。你还可以使用 +[kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) 轻松安装 1.18。 -通过数百位贡献了技术和非技术内容的个人的努力,使本次发行成为可能。 特别感谢由 Searchable AI 的网站可靠性工程师 Jorge Alarcon Ochoa 领导的[发布团队](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.18/release_team.md)。 34 位发布团队成员协调了发布的各个方面,从文档到测试、验证和功能完整性。 +通过数百位贡献了技术和非技术内容的个人的努力,使本次发行成为可能。 +特别感谢由 Searchable AI 的网站可靠性工程师 Jorge Alarcon Ochoa +领导的[发布团队](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.18/release_team.md)。 +34 位发布团队成员协调了发布的各个方面,从文档到测试、验证和功能完整性。 -随着 Kubernetes 社区的发展壮大,我们的发布过程很好地展示了开源软件开发中的协作。 Kubernetes 继续快速获取新用户。 这种增长创造了一个积极的反馈回路,其中有更多的贡献者提交了代码,从而创建了更加活跃的生态系统。 迄今为止,Kubernetes 已有 [40,000 独立贡献者](https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1) 和一个超过3000人的活跃社区。 +随着 Kubernetes 社区的发展壮大,我们的发布过程很好地展示了开源软件开发中的协作。 +Kubernetes 继续快速获取新用户。这种增长创造了一个积极的反馈回路, +其中有更多的贡献者提交了代码,从而创建了更加活跃的生态系统。迄今为止,Kubernetes 已有 +[40,000 独立贡献者](https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1)和一个超过 3000 人的活跃社区。 -LHC 是世界上最大,功能最强大的粒子加速器。它是由来自世界各地成千上万科学家合作的结果,所有这些合作都是为了促进科学的发展。以类似的方式,Kubernetes 已经成为一个聚集了来自数百个组织的数千名贡献者–所有人都朝着在各个方面改善云计算的相同目标努力的项目! 发布名称“ A Bit Quarky” 的意思是提醒我们,非常规的想法可以带来巨大的变化,对开放性保持开放态度将有助于我们进行创新。 +LHC 是世界上最大,功能最强大的粒子加速器。它是由来自世界各地成千上万科学家合作的结果, +所有这些合作都是为了促进科学的发展。以类似的方式,Kubernetes +已经成为一个聚集了来自数百个组织的数千名贡献者–所有人都朝着在各个方面改善云计算的相同目标努力的项目! 
+发布名称 “A Bit Quarky” 的意思是提醒我们,非常规的想法可以带来巨大的变化,对开放性保持开放态度将有助于我们进行创新。 -Maru Lango 是目前居住在墨西哥城的设计师。她的专长是产品设计,她还喜欢使用 CSS + JS 进行品牌、插图和视觉实验,为技术和设计社区的多样性做贡献。您可能会在大多数社交媒体上以 @marulango 的身份找到她,或查看她的网站: https://marulango.com +Maru Lango 是目前居住在墨西哥城的设计师。她的专长是产品设计,她还喜欢使用 CSS + JS +进行品牌、插图和视觉实验,为技术和设计社区的多样性做贡献。你可能会在大多数社交媒体上以 +@marulango 的身份找到她,或查看她的网站: https://marulango.com -- 爱立信正在使用 Kubernetes 和其他云原生技术来交付[高标准的 5G 网络](https://www.cncf.io/case-study/ericsson/),这可以在 CI/CD 上节省多达 90% 的支出。 -- Zendesk 正在使用 Kubernetes [运行其现有应用程序的约 70%](https://www.cncf.io/case-study/zendesk/)。它还正在使所构建的所有新应用都可以在 Kubernetes 上运行,从而节省时间、提高灵活性并加快其应用程序开发的速度。 -- LifeMiles 因迁移到 Kubernetes 而[降低了 50% 的基础设施开支](https://www.cncf.io/case-study/lifemiles/)。Kubernetes 还使他们可以将其可用资源容量增加一倍。 +- 爱立信正在使用 Kubernetes 和其他云原生技术来交付[高标准的 5G 网络](https://www.cncf.io/case-study/ericsson/), + 这可以在 CI/CD 上节省多达 90% 的支出。 +- Zendesk 正在使用 Kubernetes [运行其现有应用程序的约 70%](https://www.cncf.io/case-study/zendesk/)。 + 它还正在使所构建的所有新应用都可以在 Kubernetes 上运行,从而节省时间、提高灵活性并加快其应用程序开发的速度。 +- LifeMiles 因迁移到 Kubernetes 而[降低了 50% 的基础设施开支](https://www.cncf.io/case-study/lifemiles/)。 + Kubernetes 还使他们可以将其可用资源容量增加一倍。 -- CNCF发布了[年度调查](https://www.cncf.io/blog/2020/03/04/2019-cncf-survey-results-are-here-deployments-are-growing-in-size-and-speed-as-cloud-native-adoption-becomes-mainstream/) 的结果,表明 Kubernetes 在生产中的使用正在飞速增长。调查发现,有78%的受访者在生产中使用Kubernetes,而去年这一比例为 58%。 -- CNCF 举办的 “Kubernetes入门” 课程有[超过 100,000 人注册](https://www.cncf.io/announcement/2020/01/28/cloud-native-computing-foundation-announces-introduction-to-kubernetes-course-surpasses-100000-registrations/)。 +- CNCF 发布了[年度调查](https://www.cncf.io/blog/2020/03/04/2019-cncf-survey-results-are-here-deployments-are-growing-in-size-and-speed-as-cloud-native-adoption-becomes-mainstream/)的结果, + 表明 Kubernetes 在生产中的使用正在飞速增长。调查发现,有 78% 的受访者在生产中使用 Kubernetes,而去年这一比例为 58%。 +- CNCF 举办的 “Kubernetes 入门” 课程有[超过 100,000 人注册](https://www.cncf.io/announcement/2020/01/28/cloud-native-computing-foundation-announces-introduction-to-kubernetes-course-surpasses-100000-registrations/)。 -CNCF 继续完善 DevStats。这是一个雄心勃勃的项目,旨在对项目中的无数贡献数据进行可视化展示。[K8s DevStats](https://k8s.devstats.cncf.io/d/12/dashboards?orgId=1) 展示了主要公司贡献者的贡献细目,以及一系列令人印象深刻的预定义的报告,涉及从贡献者个人的各方面到 PR 生命周期的各个方面。 +CNCF 继续完善 DevStats。这是一个雄心勃勃的项目,旨在对项目中的无数贡献数据进行可视化展示。 +[K8s DevStats](https://k8s.devstats.cncf.io/d/12/dashboards?orgId=1) 展示了主要公司贡献者的贡献细目, +以及一系列令人印象深刻的预定义的报告,涉及从贡献者个人的各方面到 PR 生命周期的各个方面。 -在过去的一个季度中,641 家不同的公司和超过 6,409 个个人为 Kubernetes 作出贡献。 [查看 DevStats](https://k8s.devstats.cncf.io/d/11/companies-contributing-in-repository-groups?orgId=1&var-period=m&var-repogroup_name=All) 以了解有关 Kubernetes 项目和社区发展速度的信息。 +在过去的一个季度中,641 家不同的公司和超过 6,409 个个人为 Kubernetes 作出贡献。 +[查看 DevStats](https://k8s.devstats.cncf.io/d/11/companies-contributing-in-repository-groups?orgId=1&var-period=m&var-repogroup_name=All) +以了解有关 Kubernetes 项目和社区发展速度的信息。 -Kubecon + CloudNativeCon EU 2020 已经推迟 - 有关最新信息,请查看[新型肺炎发布页面](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/attend/novel-coronavirus-update/)。 +Kubecon + CloudNativeCon EU 2020 已经推迟 - 有关最新信息, +请查看[新型肺炎发布页面](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/attend/novel-coronavirus-update/)。 -在 2020 年 4 月 23 日,和 Kubernetes 1.18 版本团队一起了解此版本的主要功能,包括 kubectl debug、拓扑管理器、Ingress 毕业为 V1 版本以及 client-go。 在此处注册:https://www.cncf.io/webinars/kubernetes-1-18/ 。 +在 2020 年 4 月 23 日,和 Kubernetes 1.18 版本团队一起了解此版本的主要功能, +包括 kubectl debug、拓扑管理器、Ingress 毕业为 V1 版本以及 client-go。 +在此处注册: https://www.cncf.io/webinars/kubernetes-1-18/ 。 -参与 Kubernetes 
的最简单方法是加入众多与您的兴趣相关的 [特别兴趣小组](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIGs) 之一。 您有什么想向 Kubernetes 社区发布的内容吗? 参与我们的每周 [社区会议](https://github.com/kubernetes/community/tree/master/communication),并通过以下渠道分享您的声音。 感谢您一直以来的反馈和支持。 +参与 Kubernetes 的最简单方法是加入众多与你的兴趣相关的[特别兴趣小组](https://github.com/kubernetes/community/blob/master/sig-list.md)(SIGs)之一。 +你有什么想向 Kubernetes 社区发布的内容吗?参与我们的每周[社区会议](https://github.com/kubernetes/community/tree/master/communication), +并通过以下渠道分享你的声音。感谢你一直以来的反馈和支持。 -虽然在典型部署中,我们已按日志量更新了99%以上的日志条目,但仍有数千个日志需要更新。 选择一个您要改进的文件或目录,然后[迁移现有的日志调用以使用结构化日志](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md)。这是对Kubernetes做出第一笔贡献的好方法! +虽然在典型部署中,我们已按日志量更新了99%以上的日志条目,但仍有数千个日志需要更新。 选择一个您要改进的文件或目录,然后[迁移现有的日志调用以使用结构化日志](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md)。这是对Kubernetes做出第一笔贡献的好方法! diff --git a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image01.png b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image01.png new file mode 100644 index 0000000000000..91e885613945a Binary files /dev/null and b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image01.png differ diff --git a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image02.png b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image02.png new file mode 100644 index 0000000000000..dfd14d7cdc994 Binary files /dev/null and b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image02.png differ diff --git a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image03.png b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image03.png new file mode 100644 index 0000000000000..443a6f2d671be Binary files /dev/null and b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image03.png differ diff --git a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image04.png b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image04.png new file mode 100644 index 0000000000000..e107adc88b6a3 Binary files /dev/null and b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image04.png differ diff --git a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image05.png b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image05.png new file mode 100644 index 0000000000000..6d80447d094a3 Binary files /dev/null and b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image05.png differ diff --git a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image06.png b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image06.png new file mode 100644 index 0000000000000..d40b2eb0b6838 Binary files /dev/null and b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image06.png differ diff --git a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image07.png b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image07.png new file mode 100644 index 0000000000000..fc3976040fe09 Binary files /dev/null and b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/image07.png differ diff --git a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/index.md b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/index.md new file mode 100644 index 0000000000000..d2f1fa1e407f4 --- /dev/null +++ b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/index.md 
@@ -0,0 +1,255 @@ +--- +layout: blog +title: "一个编排高可用应用的 Kubernetes 自定义调度器" +date: 2020-12-21 +slug: writing-crl-scheduler +--- + +**作者**: Chris Seto (Cockroach Labs) + + + +只要你愿意遵守规则,那么在 Kubernetes 上的部署和探索可以是相当愉快的。更多时候,事情会 "顺利进行"。 +然而,如果一个人对与必须保持存活的鳄鱼一起旅行或者是对必须保持可用的数据库进行扩展有兴趣, +情况可能会变得更复杂一点。 +相较于这个问题,建立自己的飞机或数据库甚至还可能更容易一些。撇开与鳄鱼的旅行不谈,扩展一个高可用的有状态系统也不是一件小事。 + + +任何系统的扩展都有两个主要组成部分。 +1. 增加或删除系统将运行的基础架构,以及 +2. 确保系统知道如何处理自身额外实例的添加和删除。 + + +大多数无状态系统,例如网络服务器,在创建时不需要意识到对等实例。而有状态的系统,包括像 CockroachDB 这样的数据库, +必须与它们的对等实例协调,并对数据进行 shuffle。运气好的话,CockroachDB 可以处理数据的再分布和复制。 +棘手的部分是在确保数据和实例分布在许多故障域(可用性区域)的操作过程中能够容忍故障的发生。 + + +Kubernetes 的职责之一是将 "资源"(如磁盘或容器)放入集群中,并满足其请求的约束。 +例如。"我必须在可用性区域 _A_"(见[在多个区域运行](/zh-cn/docs/setup/best-practices/multiple-zones/#nodes-are-labeled)), +或者 "我不能被放置到与某个 Pod 相同的节点上" +(见[亲和与反亲和](/zh-cn/docs/setup/best-practices/multiple-zones/#nodes-are-labeled))。 + + +作为对这些约束的补充,Kubernetes 提供了 [StatefulSets](/zh-cn/docs/concepts/workloads/controllers/statefulset/), +为 Pod 提供身份,以及 "跟随" 这些指定 Pod 的持久化存储。 +在 StatefulSet 中,身份是由 Pod 名称末尾一个呈增序的整数处理的。 +值得注意的是,这个整数必须始终是连续的:在一个 StatefulSet 中, +如果 Pod 1 和 3 存在,那么 Pod 2 也必须存在。 + + +在架构上,CockroachCloud 将 CockroachDB 的每个区域作为 StatefulSet 部署在自己的 Kubernetes 集群中 -- +参见 [Orchestrate CockroachDB in a Single Kubernetes Cluster](https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html)。 +在这篇文章中,我将着眼于一个单独的区域,一个 StatefulSet 和一个至少分布有三个可用区的 Kubernetes 集群。 + + +一个三节点的 CockroachCloud 集群如下所示: + + +![3-node, multi-zone cockroachdb cluster](image01.png) + + +在向集群增加额外的资源时,我们也会将它们分布在各个区域。 +为了获得最快的用户体验,我们同时添加所有 Kubernetes 节点,然后扩大 StatefulSet 的规模。 + + +![illustration of phases: adding Kubernetes nodes to the multi-zone cockroachdb cluster](image02.png) + + +请注意,无论 Pod 被分配到 Kubernetes 节点的顺序如何,都会满足反亲和性。 +在这个例子中,Pod 0、1、2 分别被分配到 A、B、C 区,但 Pod 3 和 4 以不同的顺序被分配到 B 和 A 区。 +反亲和性仍然得到满足,因为 Pod 仍然被放置在不同的区域。 + + +要从集群中移除资源,我们以相反的顺序执行这些操作。 + + +我们首先缩小 StatefulSet 的规模,然后从集群中移除任何缺少 CockroachDB Pod 的节点。 + + +![illustration of phases: scaling down pods in a multi-zone cockroachdb cluster in Kubernetes](image03.png) + + +现在,请记住,规模为 _n_ 的 StatefulSet 中的 Pods 一定具有 `[0,n)` 范围内的 id。 +当把一个 StatefulSet 规模缩减了 _m_ 时,Kubernetes 会移除 _m_ 个 Pod,从最高的序号开始,向最低的序号移动, +[与它们被添加的顺序相反](/zh-cn/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees)。 +考虑一下下面的集群拓扑结构。 + + +![illustration: cockroachdb cluster: 6 nodes distributed across 3 availability zones](image04.png) + + +当从这个集群中移除 5 号到 3 号 Pod 时,这个 StatefulSet 仍然横跨三个可用区。 + + +![illustration: removing 3 nodes from a 6-node, 3-zone cockroachdb cluster](image05.png) + + +然而,Kubernetes 的调度器并不像我们一开始预期的那样 _保证_ 上面的分布。 + + +我们对以下内容的综合认识是导致这种误解的原因。 +* Kubernetes [自动跨区分配 Pod](/zh-cn/docs/setup/best-practices/multiple-zones/#pods-are-spread-across-zones) 的能力 +* 一个有 _n_ 个副本的 StatefulSet,当 Pod 被部署时,它们会按照 `{0...n-1}` 的顺序依次创建。 +更多细节见 [StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees)。 + + +考虑以下拓扑结构: + + +![illustration: 6-node cockroachdb cluster distributed across 3 availability zones](image06.png) + + +这些 Pod 是按顺序创建的,它们分布在集群里所有可用区。当序号 5 到 3 的 Pod 被终止时, +这个集群将从 C 区消失! 
+ + +![illustration: terminating 3 nodes in 6-node cluster spread across 3 availability zones, where 2/2 nodes in the same availability zone are terminated, knocking out that AZ](image07.png) + + +更糟糕的是,在这个时候,我们的自动化机制将删除节点 A-2,B-2,和 C-2。 +并让 CRDB-1 处于未调度状态,因为持久性卷只在其创建时所处的区域内可用。 + + +为了纠正后一个问题,我们现在采用了一种“狩猎和啄食”的方法来从集群中移除机器。 +与其盲目地从集群中移除 Kubernetes 节点,不如只移除没有 CockroachDB Pod 的节点。 +更为艰巨的任务是管理 Kubernetes 的调度器。 + + +## 一场头脑风暴后我们有了 3 个选择。 + +### 1. 升级到 kubernetes 1.18 并利用 Pod 拓扑分布约束 + +虽然这似乎是一个完美的解决方案,但在写这篇文章的时候,Kubernetes 1.18 在公有云中两个最常见的 +托管 Kubernetes 服务( EKS 和 GKE )上是不可用的。 +此外,[Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/)在 +[1.18 中仍是测试版功能](https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/), +这意味着即使在 v1.18 可用时,它[也不能保证在托管集群中可用](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices)。 +整个努力让人联想到在 Internet Explorer 8 还存在的时候访问 [caniuse.com](https://caniuse.com/)。 + + +### 2. 在每个区部署一个 StatefulSet。 + +与跨所有可用区部署一个 StatefulSet 相比,在每个区部署一个带有节点亲和性的 StatefulSet 可以实现手动控制分区拓扑结构。 +我们的团队过去曾考虑过这个选项,我们也倾向此选项。 +但最终,我们决定放弃这个方案,因为这需要对我们的代码库进行大规模的修改,而且在现有的客户集群上进行迁移也是一个同样大的工程。 + + + +### 3. 编写一个自定义的 Kubernetes 调度器 + +感谢 [Kelsey Hightower](https://github.com/kelseyhightower/scheduler) 的例子和 +[Banzai Cloud](https://banzaicloud.com/blog/k8s-custom-scheduler/) 的博文,我们决定投入进去,编写自己的[自定义 Kubernetes 调度器](/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)。 +一旦我们的概念验证被部署和运行,我们很快就发现,Kubernetes 的调度器也负责将持久化卷映射到它所调度的 Pod 上。 +[`kubectl get events`](/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/#verifying-that-the-pods-wer-scheduled-using-the-desired-schedulers) +的输出让我们相信有另一个系统在发挥作用。 +在我们寻找负责存储声明映射的组件的过程中,我们发现了 +[kube-scheduler 插件系统](/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework/)。 +我们的下一个 POC 是一个"过滤器"插件,它通过 Pod 的序号来确定适当的可用区域,并且工作得非常完美。 + +我们的[自定义调度器插件](https://github.com/cockroachlabs/crl-scheduler)是开源的,并在我们所有的 CockroachCloud 集群中运行。 +对 StatefulSet Pod 的调度方式有掌控力,让我们有信心扩大规模。 +一旦 GKE 和 EKS 中的 Pod 拓扑分布约束可用,我们可能会考虑让我们的插件退役,但其维护的开销出乎意料地低。 +更好的是:该插件的实现与我们的业务逻辑是横向的。部署它,或取消它,就像改变 StatefulSet 定义中的 "schedulerName" 字段一样简单。 + +--- + +[Chris Seto](https://twitter.com/_ostriches) 是 Cockroach 实验室的一名软件工程师,负责 +[CockroachCloud](https://cockroachlabs.cloud) CockroachDB 的 Kubernetes 自动化。 diff --git a/content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/index.md b/content/zh-cn/blog/_posts/2020-10-01-contributing-to-the-development-guide/index.md similarity index 100% rename from content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/index.md rename to content/zh-cn/blog/_posts/2020-10-01-contributing-to-the-development-guide/index.md diff --git a/content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/jorge-castro-code-of-conduct.jpg b/content/zh-cn/blog/_posts/2020-10-01-contributing-to-the-development-guide/jorge-castro-code-of-conduct.jpg similarity index 100% rename from content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/jorge-castro-code-of-conduct.jpg rename to content/zh-cn/blog/_posts/2020-10-01-contributing-to-the-development-guide/jorge-castro-code-of-conduct.jpg diff --git a/content/zh/blog/_posts/2020-12-02-dockershim-faq.md b/content/zh-cn/blog/_posts/2020-12-02-dockershim-faq.md similarity index 88% rename from content/zh/blog/_posts/2020-12-02-dockershim-faq.md rename to content/zh-cn/blog/_posts/2020-12-02-dockershim-faq.md index eb65cc636963f..f484d7abaac5d 100644 --- 
a/content/zh/blog/_posts/2020-12-02-dockershim-faq.md +++ b/content/zh-cn/blog/_posts/2020-12-02-dockershim-faq.md @@ -3,20 +3,18 @@ layout: blog title: "弃用 Dockershim 的常见问题" date: 2020-12-02 slug: dockershim-faq -aliases: [ '/zh/dockershim' ] --- -_**更新**:本文有[较新版本](/zh/blog/2022/02/17/dockershim-faq/)。_ +_**更新**:本文有[较新版本](/zh-cn/blog/2022/02/17/dockershim-faq/)。_ 本文回顾了自 Kubernetes v1.20 版宣布弃用 Dockershim 以来所引发的一些常见问题。 关于 Kubernetes kubelets 从容器运行时的角度弃用 Docker 的细节以及这些细节背后的含义,请参考博文 [别慌: Kubernetes 和 Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/)。 -此外,你可以阅读 [检查 Dockershim 弃用是否影响你](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) -以检查它是否会影响你。 +此外,你可以阅读[检查 Dockershim 移除是否影响你](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)以检查它是否会影响你。 Dockershim 向来都是一个临时解决方案(因此得名:shim)。 你可以进一步阅读 -[移除 Kubernetes 增强方案 Dockershim][drkep] +[移除 Dockershim 这一 Kubernetes 增强方案][drkep] 以了解相关的社区讨论和计划。 当然可以,在 1.20 版本中仅有的改变就是:如果使用 Docker 运行时,启动 -[kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) +[kubelet](/zh-cn/docs/reference/command-line-tools-reference/kubelet/) 的过程中将打印一条警告日志。 +### 从 Kubernetes 中移除后我还能使用 dockershim 吗? {#can-i-still-use-dockershim-after-it-is-removed-from-kubernetes} + + +更新:Mirantis 和 Docker [已承诺][mirantis]在 dockershim 从 Kubernetes +中删除后对其进行维护。 + +[mirantis]: https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/ + + @@ -164,11 +178,11 @@ related projects follow a similar pattern as well, demonstrating the stability a usability of other container runtimes. As an example, OpenShift 4.x has been using the [CRI-O] runtime in production since June 2019. --> -此外,[kind](https://kind.sigs.k8s.io/) 项目使用 containerd 已经有年头了, +此外,[kind] 项目使用 containerd 已经有年头了, 并且在这个场景中,稳定性还明显得到提升。 Kind 和 containerd 每天都会做多次协调,以验证对 Kubernetes 代码库的所有更改。 其他相关项目也遵循同样的模式,从而展示了其他容器运行时的稳定性和可用性。 -例如,OpenShift 4.x 从 2019 年 6 月以来,就一直在生产环境中使用 [CRI-O](https://cri-o.io/) 运行时。 +例如,OpenShift 4.x 从 2019 年 6 月以来,就一直在生产环境中使用 [CRI-O] 运行时。 @@ -267,14 +286,15 @@ runtime where possible. 
另外还有一个需要关注的点,那就是当创建镜像时,系统维护或嵌入容器方面的任务将无法工作。 对于前者,可以用 [`crictl`](https://github.com/kubernetes-sigs/cri-tools) 工具作为临时替代方案 -(参见 [从 docker 命令映射到 crictl](https://kubernetes.io/zh/docs/tasks/debug/debug-cluster/crictl/#mapping-from-docker-cli-to-crictl)); +(参见[从 docker 命令映射到 crictl](/zh-cn/docs/reference/tools/map-crictl-dockercli/)); 对于后者,可以用新的容器创建选项,比如 +[cr](https://github.com/kubernetes-sigs/cri-tools)、 [img](https://github.com/genuinetools/img)、 [buildah](https://github.com/containers/buildah)、 [kaniko](https://github.com/GoogleContainerTools/kaniko)、或 @@ -295,12 +315,12 @@ For instructions on how to use containerd and CRI-O with Kubernetes, see the Kubernetes documentation on [Container Runtimes] --> 对于如何协同 Kubernetes 使用 containerd 和 CRI-O 的说明,参见 Kubernetes 文档中这部分: -[容器运行时](/zh/docs/setup/production-environment/container-runtimes)。 +[容器运行时](/zh-cn/docs/setup/production-environment/container-runtimes)。 -### 我还有问题怎么办?{#what-if-I-have-more-question} +### 我还有问题怎么办?{#what-if-I-have-more-questions} 如果你使用了一个有供应商支持的 Kubernetes 发行版,你可以咨询供应商他们产品的升级计划。 -对于最终用户的问题,请把问题发到我们的最终用户社区的论坛:https://discuss.kubernetes.io/。 +对于最终用户的问题,请把问题发到我们的最终用户社区的[论坛](https://discuss.kubernetes.io/)。 **作者:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas -_更新:Kubernetes 通过 `dockershim` 对 Docker 的支持现已弃用。 -有关更多信息,请阅读[弃用通知](/zh/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)。 -你还可以通过专门的 [GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917) 讨论弃用。_ +**更新**:Kubernetes 通过 `dockershim` 对 Docker 的支持现已移除。 +有关更多信息,请阅读[移除 FAQ](/zh-cn/dockershim)。 +你还可以通过专门的 [GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917) 讨论弃用。 -如果你正在使用 GKE、EKS、或 AKS -([默认使用 containerd](https://github.com/Azure/AKS/releases/tag/2020-11-16)) -这类托管 Kubernetes 服务,你需要在 Kubernetes 后续版本移除对 Docker 支持之前, +如果你正在使用 GKE、EKS、或 AKS 这类托管 Kubernetes 服务, +你需要在 Kubernetes 后续版本移除对 Docker 支持之前, 确认工作节点使用了被支持的容器运行时。 如果你的节点被定制过,你可能需要根据你自己的环境和运行时需求更新它们。 请与你的服务供应商协作,确保做出适当的升级测试和计划。 @@ -75,15 +76,15 @@ testing and planning. 如果你正在运营你自己的集群,那还应该做些工作,以避免集群中断。 在 v1.20 版中,你仅会得到一个 Docker 的弃用警告。 -当对 Docker 运行时的支持在 Kubernetes 某个后续发行版(目前的计划是 2021 年晚些时候的 1.22 版)中被移除时, +当对 Docker 运行时的支持在 Kubernetes 某个后续发行版(目前的计划是 2021 年晚些时候的 1.22 版)中被移除时, 你需要切换到 containerd 或 CRI-O 等兼容的容器运行时。 只要确保你选择的运行时支持你当前使用的 Docker 守护进程配置(例如 logging)。 @@ -216,4 +217,4 @@ Kubernetes 有很多变化中的功能,没有人是100%的专家。 Looking for more answers? Check out our accompanying [Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/) _(updated February 2022)_. 
--> 还在寻求更多答案吗?请参考我们附带的 -[移除 Dockershim 的常见问题](/zh/blog/2020/12/02/dockershim-faq/) _(2022年2月更新)_。 +[移除 Dockershim 的常见问题](/zh-cn/blog/2020/12/02/dockershim-faq/) _(2022年2月更新)_。 diff --git a/content/zh/blog/_posts/2020-12-08-kubernetes-release-1.20.md b/content/zh-cn/blog/_posts/2020-12-08-kubernetes-release-1.20.md similarity index 99% rename from content/zh/blog/_posts/2020-12-08-kubernetes-release-1.20.md rename to content/zh-cn/blog/_posts/2020-12-08-kubernetes-release-1.20.md index c88a1cf745638..ccc01f3b33389 100644 --- a/content/zh/blog/_posts/2020-12-08-kubernetes-release-1.20.md +++ b/content/zh-cn/blog/_posts/2020-12-08-kubernetes-release-1.20.md @@ -56,7 +56,7 @@ evergreen: true 请注意,作为新的内置命令,`kubectl debug` 优先于任何名为 “debug” 的 kubectl 插件。你必须重命名受影响的插件。 -`kubectl alpha debug` 现在不推荐使用,并将在后续版本中删除。更新你的脚本以使用 `kubectl debug`。 有关更多信息 `kubectl debug`,请参阅[调试正在运行的 Pod]((https://kubernetes.io/zh/docs/tasks/debug/debug-application/debug-running-pod/)。 +`kubectl alpha debug` 现在不推荐使用,并将在后续版本中删除。更新你的脚本以使用 `kubectl debug`。 有关更多信息 `kubectl debug`,请参阅[调试正在运行的 Pod]((https://kubernetes.io/zh-cn/docs/tasks/debug/debug-application/debug-running-pod/)。 ### 测试版:API 优先级和公平性 {#beta-api-priority-and-fairness) @@ -110,7 +110,7 @@ Kubernetes 社区写了一篇关于弃用的详细[博客文章](https://blog.k8 新引入的 `ExecProbeTimeout` 特性门控所提供的修复使集群操作员能够恢复到以前的行为,但这种行为将在后续版本中锁定并删除。为了恢复到以前的行为,集群运营商应该将此特性门控设置为 `false`。 -有关更多详细信息,请查看有关配置探针的[更新文档](/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)。 +有关更多详细信息,请查看有关配置探针的[更新文档](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)。 ## 其他更新 {#other-updates} diff --git a/content/zh/blog/_posts/2020-12-10-Pod-Impersonation-and-Short-lived-Volumes-in-CSI-Drivers.md b/content/zh-cn/blog/_posts/2020-12-10-Pod-Impersonation-and-Short-lived-Volumes-in-CSI-Drivers.md similarity index 100% rename from content/zh/blog/_posts/2020-12-10-Pod-Impersonation-and-Short-lived-Volumes-in-CSI-Drivers.md rename to content/zh-cn/blog/_posts/2020-12-10-Pod-Impersonation-and-Short-lived-Volumes-in-CSI-Drivers.md diff --git a/content/zh/blog/_posts/2021-09-27-SIG-Node-Spotlight/index.md b/content/zh-cn/blog/_posts/2021-09-27-SIG-Node-Spotlight/index.md similarity index 98% rename from content/zh/blog/_posts/2021-09-27-SIG-Node-Spotlight/index.md rename to content/zh-cn/blog/_posts/2021-09-27-SIG-Node-Spotlight/index.md index c5da376b8db88..1909f2cd33928 100644 --- a/content/zh/blog/_posts/2021-09-27-SIG-Node-Spotlight/index.md +++ b/content/zh-cn/blog/_posts/2021-09-27-SIG-Node-Spotlight/index.md @@ -168,7 +168,8 @@ SK/EH: It takes time and effort to get to any open source community. SIG Node ma ### 最后你有什么想法/资源要分享吗? SK/EH:进入任何开源社区都需要时间和努力。一开始 SIG Node 可能会因为参与者的数量、工作量和项目范围而让你不知所措。但这是完全值得的。 -请加入我们这个热情的社区! [SIG Node GitHub Repo](https://github.com/kubernetes/community/tree/master/sig-node)包含许多有用的资源,包括 Slack、邮件列表和其他联系信息。 +请加入我们这个热情的社区! 
[SIG Node GitHub Repo](https://github.com/kubernetes/community/tree/master/sig-node) +包含许多有用的资源,包括 Slack、邮件列表和其他联系信息。 -[Services](/zh/docs/concepts/services-networking/service/) 在 1.20 版本之前是单协议栈的, +[Services](/zh-cn/docs/concepts/services-networking/service/) 在 1.20 版本之前是单协议栈的, 因此,使用两个 IP 协议族意味着需为每个 IP 协议族创建一个 Service。在 1.20 版本中对用户体验进行简化, 重新实现了 Service 以支持两个 IP 协议族,这意味着一个 Service 就可以处理 IPv4 和 IPv6 协议。 对于 Service 而言,任意的 IPv4 和 IPv6 协议组合都可以实现负载均衡。 @@ -88,7 +88,7 @@ While Services are set according to what you configure, Pods default to whatever Even though dual-stack is possible, it is not mandatory to use it. Examples in the documentation show the variety possible in [dual-stack service configurations](/docs/concepts/services-networking/dual-stack/#dual-stack-service-configuration-scenarios). --> 尽管双协议栈是可用的,但并不强制你使用它。 -在[双协议栈服务配置](/zh/docs/concepts/services-networking/dual-stack/#dual-stack-service-configuration-scenarios) +在[双协议栈服务配置](/zh-cn/docs/concepts/services-networking/dual-stack/#dual-stack-service-configuration-scenarios) 文档中的示例列出了可能出现的各种场景. -虽然现在上游 Kubernetes 支持[双协议栈网络](/zh/docs/concepts/services-networking/dual-stack/) +虽然现在上游 Kubernetes 支持[双协议栈网络](/zh-cn/docs/concepts/services-networking/dual-stack/) 作为 GA 或稳定特性,但每个提供商对双协议栈 Kubernetes 的支持可能会有所不同。节点需要提供可路由的 IPv4/IPv6 网络接口。 -Pod 需要是双协议栈的。[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) +Pod 需要是双协议栈的。[网络插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) 是用来为 Pod 分配 IP 地址的,所以集群需要支持双协议栈的网络插件。一些容器网络接口(CNI)插件支持双协议栈,例如 kubenet。 支持双协议栈的生态系统在不断壮大;你可以使用 -[kubeadm 创建双协议栈集群](/zh/docs/setup/production-environment/tools/kubeadm/dual-stack-support/), +[kubeadm 创建双协议栈集群](/zh-cn/docs/setup/production-environment/tools/kubeadm/dual-stack-support/), 在本地尝试用 [KIND 创建双协议栈集群](https://kind.sigs.k8s.io/docs/user/configuration/#ip-family), 还可以将双协议栈集群部署到云上(在查阅 CNI 或 kubenet 可用性的文档之后) diff --git a/content/zh/blog/_posts/2021-12-10-csi-migration-status.md b/content/zh-cn/blog/_posts/2021-12-10-csi-migration-status.md similarity index 100% rename from content/zh/blog/_posts/2021-12-10-csi-migration-status.md rename to content/zh-cn/blog/_posts/2021-12-10-csi-migration-status.md diff --git a/content/zh/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md b/content/zh-cn/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md similarity index 96% rename from content/zh/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md rename to content/zh-cn/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md index 6722e9bd4ebf5..2f19fcae560f6 100644 --- a/content/zh/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md +++ b/content/zh-cn/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md @@ -23,9 +23,9 @@ Kubernetes v1.23 introduced a new, alpha-level policy for StatefulSet spec template for cases when they should be deleted automatically when the StatefulSet is deleted or pods in the StatefulSet are scaled down. 
--> -Kubernetes v1.23 为 [StatefulSets](/zh/docs/concepts/workloads/controllers/statefulset/) +Kubernetes v1.23 为 [StatefulSets](/zh-cn/docs/concepts/workloads/controllers/statefulset/) 引入了一个新的 alpha 级策略,用来控制由 StatefulSet 规约模板生成的 -[PersistentVolumeClaims](/zh/docs/concepts/storage/persistent-volumes/) (PVCs) 的生命周期, +[PersistentVolumeClaims](/zh-cn/docs/concepts/storage/persistent-volumes/) (PVCs) 的生命周期, 用于当删除 StatefulSet 或减少 StatefulSet 中的 Pods 数量时 PVCs 应该被自动删除的场景。 -查阅[文档](/zh/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) +查阅[文档](/zh-cn/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) 获取更多详细信息。 Kubernetes 将在即将发布的 1.24 版本中移除 dockershim。我们很高兴能够通过支持开源容器运行时、支持更小的 kubelet 以及为使用 Kubernetes 的团队提高工程速度来重申我们的社区价值。 -如果你[使用 Docker Engine 作为 Kubernetes 集群的容器运行时](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/), +如果你[使用 Docker Engine 作为 Kubernetes 集群的容器运行时](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/), 请准备好在 1.24 中迁移!要检查你是否受到影响, -请参考[检查移除 Dockershim 对你的影响](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/)。 +请参考[检查移除 Dockershim 对你的影响](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/)。 ## 弃用时间线 {#deprecation-timeline} -我们[正式宣布](/zh/blog/2020/12/08/kubernetes-1-20-release-announcement/)于 +我们[正式宣布](/zh-cn/blog/2020/12/08/kubernetes-1-20-release-announcement/)于 2020 年 12 月弃用 dockershim。目标是在 2022 年 4 月, Kubernetes 1.24 中完全移除 dockershim。 -此时间线与我们的[弃用策略](/zh/docs/reference/using api/deprecation-policy/#deprecating-a-feature-or-behavior)一致, +此时间线与我们的[弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)一致, 即规定已弃用的行为必须在其宣布弃用后至少运行 1 年。 -在这一点上,我们相信你(和 Kubernetes)从移除 dockershim 中获得的价值可以弥补你将要进行的迁移工作。 +在这一点上,我们相信你(和 Kubernetes)从移除 dockershim 中获得的价值可以弥补你将要进行的迁移工作。 现在就开始计划以避免出现意外。在 Kubernetes 1.24 发布之前,我们将提供更多更新信息和指南。 diff --git a/content/zh/blog/_posts/2022-01-10-meet-our-contributors-APAC-India-region-01.md b/content/zh-cn/blog/_posts/2022-01-10-meet-our-contributors-APAC-India-region-01.md similarity index 100% rename from content/zh/blog/_posts/2022-01-10-meet-our-contributors-APAC-India-region-01.md rename to content/zh-cn/blog/_posts/2022-01-10-meet-our-contributors-APAC-India-region-01.md diff --git a/content/zh/blog/_posts/2022-01-19-Securing-Admission-Controllers.md b/content/zh-cn/blog/_posts/2022-01-19-Securing-Admission-Controllers.md similarity index 97% rename from content/zh/blog/_posts/2022-01-19-Securing-Admission-Controllers.md rename to content/zh-cn/blog/_posts/2022-01-19-Securing-Admission-Controllers.md index 99dad347e56a2..caf4c05423d43 100644 --- a/content/zh/blog/_posts/2022-01-19-Securing-Admission-Controllers.md +++ b/content/zh-cn/blog/_posts/2022-01-19-Securing-Admission-Controllers.md @@ -23,7 +23,7 @@ slug: secure-your-admission-controllers-and-webhooks [Admission control](/docs/reference/access-authn-authz/admission-controllers/) is a key part of Kubernetes security, alongside authentication and authorization. Webhook admission controllers are extensively used to help improve the security of Kubernetes clusters in a variety of ways including restricting the privileges of workloads and ensuring that images deployed to the cluster meet organization’s security requirements. 
--> -[准入控制](/zh/docs/reference/access-authn-authz/admission-controllers/)和认证、授权都是 Kubernetes 安全性的关键部分。 +[准入控制](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)和认证、授权都是 Kubernetes 安全性的关键部分。 Webhook 准入控制器被广泛用于以多种方式帮助提高 Kubernetes 集群的安全性, 包括限制工作负载权限和确保部署到集群的镜像满足组织安全要求。 @@ -109,7 +109,7 @@ In most cases, the admission controller webhook used by a cluster will be instal -* **限制 [RBAC](/zh/docs/reference/access-authn-authz/rbac/) 权限**。 +* **限制 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 权限**。 任何有权修改 webhook 对象的配置或准入控制器使用的工作负载的用户都可以破坏其运行。 因此,确保只有集群管理员拥有这些权限非常重要。 diff --git a/content/zh/blog/_posts/2022-02-16-sig-node-ci-subproject-celebrates/release-mostly-green.png b/content/zh-cn/blog/_posts/2022-02-16-sig-node-ci-subproject-celebrates/release-mostly-green.png similarity index 100% rename from content/zh/blog/_posts/2022-02-16-sig-node-ci-subproject-celebrates/release-mostly-green.png rename to content/zh-cn/blog/_posts/2022-02-16-sig-node-ci-subproject-celebrates/release-mostly-green.png diff --git a/content/zh/blog/_posts/2022-02-16-sig-node-ci-subproject-celebrates/serial-tests-green.png b/content/zh-cn/blog/_posts/2022-02-16-sig-node-ci-subproject-celebrates/serial-tests-green.png similarity index 100% rename from content/zh/blog/_posts/2022-02-16-sig-node-ci-subproject-celebrates/serial-tests-green.png rename to content/zh-cn/blog/_posts/2022-02-16-sig-node-ci-subproject-celebrates/serial-tests-green.png diff --git a/content/zh/blog/_posts/2022-02-17-updated-dockershim-faq.md b/content/zh-cn/blog/_posts/2022-02-17-updated-dockershim-faq.md similarity index 65% rename from content/zh/blog/_posts/2022-02-17-updated-dockershim-faq.md rename to content/zh-cn/blog/_posts/2022-02-17-updated-dockershim-faq.md index 61f3860602843..85d1ce25bf8dd 100644 --- a/content/zh/blog/_posts/2022-02-17-updated-dockershim-faq.md +++ b/content/zh-cn/blog/_posts/2022-02-17-updated-dockershim-faq.md @@ -1,9 +1,10 @@ --- layout: blog -title: "更新:弃用 Dockershim 的常见问题" -linkTitle: "弃用 Dockershim 的常见问题" +title: "更新:移除 Dockershim 的常见问题" +linkTitle: "移除 Dockershim 的常见问题" date: 2022-02-17 slug: dockershim-faq +aliases: [ 'zh/dockershim' ] --- -**本文是针对2020年末发布的[弃用 Dockershim 的常见问题](/zh/blog/2020/12/02/dockershim-faq/)的博客更新。** +**本文是针对 2020 年末发布的[弃用 Dockershim 的常见问题](/zh-cn/blog/2020/12/02/dockershim-faq/)的博客更新。 +本文包括 Kubernetes v1.24 版本的更新。** + +--- +本文介绍了一些关于从 Kubernetes 中移除 _dockershim_ 的常见问题。 +该移除最初是作为 Kubernetes v1.20 +版本的一部分[宣布](/zh-cn/blog/2020/12/08/kubernetes-1-20-release-announcement/)的。 +Kubernetes 在 [v1.24 版](/releases/#release-v1-24)移除了 dockershim。 + + -本文回顾了自 Kubernetes v1.20 版本[宣布](/zh/blog/2020/12/08/kubernetes-1-20-release-announcement/)弃用 -Dockershim 以来所引发的一些常见问题。关于弃用细节以及这些细节背后的含义,请参考博文 -[别慌: Kubernetes 和 Docker](/zh/blog/2020/12/02/dont-panic-kubernetes-and-docker/)。 +关于细节请参考博文 +[别慌: Kubernetes 和 Docker](/zh-cn/blog/2020/12/02/dont-panic-kubernetes-and-docker/)。 -你还可以查阅:[检查弃用 Dockershim 对你的影响](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)这篇文章, -以确定弃用 dockershim 会对你或你的组织带来多大的影响。 +要确定移除 dockershim 是否会对你或你的组织的影响,可以查阅: +[检查弃用 Dockershim 对你的影响](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) +这篇文章。 -随着 Kubernetes 1.24 版本的发布迫在眉睫,我们一直在努力尝试使其能够平稳升级顺利过渡。 +在 Kubernetes 1.24 发布之前的几个月和几天里,Kubernetes +贡献者努力试图让这个过渡顺利进行。 -- 我们已经写了一篇博文,详细说明了我们的[承诺和后续操作](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/)。 -- 
我们我们相信可以无障碍的迁移到其他[容器运行时](/zh/docs/setup/production-environment/container-runtimes/#container-runtimes)。 -- 我们撰写了 [dockershim 迁移指南](/docs/tasks/administer-cluster/migrating-from-dockershim/)供你参考。 -- 我们还创建了一个页面来列出[有关 dockershim 移除和使用 CRI 兼容运行时的文章](/zh/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/)。 +- 一篇详细说明[承诺和后续操作](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/)的博文。 +- 检查是否存在迁移到其他 [容器运行时](/zh-cn/docs/setup/production-environment/container-runtimes/#container-runtimes) 的主要障碍。 +- 添加 [从 dockershim 迁移](/docs/tasks/administer-cluster/migrating-from-dockershim/)的指南。 +- 创建了一个[有关 dockershim 移除和使用 CRI 兼容运行时的列表](/zh-cn/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/)。 该列表包括一些已经提到的文档,还涵盖了选定的外部资源(包括供应商指南)。 -### 为什么会从 Kubernetes 中移除 dockershim ? +### 为什么会从 Kubernetes 中移除 dockershim ? {#why-was-the-dockershim-removed-from-kubernetes} 此外,在较新的 CRI 运行时中实现了与 dockershim 不兼容的功能,例如 cgroups v2 和用户命名空间。 -取消对 dockershim 的支持将加速这些领域的发展。 +从 Kubernetes 中移除 dockershim 允许在这些领域进行进一步的开发。 + + +### Docker 和容器一样吗? {#are-docker-and-containers-the-same-thing} + + +Docker 普及了 Linux 容器模式,并在开发底层技术方面发挥了重要作用,但是 Linux +中的容器已经存在了很长时间,容器生态系统已经发展到比 Docker 广泛得多。 +OCI 和 CRI 等标准帮助许多工具在我们的生态系统中发展壮大,其中一些替代了 Docker +的某些方面,而另一些则增强了现有功能。 + + +### 我现有的容器镜像是否仍然有效? {#will-my-existing-container-images-still-work} + + +是的,从 `docker build` 生成的镜像将适用于所有 CRI 实现, +现有的所有镜像仍将完全相同。 + + +#### 私有镜像呢? {#what-about-private-images} + + +当然可以,所有 CRI 运行时都支持在 Kubernetes 中使用的相同的 pull secrets +配置,无论是通过 PodSpec 还是 ServiceAccount。 -### 在 Kubernetes 1.23 版本中还可以使用 Docker Engine 吗? +### 在 Kubernetes 1.23 版本中还可以使用 Docker Engine 吗? {#can-i-still-use-docker-engine-in-kubernetes-1-23} 可以使用,在 1.20 版本中唯一的改动是,如果使用 Docker Engine, -在 [kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) +在 [kubelet](/zh-cn/docs/reference/command-line-tools-reference/kubelet/) 启动时会打印一个警告日志。 -你将在 1.23 版本及以前版本看到此警告。dockershim 将在 Kubernetes 1.24 版本中移除 。 +你将在 1.23 版本及以前版本看到此警告,dockershim 已在 Kubernetes 1.24 版本中移除 。 + + +如果你运行的是 Kubernetes v1.24 或更高版本,请参阅 +[我仍然可以使用 Docker Engine 作为我的容器运行时吗?](#can-i-still-use-docker-engine-as-my-container-runtime) +(如果你使用任何支持 dockershim 的版本,可以随时切换离开;从版本 v1.24 +开始,因为 Kubernetes 不再包含 dockershim,你**必须**切换)。 -### 什么时候移除 dockershim ? +### 我应该用哪个 CRI 实现? {#which-cri-implementation-should-i-use} -考虑到此变更带来的影响,我们使用了一个加长的废弃时间表。 -dockershim 计划在 Kubernetes v1.24 中进行移除, -参见 [Kubernetes 移除 Dockershim 增强方案](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim)。 -Kubernetes 项目将与供应商和其他生态系统组织密切合作,以确保平稳过渡,并将依据事态的发展评估后续事项。 +这是一个复杂的问题,依赖于许多因素。 +如果你正在使用 Docker Engine,迁移到 containerd +应该是一个相对容易地转换,并将获得更好的性能和更少的开销。 +然而,我们鼓励你探索 [CNCF landscape] 提供的所有选项,做出更适合你的选择。 + +[CNCF landscape]: https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category -### 我还可以使用 Docker Engine 作为我的容器运行时吗? +#### 我还可以使用 Docker Engine 作为我的容器运行时吗? {#can-i-still-use-docker-engine-as-my-container-runtime} -### 我现有的容器镜像还能正常工作吗? - - -当然可以,`docker build` 创建的镜像适用于任何 CRI 实现。 -所有你的现有镜像将和往常一样工作。 - - -### 私有镜像呢? - - -当然可以。所有 CRI 运行时均支持在 Kubernetes 中相同的拉取(pull)Secret 配置, -无论是通过 PodSpec 还是 ServiceAccount。 - - -### Docker 和容器是一回事吗? 
+你可以安装 `cri-dockerd` 并使用它将 kubelet 连接到 Docker Engine。 +阅读[将 Docker Engine 节点从 dockershim 迁移到 cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/) +以了解更多信息。 - -Docker 普及了 Linux 容器模式,并在开发底层技术方面发挥了重要作用, -但是 Linux 中的容器已经存在了很长时间。容器的生态相比于 Docker 具有更宽广的领域。 -OCI 和 CRI 等标准帮助许多工具在我们的生态系统中发展壮大, -其中一些替代了 Docker 的某些方面,而另一些则增强了现有功能。 -### 现在是否有在生产系统中使用其他运行时的例子? +### 现在是否有在生产系统中使用其他运行时的例子? {#are-there-examples-of-folks-using-other-runtimes-in-production-today} -### 人们总在谈论 OCI,它是什么? +### 人们总在谈论 OCI,它是什么? {#people-keep-referencing-oci-what-is-that} -### 我应该用哪个 CRI 实现? - - -这是一个复杂的问题,依赖于许多因素。 -如果你正在使用 Docker,迁移到 containerd 应该是一个相对容易地转换,并将获得更好的性能和更少的开销。 -然而,我们鼓励你探索 [CNCF landscape](https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category) -提供的所有选项,做出更适合你的选择。 -### 当切换 CRI 实现时,应该注意什么? +### 当切换 CRI 实现时,应该注意什么? {#what-should-i-look-out-for-when-changing-cri-implementations} - 日志配置 - 运行时的资源限制 -- 调用 docker 或通过其控制套接字使用 docker 的节点配置脚本 -- 需要访问 docker 命令或控制套接字的 kubectl 插件 +- 调用 docker 或通过其控制套接字使用 Docker Engine 的节点配置脚本 +- 需要 `docker` 命令或 Docker Engine 控制套接字的 `kubectl` 插件 - 需要直接访问 Docker Engine 的 Kubernetes 工具(例如:已弃用的 'kube-imagepuller' 工具) - `registry-mirrors` 和不安全注册表等功能的配置 - 保障 Docker Engine 可用、且运行在 Kubernetes 之外的脚本或守护进程(例如:监视或安全代理) @@ -304,7 +324,7 @@ common things to consider when migrating are: @@ -314,13 +334,14 @@ runtime where possible. 另外还有一个需要关注的点,那就是当创建镜像时,系统维护或嵌入容器方面的任务将无法工作。 对于前者,可以用 [`crictl`](https://github.com/kubernetes-sigs/cri-tools) 工具作为临时替代方案 -(参阅[从 docker cli 到 crictl 的映射](/zh/docs/tasks/debug/debug-cluster/crictl/#mapping-from-docker-cli-to-crictl))。 +(参阅[从 docker cli 到 crictl 的映射](/zh-cn/docs/tasks/debug/debug-cluster/crictl/#mapping-from-docker-cli-to-crictl))。 对于后者,可以用新的容器创建选项,例如 [img](https://github.com/genuinetools/img)、 [buildah](https://github.com/containers/buildah)、 @@ -345,15 +366,15 @@ Kubernetes documentation on [Container Runtimes]. -### 我还有其他问题怎么办? +### 我还有其他问题怎么办? {#what-if-i-have-more-questions} 如果你使用了供应商支持的 Kubernetes 发行版,你可以咨询供应商他们产品的升级计划。 -对于最终用户的问题,请把问题发到我们的最终用户社区的论坛:https://discuss.kubernetes.io/。 +对于最终用户的问题,请把问题发到我们的最终用户社区的[论坛](https://discuss.kubernetes.io/)。 -### 是否有任何工具可以帮助我找到正在使用的 dockershim +### 是否有任何工具可以帮助我找到正在使用的 dockershim? {#is-there-any-tooling-that-can-help-me-find-dockershim-in-use} 是的! [Docker Socket 检测器 (DDS)][dds] 是一个 kubectl 插件, 你可以安装它用于检查你的集群。 DDS 可以检测运行中的 Kubernetes -工作负载是否将 Docker 引擎套接字 (`docker.sock`) 作为卷挂载。 +工作负载是否将 Docker Engine 套接字 (`docker.sock`) 作为卷挂载。 在 DDS 项目的 [README][dds] 中查找更多详细信息和使用方法。 [dds]: https://github.com/aws-containers/kubectl-detector-for-docker-socket @@ -391,7 +412,7 @@ Find more details and usage patterns in the DDS project's [README][dds]. -### 我可以加入吗? +### 我可以加入吗? {#can-i-have-a-hug} + + +**作者:** Kat Cosgrove + + + +早在 2020 年 12 月,Kubernetes 就宣布[弃用 Dockershim](/zh-cn/blog/2020/12/02/dont-panic-kubernetes-and-docker/)。 +在 Kubernetes 中,dockershim 是一个软件 shim, +它允许你将整个 Docker 引擎用作 Kubernetes 中的容器运行时。 +在即将发布的 v1.24 版本中,我们将移除 Dockershim - +在宣布弃用之后到彻底移除这段时间内,我们至少预留了一年的时间继续支持此功能, +这符合相关的[项目策略](/zh-cn/docs/reference/using-api/deprecation-policy/)。 +如果你是集群操作员,则该指南包含你在此版本中需要了解的实际情况。 +另外还包括你需要做些什么来确保你的集群不会崩溃! + + +## 首先,这对你有影响吗? 
+ + +如果你正在管理自己的集群或不确定此删除是否会影响到你, +请保持安全状态并[检查你对 Docker Engine 是否有依赖](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/)。 +请注意,使用 Docker Desktop 构建应用程序容器并不算是集群对 Docker 有依赖。 +Docker 创建的容器镜像符合 [Open Container Initiative (OCI)](https://opencontainers.org/) 规范, +而 OCI 是 Linux 基金会的一种治理架构,负责围绕容器格式和运行时定义行业标准。 +这些镜像可以在 Kubernetes 支持的任何容器运行时上正常工作。 + + +如果你使用的是云服务提供商管理的 Kubernetes 服务, +并且你确定没有更改过容器运行时,那么你可能不需要做任何事情。 +Amazon EKS、Azure AKS 和 Google GKE 现在都默认使用 containerd, +但如果你的集群中有任何自定义的节点,你要确保它们不需要被更新。 +要检查节点的运行时,请参考[查明节点上所使用的容器运行时](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/)。 + + +无论你是在管理自己的集群还是使用云服务提供商管理的 Kubernetes 服务, +你可能都需要[迁移依赖 Docker Engine 的遥测或安全代理](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/)。 + + +## 我对 Docker 有依赖。现在该怎么办? + + +如果你的 Kubernetes 集群对 Docker Engine 有依赖, +并且你打算升级到 Kubernetes v1.24 版本(出于安全和类似原因,你最终应该这样做), +你需要将容器运行时从 Docker Engine 更改为其他方式或使用 [cri-dockerd](https://github.com/Mirantis/cri-dockerd)。 +由于 [containerd](https://containerd.io/) 是一个已经毕业的 CNCF 项目, +并且是 Docker 本身的运行时,因此用它作为容器运行时的替代方式是一个安全的选择。 +幸运的是,Kubernetes 项目已经以 containerd 为例, +提供了[更改节点容器运行时](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/)的过程文档。 +切换到其它支持的运行时的操作指令与此类似。 + + +## 我想升级 Kubernetes,并且我需要保持与 Docker 作为运行时的兼容性。我有哪些选择? + + +别担心,你不会被冷落,也不必冒着安全风险继续使用旧版本的 Kubernetes。 +Mirantis 和 Docker 已经联合发布并正在维护 dockershim 的替代品。 +这种替代品称为 [cri-dockerd](https://github.com/Mirantis/cri-dockerd)。 +如果你确实需要保持与 Docker 作为运行时的兼容性,请按照项目文档中的说明安装 cri-dockerd。 + + +## 这样就可以了吗? + + + +是的。只要你深入了解此版本所做的变更和你自己集群的详细信息, +并确保与你的开发团队进行清晰的沟通,它的不确定性就会降到最低。 +你可能需要对集群、应用程序代码或脚本进行一些更改,但所有这些要求都已经有说明指导。 +从使用 Docker Engine 作为运行时,切换到使用[其他任何一种支持的容器运行时](/zh-cn/docs/setup/production-environment/container-runtimes/), +这意味着移除了中间层的组件,因为 dockershim 的作用是访问 Docker 本身使用的容器运行时。 +从实际角度长远来看,这种移除对你和 Kubernetes 维护者都更有好处。 + + +如果你仍有疑问,请先查看[弃用 Dockershim 的常见问题](/zh-cn/blog/2022/02/17/dockershim-faq/)。 diff --git a/content/zh/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md b/content/zh-cn/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md similarity index 93% rename from content/zh/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md rename to content/zh-cn/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md index ed73904af3af9..3963d3915a922 100644 --- a/content/zh/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md +++ b/content/zh-cn/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md @@ -67,7 +67,7 @@ the authors succinctly captured the change's impact and encouraged users to rema > Container Runtime Interface (CRI) created for Kubernetes. Docker-produced images > will continue to work in your cluster with all runtimes, as they always have. --> -在文章[别慌: Kubernetes 和 Docker](/zh/blog/2020/12/02/dont-panic-kubernetes-and-docker/) 中, +在文章[别慌: Kubernetes 和 Docker](/zh-cn/blog/2020/12/02/dont-panic-kubernetes-and-docker/) 中, 作者简洁地记述了变化的影响,并鼓励用户保持冷静: >弃用 Docker 这个底层运行时,转而支持符合为 Kubernetes 创建的容器运行接口 >Container Runtime Interface (CRI) 的运行时。 @@ -80,7 +80,7 @@ to container runtimes that are directly compatible with Kubernetes. You can find page in the Kubernetes documentation. 
--> 已经有一些文档指南,提供了关于从 dockershim 迁移到与 Kubernetes 直接兼容的容器运行时的有用信息。 -你可以在 Kubernetes 文档中的[从 dockershim 迁移](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/) +你可以在 Kubernetes 文档中的[从 dockershim 迁移](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/) 页面上找到它们。 有关 Kubernetes 为何不再使用 dockershim 的更多信息, 请参见:[Kubernetes 正在离开 Dockershim](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/) -和[最新的弃用 Dockershim 的常见问题](/zh/blog/2022/02/17/dockershim-faq/)。 +和[最新的弃用 Dockershim 的常见问题](/zh-cn/blog/2022/02/17/dockershim-faq/)。 查看[你的集群准备好使用 v1.24 了吗?](/blog/2022/03/31/ready-for-dockershim-removal/) 一文, 了解如何确保你的集群在从 1.23 版本升级到 1.24 版本后继续工作。 @@ -113,7 +113,7 @@ same API is available and that APIs have a minimum lifetime as indicated by the ## Kubernetes API 删除和弃用流程 {#the-Kubernetes-api-removal-and-deprecation-process} Kubernetes 包含大量随时间演变的组件。在某些情况下,这种演变会导致 API、标志或整个特性被删除。 -为了防止用户面对重大变化,Kubernetes 贡献者采用了一项特性[弃用策略](/zh/docs/reference/using-api/deprecation-policy/)。 +为了防止用户面对重大变化,Kubernetes 贡献者采用了一项特性[弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/)。 此策略确保仅当同一 API 的较新稳定版本可用并且 API 具有以下稳定性级别所指示的最短生命周期时,才可能弃用稳定版本 API: @@ -212,14 +212,14 @@ Docker Engine dependencies. Before upgrading to v1.24, you decide to either rema ## 需要做什么 {#what-to-do} ### 删除 Dockershim {#dockershim-removal} -如前所述,有一些关于从 [dockershim 迁移](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/)的指南。 -你可以[从查明节点上所使用的容器运行时](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/)开始。 +如前所述,有一些关于从 [dockershim 迁移](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/)的指南。 +你可以[从查明节点上所使用的容器运行时](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/)开始。 如果你的节点使用 dockershim,则还有其他可能的 Docker Engine 依赖项, 例如 Pod 或执行 Docker 命令的第三方工具或 Docker 配置文件中的私有注册表。 -你可以按照[检查弃用 Dockershim 对你的影响](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) +你可以按照[检查弃用 Dockershim 对你的影响](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) 的指南来查看可能的 Docker 引擎依赖项。在升级到 1.24 版本之前, 你决定要么继续使用 Docker Engine 并 [将 Docker Engine 节点从 dockershim 迁移到 cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/), -要么迁移到与 CRI 兼容的运行时。这是[将节点上的容器运行时从 Docker Engine 更改为 containerd](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/) 的指南。 +要么迁移到与 CRI 兼容的运行时。这是[将节点上的容器运行时从 Docker Engine 更改为 containerd](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/) 的指南。 ### `kubectl convert` {#kubectl-convert} -kubectl 的 [`kubectl convert`](/zh/docs/tasks/tools/included/kubectl-convert-overview/) +kubectl 的 [`kubectl convert`](/zh-cn/docs/tasks/tools/included/kubectl-convert-overview/) 插件有助于解决弃用 API 的迁移问题。该插件方便了不同 API 版本之间清单的转换, 例如,从弃用的 API 版本到非弃用的 API 版本。关于 API 迁移过程的更多信息可以在 [已弃用 API 的迁移指南](/docs/reference/using-api/deprecation-guide/)中找到。按照 @@ -258,7 +258,7 @@ Kubernetes API 的 beta 版本,这些 API 当前为稳定版。1.25 版本还 -[Kubernetes 1.25 计划移除的 API 的官方列表](/zh/docs/reference/using-api/deprecation-guide/#v1-25)是: +[Kubernetes 1.25 计划移除的 API 的官方列表](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-25)是: * The beta CronJob API (batch/v1beta1) * The beta EndpointSlice API (discovery.k8s.io/v1beta1) @@ -274,7 +274,7 @@ The official [list of API removals planned for Kubernetes 1.26](/docs/reference/ * The beta FlowSchema and PriorityLevelConfiguration APIs (flowcontrol.apiserver.k8s.io/v1beta1) * The 
beta HorizontalPodAutoscaler API (autoscaling/v2beta2) --> -[Kubernetes 1.25 计划移除的 API 的官方列表](/zh/docs/reference/using-api/deprecation-guide/#v1-25)是: +[Kubernetes 1.25 计划移除的 API 的官方列表](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-25)是: * The beta FlowSchema 和 PriorityLevelConfiguration API (flowcontrol.apiserver.k8s.io/v1beta1) * The beta HorizontalPodAutoscaler API (autoscaling/v2beta2) @@ -297,5 +297,5 @@ Kubernetes 发行说明中宣告了弃用信息。你可以在以下版本的发 * 我们将正式宣布 [Kubernetes 1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation) 的弃用信息, 作为该版本 CHANGELOG 的一部分。 -有关弃用和删除过程的信息,请查看 Kubernetes 官方[弃用策略](/zh/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) 文档。 +有关弃用和删除过程的信息,请查看 Kubernetes 官方[弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) 文档。 diff --git a/content/zh/blog/_posts/2022-05-03-dockershim-historical-context.md b/content/zh-cn/blog/_posts/2022-05-03-dockershim-historical-context.md similarity index 67% rename from content/zh/blog/_posts/2022-05-03-dockershim-historical-context.md rename to content/zh-cn/blog/_posts/2022-05-03-dockershim-historical-context.md index 5a994452715ca..1243b46d77313 100644 --- a/content/zh/blog/_posts/2022-05-03-dockershim-historical-context.md +++ b/content/zh-cn/blog/_posts/2022-05-03-dockershim-historical-context.md @@ -21,11 +21,12 @@ So what is the dockershim, and why is it going away? --> **作者:** Kat Cosgrove -自 Kubernetes v1.24 起,Dockershim 已被删除,这对项目来说是一个积极的举措。 -然而,背景对于充分理解某事很重要,无论是社交还是软件开发,这值得更深入的审查。 -除了 Kubernetes v1.24 中的 dockershim 移除之外,我们在社区中看到了一些 -混乱(有时处于恐慌级别)和对这一决定的不满,主要是由于缺乏有关此删除背景的了解。 -弃用并最终从 Kubernetes 中删除 dockershim 的决定并不是迅速或轻率地做出的。 +自 Kubernetes v1.24 起,Dockershim 已被删除,这对项目来说是一个积极的举措。 +然而,背景对于充分理解某事很重要,无论是社交还是软件开发,这值得更深入的审查。 +除了 Kubernetes v1.24 中的 dockershim 移除之外, +我们在社区中看到了一些混乱(有时处于恐慌级别)和对这一决定的不满, +主要是由于缺乏有关此删除背景的了解。弃用并最终从 Kubernetes 中删除 +dockershim 的决定并不是迅速或轻率地做出的。 尽管如此,它已经工作了很长时间,以至于今天的许多用户都比这个决定更新, 更不用提当初为何引入 dockershim 了。 @@ -34,55 +35,57 @@ So what is the dockershim, and why is it going away? 
-在 Kubernetes 的早期,我们只支持一个容器运行时,那个运行时就是 Docker Engine。 -那时,并没有太多其他选择,而 Docker 是使用容器的主要工具,所以这不是一个有争议的选择。 -最终,我们开始添加更多的容器运行时,比如 rkt 和 hypernetes,很明显 Kubernetes 用户 -希望选择最适合他们的运行时。 因此,Kubernetes 需要一种方法来允许集群操作员灵活地使用 -他们选择的任何运行时。 +在 Kubernetes 的早期,我们只支持一个容器运行时,那个运行时就是 Docker Engine。 +那时,并没有太多其他选择,而 Docker 是使用容器的主要工具,所以这不是一个有争议的选择。 +最终,我们开始添加更多的容器运行时,比如 rkt 和 hypernetes,很明显 Kubernetes +用户希望选择最适合他们的运行时。因此,Kubernetes 需要一种方法来允许集群操作员灵活地使用他们选择的任何运行时。 [容器运行时接口](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) (CRI) 已发布以支持这种灵活性。 CRI 的引入对项目和用户来说都很棒,但它确实引入了一个问题:Docker Engine -作为容器运行时的使用早于 CRI,并且 Docker Engine 不兼容 CRI。 为了解决这个问题,在 kubelet 组件 -中引入了一个小型软件 shim (dockershim),专门用于填补 Docker Engine 和 CRI 之间的空白, +作为容器运行时的使用早于 CRI,并且 Docker Engine 不兼容 CRI。 为了解决这个问题,在 kubelet +组件中引入了一个小型软件 shim (dockershim),专门用于填补 Docker Engine 和 CRI 之间的空白, 允许集群操作员继续使用 Docker Engine 作为他们的容器运行时基本上不间断。 -然而,这个小软件 shim 从来没有打算成为一个永久的解决方案。 多年来,它的存在给 kubelet -本身带来了许多不必要的复杂性。 由于这个 shim,Docker 的一些集成实现不一致,导致维护人员 -的负担增加,并且维护特定于供应商的代码不符合我们的开源理念。 为了减少这种维护负担并朝着支 -持开放标准的更具协作性的社区迈进,[引入了 KEP-2221](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221- remove-dockershim), -建议移除 dockershim。 随着 Kubernetes v1.20 的发布,正式弃用。 +然而,这个小软件 shim 从来没有打算成为一个永久的解决方案。 多年来,它的存在给 +kubelet 本身带来了许多不必要的复杂性。由于这个 shim,Docker +的一些集成实现不一致,导致维护人员的负担增加,并且维护特定于供应商的代码不符合我们的开源理念。 +为了减少这种维护负担并朝着支持开放标准的更具协作性的社区迈进, +[引入了 KEP-2221](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim), +建议移除 dockershim。随着 Kubernetes v1.20 的发布,正式弃用。 -我们没有很好地传达这一点,不幸的是,弃用公告在社区内引起了一些恐慌。关于这对 Docker 作为 -一家公司意味着什么,Docker 构建的容器镜像是否仍然可以运行,以及 Docker Engine 究竟是 -什么导致了社交媒体上的一场大火,人们感到困惑。这是我们的错;我们应该更清楚地传达当时发生 -的事情和原因。为了解决这个问题,我们发布了[一篇博客](/zh/blog/2020/12/02/dont-panic-kubernetes-and-docker/) -和[相应的 FAQ](/zh/blog/2020/12/02/dockershim-faq/ ) 以减轻社区的恐惧并纠正对 -Docker 是什么以及容器如何在 Kubernetes 中工作的一些误解。由于社区的关注,Docker 和 Mirantis -共同决定继续以 [cri-dockerd] 的形式支持 dockershim 代码(https://www.mirantis.com/blog/the-future-of-dockershim-is -cri-dockerd/), -允许你在需要时继续使用 Docker Engine 作为容器运行时。对于想要尝试其他运行时(如 containerd 或 cri-o) -的用户,[已编写迁移文档](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/)。 +我们没有很好地传达这一点,不幸的是,弃用公告在社区内引起了一些恐慌。关于这对 +Docker作为一家公司意味着什么,Docker 构建的容器镜像是否仍然可以运行,以及 +Docker Engine 究竟是什么导致了社交媒体上的一场大火,人们感到困惑。 +这是我们的错;我们应该更清楚地传达当时发生的事情和原因。为了解决这个问题, +我们发布了[一篇博客](/zh-cn/blog/2020/12/02/dont-panic-kubernetes-and-docker/)和[相应的 FAQ](/zh-cn/blog/2020/12/02/dockershim-faq/) +以减轻社区的恐惧并纠正对 Docker 是什么以及容器如何在 Kubernetes 中工作的一些误解。 +由于社区的关注,Docker 和 Mirantis 共同决定继续以 +[cri-dockerd](https://www.mirantis.com/blog/the-future-of-dockershim-is-cri-dockerd/) +的形式支持 dockershim 代码,允许你在需要时继续使用 Docker Engine 作为容器运行时。 +对于想要尝试其他运行时(如 containerd 或 cri-o)的用户, +[已编写迁移文档](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/)。 -我们后来[调查了社区](https://kubernetes.io/blog/2021/11/12/are-you-ready-for-dockershim-removal/) -[发现还有很多用户有疑问和顾虑](/zh/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim)。 -作为回应,Kubernetes 维护人员和 CNCF 承诺通过扩展文档和其他程序来解决这些问题。 事实上,这篇博文是 -这个计划的一部分。 随着如此多的最终用户成功迁移到其他运行时,以及改进的文档,我们相信每个人现在都为迁移铺平了道路。 +我们后来[调查了社区](https://kubernetes.io/blog/2021/11/12/are-you-ready-for-dockershim-removal/)[发现还有很多用户有疑问和顾虑](/zh-cn/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim)。 +作为回应,Kubernetes 维护人员和 CNCF 承诺通过扩展文档和其他程序来解决这些问题。 +事实上,这篇博文是这个计划的一部分。随着如此多的最终用户成功迁移到其他运行时,以及改进的文档, +我们相信每个人现在都为迁移铺平了道路。 -Docker 不会消失,无论是作为一种工具还是作为一家公司。 它是云原生社区的重要组成部分, -也是 Kubernetes 项目的历史。 没有他们,我们就不会是现在的样子。 也就是说,从 kubelet -中删除 
dockershim 最终对社区、生态系统、项目和整个开源都有好处。 这是我们所有人齐心协力 -支持开放标准的机会,我们很高兴在 Docker 和社区的帮助下这样做。 \ No newline at end of file +Docker 不会消失,无论是作为一种工具还是作为一家公司。它是云原生社区的重要组成部分, +也是 Kubernetes 项目的历史。没有他们,我们就不会是现在的样子。也就是说,从 kubelet +中删除 dockershim 最终对社区、生态系统、项目和整个开源都有好处。 +这是我们所有人齐心协力支持开放标准的机会,我们很高兴在 Docker 和社区的帮助下这样做。 diff --git a/content/zh-cn/blog/_posts/2022-05-05-volume-expansion-ga.md b/content/zh-cn/blog/_posts/2022-05-05-volume-expansion-ga.md new file mode 100644 index 0000000000000..3a5ae01872a0a --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-05-05-volume-expansion-ga.md @@ -0,0 +1,207 @@ +--- +layout: blog +title: "Kubernetes 1.24:卷扩充现在成为稳定功能" +date: 2022-05-05 +slug: volume-expansion-ga +--- + + + + +**作者:** Hemant Kumar (Red Hat) + +卷扩充在 Kubernetes 1.8 作为 Alpha 功能引入, +在 Kubernetes 1.11 进入了 Beta 阶段。 +在 Kubernetes 1.24 中,我们很高兴地宣布卷扩充正式发布(GA)。 + +此功能允许 Kubernetes 用户简单地编辑其 `PersistentVolumeClaim` 对象, +并在 PVC Spec 中指定新的大小,Kubernetes 将使用存储后端自动扩充卷, +同时也会扩充 Pod 使用的底层文件系统,使得无需任何停机时间成为可能。 + +### 如何使用卷扩充 + +通过编辑 PVC 的 `spec` 字段,指定不同的(和更大的)存储请求, +可以触发 PersistentVolume 的扩充。 +例如,给定以下 PVC: + +```yaml +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: myclaim +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi # 在此处指定新的大小 +``` + +你可以指定新的值来替代旧的 `1Gi` 大小来请求扩充下层 PersistentVolume。 +一旦你更改了请求的大小,可以查看 PVC 的 `status.conditions` 字段, +确认卷大小的调整是否已完成。 + +当 Kubernetes 开始扩充卷时,它会给 PVC 添加 `Resizing` 状况。 +一旦扩充结束,这个状况会被移除。通过监控与 PVC 关联的事件, +还可以获得更多关于扩充操作进度的信息: + +```bash +kubectl describe pvc +``` + +### 存储驱动支持 + +然而,并不是每种卷类型都默认支持扩充。 +某些卷类型(如树内 hostpath 卷)不支持扩充。 +对于 CSI 卷, +CSI 驱动必须在控制器或节点服务(如果合适,二者兼备) +中具有 `EXPAND_VOLUME` 能力。 +请参阅 CSI 驱动的文档,了解其是否支持卷扩充。 + +有关支持卷扩充的树内(intree)卷类型, +请参阅卷扩充文档:[扩充 PVC 申领](/zh-cn/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)。 + + +通常,为了对可扩充的卷提供某种程度的控制, +只有在存储类将 `allowVolumeExpansion` 参数设置为 `true` 时, +动态供应的 PVC 才是可扩充的。 + +Kubernetes 集群管理员必须编辑相应的 StorageClass 对象, +并将 `allowVolumeExpansion` 字段设置为 `true`。例如: + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: gp2-default +provisioner: kubernetes.io/aws-ebs +parameters: + secretNamespace: "" + secretName: "" +allowVolumeExpansion: true +``` + +### 在线扩充与离线扩充比较 + +默认情况下,Kubernetes 会在用户请求调整大小后立即尝试扩充卷。 +如果一个或多个 Pod 正在使用该卷, +Kubernetes 会尝试通过在线调整大小来扩充该卷; +因此,卷扩充通常不需要应用停机。 +节点上的文件系统也可以在线扩充,因此不需要关闭任何正在使用 PVC 的 Pod。 + +如果要扩充的 PersistentVolume 未被使用,Kubernetes 会用离线方式调整卷大小 +(而且,由于该卷未使用,所以也不会造成工作负载中断)。 + +但在某些情况下,如果底层存储驱动只能支持离线扩充, +则 PVC 用户必须先停止 Pod,才能让扩充成功。 +请参阅存储提供商的文档,了解其支持哪种模式的卷扩充。 + +当卷扩充作为 Alpha 功能引入时, +Kubernetes 仅支持在节点上进行离线的文件系统扩充, +因此需要用户重新启动 Pod,才能完成文件系统的大小调整。 +今天,用户的行为已经被改变,无论底层 PersistentVolume 是在线还是离线, +Kubernetes 都会尽最大努力满足任何调整大小的请求。 +如果你的存储提供商支持在线扩充,则无需重启 Pod 即可完成卷扩充。 + +## 下一步 + +尽管卷扩充在最近的 v1.24 发行版中成为了稳定版本, +但 SIG Storage 团队仍然在努力让 Kubernetes 用户扩充其持久性存储变得更简单。 +Kubernetes 1.23 引入了卷扩充失败后触发恢复机制的功能特性, +允许用户在大小调整失败后尝试自助修复。 +更多详细信息,请参阅[处理扩充卷过程中的失败](/zh-cn/docs/concepts/storage/persistent-volumes/#recovering-from-failure-when-expanding-volumes)。 + +Kubernetes 贡献者社区也在讨论有状态(StatefulSet)驱动的存储扩充的潜力。 +这个提议的功能特性将允许用户通过直接编辑 StatefulSet 对象, +触发为 StatefulSet 提供存储的所有底层 PV 的扩充。 +更多详细信息,请参阅[通过 StatefulSet 支持卷扩充](https://github.com/kubernetes/enhancements/issues/661)的改善提议。 diff --git a/content/zh-cn/blog/_posts/2022-05-06-storage-capacity-GA/index.md b/content/zh-cn/blog/_posts/2022-05-06-storage-capacity-GA/index.md new file mode 100644 index 0000000000000..7311d7a25500b --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-05-06-storage-capacity-GA/index.md @@ -0,0 
+1,149 @@ +--- +layout: blog +title: "Kubernetes 1.24 版本中存储容量跟踪特性进入 GA 阶段" +date: 2022-05-06 +slug: storage-capacity-ga +--- + + + + **作者:** Patrick Ohly(Intel) + + +在 Kubernetes v1.24 版本中,[存储容量](/zh-cn/docs/concepts/storage/storage-capacity/)跟踪已经成为一项正式发布的功能。 + + +## 已经解决的问题 + + +如[上一篇关于此功能的博文](/blog/2021/04/14/local-storage-features-go-beta/)中所详细介绍的, +存储容量跟踪允许 CSI 驱动程序发布有关剩余容量的信息。当 Pod 仍然有需要配置的卷时, +kube-scheduler 使用该信息为 Pod 选择合适的节点。 + + +如果没有这些信息,Pod 可能会被卡住,而不会被调度到合适节点,这是因为 kube-scheduler +只能盲目地选择节点。由于 CSI 驱动程序管理的下层存储系统没有足够的容量, +kube-scheduler 常常会选择一个无法为其配置卷的节点。 + + +因为 CSI 驱动程序发布的这些存储容量信息在被使用的时候可能已经不是最新的信息了, +所以最终选择的节点无法正常工作的情况仍然可能会发生。 +卷配置通过通知调度程序需要在其他节点上重试来恢复。 + + +升级到 GA 版本后重新进行的[负载测试](https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/docs/storage-capacity-tracking.md)证实, +集群中部署了存储容量跟踪功能的 Pod 可以使用所有的存储,而没有部署此功能的 Pod 就会被卡住。 + + +## *尚未*解决的问题 + + +如果尝试恢复一个制备失败的卷,存在一个已知的限制: +如果 Pod 使用两个卷并且只能制备其中一个,那么所有将来的调度决策都受到已经制备的卷的限制。 +如果该卷是节点的本地卷,并且另一个卷无法被制备,则 Pod 会卡住。 +此问题早在存储容量跟踪功能之前就存在,虽然苛刻的附加条件使这种情况不太可能发生, +但是无法完全避免,当然每个 Pod 仅使用一个卷的情况除外。 + + +[KEP 草案](https://github.com/kubernetes/enhancements/pull/1703)中提出了一个解决此问题的想法: +已制备但尚未被使用的卷不能包含任何有价值的数据,因此可以在其他地方释放并且再次被制备。 +SIG Storage 正在寻找对此感兴趣并且愿意继续从事此工作的开发人员。 + + +另一个没有解决的问题是 Cluster Autoscaler 对包含卷的 Pod 的支持。 +对于具有存储容量跟踪功能的 CSI 驱动程序,我们开发了一个原型并在此 +[PR](https://github.com/kubernetes/autoscaler/pull/3887) 中进行了讨论。 +此原型旨在与任意 CSI 驱动程序协同工作,但这种灵活性使其难以配置并减慢了扩展操作: +因为自动扩展程序无法模拟卷制备操作,它一次只能将集群扩展一个节点,这是此方案的不足之处。 + + +因此,这个 PR 没有被合入,需要另一种不同的方法,在自动缩放器和 CSI 驱动程序之间实现更紧密的耦合。 +为此,需要更好地了解哪些本地存储 CSI 驱动程序与集群自动缩放结合使用。如果这会引出新的 KEP, +那么用户将不得不在实践中尝试实现,然后才能迁移到 beta 版本或 GA 版本中。 +如果你对此主题感兴趣,请联系 SIG Storage。 + + +## 致谢 + + +非常感谢为此功能做出贡献或提供反馈的 [SIG Scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling)、 +[SIG Autoscaling](https://github.com/kubernetes/community/tree/master/sig-autoscaling) +和 [SIG Storage](https://github.com/kubernetes/community/tree/master/sig-storage) 成员! 
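补充示例(编者注,属示意性质,并非原文内容):存储容量跟踪的工作方式是由 CSI 驱动所带的 external-provisioner 按“存储类 × 拓扑段”发布 `CSIStorageCapacity` 对象,kube-scheduler 在为带有待制备卷的 Pod 选择节点时会参考这些对象;同时,驱动需要在自己的 `CSIDriver` 对象中声明启用容量跟踪。下面的清单片段仅作示意,其中驱动名 `hostpath.csi.k8s.io` 为假设值,请替换为实际使用的驱动:

```yaml
# 驱动通过 CSIDriver 对象声明自己会发布容量信息(storage.k8s.io/v1 自 v1.24 起为 GA)
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: hostpath.csi.k8s.io   # 假设的驱动名
spec:
  storageCapacity: true       # 告诉调度器:可参考该驱动发布的 CSIStorageCapacity 对象
```

要查看集群中已发布的容量对象以及各拓扑段上的剩余容量,可以运行:

```shell
kubectl get csistoragecapacities --all-namespaces
```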
diff --git a/content/zh-cn/blog/_posts/2022-05-13-grpc-probes-in-beta.md b/content/zh-cn/blog/_posts/2022-05-13-grpc-probes-in-beta.md new file mode 100644 index 0000000000000..657f768e31213 --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-05-13-grpc-probes-in-beta.md @@ -0,0 +1,343 @@ +--- +layout: blog +title: "Kubernetes 1.24:gRPC 容器探针功能进入 Beta 阶段" +date: 2022-05-13 +slug: grpc-probes-now-in-beta +--- + + + +**作者**:Sergey Kanzhelev (Google) + + +在 Kubernetes 1.24 中,gRPC 探针(probe)功能进入了 beta 阶段,默认情况下可用。 +现在,你可以为 gRPC 应用程序配置启动、活跃和就绪探测,而无需公开任何 HTTP 端点, +也不需要可执行文件。Kubernetes 可以通过 gRPC 直接连接到你的工作负载并查询其状态。 + + +## 一些历史 + +让管理你的工作负载的系统检查应用程序是否健康、启动是否正常,以及应用程序是否认为自己可以接收流量,是很有用的。 +在添加 gRPC 探针支持之前,Kubernetes 已经允许你通过从容器镜像内部运行可执行文件、发出 HTTP +请求或检查 TCP 连接是否成功来检查健康状况。 + + +对于大多数应用程序来说,这些检查就足够了。如果你的应用程序提供了用于运行状况(或准备就绪)检查的 +gRPC 端点,则很容易重新调整 `exec` 探针的用途,将其用于 gRPC 运行状况检查。 +在博文[在 Kubernetes 上对 gRPC 服务器进行健康检查](/zh-cn/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/)中, +Ahmet Alp Balkan 描述了如何做到这一点 —— 这种机制至今仍在工作。 + + +2018 年 8 月 21 日所[创建](https://github.com/grpc-ecosystem/grpc-health-probe/commit/2df4478982e95c9a57d5fe3f555667f4365c025d)的一种常用工具可以启用此功能, +工具于 [2018 年 9 月 19 日](https://github.com/grpc-ecosystem/grpc-health-probe/releases/tag/v0.1.0-alpha.1)首次发布。 + + +这种 gRPC 应用健康检查的方法非常受欢迎。使用 GitHub 上的基本搜索,发现了带有 `grpc_health_probe` +的 [3,626 个 Dockerfile 文件](https://github.com/search?l=Dockerfile&q=grpc_health_probe&type=code)和 +[6,621 个 yaml 文件](https://github.com/search?l=YAML&q=grpc_health_probe&type=Code)(在撰写本文时)。 +这很好地表明了该工具的受欢迎程度,以及对其本地支持的需求。 + + +Kubernetes v1.23 引入了一个 alpha 质量的实现,原生支持使用 gRPC 查询工作负载状态。 +因为这是一个 alpha 特性,所以在 1.23 版中默认是禁用的。 + + +## 使用该功能 + +我们用与其他探针类似的方式构建了 gRPC 健康检查,相信如果你熟悉 Kubernetes 中的其他探针类型, +它会[很容易使用](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)。 +与涉及 `grpc_health_probe` 可执行文件的解决办法相比,原生支持的健康探针有许多好处。 + + +有了原生 gRPC 支持,你不需要在镜像中下载和携带 `10MB` 的额外可执行文件。 +Exec 探针通常比 gRPC 调用慢,因为它们需要实例化一个新进程来运行可执行文件。 +当 Pod 在最大资源下运行并且在实例化新进程时遇到困难时,它还使得对边界情况的检查变得不那么智能。 + + +不过有一些限制。由于为探针配置客户端证书很难,因此不支持依赖客户端身份验证的服务。 +内置探针也不检查服务器证书,并忽略相关问题。 + + +内置检查也不能配置为忽略某些类型的错误(`grpc_health_probe` 针对不同的错误返回不同的退出代码), +并且不能“串接”以在单个探测中对多个服务运行健康检查。 + + +但是所有这些限制对于 gRPC 来说都是相当标准的,并且有简单的解决方法。 + + +## 自己试试 + +### 集群级设置 + +你现在可以尝试这个功能。要尝试原生 gRPC 探针,你可以自己启动一个启用了 +`GRPCContainerProbe` 特性门控的 Kubernetes 集群,可用的[工具](/zh-cn/docs/tasks/tools/)有很多。 + + +由于特性门控 `GRPCContainerProbe` 在 1.24 版本中是默认启用的,因此许多供应商支持此功能开箱即用。 +因此,你可以在自己选择的平台上创建 1.24 版本集群。一些供应商允许在 1.23 版本集群上启用 alpha 特性。 + + +例如,在编写本文时,你可以在 GKE 上运行测试集群来进行快速测试。 +其他供应商可能也有类似的功能,尤其是当你在 Kubernetes 1.24 版本发布很久后才阅读这篇博客时。 + + +在 GKE 上使用以下命令(注意,版本是 `1.23`,并且指定了 `enable-kubernetes-alpha`)。 + +```shell +gcloud container clusters create test-grpc \ + --enable-kubernetes-alpha \ + --no-enable-autorepair \ + --no-enable-autoupgrade \ + --release-channel=rapid \ + --cluster-version=1.23 +``` + + +你还需要配置 kubectl 来访问集群: + +```shell +gcloud container clusters get-credentials test-grpc +``` + + +### 试用该功能 + +让我们创建 Pod 来测试 gRPC 探针是如何工作的。对于这个测试,我们将使用 `agnhost` 镜像。 +这是一个 k8s 维护的镜像,可用于各种工作负载测试。例如,它有一个有用的 +[grpc-health-checking](https://github.com/kubernetes/kubernetes/blob/b2c5bd2a278288b5ef19e25bf7413ecb872577a4/test/images/agnhost/README.md#grpc-health-checking) +模块,该模块暴露了两个端口:一个是提供健康检查服务的端口,另一个是对 `make-serving` 和 +`make-not-serving` 命令做出反应的 http 端口。 + + +下面是一个 Pod 定义示例。它启用 `grpc-health-checking` 模块,暴露 5000 和 8080 端口,并配置 gRPC 就绪探针: + +``` yaml +--- +apiVersion: v1 +kind: Pod +metadata: + 
name: test-grpc +spec: + containers: + - name: agnhost + image: k8s.gcr.io/e2e-test-images/agnhost:2.35 + command: ["/agnhost", "grpc-health-checking"] + ports: + - containerPort: 5000 + - containerPort: 8080 + readinessProbe: + grpc: + port: 5000 +``` + + +如果文件名为 `test.yaml`,你可以用以下命令创建 Pod,并检查它的状态。如输出片段所示,Pod 将处于就绪状态。 + +```shell +kubectl apply -f test.yaml +kubectl describe test-grpc +``` + + +输出将包含如下内容: + +``` +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True +``` + + +现在让我们将健康检查端点状态更改为 `NOT_SERVING`。为了调用 Pod 的 http 端口,让我们创建一个端口转发: + +```shell +kubectl port-forward test-grpc 8080:8080 +``` + + +你可以用 `curl` 来调用这个命令。 + +```shell +curl http://localhost:8080/make-not-serving +``` + + +几秒钟后,端口状态将切换到未就绪。 + +```shell +kubectl describe pod test-grpc +``` + + +现在的输出将显示: + +``` +Conditions: + Type Status + Initialized True + Ready False + ContainersReady False + PodScheduled True + +... + + Warning Unhealthy 2s (x6 over 42s) kubelet Readiness probe failed: service unhealthy (responded with "NOT_SERVING") +``` + + +一旦切换回来,Pod 将在大约一秒钟后恢复到就绪状态: + +``` bsh +curl http://localhost:8080/make-serving +kubectl describe test-grpc +``` + + +输出表明 Pod 恢复为 `Ready`: + +``` +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True +``` + + +Kubernetes 上这种新的内置 gRPC 健康探测,使得通过 gRPC 实现健康检查比依赖使用额外的 `exec` +探测的旧方法更容易。请阅读官方 +[文档](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe) +了解更多信息并在该功能正式发布(GA)之前提供反馈。 + + +## 总结 + +Kubernetes 是一个流行的工作负载编排平台,我们根据反馈和需求添加功能。 +像 gRPC 探针支持这样的特性是一个小的改进,它将使许多应用程序开发人员的生活更容易,应用程序更有弹性。 +在该功能 GA(正式发布)之前,现在就试试,并给出反馈。 diff --git a/content/zh-cn/blog/_posts/2022-05-16-volume-populators-beta.md b/content/zh-cn/blog/_posts/2022-05-16-volume-populators-beta.md new file mode 100644 index 0000000000000..cb867f983733b --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-05-16-volume-populators-beta.md @@ -0,0 +1,256 @@ +--- +layout: blog +title: "Kubernetes 1.24: 卷填充器功能进入 Beta 阶段" +date: 2022-05-16 +slug: volume-populators-beta +--- + + + +**作者:** +Ben Swartzlander (NetApp) + + +卷填充器功能现在已经经历两个发行版本并进入 Beta 阶段! 
+在 Kubernetes v1.24 中 `AnyVolumeDataSource` 特性门控默认被启用。 +这意味着用户可以指定任何自定义资源作为 PVC 的数据源。 + + +[之前的一篇博客](/blog/2021/08/30/volume-populators-redesigned/)详细介绍了卷填充器功能的工作原理。 +简而言之,集群管理员可以在集群中安装 CRD 和相关的填充器控制器, +任何可以创建 CR 实例的用户都可以利用填充器创建预填充卷。 + + +出于不同的目的,可以一起安装多个填充器。存储 SIG 社区已经有了一些公开的实现,更多原型应该很快就会出现。 + + +**强烈建议**集群管理人员在安装任何填充器之前安装 volume-data-source-validator 控制器和相关的 +`VolumePopulator` CRD,以便用户可以获得有关无效 PVC 数据源的反馈。 + + +## 新功能 + + +构建填充器的 [lib-volume-populator](https://github.com/kubernetes-csi/lib-volume-populator) +库现在包含可帮助操作员监控和检测问题的指标。这个库现在是 beta 阶段,最新版本是 v1.0.1。 + + +[卷数据源校验器](https://github.com/kubernetes-csi/volume-data-source-validator)控制器也添加了指标支持, +处于 beta 阶段。`VolumePopulator` CRD 是 beta 阶段,最新版本是 v1.0.1。 + + +## 尝试一下 + + +要查看它是如何工作的,你可以安装 “hello” 示例填充器并尝试一下。 + + +首先安装 volume-data-source-validator 控制器。 + +```shell +kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/volume-data-source-validator/v1.0.1/client/config/crd/populator.storage.k8s.io_volumepopulators.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/volume-data-source-validator/v1.0.1/deploy/kubernetes/rbac-data-source-validator.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/volume-data-source-validator/v1.0.1/deploy/kubernetes/setup-data-source-validator.yaml +``` + +接下来安装 hello 示例填充器。 + +```shell +kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/lib-volume-populator/v1.0.1/example/hello-populator/crd.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/lib-volume-populator/87a47467b86052819e9ad13d15036d65b9a32fbb/example/hello-populator/deploy.yaml +``` + +你的集群现在有一个新的 CustomResourceDefinition,它提供了一个名为 Hello 的测试 API。 +创建一个 `Hello` 自定义资源的实例,内容如下: + +```yaml +apiVersion: hello.example.com/v1alpha1 +kind: Hello +metadata: + name: example-hello +spec: + fileName: example.txt + fileContents: Hello, world! +``` + +创建一个将该 CR 引用为其数据源的 PVC。 + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: example-pvc +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Mi + dataSourceRef: + apiGroup: hello.example.com + kind: Hello + name: example-hello + volumeMode: Filesystem +``` + +接下来,运行一个读取 PVC 中文件的 Job。 + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + name: example-job +spec: + template: + spec: + containers: + - name: example-container + image: busybox:latest + command: + - cat + - /mnt/example.txt + volumeMounts: + - name: vol + mountPath: /mnt + restartPolicy: Never + volumes: + - name: vol + persistentVolumeClaim: + claimName: example-pvc +``` + +等待 Job 完成(包括其所有依赖项)。 + +```shell +kubectl wait --for=condition=Complete job/example-job +``` + + +最后检查 Job 中的日志。 + +```shell +kubectl logs job/example-job +``` + +输出应该是: + +```terminal +Hello, world! +``` + +请注意,该卷已包含一个文本文件,其中包含来自 CR 的字符串内容。这只是最简单的例子。 +实际填充器可以将卷设置为包含任意内容。 + + +## 如何编写自己的卷填充器 + + +鼓励有兴趣编写新的填充器的开发人员使用 +[lib-volume-populator](https://github.com/kubernetes-csi/lib-volume-populator) 库, +只提供一个小型控制器,以及一个能够连接到卷并向卷写入适当数据的 Pod 镜像。 + + +单个填充器非常通用,它们可以与所有类型的 PVC 一起使用, +或者如果卷是来自同一供应商的特定 CSI 驱动程序供应的, +它们可以执行供应商特定的的操作以快速用数据填充卷,例如,通过通信直接使用该卷的存储。 + + +## 我怎样才能了解更多? + + +增强提案, +[卷填充器](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1495-volume-populators), +包含有关此功能的历史和技术实现的许多详细信息。 + + +[卷填充器与数据源](/zh-cn/docs/concepts/storage/persistent-volumes/#volume-populators-and-data-sources), +在有关持久卷的文档主题中,解释了如何在集群中使用此功能。 + + +请加入 Kubernetes 的存储 SIG,帮助我们增强这一功能。这里已经有很多好的主意了,我们很高兴能有更多! 
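补充示意(编者注,非原文内容):要让 volume-data-source-validator 把某种自定义资源识别为合法的 PVC 数据源,需要为它注册一个集群范围的 `VolumePopulator` 对象。下面以上文的 hello 示例为参照给出一个示意清单;如果 hello 填充器的安装清单中已经包含了等效对象,则无需重复创建,编写自己的填充器时请把 `group` 和 `kind` 换成自己的 API 组与类型:

```yaml
apiVersion: populator.storage.k8s.io/v1beta1
kind: VolumePopulator
metadata:
  name: hello-populator        # 注册对象的名字,可自行选择
sourceKind:
  group: hello.example.com     # 与 Hello CRD 的 API 组保持一致
  kind: Hello                  # 允许出现在 PVC dataSourceRef 中的资源类型
```

注册之后,`dataSourceRef` 指向 `Hello` 资源的 PVC 会被校验器放行;如果 PVC 引用了未注册的类型,校验器会通过事件给出“数据源无效”的反馈,这也正是前文建议先安装校验器的原因。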
+ diff --git a/content/zh-cn/blog/_posts/2022-05-18-prevent-unauthorised-volume-mode-conversion.md b/content/zh-cn/blog/_posts/2022-05-18-prevent-unauthorised-volume-mode-conversion.md new file mode 100644 index 0000000000000..b4645b030d610 --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-05-18-prevent-unauthorised-volume-mode-conversion.md @@ -0,0 +1,203 @@ +--- +layout: blog +title: 'Kubernetes 1.24: 防止未经授权的卷模式转换' +date: 2022-05-18 +slug: prevent-unauthorised-volume-mode-conversion-alpha +--- + + + + +**作者:** Raunak Pradip Shah (Mirantis) + + +Kubernetes v1.24 引入了一个新的 alpha 级特性,可以防止未经授权的用户修改基于 Kubernetes +集群中已有的 [`VolumeSnapshot`](/zh-cn/docs/concepts/storage/volume-snapshots/) +创建的 [`PersistentVolumeClaim`](/zh-cn/docs/concepts/storage/persistent-volumes/) 的卷模式。 + + +### 问题 + + +[卷模式](/zh-cn/docs/concepts/storage/persistent-volumes/#volume-mode)确定卷是格式化为文件系统还是显示为原始块设备。 + + +用户可以使用自 Kubernetes v1.20 以来就稳定的 `VolumeSnapshot` 功能, +基于 Kubernetes 集群中的已有的 `VolumeSnapshot` 创建一个 `PersistentVolumeClaim` (简称 PVC )。 +PVC 规约包括一个 `dataSource` 字段,它可以指向一个已有的 `VolumeSnapshot` 实例。 +查阅[基于卷快照创建 PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#create-persistent-volume-claim-from-volume-snapshot) +获取更多详细信息。 + + +当使用上述功能时,没有逻辑来验证快照所在的原始卷的模式是否与新创建的卷的模式匹配。 + + +这引起了一个安全漏洞,允许恶意用户潜在地利用主机操作系统中的未知漏洞。 + + +为了提高效率,许多流行的存储备份供应商在备份操作过程中转换卷模式, +这使得 Kubernetes 无法完全阻止该操作,并在区分受信任用户和恶意用户方面带来挑战。 + + +### 防止未经授权的用户转换卷模式 + + +在这种情况下,授权用户是指有权对 `VolumeSnapshotContents`(集群级资源)执行 `Update` +或 `Patch` 操作的用户。集群管理员只能向受信任的用户或应用程序(如备份供应商)提供这些权限。 + + +如果在 `snapshot-controller`、`snapshot-validation-webhook` 和 +`external-provisioner` 中[启用](https://kubernetes-csi.github.io/docs/)了这个 alpha +特性,则基于 `VolumeSnapshot` 创建 PVC 时,将不允许未经授权的用户修改其卷模式。 + + +如要转换卷模式,授权用户必须执行以下操作: + + +1. 确定要用作给定命名空间中新创建 PVC 的数据源的 `VolumeSnapshot`。 +2. 确定绑定到上面 `VolumeSnapshot` 的 `VolumeSnapshotContent`。 + + ``` + kubectl get volumesnapshot -n + ``` + +3. 给 `VolumeSnapshotContent` 添加 + [`snapshot.storage.kubernetes.io/allowVolumeModeChange`](/zh-cn/docs/reference/labels-annotations-taints/#snapshot-storage-kubernetes-io-allowvolumemodechange) + 注解。 + + +4. 此注解可通过软件添加或由授权用户手动添加。`VolumeSnapshotContent` 注解必须类似于以下清单片段: + + ```yaml + kind: VolumeSnapshotContent + metadata: + annotations: + - snapshot.storage.kubernetes.io/allowVolumeModeChange: "true" + ... + ``` + +**注意**:对于预先制备的 `VolumeSnapshotContents`,你必须采取额外的步骤设置 `spec.sourceVolumeMode` +字段为 `Filesystem` 或 `Block`,这取决于快照所在卷的模式。 + + +如下为一个示例: + +```yaml + apiVersion: snapshot.storage.k8s.io/v1 + kind: VolumeSnapshotContent + metadata: + annotations: + - snapshot.storage.kubernetes.io/allowVolumeModeChange: "true" + name: new-snapshot-content-test + spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 + sourceVolumeMode: Filesystem + volumeSnapshotRef: + name: new-snapshot-test + namespace: default +``` + + +对于在备份或恢复操作期间需要转换卷模式的所有 `VolumeSnapshotContents`,重复步骤 1 到 3。 + + +如果 `VolumeSnapshotContent` 对象上存在上面步骤 4 中显示的注解,Kubernetes 将不会阻止转换卷模式。 +用户在尝试将注解添加到任何 `VolumeSnapshotContent` 之前,应该记住这一点。 + + +### 接下来 + + +[启用此特性](https://kubernetes-csi.github.io/docs/)并让我们知道你的想法! 
+ + +我们希望此功能不会中断现有工作流程,同时防止恶意用户利用集群中的安全漏洞。 + + +若有任何问题,请在 #sig-storage slack 频道中创建一个会话, +或在 CSI 外部快照存储[仓库](https://github.com/kubernetes-csi/external-snapshotter)中报告一个 issue。 \ No newline at end of file diff --git a/content/zh-cn/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md b/content/zh-cn/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md new file mode 100644 index 0000000000000..b3b8b9cb4bb91 --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md @@ -0,0 +1,268 @@ +--- +layout: blog +title: "Kubernetes 1.24: 避免为 Services 分配 IP 地址时发生冲突" +date: 2022-05-23 +slug: service-ip-dynamic-and-static-allocation +--- + + + +**作者:** Antonio Ojea (Red Hat) + + +在 Kubernetes 中,[Services](/zh-cn/docs/concepts/services-networking/service/) +是一种抽象,用来暴露运行在一组 Pod 上的应用。 +Service 可以有一个集群范围的虚拟 IP 地址(使用 `type: ClusterIP` 的 Service)。 +客户端可以使用该虚拟 IP 地址进行连接, Kubernetes 为对该 Service 的访问流量提供负载均衡,以访问不同的后端 Pod。 + + +## Service ClusterIP 是如何分配的? + + +Service `ClusterIP` 有如下分配方式: + + +**动态** +:群集的控制平面会自动从配置的 IP 范围内为 `type:ClusterIP` 的 Service 选择一个空闲 IP 地址。 + + +**静态** +:你可以指定一个来自 Service 配置的 IP 范围内的 IP 地址。 + + +在整个集群中,每个 Service 的 `ClusterIP` 必须是唯一的。 +尝试创建一个已经被分配了的 `ClusterIP` 的 Service 将会返回错误。 + + +## 为什么需要预留 Service Cluster IP? + + +有时,你可能希望让 Service 运行在众所周知的 IP 地址上,以便集群中的其他组件和用户可以使用它们。 + + +最好的例子是集群的 DNS Service。一些 Kubernetes 安装程序将 Service IP 范围中的第 10 个地址分配给 DNS Service。 +假设你配置集群 Service IP 范围是 10.96.0.0/16,并且希望 DNS Service IP 为 10.96.0.10, +那么你必须创建一个如下所示的 Service: + +```yaml +apiVersion: v1 +kind: Service +metadata: + labels: + k8s-app: kube-dns + kubernetes.io/cluster-service: "true" + kubernetes.io/name: CoreDNS + name: kube-dns + namespace: kube-system +spec: + clusterIP: 10.96.0.10 + ports: + - name: dns + port: 53 + protocol: UDP + targetPort: 53 + - name: dns-tcp + port: 53 + protocol: TCP + targetPort: 53 + selector: + k8s-app: kube-dns + type: ClusterIP +``` + + +但正如我之前解释的,IP 地址 10.96.0.10 没有被保留; +如果其他 Service 在动态分配之前创建或与动态分配并行创建,则它们有可能分配此 IP 地址, +因此,你将无法创建 DNS Service,因为它将因冲突错误而失败。 + + +## 如何避免 Service ClusterIP 冲突? 
{#avoid-ClusterIP-conflict}
+
+
+在 Kubernetes 1.24 中,你可以启用一个新的特性门控 `ServiceIPStaticSubrange`。
+启用此特性允许你为 Service 使用不同的 IP 分配策略,减少冲突的风险。
+
+
+`ClusterIP` 范围将根据公式 `min(max(16, cidrSize / 16), 256)` 进行划分,
+该公式可以描述为:静态地址段的大小不小于 16 且不大于 256,并在两者之间随 CIDR 大小按步进(Graduated Step)取值。
+
+
+分配默认使用上半段地址,当上半段地址耗尽后,将使用下半段地址范围。
+这将允许用户使用下半段地址中静态分配的地址并且降低冲突的风险。
+
+
+举例:
+
+
+#### Service IP CIDR 地址段: 10.96.0.0/24
+
+
+地址段大小:2^8 - 2 = 254
+地址段偏移:`min(max(16,256/16),256)` = `min(16,256)` = 16
+静态地址段起点:10.96.0.1
+静态地址段终点:10.96.0.16
+地址范围终点:10.96.0.254
+
+
+{{< mermaid >}}
+pie showData
+title 10.96.0.0/24
+"静态" : 16
+"动态" : 238
+{{< /mermaid >}}
+
+
+#### Service IP CIDR 地址段: 10.96.0.0/20
+
+
+地址段大小:2^12 - 2 = 4094
+地址段偏移:`min(max(16,4096/16),256)` = `min(256,256)` = 256
+静态地址段起点:10.96.0.1
+静态地址段终点:10.96.1.0
+地址范围终点:10.96.15.254
+
+
+{{< mermaid >}}
+pie showData
+title 10.96.0.0/20
+"静态" : 256
+"动态" : 3838
+{{< /mermaid >}}
+
+
+#### Service IP CIDR 地址段: 10.96.0.0/16
+
+
+地址段大小:2^16 - 2 = 65534
+地址段偏移:`min(max(16,65536/16),256)` = `min(4096,256)` = 256
+静态地址段起点:10.96.0.1
+静态地址段终点:10.96.1.0
+地址范围终点:10.96.255.254
+
+
+{{< mermaid >}}
+pie showData
+title 10.96.0.0/16
+"静态" : 256
+"动态" : 65278
+{{< /mermaid >}}
+
+
+## 加入 SIG Network
+
+
+当前 SIG-Network 在 GitHub 上的 [KEPs](https://github.com/orgs/kubernetes/projects/10) 和
+[issues](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Asig%2Fnetwork)
+表明了该 SIG 的重点领域。
+
+
+[SIG Network 会议](https://github.com/kubernetes/community/tree/master/sig-network)是一个友好、热情的场所,
+你可以与社区联系并分享你的想法。期待你的回音!
diff --git a/content/zh-cn/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md b/content/zh-cn/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md
new file mode 100644
index 0000000000000..a0565b93f8191
--- /dev/null
+++ b/content/zh-cn/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md
@@ -0,0 +1,239 @@
+---
+layout: blog
+title: 'Kubernetes 1.24: StatefulSet 的最大不可用副本数'
+date: 2022-05-27
+slug: maxunavailable-for-statefulset
+---
+
+**作者:** Mayank Kumar (Salesforce)
+
+
+Kubernetes [StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/),
+自 1.5 版本中引入并在 1.9 版本中变得稳定以来,已被广泛用于运行有状态应用。它提供固定的 Pod 身份标识、
+每个 Pod 的持久存储以及 Pod 的有序部署、扩缩容和滚动更新功能。你可以将 StatefulSet
+视为运行复杂有状态应用程序的原子构建块。随着 Kubernetes 的使用增多,需要 StatefulSet 的场景也越来越多。
+当 StatefulSet 的 Pod 管理策略为 `OrderedReady` 时,其中许多场景需要比当前所支持的一次一个 Pod
+的更新更快的滚动更新。
+
+
+这里有些例子:
+
+- 我使用 StatefulSet 来编排一个基于缓存的多实例应用程序,其中缓存的规格很大。
+  缓存冷启动,需要相当长的时间才能启动容器。所需要的初始启动任务有很多。在应用程序完全更新之前,
+  此 StatefulSet 上的 RollingUpdate 将花费大量时间。如果 StatefulSet 支持一次更新多个 Pod,
+  那么更新速度会快得多。
+
+
+- 我的有状态应用程序由 leader 和 follower 或者一个 writer 和多个 reader 组成。
+  我有多个 reader 或 follower,并且我的应用程序可以容忍多个 Pod 同时出现故障。
+  我想一次更新这个应用程序的多个 Pod,特别是当我的应用程序实例数量很多时,这样我就能快速推出新的更新。
+  注意,我的应用程序仍然需要每个 Pod 具有唯一标识。
+
+
+为了支持这样的场景,Kubernetes 1.24 提供了一个新的 alpha 特性。在使用新特性之前,必须启用
+`MaxUnavailableStatefulSet` 特性标志。一旦启用,就可以指定一个名为 `maxUnavailable` 的新字段,
+这是 StatefulSet `spec` 的一部分。例如:
+
+```
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: web
+  namespace: default
+spec:
+  podManagementPolicy: OrderedReady # 你必须设为 OrderedReady
+  replicas: 5
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - image: k8s.gcr.io/nginx-slim:0.8
+        imagePullPolicy: IfNotPresent
+        name: nginx
+  updateStrategy:
+    rollingUpdate:
+      maxUnavailable: 2 # 这是 alpha 特性的字段,默认值是 1
+      partition: 0
+    type: RollingUpdate
+```
+
+如果你启用了新特性,但没有在 StatefulSet 中指定 `maxUnavailable` 的值,Kubernetes
+会默认设置 `maxUnavailable: 
1`。这与你不启用新特性时看到的行为是一致的。 + + +我将基于该示例清单做场景演练,以演示此特性是如何工作的。我将部署一个有 5 个副本的 StatefulSet, +`maxUnavailable` 设置为 2 并将 `partition` 设置为 0。 + + +我可以通过将镜像更改为 `k8s.gcr.io/nginx-slim:0.9` 来触发滚动更新。一旦开始滚动更新, +就可以看到一次更新 2 个 Pod,因为 `maxUnavailable` 的当前值是 2。 +下面的输出显示了一个时间段内的结果,但并不是完整过程。`maxUnavailable` 可以是绝对数值(例如 2)或所需 Pod +的百分比(例如 10%),绝对数是通过百分比计算结果进行四舍五入得出的。 + +``` +kubectl get pods --watch +``` + +``` +NAME READY STATUS RESTARTS AGE +web-0 1/1 Running 0 85s +web-1 1/1 Running 0 2m6s +web-2 1/1 Running 0 106s +web-3 1/1 Running 0 2m47s +web-4 1/1 Running 0 2m27s +web-4 1/1 Terminating 0 5m43s ----> start terminating 4 +web-3 1/1 Terminating 0 6m3s ----> start terminating 3 +web-3 0/1 Terminating 0 6m7s +web-3 0/1 Pending 0 0s +web-3 0/1 Pending 0 0s +web-4 0/1 Terminating 0 5m48s +web-4 0/1 Terminating 0 5m48s +web-3 0/1 ContainerCreating 0 2s +web-3 1/1 Running 0 2s +web-4 0/1 Pending 0 0s +web-4 0/1 Pending 0 0s +web-4 0/1 ContainerCreating 0 0s +web-4 1/1 Running 0 1s +web-2 1/1 Terminating 0 5m46s ----> start terminating 2 (only after both 4 and 3 are running) +web-1 1/1 Terminating 0 6m6s ----> start terminating 1 +web-2 0/1 Terminating 0 5m47s +web-1 0/1 Terminating 0 6m7s +web-1 0/1 Pending 0 0s +web-1 0/1 Pending 0 0s +web-1 0/1 ContainerCreating 0 1s +web-1 1/1 Running 0 2s +web-2 0/1 Pending 0 0s +web-2 0/1 Pending 0 0s +web-2 0/1 ContainerCreating 0 0s +web-2 1/1 Running 0 1s +web-0 1/1 Terminating 0 6m6s ----> start terminating 0 (only after 2 and 1 are running) +web-0 0/1 Terminating 0 6m7s +web-0 0/1 Pending 0 0s +web-0 0/1 Pending 0 0s +web-0 0/1 ContainerCreating 0 0s +web-0 1/1 Running 0 1s +``` + +注意,滚动更新一开始,4 和 3(两个最高序号的 Pod)同时开始进入 `Terminating` 状态。 +Pod 4 和 3 会按照自身节奏进行更新。一旦 Pod 4 和 3 更新完毕后,Pod 2 和 1 会同时进入 +`Terminating` 状态。当 Pod 2 和 1 都准备完毕处于 `Running` 状态时,Pod 0 开始进入 `Terminating` 状态 + + +在 Kubernetes 中,StatefulSet 更新 Pod 时遵循严格的顺序。在此示例中,更新从副本 4 开始, +然后是副本 3,然后是副本 2,以此类推,一次更新一个 Pod。当一次只更新一个 Pod 时, +副本 3 不可能在副本 4 之前准备好进入 `Running` 状态。当 `maxUnavailable` 值 +大于 1 时(在示例场景中我设置 `maxUnavailable` 值为 2),副本 3 可能在副本 4 之前准备好并运行, +这是没问题的。如果你是开发人员并且设置 `maxUnavailable` 值大于 1,你应该知道可能出现这种情况, +并且如果有这种情况的话,你必须确保你的应用程序能够处理发生的此类顺序问题。当你设置 `maxUnavailable` +值大于 1 时,更新 Pod 的批次之间会保证顺序。该保证意味着在批次 0(副本 4 和 3)中的 Pod +准备好之前,更新批次 2(副本 2 和 1)中的 Pod 无法开始更新。 + + +尽管 Kubernetes 将这些称为**副本**,但你的有状态应用程序可能不这样理解,StatefulSet 的每个 +Pod 可能持有与其他 Pod 完全不同的数据。重要的是,StatefulSet 的更新是分批进行的, +你现在让批次大小大于 1(作为 alpha 特性)。 + + +还要注意,上面的行为采用的 Pod 管理策略是 `podManagementPolicy: OrderedReady`。 +如果你的 StatefulSet 的 Pod 管理策略是 `podManagementPolicy: Parallel`, +那么不仅是 `maxUnavailable` 数量的副本同时被终止,还会导致 `maxUnavailable` 数量的副本同时在 +`ContainerCreating` 阶段。这就是所谓的突发(Bursting)。 + + +因此,现在你可能有很多关于以下方面的问题: +- 当设置 `podManagementPolicy:Parallel` 时,会产生什么行为? +- 将 `partition` 设置为非 `0` 值时会发生什么? 
+ + +自己试试看可能会更好。这是一个 alpha 特性,Kubernetes 贡献者正在寻找有关此特性的反馈。 +这是否有助于你实现有状态的场景?你是否发现了一个 bug,或者你认为实现的行为不直观易懂, +或者它可能会破坏应用程序或让他们感到吃惊?请[登记一个 issue](https://github.com/kubernetes/kubernetes/issues) +告知我们。 + + +## 进一步阅读和后续步骤 {#next-steps} +- [最多不可用 Pod 数](/zh-cn/docs/concepts/workloads/controllers/statefulset/#maximum-unavailable-pods) +- [KEP for MaxUnavailable for StatefulSet](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/961-maxunavailable-for-statefulset) +- [代码实现](https://github.com/kubernetes/kubernetes/pull/82162/files) +- [增强跟踪 Issue](https://github.com/kubernetes/enhancements/issues/961) \ No newline at end of file diff --git a/content/zh/case-studies/_index.html b/content/zh-cn/case-studies/_index.html similarity index 100% rename from content/zh/case-studies/_index.html rename to content/zh-cn/case-studies/_index.html diff --git a/content/zh/case-studies/adform/adform_featured_logo.png b/content/zh-cn/case-studies/adform/adform_featured_logo.png similarity index 100% rename from content/zh/case-studies/adform/adform_featured_logo.png rename to content/zh-cn/case-studies/adform/adform_featured_logo.png diff --git a/content/zh/case-studies/adform/index.html b/content/zh-cn/case-studies/adform/index.html similarity index 100% rename from content/zh/case-studies/adform/index.html rename to content/zh-cn/case-studies/adform/index.html diff --git a/content/zh/case-studies/adidas/adidas-featured.svg b/content/zh-cn/case-studies/adidas/adidas-featured.svg similarity index 100% rename from content/zh/case-studies/adidas/adidas-featured.svg rename to content/zh-cn/case-studies/adidas/adidas-featured.svg diff --git a/content/zh/case-studies/adidas/index.html b/content/zh-cn/case-studies/adidas/index.html similarity index 100% rename from content/zh/case-studies/adidas/index.html rename to content/zh-cn/case-studies/adidas/index.html diff --git a/content/zh/case-studies/amadeus/amadeus_featured.png b/content/zh-cn/case-studies/amadeus/amadeus_featured.png similarity index 100% rename from content/zh/case-studies/amadeus/amadeus_featured.png rename to content/zh-cn/case-studies/amadeus/amadeus_featured.png diff --git a/content/zh/case-studies/amadeus/amadeus_logo.png b/content/zh-cn/case-studies/amadeus/amadeus_logo.png similarity index 100% rename from content/zh/case-studies/amadeus/amadeus_logo.png rename to content/zh-cn/case-studies/amadeus/amadeus_logo.png diff --git a/content/zh/case-studies/amadeus/index.html b/content/zh-cn/case-studies/amadeus/index.html similarity index 100% rename from content/zh/case-studies/amadeus/index.html rename to content/zh-cn/case-studies/amadeus/index.html diff --git a/content/zh/case-studies/ancestry/ancestry_featured.png b/content/zh-cn/case-studies/ancestry/ancestry_featured.png similarity index 100% rename from content/zh/case-studies/ancestry/ancestry_featured.png rename to content/zh-cn/case-studies/ancestry/ancestry_featured.png diff --git a/content/zh/case-studies/ancestry/ancestry_logo.png b/content/zh-cn/case-studies/ancestry/ancestry_logo.png similarity index 100% rename from content/zh/case-studies/ancestry/ancestry_logo.png rename to content/zh-cn/case-studies/ancestry/ancestry_logo.png diff --git a/content/zh/case-studies/ancestry/index.html b/content/zh-cn/case-studies/ancestry/index.html similarity index 100% rename from content/zh/case-studies/ancestry/index.html rename to content/zh-cn/case-studies/ancestry/index.html diff --git 
a/content/zh/case-studies/ant-financial/ant-financial_featured_logo.png b/content/zh-cn/case-studies/ant-financial/ant-financial_featured_logo.png similarity index 100% rename from content/zh/case-studies/ant-financial/ant-financial_featured_logo.png rename to content/zh-cn/case-studies/ant-financial/ant-financial_featured_logo.png diff --git a/content/zh/case-studies/ant-financial/index.html b/content/zh-cn/case-studies/ant-financial/index.html similarity index 100% rename from content/zh/case-studies/ant-financial/index.html rename to content/zh-cn/case-studies/ant-financial/index.html diff --git a/content/zh/case-studies/appdirect/appdirect_featured_logo.png b/content/zh-cn/case-studies/appdirect/appdirect_featured_logo.png similarity index 100% rename from content/zh/case-studies/appdirect/appdirect_featured_logo.png rename to content/zh-cn/case-studies/appdirect/appdirect_featured_logo.png diff --git a/content/zh/case-studies/appdirect/index.html b/content/zh-cn/case-studies/appdirect/index.html similarity index 100% rename from content/zh/case-studies/appdirect/index.html rename to content/zh-cn/case-studies/appdirect/index.html diff --git a/content/zh/case-studies/babylon/babylon_featured_logo.png b/content/zh-cn/case-studies/babylon/babylon_featured_logo.png similarity index 100% rename from content/zh/case-studies/babylon/babylon_featured_logo.png rename to content/zh-cn/case-studies/babylon/babylon_featured_logo.png diff --git a/content/zh/case-studies/babylon/babylon_featured_logo.svg b/content/zh-cn/case-studies/babylon/babylon_featured_logo.svg similarity index 100% rename from content/zh/case-studies/babylon/babylon_featured_logo.svg rename to content/zh-cn/case-studies/babylon/babylon_featured_logo.svg diff --git a/content/zh/case-studies/babylon/index.html b/content/zh-cn/case-studies/babylon/index.html similarity index 100% rename from content/zh/case-studies/babylon/index.html rename to content/zh-cn/case-studies/babylon/index.html diff --git a/content/zh/case-studies/blablacar/blablacar_featured.png b/content/zh-cn/case-studies/blablacar/blablacar_featured.png similarity index 100% rename from content/zh/case-studies/blablacar/blablacar_featured.png rename to content/zh-cn/case-studies/blablacar/blablacar_featured.png diff --git a/content/zh/case-studies/blablacar/blablacar_logo.png b/content/zh-cn/case-studies/blablacar/blablacar_logo.png similarity index 100% rename from content/zh/case-studies/blablacar/blablacar_logo.png rename to content/zh-cn/case-studies/blablacar/blablacar_logo.png diff --git a/content/zh/case-studies/blablacar/index.html b/content/zh-cn/case-studies/blablacar/index.html similarity index 100% rename from content/zh/case-studies/blablacar/index.html rename to content/zh-cn/case-studies/blablacar/index.html diff --git a/content/zh/case-studies/blackrock/blackrock_featured.png b/content/zh-cn/case-studies/blackrock/blackrock_featured.png similarity index 100% rename from content/zh/case-studies/blackrock/blackrock_featured.png rename to content/zh-cn/case-studies/blackrock/blackrock_featured.png diff --git a/content/zh/case-studies/blackrock/blackrock_logo.png b/content/zh-cn/case-studies/blackrock/blackrock_logo.png similarity index 100% rename from content/zh/case-studies/blackrock/blackrock_logo.png rename to content/zh-cn/case-studies/blackrock/blackrock_logo.png diff --git a/content/zh/case-studies/blackrock/index.html b/content/zh-cn/case-studies/blackrock/index.html similarity index 100% rename from 
content/zh/case-studies/blackrock/index.html rename to content/zh-cn/case-studies/blackrock/index.html diff --git a/content/zh/case-studies/booking-com/booking.com_featured_logo.png b/content/zh-cn/case-studies/booking-com/booking.com_featured_logo.png similarity index 100% rename from content/zh/case-studies/booking-com/booking.com_featured_logo.png rename to content/zh-cn/case-studies/booking-com/booking.com_featured_logo.png diff --git a/content/zh/case-studies/booking-com/booking.com_featured_logo.svg b/content/zh-cn/case-studies/booking-com/booking.com_featured_logo.svg similarity index 100% rename from content/zh/case-studies/booking-com/booking.com_featured_logo.svg rename to content/zh-cn/case-studies/booking-com/booking.com_featured_logo.svg diff --git a/content/zh/case-studies/booking-com/index.html b/content/zh-cn/case-studies/booking-com/index.html similarity index 100% rename from content/zh/case-studies/booking-com/index.html rename to content/zh-cn/case-studies/booking-com/index.html diff --git a/content/zh/case-studies/booz-allen/booz-allen-featured-logo.svg b/content/zh-cn/case-studies/booz-allen/booz-allen-featured-logo.svg similarity index 100% rename from content/zh/case-studies/booz-allen/booz-allen-featured-logo.svg rename to content/zh-cn/case-studies/booz-allen/booz-allen-featured-logo.svg diff --git a/content/zh/case-studies/booz-allen/booz-allen_featured_logo.png b/content/zh-cn/case-studies/booz-allen/booz-allen_featured_logo.png similarity index 100% rename from content/zh/case-studies/booz-allen/booz-allen_featured_logo.png rename to content/zh-cn/case-studies/booz-allen/booz-allen_featured_logo.png diff --git a/content/zh/case-studies/booz-allen/index.html b/content/zh-cn/case-studies/booz-allen/index.html similarity index 100% rename from content/zh/case-studies/booz-allen/index.html rename to content/zh-cn/case-studies/booz-allen/index.html diff --git a/content/zh/case-studies/bose/bose_featured_logo.png b/content/zh-cn/case-studies/bose/bose_featured_logo.png similarity index 100% rename from content/zh/case-studies/bose/bose_featured_logo.png rename to content/zh-cn/case-studies/bose/bose_featured_logo.png diff --git a/content/zh/case-studies/bose/index.html b/content/zh-cn/case-studies/bose/index.html similarity index 100% rename from content/zh/case-studies/bose/index.html rename to content/zh-cn/case-studies/bose/index.html diff --git a/content/zh/case-studies/box/box_featured.png b/content/zh-cn/case-studies/box/box_featured.png similarity index 100% rename from content/zh/case-studies/box/box_featured.png rename to content/zh-cn/case-studies/box/box_featured.png diff --git a/content/zh/case-studies/box/box_logo.png b/content/zh-cn/case-studies/box/box_logo.png similarity index 100% rename from content/zh/case-studies/box/box_logo.png rename to content/zh-cn/case-studies/box/box_logo.png diff --git a/content/zh/case-studies/box/box_small.png b/content/zh-cn/case-studies/box/box_small.png similarity index 100% rename from content/zh/case-studies/box/box_small.png rename to content/zh-cn/case-studies/box/box_small.png diff --git a/content/zh/case-studies/box/index.html b/content/zh-cn/case-studies/box/index.html similarity index 100% rename from content/zh/case-studies/box/index.html rename to content/zh-cn/case-studies/box/index.html diff --git a/content/zh/case-studies/box/video.png b/content/zh-cn/case-studies/box/video.png similarity index 100% rename from content/zh/case-studies/box/video.png rename to content/zh-cn/case-studies/box/video.png diff 
--git a/content/zh/case-studies/buffer/buffer_featured.png b/content/zh-cn/case-studies/buffer/buffer_featured.png similarity index 100% rename from content/zh/case-studies/buffer/buffer_featured.png rename to content/zh-cn/case-studies/buffer/buffer_featured.png diff --git a/content/zh/case-studies/buffer/buffer_logo.png b/content/zh-cn/case-studies/buffer/buffer_logo.png similarity index 100% rename from content/zh/case-studies/buffer/buffer_logo.png rename to content/zh-cn/case-studies/buffer/buffer_logo.png diff --git a/content/zh/case-studies/buffer/index.html b/content/zh-cn/case-studies/buffer/index.html similarity index 100% rename from content/zh/case-studies/buffer/index.html rename to content/zh-cn/case-studies/buffer/index.html diff --git a/content/zh/case-studies/capital-one/capitalone_featured_logo.png b/content/zh-cn/case-studies/capital-one/capitalone_featured_logo.png similarity index 100% rename from content/zh/case-studies/capital-one/capitalone_featured_logo.png rename to content/zh-cn/case-studies/capital-one/capitalone_featured_logo.png diff --git a/content/zh/case-studies/capital-one/index.html b/content/zh-cn/case-studies/capital-one/index.html similarity index 100% rename from content/zh/case-studies/capital-one/index.html rename to content/zh-cn/case-studies/capital-one/index.html diff --git a/content/zh/case-studies/cern/cern_featured_logo.png b/content/zh-cn/case-studies/cern/cern_featured_logo.png similarity index 100% rename from content/zh/case-studies/cern/cern_featured_logo.png rename to content/zh-cn/case-studies/cern/cern_featured_logo.png diff --git a/content/zh/case-studies/cern/index.html b/content/zh-cn/case-studies/cern/index.html similarity index 100% rename from content/zh/case-studies/cern/index.html rename to content/zh-cn/case-studies/cern/index.html diff --git a/content/zh/case-studies/chinaunicom/chinaunicom_featured_logo.png b/content/zh-cn/case-studies/chinaunicom/chinaunicom_featured_logo.png similarity index 100% rename from content/zh/case-studies/chinaunicom/chinaunicom_featured_logo.png rename to content/zh-cn/case-studies/chinaunicom/chinaunicom_featured_logo.png diff --git a/content/zh/case-studies/chinaunicom/index.html b/content/zh-cn/case-studies/chinaunicom/index.html similarity index 100% rename from content/zh/case-studies/chinaunicom/index.html rename to content/zh-cn/case-studies/chinaunicom/index.html diff --git a/content/zh/case-studies/city-of-montreal/city-of-montreal_featured_logo.png b/content/zh-cn/case-studies/city-of-montreal/city-of-montreal_featured_logo.png similarity index 100% rename from content/zh/case-studies/city-of-montreal/city-of-montreal_featured_logo.png rename to content/zh-cn/case-studies/city-of-montreal/city-of-montreal_featured_logo.png diff --git a/content/zh/case-studies/city-of-montreal/index.html b/content/zh-cn/case-studies/city-of-montreal/index.html similarity index 100% rename from content/zh/case-studies/city-of-montreal/index.html rename to content/zh-cn/case-studies/city-of-montreal/index.html diff --git a/content/zh/case-studies/crowdfire/crowdfire_featured_logo.png b/content/zh-cn/case-studies/crowdfire/crowdfire_featured_logo.png similarity index 100% rename from content/zh/case-studies/crowdfire/crowdfire_featured_logo.png rename to content/zh-cn/case-studies/crowdfire/crowdfire_featured_logo.png diff --git a/content/zh/case-studies/crowdfire/index.html b/content/zh-cn/case-studies/crowdfire/index.html similarity index 100% rename from content/zh/case-studies/crowdfire/index.html 
rename to content/zh-cn/case-studies/crowdfire/index.html diff --git a/content/zh/case-studies/denso/denso_featured_logo.svg b/content/zh-cn/case-studies/denso/denso_featured_logo.svg similarity index 100% rename from content/zh/case-studies/denso/denso_featured_logo.svg rename to content/zh-cn/case-studies/denso/denso_featured_logo.svg diff --git a/content/zh/case-studies/denso/index.html b/content/zh-cn/case-studies/denso/index.html similarity index 100% rename from content/zh/case-studies/denso/index.html rename to content/zh-cn/case-studies/denso/index.html diff --git a/content/zh/case-studies/golfnow/golfnow_featured.png b/content/zh-cn/case-studies/golfnow/golfnow_featured.png similarity index 100% rename from content/zh/case-studies/golfnow/golfnow_featured.png rename to content/zh-cn/case-studies/golfnow/golfnow_featured.png diff --git a/content/zh/case-studies/golfnow/golfnow_logo.png b/content/zh-cn/case-studies/golfnow/golfnow_logo.png similarity index 100% rename from content/zh/case-studies/golfnow/golfnow_logo.png rename to content/zh-cn/case-studies/golfnow/golfnow_logo.png diff --git a/content/zh/case-studies/golfnow/index.html b/content/zh-cn/case-studies/golfnow/index.html similarity index 100% rename from content/zh/case-studies/golfnow/index.html rename to content/zh-cn/case-studies/golfnow/index.html diff --git a/content/zh/case-studies/haufegroup/haufegroup_featured.png b/content/zh-cn/case-studies/haufegroup/haufegroup_featured.png similarity index 100% rename from content/zh/case-studies/haufegroup/haufegroup_featured.png rename to content/zh-cn/case-studies/haufegroup/haufegroup_featured.png diff --git a/content/zh/case-studies/haufegroup/haufegroup_logo.png b/content/zh-cn/case-studies/haufegroup/haufegroup_logo.png similarity index 100% rename from content/zh/case-studies/haufegroup/haufegroup_logo.png rename to content/zh-cn/case-studies/haufegroup/haufegroup_logo.png diff --git a/content/zh/case-studies/haufegroup/index.html b/content/zh-cn/case-studies/haufegroup/index.html similarity index 100% rename from content/zh/case-studies/haufegroup/index.html rename to content/zh-cn/case-studies/haufegroup/index.html diff --git a/content/zh/case-studies/huawei/huawei_featured.png b/content/zh-cn/case-studies/huawei/huawei_featured.png similarity index 100% rename from content/zh/case-studies/huawei/huawei_featured.png rename to content/zh-cn/case-studies/huawei/huawei_featured.png diff --git a/content/zh/case-studies/huawei/huawei_logo.png b/content/zh-cn/case-studies/huawei/huawei_logo.png similarity index 100% rename from content/zh/case-studies/huawei/huawei_logo.png rename to content/zh-cn/case-studies/huawei/huawei_logo.png diff --git a/content/zh/case-studies/huawei/index.html b/content/zh-cn/case-studies/huawei/index.html similarity index 100% rename from content/zh/case-studies/huawei/index.html rename to content/zh-cn/case-studies/huawei/index.html diff --git a/content/zh/case-studies/ibm/ibm_featured_logo.png b/content/zh-cn/case-studies/ibm/ibm_featured_logo.png similarity index 100% rename from content/zh/case-studies/ibm/ibm_featured_logo.png rename to content/zh-cn/case-studies/ibm/ibm_featured_logo.png diff --git a/content/zh/case-studies/ibm/ibm_featured_logo.svg b/content/zh-cn/case-studies/ibm/ibm_featured_logo.svg similarity index 100% rename from content/zh/case-studies/ibm/ibm_featured_logo.svg rename to content/zh-cn/case-studies/ibm/ibm_featured_logo.svg diff --git a/content/zh/case-studies/ibm/index.html 
b/content/zh-cn/case-studies/ibm/index.html similarity index 100% rename from content/zh/case-studies/ibm/index.html rename to content/zh-cn/case-studies/ibm/index.html diff --git a/content/zh/case-studies/ing/index.html b/content/zh-cn/case-studies/ing/index.html similarity index 100% rename from content/zh/case-studies/ing/index.html rename to content/zh-cn/case-studies/ing/index.html diff --git a/content/zh/case-studies/ing/ing_featured_logo.png b/content/zh-cn/case-studies/ing/ing_featured_logo.png similarity index 100% rename from content/zh/case-studies/ing/ing_featured_logo.png rename to content/zh-cn/case-studies/ing/ing_featured_logo.png diff --git a/content/zh/case-studies/ing/ing_featured_logo.svg b/content/zh-cn/case-studies/ing/ing_featured_logo.svg similarity index 100% rename from content/zh/case-studies/ing/ing_featured_logo.svg rename to content/zh-cn/case-studies/ing/ing_featured_logo.svg diff --git a/content/zh/case-studies/jd-com/index.html b/content/zh-cn/case-studies/jd-com/index.html similarity index 100% rename from content/zh/case-studies/jd-com/index.html rename to content/zh-cn/case-studies/jd-com/index.html diff --git a/content/zh/case-studies/jd-com/jd-com_featured_logo.png b/content/zh-cn/case-studies/jd-com/jd-com_featured_logo.png similarity index 100% rename from content/zh/case-studies/jd-com/jd-com_featured_logo.png rename to content/zh-cn/case-studies/jd-com/jd-com_featured_logo.png diff --git a/content/zh/case-studies/jd-com/jd.com_featured_logo.svg b/content/zh-cn/case-studies/jd-com/jd.com_featured_logo.svg similarity index 100% rename from content/zh/case-studies/jd-com/jd.com_featured_logo.svg rename to content/zh-cn/case-studies/jd-com/jd.com_featured_logo.svg diff --git a/content/zh/case-studies/naic/naic_featured_logo.png b/content/zh-cn/case-studies/naic/naic_featured_logo.png similarity index 100% rename from content/zh/case-studies/naic/naic_featured_logo.png rename to content/zh-cn/case-studies/naic/naic_featured_logo.png diff --git a/content/zh/case-studies/nav/nav_featured_logo.png b/content/zh-cn/case-studies/nav/nav_featured_logo.png similarity index 100% rename from content/zh/case-studies/nav/nav_featured_logo.png rename to content/zh-cn/case-studies/nav/nav_featured_logo.png diff --git a/content/zh/case-studies/nerdalize/nerdalize_featured_logo.png b/content/zh-cn/case-studies/nerdalize/nerdalize_featured_logo.png similarity index 100% rename from content/zh/case-studies/nerdalize/nerdalize_featured_logo.png rename to content/zh-cn/case-studies/nerdalize/nerdalize_featured_logo.png diff --git a/content/zh/case-studies/netease/index.html b/content/zh-cn/case-studies/netease/index.html similarity index 100% rename from content/zh/case-studies/netease/index.html rename to content/zh-cn/case-studies/netease/index.html diff --git a/content/zh/case-studies/netease/netease_featured_logo.png b/content/zh-cn/case-studies/netease/netease_featured_logo.png similarity index 100% rename from content/zh/case-studies/netease/netease_featured_logo.png rename to content/zh-cn/case-studies/netease/netease_featured_logo.png diff --git a/content/zh/case-studies/newyorktimes/newyorktimes_featured.png b/content/zh-cn/case-studies/newyorktimes/newyorktimes_featured.png similarity index 100% rename from content/zh/case-studies/newyorktimes/newyorktimes_featured.png rename to content/zh-cn/case-studies/newyorktimes/newyorktimes_featured.png diff --git a/content/zh/case-studies/newyorktimes/newyorktimes_logo.png 
b/content/zh-cn/case-studies/newyorktimes/newyorktimes_logo.png similarity index 100% rename from content/zh/case-studies/newyorktimes/newyorktimes_logo.png rename to content/zh-cn/case-studies/newyorktimes/newyorktimes_logo.png diff --git a/content/zh/case-studies/nokia/nokia_featured_logo.png b/content/zh-cn/case-studies/nokia/nokia_featured_logo.png similarity index 100% rename from content/zh/case-studies/nokia/nokia_featured_logo.png rename to content/zh-cn/case-studies/nokia/nokia_featured_logo.png diff --git a/content/zh/case-studies/nordstrom/index.html b/content/zh-cn/case-studies/nordstrom/index.html similarity index 100% rename from content/zh/case-studies/nordstrom/index.html rename to content/zh-cn/case-studies/nordstrom/index.html diff --git a/content/zh/case-studies/nordstrom/nordstrom_featured_logo.png b/content/zh-cn/case-studies/nordstrom/nordstrom_featured_logo.png similarity index 100% rename from content/zh/case-studies/nordstrom/nordstrom_featured_logo.png rename to content/zh-cn/case-studies/nordstrom/nordstrom_featured_logo.png diff --git a/content/zh/case-studies/northwestern-mutual/northwestern_featured_logo.png b/content/zh-cn/case-studies/northwestern-mutual/northwestern_featured_logo.png similarity index 100% rename from content/zh/case-studies/northwestern-mutual/northwestern_featured_logo.png rename to content/zh-cn/case-studies/northwestern-mutual/northwestern_featured_logo.png diff --git a/content/zh/case-studies/ocado/ocado_featured_logo.png b/content/zh-cn/case-studies/ocado/ocado_featured_logo.png similarity index 100% rename from content/zh/case-studies/ocado/ocado_featured_logo.png rename to content/zh-cn/case-studies/ocado/ocado_featured_logo.png diff --git a/content/zh/case-studies/openAI/openai_featured.png b/content/zh-cn/case-studies/openAI/openai_featured.png similarity index 100% rename from content/zh/case-studies/openAI/openai_featured.png rename to content/zh-cn/case-studies/openAI/openai_featured.png diff --git a/content/zh/case-studies/openAI/openai_logo.png b/content/zh-cn/case-studies/openAI/openai_logo.png similarity index 100% rename from content/zh/case-studies/openAI/openai_logo.png rename to content/zh-cn/case-studies/openAI/openai_logo.png diff --git a/content/zh/case-studies/peardeck/peardeck_featured.png b/content/zh-cn/case-studies/peardeck/peardeck_featured.png similarity index 100% rename from content/zh/case-studies/peardeck/peardeck_featured.png rename to content/zh-cn/case-studies/peardeck/peardeck_featured.png diff --git a/content/zh/case-studies/peardeck/peardeck_logo.png b/content/zh-cn/case-studies/peardeck/peardeck_logo.png similarity index 100% rename from content/zh/case-studies/peardeck/peardeck_logo.png rename to content/zh-cn/case-studies/peardeck/peardeck_logo.png diff --git a/content/zh/case-studies/pearson/pearson_featured.png b/content/zh-cn/case-studies/pearson/pearson_featured.png similarity index 100% rename from content/zh/case-studies/pearson/pearson_featured.png rename to content/zh-cn/case-studies/pearson/pearson_featured.png diff --git a/content/zh/case-studies/pearson/pearson_logo.png b/content/zh-cn/case-studies/pearson/pearson_logo.png similarity index 100% rename from content/zh/case-studies/pearson/pearson_logo.png rename to content/zh-cn/case-studies/pearson/pearson_logo.png diff --git a/content/zh/case-studies/pingcap/pingcap_featured_logo.png b/content/zh-cn/case-studies/pingcap/pingcap_featured_logo.png similarity index 100% rename from content/zh/case-studies/pingcap/pingcap_featured_logo.png 
rename to content/zh-cn/case-studies/pingcap/pingcap_featured_logo.png diff --git a/content/zh/case-studies/pinterest/pinterest_feature.png b/content/zh-cn/case-studies/pinterest/pinterest_feature.png similarity index 100% rename from content/zh/case-studies/pinterest/pinterest_feature.png rename to content/zh-cn/case-studies/pinterest/pinterest_feature.png diff --git a/content/zh/case-studies/pinterest/pinterest_logo.png b/content/zh-cn/case-studies/pinterest/pinterest_logo.png similarity index 100% rename from content/zh/case-studies/pinterest/pinterest_logo.png rename to content/zh-cn/case-studies/pinterest/pinterest_logo.png diff --git a/content/zh/case-studies/prowise/prowise_featured_logo.png b/content/zh-cn/case-studies/prowise/prowise_featured_logo.png similarity index 100% rename from content/zh/case-studies/prowise/prowise_featured_logo.png rename to content/zh-cn/case-studies/prowise/prowise_featured_logo.png diff --git a/content/zh/case-studies/ricardo-ch/ricardo-ch_featured_logo.png b/content/zh-cn/case-studies/ricardo-ch/ricardo-ch_featured_logo.png similarity index 100% rename from content/zh/case-studies/ricardo-ch/ricardo-ch_featured_logo.png rename to content/zh-cn/case-studies/ricardo-ch/ricardo-ch_featured_logo.png diff --git a/content/zh/case-studies/slamtec/slamtec_featured_logo.png b/content/zh-cn/case-studies/slamtec/slamtec_featured_logo.png similarity index 100% rename from content/zh/case-studies/slamtec/slamtec_featured_logo.png rename to content/zh-cn/case-studies/slamtec/slamtec_featured_logo.png diff --git a/content/zh/case-studies/slingtv/slingtv_featured_logo.png b/content/zh-cn/case-studies/slingtv/slingtv_featured_logo.png similarity index 100% rename from content/zh/case-studies/slingtv/slingtv_featured_logo.png rename to content/zh-cn/case-studies/slingtv/slingtv_featured_logo.png diff --git a/content/zh/case-studies/sos/sos_featured_logo.png b/content/zh-cn/case-studies/sos/sos_featured_logo.png similarity index 100% rename from content/zh/case-studies/sos/sos_featured_logo.png rename to content/zh-cn/case-studies/sos/sos_featured_logo.png diff --git a/content/zh/case-studies/spotify/spotify_featured_logo.png b/content/zh-cn/case-studies/spotify/spotify_featured_logo.png similarity index 100% rename from content/zh/case-studies/spotify/spotify_featured_logo.png rename to content/zh-cn/case-studies/spotify/spotify_featured_logo.png diff --git a/content/zh/case-studies/squarespace/index.html b/content/zh-cn/case-studies/squarespace/index.html similarity index 100% rename from content/zh/case-studies/squarespace/index.html rename to content/zh-cn/case-studies/squarespace/index.html diff --git a/content/zh/case-studies/squarespace/squarespace_featured_logo.png b/content/zh-cn/case-studies/squarespace/squarespace_featured_logo.png similarity index 100% rename from content/zh/case-studies/squarespace/squarespace_featured_logo.png rename to content/zh-cn/case-studies/squarespace/squarespace_featured_logo.png diff --git a/content/zh/case-studies/squarespace/squarespace_featured_logo.svg b/content/zh-cn/case-studies/squarespace/squarespace_featured_logo.svg similarity index 100% rename from content/zh/case-studies/squarespace/squarespace_featured_logo.svg rename to content/zh-cn/case-studies/squarespace/squarespace_featured_logo.svg diff --git a/content/zh/case-studies/thredup/thredup_featured_logo.png b/content/zh-cn/case-studies/thredup/thredup_featured_logo.png similarity index 100% rename from content/zh/case-studies/thredup/thredup_featured_logo.png rename 
to content/zh-cn/case-studies/thredup/thredup_featured_logo.png diff --git a/content/zh/case-studies/vsco/vsco_featured_logo.png b/content/zh-cn/case-studies/vsco/vsco_featured_logo.png similarity index 100% rename from content/zh/case-studies/vsco/vsco_featured_logo.png rename to content/zh-cn/case-studies/vsco/vsco_featured_logo.png diff --git a/content/zh-cn/case-studies/wikimedia/index.html b/content/zh-cn/case-studies/wikimedia/index.html new file mode 100644 index 0000000000000..8800c5f19ee06 --- /dev/null +++ b/content/zh-cn/case-studies/wikimedia/index.html @@ -0,0 +1,148 @@ +--- +title: 案例研究:Wikimedia +case_study_styles: true +cid: caseStudies + +new_case_study_styles: true +heading_title_text: Wikimedia +use_gradient_overlay: true +subheading: > + 利用 Kubernetes 构建工具提升世界的维基 +case_study_details: + - 公司: Wikimedia + - 地点: 加州旧金山 +--- + + + +

        非营利的 Wikimedia 基金会运营着一些世界上最大的协作编辑参考项目,包括 Wikipedia。为了帮助用户维护和使用 wiki,它运行着 Wikimedia 工具实验室,这是一个托管环境,供社区开发人员构建工具和机器人,以帮助编辑和其他志愿者完成工作,包括减少破坏行为。围绕 Wikimedia 工具实验室的社区在近 10 年前就开始形成。

        + + +{{< case-studies/quote author="Yuvi Panda, Wikimedia 基金会和 Wikimedia 工具实验室的运维工程师">}} + +Wikimedia +
        +
        +“Wikimedia 工具实验室对于确保世界各地的 wiki 尽可能正常运行至关重要。因为它有机地生长了近 10 年,所以它已成为一个极具挑战性且难以维护的环境。它就像一个大的泥球,你真的看不透它。借助 Kubernetes,我们正在简化环境并让开发人员更容易构建出使 wiki 更好运行的工具。” +{{< /case-studies/quote >}} + + +

        挑战

        + + +
          +
        • 简化复杂、难以管理的基础架构
        • +
        • 允许开发人员使用现有技术继续编写工具和机器人
        • +
        + + +

        为什么要使用 Kubernetes

        + +
          + +
        • Wikimedia 工具实验室之所以选择 Kubernetes,是因为它可以模仿现有的工作流程,同时降低复杂性。
        • +
        + + +

        解决方案

        + +
          + +
        • 将旧系统和复杂的基础设施迁移到 Kubernetes
        • +
        + + +

        结果

        + + +
          +
        • 现在有 20% 的 Web 工具运行在 Kubernetes 上,这些工具占 Web 流量的 40% 以上
        • +
        • 一个 25 节点集群可跟上每个新 Kubernetes 版本
        • +
        • 多亏了 Kubernetes,数千行旧代码可被删除
        • +
        + + +

        使用 Kubernetes 提供维护 wiki 的工具

        + + +

        Wikimedia 工具实验室由四个半带薪员工和两名志愿者管理。原有基础架构无法让开发人员轻松、直观地构建让 wiki 更好运行的机器人和其他工具。Yuvi 说:“它非常混乱,上面缠绕着大量 Perl 和 Bash,一切都超级脆弱。”

        + + +

        为了解决这个问题,Wikimedia 工具实验室将其部分基础设施迁移到了 Kubernetes,为最终迁移整个系统做准备。Yuvi 说,Kubernetes 大大简化了维护工作。目标是让构建机器人和其他工具的开发人员可以继续使用他们想要的任何开发方法,同时让 Wikimedia 工具实验室更容易维护托管和共享这些工具所需的基础设施。

        + + +

        Yuvi 说:“借助 Kubernetes,我能够删除大量我们定制的代码,这使得所有内容更易于维护,我们的用户代码也以比以前更稳定的方式运行。”

        + + +

        简化基础架构让 wiki 更好地运行

        + + +

        Wikimedia 工具实验室在最初的 Kubernetes 部署中取得了巨大成功。旧代码正在被简化和消除,开发人员无需改变编写工具和机器人的方式,而这些工具和机器人的运行也比过去更加稳定。带薪员工和志愿者都能更好地解决问题。

        + + +

        将来,随着向 Kubernetes 的迁移更加完整,Wikimedia 工具实验室希望能更轻松地托管和维护那些帮助世界各地 wiki 运行的机器人和工具。该工具实验室已经拥有来自 800 名志愿者的大约 1300 个工具和机器人,而且每天还有更多被提交进来。工具实验室的 Web 工具占 Web 流量的 60% 以上,其中有 20% 现在运行在 Kubernetes 上。工具实验室有一个 25 节点的集群,可以跟上每个新的 Kubernetes 版本。许多现有的 Web 工具正在迁移到 Kubernetes。

        + + +

        Yuvi 说:“我们的目标是确保世界各地的人们能够尽可能轻松地分享知识,Kubernetes 帮助实现了这一点,它让世界各地的 wiki 更容易拥有蓬勃发展所需的工具。”

        diff --git a/content/zh/case-studies/wikimedia/wikimedia_featured.png b/content/zh-cn/case-studies/wikimedia/wikimedia_featured.png similarity index 100% rename from content/zh/case-studies/wikimedia/wikimedia_featured.png rename to content/zh-cn/case-studies/wikimedia/wikimedia_featured.png diff --git a/content/zh-cn/case-studies/wikimedia/wikimedia_featured.svg b/content/zh-cn/case-studies/wikimedia/wikimedia_featured.svg new file mode 100644 index 0000000000000..5fa786aaa52ba --- /dev/null +++ b/content/zh-cn/case-studies/wikimedia/wikimedia_featured.svg @@ -0,0 +1 @@ +kubernetes.io-logos2 \ No newline at end of file diff --git a/content/zh/case-studies/wikimedia/wikimedia_logo.png b/content/zh-cn/case-studies/wikimedia/wikimedia_logo.png similarity index 100% rename from content/zh/case-studies/wikimedia/wikimedia_logo.png rename to content/zh-cn/case-studies/wikimedia/wikimedia_logo.png diff --git a/content/zh/case-studies/wink/index.html b/content/zh-cn/case-studies/wink/index.html similarity index 100% rename from content/zh/case-studies/wink/index.html rename to content/zh-cn/case-studies/wink/index.html diff --git a/content/zh/case-studies/wink/wink_featured.png b/content/zh-cn/case-studies/wink/wink_featured.png similarity index 100% rename from content/zh/case-studies/wink/wink_featured.png rename to content/zh-cn/case-studies/wink/wink_featured.png diff --git a/content/zh/case-studies/wink/wink_logo.png b/content/zh-cn/case-studies/wink/wink_logo.png similarity index 100% rename from content/zh/case-studies/wink/wink_logo.png rename to content/zh-cn/case-studies/wink/wink_logo.png diff --git a/content/zh/case-studies/woorank/woorank_featured_logo.png b/content/zh-cn/case-studies/woorank/woorank_featured_logo.png similarity index 100% rename from content/zh/case-studies/woorank/woorank_featured_logo.png rename to content/zh-cn/case-studies/woorank/woorank_featured_logo.png diff --git a/content/zh/case-studies/workiva/index.html b/content/zh-cn/case-studies/workiva/index.html similarity index 100% rename from content/zh/case-studies/workiva/index.html rename to content/zh-cn/case-studies/workiva/index.html diff --git a/content/zh/case-studies/workiva/workiva_featured_logo.png b/content/zh-cn/case-studies/workiva/workiva_featured_logo.png similarity index 100% rename from content/zh/case-studies/workiva/workiva_featured_logo.png rename to content/zh-cn/case-studies/workiva/workiva_featured_logo.png diff --git a/content/zh/case-studies/yahoo-japan/index.html b/content/zh-cn/case-studies/yahoo-japan/index.html similarity index 100% rename from content/zh/case-studies/yahoo-japan/index.html rename to content/zh-cn/case-studies/yahoo-japan/index.html diff --git a/content/zh/case-studies/yahoo-japan/yahooJapan_logo.png b/content/zh-cn/case-studies/yahoo-japan/yahooJapan_logo.png similarity index 100% rename from content/zh/case-studies/yahoo-japan/yahooJapan_logo.png rename to content/zh-cn/case-studies/yahoo-japan/yahooJapan_logo.png diff --git a/content/zh/case-studies/ygrene/index.html b/content/zh-cn/case-studies/ygrene/index.html similarity index 100% rename from content/zh/case-studies/ygrene/index.html rename to content/zh-cn/case-studies/ygrene/index.html diff --git a/content/zh/case-studies/ygrene/ygrene_featured_logo.png b/content/zh-cn/case-studies/ygrene/ygrene_featured_logo.png similarity index 100% rename from content/zh/case-studies/ygrene/ygrene_featured_logo.png rename to content/zh-cn/case-studies/ygrene/ygrene_featured_logo.png diff --git 
a/content/zh/case-studies/zalando/index.html b/content/zh-cn/case-studies/zalando/index.html similarity index 100% rename from content/zh/case-studies/zalando/index.html rename to content/zh-cn/case-studies/zalando/index.html diff --git a/content/zh/case-studies/zalando/zalando_feature_logo.png b/content/zh-cn/case-studies/zalando/zalando_feature_logo.png similarity index 100% rename from content/zh/case-studies/zalando/zalando_feature_logo.png rename to content/zh-cn/case-studies/zalando/zalando_feature_logo.png diff --git a/content/zh/community/_index.html b/content/zh-cn/community/_index.html similarity index 100% rename from content/zh/community/_index.html rename to content/zh-cn/community/_index.html diff --git a/content/zh/community/code-of-conduct.md b/content/zh-cn/community/code-of-conduct.md similarity index 83% rename from content/zh/community/code-of-conduct.md rename to content/zh-cn/community/code-of-conduct.md index 98d8156770811..3dc283fec875b 100644 --- a/content/zh/community/code-of-conduct.md +++ b/content/zh-cn/community/code-of-conduct.md @@ -15,16 +15,16 @@ community_styles_migrated: true

        Kubernetes 遵循 -CNCF 行为规范。 +CNCF 行为规范。 CNCF 社区规范文本如下链接 commit 0ce4694。 -如果您发现这个 CNCF 社区规范文本已经过时,请 +如果你发现这个 CNCF 社区规范文本已经过时,请 提交 issue

        @@ -35,7 +35,7 @@ the [Kubernetes Code of Conduct Committee](https://github.com/kubernetes/communi Your anonymity will be protected. --> 如果你在活动、会议、Slack 或是其它场合发现有任何违反行为规范的行为,请联系[Kubernetes 行为规范委员会](https://github.com/kubernetes/community/tree/master/committee-code-of-conduct)。 -我们会确保您的匿名性。 +我们会确保你的匿名性。

        diff --git a/content/zh/community/static/README.md b/content/zh-cn/community/static/README.md similarity index 100% rename from content/zh/community/static/README.md rename to content/zh-cn/community/static/README.md diff --git a/content/zh-cn/community/static/cncf-code-of-conduct.md b/content/zh-cn/community/static/cncf-code-of-conduct.md new file mode 100644 index 0000000000000..dde18750ea16f --- /dev/null +++ b/content/zh-cn/community/static/cncf-code-of-conduct.md @@ -0,0 +1,39 @@ + +## 云原生计算基金会(CNCF)社区行为准则 1.0 版本 + +### 贡献者行为准则 + +作为这个项目的贡献者和维护者,为了建立一个开放和受欢迎的社区, +我们保证尊重所有通过报告问题、发布功能请求、更新文档、提交拉取请求或补丁以及其他活动做出贡献的人员。 + +我们致力于让参与此项目的每个人都不受骚扰, +无论其经验水平、性别、性别认同和表达、性取向、残疾、个人外貌、体型、人种、种族、年龄、宗教或国籍等。 + +不可接受的参与者行为包括: + +- 使用性语言或图像 +- 人身攻击 +- 挑衅、侮辱或贬低性评论 +- 公开或私下骚扰 +- 未经明确许可,发布他人的私人信息,比如地址或电子邮箱 +- 其他不道德或不专业的行为 + +项目维护者有权利和责任删除、编辑或拒绝评论、提交、代码、维基编辑、问题和其他不符合本行为准则的贡献。 +通过采用本行为准则,项目维护者承诺将这些原则公平且一致地应用到这个项目管理的各个方面。 +不遵守或不执行行为准则的项目维护者可能被永久地从项目团队中移除。 + +当个人代表项目或其社区时,本行为准则适用于项目空间和公共空间。 + +如需举报侮辱、骚扰或其他不可接受的行为, +你可发送邮件至 联系 +[Kubernetes行为守则委员会](https://github.com/kubernetes/community/tree/master/committee-code-of-conduct)。 +其他事务请联系CNCF项目维护专员,或发送邮件至 联系我们的调解员Mishi Choudhary。 + +本行为准则改编自《贡献者契约》( https://contributor-covenant.org )1.2.0 版本, +可在 https://contributor-covenant.org/version/1/2/0/ 查看。 + +### CNCF 活动行为准则 + +云原生计算基金会(CNCF)活动受 Linux 基金会《[行为准则](https://events.linuxfoundation.org/code-of-conduct/)》管辖, +该行为准则可在活动页面获得。其旨在与上述政策兼容,且包括更多关于事件回应的细节。 \ No newline at end of file diff --git a/content/zh/docs/_index.md b/content/zh-cn/docs/_index.md similarity index 100% rename from content/zh/docs/_index.md rename to content/zh-cn/docs/_index.md diff --git a/content/zh/docs/concepts/_index.md b/content/zh-cn/docs/concepts/_index.md similarity index 100% rename from content/zh/docs/concepts/_index.md rename to content/zh-cn/docs/concepts/_index.md diff --git a/content/zh/docs/concepts/architecture/_index.md b/content/zh-cn/docs/concepts/architecture/_index.md similarity index 100% rename from content/zh/docs/concepts/architecture/_index.md rename to content/zh-cn/docs/concepts/architecture/_index.md diff --git a/content/zh/docs/concepts/architecture/cloud-controller.md b/content/zh-cn/docs/concepts/architecture/cloud-controller.md similarity index 97% rename from content/zh/docs/concepts/architecture/cloud-controller.md rename to content/zh-cn/docs/concepts/architecture/cloud-controller.md index 05caa7690f1d7..d46b7967adb8a 100644 --- a/content/zh/docs/concepts/architecture/cloud-controller.md +++ b/content/zh-cn/docs/concepts/architecture/cloud-controller.md @@ -318,10 +318,10 @@ To upgrade a HA control plane to use the cloud controller manager, see [Migrate Want to know how to implement your own cloud controller manager, or extend an existing project? --> -[云控制器管理器的管理](/zh/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager) +[云控制器管理器的管理](/zh-cn/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager) 给出了运行和管理云控制器管理器的指南。 -要升级 HA 控制平面以使用云控制器管理器,请参见 [将复制的控制平面迁移以使用云控制器管理器](/zh/docs/tasks/administer-cluster/controller-manager-leader-migration/) +要升级 HA 控制平面以使用云控制器管理器,请参见 [将复制的控制平面迁移以使用云控制器管理器](/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration/) 想要了解如何实现自己的云控制器管理器,或者对现有项目进行扩展么? 
@@ -343,6 +343,6 @@ For more information about developing plugins, see [Developing Cloud Controller 特定于云驱动的实现虽不是 Kubernetes 核心成分,仍要实现 `CloudProvider` 接口。 关于如何开发插件的详细信息,可参考 -[开发云控制器管理器](/zh/docs/tasks/administer-cluster/developing-cloud-controller-manager/) +[开发云控制器管理器](/zh-cn/docs/tasks/administer-cluster/developing-cloud-controller-manager/) 文档。 diff --git a/content/zh/docs/concepts/architecture/control-plane-node-communication.md b/content/zh-cn/docs/concepts/architecture/control-plane-node-communication.md similarity index 81% rename from content/zh/docs/concepts/architecture/control-plane-node-communication.md rename to content/zh-cn/docs/concepts/architecture/control-plane-node-communication.md index 8cf742e272fa0..57f05179ed68c 100644 --- a/content/zh/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/zh-cn/docs/concepts/architecture/control-plane-node-communication.md @@ -24,7 +24,7 @@ This document catalogs the communication paths between the control plane (apiser ## 节点到控制面 @@ -33,17 +33,17 @@ Kubernetes 采用的是中心辐射型(Hub-and-Spoke)API 模式。 所有从集群(或所运行的 Pods)发出的 API 调用都终止于 API 服务器。 其它控制面组件都没有被设计为可暴露远程服务。 API 服务器被配置为在一个安全的 HTTPS 端口(通常为 443)上监听远程连接请求, -并启用一种或多种形式的客户端[身份认证](/zh/docs/reference/access-authn-authz/authentication/)机制。 -一种或多种客户端[鉴权机制](/zh/docs/reference/access-authn-authz/authorization/)应该被启用, -特别是在允许使用[匿名请求](/zh/docs/reference/access-authn-authz/authentication/#anonymous-requests) -或[服务账号令牌](/zh/docs/reference/access-authn-authz/authentication/#service-account-tokens)的时候。 +并启用一种或多种形式的客户端[身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)机制。 +一种或多种客户端[鉴权机制](/zh-cn/docs/reference/access-authn-authz/authorization/)应该被启用, +特别是在允许使用[匿名请求](/zh-cn/docs/reference/access-authn-authz/authentication/#anonymous-requests) +或[服务账号令牌](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens)的时候。 应该使用集群的公共根证书开通节点,这样它们就能够基于有效的客户端凭据安全地连接 API 服务器。 一种好的方法是以客户端证书的形式将客户端凭据提供给 kubelet。 -请查看 [kubelet TLS 启动引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) +请查看 [kubelet TLS 启动引导](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) 以了解如何自动提供 kubelet 客户端证书。 为了对这个连接进行认证,使用 `--kubelet-certificate-authority` 标志给 API 服务器提供一个根证书包,用于 kubelet 的服务证书。 @@ -114,13 +114,13 @@ Finally, [Kubelet authentication and/or authorization](/docs/reference/command-l kubelet 之间使用 [SSH 隧道](#ssh-tunnels)。 最后,应该启用 -[kubelet 用户认证和/或鉴权](/zh/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) +[kubelet 用户认证和/或鉴权](/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz/) 来保护 kubelet API。 ### API 服务器到节点、Pod 和服务 @@ -136,7 +136,7 @@ The connections from the apiserver to a node, pod, or service default to plain H Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the ssh server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel. This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are running. -SSH tunnels are currently deprecated so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel. +SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel. 
--> ### SSH 隧道 {#ssh-tunnels} @@ -167,6 +167,6 @@ Konnectivity 服务包含两个部分:Konnectivity 服务器和 Konnectivity 控制面网络和节点网络中。Konnectivity 代理建立并维持到 Konnectivity 服务器的网络连接。 启用 Konnectivity 服务之后,所有控制面到节点的通信都通过这些连接传输。 -请浏览 [Konnectivity 服务任务](/zh/docs/tasks/extend-kubernetes/setup-konnectivity/) +请浏览 [Konnectivity 服务任务](/zh-cn/docs/tasks/extend-kubernetes/setup-konnectivity/) 在你的集群中配置 Konnectivity 服务。 diff --git a/content/zh/docs/concepts/architecture/controller.md b/content/zh-cn/docs/concepts/architecture/controller.md similarity index 95% rename from content/zh/docs/concepts/architecture/controller.md rename to content/zh-cn/docs/concepts/architecture/controller.md index 7c11a5a0d2ab0..fecf82269aaae 100644 --- a/content/zh/docs/concepts/architecture/controller.md +++ b/content/zh-cn/docs/concepts/architecture/controller.md @@ -50,7 +50,7 @@ detail. ## 控制器模式 {#controller-pattern} 一个控制器至少追踪一种类型的 Kubernetes 资源。这些 -[对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/) +[对象](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/) 有一个代表期望状态的 `spec` 字段。 该资源的控制器负责确保其当前状态接近期望状态。 @@ -96,7 +96,7 @@ and eventually the work is done. Job 是一种 Kubernetes 资源,它运行一个或者多个 {{< glossary_tooltip term_id="pod" >}}, 来执行一个任务然后停止。 -(一旦[被调度了](/zh/docs/concepts/scheduling-eviction/),对 `kubelet` 来说 Pod +(一旦[被调度了](/zh-cn/docs/concepts/scheduling-eviction/),对 `kubelet` 来说 Pod 对象就会变成了期望状态的一部分)。 在集群中,当 Job 控制器拿到新任务时,它会保证一组 Node 节点上的 `kubelet` @@ -175,7 +175,7 @@ cloud provider APIs, and other services by --> 在温度计的例子中,如果房间很冷,那么某个控制器可能还会启动一个防冻加热器。 就 Kubernetes 集群而言,控制面间接地与 IP 地址管理工具、存储服务、云驱动 -APIs 以及其他服务协作,通过[扩展 Kubernetes](/zh/docs/concepts/extend-kubernetes/) +APIs 以及其他服务协作,通过[扩展 Kubernetes](/zh-cn/docs/concepts/extend-kubernetes/) 来实现这点。 -* 阅读 [Kubernetes 控制平面组件](/zh/docs/concepts/overview/components/#control-plane-components) -* 了解 [Kubernetes 对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/) +* 阅读 [Kubernetes 控制平面组件](/zh-cn/docs/concepts/overview/components/#control-plane-components) +* 了解 [Kubernetes 对象](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/) 的一些基本知识 -* 进一步学习 [Kubernetes API](/zh/docs/concepts/overview/kubernetes-api/) +* 进一步学习 [Kubernetes API](/zh-cn/docs/concepts/overview/kubernetes-api/) * 如果你想编写自己的控制器,请看 Kubernetes 的 - [扩展模式](/zh/docs/concepts/extend-kubernetes/#extension-patterns)。 + [扩展模式](/zh-cn/docs/concepts/extend-kubernetes/#extension-patterns)。 diff --git a/content/zh/docs/concepts/architecture/cri.md b/content/zh-cn/docs/concepts/architecture/cri.md similarity index 97% rename from content/zh/docs/concepts/architecture/cri.md rename to content/zh-cn/docs/concepts/architecture/cri.md index 68cd082fc0154..c0b3579578c83 100644 --- a/content/zh/docs/concepts/architecture/cri.md +++ b/content/zh-cn/docs/concepts/architecture/cri.md @@ -46,7 +46,7 @@ flags](/docs/reference/command-line-tools-reference/kubelet) --> 当通过 gRPC 连接到容器运行时时,kubelet 充当客户端。 运行时和镜像服务端点必须在容器运行时中可用,可以使用 -[命令行标志](/zh/docs/reference/command-line-tools-reference/kubelet)的 +[命令行标志](/zh-cn/docs/reference/command-line-tools-reference/kubelet)的 `--image-service-endpoint` 和 `--container-runtime-endpoint` 在 kubelet 中单独配置。 diff --git a/content/zh/docs/concepts/architecture/garbage-collection.md b/content/zh-cn/docs/concepts/architecture/garbage-collection.md similarity index 85% rename from content/zh/docs/concepts/architecture/garbage-collection.md rename to content/zh-cn/docs/concepts/architecture/garbage-collection.md index 7fb0eb0d3e21c..d45b98fa8feda 100644 --- 
a/content/zh/docs/concepts/architecture/garbage-collection.md +++ b/content/zh-cn/docs/concepts/architecture/garbage-collection.md @@ -32,16 +32,16 @@ allows the clean up of resources like the following: manager * [Node Lease objects](/docs/concepts/architecture/nodes/#heartbeats) --> -* [失败的 Pod](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection) -* [已完成的 Job](/zh/docs/concepts/workloads/controllers/ttlafterfinished/) +* [失败的 Pod](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection) +* [已完成的 Job](/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/) * [不再存在属主引用的对象](#owners-dependents) * [未使用的容器和容器镜像](#containers-images) -* [动态制备的、StorageClass 回收策略为 Delete 的 PV 卷](/zh/docs/concepts/storage/persistent-volumes/#delete) -* [阻滞或者过期的 CertificateSigningRequest (CSRs)](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process) +* [动态制备的、StorageClass 回收策略为 Delete 的 PV 卷](/zh-cn/docs/concepts/storage/persistent-volumes/#delete) +* [阻滞或者过期的 CertificateSigningRequest (CSRs)](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process) * 在以下情形中删除了的{{}}对象: - * 当集群使用[云控制器管理器](/zh/docs/concepts/architecture/cloud-controller/)运行于云端时; + * 当集群使用[云控制器管理器](/zh-cn/docs/concepts/architecture/cloud-controller/)运行于云端时; * 当集群使用类似于云控制器管理器的插件运行在本地环境中时。 -* [节点租约对象](/zh/docs/concepts/architecture/nodes/#heartbeats) +* [节点租约对象](/zh-cn/docs/concepts/architecture/nodes/#heartbeats) ## 属主与依赖 {#owners-dependents} -Kubernetes 中很多对象通过[*属主引用*](/zh/docs/concepts/overview/working-with-objects/owners-dependents/) +Kubernetes 中很多对象通过[*属主引用*](/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents/) 链接到彼此。属主引用(Owner Reference)可以告诉控制面哪些对象依赖于其他对象。 -Kubernetes 使用属主引用来为控制面以及其他 API 客户端在删除某对象时提供一个 -清理关联资源的机会。在大多数场合,Kubernetes 都是自动管理属主引用的。 +Kubernetes 使用属主引用来为控制面以及其他 API 客户端在删除某对象时提供一个清理关联资源的机会。 +在大多数场合,Kubernetes 都是自动管理属主引用的。 -属主关系与某些资源所使用的的[标签和选择算符](/zh/docs/concepts/overview/working-with-objects/labels/) -不同。例如,考虑一个创建 `EndpointSlice` 对象的 {{}} +属主关系与某些资源所使用的[标签和选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)不同。 +例如,考虑一个创建 `EndpointSlice` 对象的 {{}} 对象。Service 对象使用*标签*来允许控制面确定哪些 `EndpointSlice` 对象被该 Service 使用。除了标签,每个被 Service 托管的 `EndpointSlice` 对象还有一个属主引用属性。 属主引用可以帮助 Kubernetes 中的不同组件避免干预并非由它们控制的对象。 @@ -85,8 +85,7 @@ is subject to deletion once all owners are verified absent. --> 根据设计,系统不允许出现跨名字空间的属主引用。名字空间作用域的依赖对象可以指定集群作用域或者名字空间作用域的属主。 名字空间作用域的属主**必须**存在于依赖对象所在的同一名字空间。 -如果属主位于不同名字空间,则属主引用被视为不存在,而当检查发现所有属主都已不存在时, -依赖对象会被删除。 +如果属主位于不同名字空间,则属主引用被视为不存在,而当检查发现所有属主都已不存在时,依赖对象会被删除。 ### 前台级联删除 {#foreground-deletion} -在前台级联删除中,正在被你删除的对象首先进入 *deletion in progress* 状态。 +在前台级联删除中,正在被你删除的属主对象首先进入 *deletion in progress* 状态。 在这种状态下,针对属主对象会发生以下事情: 要配置对未使用容器和镜像的垃圾收集选项,可以使用一个 -[配置文件](/zh/docs/tasks/administer-cluster/kubelet-config-file/),基于 -[`KubeletConfiguration`](/zh/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) +[配置文件](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/),基于 +[`KubeletConfiguration`](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) 资源类型来调整与垃圾搜集相关的 kubelet 行为。 @@ -354,8 +353,8 @@ configure garbage collection: * Learn more about Kubernetes [finalizers](/docs/concepts/overview/working-with-objects/finalizers/). * Learn about the [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) (beta) that cleans up finished Jobs. 
--> -* 进一步了解 [Kubernetes 对象的属主关系](/zh/docs/concepts/overview/working-with-objects/owners-dependents/)。 -* 进一步了解 Kubernetes [finalizers](/zh/docs/concepts/overview/working-with-objects/finalizers/)。 -* 进一步了解 [TTL 控制器](/zh/docs/concepts/workloads/controllers/ttlafterfinished/) (beta), +* 进一步了解 [Kubernetes 对象的属主关系](/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents/)。 +* 进一步了解 Kubernetes [finalizers](/zh-cn/docs/concepts/overview/working-with-objects/finalizers/)。 +* 进一步了解 [TTL 控制器](/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/) (beta), 该控制器负责清理已完成的 Job。 diff --git a/content/zh/docs/concepts/architecture/nodes.md b/content/zh-cn/docs/concepts/architecture/nodes.md similarity index 83% rename from content/zh/docs/concepts/architecture/nodes.md rename to content/zh-cn/docs/concepts/architecture/nodes.md index 625c114df402d..a57f98dc14ad0 100644 --- a/content/zh/docs/concepts/architecture/nodes.md +++ b/content/zh-cn/docs/concepts/architecture/nodes.md @@ -38,7 +38,7 @@ Kubernetes 通过将容器放入在节点(Node)上运行的 Pod 中来执行 通常集群中会有若干个节点;而在一个学习用或者资源受限的环境中,你的集群中也可能 只有一个节点。 -节点上的[组件](/zh/docs/concepts/overview/components/#node-components)包括 +节点上的[组件](/zh-cn/docs/concepts/overview/components/#node-components)包括 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}、 {{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}以及 {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}。 @@ -110,7 +110,7 @@ The name of a Node object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). --> Node 对象的名称必须是合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 ### 节点名称唯一性 {#node-name-uniqueness} -节点的[名称](/zh/docs/concepts/overview/working-with-objects/names#names)用来标识 Node 对象。 +节点的[名称](/zh-cn/docs/concepts/overview/working-with-objects/names#names)用来标识 Node 对象。 没有两个 Node 可以同时使用相同的名称。 Kubernetes 还假定名字相同的资源是同一个对象。 就 Node 而言,隐式假定使用相同名称的实例会具有相同的状态(例如网络配置、根磁盘内容) 和类似节点标签这类属性。这可能在节点被更改但其名称未变时导致系统状态不一致。 @@ -167,7 +167,7 @@ For self-registration, the kubelet is started with the following options: (逗号分隔的 `=:`)注册节点。当 `register-node` 为 false 时无效。 - `--node-ip` - 节点 IP 地址。 - `--node-labels` - 在集群中注册节点时要添加的{{< glossary_tooltip text="标签" term_id="label" >}}。 - (参见 [NodeRestriction 准入控制插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)所实施的标签限制)。 + (参见 [NodeRestriction 准入控制插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)所实施的标签限制)。 - `--node-status-update-frequency` - 指定 kubelet 向控制面发送状态的频率。 -启用[Node 鉴权模式](/zh/docs/reference/access-authn-authz/node/)和 -[NodeRestriction 准入插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)时, +启用[Node 鉴权模式](/zh-cn/docs/reference/access-authn-authz/node/)和 +[NodeRestriction 准入插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)时, 仅授权 `kubelet` 创建或修改其自己的节点资源。 {{< note >}} @@ -256,7 +256,7 @@ kubectl cordon $NODENAME See [Safely Drain a Node](/docs/tasks/administer-cluster/safely-drain-node/) for more details. 
--> -更多细节参考[安全地腾空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)。 +更多细节参考[安全地腾空节点](/zh-cn/docs/tasks/administer-cluster/safely-drain-node/)。 {{< note >}} 当节点上出现问题时,Kubernetes 控制面会自动创建与影响节点的状况对应的 -[污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 +[污点](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)。 调度器在将 Pod 指派到某 Node 时会考虑 Node 上的污点设置。 Pod 也可以设置{{< glossary_tooltip text="容忍度" term_id="toleration" >}}, 以便能够在设置了特定污点的 Node 上运行。 @@ -439,7 +439,7 @@ Pod 也可以设置{{< glossary_tooltip text="容忍度" term_id="toleration" >} See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition) for more details. --> -进一步的细节可参阅[根据状况为节点设置污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)。 +进一步的细节可参阅[根据状况为节点设置污点](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)。 -可以在学习如何在节点上[预留计算资源](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) +可以在学习如何在节点上[预留计算资源](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) 的时候了解有关容量和可分配资源的更多信息。 如果要为非 Pod 进程显式保留资源。 -请参考[为系统守护进程预留资源](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved)。 +请参考[为系统守护进程预留资源](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved)。 {{< /note >}} -如果启用了 `TopologyManager` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/), +如果启用了 `TopologyManager` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), `kubelet` 可以在作出资源分配决策时使用拓扑提示。 -参考[控制节点上拓扑管理策略](/zh/docs/tasks/administer-cluster/topology-manager/)了解详细信息。 +参考[控制节点上拓扑管理策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/)了解详细信息。 -体面节点关闭特性依赖于 systemd,因为它要利用 +节点体面关闭特性依赖于 systemd,因为它要利用 [systemd 抑制器锁](https://www.freedesktop.org/wiki/Software/systemd/inhibit/)机制, 在给定的期限内延迟节点关闭。 @@ -762,7 +762,7 @@ Graceful node shutdown is controlled with the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) which is enabled by default in 1.21. --> -体面节点关闭特性受 `GracefulNodeShutdown` +节点体面关闭特性受 `GracefulNodeShutdown` [特性门控](/docs/reference/command-line-tools-reference/feature-gates/)控制, 在 1.21 版本中是默认启用的。 @@ -773,7 +773,7 @@ thus not activating the graceful node shutdown functionality. To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values. 
--> 注意,默认情况下,下面描述的两个配置选项,`shutdownGracePeriod` 和 -`shutdownGracePeriodCriticalPods` 都是被设置为 0 的,因此不会激活体面节点关闭功能。 +`shutdownGracePeriodCriticalPods` 都是被设置为 0 的,因此不会激活节点体面关闭功能。 要激活此功能特性,这两个 kubelet 配置选项要适当配置,并设置为非零值。 节点体面关闭的特性对应两个 -[`KubeletConfiguration`](/zh/docs/tasks/administer-cluster/kubelet-config-file/) 选项: +[`KubeletConfiguration`](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) 选项: * `shutdownGracePeriod`: * 指定节点应延迟关闭的总持续时间。此时间是 Pod 体面终止的时间总和,不区分常规 Pod - 还是[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。 + 还是[关键 Pod](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。 * `shutdownGracePeriodCriticalPods`: - * 在节点关闭期间指定用于终止[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) + * 在节点关闭期间指定用于终止[关键 Pod](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) 的持续时间。该值应小于 `shutdownGracePeriod`。 +## 节点非体面关闭 {#non-graceful-node-shutdown} + +{{< feature-state state="alpha" for_k8s_version="v1.24" >}} + + +节点关闭的操作可能无法被 kubelet 的节点关闭管理器检测到, +是因为该命令不会触发 kubelet 所使用的抑制锁定机制,或者是因为用户错误的原因, +即 ShutdownGracePeriod 和 ShutdownGracePeriodCriticalPod 配置不正确。 +请参考以上[节点体面关闭](#graceful-node-shutdown)部分了解更多详细信息。 + + +当某节点关闭但 kubelet 的节点关闭管理器未检测到这一事件时, +在那个已关闭节点上、属于 StatefulSet 的 Pod 将停滞于终止状态,并且不能移动到新的运行节点上。 +这是因为已关闭节点上的 kubelet 已不存在,亦无法删除 Pod, +因此 StatefulSet 无法创建同名的新 Pod。 +如果 Pod 使用了卷,则 VolumeAttachments 不会从原来的已关闭节点上删除, +因此这些 Pod 所使用的卷也无法挂接到新的运行节点上。 +所以,那些以 StatefulSet 形式运行的应用无法正常工作。 +如果原来的已关闭节点被恢复,kubelet 将删除 Pod,新的 Pod 将被在不同的运行节点上创建。 +如果原来的已关闭节点没有被恢复,那些在已关闭节点上的 Pod 将永远滞留在终止状态。 + + +为了缓解上述情况,用户可以手动将具有 `NoExecute` 或 `NoSchedule` 效果的 +`node kubernetes.io/out-of-service` 污点添加到节点上,标记其无法提供服务。 +如果在 `kube-controller-manager` 上启用了 `NodeOutOfServiceVolumeDetach` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +并且节点被通过污点标记为无法提供服务,如果节点 Pod 上没有设置对应的容忍度, +那么这样的 Pod 将被强制删除,并且该在节点上被终止的 Pod 将立即进行卷分离操作。 +这样就允许那些在无法提供服务节点上的 Pod 能在其他节点上快速恢复。 + + +在非体面关闭期间,Pod 分两个阶段终止: +1. 强制删除没有匹配的 `out-of-service` 容忍度的 Pod。 +2. 立即对此类 Pod 执行分离卷操作。 + + +{{< note >}} +- 在添加 `node.kubernetes.io/out-of-service` 污点之前,应该验证节点已经处于关闭或断电状态(而不是在重新启动中)。 +- 将 Pod 移动到新节点后,用户需要手动移除停止服务的污点,并且用户要检查关闭节点是否已恢复,因为该用户是最初添加污点的用户。 +{{< /note >}} + + -### 基于 Pod 优先级的体面节点关闭 {#pod-priority-graceful-node-shutdown} +### 基于 Pod 优先级的节点体面关闭 {#pod-priority-graceful-node-shutdown} {{< feature-state state="alpha" for_k8s_version="v1.23" >}} @@ -847,11 +935,11 @@ allows cluster administers to explicitly define the ordering of pods during graceful node shutdown based on [priority classes](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass). 
--> -为了在体面节点关闭期间提供更多的灵活性,尤其是处理关闭期间的 Pod 排序问题, -体面节点关闭机制能够关注 Pod 的 PriorityClass 设置,前提是你已经在集群中启用了此功能特性。 +为了在节点体面关闭期间提供更多的灵活性,尤其是处理关闭期间的 Pod 排序问题, +节点体面关闭机制能够关注 Pod 的 PriorityClass 设置,前提是你已经在集群中启用了此功能特性。 此功能特性允许集群管理员基于 Pod -的[优先级类(Priority Class)](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) -显式地定义体面节点关闭期间 Pod 的处理顺序。 +的[优先级类(Priority Class)](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) +显式地定义节点体面关闭期间 Pod 的处理顺序。 -前文所述的[体面节点关闭](#graceful-node-shutdown)特性能够分两个阶段关闭 Pod, +前文所述的[节点体面关闭](#graceful-node-shutdown)特性能够分两个阶段关闭 Pod, 首先关闭的是非关键的 Pod,之后再处理关键 Pod。 如果需要显式地以更细粒度定义关闭期间 Pod 的处理顺序,需要一定的灵活度, 这时可以使用基于 Pod 优先级的体面关闭机制。 @@ -871,7 +959,7 @@ graceful node shutdown in multiple phases, each phase shutting down a particular priority class of pods. The kubelet can be configured with the exact phases and shutdown time per phase. --> -当体面节点关闭能够处理 Pod 优先级时,体面节点关闭的处理可以分为多个阶段, +当节点体面关闭能够处理 Pod 优先级时,节点体面关闭的处理可以分为多个阶段, 每个阶段关闭特定优先级类的 Pod。kubelet 可以被配置为按确切的阶段处理 Pod, 且每个阶段可以独立设置关闭时间。 @@ -881,7 +969,7 @@ Assuming the following custom pod in a cluster, --> 假设集群中存在以下自定义的 Pod -[优先级类](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)。 +[优先级类](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)。 | Pod 优先级类名称 | Pod 优先级类数值 | |-------------------------|------------------------| @@ -894,7 +982,7 @@ in a cluster, Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) the settings for `shutdownGracePeriodByPodPriority` could look like: --> -在 [kubelet 配置](/zh/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)中, +在 [kubelet 配置](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)中, `shutdownGracePeriodByPodPriority` 可能看起来是这样: | Pod 优先级类数值 | 关闭期限 | @@ -961,17 +1049,33 @@ kubelet 会直接跳到下一个优先级数值范围进行处理。 If this feature is enabled and no configuration is provided, then no ordering action will be taken. -Using this feature, requires enabling the -`GracefulNodeShutdownBasedOnPodPriority` feature gate, and setting the kubelet -config's `ShutdownGracePeriodByPodPriority` to the desired configuration -containing the pod priority class values and their respective shutdown periods. +Using this feature requires enabling the `GracefulNodeShutdownBasedOnPodPriority` +[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) +, and setting `ShutdownGracePeriodByPodPriority` in the +[kubelet config](/docs/reference/config-api/kubelet-config.v1beta1/) +to the desired configuration containing the pod priority class values and +their respective shutdown periods. 
--> 如果此功能特性被启用,但没有提供配置数据,则不会出现排序操作。 -使用此功能特性需要启用 `GracefulNodeShutdownBasedOnPodPriority` 特性门控, -并将 kubelet 配置中的 `shutdownGracePeriodByPodPriority` 设置为期望的配置, +使用此功能特性需要启用 `GracefulNodeShutdownBasedOnPodPriority` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +并将 [kubelet 配置](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) +中的 `shutdownGracePeriodByPodPriority` 设置为期望的配置, 其中包含 Pod 的优先级类数值以及对应的关闭期限。 + +{{< note >}} +在节点体面关闭期间考虑 Pod 优先级的能力是作为 Kubernetes v1.23 中的 Alpha 功能引入的。 +在 Kubernetes {{< skew currentVersion >}} 中该功能是 Beta 版,默认启用。 +{{< /note >}} + 要在节点上启用交换内存,必须启用kubelet 的 `NodeSwap` 特性门控, 同时使用 `--fail-swap-on` 命令行参数或者将 `failSwapOn` -[配置](/zh/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)设置为 false。 +[配置](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)设置为 false。 -* 进一步了解节点[组件](/zh/docs/concepts/overview/components/#node-components)。 +* 进一步了解节点[组件](/zh-cn/docs/concepts/overview/components/#node-components)。 * 阅读 [Node 的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)。 * 阅读架构设计文档中有关 [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) 的章节。 -* 了解[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 +* 了解[污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)。 diff --git a/content/zh/docs/concepts/cluster-administration/_index.md b/content/zh-cn/docs/concepts/cluster-administration/_index.md similarity index 79% rename from content/zh/docs/concepts/cluster-administration/_index.md rename to content/zh-cn/docs/concepts/cluster-administration/_index.md index e1e929cfa6e9b..c340a741c7a89 100644 --- a/content/zh/docs/concepts/cluster-administration/_index.md +++ b/content/zh-cn/docs/concepts/cluster-administration/_index.md @@ -25,7 +25,7 @@ The cluster administration overview is for anyone creating or administering a Ku It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/). --> 集群管理概述面向任何创建和管理 Kubernetes 集群的读者人群。 -我们假设你大概了解一些核心的 Kubernetes [概念](/zh/docs/concepts/)。 +我们假设你大概了解一些核心的 Kubernetes [概念](/zh-cn/docs/concepts/)。 @@ -40,7 +40,7 @@ Before choosing a guide, here are some considerations: --> ## 规划集群 {#planning-a-cluster} -查阅[安装](/zh/docs/setup/)中的指导,获取如何规划、建立以及配置 Kubernetes +查阅[安装](/zh-cn/docs/setup/)中的指导,获取如何规划、建立以及配置 Kubernetes 集群的示例。本文所列的文章称为*发行版* 。 {{< note >}} @@ -68,12 +68,12 @@ Before choosing a guide, here are some considerations: - 你的集群是在**本地**还是**云(IaaS)** 上?Kubernetes 不能直接支持混合集群。 作为代替,你可以建立多个集群。 - **如果你在本地配置 Kubernetes**,需要考虑哪种 - [网络模型](/zh/docs/concepts/cluster-administration/networking/)最适合。 + [网络模型](/zh-cn/docs/concepts/cluster-administration/networking/)最适合。 - 你的 Kubernetes 在**裸金属硬件**上还是**虚拟机(VMs)** 上运行? - 你是想**运行一个集群**,还是打算**参与开发 Kubernetes 项目代码**? 
如果是后者,请选择一个处于开发状态的发行版。 某些发行版只提供二进制发布版,但提供更多的选择。 -- 让你自己熟悉运行一个集群所需的[组件](/zh/docs/concepts/overview/components/)。 +- 让你自己熟悉运行一个集群所需的[组件](/zh-cn/docs/concepts/overview/components/)。 ## 管理集群 {#managing-a-cluster} -* 学习如何[管理节点](/zh/docs/concepts/architecture/nodes/)。 +* 学习如何[管理节点](/zh-cn/docs/concepts/architecture/nodes/)。 -* 学习如何设定和管理集群共享的[资源配额](/zh/docs/concepts/policy/resource-quotas/) 。 +* 学习如何设定和管理集群共享的[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/) 。 ## 保护集群 {#securing-a-cluster} -* [生成证书](/zh/docs/tasks/administer-cluster/certificates/) +* [生成证书](/zh-cn/docs/tasks/administer-cluster/certificates/) 节描述了使用不同的工具链生成证书的步骤。 -* [Kubernetes 容器环境](/zh/docs/concepts/containers/container-environment/) +* [Kubernetes 容器环境](/zh-cn/docs/concepts/containers/container-environment/) 描述了 Kubernetes 节点上由 Kubelet 管理的容器的环境。 -* [控制到 Kubernetes API 的访问](/zh/docs/concepts/security/controlling-access/) +* [控制到 Kubernetes API 的访问](/zh-cn/docs/concepts/security/controlling-access/) 描述了如何为用户和 service accounts 建立权限许可。 -* [身份认证](/zh/docs/reference/access-authn-authz/authentication/) +* [身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/) 节阐述了 Kubernetes 中的身份认证功能,包括许多认证选项。 -* [鉴权](/zh/docs/reference/access-authn-authz/authorization/) +* [鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/) 与身份认证不同,用于控制如何处理 HTTP 请求。 -* [使用准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers) +* [使用准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers) 阐述了在认证和授权之后拦截到 Kubernetes API 服务的请求的插件。 -* [在 Kubernetes 集群中使用 Sysctls](/zh/docs/tasks/administer-cluster/sysctl-cluster/) +* [在 Kubernetes 集群中使用 Sysctls](/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/) 描述了管理员如何使用 `sysctl` 命令行工具来设置内核参数。 -* [审计](/zh/docs/tasks/debug/debug-cluster/audit/) +* [审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/) 描述了如何与 Kubernetes 的审计日志交互。 ### 保护 kubelet {#securing-the-kubelet} -* [主控节点通信](/zh/docs/concepts/architecture/control-plane-node-communication/) -* [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) -* [Kubelet 认证/授权](/zh/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) +* [主控节点通信](/zh-cn/docs/concepts/architecture/control-plane-node-communication/) +* [TLS 引导](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) +* [Kubelet 认证/授权](/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz/) ## 可选集群服务 {#optional-cluster-services} -* [DNS 集成](/zh/docs/concepts/services-networking/dns-pod-service/) +* [DNS 集成](/zh-cn/docs/concepts/services-networking/dns-pod-service/) 描述了如何将一个 DNS 名解析到一个 Kubernetes service。 -* [记录和监控集群活动](/zh/docs/concepts/cluster-administration/logging/) +* [记录和监控集群活动](/zh-cn/docs/concepts/cluster-administration/logging/) 阐述了 Kubernetes 的日志如何工作以及怎样实现。 diff --git a/content/zh/docs/concepts/cluster-administration/addons.md b/content/zh-cn/docs/concepts/cluster-administration/addons.md similarity index 93% rename from content/zh/docs/concepts/cluster-administration/addons.md rename to content/zh-cn/docs/concepts/cluster-administration/addons.md index dc8303f872b1c..0c231d5bb76b1 100644 --- a/content/zh/docs/concepts/cluster-administration/addons.md +++ b/content/zh-cn/docs/concepts/cluster-administration/addons.md @@ -30,7 +30,7 @@ Add-ons 扩展了 Kubernetes 的功能。 * [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) is a secure L3 networking and network policy provider. 
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy. * [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported. -* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave. +* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave. * [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options. * [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads. * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes. @@ -40,7 +40,7 @@ Add-ons 扩展了 Kubernetes 的功能。 * [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift. * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring. -* **Romana** is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize). +* [Romana](https://github.com/romana) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) API. * [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database. 
--> ## 网络和网络策略 @@ -54,7 +54,7 @@ Add-ons 扩展了 Kubernetes 的功能。 * [Cilium](https://github.com/cilium/cilium) 是一个 L3 网络和网络策略插件,能够透明的实施 HTTP/API/L7 策略。 同时支持路由(routing)和覆盖/封装(overlay/encapsulation)模式。 * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) 使 Kubernetes 无缝连接到一种 CNI 插件, - 例如:Flannel、Calico、Canal、Romana 或者 Weave。 + 例如:Flannel、Calico、Canal 或者 Weave。 * [Contiv](https://contivpp.io/) 为各种用例和丰富的策略框架提供可配置的网络 (使用 BGP 的本机 L3、使用 vxlan 的覆盖、标准 L2 和 Cisco-SDN/ACI)。 Contiv 项目完全[开源](https://github.com/contiv)。 @@ -84,9 +84,8 @@ Add-ons 扩展了 Kubernetes 的功能。 CaaS / PaaS 平台(例如关键容器服务(PKS)和 OpenShift)之间的集成。 * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) 是一个 SDN 平台,可在 Kubernetes Pods 和非 Kubernetes 环境之间提供基于策略的联网,并具有可视化和安全监控。 -* Romana 是一个 pod 网络的第三层解决方案,并支持 - [NetworkPolicy API](/zh/docs/concepts/services-networking/network-policies/)。 - Kubeadm add-on 安装细节可以在[这里](https://github.com/romana/romana/tree/master/containerize)找到。 +* [Romana](https://github.com/romana) 是一个 Pod 网络的第三层解决方案,并支持 + [NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/) API。 * [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) 提供在网络分组两端参与工作的网络和网络策略,并且不需要额外的数据库。 @@ -129,7 +128,7 @@ Add-ons 扩展了 Kubernetes 的功能。 运行虚拟机的 add-ons。通常运行在裸机集群上。 * [节点问题检测器](https://github.com/kubernetes/node-problem-detector) 在 Linux 节点上运行, 并将系统问题报告为[事件](/docs/reference/kubernetes-api/cluster-resources/event-v1/) - 或[节点状况](/zh/docs/concepts/architecture/nodes/#condition)。 + 或[节点状况](/zh-cn/docs/concepts/architecture/nodes/#condition)。 -要了解如何为集群生成证书,参阅[证书](/zh/docs/tasks/administer-cluster/certificates/)。 +要了解如何为集群生成证书,参阅[证书](/zh-cn/docs/tasks/administer-cluster/certificates/)。 diff --git a/content/zh/docs/concepts/cluster-administration/flow-control.md b/content/zh-cn/docs/concepts/cluster-administration/flow-control.md similarity index 99% rename from content/zh/docs/concepts/cluster-administration/flow-control.md rename to content/zh-cn/docs/concepts/cluster-administration/flow-control.md index cdc98c2fee278..746a7b4e3bd1a 100644 --- a/content/zh/docs/concepts/cluster-administration/flow-control.md +++ b/content/zh-cn/docs/concepts/cluster-administration/flow-control.md @@ -86,7 +86,7 @@ command-line flags to your `kube-apiserver` invocation: --> API 优先级与公平性(APF)特性由特性门控控制,默认情况下启用。 有关特性门控的一般性描述以及如何启用和禁用特性门控, -请参见[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 +请参见[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 APF 的特性门控称为 `APIPriorityAndFairness`。 此特性也与某个 {{< glossary_tooltip term_id="api-group" text="API 组" >}} 相关: @@ -584,7 +584,7 @@ opinions of the proper content of these objects. 
就可能出现抖动。 -在使用 systemd 机制的服务器上,kubelet 和容器容器运行时将日志写入到 journald 中。 +在使用 systemd 机制的服务器上,kubelet 和容器运行时将日志写入到 journald 中。 如果没有 systemd,它们将日志写入到 `/var/log` 目录下的 `.log` 文件中。 容器中的系统组件通常将日志写到 `/var/log` 目录,绕过了默认的日志机制。 他们使用 [klog](https://github.com/kubernetes/klog) 日志库。 @@ -280,7 +280,7 @@ While Kubernetes does not provide a native solution for cluster-level logging, t -你可以通过在每个节点上使用 _节点级的日志记录代理_ 来实现群集级日志记录。 +你可以通过在每个节点上使用 _节点级的日志记录代理_ 来实现集群级日志记录。 日志记录代理是一种用于暴露日志或将日志推送到后端的专用工具。 通常,日志记录代理程序是一个容器,它可以访问包含该节点上所有应用程序容器的日志文件的目录。 @@ -478,7 +478,7 @@ a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to c --> 下面是两个配置文件,可以用来实现一个带日志代理的边车容器。 第一个文件包含用来配置 fluentd 的 -[ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。 +[ConfigMap](/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/)。 {{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}} diff --git a/content/zh/docs/concepts/cluster-administration/manage-deployment.md b/content/zh-cn/docs/concepts/cluster-administration/manage-deployment.md similarity index 96% rename from content/zh/docs/concepts/cluster-administration/manage-deployment.md rename to content/zh-cn/docs/concepts/cluster-administration/manage-deployment.md index 30be88f7cc6e3..38bda5a21fb15 100644 --- a/content/zh/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/zh-cn/docs/concepts/cluster-administration/manage-deployment.md @@ -12,8 +12,8 @@ You've deployed your application and exposed it via a service. Now what? Kuberne 你已经部署了应用并通过服务暴露它。然后呢? Kubernetes 提供了一些工具来帮助管理你的应用部署,包括扩缩容和更新。 我们将更深入讨论的特性包括 -[配置文件](/zh/docs/concepts/configuration/overview/)和 -[标签](/zh/docs/concepts/overview/working-with-objects/labels/)。 +[配置文件](/zh-cn/docs/concepts/configuration/overview/)和 +[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)。 @@ -85,7 +85,7 @@ A URL can also be specified as a configuration source, which is handy for deploy 还可以使用 URL 作为配置源,便于直接使用已经提交到 Github 上的配置文件进行部署: ```shell -kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/zh/examples/application/nginx/nginx-deployment.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/zh-cn/examples/application/nginx/nginx-deployment.yaml ``` ``` @@ -239,7 +239,7 @@ persistentvolumeclaim/my-pvc created If you're interested in learning more about `kubectl`, go ahead and read [Command line tool (kubectl)](/docs/reference/kubectl/). 
--> 如果你有兴趣进一步学习关于 `kubectl` 的内容,请阅读 -[命令行工具(kubectl)](/zh/docs/reference/kubectl/)。 +[命令行工具(kubectl)](/zh-cn/docs/reference/kubectl/)。 想要了解更多信息,请参考 -[注解](/zh/docs/concepts/overview/working-with-objects/annotations/)和 +[注解](/zh-cn/docs/concepts/overview/working-with-objects/annotations/)和 [`kubectl annotate`](/docs/reference/generated/kubectl/kubectl-commands/#annotate) 命令文档。 @@ -535,7 +535,7 @@ For more information, please see [kubectl scale](/docs/reference/generated/kubec 想要了解更多信息,请参考 [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale)命令文档、 [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) 命令文档和 -[水平 Pod 自动伸缩](/zh/docs/tasks/run-application/horizontal-pod-autoscale/) 文档。 +[水平 Pod 自动伸缩](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/) 文档。 你可以使用 `kubectl patch` 来更新 API 对象。此命令支持 JSON patch、 JSON merge patch、以及 strategic merge patch。 请参考 -[使用 kubectl patch 更新 API 对象](/zh/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/) +[使用 kubectl patch 更新 API 对象](/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/) 和 [kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch). @@ -716,14 +716,13 @@ That's it! The Deployment will declaratively update the deployed nginx applicati --> 没错,就是这样!Deployment 将在后台逐步更新已经部署的 nginx 应用。 它确保在更新过程中,只有一定数量的旧副本被开闭,并且只有一定基于所需 Pod 数量的新副本被创建。 -想要了解更多细节,请参考 [Deployment](/zh/docs/concepts/workloads/controllers/deployment/)。 +想要了解更多细节,请参考 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)。 ## {{% heading "whatsnext" %}} -- 学习[如何使用 `kubectl` 观察和调试应用](/zh/docs/tasks/debug-application-cluster/debug-application-introspection/) -- 阅读[配置最佳实践和技巧](/zh/docs/concepts/configuration/overview/) - +- 学习[如何使用 `kubectl` 观察和调试应用](/zh-cn/docs/tasks/debug/debug-application/debug-running-pod/) +- 阅读[配置最佳实践和技巧](/zh-cn/docs/concepts/configuration/overview/) diff --git a/content/zh/docs/concepts/cluster-administration/networking.md b/content/zh-cn/docs/concepts/cluster-administration/networking.md similarity index 87% rename from content/zh/docs/concepts/cluster-administration/networking.md rename to content/zh-cn/docs/concepts/cluster-administration/networking.md index 6175dc539573e..edc6866ec867f 100644 --- a/content/zh/docs/concepts/cluster-administration/networking.md +++ b/content/zh-cn/docs/concepts/cluster-administration/networking.md @@ -1,10 +1,15 @@ --- -reviewers: -- thockin title: 集群网络系统 content_type: concept weight: 50 --- + @@ -51,7 +56,7 @@ Kubernetes 的宗旨就是在应用之间共享机器。 而 API 服务器还需要知道如何将动态端口数值插入到配置模块中,服务也需要知道如何找到对方等等。 与其去解决这些问题,Kubernetes 选择了其他不同的方法。 -要了解 Kubernetes 网络模型,请参阅[此处](/zh/docs/concepts/services-networking/)。 +要了解 Kubernetes 网络模型,请参阅[此处](/zh-cn/docs/concepts/services-networking/)。 -## 如何实现 Kubernetes 的网络模型 +## 如何实现 Kubernetes 的网络模型 {#how-to-implement-the-kubernetes-networking-model} 有很多种方式可以实现这种网络模型,本文档并不是对各种实现技术的详细研究, 但是希望可以作为对各种技术的详细介绍,并且成为你研究的起点。 @@ -105,7 +110,7 @@ Using this CNI plugin allows Kubernetes pods to have the same IP address inside Additionally, the CNI can be run alongside [Calico for network policy enforcement](https://docs.aws.amazon.com/eks/latest/userguide/calico.html). The AWS VPC CNI project is open source with [documentation on GitHub](https://github.com/aws/amazon-vpc-cni-k8s). 
--> -### Kubernetes 的 AWS VPC CNI +### Kubernetes 的 AWS VPC CNI {#aws-vpc-cni-for-kubernetes} [AWS VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) 为 Kubernetes 集群提供了集成的 AWS 虚拟私有云(VPC)网络。该 CNI 插件提供了高吞吐量和可用性,低延迟以及最小的网络抖动。 @@ -113,7 +118,7 @@ AWS 虚拟私有云(VPC)网络。该 CNI 插件提供了高吞吐量和可 这包括使用 VPC 流日志、VPC 路由策略和安全组进行网络流量隔离的功能。 使用该 CNI 插件,可使 Kubernetes Pod 拥有与在 VPC 网络上相同的 IP 地址。 -CNI 将 AWS 弹性网络接口(ENI)分配给每个 Kubernetes 节点,并将每个 ENI 的辅助 IP 范围用于该节点上的 Pod 。 +CNI 将 AWS 弹性网络接口(ENI)分配给每个 Kubernetes 节点,并将每个 ENI 的辅助 IP 范围用于该节点上的 Pod。 CNI 包含用于 ENI 和 IP 地址的预分配的控件,以便加快 Pod 的启动时间,并且能够支持多达 2000 个节点的大型集群。 此外,CNI 可以与 @@ -121,20 +126,20 @@ CNI 包含用于 ENI 和 IP 地址的预分配的控件,以便加快 Pod 的 AWS VPC CNI 项目是开源的,请查看 [GitHub 上的文档](https://github.com/aws/amazon-vpc-cni-k8s)。 -### Kubernetes 的 Azure CNI +### Kubernetes 的 Azure CNI {#azure-cni-for-kubernetes} [Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview) 是一个[开源插件](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md), 将 Kubernetes Pods 和 Azure 虚拟网络(也称为 VNet)集成在一起,可提供与 VM 相当的网络性能。 -Pod 可以通过 Express Route 或者 站点到站点的 VPN 来连接到对等的 VNet , +Pod 可以通过 Express Route 或者 站点到站点的 VPN 来连接到对等的 VNet, 也可以从这些网络来直接访问 Pod。Pod 可以访问受服务端点或者受保护链接的 Azure 服务,比如存储和 SQL。 你可以使用 VNet 安全策略和路由来筛选 Pod 流量。 -该插件通过利用在 Kubernetes 节点的网络接口上预分配的辅助 IP 池将 VNet 分配给 Pod 。 +该插件通过利用在 Kubernetes 节点的网络接口上预分配的辅助 IP 池将 VNet 分配给 Pod。 Azure CNI 可以在 [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni) 中获得。 @@ -142,7 +147,7 @@ Azure CNI 可以在 ### Calico @@ -150,7 +155,8 @@ Azure CNI 可以在 用于基于容器、虚拟机和本地主机的工作负载。 Calico 支持多个数据面,包括:纯 Linux eBPF 的数据面、标准的 Linux 联网数据面 以及 Windows HNS 数据面。Calico 在提供完整的联网堆栈的同时,还可与 -[云驱动 CNIs](https://docs.projectcalico.org/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations) 联合使用,以保证网络策略实施。 +[云驱动 CNIs](https://projectcalico.docs.tigera.io/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations) +联合使用,以保证网络策略实施。 -### 华为的 CNI-Genie +### 华为的 CNI-Genie {#cni-genie-from-huawei} [CNI-Genie](https://github.com/cni-genie/CNI-Genie) 是一个 CNI 插件, 可以让 Kubernetes 在运行时使用不同的[网络模型](#the-kubernetes-network-model)的 [实现同时被访问](https://github.com/cni-genie/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables)。 这包括以 [CNI 插件](https://github.com/containernetworking/cni#3rd-party-plugins)运行的任何实现,比如 -[Flannel](https://github.com/coreos/flannel#flannel)、 +[Flannel](https://github.com/flannel-io/flannel#flannel)、 [Calico](https://projectcalico.docs.tigera.io/about/about-calico/)、 [Weave-net](https://www.weave.works/oss/net/)。 @@ -240,13 +246,11 @@ Kubernetes, using the [fd.io](https://fd.io/) data plane. 
### Contiv-VPP [Contiv-VPP](https://contivpp.io/) 是用于 Kubernetes 的用户空间、面向性能的网络插件,使用 [fd.io](https://fd.io/) 数据平面。 - -### Contrail/Tungsten Fabric - [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/) 是基于 [Tungsten Fabric](https://tungsten.io) 的,真正开放的多云网络虚拟化和策略管理平台。 Contrail 和 Tungsten Fabric 与各种编排系统集成在一起,例如 Kubernetes、OpenShift、OpenStack 和 Mesos, @@ -271,7 +275,7 @@ With this toolset DANM is able to provide multiple separated network interfaces, 它由以下几个组件构成: * 能够配置具有高级功能的 IPVLAN 接口的 CNI 插件 -* 一个内置的 IPAM 模块,能够管理多个、群集内的、不连续的 L3 网络,并按请求提供动态、静态或无 IP 分配方案 +* 一个内置的 IPAM 模块,能够管理多个、集群内的、不连续的 L3 网络,并按请求提供动态、静态或无 IP 分配方案 * CNI 元插件能够通过自己的 CNI 或通过将任务授权给其他任何流行的 CNI 解决方案(例如 SRI-OV 或 Flannel)来实现将多个网络接口连接到容器 * Kubernetes 控制器能够集中管理所有 Kubernetes 主机的 VxLAN 和 VLAN 接口 * 另一个 Kubernetes 控制器扩展了 Kubernetes 的基于服务的服务发现概念,以在 Pod 的所有网络接口上工作 @@ -298,7 +302,7 @@ Kubernetes 所需要的覆盖网络。已经有许多人报告了使用 Flannel ### Hybridnet [Hybridnet](https://github.com/alibaba/hybridnet) 是一个为混合云设计的开源 CNI 插件, -它为一个或多个集群中的容器提供覆盖和底层网络。 Overlay 和 underlay 容器可以在同一个节点上运行, +它为一个或多个集群中的容器提供覆盖和底层网络。Overlay 和 underlay 容器可以在同一个节点上运行, 并具有集群范围的双向网络连接。 ### L2 networks and linux bridging -如果你具有一个“哑”的L2网络,例如“裸机”环境中的简单交换机,则应该能够执行与上述 GCE 设置类似的操作。 +如果你具有一个“哑”的 L2 网络,例如“裸机”环境中的简单交换机,则应该能够执行与上述 GCE 设置类似的操作。 请注意,这些说明仅是非常简单的尝试过-似乎可行,但尚未经过全面测试。 -如果您使用此技术并完善了流程,请告诉我们。 +如果你使用此技术并完善了流程,请告诉我们。 -根据 Lars Kellogg-Stedman 的这份非常不错的“Linux 网桥设备” +根据 Lars Kellogg-Stedman 的这份非常不错的 “Linux 网桥设备” [使用说明](https://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/)来进行操作。 +### OVN4NFV-K8s-Plugin(基于 OVN 的 CNI 控制器和插件) {#ovn4nfv-k8s-plugin-ovn-based-cni-controller-plugin} + +[OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) 是基于 OVN 的 +CNI 控制器插件,提供基于云原生的服务功能链 (SFC)、多个 OVN +覆盖网络、动态子网创建、虚拟网络的动态创建、VLAN Provider 网络、Direct Provider +网络且可与其他多网络插件组合,非常适合多集群网络中基于边缘的云原生工作负载。 + -### OVN (开放式虚拟网络) +### OVN(开放式虚拟网络) {#ovn-open-virtual-networking} OVN 是一个由 Open vSwitch 社区开发的开源的网络虚拟化解决方案。 它允许创建逻辑交换器、逻辑路由、状态 ACL、负载均衡等等来建立不同的虚拟网络拓扑。 -该项目有一个特定的Kubernetes插件和文档 [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)。 +该项目在 [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) +提供特定的 Kubernetes 插件和文档。 -### Weaveworks 的 Weave Net +### Weaveworks 的 Weave Net {#weave-net-from-weaveworks} -[Weave Net](https://www.weave.works/oss/net/) 是 Kubernetes 及其 -托管应用程序的弹性且易于使用的网络系统。 +[Weave Net](https://www.weave.works/oss/net/) 为 Kubernetes +及其托管应用提供的、弹性且易用的网络系统。 Weave Net 可以作为 [CNI 插件](https://www.weave.works/docs/net/latest/cni-plugin/) 运行或者独立运行。 在这两种运行方式里,都不需要任何配置或额外的代码即可运行,并且在两种情况下, 网络都为每个 Pod 提供一个 IP 地址 -- 这是 Kubernetes 的标准配置。 @@ -456,9 +473,8 @@ Weave Net 可以作为 [CNI 插件](https://www.weave.works/docs/net/latest/cni- -网络模型的早期设计、运行原理以及未来的一些计划,都在 -[联网设计文档](https://git.k8s.io/community/contributors/design-proposals/network/networking.md) -里有更详细的描述。 +网络模型的早期设计、运行原理以及未来的一些计划, +都在[联网设计文档](https://git.k8s.io/community/contributors/design-proposals/network/networking.md)里有更详细的描述。 diff --git a/content/zh/docs/concepts/cluster-administration/proxies.md b/content/zh-cn/docs/concepts/cluster-administration/proxies.md similarity index 91% rename from content/zh/docs/concepts/cluster-administration/proxies.md rename to content/zh-cn/docs/concepts/cluster-administration/proxies.md index be933ccc9cd3b..350295c4f9eff 100644 --- a/content/zh/docs/concepts/cluster-administration/proxies.md +++ b/content/zh-cn/docs/concepts/cluster-administration/proxies.md @@ -36,7 +36,7 @@ There are several different proxies you may encounter 
when using Kubernetes: - locates apiserver - adds authentication headers --> -1. [kubectl proxy](/zh/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api): +1. [kubectl proxy](/zh-cn/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api): - 运行在用户的桌面或 pod 中 - 从本机地址到 Kubernetes apiserver 的代理 @@ -56,10 +56,10 @@ There are several different proxies you may encounter when using Kubernetes: - can be used to reach a Node, Pod, or Service - does load balancing when used to reach a Service --> -2. [apiserver proxy](/zh/docs/tasks/access-application-cluster/access-cluster/#discovering-builtin-services): +2. [apiserver proxy](/zh-cn/docs/tasks/access-application-cluster/access-cluster/#discovering-builtin-services): - 是一个建立在 apiserver 内部的“堡垒” - - 将集群外部的用户与群集 IP 相连接,这些IP是无法通过其他方式访问的 + - 将集群外部的用户与集群 IP 相连接,这些IP是无法通过其他方式访问的 - 运行在 apiserver 进程内 - 客户端到代理使用 HTTPS 协议 (如果配置 apiserver 使用 HTTP 协议,则使用 HTTP 协议) - 通过可用信息进行选择,代理到目的地可能使用 HTTP 或 HTTPS 协议 @@ -75,7 +75,7 @@ There are several different proxies you may encounter when using Kubernetes: - provides load balancing - is only used to reach services --> -3. [kube proxy](/zh/docs/concepts/services-networking/service/#ips-and-vips): +3. [kube proxy](/zh-cn/docs/concepts/services-networking/service/#ips-and-vips): - 在每个节点上运行 - 代理 UDP、TCP 和 SCTP diff --git a/content/zh/docs/concepts/cluster-administration/system-logs.md b/content/zh-cn/docs/concepts/cluster-administration/system-logs.md similarity index 76% rename from content/zh/docs/concepts/cluster-administration/system-logs.md rename to content/zh-cn/docs/concepts/cluster-administration/system-logs.md index 8548c82310185..2365e3bd23d6a 100644 --- a/content/zh/docs/concepts/cluster-administration/system-logs.md +++ b/content/zh-cn/docs/concepts/cluster-administration/system-logs.md @@ -38,7 +38,7 @@ klog 是 Kubernetes 的日志库。 [klog](https://github.com/kubernetes/klog) 为 Kubernetes 系统组件生成日志消息。 -有关 klog 配置的更多信息,请参见[命令行工具参考](/zh/docs/reference/command-line-tools-reference/)。 +有关 klog 配置的更多信息,请参见[命令行工具参考](/zh-cn/docs/reference/command-line-tools-reference/)。 +### 上下文日志 + +{{< feature-state for_k8s_version="v1.24" state="alpha" >}} + + +上下文日志建立在结构化日志之上。 +它主要是关于开发人员如何使用日志记录调用:基于该概念的代码将更加灵活, +并且支持在[结构化日志 KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging) +中描述的额外用例。 + + +如果开发人员在他们的组件中使用额外的函数,比如 `WithValues` 或 `WithName`, +那么日志条目将会包含额外的信息,这些信息会被调用者传递给函数。 + + +目前这一特性是由 `StructuredLogging` 特性门控所控制的,默认关闭。 +这个基础设施是在 1.24 中被添加的,并不需要修改组件。 +该 [`component-base/logs/example`](https://github.com/kubernetes/kubernetes/blob/v1.24.0-beta.0/staging/src/k8s.io/component-base/logs/example/cmd/logger.go) +命令演示了如何使用新的日志记录调用以及组件如何支持上下文日志记录。 + +```console +$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/ +$ go run . --help +... + --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are: + AllAlpha=true|false (ALPHA - default=false) + AllBeta=true|false (BETA - default=false) + ContextualLogging=true|false (ALPHA - default=false) +$ go run . --feature-gates ContextualLogging=true +... 
+I0404 18:00:02.916429 451895 logger.go:94] "example/myname: runtime" foo="bar" duration="1m0s" +I0404 18:00:02.916447 451895 logger.go:95] "example: another runtime" foo="bar" duration="1m0s" +``` + + +`example` 前缀和 `foo="bar"` 会被函数的调用者添加上, +不需修改该函数,它就会记录 `runtime` 消息和 `duration="1m0s"` 值。 + +禁用上下文日志后,`WithValues` 和 `WithName` 什么都不会做, +并且会通过调用全局的 klog 日志记录器记录日志。 +因此,这些附加信息不再出现在日志输出中: + +```console +$ go run . --feature-gates ContextualLogging=false +... +I0404 18:03:31.171945 452150 logger.go:94] "runtime" duration="1m0s" +I0404 18:03:31.171962 452150 logger.go:95] "another runtime" duration="1m0s" +``` + @@ -208,7 +284,7 @@ Not all logs are guaranteed to be written in JSON format (for example, during pr Field names and JSON serialization are subject to change. --> JSON 输出并不支持太多标准 klog 参数。对于不受支持的 klog 参数的列表, -请参见[命令行工具参考](/zh/docs/reference/command-line-tools-reference/)。 +请参见[命令行工具参考](/zh-cn/docs/reference/command-line-tools-reference/)。 并不是所有日志都保证写成 JSON 格式(例如,在进程启动期间)。 如果你打算解析日志,请确保可以处理非 JSON 格式的日志行。 @@ -258,45 +334,6 @@ List of components currently supporting JSON format: * {{< glossary_tooltip term_id="kube-scheduler" text="kube-scheduler" >}} * {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} - -### 日志清洗 {#log-sanitization} - -{{< feature-state for_k8s_version="v1.20" state="alpha" >}} - -{{}} - -日志清洗(Log Sanitization)可能会导致大量的计算开销,因此不应在生产环境中启用。 -{{< /warning >}} - - -`--experimental-logging-sanitization` 参数可用来启用 klog 清洗过滤器。 -如果启用后,将检查所有日志参数中是否有标记为敏感数据的字段(比如:密码,密钥,令牌), -并且将阻止这些字段的记录。 - - -当前支持日志清洗的组件列表: - -* kube-controller-manager -* kube-apiserver -* kube-scheduler -* kubelet - -{{< note >}} - -日志清洗过滤器不会阻止用户工作负载日志泄漏敏感数据。 -{{< /note >}} - -* 阅读 [Kubernetes 日志架构](/zh/docs/concepts/cluster-administration/logging/) +* 阅读 [Kubernetes 日志架构](/zh-cn/docs/concepts/cluster-administration/logging/) * 阅读[结构化日志提案(英文)](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1602-structured-logging) +* 阅读[上下文日志提案(英文)](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging) * 阅读 [klog 参数的废弃(英文)](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) * 阅读[日志严重级别约定(英文)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md) diff --git a/content/zh/docs/concepts/cluster-administration/system-metrics.md b/content/zh-cn/docs/concepts/cluster-administration/system-metrics.md similarity index 98% rename from content/zh/docs/concepts/cluster-administration/system-metrics.md rename to content/zh-cn/docs/concepts/cluster-administration/system-metrics.md index f054161998397..2f971177fedf5 100644 --- a/content/zh/docs/concepts/cluster-administration/system-metrics.md +++ b/content/zh-cn/docs/concepts/cluster-administration/system-metrics.md @@ -204,7 +204,7 @@ kubelet 在驱动程序上保持打开状态。这意味着为了执行基础结 现在,收集加速器指标的责任属于供应商,而不是 kubelet。供应商必须提供一个收集指标的容器, 并将其公开给指标服务(例如 Prometheus)。 -[`DisableAcceleratorUsageMetrics` 特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +[`DisableAcceleratorUsageMetrics` 特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 禁止由 kubelet 收集的指标。 关于[何时会在默认情况下启用此功能也有一定规划](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria)。 @@ -271,7 +271,7 @@ The kube-scheduler identifies the resource [requests and 
limits](/docs/concepts/ - the unit of the resource if known (for example, `cores`) --> kube-scheduler 组件能够辩识各个 Pod 所配置的资源 -[请求和约束](/zh/docs/concepts/configuration/manage-resources-containers/)。 +[请求和约束](/zh-cn/docs/concepts/configuration/manage-resources-containers/)。 在 Pod 的资源请求值或者约束值非零时,kube-scheduler 会以度量值时间序列的形式 生成报告。该时间序列值包含以下标签: - 名字空间 @@ -341,4 +341,4 @@ Here is an example: * Read about the [Kubernetes deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior) --> * 阅读有关指标的 [Prometheus 文本格式](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) -* 阅读有关 [Kubernetes 弃用策略](/zh/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior) +* 阅读有关 [Kubernetes 弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior) diff --git a/content/zh/docs/concepts/cluster-administration/system-traces.md b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md similarity index 96% rename from content/zh/docs/concepts/cluster-administration/system-traces.md rename to content/zh-cn/docs/concepts/cluster-administration/system-traces.md index 71ede2f3e77b7..2a3860a361454 100644 --- a/content/zh/docs/concepts/cluster-administration/system-traces.md +++ b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md @@ -114,7 +114,7 @@ with `--tracing-config-file=`. This is an example config that re spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint: --> 要启用追踪特性,需要启用 kube-apiserver 上的 `APIServerTracing` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 然后,使用 `--tracing-config-file=<<配置文件路径>` 为 kube-apiserver 提供追踪配置文件。 下面是一个示例配置,它为万分之一的请求记录 spans,并使用了默认的 OpenTelemetry 端口。 @@ -132,7 +132,7 @@ For more information about the `TracingConfiguration` struct, see --> 有关 TracingConfiguration 结构体的更多信息,请参阅 -[API 服务器配置 API (v1alpha1)](/zh/docs/reference/config-api/apiserver-config.v1alpha1/#apiserver-k8s-io-v1alpha1-TracingConfiguration)。 +[API 服务器配置 API (v1alpha1)](/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1/#apiserver-k8s-io-v1alpha1-TracingConfiguration)。 ## ConfigMap 对象 -ConfigMap 是一个 API [对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/), +ConfigMap 是一个 API [对象](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/), 让你可以存储其他对象所需要使用的配置。 和其他 Kubernetes 对象都有一个 `spec` 不同的是,ConfigMap 使用 `data` 和 `binaryData` 字段。这些字段能够接收键-值对作为其取值。`data` 和 `binaryData` @@ -81,7 +81,7 @@ ConfigMap 是一个 API [对象](/zh/docs/concepts/overview/working-with-objects 则被设计用来保存二进制数据作为 base64 编码的字串。 ConfigMap 的名字必须是一个合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 {{< note >}} -使用 ConfigMap 作为 [subPath](/zh/docs/concepts/storage/volumes#using-subpath) 卷挂载的容器将不会收到 ConfigMap 的更新。 +使用 ConfigMap 作为 [subPath](/zh-cn/docs/concepts/storage/volumes#using-subpath) 卷挂载的容器将不会收到 ConfigMap 的更新。 {{< /note >}} 此功能特性由 `ImmutableEphemeralVolumes` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)来控制。 +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)来控制。 你可以通过将 `immutable` 字段设置为 `true` 创建不可变更的 ConfigMap。 例如: @@ -465,7 +465,7 @@ to the deleted ConfigMap, it is recommended to recreate these pods. 
* Read [The Twelve-Factor App](https://12factor.net/) to understand the motivation for separating code from configuration. --> -* 阅读 [Secret](/zh/docs/concepts/configuration/secret/)。 -* 阅读[配置 Pod 使用 ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。 -* 阅读[修改 ConfigMap(或任何其他 Kubernetes 对象)](/zh/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)。 +* 阅读 [Secret](/zh-cn/docs/concepts/configuration/secret/)。 +* 阅读[配置 Pod 使用 ConfigMap](/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/)。 +* 阅读[修改 ConfigMap(或任何其他 Kubernetes 对象)](/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)。 * 阅读 [Twelve-Factor 应用](https://12factor.net/zh_cn/)来了解将代码和配置分开的动机。 diff --git a/content/zh/docs/concepts/configuration/manage-resources-containers.md b/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md similarity index 96% rename from content/zh/docs/concepts/configuration/manage-resources-containers.md rename to content/zh-cn/docs/concepts/configuration/manage-resources-containers.md index 001ab761ef1cd..7c1d37f9fa3e6 100644 --- a/content/zh/docs/concepts/configuration/manage-resources-containers.md +++ b/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md @@ -145,8 +145,8 @@ through the Kubernetes API server. --> CPU 和内存统称为“计算资源”,或简称为“资源”。 计算资源的数量是可测量的,可以被请求、被分配、被消耗。 -它们与 [API 资源](/zh/docs/concepts/overview/kubernetes-api/) 不同。 -API 资源(如 Pod 和 [Service](/zh/docs/concepts/services-networking/service/))是可通过 +它们与 [API 资源](/zh-cn/docs/concepts/overview/kubernetes-api/) 不同。 +API 资源(如 Pod 和 [Service](/zh-cn/docs/concepts/services-networking/service/))是可通过 Kubernetes API 服务器读取和修改的对象。 kubelet 也使用此类存储来保存 -[节点层面的容器日志](/zh/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), +[节点层面的容器日志](/zh-cn/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), 容器镜像文件、以及运行中容器的可写入层。 {{< caution >}} @@ -502,7 +502,7 @@ Kubernetes 有两种方式支持节点上配置本地临时性存储: (kubelet)来保存数据的。 kubelet 也会生成 -[节点层面的容器日志](/zh/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), +[节点层面的容器日志](/zh-cn/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), 并按临时性本地存储的方式对待之。 字段 `.status.allocatable` 描述节点上可以用于 Pod 的资源总量(例如:15 个虚拟 CPU、7538 MiB 内存)。关于 Kubernetes 中节点可分配资源的信息,可参阅 -[为系统守护进程预留计算资源](/zh/docs/tasks/administer-cluster/reserve-compute-resources/)。 +[为系统守护进程预留计算资源](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/)。 -你可以配置[资源配额](/zh/docs/concepts/policy/resource-quotas/)功能特性以限制每个名字空间可以使用的资源总量。 +你可以配置[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/)功能特性以限制每个名字空间可以使用的资源总量。 当某名字空间中存在 ResourceQuota 时,Kubernetes 会在该名字空间中的对象强制实施配额。 例如,如果你为不同的团队分配名字空间,你可以为这些名字空间添加 ResourceQuota。 设置资源配额有助于防止一个团队占用太多资源,以至于这种占用会影响其他团队。 @@ -1375,10 +1375,10 @@ memory limit (and possibly request) for that container. 
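作为示意,下面的 Pod 清单同时声明了 CPU、内存和本地临时性存储的请求与限制;
名称、镜像与具体数值均为假设值,仅用于说明字段的写法:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo                    # 假设的 Pod 名称
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8     # 仅作占位的示例镜像
    resources:
      requests:
        cpu: "250m"
        memory: "64Mi"
        ephemeral-storage: "1Gi"
      limits:
        cpu: "500m"
        memory: "128Mi"
        ephemeral-storage: "2Gi"
```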
* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS * Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) --> -* 获取[分配内存资源给容器和 Pod ](/zh/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验 -* 获取[分配 CPU 资源给容器和 Pod ](/zh/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验 +* 获取[分配内存资源给容器和 Pod ](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验 +* 获取[分配 CPU 资源给容器和 Pod ](/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验 * 阅读 API 参考中 [Container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container) 和其[资源请求](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)定义。 * 阅读 XFS 中[配额](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html)的文档 -* 进一步阅读 [kube-scheduler 配置参考 (v1beta3)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta3/) +* 进一步阅读 [kube-scheduler 配置参考 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/) diff --git a/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig.md similarity index 90% rename from content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md rename to content/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig.md index 9ce91ed7ce4b6..c46b91d3c7b7d 100644 --- a/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md +++ b/content/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig.md @@ -17,7 +17,8 @@ authentication mechanisms. The `kubectl` command-line tool uses kubeconfig files find the information it needs to choose a cluster and communicate with the API server of a cluster. --> -使用 kubeconfig 文件来组织有关集群、用户、命名空间和身份认证机制的信息。`kubectl` 命令行工具使用 kubeconfig 文件来查找选择集群所需的信息,并与集群的 API 服务器进行通信。 +使用 kubeconfig 文件来组织有关集群、用户、命名空间和身份认证机制的信息。 +`kubectl` 命令行工具使用 kubeconfig 文件来查找选择集群所需的信息,并与集群的 API 服务器进行通信。 {{< note >}} -用于配置集群访问的文件称为 *kubeconfig 文件*。这是引用配置文件的通用方法。这并不意味着有一个名为 `kubeconfig` 的文件 +用于配置集群访问的文件称为“kubeconfig 文件”。 +这是引用配置文件的通用方法,并不意味着有一个名为 `kubeconfig` 的文件 {{< /note >}} 默认情况下,`kubectl` 在 `$HOME/.kube` 目录下查找名为 `config` 的文件。 -您可以通过设置 `KUBECONFIG` 环境变量或者设置 +你可以通过设置 `KUBECONFIG` 环境变量或者设置 [`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/)参数来指定其他 kubeconfig 文件。 有关创建和指定 kubeconfig 文件的分步说明,请参阅 -[配置对多集群的访问](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters)。 +[配置对多集群的访问](/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters)。 @@ -69,7 +71,7 @@ For step-by-step instructions on creating and specifying kubeconfig files, see Suppose you have several clusters, and your users and components authenticate in a variety of ways. 
For example: --> -假设您有多个集群,并且您的用户和组件以多种方式进行身份认证。比如: +假设你有多个集群,并且你的用户和组件以多种方式进行身份认证。比如: -使用 kubeconfig 文件,您可以组织集群、用户和命名空间。您还可以定义上下文,以便在集群和命名空间之间快速轻松地切换。 +使用 kubeconfig 文件,你可以组织集群、用户和命名空间。你还可以定义上下文,以便在集群和命名空间之间快速轻松地切换。 -通过 kubeconfig 文件中的 *context* 元素,使用简便的名称来对访问参数进行分组。每个上下文都有三个参数:cluster、namespace 和 user。默认情况下,`kubectl` 命令行工具使用 *当前上下文* 中的参数与集群进行通信。 +通过 kubeconfig 文件中的 *context* 元素,使用简便的名称来对访问参数进行分组。 +每个 context 都有三个参数:cluster、namespace 和 user。 +默认情况下,`kubectl` 命令行工具使用 **当前上下文** 中的参数与集群进行通信。 有关设置 `KUBECONFIG` 环境变量的示例,请参阅 - [设置 KUBECONFIG 环境变量](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)。 + [设置 KUBECONFIG 环境变量](/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)。 -* [配置对多集群的访问](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +* [配置对多集群的访问](/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) * [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config) diff --git a/content/zh/docs/concepts/configuration/overview.md b/content/zh-cn/docs/concepts/configuration/overview.md similarity index 87% rename from content/zh/docs/concepts/configuration/overview.md rename to content/zh-cn/docs/concepts/configuration/overview.md index 01314daf3439a..b0d9c2a033040 100644 --- a/content/zh/docs/concepts/configuration/overview.md +++ b/content/zh-cn/docs/concepts/configuration/overview.md @@ -77,8 +77,8 @@ This is a living document. If you think of something that is not on this list bu - Don't use naked Pods (that is, Pods not bound to a [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) or [Deployment](/docs/concepts/workloads/controllers/deployment/)) if you can avoid it. Naked Pods will not be rescheduled in the event of a node failure. --> - 如果可能,不要使用独立的 Pods(即,未绑定到 -[ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/) 或 -[Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 的 Pod)。 +[ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/replicaset/) 或 +[Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/) 的 Pod)。 如果节点发生故障,将不会重新调度独立的 Pods。 Deployment 既可以创建一个 ReplicaSet 来确保预期个数的 Pod 始终可用,也可以指定替换 Pod 的策略(例如 -[RollingUpdate](/zh/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment))。 -除了一些显式的 [`restartPolicy: Never`](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) -场景外,Deployment 通常比直接创建 Pod 要好得多。[Job](/zh/docs/concepts/workloads/controllers/job/) 也可能是合适的选择。 +[RollingUpdate](/zh-cn/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment))。 +除了一些显式的 [`restartPolicy: Never`](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) +场景外,Deployment 通常比直接创建 Pod 要好得多。[Job](/zh-cn/docs/concepts/workloads/controllers/job/) 也可能是合适的选择。 - 在创建相应的后端工作负载(Deployment 或 ReplicaSet),以及在需要访问它的任何工作负载之前创建 - [服务](/zh/docs/concepts/services-networking/service/)。 + [服务](/zh-cn/docs/concepts/services-networking/service/)。 当 Kubernetes 启动容器时,它提供指向启动容器时正在运行的所有服务的环境变量。 例如,如果存在名为 `foo` 的服务,则所有容器将在其初始环境中获得以下变量。 @@ -118,7 +118,7 @@ Deployment 既可以创建一个 ReplicaSet 来确保预期个数的 Pod 始终 - An optional (though strongly recommended) [cluster add-on](/docs/concepts/cluster-administration/addons/) is a DNS server. The DNS server watches the Kubernetes API for new `Services` and creates a set of DNS records for each. 
If DNS has been enabled throughout the cluster then all `Pods` should be able to do name resolution of `Services` automatically. --> -- 一个可选(尽管强烈推荐)的[集群插件](/zh/docs/concepts/cluster-administration/addons/) +- 一个可选(尽管强烈推荐)的[集群插件](/zh-cn/docs/concepts/cluster-administration/addons/) 是 DNS 服务器。DNS 服务器为新的 `Services` 监视 Kubernetes API,并为每个创建一组 DNS 记录。 如果在整个集群中启用了 DNS,则所有 `Pods` 应该能够自动对 `Services` 进行名称解析。 @@ -135,14 +135,14 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN If you only need access to the port for debugging purposes, you can use the [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls) or [`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/). --> 如果你只需要访问端口以进行调试,则可以使用 - [apiserver proxy](/zh/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls)或 - [`kubectl port-forward`](/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)。 + [apiserver proxy](/zh-cn/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls)或 + [`kubectl port-forward`](/zh-cn/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)。 如果你明确需要在节点上公开 Pod 的端口,请在使用 `hostPort` 之前考虑使用 - [NodePort](/zh/docs/concepts/services-networking/service/#type-nodeport) 服务。 + [NodePort](/zh-cn/docs/concepts/services-networking/service/#type-nodeport) 服务。 - 当你不需要 `kube-proxy` 负载均衡时,使用 - [无头服务](/zh/docs/concepts/services-networking/service/#headless-services) + [无头服务](/zh-cn/docs/concepts/services-networking/service/#headless-services) (`ClusterIP` 被设置为 `None`)以便于服务发现。 -- 定义并使用[标签](/zh/docs/concepts/overview/working-with-objects/labels/)来识别应用程序 +- 定义并使用[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)来识别应用程序 或 Deployment 的 __语义属性__,例如`{ app: myapp, tier: frontend, phase: test, deployment: v3 }`。 你可以使用这些标签为其他资源选择合适的 Pod; 例如,一个选择所有 `tier: frontend` Pod 的服务,或者 `app: myapp` 的所有 `phase: test` 组件。 @@ -175,7 +175,7 @@ services) (which have a `ClusterIP` of `None`) for service discovery when you do A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. [Deployments](/docs/concepts/workloads/controllers/deployment/) make it easy to update a running service without downtime. 
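作为示意,下面的 Service 使用前文提到的语义化标签作为选择算符,
选中所有带 `app: myapp` 且 `tier: frontend` 标签的 Pod;
Service 名称与端口号均为假设值:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend                 # 假设的 Service 名称
spec:
  selector:
    app: myapp
    tier: frontend
  ports:
  - port: 80
    targetPort: 8080             # 假设后端容器监听 8080 端口
```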
--> 通过从选择器中省略特定发行版的标签,可以使服务跨越多个 Deployment。 -当你需要不停机的情况下更新正在运行的服务,可以使用[Deployment](/zh/docs/concepts/workloads/controllers/deployment/)。 +当你需要不停机的情况下更新正在运行的服务,可以使用[Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)。 -- 对于常见场景,应使用 [Kubernetes 通用标签](/zh/docs/concepts/overview/working-with-objects/common-labels/)。 +- 对于常见场景,应使用 [Kubernetes 通用标签](/zh-cn/docs/concepts/overview/working-with-objects/common-labels/)。 这些标准化的标签丰富了对象的元数据,使得包括 `kubectl` 和 - [仪表板(Dashboard)](/zh/docs/tasks/access-application-cluster/web-ui-dashboard) + [仪表板(Dashboard)](/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard) 这些工具能够以可互操作的方式工作。 - 使用标签选择器进行 `get` 和 `delete` 操作,而不是特定的对象名称。 -- 请参阅[标签选择器](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)和 - [有效使用标签](/zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)部分。 +- 请参阅[标签选择器](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors)和 + [有效使用标签](/zh-cn/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)部分。 - 使用`kubectl run`和`kubectl expose`来快速创建单容器部署和服务。 - 有关示例,请参阅[使用服务访问集群中的应用程序](/zh/docs/tasks/access-application-cluster/service-access-application-cluster/)。 + 有关示例,请参阅[使用服务访问集群中的应用程序](/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster/)。 diff --git a/content/zh/docs/concepts/configuration/secret.md b/content/zh-cn/docs/concepts/configuration/secret.md similarity index 88% rename from content/zh/docs/concepts/configuration/secret.md rename to content/zh-cn/docs/concepts/configuration/secret.md index 917d31870348c..b67534c23848c 100644 --- a/content/zh/docs/concepts/configuration/secret.md +++ b/content/zh-cn/docs/concepts/configuration/secret.md @@ -35,12 +35,12 @@ Secret 是一种包含少量敏感信息例如密码、令牌或密钥的对象 这样的信息可能会被放在 {{< glossary_tooltip term_id="pod" >}} 规约中或者镜像中。 使用 Secret 意味着你不需要在应用程序代码中包含机密数据。 - -默认情况下,Kubernetes Secret 未加密地存储在 API 服务器的底层数据存储(etcd)中。 +默认情况下,Kubernetes Secret 未加密地存储在 API 服务器的底层数据存储(etcd)中。 任何拥有 API 访问权限的人都可以检索或修改 Secret,任何有权访问 etcd 的人也可以。 此外,任何有权限在命名空间中创建 Pod 的人都可以使用该访问权限读取该命名空间中的任何 Secret; 这包括间接访问,例如创建 Deployment 的能力。 为了安全地使用 Secret,请至少执行以下步骤: -1. 为 Secret [启用静态加密](/zh/docs/tasks/administer-cluster/encrypt-data/); -1. 启用或配置 [RBAC 规则](/zh/docs/reference/access-authn-authz/authorization/)来限制读取和写入 +1. 为 Secret [启用静态加密](/zh-cn/docs/tasks/administer-cluster/encrypt-data/); +1. [启用或配置 RBAC 规则](/zh-cn/docs/reference/access-authn-authz/authorization/)来限制读取和写入 Secret 的数据(包括通过间接方式)。需要注意的是,被准许创建 Pod 的人也隐式地被授权获取 Secret 内容。 1. 在适当的情况下,还可以使用 RBAC 等机制来限制允许哪些主体创建新 Secret 或替换现有 Secret。 @@ -139,7 +139,7 @@ Here are some of your options: token). --> - 如果你的云原生组件需要执行身份认证来访问你所知道的、在同一 Kubernetes 集群中运行的另一个应用, - 你可以使用 [ServiceAccount](/zh/docs/reference/access-authn-authz/authentication/#service-account-tokens) + 你可以使用 [ServiceAccount](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens) 及其令牌来标识你的客户端身份。 - 你可以运行的第三方工具也有很多,这些工具可以运行在集群内或集群外,提供机密数据管理。 例如,这一工具可能是 Pod 通过 HTTPS 访问的一个服务,该服务在客户端能够正确地通过身份认证 @@ -153,9 +153,9 @@ Here are some of your options: trusted Pods onto nodes that provide a Trusted Platform Module, configured out-of-band. 
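前文建议为 Secret 启用静态加密;作为示意,这大致对应为 kube-apiserver 提供类似下面的
EncryptionConfiguration 配置文件。其中的密钥是需要自行生成的占位值,提供者的组合也仅作举例:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc 使用本地密钥加密 Secret;此处的密钥内容为占位值
      - aescbc:
          keys:
            - name: key1
              secret: <此处填入 base64 编码的 32 字节随机密钥>
      # identity 作为回退,用于读取尚未加密的历史数据
      - identity: {}
```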
--> - 就身份认证而言,你可以为 X.509 证书实现一个定制的签名者,并使用 - [CertificateSigningRequest](/zh/docs/reference/access-authn-authz/certificate-signing-requests/) + [CertificateSigningRequest](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/) 来让该签名者为需要证书的 Pod 发放证书。 -- 你可以使用一个[设备插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) +- 你可以使用一个[设备插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) 来将节点本地的加密硬件暴露给特定的 Pod。例如,你可以将可信任的 Pod 调度到提供可信平台模块(Trusted Platform Module,TPM)的节点上。 这类节点是另行配置的。 @@ -191,9 +191,9 @@ There are several options to create a Secret: ### 创建 Secret {#creating-a-secret} -- [使用 `kubectl` 命令来创建 Secret](/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl/) -- [基于配置文件来创建 Secret](/zh/docs/tasks/configmap-secret/managing-secret-using-config-file/) -- [使用 kustomize 来创建 Secret](/zh/docs/tasks/configmap-secret/managing-secret-using-kustomize/) +- [使用 `kubectl` 命令来创建 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- [基于配置文件来创建 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- [使用 kustomize 来创建 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/) 这一命令会启动你的默认编辑器,允许你更新 `data` 字段中存放的 base64 编码的 Secret 值; 例如: @@ -380,7 +384,7 @@ This is an example of a Pod that mounts a Secret named `mysecret` in a volume: 1. 更改 Pod 定义,在 `.spec.volumes[]` 下添加一个卷。根据需要为卷设置其名称, 并将 `.spec.volumes[].secret.secretName` 字段设置为 Secret 对象的名称。 1. 为每个需要该 Secret 的容器添加 `.spec.containers[].volumeMounts[]`。 - 并将 `.spec.containers[].volumeMounts[].readyOnly` 设置为 `true`, + 并将 `.spec.containers[].volumeMounts[].readOnly` 设置为 `true`, 将 `.spec.containers[].volumeMounts[].mountPath` 设置为希望 Secret 被放置的、目前尚未被使用的路径名。 1. 更改你的镜像或命令行,以便程序读取所设置的目录下的文件。Secret 的 `data` @@ -434,7 +438,7 @@ Kubernetes v1.22 版本之前都会自动创建用来访问 Kubernetes API 的 这一老的机制是基于创建可被挂载到 Pod 中的令牌 Secret 来实现的。 在最近的版本中,包括 Kubernetes v{{< skew currentVersion >}} 中,API 凭据是直接通过 [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) -API 来获得的,这一凭据会使用[投射卷](/zh/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume) +API 来获得的,这一凭据会使用[投射卷](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume) 挂载到 Pod 中。使用这种方式获得的令牌有确定的生命期,并且在挂载它们的 Pod 被删除时自动作废。 @@ -443,11 +447,15 @@ You can still [manually create](/docs/tasks/configure-pod-container/configure-se a service account token Secret; for example, if you need a token that never expires. However, using the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) subresource to obtain a token to access the API is recommended instead. +You can use the [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) +command to obtain a token from the `TokenRequest` API. 
--> -你仍然可以[手动创建](/zh/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token) +你仍然可以[手动创建](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token) 服务账号令牌。例如,当你需要一个永远都不过期的令牌时。 不过,仍然建议使用 [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) 子资源来获得访问 API 服务器的令牌。 +你可以使用 [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) +命令调用 `TokenRequest` API 获得令牌。 {{< /note >}} -#### 将 Secret 键投射到特定目录 +#### 将 Secret 键投射到特定目录 {#projection-of-secret-keys-to-specific-paths} 你也可以控制 Secret 键所投射到的卷中的路径。 你可以使用 `.spec.volumes[].secret.items` 字段来更改每个主键的目标路径: @@ -517,7 +525,7 @@ You can also set a default mode for the entire Secret volume and override per ke For example, you can specify a default mode like this: --> -#### Secret 文件的访问权限 +#### Secret 文件的访问权限 {#secret-files-permissions} 你可以为某个 Secret 主键设置 POSIX 文件访问权限位。 如果你不指定访问权限,默认会使用 `0644`。 @@ -640,7 +648,7 @@ A container using a Secret as a [subPath](/docs/concepts/storage/volumes#using-subpath) volume mount does not receive automated Secret updates. --> -对于以 [subPath](/zh/docs/concepts/storage/volumes#using-subpath) 形式挂载 Secret 卷的容器而言, +对于以 [subPath](/zh-cn/docs/concepts/storage/volumes#using-subpath) 形式挂载 Secret 卷的容器而言, 它们无法收到自动的 Secret 更新。 {{< /note >}} @@ -652,7 +660,7 @@ the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/) --> Kubelet 组件会维护一个缓存,在其中保存节点上 Pod 卷中使用的 Secret 的当前主键和取值。 你可以配置 kubelet 如何检测所缓存数值的变化。 -[kubelet 配置](/zh/docs/reference/config-api/kubelet-config.v1beta1/)中的 +[kubelet 配置](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/)中的 `configMapAndSecretChangeDetectionStrategy` 字段控制 kubelet 所采用的策略。 默认的策略是 `Watch`。 @@ -782,7 +790,7 @@ of the secret data. This is the result of commands executed inside the container from the example above: --> -#### 通过环境变量使用 Secret 值 +#### 通过环境变量使用 Secret 值 {#consuming-secret-values-from-environment-variables} 在通过环境变量来使用 Secret 的容器中,Secret 主键展现为普通的环境变量。 这些变量的取值是 Secret 数据的 Base64 解码值。 @@ -862,7 +870,7 @@ You can use an `imagePullSecrets` to pass a secret that contains a Docker (or ot password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod. See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field. --> -#### 使用 imagePullSecrets +#### 使用 imagePullSecrets {#using-imagepullsecrets-1} `imagePullSecrets` 字段是一个列表,包含对同一名字空间中 Secret 的引用。 你可以使用 `imagePullSecrets` 将包含 Docker(或其他)镜像仓库密码的 Secret @@ -876,9 +884,9 @@ See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "versio You can learn how to specify `imagePullSecrets` from the [container images](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) documentation. 
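作为示意,下面的 Pod 通过 `imagePullSecrets` 引用一个镜像仓库凭据 Secret;
Pod 名称、镜像地址和 Secret 名称均为假设值:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo                     # 假设的 Pod 名称
spec:
  containers:
  - name: app
    image: my-registry.example:5000/app:v1     # 假设的私有镜像
  imagePullSecrets:
  - name: my-registry-credentials              # 假设的 Secret 名称
```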
--> -##### 手动设定 imagePullSecret +##### 手动设定 imagePullSecret {#manually-specifying-an-imagepullsecret} -你可以通过阅读[容器镜像](/zh/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) +你可以通过阅读[容器镜像](/zh-cn/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) 文档了解如何设置 `imagePullSecrets`。 -##### 设置 imagePullSecrets 为自动挂载 +##### 设置 imagePullSecrets 为自动挂载 {#arranging-for-imagepullsecrets-to-be-automatically-attached} 你可以手动创建 `imagePullSecret`,并在一个 ServiceAccount 中引用它。 对使用该 ServiceAccount 创建的所有 Pod,或者默认使用该 ServiceAccount 创建的 Pod 而言,其 `imagePullSecrets` 字段都会设置为该服务账号。 -请阅读[向服务账号添加 ImagePullSecrets](/zh/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) +请阅读[向服务账号添加 ImagePullSecrets](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) 来详细了解这一过程。 ## 使用场景 {#use-case} -### 使用场景:作为容器环境变量 +### 使用场景:作为容器环境变量 {#use-case-as-container-environment-variables} 创建 Secret: @@ -970,7 +978,7 @@ spec: Create a Secret containing some SSH keys: --> -### 使用场景:带 SSH 密钥的 Pod +### 使用场景:带 SSH 密钥的 Pod {#use-case-pod-with-ssh-keys} 创建包含一些 SSH 密钥的 Secret: @@ -1061,7 +1069,7 @@ credentials. You can create a `kustomization.yaml` with a `secretGenerator` field or run `kubectl create secret`. --> -### 使用场景:带有生产、测试环境凭据的 Pod +### 使用场景:带有生产、测试环境凭据的 Pod {#use-case-pods-with-prod-test-credentials} 这一示例所展示的一个 Pod 会使用包含生产环境凭据的 Secret,另一个 Pod 使用包含测试环境凭据的 Secret。 @@ -1247,7 +1255,7 @@ You can make your data "hidden" by defining a key that begins with a dot. This key represents a dotfile or "hidden" file. For example, when the following secret is mounted into a volume, `secret-volume`: --> -### 使用场景:在 Secret 卷中带句点的文件 +### 使用场景:在 Secret 卷中带句点的文件 {#use-case-dotfiles-in-a-secret-volume} 通过定义以句点(`.`)开头的主键,你可以“隐藏”你的数据。 这些主键代表的是以句点开头的文件或“隐藏”文件。 @@ -1308,7 +1316,7 @@ logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed remote file reading exploit in the server, which could expose the private key to an attacker. --> -### 使用场景:仅对 Pod 中一个容器可见的 Secret +### 使用场景:仅对 Pod 中一个容器可见的 Secret {#use-case-secret-visible-to-one-container-in-a-pod} 考虑一个需要处理 HTTP 请求,执行某些复杂的业务逻辑,之后使用 HMAC 来对某些消息进行签名的程序。因为这一程序的应用逻辑很复杂, @@ -1341,7 +1349,7 @@ the [Secret](/docs/reference/kubernetes-api/config-and-storage-resources/secret- resource, or certain equivalent `kubectl` command line flags (if available). The Secret type is used to facilitate programmatic handling of the Secret data. -Kubernetes provides several builtin types for some common usage scenarios. +Kubernetes provides several built-in types for some common usage scenarios. These types vary in terms of the validations performed and the constraints Kubernetes imposes on them. 
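作为示意,下面是一个带有自定义 `type` 的 Secret;类型名 `example.com/my-type`
以及其中的键值均为假设值:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-custom-secret           # 假设的名称
type: example.com/my-type          # 假设的自定义类型
stringData:
  token: t0p-Secret-value          # 示例数据,实际内容请自行替换
```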
--> @@ -1355,10 +1363,10 @@ Kubernetes 提供若干种内置的类型,用于一些常见的使用场景。 针对这些类型,Kubernetes 所执行的合法性检查操作以及对其所实施的限制各不相同。 通过为 Secret 对象的 `type` 字段设置一个非空的字符串值,你也可以定义并使用自己 -Secret 类型。如果 `type` 值为空字符串,则被视为 `Opaque` 类型。 +Secret 类型(如果 `type` 值为空字符串,则被视为 `Opaque` 类型)。 Kubernetes 并不对类型的名称作任何限制。不过,如果你要使用内置类型之一, @@ -1433,35 +1441,70 @@ empty-secret Opaque 0 2m6s `DATA` 列显示 Secret 中保存的数据条目个数。 -在这个例子种,`0` 意味着我们刚刚创建了一个空的 Secret。 +在这个例子种,`0` 意味着你刚刚创建了一个空的 Secret。 +### 服务账号令牌 Secret {#service-account-token-secrets} + +类型为 `kubernetes.io/service-account-token` 的 Secret +用来存放标识某{{< glossary_tooltip text="服务账号" term_id="service-account" >}}的令牌凭据。 + + +从 v1.22 开始,这种类型的 Secret 不再被用来向 Pod 中加载凭据数据, +建议通过 [TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) +API 来获得令牌,而不是使用服务账号令牌 Secret 对象。 +通过 `TokenRequest` API 获得的令牌比保存在 Secret 对象中的令牌更加安全, +因为这些令牌有着被限定的生命期,并且不会被其他 API 客户端读取。 +你可以使用 [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) +命令调用 `TokenRequest` API 获得令牌。 + + +只有在你无法使用 `TokenRequest` API 来获取令牌, +并且你能够接受因为将永不过期的令牌凭据写入到可读取的 API 对象而带来的安全风险时, +才应该创建服务账号令牌 Secret 对象。 + + +使用这种 Secret 类型时,你需要确保对象的注解 `kubernetes.io/service-account-name` +被设置为某个已有的服务账号名称。 +如果你同时负责 ServiceAccount 和 Secret 对象的创建,应该先创建 ServiceAccount 对象。 + + -### 服务账号令牌 Secret {#service-account-token-secrets} - -类型为 `kubernetes.io/service-account-token` 的 Secret 用来存放标识某 -{{< glossary_tooltip text="服务账号" term_id="service-account" >}}的令牌。 -使用这种 Secret 类型时,你需要确保对象的注解 `kubernetes.io/service-account-name` -被设置为某个已有的服务账号名称。某个 Kubernetes -{{< glossary_tooltip text="控制器" term_id="controller" >}}会填写 Secret -的其它字段,例如 `kubernetes.io/service-account.uid` 注解以及 `data` 字段中的 +当 Secret 对象被创建之后,某个 Kubernetes{{< glossary_tooltip text="控制器" term_id="controller" >}}会填写 +Secret 的其它字段,例如 `kubernetes.io/service-account.uid` 注解以及 `data` 字段中的 `token` 键值,使之包含实际的令牌内容。 下面的配置实例声明了一个服务账号令牌 Secret: @@ -1494,45 +1537,33 @@ data: ``` -Kubernetes 在创建 Pod 时会自动创建一个服务账号 Secret 并自动修改你的 Pod -以使用该 Secret。该服务账号令牌 Secret 中包含了访问 Kubernetes API -所需要的凭据。 - -如果需要,可以禁止或者重载这种自动创建并使用 API 凭据的操作。 -不过,如果你仅仅是希望能够安全地访问 API 服务器,这是建议的工作方式。 +创建了 Secret 之后,等待 Kubernetes 在 `data` 字段中填充 `token` 主键。 -参考 [ServiceAccount](/zh/docs/tasks/configure-pod-container/configure-service-account/) +参考 [ServiceAccount](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/) 文档了解服务账号的工作原理。你也可以查看 [`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) 资源中的 `automountServiceAccountToken` 和 `serviceAccountName` 字段文档, -进一步了解从 Pod 中引用服务账号。 +进一步了解从 Pod 中引用服务账号凭据。 ### Docker 配置 Secret {#docker-config-secrets} -你可以使用下面两种 `type` 值之一来创建 Secret,用以存放访问 Docker 仓库 -来下载镜像的凭据。 +你可以使用下面两种 `type` 值之一来创建 Secret,用以存放用于访问容器鏡像倉庫的凭据: - `kubernetes.io/dockercfg` - `kubernetes.io/dockerconfigjson` @@ -1550,7 +1581,7 @@ Secret 的 `data` 字段中包含名为 `.dockercfg` 的主键,其对应键值 编码的某 `~/.dockercfg` 文件的内容。 当你使用清单文件来创建这两类 Secret 时,API 服务器会检查 `data` 字段中是否 存在所期望的主键,并且验证其中所提供的键值是否是合法的 JSON 数据。 不过,API 服务器不会检查 JSON 数据本身是否是一个合法的 Docker 配置文件内容。 + +当你没有 Docker 配置文件,或者你想使用 `kubectl` 创建一个 Secret +来访问容器倉庫时,你可以这样做: + ```shell kubectl create secret docker-registry secret-tiger-docker \ --docker-email=tiger@acme.example \ --docker-username=tiger \ - --docker-password=pass113 \ + --docker-password=pass1234 \ --docker-server=my-registry.example:5000 ``` 上面的命令创建一个类型为 `kubernetes.io/dockerconfigjson` 的 Secret。 如果你对 `.data.dockerconfigjson` 内容进行转储并执行 base64 解码: +```shell +kubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d +``` + + +那么输出等价于这个 
JSON 文档(这也是一个有效的 Docker 配置文件): + ```json { "auths": { "my-registry.example:5000": { "username": "tiger", - "password": "pass113", - "email": "tiger@acme.com", - "auth": "dGlnZXI6cGFzczExMw==" + "password": "pass1234", + "email": "tiger@acme.example", + "auth": "dGlnZXI6cGFzczEyMzQ=" } } } @@ -1642,15 +1687,15 @@ Anyone who can read that Secret can learn the registry access bearer token. The `kubernetes.io/basic-auth` type is provided for storing credentials needed for basic authentication. When using this Secret type, the `data` field of the -Secret must contain the following two keys: +Secret must contain one of the following two keys: -- `username`: the user name for authentication; -- `password`: the password or token for authentication. +- `username`: the user name for authentication +- `password`: the password or token for authentication --> ### 基本身份认证 Secret {#basic-authentication-secret} `kubernetes.io/basic-auth` 类型用来存放用于基本身份认证所需的凭据信息。 -使用这种 Secret 类型时,Secret 的 `data` 字段必须包含以下两个键: +使用这种 Secret 类型时,Secret 的 `data` 字段必须包含以下两个键之一: - `username`: 用于身份认证的用户名; - `password`: 用于身份认证的密码或令牌。 @@ -1660,11 +1705,11 @@ Both values for the above two keys are base64 encoded strings. You can, of course, provide the clear text content using the `stringData` for Secret creation. -The following YAML is an example config for a basic authentication Secret: +The following manifest is an example of a basic authentication Secret: --> 以上两个键的键值都是 base64 编码的字符串。 当然你也可以在创建 Secret 时使用 `stringData` 字段来提供明文形式的内容。 -下面的 YAML 是基本身份认证 Secret 的一个示例清单: +以下清单是基本身份验证 Secret 的示例: ```yaml apiVersion: v1 @@ -1673,13 +1718,13 @@ metadata: name: secret-basic-auth type: kubernetes.io/basic-auth stringData: - username: admin # kubernetes.io/basic-auth 类型的必需字段 + username: admin # kubernetes.io/basic-auth 类型的必需字段 password: t0p-Secret # kubernetes.io/basic-auth 类型的必需字段 ``` ### 启动引导令牌 Secret {#bootstrap-token-secrets} @@ -1890,11 +1935,11 @@ data: ``` -## Secret 的信息安全问题 +## Secret 的信息安全问题 {#information-security-for-secrets} 尽管 ConfigMap 和 Secret 的工作方式类似,但 Kubernetes 对 Secret 有一些额外的保护。 @@ -2096,7 +2141,7 @@ on that node. variable configuration so that the other containers do not have access to that Secret. --> -### 针对开发人员的安全性建议 +### 针对开发人员的安全性建议 {#security-recommendations-for-developers} - 应用在从环境变量或卷中读取了机密信息内容之后仍要对其进行保护。例如, 你的应用应该避免用明文的方式将 Secret 数据写入日志,或者将其传递给不可信的第三方。 @@ -2117,10 +2162,10 @@ on that node. - When deploying applications that interact with the Secret API, you should limit access using [authorization policies](/docs/reference/access-authn-authz/authorization/) such as - [RBAC]( /docs/reference/access-authn-authz/rbac/). + [RBAC](/docs/reference/access-authn-authz/rbac/). 
--> -- 部署与 Secret API 交互的应用时,你应该使用 [RBAC](/zh/docs/reference/access-authn-authz/rbac/) - 这类[鉴权策略](/zh/docs/reference/access-authn-authz/authorization/)来限制访问。 +- 部署与 Secret API 交互的应用时,你应该使用 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) + 这类[鉴权策略](/zh-cn/docs/reference/access-authn-authz/authorization/)来限制访问。 -### 针对集群管理员的安全性建议 +### 针对集群管理员的安全性建议 {#security-recommendations-for-cluster-administrators} {{< caution >}} - 保留(使用 Kubernetes API)对集群中所有 Secret 对象执行 `watch` 或 `list` 操作的能力, 这样只有特权级最高、系统级别的组件能够执行这类操作。 - 在部署需要通过 Secret API 交互的应用时,你应该通过使用 - [RBAC](/zh/docs/reference/access-authn-authz/rbac/) - 这类[鉴权策略](/zh/docs/reference/access-authn-authz/authorization/)来限制访问。 + [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) + 这类[鉴权策略](/zh-cn/docs/reference/access-authn-authz/authorization/)来限制访问。 -- 学习如何[使用 `kubectl` 管理 Secret](/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl/) -- 学习如何[使用配置文件管理 Secret](/zh/docs/tasks/configmap-secret/managing-secret-using-config-file/) -- 学习如何[使用 kustomize 管理 Secret](/zh/docs/tasks/configmap-secret/managing-secret-using-kustomize/) -- 阅读 [API 参考](/zh/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/)了解 `Secret` +- 学习如何[使用 `kubectl` 管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- 学习如何[使用配置文件管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- 学习如何[使用 kustomize 管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/) +- 阅读 [API 参考](/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/)了解 `Secret` diff --git a/content/zh-cn/docs/concepts/configuration/windows-resource-management.md b/content/zh-cn/docs/concepts/configuration/windows-resource-management.md new file mode 100644 index 0000000000000..4a4cefb483113 --- /dev/null +++ b/content/zh-cn/docs/concepts/configuration/windows-resource-management.md @@ -0,0 +1,134 @@ +--- +title: Windows 节点的资源管理 +content_type: concept +weight: 75 +--- + + + +本页概述了 Linux 和 Windows 在资源管理方式上的区别。 + + + +在 Linux 节点上,{{< glossary_tooltip text="cgroup" term_id="cgroup" >}} 用作资源控制的 Pod 边界。 +在这个边界内创建容器以便于隔离网络、进程和文件系统。 +Linux cgroup API 可用于收集 CPU、I/O 和内存使用统计数据。 + +与此相反,Windows 中每个容器对应一个[**作业对象**](https://docs.microsoft.com/zh-cn/windows/win32/procthread/job-objects), +与系统命名空间过滤器一起使用,将所有进程包含在一个容器中,提供与主机的逻辑隔离。 +(作业对象是一种 Windows 进程隔离机制,不同于 Kubernetes 提及的 {{< glossary_tooltip term_id="job" text="Job" >}})。 + +如果没有命名空间过滤,就无法运行 Windows 容器。 +这意味着在主机环境中无法让系统特权生效,因此特权容器在 Windows 上不可用。 +容器不能使用来自主机的标识,因为安全帐户管理器(Security Account Manager,SAM)是独立的。 + + +## 内存管理 {#resource-management-memory} + +Windows 不像 Linux 一样提供杀手(killer)机制,杀死内存不足的进程。 +Windows 始终将所有用户态内存分配视为虚拟内存,并强制使用页面文件(pagefile)。 + +Windows 节点不会为进程过量使用内存。 +最终结果是 Windows 不会像 Linux 那样达到内存不足的情况,Windows 将进程页面放到磁盘, +不会因为内存不足(OOM)而终止进程。 +如果内存配置过量且所有物理内存都已耗尽,则换页性能就会降低。 + +## CPU 管理 {#resource-management-cpu} + +Windows 可以限制为不同进程分配的 CPU 时间长度,但无法保证最小的 CPU 时间长度。 + +在 Windows 上,kubelet 支持使用命令行标志来设置 kubelet 进程的[调度优先级](https://docs.microsoft.com/zh-cn/windows/win32/procthread/scheduling-priorities): +`--windows-priorityclass`。 +与 Windows 主机上运行的其他进程相比,此标志允许 kubelet 进程获取更多的 CPU 时间片。 +有关允许值及其含义的更多信息,请访问 [Windows 优先级类](https://docs.microsoft.com/zh-cn/windows/win32/procthread/scheduling-priorities#priority-class)。 +为了确保运行的 Pod 不会耗尽 kubelet 的 CPU 时钟周期, +要将此标志设置为 `ABOVE_NORMAL_PRIORITY_CLASS` 或更高。 + + +## 资源预留 {#resource-reservation} + +为了满足操作系统、容器运行时和 kubelet 等 Kubernetes 主机进程使用的内存和 CPU, +你可以(且应该)用 `--kube-reserved` 和/或 `--system-reserved` kubelet 标志来预留内存和 CPU 资源。 
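+
+<!--
+For example, here is a sketch of kubelet flags that reserve CPU and memory on a
+Windows node. The values below are placeholders, not recommendations:
+-->
+例如，下面是一段在 Windows 节点上为 kubelet 预留 CPU 和内存的示意性启动参数
+（其中的数值仅为占位示例，并非推荐值，请根据节点的实际情况调整）：
+
+```powershell
+# 示意：为 Kubernetes 系统守护进程和操作系统各预留 0.5 核 CPU 与 2Gi 内存（数值仅为示例）
+kubelet --kube-reserved=cpu=500m,memory=2Gi --system-reserved=cpu=500m,memory=2Gi
+```
+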
+在 Windows 上,这些值仅用于计算节点的[可分配](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)资源。 + + +{{< caution >}} +在你部署工作负载时,需对容器设置内存和 CPU 资源的限制。 +这也会从 `NodeAllocatable` 中减去,帮助集群范围的调度器决定哪些 Pod 放到哪些节点上。 + +若调度 Pod 时未设置限制值,可能对 Windows 节点过量配置资源。 +在极端情况下,这会让节点变得不健康。 +{{< /caution >}} + + +在 Windows 上,一种好的做法是预留至少 2GiB 的内存。 + +要决定预留多少 CPU,需明确每个节点的最大 Pod 密度, +并监控节点上运行的系统服务的 CPU 使用率,然后选择一个满足工作负载需求的值。 diff --git a/content/zh/docs/concepts/containers/_index.md b/content/zh-cn/docs/concepts/containers/_index.md similarity index 85% rename from content/zh/docs/concepts/containers/_index.md rename to content/zh-cn/docs/concepts/containers/_index.md index c627f66f1e17a..de66ab39eab63 100644 --- a/content/zh/docs/concepts/containers/_index.md +++ b/content/zh-cn/docs/concepts/containers/_index.md @@ -15,16 +15,12 @@ run it. Containers decouple applications from underlying host infrastructure. This makes deployment easier in different cloud or OS environments. --> - 每个运行的容器都是可重复的; -包含依赖环境在内的标准,意味着无论您在哪里运行它,您都会得到相同的行为。 +包含依赖环境在内的标准,意味着无论你在哪里运行它都会得到相同的行为。 容器将应用程序从底层的主机设施中解耦。 这使得在不同的云或 OS 环境中部署更加容易。 - - - ## 容器镜像 {#container-images} -[容器镜像](/zh/docs/concepts/containers/images/)是一个随时可以运行的软件包, +[容器镜像](/zh-cn/docs/concepts/containers/images/)是一个随时可以运行的软件包, 包含运行应用程序所需的一切:代码和它需要的所有运行时、应用程序和系统库,以及一些基本设置的默认值。 根据设计,容器是不可变的:你不能更改已经运行的容器的代码。 @@ -57,7 +53,7 @@ the change, then recreate the container to start from the updated image. * Read about [Pods](/docs/concepts/workloads/pods/) --> -* 进一步阅读[容器镜像](/zh/docs/concepts/containers/images/) -* 进一步阅读 [Pods](/zh/docs/concepts/workloads/pods/) +* 进一步阅读[容器镜像](/zh-cn/docs/concepts/containers/images/) +* 进一步阅读 [Pods](/zh-cn/docs/concepts/workloads/pods/) diff --git a/content/zh/docs/concepts/containers/container-environment.md b/content/zh-cn/docs/concepts/containers/container-environment.md similarity index 87% rename from content/zh/docs/concepts/containers/container-environment.md rename to content/zh-cn/docs/concepts/containers/container-environment.md index 362e9d4c1b5a5..32ed23ba503ae 100644 --- a/content/zh/docs/concepts/containers/container-environment.md +++ b/content/zh-cn/docs/concepts/containers/container-environment.md @@ -34,8 +34,8 @@ The Kubernetes Container environment provides several important resources to Con Kubernetes 的容器环境给容器提供了几个重要的资源: -* 文件系统,其中包含一个[镜像](/zh/docs/concepts/containers/images/) - 和一个或多个的[卷](/zh/docs/concepts/storage/volumes/) +* 文件系统,其中包含一个[镜像](/zh-cn/docs/concepts/containers/images/) + 和一个或多个的[卷](/zh-cn/docs/concepts/storage/volumes/) * 容器自身的信息 * 集群中其他对象的信息 @@ -59,7 +59,7 @@ as are any environment variables specified statically in the container image. [`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html) 函数来获取。 Pod 名称和命名空间可以通过 -[下行 API](/zh/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) +[下行 API](/zh-cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) 转换为环境变量。 Pod 定义中的用户所定义的环境变量也可在容器中使用,就像在 container 镜像中静态指定的任何环境变量一样。 @@ -100,7 +100,7 @@ if [DNS addon](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addon * Get hands-on experience [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). 
--> -* 学习更多有关[容器生命周期回调](/zh/docs/concepts/containers/container-lifecycle-hooks/)的知识 -* 动手[为容器生命周期事件添加处理程序](/zh/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) +* 学习更多有关[容器生命周期回调](/zh-cn/docs/concepts/containers/container-lifecycle-hooks/)的知识 +* 动手[为容器生命周期事件添加处理程序](/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) diff --git a/content/zh/docs/concepts/containers/container-lifecycle-hooks.md b/content/zh-cn/docs/concepts/containers/container-lifecycle-hooks.md similarity index 98% rename from content/zh/docs/concepts/containers/container-lifecycle-hooks.md rename to content/zh-cn/docs/concepts/containers/container-lifecycle-hooks.md index 32d86af26ab97..9791183f47ab1 100644 --- a/content/zh/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/zh-cn/docs/concepts/containers/container-lifecycle-hooks.md @@ -80,7 +80,7 @@ A more detailed description of the termination behavior can be found in [Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination). --> 有关终止行为的更详细描述,请参见 -[终止 Pod](/zh/docs/concepts/workloads/pods/pod-lifecycle/#termination-of-pods)。 +[终止 Pod](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#termination-of-pods)。 -* 进一步了解[容器环境](/zh/docs/concepts/containers/container-environment/) -* 动手实践,[为容器生命周期事件添加处理程序](/zh/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) +* 进一步了解[容器环境](/zh-cn/docs/concepts/containers/container-environment/) +* 动手实践,[为容器生命周期事件添加处理程序](/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) diff --git a/content/zh/docs/concepts/containers/images.md b/content/zh-cn/docs/concepts/containers/images.md similarity index 95% rename from content/zh/docs/concepts/containers/images.md rename to content/zh-cn/docs/concepts/containers/images.md index 12c48ef46326b..f3783775cceb2 100644 --- a/content/zh/docs/concepts/containers/images.md +++ b/content/zh-cn/docs/concepts/containers/images.md @@ -101,7 +101,7 @@ these values have: --> ### 镜像拉取策略 {#image-pull-policy} -容器的 `imagePullPolicy` 和镜像的标签会影响 [kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) 尝试拉取(下载)指定的镜像。 +容器的 `imagePullPolicy` 和镜像的标签会影响 [kubelet](/zh-cn/docs/reference/command-line-tools-reference/kubelet/) 尝试拉取(下载)指定的镜像。 以下列表包含了 `imagePullPolicy` 可以设置的值,以及这些值的效果: @@ -179,7 +179,7 @@ running the same code no matter what tag changes happen at the registry. 
镜像摘要唯一标识了镜像的特定版本,因此 Kubernetes 每次启动具有指定镜像名称和摘要的容器时,都会运行相同的代码。 通过摘要指定镜像可固定你运行的代码,这样镜像仓库的变化就不会导致版本的混杂。 -有一些第三方的[准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) +有一些第三方的[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/) 在创建 Pod(和 Pod 模板)时产生变更,这样运行的工作负载就是根据镜像摘要,而不是标签来定义的。 无论镜像仓库上的标签发生什么变化,你都想确保你所有的工作负载都运行相同的代码,那么指定镜像摘要会很有用。 @@ -247,7 +247,7 @@ If you would like to always force a pull, you can do one of the following: 当你提交 Pod 时,Kubernetes 会将策略设置为 `Always`。 - 省略 `imagePullPolicy` 和镜像的标签; 当你提交 Pod 时,Kubernetes 会将策略设置为 `Always`。 -- 启用准入控制器 [AlwaysPullImages](/zh/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)。 +- 启用准入控制器 [AlwaysPullImages](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)。 有关配置私有容器镜像仓库的示例,请参阅任务 -[从私有镜像库中提取图像](/zh/docs/tasks/configure-pod-container/pull-image-private-registry)。 +[从私有镜像库中提取图像](/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry)。 该示例使用 Docker Hub 中的私有注册表。 你需要对使用私有仓库的每个 Pod 执行以上操作。 不过,设置该字段的过程也可以通过为 -[服务账号](/zh/docs/tasks/configure-pod-container/configure-service-account/) +[服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/) 资源设置 `imagePullSecrets` 来自动完成。 有关详细指令可参见 -[将 ImagePullSecrets 添加到服务账号](/zh/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)。 +[将 ImagePullSecrets 添加到服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)。 你也可以将此方法与节点级别的 `.docker/config.json` 配置结合使用。 来自不同来源的凭据会被合并。 @@ -685,7 +685,7 @@ common use cases and suggested solutions. - Move sensitive data into a "Secret" resource, instead of packaging it in an image. --> 3. 集群使用专有镜像,且有些镜像需要更严格的访问控制 - - 确保 [AlwaysPullImages 准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)被启用。否则,所有 Pod 都可以使用所有镜像。 + - 确保 [AlwaysPullImages 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)被启用。否则,所有 Pod 都可以使用所有镜像。 - 确保将敏感数据存储在 Secret 资源中,而不是将其打包在镜像里 4. 集群是多租户的并且每个租户需要自己的私有仓库 - - 确保 [AlwaysPullImages 准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)。否则,所有租户的所有的 Pod 都可以使用所有镜像。 + - 确保 [AlwaysPullImages 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)。否则,所有租户的所有的 Pod 都可以使用所有镜像。 - 为私有仓库启用鉴权 - 为每个租户生成访问仓库的凭据,放置在 Secret 中,并将 Secrert 发布到各租户的命名空间下。 - 租户将 Secret 添加到每个名字空间中的 imagePullSecrets @@ -716,4 +716,4 @@ Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config. * Learn about [container image garbage collection](/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection). 
--> * 阅读 [OCI Image Manifest 规范](https://github.com/opencontainers/image-spec/blob/master/manifest.md)。 -* 了解[容器镜像垃圾收集](/zh/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection)。 +* 了解[容器镜像垃圾收集](/zh-cn/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection)。 diff --git a/content/zh/docs/concepts/containers/runtime-class.md b/content/zh-cn/docs/concepts/containers/runtime-class.md similarity index 77% rename from content/zh/docs/concepts/containers/runtime-class.md rename to content/zh-cn/docs/concepts/containers/runtime-class.md index 63e14b435c446..5eb922eee5d31 100644 --- a/content/zh/docs/concepts/containers/runtime-class.md +++ b/content/zh-cn/docs/concepts/containers/runtime-class.md @@ -92,7 +92,7 @@ The configurations have a corresponding `handler` name, referenced by the Runtim handler must be a valid [DNS label name](/docs/concepts/overview/working-with-objects/names/#dns-label-names). --> 所有这些配置都具有相应的 `handler` 名,并被 RuntimeClass 引用。 -handler 必须是有效的 [DNS 标签名](/zh/docs/concepts/overview/working-with-objects/names/#dns-label-names)。 +handler 必须是有效的 [DNS 标签名](/zh-cn/docs/concepts/overview/working-with-objects/names/#dns-label-names)。 {{< note >}} 建议将 RuntimeClass 写操作(create、update、patch 和 delete)限定于集群管理员使用。 -通常这是默认配置。参阅[授权概述](/zh/docs/reference/access-authn-authz/authorization/)了解更多信息。 +通常这是默认配置。参阅[授权概述](/zh-cn/docs/reference/access-authn-authz/authorization/)了解更多信息。 {{< /note >}} ## 使用说明 {#usage} -一旦完成集群中 RuntimeClasses 的配置,使用起来非常方便。 -在 Pod spec 中指定 `runtimeClassName` 即可。例如: +一旦完成集群中 RuntimeClasses 的配置, +你可以在 Pod spec 中指定 `runtimeClassName` 来使用它。例如: ```yaml apiVersion: v1 @@ -156,13 +159,13 @@ spec: This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the `Failed` terminal [phase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase). Look for a -corresponding [event](/docs/tasks/debug-application-cluster/debug-application-introspection/) for an +corresponding [event](/docs/tasks/debug/debug-application/debug-running-pod/) for an error message. --> 这一设置会告诉 kubelet 使用所指的 RuntimeClass 来运行该 pod。 如果所指的 RuntimeClass 不存在或者 CRI 无法运行相应的 handler, -那么 pod 将会进入 `Failed` 终止[阶段](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase)。 -你可以查看相应的[事件](/zh/docs/tasks/debug-application-cluster/debug-application-introspection/), +那么 pod 将会进入 `Failed` 终止[阶段](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase)。 +你可以查看相应的[事件](/zh-cn/docs/tasks/debug/debug-application/debug-running-pod/), 获取执行过程中的错误信息。 -Dockershim 自 Kubernetes v1.20 起已弃用,并将在 v1.24 中删除。 -有关弃用的更多信息查看 [dockershim 弃用](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)。 - - -为 dockershim 设置 RuntimeClass 时,必须将运行时处理程序设置为 `docker`。 -Dockershim 不支持自定义的可配置的运行时处理程序。 - -#### [containerd](https://containerd.io/) +#### {{< glossary_tooltip term_id="containerd" >}} -更详细信息,请查阅 containerd -[CRI 插件配置指南](https://github.com/containerd/cri/blob/master/docs/config.md) +更详细信息,请查阅 containerd 的[配置指南](https://github.com/containerd/cri/blob/master/docs/config.md) #### [cri-o](https://cri-o.io/) @@ -278,37 +263,32 @@ by each. 
与 `nodeSelector` 一样,tolerations 也在 admission 阶段与 pod 的 tolerations 合并,取二者的并集。 更多有关 node selector 和 tolerations 的配置信息,请查阅 -[将 Pod 分派到节点](/zh/docs/concepts/scheduling-eviction/assign-pod-node/)。 +[将 Pod 分派到节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)。 ### Pod 开销 {#pod-overhead} -{{< feature-state for_k8s_version="v1.18" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} 你可以指定与运行 Pod 相关的 _开销_ 资源。声明开销即允许集群(包括调度器)在决策 Pod 和资源时将其考虑在内。 -若要使用 Pod 开销特性,你必须确保 PodOverhead -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) -处于启用状态(默认为启用状态)。 Pod 开销通过 RuntimeClass 的 `overhead` 字段定义。 -通过使用这些字段,你可以指定使用该 RuntimeClass 运行 Pod 时的开销并确保 Kubernetes 将这些开销计算在内。 +通过使用这个字段,你可以指定使用该 RuntimeClass 运行 Pod 时的开销并确保 Kubernetes 将这些开销计算在内。 ## {{% heading "whatsnext" %}} @@ -320,5 +300,5 @@ Pod 开销通过 RuntimeClass 的 `overhead` 字段定义。 --> - [RuntimeClass 设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md) - [RuntimeClass 调度设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling) -- 阅读关于 [Pod 开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/) 的概念 +- 阅读关于 [Pod 开销](/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/) 的概念 - [PodOverhead 特性设计](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead) diff --git a/content/zh/docs/concepts/extend-kubernetes/_index.md b/content/zh-cn/docs/concepts/extend-kubernetes/_index.md similarity index 88% rename from content/zh/docs/concepts/extend-kubernetes/_index.md rename to content/zh-cn/docs/concepts/extend-kubernetes/_index.md index e6f8d4bf86026..95b0b1cf31d19 100644 --- a/content/zh/docs/concepts/extend-kubernetes/_index.md +++ b/content/zh-cn/docs/concepts/extend-kubernetes/_index.md @@ -79,11 +79,11 @@ Customization approaches can be broadly divided into *configuration*, which only 配置文件和参数标志的说明位于在线文档的参考章节,按可执行文件组织: -* [kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) -* [kube-proxy](/zh/docs/reference/command-line-tools-reference/kube-proxy/) -* [kube-apiserver](/zh/docs/reference/command-line-tools-reference/kube-apiserver/) -* [kube-controller-manager](/zh/docs/reference/command-line-tools-reference/kube-controller-manager/) -* [kube-scheduler](/zh/docs/reference/command-line-tools-reference/kube-scheduler/). +* [kubelet](/zh-cn/docs/reference/command-line-tools-reference/kubelet/) +* [kube-proxy](/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/) +* [kube-apiserver](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/) +* [kube-controller-manager](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) +* [kube-scheduler](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/). 
-*内置的策略 API*,例如[ResourceQuota](/zh/docs/concepts/policy/resource-quotas/)、 -[PodSecurityPolicies](/zh/docs/concepts/security/pod-security-policy/)、 -[NetworkPolicy](/zh/docs/concepts/services-networking/network-policies/) -和基于角色的访问控制([RBAC](/zh/docs/reference/access-authn-authz/rbac/)) +*内置的策略 API*,例如[ResourceQuota](/zh-cn/docs/concepts/policy/resource-quotas/)、 +[PodSecurityPolicies](/zh-cn/docs/concepts/security/pod-security-policy/)、 +[NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/) +和基于角色的访问控制([RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/)) 等等都是内置的 Kubernetes API。 API 通常用于托管的 Kubernetes 服务和受控的 Kubernetes 安装环境中。 这些 API 是声明式的,与 Pod 这类其他 Kubernetes 资源遵从相同的约定, 所以新的集群配置是可复用的,并且可以当作应用程序来管理。 此外,对于稳定版本的 API 而言,它们与其他 Kubernetes API 一样, -采纳的是一种[预定义的支持策略](/zh/docs/reference/using-api/deprecation-policy/)。 +采纳的是一种[预定义的支持策略](/zh-cn/docs/reference/using-api/deprecation-policy/)。 出于以上原因,在条件允许的情况下,基于 API 的方案应该优先于配置文件和参数标志。 1. 用户通常使用 `kubectl` 与 Kubernetes API 交互。 - [kubectl 插件](/zh/docs/tasks/extend-kubectl/kubectl-plugins/)能够扩展 kubectl 程序的行为。 + [kubectl 插件](/zh-cn/docs/tasks/extend-kubectl/kubectl-plugins/)能够扩展 kubectl 程序的行为。 这些插件只会影响到每个用户的本地环境,因此无法用来强制实施整个站点范围的策略。 2. API 服务器处理所有请求。API 服务器中的几种扩展点能够使用户对请求执行身份认证、 @@ -273,7 +273,7 @@ For more about Custom Resources, see the [Custom Resources concept guide](/docs/ 不要使用自定义资源来充当应用、用户或者监控数据的数据存储。 -关于自定义资源的更多信息,可参见[自定义资源概念指南](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。 +关于自定义资源的更多信息,可参见[自定义资源概念指南](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。 ### 身份认证 {#authentication} -[身份认证](/zh/docs/reference/access-authn-authz/authentication/)负责将所有请求中 +[身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)负责将所有请求中 的头部或证书映射到发出该请求的客户端的用户名。 Kubernetes 提供若干种内置的认证方法,以及 -[认证 Webhook](/zh/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) +[认证 Webhook](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) 方法以备内置方法无法满足你的要求。 ### 鉴权 {#authorization} -[鉴权](/zh/docs/reference/access-authn-authz/authorization/) +[鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/) 操作负责确定特定的用户是否可以读、写 API 资源或对其执行其他操作。 此操作仅在整个资源集合的层面进行。 换言之,它不会基于对象的特定字段作出不同的判决。 如果内置的鉴权选项无法满足你的需要,你可以使用 -[鉴权 Webhook](/zh/docs/reference/access-authn-authz/webhook/)来调用用户提供 +[鉴权 Webhook](/zh-cn/docs/reference/access-authn-authz/webhook/)来调用用户提供 的代码,执行定制的鉴权操作。 ### 设备插件 {#device-plugins} -使用[设备插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/), +使用[设备插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/), 节点能够发现新的节点资源(除了内置的类似 CPU 和内存这类资源)。 ### 网络插件 {#network-plugins} -通过节点层面的[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/), +通过节点层面的[网络插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/), 可以支持不同的网络设施。 -* 进一步了解[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) -* 了解[动态准入控制](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/) +* 进一步了解[自定义资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +* 了解[动态准入控制](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/) * 进一步了解基础设施扩展 - * [网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) - * [设备插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) -* 了解 [kubectl 插件](/zh/docs/tasks/extend-kubectl/kubectl-plugins/) -* 了解 [Operator 
模式](/zh/docs/concepts/extend-kubernetes/operator/) + * [网络插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) + * [设备插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) +* 了解 [kubectl 插件](/zh-cn/docs/tasks/extend-kubectl/kubectl-plugins/) +* 了解 [Operator 模式](/zh-cn/docs/concepts/extend-kubernetes/operator/) diff --git a/content/zh/docs/concepts/extend-kubernetes/api-extension/_index.md b/content/zh-cn/docs/concepts/extend-kubernetes/api-extension/_index.md similarity index 100% rename from content/zh/docs/concepts/extend-kubernetes/api-extension/_index.md rename to content/zh-cn/docs/concepts/extend-kubernetes/api-extension/_index.md diff --git a/content/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md similarity index 91% rename from content/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md rename to content/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md index d2a0c6c18844e..52a0c64e7e39e 100644 --- a/content/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md +++ b/content/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -32,7 +32,7 @@ The aggregation layer is different from [Custom Resources](/docs/concepts/extend 或者你自己开发的 API。 聚合层不同于 -[定制资源(Custom Resources)](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。 +[定制资源(Custom Resources)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。 后者的目的是让 {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} 能够认识新的对象类别(Kind)。 @@ -83,10 +83,10 @@ If your extension API server cannot achieve that latency requirement, consider m Alternatively: learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). --> -* 阅读[配置聚合层](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/) 文档, +* 阅读[配置聚合层](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/) 文档, 了解如何在自己的环境中启用聚合器。 -* 接下来,了解[安装扩展 API 服务器](/zh/docs/tasks/extend-kubernetes/setup-extension-api-server/), +* 接下来,了解[安装扩展 API 服务器](/zh-cn/docs/tasks/extend-kubernetes/setup-extension-api-server/), 开始使用聚合层。 * 从 API 参考资料中研究关于 [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/) 的内容。 -或者,学习如何[使用自定义资源定义扩展 Kubernetes API](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。 +或者,学习如何[使用自定义资源定义扩展 Kubernetes API](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。 diff --git a/content/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources.md similarity index 91% rename from content/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources.md rename to content/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index e2021c404341b..f69c13963f309 100644 --- a/content/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -37,9 +37,9 @@ collection of Pod objects. 
## 定制资源 *资源(Resource)* 是 -[Kubernetes API](/zh/docs/concepts/overview/kubernetes-api/) 中的一个端点, +[Kubernetes API](/zh-cn/docs/concepts/overview/kubernetes-api/) 中的一个端点, 其中存储的是某个类别的 -[API 对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/) +[API 对象](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/) 的一个集合。 例如内置的 *pods* 资源包含一组 Pod 对象。 @@ -84,7 +84,7 @@ keep the current state of Kubernetes objects in sync with the desired state. The controller interprets the structured data as a record of the user's desired state, and continually maintains this state. --> -使用[声明式 API](/zh/docs/concepts/overview/kubernetes-api/), +使用[声明式 API](/zh-cn/docs/concepts/overview/kubernetes-api/), 你可以 _声明_ 或者设定你的资源的期望状态,并尝试让 Kubernetes 对象的当前状态 同步到其期望状态。控制器负责将结构化的数据解释为用户所期望状态的记录,并 持续地维护该状态。 @@ -99,7 +99,7 @@ for specific applications into an extension of the Kubernetes API. --> 你可以在一个运行中的集群上部署和更新定制控制器,这类操作与集群的生命周期无关。 定制控制器可以用于任何类别的资源,不过它们与定制资源结合起来时最为有效。 -[Operator 模式](/zh/docs/concepts/extend-kubernetes/operator/)就是将定制资源 +[Operator 模式](/zh-cn/docs/concepts/extend-kubernetes/operator/)就是将定制资源 与定制控制器相结合的。你可以使用定制控制器来将特定于某应用的领域知识组织 起来,以编码的形式构造对 Kubernetes API 的扩展。 @@ -113,7 +113,7 @@ or let your API stand alone. ## 我是否应该向我的 Kubernetes 集群添加定制资源? 在创建新的 API 时,请考虑是 -[将你的 API 与 Kubernetes 集群 API 聚合起来](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) +[将你的 API 与 Kubernetes 集群 API 聚合起来](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) 还是让你的 API 独立运行。 {{< note >}} -请使用 [Secret](/zh/docs/concepts/configuration/secret/) 来保存敏感数据。 +请使用 [Secret](/zh-cn/docs/concepts/configuration/secret/) 来保存敏感数据。 Secret 类似于 configMap,但更为安全。 {{< /note >}} @@ -251,7 +251,7 @@ Kubernetes provides two ways to add custom resources to your cluster: Kubernetes 提供了两种方式供你向集群中添加定制资源: - CRD 相对简单,创建 CRD 可以不必编程。 -- [API 聚合](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) +- [API 聚合](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) 需要编程,但支持对 API 行为进行更多的控制,例如数据如何存储以及在不同 API 版本间如何转换等。 ## CustomResourceDefinitions -[CustomResourceDefinition](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) +[CustomResourceDefinition](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) API 资源允许你定义定制资源。 定义 CRD 对象的操作会使用你所设定的名字和模式定义(Schema)创建一个新的定制资源, Kubernetes API 负责为你的定制资源提供存储和访问服务。 CRD 对象的名称必须是合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 | 特性 | 描述 | CRDs | 聚合 API | | ------- | ----------- | ---- | -------------- | -| 合法性检查 | 帮助用户避免错误,允许你独立于客户端版本演化 API。这些特性对于由很多无法同时更新的客户端的场合。| 可以。大多数验证可以使用 [OpenAPI v3.0 合法性检查](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation) 来设定。其他合法性检查操作可以通过添加[合法性检查 Webhook](/zh/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook-alpha-in-1-8-beta-in-1-9)来实现。 | 可以,可执行任何合法性检查。| -| 默认值设置 | 同上 | 可以。可通过 [OpenAPI v3.0 合法性检查](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#defaulting)的 `default` 关键词(自 1.17 正式发布)或[更改性(Mutating)Webhook](/zh/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)来实现(不过从 etcd 中读取老的对象时不会执行这些 Webhook)。 | 可以。 | -| 多版本支持 | 允许通过两个 API 版本同时提供同一对象。可帮助简化类似字段更名这类 API 操作。如果你能控制客户端版本,这一特性将不再重要。 | 
[可以](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning)。 | 可以。 | +| 合法性检查 | 帮助用户避免错误,允许你独立于客户端版本演化 API。这些特性对于由很多无法同时更新的客户端的场合。| 可以。大多数验证可以使用 [OpenAPI v3.0 合法性检查](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation) 来设定。其他合法性检查操作可以通过添加[合法性检查 Webhook](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook-alpha-in-1-8-beta-in-1-9)来实现。 | 可以,可执行任何合法性检查。| +| 默认值设置 | 同上 | 可以。可通过 [OpenAPI v3.0 合法性检查](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#defaulting)的 `default` 关键词(自 1.17 正式发布)或[更改性(Mutating)Webhook](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)来实现(不过从 etcd 中读取老的对象时不会执行这些 Webhook)。 | 可以。 | +| 多版本支持 | 允许通过两个 API 版本同时提供同一对象。可帮助简化类似字段更名这类 API 操作。如果你能控制客户端版本,这一特性将不再重要。 | [可以](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning)。 | 可以。 | | 定制存储 | 支持使用具有不同性能模式的存储(例如,要使用时间序列数据库而不是键值存储),或者因安全性原因对存储进行隔离(例如对敏感信息执行加密)。 | 不可以。 | 可以。 | -| 定制业务逻辑 | 在创建、读取、更新或删除对象时,执行任意的检查或操作。 | 可以。要使用 [Webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)。 | 可以。 | -| 支持 scale 子资源 | 允许 HorizontalPodAutoscaler 和 PodDisruptionBudget 这类子系统与你的新资源交互。 | [可以](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource)。 | 可以。 | -| 支持 status 子资源 | 允许在用户写入 spec 部分而控制器写入 status 部分时执行细粒度的访问控制。允许在对定制资源的数据进行更改时增加对象的代际(Generation);这需要资源对 spec 和 status 部分有明确划分。| [可以](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#status-subresource)。 | 可以。 | +| 定制业务逻辑 | 在创建、读取、更新或删除对象时,执行任意的检查或操作。 | 可以。要使用 [Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)。 | 可以。 | +| 支持 scale 子资源 | 允许 HorizontalPodAutoscaler 和 PodDisruptionBudget 这类子系统与你的新资源交互。 | [可以](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource)。 | 可以。 | +| 支持 status 子资源 | 允许在用户写入 spec 部分而控制器写入 status 部分时执行细粒度的访问控制。允许在对定制资源的数据进行更改时增加对象的代际(Generation);这需要资源对 spec 和 status 部分有明确划分。| [可以](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#status-subresource)。 | 可以。 | | 其他子资源 | 添加 CRUD 之外的操作,例如 "logs" 或 "exec"。 | 不可以。 | 可以。 | -| strategic-merge-patch | 新的端点要支持标记了 `Content-Type: application/strategic-merge-patch+json` 的 PATCH 操作。对于更新既可在本地更改也可在服务器端更改的对象而言是有用的。要了解更多信息,可参见[使用 `kubectl patch` 来更新 API 对象](/zh/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)。 | 不可以。 | 可以。 | +| strategic-merge-patch | 新的端点要支持标记了 `Content-Type: application/strategic-merge-patch+json` 的 PATCH 操作。对于更新既可在本地更改也可在服务器端更改的对象而言是有用的。要了解更多信息,可参见[使用 `kubectl patch` 来更新 API 对象](/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)。 | 不可以。 | 可以。 | | 支持协议缓冲区 | 新的资源要支持想要使用协议缓冲区(Protocol Buffer)的客户端。 | 不可以。 | 可以。 | -| OpenAPI Schema | 是否存在新资源类别的 OpenAPI(Swagger)Schema 可供动态从服务器上读取?是否存在机制确保只能设置被允许的字段以避免用户犯字段拼写错误?是否实施了字段类型检查(换言之,不允许在 `string` 字段设置 `int` 值)? | 可以,依据 [OpenAPI v3.0 合法性检查](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation) 模式(1.16 中进入正式发布状态)。 | 可以。| +| OpenAPI Schema | 是否存在新资源类别的 OpenAPI(Swagger)Schema 可供动态从服务器上读取?是否存在机制确保只能设置被允许的字段以避免用户犯字段拼写错误?是否实施了字段类型检查(换言之,不允许在 `string` 字段设置 `int` 值)? 
| 可以,依据 [OpenAPI v3.0 合法性检查](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation) 模式(1.16 中进入正式发布状态)。 | 可以。| ## 访问定制资源 -Kubernetes [客户端库](/zh/docs/reference/using-api/client-libraries/)可用来访问定制资源。 +Kubernetes [客户端库](/zh-cn/docs/reference/using-api/client-libraries/)可用来访问定制资源。 并非所有客户端库都支持定制资源。_Go_ 和 _Python_ 客户端库是支持的。 当你添加了新的定制资源后,可以用如下方式之一访问它们: @@ -553,6 +553,6 @@ Kubernetes [客户端库](/zh/docs/reference/using-api/client-libraries/)可用 * Learn how to [Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/). * Learn how to [Extend the Kubernetes API with CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). --> -* 了解如何[使用聚合层扩展 Kubernetes API](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) -* 了解如何[使用 CustomResourceDefinition 来扩展 Kubernetes API](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) +* 了解如何[使用聚合层扩展 Kubernetes API](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) +* 了解如何[使用 CustomResourceDefinition 来扩展 Kubernetes API](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) diff --git a/content/zh/docs/concepts/extend-kubernetes/compute-storage-net/_index.md b/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/_index.md similarity index 100% rename from content/zh/docs/concepts/extend-kubernetes/compute-storage-net/_index.md rename to content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/_index.md diff --git a/content/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md similarity index 96% rename from content/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md rename to content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index c1f94153e0478..90a4494d0fea0 100644 --- a/content/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -67,7 +67,7 @@ to advertise that the node has 2 "Foo" devices installed and available. * 设备插件的 Unix 套接字。 * 设备插件的 API 版本。 * `ResourceName` 是需要公布的。这里 `ResourceName` 需要遵循 - [扩展资源命名方案](/zh/docs/concepts/configuration/manage-resources-containers/#extended-resources), + [扩展资源命名方案](/zh-cn/docs/concepts/configuration/manage-resources-containers/#extended-resources), 类似于 `vendor-domain/resourcetype`。(比如 NVIDIA GPU 就被公布为 `nvidia.com/gpu`。) 成功注册后,设备插件就向 kubelet 发送它所管理的设备列表,然后 kubelet @@ -86,7 +86,7 @@ other resources, with the following differences: * Devices cannot be shared between containers. --> 然后,用户可以请求设备作为 Pod 规范的一部分, -参见[Container](/zh/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)。 +参见[Container](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)。 请求扩展资源类似于管理请求和限制的方式, 其他资源,有以下区别: @@ -441,7 +441,7 @@ it does (for example: hotplug/hotunplug, device health changes), client is expec However, calling `GetAllocatableResources` endpoint is not sufficient in case of cpu and/or memory update and Kubelet needs to be restarted to reflect the correct resource capacity and allocatable. 
--> -`GetAllocatableResources` 应该仅被用于评估一个节点上的[可分配的](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) +`GetAllocatableResources` 应该仅被用于评估一个节点上的[可分配的](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) 资源。如果目标是评估空闲/未分配的资源,此调用应该与 List() 端点一起使用。 除非暴露给 kubelet 的底层资源发生变化 否则 `GetAllocatableResources` 得到的结果将保持不变。 这种情况很少发生,但当发生时(例如:热插拔,设备健康状况改变),客户端应该调用 `GetAlloctableResources` 端点。 @@ -471,7 +471,7 @@ Preceding Kubernetes v1.23, to enable this feature `kubelet` must be started wit --> 从 Kubernetes v1.23 开始,`GetAllocatableResources` 被默认启用。 你可以通过关闭 `KubeletPodResourcesGetAllocatable` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) 来禁用。 +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 来禁用。 在 Kubernetes v1.23 之前,要启用这一功能,`kubelet` 必须用以下标志启动: @@ -484,7 +484,7 @@ plugins report [when they register themselves to the kubelet](/docs/concepts/ext --> `ContainerDevices` 会向外提供各个设备所隶属的 NUMA 单元这类拓扑信息。 NUMA 单元通过一个整数 ID 来标识,其取值与设备插件所报告的一致。 -[设备插件注册到 kubelet 时](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) +[设备插件注册到 kubelet 时](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) 会报告这类信息。 -* 查看[调度 GPU 资源](/zh/docs/tasks/manage-gpus/scheduling-gpus/) 来学习使用设备插件 -* 查看在上如何[公布节点上的扩展资源](/zh/docs/tasks/administer-cluster/extended-resource-node/) -* 学习[拓扑管理器](/zh/docs/tasks/administer-cluster/topology-manager/) -* 阅读如何在 Kubernetes 中使用 [TLS Ingress 的硬件加速](/zh/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) +* 查看[调度 GPU 资源](/zh-cn/docs/tasks/manage-gpus/scheduling-gpus/) 来学习使用设备插件 +* 查看在上如何[公布节点上的扩展资源](/zh-cn/docs/tasks/administer-cluster/extended-resource-node/) +* 学习[拓扑管理器](/zh-cn/docs/tasks/administer-cluster/topology-manager/) +* 阅读如何在 Kubernetes 中使用 [TLS Ingress 的硬件加速](/zh-cn/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) diff --git a/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md new file mode 100644 index 0000000000000..fea8c05193c57 --- /dev/null +++ b/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -0,0 +1,244 @@ +--- +title: 网络插件 +content_type: concept +weight: 10 +--- + + + + + + +Kubernetes {{< skew currentVersion >}} 支持[容器网络接口](https://github.com/containernetworking/cni) (CNI) 集群网络插件。 +你必须使用和你的集群相兼容并且满足你的需求的 CNI 插件。 +在更广泛的 Kubernetes 生态系统中你可以使用不同的插件(开源和闭源)。 + + +要实现 [Kubernetes 网络模型](/zh-cn/docs/concepts/services-networking/#the-kubernetes-network-model),你需要一个 CNI 插件。 + + +你必须使用与 [v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) +或更高版本的 CNI 规范相符合的 CNI 插件。 +Kubernetes 推荐使用一个兼容 [v1.0.0](https://github.com/containernetworking/cni/blob/spec-v1.0.0/SPEC.md) +CNI 规范的插件(插件可以兼容多个规范版本)。 + + + + +## 安装 {#installation} + +在网络语境中,容器运行时(Container Runtime)是在节点上的守护进程, +被配置用来为 kubelet 提供 CRI 服务。具体而言,容器运行时必须配置为加载所需的 +CNI 插件,从而实现 Kubernetes 网络模型。 + +{{< note >}} + +在 Kubernetes 1.24 之前,CNI 插件也可以由 kubelet 使用命令行参数 `cni-bin-dir` +和 `network-plugin` 管理。Kubernetes 1.24 移除了这些命令行参数, +CNI 的管理不再是 kubelet 的工作。 + + +如果你在移除 dockershim 之后遇到问题,请参阅[排查 CNI 插件相关的错误](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/)。 +{{< /note >}} + + +要了解容器运行时如何管理 CNI 
插件的具体信息,可参见对应容器运行时的文档,例如: + +- [containerd](https://github.com/containerd/containerd/blob/main/script/setup/install-cni) +- [CRI-O](https://github.com/cri-o/cri-o/blob/main/contrib/cni/README.md) + + +要了解如何安装和管理 CNI 插件的具体信息,可参阅对应的插件或 +[网络驱动(Networking Provider)](/zh-cn/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model) +的文档。 + + +## 网络插件要求 {#network-plugin-requirements} + +对于插件开发人员以及时常会构建并部署 Kubernetes 的用户而言, +插件可能也需要特定的配置来支持 kube-proxy。 +iptables 代理依赖于 iptables,插件可能需要确保 iptables 能够监控容器的网络通信。 +例如,如果插件将容器连接到 Linux 网桥,插件必须将 `net/bridge/bridge-nf-call-iptables` +sysctl 参数设置为 `1`,以确保 iptables 代理正常工作。 +如果插件不使用 Linux 网桥(而是类似于 Open vSwitch 或者其它一些机制), +它应该确保为代理对容器通信执行正确的路由。 + + + +默认情况下,如果未指定 kubelet 网络插件,则使用 `noop` 插件, +该插件设置 `net/bridge/bridge-nf-call-iptables=1`,以确保简单的配置 +(如带网桥的 Docker )与 iptables 代理正常工作。 + + +### 本地回路 CNI {#loopback-cni} + +除了安装到节点上用于实现 Kubernetes 网络模型的 CNI 插件外,Kubernetes +还需要容器运行时提供一个本地回路接口 `lo`,用于各个沙箱(Pod 沙箱、虚机沙箱……)。 +实现本地回路接口的工作可以通过复用 +[CNI 本地回路插件](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go)来实现, +也可以通过开发自己的代码来实现 +(参阅 [CRI-O 中的示例](https://github.com/cri-o/ocicni/blob/release-1.24/pkg/ocicni/util_linux.go#L91))。 + + +### 支持 hostPort {#support-hostport} + +CNI 网络插件支持 `hostPort`。 你可以使用官方 +[portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap) +插件,它由 CNI 插件团队提供,或者使用你自己的带有 portMapping 功能的插件。 + +如果你想要启动 `hostPort` 支持,则必须在 `cni-conf-dir` 指定 `portMappings capability`。 +例如: + +```json +{ + "name": "k8s-pod-network", + "cniVersion": "0.3.0", + "plugins": [ + { + "type": "calico", + "log_level": "info", + "datastore_type": "kubernetes", + "nodename": "127.0.0.1", + "ipam": { + "type": "host-local", + "subnet": "usePodCidr" + }, + "policy": { + "type": "k8s" + }, + "kubernetes": { + "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" + } + }, + { + "type": "portmap", + "capabilities": {"portMappings": true} + } + ] +} +``` + + +### 支持流量整形 {#support-traffic-shaping} + +**实验功能** + +CNI 网络插件还支持 pod 入口和出口流量整形。 +你可以使用 CNI 插件团队提供的 +[bandwidth](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth) +插件,也可以使用你自己的具有带宽控制功能的插件。 + +如果你想要启用流量整形支持,你必须将 `bandwidth` 插件添加到 CNI 配置文件 +(默认是 `/etc/cni/net.d`)并保证该可执行文件包含在你的 CNI 的 bin +文件夹内 (默认为 `/opt/cni/bin`)。 + +```json +{ + "name": "k8s-pod-network", + "cniVersion": "0.3.0", + "plugins": [ + { + "type": "calico", + "log_level": "info", + "datastore_type": "kubernetes", + "nodename": "127.0.0.1", + "ipam": { + "type": "host-local", + "subnet": "usePodCidr" + }, + "policy": { + "type": "k8s" + }, + "kubernetes": { + "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" + } + }, + { + "type": "bandwidth", + "capabilities": {"bandwidth": true} + } + ] +} +``` + +现在,你可以将 `kubernetes.io/ingress-bandwidth` 和 `kubernetes.io/egress-bandwidth` +注解添加到 pod 中。例如: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + annotations: + kubernetes.io/ingress-bandwidth: 1M + kubernetes.io/egress-bandwidth: 1M +... 
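+# 说明：上面两个注解分别用于限制该 Pod 的入站与出站带宽，取值 1M 仅为示例；
+# 省略号表示 Pod 规约的其余字段与普通 Pod 相同。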
+``` + +## {{% heading "whatsnext" %}} + + + diff --git a/content/zh/docs/concepts/extend-kubernetes/operator.md b/content/zh-cn/docs/concepts/extend-kubernetes/operator.md similarity index 95% rename from content/zh/docs/concepts/extend-kubernetes/operator.md rename to content/zh-cn/docs/concepts/extend-kubernetes/operator.md index c9734f47988e0..54035560378db 100644 --- a/content/zh/docs/concepts/extend-kubernetes/operator.md +++ b/content/zh-cn/docs/concepts/extend-kubernetes/operator.md @@ -19,9 +19,9 @@ to manage applications and their components. Operators follow Kubernetes principles, notably the [control loop](/docs/concepts/architecture/controller/). --> Operator 是 Kubernetes 的扩展软件,它利用 -[定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +[定制资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 管理应用及其组件。 -Operator 遵循 Kubernetes 的理念,特别是在[控制器](/zh/docs/concepts/architecture/controller/) +Operator 遵循 Kubernetes 的理念,特别是在[控制器](/zh-cn/docs/concepts/architecture/controller/) 方面。 @@ -67,7 +67,7 @@ Kubernetes 的 {{< glossary_tooltip text="Operator 模式" term_id="operator-pat Kubernetes 自身代码的情况下,通过为一个或多个自定义资源关联{{< glossary_tooltip text="控制器" term_id="controller" >}} 来扩展集群的能力。 Operator 是 Kubernetes API 的客户端,充当 -[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +[自定义资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 的控制器。 * 阅读 {{< glossary_tooltip text="CNCF" term_id="cncf" >}} [Operator 白皮书](https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md)。 -* 详细了解 [定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +* 详细了解 [定制资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * 在 [OperatorHub.io](https://operatorhub.io/) 上找到现成的、适合你的 Operator * [发布](https://operatorhub.io/)你的 Operator,让别人也可以使用 * 阅读 [CoreOS 原始文章](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html),它介绍了 Operator 模式(这是一个存档版本的原始文章)。 diff --git a/content/zh/docs/concepts/extend-kubernetes/service-catalog.md b/content/zh-cn/docs/concepts/extend-kubernetes/service-catalog.md similarity index 98% rename from content/zh/docs/concepts/extend-kubernetes/service-catalog.md rename to content/zh-cn/docs/concepts/extend-kubernetes/service-catalog.md index fe5437c1bad79..cb4eb28e40b83 100644 --- a/content/zh/docs/concepts/extend-kubernetes/service-catalog.md +++ b/content/zh-cn/docs/concepts/extend-kubernetes/service-catalog.md @@ -66,7 +66,7 @@ It is implemented using a [CRDs-based](/docs/concepts/extend-kubernetes/api-exte 与服务代理进行通信,并作为 Kubernetes API 服务器的中介,以便协商启动部署和获取 应用程序使用托管服务时必须的凭据。 -它是[基于 CRDs](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) +它是[基于 CRDs](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) 架构实现的。 ![服务目录架构](/images/docs/service-catalog-architecture.svg) @@ -436,9 +436,9 @@ The following example describes how to map secret values into application enviro * Explore the [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog) project. 
--> * 如果你熟悉 {{< glossary_tooltip text="Helm Charts" term_id="helm-chart" >}}, - 可以[使用 Helm 安装服务目录](/zh/docs/tasks/service-catalog/install-service-catalog-using-helm/) + 可以[使用 Helm 安装服务目录](/zh-cn/docs/tasks/service-catalog/install-service-catalog-using-helm/) 到 Kubernetes 集群中。或者,你可以 - [使用 SC 工具安装服务目录](/zh/docs/tasks/service-catalog/install-service-catalog-using-sc/)。 + [使用 SC 工具安装服务目录](/zh-cn/docs/tasks/service-catalog/install-service-catalog-using-sc/)。 * 查看[服务代理示例](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers) * 浏览 [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog) 项目 diff --git a/content/zh/docs/concepts/overview/_index.md b/content/zh-cn/docs/concepts/overview/_index.md similarity index 100% rename from content/zh/docs/concepts/overview/_index.md rename to content/zh-cn/docs/concepts/overview/_index.md diff --git a/content/zh/docs/concepts/overview/components.md b/content/zh-cn/docs/concepts/overview/components.md similarity index 74% rename from content/zh/docs/concepts/overview/components.md rename to content/zh-cn/docs/concepts/overview/components.md index f67ee07b6e31d..e7f844aaa6527 100644 --- a/content/zh/docs/concepts/overview/components.md +++ b/content/zh-cn/docs/concepts/overview/components.md @@ -33,10 +33,10 @@ a complete and working Kubernetes cluster. --> -当你部署完 Kubernetes, 即拥有了一个完整的集群。 +当你部署完 Kubernetes,便拥有了一个完整的集群。 {{< glossary_definition term_id="cluster" length="all" prepend="一个 Kubernetes">}} -本文档概述了交付正常运行的 Kubernetes 集群所需的各种组件。 +本文档概述了一个正常运行的 Kubernetes 集群所需的各种组件。 {{< figure src="/images/docs/components-of-kubernetes.svg" alt="Kubernetes 的组件" caption="Kubernetes 集群的组件" class="diagram-large" >}} @@ -49,8 +49,9 @@ The control plane's components make global decisions about the cluster (for exam --> ## 控制平面组件(Control Plane Components) {#control-plane-components} -控制平面的组件对集群做出全局决策(比如调度),以及检测和响应集群事件(例如,当不满足部署的 -`replicas` 字段时,启动新的 {{< glossary_tooltip text="pod" term_id="pod">}})。 +控制平面组件会为集群做出全局决策,比如资源的调度。 +以及检测和响应集群事件,例如当不满足部署的 `replicas` 字段时, +要启动新的 {{< glossary_tooltip text="pod" term_id="pod">}})。 -这些控制器包括: +这些控制器包括: -* 节点控制器(Node Controller): 负责在节点出现故障时进行通知和响应 -* 任务控制器(Job controller): 监测代表一次性任务的 Job 对象,然后创建 Pods 来运行这些任务直至完成 -* 端点控制器(Endpoints Controller): 填充端点(Endpoints)对象(即加入 Service 与 Pod) -* 服务帐户和令牌控制器(Service Account & Token Controllers): 为新的命名空间创建默认帐户和 API 访问令牌 +* 节点控制器(Node Controller):负责在节点出现故障时进行通知和响应 +* 任务控制器(Job Controller):监测代表一次性任务的 Job 对象,然后创建 Pods 来运行这些任务直至完成 +* 端点控制器(Endpoints Controller):填充端点(Endpoints)对象(即加入 Service 与 Pod) +* 服务帐户和令牌控制器(Service Account & Token Controllers):为新的命名空间创建默认帐户和 API 访问令牌 ## Node 组件 {#node-components} -节点组件在每个节点上运行,维护运行的 Pod 并提供 Kubernetes 运行环境。 +节点组件会在每个节点上运行,负责维护运行的 Pod 并提供 Kubernetes 运行环境。 ### kubelet @@ -167,7 +169,7 @@ for addons belong within the `kube-system` namespace. 
## 插件(Addons) {#addons} 插件使用 Kubernetes 资源({{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}、 -{{< glossary_tooltip text="Deployment" term_id="deployment" >}}等)实现集群功能。 +{{< glossary_tooltip text="Deployment" term_id="deployment" >}} 等)实现集群功能。 因为这些插件提供集群级别的功能,插件中命名空间域的资源属于 `kube-system` 命名空间。 下面描述众多插件中的几种。有关可用插件的完整列表,请参见 -[插件(Addons)](/zh/docs/concepts/cluster-administration/addons/)。 +[插件(Addons)](/zh-cn/docs/concepts/cluster-administration/addons/)。 -### Web 界面(仪表盘) +### Web 界面(仪表盘) {#web-ui-dashboard} -[Dashboard](/zh/docs/tasks/access-application-cluster/web-ui-dashboard/) +[Dashboard](/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard/) 是 Kubernetes 集群的通用的、基于 Web 的用户界面。 -它使用户可以管理集群中运行的应用程序以及集群本身并进行故障排除。 +它使用户可以管理集群中运行的应用程序以及集群本身, +并进行故障排除。 -### 容器资源监控 +### 容器资源监控 {#container-resource-monitoring} -[容器资源监控](/zh/docs/tasks/debug/debug-cluster/resource-usage-monitoring/) -将关于容器的一些常见的时间序列度量值保存到一个集中的数据库中,并提供用于浏览这些数据的界面。 +[容器资源监控](/zh-cn/docs/tasks/debug/debug-cluster/resource-usage-monitoring/) +将关于容器的一些常见的时间序列度量值保存到一个集中的数据库中, +并提供浏览这些数据的界面。 -### 集群层面日志 +### 集群层面日志 {#cluster-level-logging} -[集群层面日志](/zh/docs/concepts/cluster-administration/logging/) 机制负责将容器的日志数据 -保存到一个集中的日志存储中,该存储能够提供搜索和浏览接口。 +[集群层面日志](/zh-cn/docs/concepts/cluster-administration/logging/) +机制负责将容器的日志数据保存到一个集中的日志存储中, +这种集中日志存储提供搜索和浏览接口。 ## {{% heading "whatsnext" %}} @@ -237,7 +242,7 @@ saving container logs to a central log store with search/browsing interface. * Learn about [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/) * Read etcd's official [documentation](https://etcd.io/docs/) --> -* 进一步了解[节点](/zh/docs/concepts/architecture/nodes/) -* 进一步了解[控制器](/zh/docs/concepts/architecture/controller/) -* 进一步了解 [kube-scheduler](/zh/docs/concepts/scheduling-eviction/kube-scheduler/) +* 进一步了解[节点](/zh-cn/docs/concepts/architecture/nodes/) +* 进一步了解[控制器](/zh-cn/docs/concepts/architecture/controller/) +* 进一步了解 [kube-scheduler](/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/) * 阅读 etcd 官方[文档](https://etcd.io/docs/) diff --git a/content/zh/docs/concepts/overview/kubernetes-api.md b/content/zh-cn/docs/concepts/overview/kubernetes-api.md similarity index 76% rename from content/zh/docs/concepts/overview/kubernetes-api.md rename to content/zh-cn/docs/concepts/overview/kubernetes-api.md index 61cf1a2c2ef75..ce6a1575d7820 100644 --- a/content/zh/docs/concepts/overview/kubernetes-api.md +++ b/content/zh-cn/docs/concepts/overview/kubernetes-api.md @@ -35,8 +35,8 @@ API 服务器负责提供 HTTP API,以供用户、集群中的不同部分和 Kubernetes API 使你可以查询和操纵 Kubernetes API 中对象(例如:Pod、Namespace、ConfigMap 和 Event)的状态。 -大部分操作都可以通过 [kubectl](/zh/docs/reference/kubectl/) 命令行接口或 -类似 [kubeadm](/zh/docs/reference/setup-tools/kubeadm/) 这类命令行工具来执行, +大部分操作都可以通过 [kubectl](/zh-cn/docs/reference/kubectl/) 命令行接口或 +类似 [kubeadm](/zh-cn/docs/reference/setup-tools/kubeadm/) 这类命令行工具来执行, 这些工具在背后也是调用 API。不过,你也可以使用 REST 调用来访问这些 API。 如果你正在编写程序来访问 Kubernetes API,可以考虑使用 -[客户端库](/zh/docs/reference/using-api/client-libraries/)之一。 +[客户端库](/zh-cn/docs/reference/using-api/client-libraries/)之一。 @@ -145,29 +145,63 @@ Kubernetes 为 API 实现了一种基于 Protobuf 的序列化格式,主要用 ### OpenAPI V3 -{{< feature-state state="alpha" for_k8s_version="v1.23" >}} +{{< feature-state state="beta" for_k8s_version="v1.24" >}} -Kubernetes v1.23 提供将其 API 以 OpenAPI v3 形式发布的初始支持;这一功能特性处于 Alpha -状态,默认被禁用。 -你可以通过为 kube-apiserver 组件启用 `OpenAPIV3` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)来启用此 -Alpha 特性。 +Kubernetes {{< param "version" >}} 提供将其 API 以 OpenAPI v3 形式发布的 beta 
支持; +这一功能特性处于 beta 状态,默认被开启。 +你可以通过为 kube-apiserver 组件关闭 `OpenAPIV3` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)来禁用此 beta 特性。 +发现端点 `/openapi/v3` 被提供用来查看可用的所有组、版本列表。 +此列表仅返回 JSON。这些组、版本以下面的格式提供: +```json +{ + "paths": { + ... + "api/v1": { + "serverRelativeURL": "/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF" + }, + "apis/admissionregistration.k8s.io/v1": { + "serverRelativeURL": "/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597" + }, + ... +} +``` + + +为了改进客户端缓存,相对的 URL 会指向不可变的 OpenAPI 描述。 +为了此目的,API 服务器也会设置正确的 HTTP 缓存标头 +(`Expires` 为未来 1 年,和 `Cache-Control` 为 `immutable`)。 +当一个过时的 URL 被使用时,API 服务器会返回一个指向最新 URL 的重定向。 + + -特性被启用时,Kubernetes API 服务器会在端点 `/openapi/v3/apis//` -提供按 Kubernetes 组版本聚合的 OpenAPI v3 规范。 +Kubernetes API 服务器会在端点 `/openapi/v3/apis//?hash=` +发布一个 Kubernetes 组版本的 OpenAPI v3 规范。 + 请参阅下表了解可接受的请求头部。 @@ -201,13 +235,6 @@ table below for accepted request headers.
        - -发现端点 `/openapi/v3` 被提供用来查看可用的所有组、版本列表。 -此列表仅返回 JSON。 - 一般而言,新的 API 资源和新的资源字段可以被频繁地添加进来。 删除资源或者字段则要遵从 -[API 废弃策略](/zh/docs/reference/using-api/deprecation-policy/)。 +[API 废弃策略](/zh-cn/docs/reference/using-api/deprecation-policy/)。 关于 API 版本分级的定义细节,请参阅 -[API 版本参考](/zh/docs/reference/using-api/#api-versioning)页面。 +[API 版本参考](/zh-cn/docs/reference/using-api/#api-versioning)页面。 -1. 你可以使用[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +1. 你可以使用[自定义资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 来以声明式方式定义 API 服务器如何提供你所选择的资源 API。 1. 你也可以选择实现自己的 - [聚合层](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) + [聚合层](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) 来扩展 Kubernetes API。 ## {{% heading "whatsnext" %}} @@ -296,11 +323,11 @@ The Kubernetes API can be extended in one of two ways: [API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme). --> - 了解如何通过添加你自己的 - [CustomResourceDefinition](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) + [CustomResourceDefinition](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) 来扩展 Kubernetes API。 -- [控制 Kubernetes API 访问](/zh/docs/concepts/security/controlling-access/)页面描述了集群如何针对 +- [控制 Kubernetes API 访问](/zh-cn/docs/concepts/security/controlling-access/)页面描述了集群如何针对 API 访问管理身份认证和鉴权。 -- 通过阅读 [API 参考](/zh/docs/reference/kubernetes-api/)了解 API 端点、资源类型以及示例。 +- 通过阅读 [API 参考](/zh-cn/docs/reference/kubernetes-api/)了解 API 端点、资源类型以及示例。 - 阅读 [API 变更(英文)](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme) 以了解什么是兼容性的变更以及如何变更 API。 diff --git a/content/zh/docs/concepts/overview/what-is-kubernetes.md b/content/zh-cn/docs/concepts/overview/what-is-kubernetes.md similarity index 71% rename from content/zh/docs/concepts/overview/what-is-kubernetes.md rename to content/zh-cn/docs/concepts/overview/what-is-kubernetes.md index 9d2f772089de5..25feebb7fcb81 100644 --- a/content/zh/docs/concepts/overview/what-is-kubernetes.md +++ b/content/zh-cn/docs/concepts/overview/what-is-kubernetes.md @@ -2,7 +2,7 @@ title: Kubernetes 是什么? content_type: concept description: > - Kubernetes 是一个可移植的,可扩展的开源平台,用于管理容器化的工作负载和服务,方便了声明式配置和自动化。它拥有一个庞大且快速增长的生态系统。Kubernetes 的服务,支持和工具广泛可用。 + Kubernetes 是一个可移植、可扩展的开源平台,用于管理容器化的工作负载和服务,方便进行声明式配置和自动化。Kubernetes 拥有一个庞大且快速增长的生态系统,其服务、支持和工具的使用范围广泛。 weight: 10 card: name: concepts @@ -31,24 +31,24 @@ This page is an overview of Kubernetes. 
-Kubernetes 是一个可移植的、可扩展的开源平台,用于管理容器化的工作负载和服务,可促进声明式配置和自动化。 -Kubernetes 拥有一个庞大且快速增长的生态系统。Kubernetes 的服务、支持和工具广泛可用。 +Kubernetes 是一个可移植、可扩展的开源平台,用于管理容器化的工作负载和服务,可促进声明式配置和自动化。 +Kubernetes 拥有一个庞大且快速增长的生态,其服务、支持和工具的使用范围相当广泛。 **Kubernetes** 这个名字源于希腊语,意为“舵手”或“飞行员”。k8s 这个缩写是因为 k 和 s 之间有八个字符的关系。 -Google 在 2014 年开源了 Kubernetes 项目。Kubernetes 建立在 -[Google 在大规模运行生产工作负载方面拥有十几年的经验](https://research.google/pubs/pub43438) -的基础上,结合了社区中最好的想法和实践。 +Google 在 2014 年开源了 Kubernetes 项目。 +Kubernetes 建立在[Google 大规模运行生产工作负载十几年经验](https://research.google/pubs/pub43438)的基础上, +结合了社区中最优秀的想法和实践。 -## 时光回溯 +## 时光回溯 {#going-back-in-time} -让我们回顾一下为什么 Kubernetes 如此有用。 +让我们回顾一下为何 Kubernetes 能够裨益四方。 **传统部署时代:** -早期,各个组织机构在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。 -例如,如果在物理服务器上运行多个应用程序,则可能会出现一个应用程序占用大部分资源的情况, -结果可能导致其他应用程序的性能下降。 -一种解决方案是在不同的物理服务器上运行每个应用程序,但是由于资源利用不足而无法扩展, -并且维护许多物理服务器的成本很高。 +早期,各机构是在物理服务器上运行应用程序。 +由于无法限制在物理服务器中运行的应用程序资源使用,因此会导致资源分配问题。 +例如,如果在物理服务器上运行多个应用程序, +则可能会出现一个应用程序占用大部分资源的情况,而导致其他应用程序的性能下降。 +一种解决方案是将每个应用程序都运行在不同的物理服务器上, +但是当某个应用程式资源利用率不高时,剩余资源无法被分配给其他应用程式, +而且维护许多物理服务器的成本很高。 **虚拟化部署时代:** -作为解决方案,引入了虚拟化。虚拟化技术允许你在单个物理服务器的 CPU 上运行多个虚拟机(VM)。 -虚拟化允许应用程序在 VM 之间隔离,并提供一定程度的安全,因为一个应用程序的信息 -不能被另一应用程序随意访问。 +因此,虚拟化技术被引入了。虚拟化技术允许你在单个物理服务器的 CPU 上运行多台虚拟机(VM)。 +虚拟化能使应用程序在不同 VM 之间被彼此隔离,且能提供一定程度的安全性, +因为一个应用程序的信息不能被另一应用程序随意访问。 -虚拟化技术能够更好地利用物理服务器上的资源,并且因为可轻松地添加或更新应用程序 -而可以实现更好的可伸缩性,降低硬件成本等等。 +虚拟化技术能够更好地利用物理服务器的资源,并且因为可轻松地添加或更新应用程序, +而因此可以具有更高的可伸缩性,以及降低硬件成本等等的好处。 -每个 VM 是一台完整的计算机,在虚拟化硬件之上运行所有组件,包括其自己的操作系统。 +每个 VM 是一台完整的计算机,在虚拟化硬件之上运行所有组件,包括其自己的操作系统(OS)。 **容器部署时代:** -容器类似于 VM,但是它们具有被放宽的隔离属性,可以在应用程序之间共享操作系统(OS)。 -因此,容器被认为是轻量级的。容器与 VM 类似,具有自己的文件系统、CPU、内存、进程空间等。 +容器类似于 VM,但是更宽松的隔离特性,使容器之间可以共享操作系统(OS)。 +因此,容器比起 VM 被认为是更轻量级的。且与 VM 类似,每个容器都具有自己的文件系统、CPU、内存、进程空间等。 由于它们与基础架构分离,因此可以跨云和 OS 发行版本进行移植。 * 敏捷应用程序的创建和部署:与使用 VM 镜像相比,提高了容器镜像创建的简便性和效率。 -* 持续开发、集成和部署:通过快速简单的回滚(由于镜像不可变性),支持可靠且频繁的 - 容器镜像构建和部署。 -* 关注开发与运维的分离:在构建/发布时而不是在部署时创建应用程序容器镜像, +* 持续开发、集成和部署:通过快速简单的回滚(由于镜像不可变性), + 提供可靠且频繁的容器镜像构建和部署。 +* 关注开发与运维的分离:在构建、发布时创建应用程序容器镜像,而不是在部署时, 从而将应用程序与基础架构分离。 -* 可观察性:不仅可以显示操作系统级别的信息和指标,还可以显示应用程序的运行状况和其他指标信号。 -* 跨开发、测试和生产的环境一致性:在便携式计算机上与在云中相同地运行。 +* 可观察性:不仅可以显示 OS 级别的信息和指标,还可以显示应用程序的运行状况和其他指标信号。 +* 跨开发、测试和生产的环境一致性:在笔记本计算机上也可以和在云中运行一样的应用程序。 * 跨云和操作系统发行版本的可移植性:可在 Ubuntu、RHEL、CoreOS、本地、 Google Kubernetes Engine 和其他任何地方运行。 -* 以应用程序为中心的管理:提高抽象级别,从在虚拟硬件上运行 OS 到使用逻辑资源在 - OS 上运行应用程序。 +* 以应用程序为中心的管理:提高抽象级别,从在虚拟硬件上运行 OS 到使用逻辑资源在 OS 上运行应用程序。 * 松散耦合、分布式、弹性、解放的微服务:应用程序被分解成较小的独立部分, 并且可以动态部署和管理 - 而不是在一台大型单机上整体运行。 * 资源隔离:可预测的应用程序性能。 @@ -134,18 +135,20 @@ Containers are becoming popular because they have many benefits. Some of the con -## 为什么需要 Kubernetes,它能做什么? +## 为什么需要 Kubernetes,它能做什么? {#why-you-need-kubernetes-and-what-can-it-do} -容器是打包和运行应用程序的好方式。在生产环境中,你需要管理运行应用程序的容器,并确保不会停机。 -例如,如果一个容器发生故障,则需要启动另一个容器。如果系统处理此行为,会不会更容易? +容器是打包和运行应用程序的好方式。在生产环境中, +你需要管理运行着应用程序的容器,并确保服务不会下线。 +例如,如果一个容器发生故障,则你需要启动另一个容器。 +如果此行为交由给系统处理,是不是会更容易一些? -这就是 Kubernetes 来解决这些问题的方法! +这就是 Kubernetes 要来做的事情! 
Kubernetes 为你提供了一个可弹性运行分布式系统的框架。 Kubernetes 会满足你的扩展要求、故障转移、部署模式等。 例如,Kubernetes 可以轻松管理系统的 Canary 部署。 @@ -161,7 +164,8 @@ Kubernetes can expose a container using the DNS name or using their own IP addre --> * **服务发现和负载均衡** - Kubernetes 可以使用 DNS 名称或自己的 IP 地址公开容器,如果进入容器的流量很大, + Kubernetes 可以使用 DNS 名称或自己的 IP 地址来曝露容器。 + 如果进入容器的流量很大, Kubernetes 可以负载均衡并分配网络流量,从而使部署稳定。 * **自动部署和回滚** - 你可以使用 Kubernetes 描述已部署容器的所需状态,它可以以受控的速率将实际状态 - 更改为期望状态。例如,你可以自动化 Kubernetes 来为你的部署创建新容器, + 你可以使用 Kubernetes 描述已部署容器的所需状态, + 它可以以受控的速率将实际状态更改为期望状态。 + 例如,你可以自动化 Kubernetes 来为你的部署创建新容器, 删除现有容器并将它们的所有资源用于新容器。 * **自我修复** - Kubernetes 重新启动失败的容器、替换容器、杀死不响应用户定义的 - 运行状况检查的容器,并且在准备好服务之前不将其通告给客户端。 + Kubernetes 将重新启动失败的容器、替换容器、杀死不响应用户定义的运行状况检查的容器, + 并且在准备好服务之前不将其通告给客户端。 -## Kubernetes 不是什么 +## Kubernetes 不是什么 {#what-kubernetes-is-not} Kubernetes 不是传统的、包罗万象的 PaaS(平台即服务)系统。 -由于 Kubernetes 在容器级别而不是在硬件级别运行,它提供了 PaaS 产品共有的一些普遍适用的功能, +由于 Kubernetes 是在容器级别运行,而非在硬件级别, +它提供了 PaaS 产品共有的一些普遍适用的功能, 例如部署、扩展、负载均衡、日志记录和监视。 -但是,Kubernetes 不是单体系统,默认解决方案都是可选和可插拔的。 -Kubernetes 提供了构建开发人员平台的基础,但是在重要的地方保留了用户的选择和灵活性。 +但是,Kubernetes 不是单体式(monolithic)系统,那些默认解决方案都是可选、可插拔的。 +Kubernetes 为构建开发人员平台提供了基础,但是在重要的地方保留了用户选择权,能有更高的灵活性。 -* 不要求日志记录、监视或警报解决方案。 - 它提供了一些集成作为概念证明,并提供了收集和导出指标的机制。 -* 不提供或不要求配置语言/系统(例如 jsonnet),它提供了声明性 API, +* 不是日志记录、监视或警报的解决方案。 + 它集成了一些功能作为概念证明,并提供了收集和导出指标的机制。 +* 不提供也不要求配置用的语言、系统(例如 jsonnet),它提供了声明性 API, 该声明性 API 可以由任意形式的声明性规范所构成。 * 不提供也不采用任何全面的机器配置、维护、管理或自我修复系统。 * 此外,Kubernetes 不仅仅是一个编排系统,实际上它消除了编排的需要。 编排的技术定义是执行已定义的工作流程:首先执行 A,然后执行 B,再执行 C。 - 相比之下,Kubernetes 包含一组独立的、可组合的控制过程, - 这些过程连续地将当前状态驱动到所提供的所需状态。 - 如何从 A 到 C 的方式无关紧要,也不需要集中控制,这使得系统更易于使用 - 且功能更强大、系统更健壮、更为弹性和可扩展。 + 而 Kubernetes 包含了一组独立可组合的控制过程, + 可以连续地将当前状态驱动到所提供的预期状态。 + 你不需要在乎如何从 A 移动到 C,也不需要集中控制,这使得系统更易于使用 + 且功能更强大、系统更健壮,更为弹性和可扩展。 ## {{% heading "whatsnext" %}} @@ -267,5 +273,5 @@ Kubernetes: * Take a look at the [Kubernetes Components](/docs/concepts/overview/components/) * Ready to [Get Started](/docs/setup/)? --> -* 查阅 [Kubernetes 组件](/zh/docs/concepts/overview/components/) -* 开始 [Kubernetes 入门](/zh/docs/setup/)? +* 查阅[Kubernetes 组件](/zh-cn/docs/concepts/overview/components/) +* 开始[Kubernetes 的建置](/zh-cn/docs/setup/)吧! 
diff --git a/content/zh/docs/concepts/overview/working-with-objects/_index.md b/content/zh-cn/docs/concepts/overview/working-with-objects/_index.md similarity index 100% rename from content/zh/docs/concepts/overview/working-with-objects/_index.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/_index.md diff --git a/content/zh/docs/concepts/overview/working-with-objects/annotations.md b/content/zh-cn/docs/concepts/overview/working-with-objects/annotations.md similarity index 92% rename from content/zh/docs/concepts/overview/working-with-objects/annotations.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/annotations.md index 053fbbc341d55..c8a03e0bc606e 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/annotations.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/annotations.md @@ -37,7 +37,7 @@ Annotations, like labels, are key/value maps: 标签可以用来选择对象和查找满足某些条件的对象集合。 相反,注解不用于标识和选择对象。 注解中的元数据,可以很小,也可以很大,可以是结构化的,也可以是非结构化的,能够包含标签不允许的字符。 -注解和标签一样,是键/值对: +注解和标签一样,是键/值对: ```json "metadata": { @@ -60,7 +60,7 @@ Map 中的键和值必须是字符串。 -以下是一些例子,用来说明哪些信息可以使用注解来记录: +以下是一些例子,用来说明哪些信息可以使用注解来记录: -`kubernetes.io/` 和 `k8s.io/` 前缀是为Kubernetes核心组件保留的。 +`kubernetes.io/` 和 `k8s.io/` 前缀是为 Kubernetes 核心组件保留的。 例如,下面是一个 Pod 的配置文件,其注解中包含 `imageregistry: https://hub.docker.com/`: @@ -163,5 +163,5 @@ spec: -* 进一步了解[标签和选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)。 +* 进一步了解[标签和选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)。 diff --git a/content/zh/docs/concepts/overview/working-with-objects/common-labels.md b/content/zh-cn/docs/concepts/overview/working-with-objects/common-labels.md similarity index 97% rename from content/zh/docs/concepts/overview/working-with-objects/common-labels.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/common-labels.md index 2fbf964b27804..629c3ca7da7f5 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/common-labels.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/common-labels.md @@ -15,7 +15,8 @@ You can visualize and manage Kubernetes objects with more tools than kubectl and the dashboard. A common set of labels allows tools to work interoperably, describing objects in a common manner that all tools can understand. --> -除了 kubectl 和 dashboard 之外,您可以使用其他工具来可视化和管理 Kubernetes 对象。一组通用的标签可以让多个工具之间相互操作,用所有工具都能理解的通用方式描述对象。 +除了 kubectl 和 dashboard 之外,你可以使用其他工具来可视化和管理 Kubernetes 对象。 +一组通用的标签可以让多个工具之间相互操作,用所有工具都能理解的通用方式描述对象。 -使用 MySQL `StatefulSet` 和 `Service`,您会注意到有关 MySQL 和 Wordpress 的信息,包括更广泛的应用程序。 +使用 MySQL `StatefulSet` 和 `Service`,你会注意到有关 MySQL 和 Wordpress 的信息,包括更广泛的应用程序。 diff --git a/content/zh/docs/concepts/overview/working-with-objects/field-selectors.md b/content/zh-cn/docs/concepts/overview/working-with-objects/field-selectors.md similarity index 87% rename from content/zh/docs/concepts/overview/working-with-objects/field-selectors.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/field-selectors.md index bfd155fb734b2..35fc5cdda4bd2 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/field-selectors.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/field-selectors.md @@ -11,7 +11,7 @@ weight: 60 _Field selectors_ let you [select Kubernetes resources](/docs/concepts/overview/working-with-objects/kubernetes-objects) based on the value of one or more resource fields. 
Here are some example field selector queries: --> “字段选择器(Field selectors)”允许你根据一个或多个资源字段的值 -[筛选 Kubernetes 资源](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects)。 +[筛选 Kubernetes 资源](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects)。 下面是一些使用字段选择器查询的例子: * `metadata.name=my-service` @@ -21,7 +21,7 @@ _Field selectors_ let you [select Kubernetes resources](/docs/concepts/overview/ -下面这个 `kubectl` 命令将筛选出 [`status.phase`](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) +下面这个 `kubectl` 命令将筛选出 [`status.phase`](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) 字段值为 `Running` 的所有 Pod: ```shell @@ -31,7 +31,7 @@ kubectl get pods --field-selector status.phase=Running Field selectors are essentially resource *filters*. By default, no selectors/filters are applied, meaning that all resources of the specified type are selected. This makes the following `kubectl` queries equivalent: --> {{< note >}} -字段选择器本质上是资源*过滤器(Filters)*。默认情况下,字段选择器/过滤器是未被应用的, +字段选择器本质上是资源“过滤器(Filters)”。默认情况下,字段选择器/过滤器是未被应用的, 这意味着指定类型的所有资源都会被筛选出来。 这使得以下的两个 `kubectl` 查询是等价的: @@ -67,7 +67,7 @@ You can use the `=`, `==`, and `!=` operators with field selectors (`=` and `==` --> ## 支持的操作符 {#supported-operators} -你可在字段选择器中使用 `=`、`==`和 `!=` (`=` 和 `==` 的意义是相同的)操作符。 +你可在字段选择器中使用 `=`、`==` 和 `!=` (`=` 和 `==` 的意义是相同的)操作符。 例如,下面这个 `kubectl` 命令将筛选所有不属于 `default` 命名空间的 Kubernetes 服务: ```shell @@ -81,7 +81,7 @@ As with [label](/docs/concepts/overview/working-with-objects/labels) and other s --> ## 链式选择器 {#chained-selectors} -同[标签](/zh/docs/concepts/overview/working-with-objects/labels/)和其他选择器一样, +同[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)和其他选择器一样, 字段选择器可以通过使用逗号分隔的列表组成一个选择链。 下面这个 `kubectl` 命令将筛选 `status.phase` 字段不等于 `Running` 同时 `spec.restartPolicy` 字段等于 `Always` 的所有 Pod: diff --git a/content/zh/docs/concepts/overview/working-with-objects/finalizers.md b/content/zh-cn/docs/concepts/overview/working-with-objects/finalizers.md similarity index 96% rename from content/zh/docs/concepts/overview/working-with-objects/finalizers.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/finalizers.md index 88bcad141378f..21654acda9e20 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/finalizers.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/finalizers.md @@ -87,7 +87,7 @@ Kubernetes 清除 `pv-protection` Finalizer,控制器就会删除该卷。 ## Owner references, labels, and finalizers {#owners-labels-finalizers} Like {{}}, -[owner references](/concepts/overview/working-with-objects/owners-dependents/) +[owner references](/docs/concepts/overview/working-with-objects/owners-dependents/) describe the relationships between objects in Kubernetes, but are used for a different purpose. When a {{}} manages objects @@ -99,7 +99,7 @@ any Pods in the cluster with the same label. ## 属主引用、标签和 Finalizers {#owners-labels-finalizers} 与{{}}类似, -[属主引用](/zh/concepts/overview/working-with-objects/owners-dependents/) +[属主引用](/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents/) 描述了 Kubernetes 中对象之间的关系,但它们作用不同。 当一个{{}} 管理类似于 Pod 的对象时,它使用标签来跟踪相关对象组的变化。 @@ -121,7 +121,7 @@ longer than expected without being fully deleted. In these situations, you should check finalizers and owner references on the target owner and dependent objects to troubleshoot the cause. 
--> -Job 控制器还为这些 Pod 添加了*属主引用*,指向创建 Pod 的 Job。 +Job 控制器还为这些 Pod 添加了“属主引用”,指向创建 Pod 的 Job。 如果你在这些 Pod 运行的时候删除了 Job, Kubernetes 会使用属主引用(而不是标签)来确定集群中哪些 Pod 需要清理。 @@ -154,4 +154,3 @@ Finalizers 通常因为特殊原因被添加到资源上,所以强行删除它 on the Kubernetes blog. --> * 在 Kubernetes 博客上阅读[使用 Finalizers 控制删除](/blog/2021/05/14/using-finalizers-to-control-deletion/)。 - diff --git a/content/zh/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md similarity index 59% rename from content/zh/docs/concepts/overview/working-with-objects/kubernetes-objects.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md index 8d320064d1396..36c1212fcdaf9 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -20,42 +20,54 @@ card: -本页说明了 Kubernetes 对象在 Kubernetes API 中是如何表示的,以及如何在 `.yaml` 格式的文件中表示。 - +本页说明了在 Kubernetes API 中是如何表示 Kubernetes 对象的, +以及使用 `.yaml` 格式的文件表示 Kubernetes 对象。 -## 理解 Kubernetes 对象 +## 理解 Kubernetes 对象 {#kubernetes-objects} 在 Kubernetes 系统中,*Kubernetes 对象* 是持久化的实体。 -Kubernetes 使用这些实体去表示整个集群的状态。特别地,它们描述了如下信息: +Kubernetes 使用这些实体去表示整个集群的状态。 +比較特别地是,它们描述了如下信息: -* 哪些容器化应用在运行(以及在哪些节点上) +* 哪些容器化应用正在运行(以及在哪些节点上运行) * 可以被应用使用的资源 * 关于应用运行时表现的策略,比如重启策略、升级策略,以及容错策略 -Kubernetes 对象是 “目标性记录” —— 一旦创建对象,Kubernetes 系统将持续工作以确保对象存在。 -通过创建对象,本质上是在告知 Kubernetes 系统,所需要的集群工作负载看起来是什么样子的, -这就是 Kubernetes 集群的 **期望状态(Desired State)**。 +Kubernetes 对象是“目标性记录”——一旦创建对象,Kubernetes 系统将不断工作以确保对象存在。 +通过创建对象,你就是在告知 Kubernetes 系统,你想要的集群工作负载状态看起来应是什么样子的, +这就是 Kubernetes 集群所谓的 **期望状态(Desired State)**。 操作 Kubernetes 对象 —— 无论是创建、修改,或者删除 —— 需要使用 -[Kubernetes API](/zh/docs/concepts/overview/kubernetes-api)。 -比如,当使用 `kubectl` 命令行接口时,CLI 会执行必要的 Kubernetes API 调用, -也可以在程序中使用 -[客户端库](/zh/docs/reference/using-api/client-libraries/)直接调用 Kubernetes API。 +[Kubernetes API](/zh-cn/docs/concepts/overview/kubernetes-api)。 +比如,当使用 `kubectl` 命令行接口(CLI)时,CLI 会调用必要的 Kubernetes API; +也可以在程序中使用[客户端库](/zh-cn/docs/reference/using-api/client-libraries/), +来直接调用 Kubernetes API。 -`status` 描述了对象的 _当前状态(Current State)_,它是由 Kubernetes 系统和组件 -设置并更新的。在任何时刻,Kubernetes -{{< glossary_tooltip text="控制平面" term_id="control-plane" >}} -都一直积极地管理着对象的实际状态,以使之与期望状态相匹配。 +`status` 描述了对象的**当前状态(Current State)**,它是由 Kubernetes 系统和组件设置并更新的。 +在任何时刻,Kubernetes {{< glossary_tooltip text="控制平面" term_id="control-plane" >}} +都一直都在积极地管理着对象的实际状态,以使之达成期望状态。 例如,Kubernetes 中的 Deployment 对象能够表示运行在集群中的应用。 -当创建 Deployment 时,可能需要设置 Deployment 的 `spec`,以指定该应用需要有 3 个副本运行。 -Kubernetes 系统读取 Deployment 规约,并启动我们所期望的应用的 3 个实例 -—— 更新状态以与规约相匹配。 -如果这些实例中有的失败了(一种状态变更),Kubernetes 系统通过执行修正操作 -来响应规约和状态间的不一致 —— 在这里意味着它会启动一个新的实例来替换。 +当创建 Deployment 时,可能会去设置 Deployment 的 `spec`,以指定该应用要有 3 个副本运行。 +Kubernetes 系统读取 Deployment 的 `spec`, +并启动我们所期望的应用的 3 个实例 —— 更新状态以与规约相匹配。 +如果这些实例中有的失败了(一种状态变更),Kubernetes 系统会通过执行修正操作 +来响应 `spec` 和状态间的不一致 —— 意味着它会启动一个新的实例来替换。 - 关于对象 spec、status 和 metadata 的更多信息,可参阅 [Kubernetes API 约定](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md)。 -### 描述 Kubernetes 对象 +### 描述 Kubernetes 对象 {#describing-a-kubernetes-object} -创建 Kubernetes 对象时,必须提供对象的规约,用来描述该对象的期望状态, +创建 Kubernetes 对象时,必须提供对象的 `spec`,用来描述该对象的期望状态, 以及关于对象的一些基本信息(例如名称)。 -当使用 Kubernetes API 创建对象时(或者直接创建,或者基于`kubectl`), -API 请求必须在请求体中包含 JSON 格式的信息。 -**大多数情况下,需要在 .yaml 文件中为 `kubectl` 提供这些信息**。 +当使用 Kubernetes API 创建对象时(直接创建,或经由 `kubectl`), +API 请求必须在请求本体中包含 JSON 格式的信息。 
+**大多数情况下,你需要提供 `.yaml` 文件为 kubectl 提供这些信息**。 `kubectl` 在发起 API 请求时,将这些信息转换成 JSON 格式。 -这里有一个 `.yaml` 示例文件,展示了 Kubernetes Deployment 的必需字段和对象规约: +这里有一个 `.yaml` 示例文件,展示了 Kubernetes Deployment 的必需字段和对象 `spec`: {{< codenew file="application/deployment.yaml" >}} @@ -135,7 +150,7 @@ One way to create a Deployment using a `.yaml` file like the one above is to use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) command in the `kubectl` command-line interface, passing the `.yaml` file as an argument. Here's an example: --> -使用类似于上面的 `.yaml` 文件来创建 Deployment的一种方式是使用 `kubectl` 命令行接口(CLI)中的 +相较于上面使用 `.yaml` 文件来创建 Deployment,另一种类似的方式是使用 `kubectl` 命令行接口(CLI)中的 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) 命令, 将 `.yaml` 文件作为参数。下面是一个示例: @@ -146,7 +161,7 @@ kubectl apply -f https://k8s.io/examples/application/deployment.yaml -输出类似如下这样: +输出类似下面这样: ``` deployment.apps/nginx-deployment created @@ -162,21 +177,24 @@ In the `.yaml` file for the Kubernetes object you want to create, you'll need to * `metadata` - Data that helps uniquely identify the object, including a `name` string, `UID`, and optional `namespace` * `spec` - What state you desire for the object --> -### 必需字段 {#required-fields} +### 必需字段 {#required-fields} -在想要创建的 Kubernetes 对象对应的 `.yaml` 文件中,需要配置如下的字段: +在想要创建的 Kubernetes 对象所对应的 `.yaml` 文件中,需要配置的字段如下: * `apiVersion` - 创建该对象所使用的 Kubernetes API 的版本 * `kind` - 想要创建的对象的类别 -* `metadata` - 帮助唯一性标识对象的一些数据,包括一个 `name` 字符串、UID 和可选的 `namespace` +* `metadata` - 帮助唯一性标识对象的一些数据,包括一个 `name` 字符串、`UID` 和可选的 `namespace` * `spec` - 你所期望的该对象的状态 -对象 `spec` 的精确格式对每个 Kubernetes 对象来说是不同的,包含了特定于该对象的嵌套字段。 -[Kubernetes API 参考](https://kubernetes.io/docs/reference/kubernetes-api/) -能够帮助我们找到任何我们想创建的对象的规约格式。 +对每个 Kubernetes 对象而言,其 `spec` 之精确格式都是不同的,包含了特定于该对象的嵌套字段。 +我们能在 [Kubernetes API 参考](/zh-cn/docs/reference/kubernetes-api/) +找到我们想要在 Kubernetes 上创建的任何对象的规约格式。 -* 了解最重要的 Kubernetes 基本对象,例如 [Pod](/zh/docs/concepts/workloads/pods/)。 -* 了解 Kubernetes 中的[控制器](/zh/docs/concepts/architecture/controller/)。 -* [使用 Kubernetes API](/zh/docs/reference/using-api/) 一节解释了一些 API 概念。 +* 了解最重要的 Kubernetes 基本对象,例如 [Pod](/zh-cn/docs/concepts/workloads/pods/)。 +* 了解 Kubernetes 中的[控制器](/zh-cn/docs/concepts/architecture/controller/)。 +* [使用 Kubernetes API](/zh-cn/docs/reference/using-api/) 一节解释了一些 API 概念。 diff --git a/content/zh/docs/concepts/overview/working-with-objects/labels.md b/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md similarity index 90% rename from content/zh/docs/concepts/overview/working-with-objects/labels.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/labels.md index 618b09eb12871..650eb02a4ee94 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/labels.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md @@ -40,7 +40,7 @@ and CLIs. Non-identifying information should be recorded using [annotations](/docs/concepts/overview/working-with-objects/annotations/). 
--> 标签能够支持高效的查询和监听操作,对于用户界面和命令行是很理想的。 -应使用[注解](/zh/docs/concepts/overview/working-with-objects/annotations/) 记录非识别信息。 +应使用[注解](/zh-cn/docs/concepts/overview/working-with-objects/annotations/)记录非识别信息。 @@ -72,7 +72,7 @@ Example labels: -有一些[常用标签](/zh/docs/concepts/overview/working-with-objects/common-labels/)的例子; 你可以任意制定自己的约定。 +有一些[常用标签](/zh-cn/docs/concepts/overview/working-with-objects/common-labels/)的例子;你可以任意制定自己的约定。 请记住,标签的 Key 对于给定对象必须是唯一的。 ## 标签选择算符 {#label-selectors} -与[名称和 UID](/zh/docs/concepts/overview/working-with-objects/names/) 不同, +与[名称和 UID](/zh-cn/docs/concepts/overview/working-with-objects/names/) 不同, 标签不支持唯一性。通常,我们希望许多对象携带相同的标签。 _基于集合_ 的标签选择算符是相等标签选择算符的一般形式,因为 `environment=production` -等同于 `environment in(production)`;`!=` 和 `notin` 也是类似的。 +等同于 `environment in (production)`;`!=` 和 `notin` 也是类似的。 -* _基于等值_ 的需求: `?labelSelector=environment%3Dproduction,tier%3Dfrontend` -* _基于集合_ 的需求: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` +* _基于等值_ 的需求:`?labelSelector=environment%3Dproduction,tier%3Dfrontend` +* _基于集合_ 的需求:`?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` ### 在 API 对象中设置引用 -一些 Kubernetes 对象,例如 [`services`](/zh/docs/concepts/services-networking/service/) -和 [`replicationcontrollers`](/zh/docs/concepts/workloads/controllers/replicationcontroller/) , +一些 Kubernetes 对象,例如 [`services`](/zh-cn/docs/concepts/services-networking/service/) +和 [`replicationcontrollers`](/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/) , 也使用了标签选择算符去指定了其他资源的集合,例如 -[pods](/zh/docs/concepts/workloads/pods/)。 +[pods](/zh-cn/docs/concepts/workloads/pods/)。 -这个选择算符(分别在 `json` 或者 `yaml` 格式中) 等价于 `component=redis` 或 `component in (redis)` 。 +这个选择算符(分别在 `json` 或者 `yaml` 格式中)等价于 `component=redis` 或 `component in (redis)`。 #### 支持基于集合需求的资源 -比较新的资源,例如 [`Job`](/zh/docs/concepts/workloads/controllers/job/)、 -[`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/)、 -[`Replica Set`](/zh/docs/concepts/workloads/controllers/replicaset/) 和 -[`DaemonSet`](/zh/docs/concepts/workloads/controllers/daemonset/) , +比较新的资源,例如 [`Job`](/zh-cn/docs/concepts/workloads/controllers/job/)、 +[`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/)、 +[`Replica Set`](/zh-cn/docs/concepts/workloads/controllers/replicaset/) 和 +[`DaemonSet`](/zh-cn/docs/concepts/workloads/controllers/daemonset/), 也支持 _基于集合的_ 需求。 ```yaml @@ -383,7 +383,7 @@ selector: --> `matchLabels` 是由 `{key,value}` 对组成的映射。 -`matchLabels` 映射中的单个 `{key,value }` 等同于 `matchExpressions` 的元素, +`matchLabels` 映射中的单个 `{key,value}` 等同于 `matchExpressions` 的元素, 其 `key` 字段为 "key",`operator` 为 "In",而 `values` 数组仅包含 "value"。 `matchExpressions` 是 Pod 选择算符需求的列表。 有效的运算符包括 `In`、`NotIn`、`Exists` 和 `DoesNotExist`。 @@ -400,5 +400,5 @@ See the documentation on [node selection](/docs/concepts/configuration/assign-po #### 选择节点集 通过标签进行选择的一个用例是确定节点集,方便 Pod 调度。 -有关更多信息,请参阅[选择节点](/zh/docs/concepts/scheduling-eviction/assign-pod-node/)文档。 +有关更多信息,请参阅[选择节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)文档。 diff --git a/content/zh/docs/concepts/overview/working-with-objects/names.md b/content/zh-cn/docs/concepts/overview/working-with-objects/names.md similarity index 82% rename from content/zh/docs/concepts/overview/working-with-objects/names.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/names.md index 131ce0c6bab05..91eb354810558 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/names.md +++ 
b/content/zh-cn/docs/concepts/overview/working-with-objects/names.md @@ -13,19 +13,19 @@ Every Kubernetes object also has a [_UID_](#uids) that is unique across your who For example, you can only have one Pod named `myapp-1234` within the same [namespace](/docs/concepts/overview/working-with-objects/namespaces/), but you can have one Pod and one Deployment that are each named `myapp-1234`. --> -集群中的每一个对象都有一个[_名称_](#names) 来标识在同类资源中的唯一性。 +集群中的每一个对象都有一个[_名称_](#names)来标识在同类资源中的唯一性。 -每个 Kubernetes 对象也有一个[_UID_](#uids) 来标识在整个集群中的唯一性。 +每个 Kubernetes 对象也有一个 [_UID_](#uids) 来标识在整个集群中的唯一性。 -比如,在同一个[名字空间](/zh/docs/concepts/overview/working-with-objects/namespaces/) -中有一个名为 `myapp-1234` 的 Pod, 但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`. +比如,在同一个[名字空间](/zh-cn/docs/concepts/overview/working-with-objects/namespaces/) +中有一个名为 `myapp-1234` 的 Pod,但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`。 对于用户提供的非唯一性的属性,Kubernetes 提供了 -[标签(Labels)](/zh/docs/concepts/working-with-objects/labels)和 -[注解(Annotation)](/zh/docs/concepts/overview/working-with-objects/annotations/)机制。 +[标签(Labels)](/zh-cn/docs/concepts/working-with-objects/labels)和 +[注解(Annotation)](/zh-cn/docs/concepts/overview/working-with-objects/annotations/)机制。 @@ -69,10 +69,10 @@ This means the name must: DNS 子域名的定义可参见 [RFC 1123](https://tools.ietf.org/html/rfc1123)。 这一要求意味着名称必须满足如下规则: -- 不能超过253个字符 -- 只能包含小写字母、数字,以及'-' 和 '.' -- 须以字母数字开头 -- 须以字母数字结尾 +- 不能超过 253 个字符 +- 只能包含小写字母、数字,以及 '-' 和 '.' +- 必须以字母数字开头 +- 必须以字母数字结尾 -下面是一个名为`nginx-demo`的 Pod 的配置清单: +下面是一个名为 `nginx-demo` 的 Pod 的配置清单: ```yaml apiVersion: v1 @@ -165,7 +165,7 @@ Kubernetes UIDs are universally unique identifiers (also known as UUIDs). UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667. --> Kubernetes UIDs 是全局唯一标识符(也叫 UUIDs)。 -UUIDs 是标准化的,见 ISO/IEC 9834-8 和 ITU-T X.667. +UUIDs 是标准化的,见 ISO/IEC 9834-8 和 ITU-T X.667。 ## {{% heading "whatsnext" %}} @@ -173,7 +173,7 @@ UUIDs 是标准化的,见 ISO/IEC 9834-8 和 ITU-T X.667. * Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes. * See the [Identifiers and Names in Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) design document. --> -* 进一步了解 Kubernetes [标签](/zh/docs/concepts/overview/working-with-objects/labels/) +* 进一步了解 Kubernetes [标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/) * 参阅 [Kubernetes 标识符和名称](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md)的设计文档 diff --git a/content/zh/docs/concepts/overview/working-with-objects/namespaces.md b/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md similarity index 90% rename from content/zh/docs/concepts/overview/working-with-objects/namespaces.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md index e57a82a972b3d..46c3f35ac9207 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md @@ -49,7 +49,7 @@ resource can only be in one namespace. 
-名字空间是在多个用户之间划分集群资源的一种方法(通过[资源配额](/zh/docs/concepts/policy/resource-quotas/))。 +名字空间是在多个用户之间划分集群资源的一种方法(通过[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/))。 ## 使用名字空间 -名字空间的创建和删除在[名字空间的管理指南文档](/zh/docs/tasks/administer-cluster/namespaces/)描述。 +名字空间的创建和删除在[名字空间的管理指南文档](/zh-cn/docs/tasks/administer-cluster/namespaces/)描述。 ## 名字空间和 DNS -当你创建一个[服务](/zh/docs/concepts/services-networking/service/) 时, -Kubernetes 会创建一个相应的 [DNS 条目](/zh/docs/concepts/services-networking/dns-pod-service/)。 +当你创建一个[服务](/zh-cn/docs/concepts/services-networking/service/)时, +Kubernetes 会创建一个相应的 [DNS 条目](/zh-cn/docs/concepts/services-networking/dns-pod-service/)。 因此,所有的名字空间名称都必须是合法的 -[RFC 1123 DNS 标签](/zh/docs/concepts/overview/working-with-objects/names/#dns-label-names)。 +[RFC 1123 DNS 标签](/zh-cn/docs/concepts/overview/working-with-objects/names/#dns-label-names)。 {{< warning >}} 为了缓解这类问题,需要将创建名字空间的权限授予可信的用户。 如果需要,你可以额外部署第三方的安全控制机制,例如以 -[准入 Webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/) +[准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/) 的形式,阻止用户创建与公共 [TLD](https://data.iana.org/TLD/tlds-alpha-by-domain.txt) 同名的名字空间。 {{< /warning >}} @@ -224,7 +224,7 @@ persistentVolumes, are not in any namespace. --> 大多数 kubernetes 资源(例如 Pod、Service、副本控制器等)都位于某些名字空间中。 但是名字空间资源本身并不在名字空间中。而且底层资源,例如 -[节点](/zh/docs/concepts/architecture/nodes/) 和持久化卷不属于任何名字空间。 +[节点](/zh-cn/docs/concepts/architecture/nodes/)和持久化卷不属于任何名字空间。 -* 进一步了解[建立新的名字空间](/zh/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace)。 -* 进一步了解[删除名字空间](/zh/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace)。 +* 进一步了解[建立新的名字空间](/zh-cn/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace)。 +* 进一步了解[删除名字空间](/zh-cn/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace)。 diff --git a/content/zh/docs/concepts/overview/working-with-objects/object-management.md b/content/zh-cn/docs/concepts/overview/working-with-objects/object-management.md similarity index 97% rename from content/zh/docs/concepts/overview/working-with-objects/object-management.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/object-management.md index b1dd99ca60c67..47c758d2672bc 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/object-management.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/object-management.md @@ -325,10 +325,10 @@ Disadvantages compared to imperative object configuration: - [Kubectl Book](https://kubectl.docs.kubernetes.io) - [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) --> -- [使用指令式命令管理 Kubernetes 对象](/zh/docs/tasks/manage-kubernetes-objects/imperative-command/) -- [使用对象配置管理 Kubernetes 对象(指令式)](/zh/docs/tasks/manage-kubernetes-objects/imperative-config/) -- [使用对象配置管理 Kubernetes 对象(声明式)](/zh/docs/tasks/manage-kubernetes-objects/declarative-config/) -- [使用 Kustomize(声明式)管理 Kubernetes 对象](/zh/docs/tasks/manage-kubernetes-objects/kustomization/) +- [使用指令式命令管理 Kubernetes 对象](/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-command/) +- [使用对象配置管理 Kubernetes 对象(指令式)](/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-config/) +- [使用对象配置管理 Kubernetes 对象(声明式)](/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config/) +- [使用 Kustomize(声明式)管理 Kubernetes 对象](/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization/) - [Kubectl 命令参考](/docs/reference/generated/kubectl/kubectl-commands/) - [Kubectl Book](https://kubectl.docs.kubernetes.io) - [Kubernetes API 
参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) diff --git a/content/zh/docs/concepts/overview/working-with-objects/owners-dependents.md b/content/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents.md similarity index 92% rename from content/zh/docs/concepts/overview/working-with-objects/owners-dependents.md rename to content/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents.md index b5e8228bd30c4..a02810877f3e5 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/owners-dependents.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents.md @@ -18,9 +18,9 @@ In Kubernetes, some objects are *owners* of other objects. For example, a of their owner. --> -在 Kubernetes 中,一些对象是其他对象的*属主(Owner)*。 +在 Kubernetes 中,一些对象是其他对象的“属主(Owner)”。 例如,{{}} 是一组 Pod 的属主。 -具有属主的对象是属主的*附属(Dependent)* 。 +具有属主的对象是属主的“附属(Dependent)”。 -属主关系不同于一些资源使用的[标签和选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)机制。 +属主关系不同于一些资源使用的[标签和选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)机制。 例如,有一个创建 `EndpointSlice` 对象的 Service, 该 Service 使用标签来让控制平面确定,哪些 `EndpointSlice` 对象属于该 Service。 除开标签,每个代表 Service 所管理的 `EndpointSlice` 都有一个属主引用。 @@ -56,7 +56,7 @@ automatically manage the relationships. Kubernetes 自动为一些对象的附属资源设置属主引用的值, 这些对象包含 ReplicaSet、DaemonSet、Deployment、Job、CronJob、ReplicationController 等。 你也可以通过改变这个字段的值,来手动配置这些关系。 -然而,你通常不需要这么做,你可以让 Kubernetes 自动管理附属关系。 +然而,通常不需要这么做,你可以让 Kubernetes 自动管理附属关系。 -当你使用[前台或孤立级联删除](/zh/docs/concepts/architecture/garbage-collection/#cascading-deletion)时, +当你使用[前台或孤立级联删除](/zh-cn/docs/concepts/architecture/garbage-collection/#cascading-deletion)时, Kubernetes 也会向属主资源添加 Finalizer。 在前台删除中,会添加 `foreground` Finalizer,这样控制器必须在删除了拥有 `ownerReferences.blockOwnerDeletion=true` 的附属资源后,才能删除属主对象。 @@ -164,6 +164,6 @@ Kubernetes 也会向属主资源添加 Finalizer。 * Learn about [garbage collection](/docs/concepts/architecture/garbage-collection). * Read the API reference for [object metadata](/docs/reference/kubernetes-api/common-definitions/object-meta/#System). --> -* 了解更多关于 [Kubernetes Finalizer](/zh/docs/concepts/overview/working-with-objects/finalizers/)。 -* 了解关于[垃圾收集](/zh/docs/concepts/architecture/garbage-collection)。 +* 了解更多关于 [Kubernetes Finalizer](/zh-cn/docs/concepts/overview/working-with-objects/finalizers/)。 +* 了解关于[垃圾收集](/zh-cn/docs/concepts/architecture/garbage-collection)。 * 阅读[对象元数据](/docs/reference/kubernetes-api/common-definitions/object-meta/#System)的 API 参考文档。 diff --git a/content/zh/docs/concepts/policy/_index.md b/content/zh-cn/docs/concepts/policy/_index.md similarity index 100% rename from content/zh/docs/concepts/policy/_index.md rename to content/zh-cn/docs/concepts/policy/_index.md diff --git a/content/zh/docs/concepts/policy/limit-range.md b/content/zh-cn/docs/concepts/policy/limit-range.md similarity index 90% rename from content/zh/docs/concepts/policy/limit-range.md rename to content/zh-cn/docs/concepts/policy/limit-range.md index 8412d9e80c43f..da2560b6a0b88 100644 --- a/content/zh/docs/concepts/policy/limit-range.md +++ b/content/zh-cn/docs/concepts/policy/limit-range.md @@ -11,7 +11,7 @@ By default, containers run with unbounded [compute resources](/docs/concepts/con With resource quotas, cluster administrators can restrict resource consumption and creation on a {{< glossary_tooltip text="namespace" term_id="namespace" >}} basis. Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. 
There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace. --> -默认情况下, Kubernetes 集群上的容器运行使用的[计算资源](/zh/docs/concepts/configuration/manage-resources-containers/)没有限制。 +默认情况下, Kubernetes 集群上的容器运行使用的[计算资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/)没有限制。 使用资源配额,集群管理员可以以{{< glossary_tooltip text="名字空间" term_id="namespace" >}}为单位,限制其资源的使用与创建。 在命名空间中,一个 Pod 或 Container 最多能够使用命名空间的资源配额所定义的 CPU 和内存用量。 有人担心,一个 Pod 或 Container 会垄断所有可用的资源。 @@ -53,7 +53,7 @@ The name of a LimitRange object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). --> LimitRange 的名称必须是合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 关于使用限值的例子,可参看 -- [如何配置每个命名空间最小和最大的 CPU 约束](/zh/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)。 -- [如何配置每个命名空间最小和最大的内存约束](/zh/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)。 -- [如何配置每个命名空间默认的 CPU 申请值和限制值](/zh/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)。 -- [如何配置每个命名空间默认的内存申请值和限制值](/zh/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)。 -- [如何配置每个命名空间最小和最大存储使用量](/zh/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage)。 -- [配置每个命名空间的配额的详细例子](/zh/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)。 +- [如何配置每个命名空间最小和最大的 CPU 约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)。 +- [如何配置每个命名空间最小和最大的内存约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)。 +- [如何配置每个命名空间默认的 CPU 申请值和限制值](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)。 +- [如何配置每个命名空间默认的内存申请值和限制值](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)。 +- [如何配置每个命名空间最小和最大存储使用量](/zh-cn/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage)。 +- [配置每个命名空间的配额的详细例子](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)。 diff --git a/content/zh/docs/concepts/policy/node-resource-managers.md b/content/zh-cn/docs/concepts/policy/node-resource-managers.md similarity index 79% rename from content/zh/docs/concepts/policy/node-resource-managers.md rename to content/zh-cn/docs/concepts/policy/node-resource-managers.md index 73f28da383a98..5636547075792 100644 --- a/content/zh/docs/concepts/policy/node-resource-managers.md +++ b/content/zh-cn/docs/concepts/policy/node-resource-managers.md @@ -30,7 +30,7 @@ The main manager, the Topology Manager, is a Kubelet component that co-ordinates The configuration of individual managers is elaborated in dedicated documents: --> 主管理器,也叫拓扑管理器(Topology Manager),是一个 Kubelet 组件, -它通过[策略](/zh/docs/tasks/administer-cluster/topology-manager/), +它通过[策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/), 协调全局的资源管理过程。 各个管理器的配置方式会在专项文档中详细阐述: @@ -40,6 +40,6 @@ The configuration of individual managers is elaborated in dedicated documents: - [Device Manager](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager) - [Memory Manager Policies](/docs/tasks/administer-cluster/memory-manager/) --> -- [CPU 
管理器策略](/zh/docs/tasks/administer-cluster/cpu-management-policies/) -- [设备管理器](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager) -- [内存管理器策略](/zh/docs/tasks/administer-cluster/memory-manager/) +- [CPU 管理器策略](/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/) +- [设备管理器](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager) +- [内存管理器策略](/zh-cn/docs/tasks/administer-cluster/memory-manager/) diff --git a/content/zh/docs/concepts/policy/pid-limiting.md b/content/zh-cn/docs/concepts/policy/pid-limiting.md similarity index 94% rename from content/zh/docs/concepts/policy/pid-limiting.md rename to content/zh-cn/docs/concepts/policy/pid-limiting.md index 85b1c531aaf72..61fed9674b36c 100644 --- a/content/zh/docs/concepts/policy/pid-limiting.md +++ b/content/zh-cn/docs/concepts/policy/pid-limiting.md @@ -105,7 +105,7 @@ and limits. However, you specify it in a different way: rather than defining a Pod's resource limit in the `.spec` for a Pod, you configure the limit as a setting on the kubelet. Pod-defined PID limits are not currently supported. --> -PID 限制是与[计算资源](/zh/docs/concepts/configuration/manage-resources-containers/) +PID 限制是与[计算资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/) 请求和限制相辅相成的一种机制。不过,你需要用一种不同的方式来设置这一限制: 你需要将其设置到 kubelet 上而不是在 Pod 的 `.spec` 中为 Pod 设置资源限制。 目前还不支持在 Pod 级别设置 PID 限制。 @@ -146,7 +146,7 @@ gate](/docs/reference/command-line-tools-reference/feature-gates/) `SupportNodePidsLimit` to work. --> 在 Kubernetes 1.20 版本之前,在节点级别通过 PID 资源限制预留 PID 的能力 -需要启用[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +需要启用[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) `SupportNodePidsLimit` 才行。 {{< /note >}} @@ -166,7 +166,7 @@ Kubernetes 允许你限制 Pod 中运行的进程个数。你可以在节点级 而不是为特定的 Pod 来将其设置为资源限制。 每个节点都可以有不同的 PID 限制设置。 要设置限制值,你可以设置 kubelet 的命令行参数 `--pod-max-pids`,或者 -在 kubelet 的[配置文件](/zh/docs/tasks/administer-cluster/kubelet-config-file/) +在 kubelet 的[配置文件](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) 中设置 `PodPidsLimit`。 {{< note >}} @@ -176,7 +176,7 @@ the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `SupportPodPidsLimit` to work. --> 在 Kubernetes 1.20 版本之前,为 Pod 设置 PID 资源限制的能力需要启用 -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) `SupportNodePidsLimit` 才行。 {{< /note >}} @@ -197,7 +197,7 @@ Eviction signal value is calculated periodically and does NOT enforce the limit. 
你可以配置 kubelet 使之在 Pod 行为不正常或者消耗不正常数量资源的时候将其终止。 这一特性称作驱逐。你可以针对不同的驱逐信号 -[配置资源不足的处理](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/)。 +[配置资源不足的处理](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)。 使用 `pid.available` 驱逐信号来配置 Pod 使用的 PID 个数的阈值。 你可以设置硬性的和软性的驱逐策略。不过,即使使用硬性的驱逐策略, 如果 PID 个数增长过快,节点仍然可能因为触及节点 PID 限制而进入一种不稳定状态。 @@ -233,6 +233,6 @@ Pod 行为不正常而没有 PID 可用。 - 关于历史背景,请阅读 [Kubernetes 1.14 中限制进程 ID 以提升稳定性](/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/) 的博文。 -- 请阅读[为容器管理资源](/zh/docs/concepts/configuration/manage-resources-containers/)。 -- 学习如何[配置资源不足情况的处理](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/)。 +- 请阅读[为容器管理资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/)。 +- 学习如何[配置资源不足情况的处理](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)。 diff --git a/content/zh/docs/concepts/policy/resource-quotas.md b/content/zh-cn/docs/concepts/policy/resource-quotas.md similarity index 96% rename from content/zh/docs/concepts/policy/resource-quotas.md rename to content/zh-cn/docs/concepts/policy/resource-quotas.md index 26837a0ede1e0..75498e16b82d3 100644 --- a/content/zh/docs/concepts/policy/resource-quotas.md +++ b/content/zh-cn/docs/concepts/policy/resource-quotas.md @@ -41,8 +41,7 @@ Resource quotas work like this: 资源配额的工作方式如下: -- 不同的团队可以在不同的命名空间下工作,目前这是非约束性的,在未来的版本中可能会通过 - ACL (Access Control List 访问控制列表) 来实现强制性约束。 +- 不同的团队可以在不同的命名空间下工作。这可以通过 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 强制执行。 - 集群管理员可以为每个命名空间创建一个或多个 ResourceQuota 对象。 - 当用户在命名空间下创建资源(如 Pod、Service 等)时,Kubernetes 的配额系统会 跟踪集群的资源使用情况,以确保使用的资源用量不超过 ResourceQuota 中定义的硬性资源限额。 @@ -65,14 +63,14 @@ Resource quotas work like this: 提示: 可使用 `LimitRanger` 准入控制器来为没有设置计算资源需求的 Pod 设置默认值。 若想避免这类问题,请参考 - [演练](/zh/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)示例。 + [演练](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)示例。 ResourceQuota 对象的名称必须是合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 ## 存储资源配额 -用户可以对给定命名空间下的[存储资源](/zh/docs/concepts/storage/persistent-volumes/) +用户可以对给定命名空间下的[存储资源](/zh-cn/docs/concepts/storage/persistent-volumes/) 总量进行限制。 此外,还可以根据相关的存储类(Storage Class)来限制存储资源的消耗。 @@ -218,9 +216,9 @@ In addition, you can limit consumption of storage resources based on associated | 资源名称 | 描述 | | --------------------- | ----------------------------------------------------------- | | `requests.storage` | 所有 PVC,存储资源的需求总量不能超过该值。 | -| `persistentvolumeclaims` | 在该命名空间中所允许的 [PVC](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 总量。 | +| `persistentvolumeclaims` | 在该命名空间中所允许的 [PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 总量。 | | `.storageclass.storage.k8s.io/requests.storage` | 在所有与 `` 相关的持久卷申领中,存储请求的总和不能超过该值。 | -| `.storageclass.storage.k8s.io/persistentvolumeclaims` | 在与 storage-class-name 相关的所有持久卷申领中,命名空间中可以存在的[持久卷申领](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)总数。 | +| `.storageclass.storage.k8s.io/persistentvolumeclaims` | 在与 storage-class-name 相关的所有持久卷申领中,命名空间中可以存在的[持久卷申领](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)总数。 | 如果所使用的是 CRI 容器运行时,容器日志会被计入临时存储配额。 这可能会导致存储配额耗尽的 Pods 被意外地驱逐出节点。 -参考[日志架构](/zh/docs/concepts/cluster-administration/logging/) +参考[日志架构](/zh-cn/docs/concepts/cluster-administration/logging/) 了解详细信息。 {{< /note >}} 
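下面是一个最小的 ResourceQuota 示意清单,把上述几个存储配额键放在一起展示;其中名字空间 `dev-team` 与 StorageClass 名称 `gold` 都是假设的示例值,请按集群实际情况替换:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: dev-team        # 示例名字空间,按需替换
spec:
  hard:
    requests.storage: "500Gi"            # 该名字空间中所有 PVC 的存储请求总量上限
    persistentvolumeclaims: "10"         # 该名字空间中允许存在的 PVC 总数
    gold.storageclass.storage.k8s.io/requests.storage: "200Gi"   # 仅针对 gold 存储类的存储请求总量
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5" # 仅针对 gold 存储类的 PVC 数量
```

创建之后,可以通过 `kubectl describe quota storage-quota -n dev-team` 查看各配额项的已用量与硬性上限。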
@@ -343,7 +341,7 @@ The following types are supported: | 资源名称 | 描述 | | ------------------------------- | ------------------------------------------------- | | `configmaps` | 在该命名空间中允许存在的 ConfigMap 总数上限。 | -| `persistentvolumeclaims` | 在该命名空间中允许存在的 [PVC](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 的总数上限。 | +| `persistentvolumeclaims` | 在该命名空间中允许存在的 [PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 的总数上限。 | | `pods` | 在该命名空间中允许存在的非终止状态的 Pod 总数上限。Pod 终止状态等价于 Pod 的 `.status.phase in (Failed, Succeeded)` 为真。 | | `replicationcontrollers` | 在该命名空间中允许存在的 ReplicationController 总数上限。 | | `resourcequotas` | 在该命名空间中允许存在的 ResourceQuota 总数上限。 | @@ -396,8 +394,8 @@ Resources specified on the quota outside of the allowed set results in a validat | `NotTerminating` | 匹配所有 `spec.activeDeadlineSeconds` 是 nil 的 Pod。 | | `BestEffort` | 匹配所有 Qos 是 BestEffort 的 Pod。 | | `NotBestEffort` | 匹配所有 Qos 不是 BestEffort 的 Pod。 | -| `PriorityClass` | 匹配所有引用了所指定的[优先级类](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption)的 Pods。 | -| `CrossNamespacePodAffinity` | 匹配那些设置了跨名字空间 [(反)亲和性条件](/zh/docs/concepts/scheduling-eviction/assign-pod-node)的 Pod。 | +| `PriorityClass` | 匹配所有引用了所指定的[优先级类](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption)的 Pods。 | +| `CrossNamespacePodAffinity` | 匹配那些设置了跨名字空间 [(反)亲和性条件](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node)的 Pod。 | -Pod 可以创建为特定的[优先级](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)。 +Pod 可以创建为特定的[优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)。 通过使用配额规约中的 `scopeSelector` 字段,用户可以根据 Pod 的优先级控制其系统资源消耗。 - 查看[资源配额设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) -- 查看[如何使用资源配额的详细示例](/zh/docs/tasks/administer-cluster/quota-api-object/)。 +- 查看[如何使用资源配额的详细示例](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)。 - 阅读[优先级类配额支持的设计文档](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md)。 了解更多信息。 - 参阅 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) diff --git a/content/zh/docs/concepts/scheduling-eviction/_index.md b/content/zh-cn/docs/concepts/scheduling-eviction/_index.md similarity index 63% rename from content/zh/docs/concepts/scheduling-eviction/_index.md rename to content/zh-cn/docs/concepts/scheduling-eviction/_index.md index de33763b462e0..3919a6cfa0715 100644 --- a/content/zh/docs/concepts/scheduling-eviction/_index.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/_index.md @@ -35,18 +35,18 @@ of terminating one or more Pods on Nodes. 
## 调度 -* [Kubernetes 调度器](/zh/docs/concepts/scheduling-eviction/kube-scheduler/) -* [将 Pods 指派到节点](/zh/docs/concepts/scheduling-eviction/assign-pod-node/) -* [Pod 开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/) -* [污点和容忍](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/) -* [调度框架](/zh/docs/concepts/scheduling-eviction/scheduling-framework) -* [调度器的性能调试](/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) -* [扩展资源的资源装箱](/zh/docs/concepts/scheduling-eviction/resource-bin-packing/) +* [Kubernetes 调度器](/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/) +* [将 Pods 指派到节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/) +* [Pod 开销](/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/) +* [污点和容忍](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/) +* [调度框架](/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework) +* [调度器的性能调试](/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) +* [扩展资源的资源装箱](/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing/) ## Pod 干扰 -* [Pod 优先级和抢占](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/) -* [节点压力驱逐](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/) -* [API发起的驱逐](/zh/docs/concepts/scheduling-eviction/api-eviction/) +* [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) +* [节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) +* [API发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/) diff --git a/content/zh/docs/concepts/scheduling-eviction/api-eviction.md b/content/zh-cn/docs/concepts/scheduling-eviction/api-eviction.md similarity index 92% rename from content/zh/docs/concepts/scheduling-eviction/api-eviction.md rename to content/zh-cn/docs/concepts/scheduling-eviction/api-eviction.md index f51af2f678d40..4b9615951ba64 100644 --- a/content/zh/docs/concepts/scheduling-eviction/api-eviction.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/api-eviction.md @@ -30,12 +30,12 @@ on the Pod. 此操作创建一个 `Eviction` 对象,该对象再驱动 API 服务器终止选定的 Pod。 API 发起的驱逐将遵从你的 -[`PodDisruptionBudgets`](/zh/docs/tasks/run-application/configure-pdb/) -和 [`terminationGracePeriodSeconds`](/zh/docs/concepts/workloads/pods/pod-lifecycle#pod-termination) +[`PodDisruptionBudgets`](/zh-cn/docs/tasks/run-application/configure-pdb/) +和 [`terminationGracePeriodSeconds`](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#pod-termination) 配置。 使用 API 创建 Eviction 对象,就像对 Pod 执行策略控制的 -[`DELETE` 操作](/zh/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod) +[`DELETE` 操作](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod) ## 调用 Eviction API -你可以使用 [Kubernetes 语言客户端](/zh/docs/tasks/administer-cluster/access-cluster-api/#programmatic-access-to-the-api) +你可以使用 [Kubernetes 语言客户端](/zh-cn/docs/tasks/administer-cluster/access-cluster-api/#programmatic-access-to-the-api) 来访问 Kubernetes API 并创建 `Eviction` 对象。 要执行此操作,你应该用 POST 发出要尝试的请求,类似于下面的示例: @@ -201,6 +201,6 @@ If you notice stuck evictions, try one of the following solutions: * Learn about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/). * Learn about [Pod Priority and Preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/). 
--> -* 了解如何使用 [Pod 干扰预算](/zh/docs/tasks/run-application/configure-pdb/) 保护你的应用。 -* 了解[节点压力引发的驱逐](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/)。 -* 了解 [Pod 优先级和抢占](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/)。 +* 了解如何使用 [Pod 干扰预算](/zh-cn/docs/tasks/run-application/configure-pdb/) 保护你的应用。 +* 了解[节点压力引发的驱逐](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)。 +* 了解 [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)。 diff --git a/content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md similarity index 92% rename from content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md rename to content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md index a210302498ae3..c6d249d46507c 100644 --- a/content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -30,7 +30,7 @@ services that communicate a lot into the same availability zone. 你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的{{< glossary_tooltip text="节点" term_id="node" >}}上运行。 有几种方法可以实现这点,推荐的方法都是用 -[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)来进行选择。 +[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)来进行选择。 通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 Pod 分散到节点上, 而不是将 Pod 放置在可用资源不足的节点上等等)。但在某些情况下,你可能需要进一步控制 Pod 被部署到的节点。例如,确保 Pod 最终落在连接了 SSD 的机器上, @@ -62,10 +62,10 @@ for a list of common node labels. --> ## 节点标签 {#built-in-node-labels} -与很多其他 Kubernetes 对象类似,节点也有[标签](/zh/docs/concepts/overview/working-with-objects/labels/)。 -你可以[手动地添加标签](/zh/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node)。 +与很多其他 Kubernetes 对象类似,节点也有[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)。 +你可以[手动地添加标签](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node)。 Kubernetes 也会为集群中所有节点添加一些标准的标签。 -参见[常用的标签、注解和污点](/zh/docs/reference/labels-annotations-taints/)以了解常见的节点标签。 +参见[常用的标签、注解和污点](/zh-cn/docs/reference/labels-annotations-taints/)以了解常见的节点标签。 {{< note >}} -[`NodeRestriction` 准入插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)防止 +[`NodeRestriction` 准入插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)防止 kubelet 使用 `node-restriction.kubernetes.io/` 前缀设置或修改标签。 要使用该标签前缀进行节点隔离: @@ -118,8 +118,8 @@ kubelet 使用 `node-restriction.kubernetes.io/` 前缀设置或修改标签。 2. Add labels with the `node-restriction.kubernetes.io/` prefix to your nodes, and use those labels in your [node selectors](#nodeselector). For example, `example.com.node-restriction.kubernetes.io/fips=true` or `example.com.node-restriction.kubernetes.io/pci-dss=true`. --> -1. 确保你在使用[节点鉴权](/zh/docs/reference/access-authn-authz/node/)机制并且已经启用了 - [NodeRestriction 准入插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)。 +1. 确保你在使用[节点鉴权](/zh-cn/docs/reference/access-authn-authz/node/)机制并且已经启用了 + [NodeRestriction 准入插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)。 2. 将带有 `node-restriction.kubernetes.io/` 前缀的标签添加到 Node 对象, 然后在[节点选择器](#nodeSelector)中使用这些标签。 例如,`example.com.node-restriction.kubernetes.io/fips=true` 或 @@ -142,7 +142,7 @@ Kubernetes 只会将 Pod 调度到拥有你所指定的每个标签的节点上 See [Assign Pods to Nodes](/docs/tasks/configure-pod-container/assign-pods-nodes) for more information. 
--> -进一步的信息可参见[将 Pod 指派给节点](/zh/docs/tasks/configure-pod-container/assign-pods-nodes)。 +进一步的信息可参见[将 Pod 指派给节点](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes)。 `NotIn` 和 `DoesNotExist` 可用来实现节点反亲和性行为。 -你也可以使用[节点污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/) +你也可以使用[节点污点](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/) 将 Pod 从特定节点上驱逐。 {{< note >}} @@ -279,7 +279,7 @@ satisfied. See [Assign Pods to Nodes using Node Affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) for more information. --> -参阅[使用节点亲和性来为 Pod 指派节点](/zh/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/), +参阅[使用节点亲和性来为 Pod 指派节点](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/), 以了解进一步的信息。 -在配置多个[调度方案](/zh/docs/reference/scheduling/config/#multiple-profiles)时, +在配置多个[调度方案](/zh-cn/docs/reference/scheduling/config/#multiple-profiles)时, 你可以将某个方案与节点亲和性关联起来,如果某个调度方案仅适用于某组特殊的节点时, 这样做是很有用的。 -要实现这点,可以在[调度器配置](/zh/docs/reference/scheduling/config/)中为 -[`NodeAffinity` 插件](/zh/docs/reference/scheduling/config/#scheduling-plugins)的 +要实现这点,可以在[调度器配置](/zh-cn/docs/reference/scheduling/config/)中为 +[`NodeAffinity` 插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins)的 `args` 字段添加 `addedAffinity`。例如: ```yaml @@ -397,7 +397,7 @@ does not support scheduling profiles. When the DaemonSet controller creates Pods, the default Kubernetes scheduler places those Pods and honors any `nodeAffinity` rules in the DaemonSet controller. --> -DaemonSet 控制器[为 DaemonSet 创建 Pods](/zh/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler), +DaemonSet 控制器[为 DaemonSet 创建 Pods](/zh-cn/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler), 但该控制器不理会调度方案。 DaemonSet 控制器创建 Pod 时,默认的 Kubernetes 调度器负责放置 Pod, 并遵从 DaemonSet 控制器中奢侈的 `nodeAffinity` 规则。 @@ -434,7 +434,7 @@ Kubernetes, so Pod labels also implicitly have namespaces. Any label selectors for Pod labels should specify the namespaces in which Kubernetes should look for those labels. --> -你通过[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors) +你通过[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors) 的形式来表达规则(Y),并可根据需要指定选关联的名字空间列表。 Pod 在 Kubernetes 中是名字空间作用域的对象,因此 Pod 的标签也隐式地具有名字空间属性。 针对 Pod 标签的所有标签选择算符都要指定名字空间,Kubernetes @@ -446,7 +446,7 @@ the node label that the system uses to denote the domain. For examples, see [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/). --> 你会通过 `topologyKey` 来表达拓扑域(X)的概念,其取值是系统用来标示域的节点标签键。 -相关示例可参见[常用标签、注解和污点](/zh/docs/reference/labels-annotations-taints/)。 +相关示例可参见[常用标签、注解和污点](/zh-cn/docs/reference/labels-annotations-taints/)。 {{< note >}} -查阅[设计文档](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) -以了解 Pod 亲和性与反亲和性的更多示例。 +查阅[设计文档](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/podaffinity.md) +以进一步熟悉 Pod 亲和性与反亲和性的示例。 -参阅 [ZooKeeper 教程](/zh/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) +参阅 [ZooKeeper 教程](/zh-cn/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) 了解一个 StatefulSet 的示例,该 StatefulSet 配置了反亲和性以实现高可用, 所使用的是与此例相同的技术。 @@ -811,11 +810,11 @@ The above Pod will only run on the node `kube-01`. * Learn how to use [nodeSelector](/docs/tasks/configure-pod-container/assign-pods-nodes/). 
* Learn how to use [affinity and anti-affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/). --> -* 进一步阅读[污点与容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)文档。 +* 进一步阅读[污点与容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)文档。 * 阅读[节点亲和性](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md) 和[Pod 间亲和性与反亲和性](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) 的设计文档。 -* 了解[拓扑管理器](/zh/docs/tasks/administer-cluster/topology-manager/)如何参与节点层面资源分配决定。 -* 了解如何使用 [nodeSelector](/zh/docs/tasks/configure-pod-container/assign-pods-nodes/)。 -* 了解如何使用[亲和性和反亲和性](/zh/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/)。 +* 了解[拓扑管理器](/zh-cn/docs/tasks/administer-cluster/topology-manager/)如何参与节点层面资源分配决定。 +* 了解如何使用 [nodeSelector](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/)。 +* 了解如何使用[亲和性和反亲和性](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/)。 diff --git a/content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md b/content/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler.md similarity index 88% rename from content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md rename to content/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler.md index 30791e6249005..067dd959fa139 100644 --- a/content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler.md @@ -57,7 +57,7 @@ is the default scheduler for Kubernetes and runs as part of the kube-scheduler is designed so that, if you want and need to, you can write your own scheduling component and use that instead. --> -[kube-scheduler](/zh/docs/reference/command-line-tools-reference/kube-scheduler/) +[kube-scheduler](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/) 是 Kubernetes 集群的默认调度器,并且是集群 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 的一部分。 如果你真的希望或者有这方面的需求,kube-scheduler 在设计上是允许 @@ -162,9 +162,9 @@ of the scheduler: `QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You can also configure the kube-scheduler to run different profiles. --> -1. [调度策略](/zh/docs/reference/scheduling/policies) 允许你配置过滤的 _断言(Predicates)_ +1. [调度策略](/zh-cn/docs/reference/scheduling/policies) 允许你配置过滤的 _断言(Predicates)_ 和打分的 _优先级(Priorities)_ 。 -2. [调度配置](/zh/docs/reference/scheduling/config/#profiles) 允许你配置实现不同调度阶段的插件, +2. 
[调度配置](/zh-cn/docs/reference/scheduling/config/#profiles) 允许你配置实现不同调度阶段的插件, 包括:`QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit` 等等。 你也可以配置 kube-scheduler 运行不同的配置文件。 @@ -178,10 +178,10 @@ of the scheduler: * Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/) * Learn about [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) --> -* 阅读关于 [调度器性能调优](/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) -* 阅读关于 [Pod 拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/) -* 阅读关于 kube-scheduler 的 [参考文档](/zh/docs/reference/command-line-tools-reference/kube-scheduler/) -* 阅读 [kube-scheduler 配置参考 (v1beta3)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta3/) -* 了解关于 [配置多个调度器](/zh/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) 的方式 -* 了解关于 [拓扑结构管理策略](/zh/docs/tasks/administer-cluster/topology-manager/) -* 了解关于 [Pod 额外开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/) +* 阅读关于 [调度器性能调优](/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) +* 阅读关于 [Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +* 阅读关于 kube-scheduler 的 [参考文档](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/) +* 阅读 [kube-scheduler 配置参考 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/) +* 了解关于 [配置多个调度器](/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) 的方式 +* 了解关于 [拓扑结构管理策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/) +* 了解关于 [Pod 额外开销](/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/) diff --git a/content/zh/docs/concepts/scheduling-eviction/node-pressure-eviction.md b/content/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction.md similarity index 98% rename from content/zh/docs/concepts/scheduling-eviction/node-pressure-eviction.md rename to content/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction.md index 810324c0c2810..8a0910315e232 100644 --- a/content/zh/docs/concepts/scheduling-eviction/node-pressure-eviction.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction.md @@ -128,9 +128,9 @@ memory is reclaimable under pressure. --> `memory.available` 的值来自 cgroupfs,而不是像 `free -m` 这样的工具。 这很重要,因为 `free -m` 在容器中不起作用,如果用户使用 -[节点可分配资源](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) +[节点可分配资源](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) 这一功能特性,资源不足的判定是基于 CGroup 层次结构中的用户 Pod 所处的局部及 CGroup 根节点作出的。 -这个[脚本](/zh/examples/admin/resource/memory-available.sh) +这个[脚本](/zh-cn/examples/admin/resource/memory-available.sh) 重现了 kubelet 为计算 `memory.available` 而执行的相同步骤。 kubelet 在其计算中排除了 inactive_file(即非活动 LRU 列表上基于文件来虚拟的内存的字节数), 因为它假定在压力下内存是可回收的。 @@ -161,7 +161,7 @@ For a list of the deprecated features, see [kubelet garbage collection deprecati --> 一些 kubelet 垃圾收集功能已被弃用,以支持驱逐。 有关已弃用功能的列表,请参阅 -[kubelet 垃圾收集弃用](/zh/docs/concepts/cluster-administration/kubelet-garbage-collection/#deprecation)。 +[kubelet 垃圾收集弃用](/zh-cn/docs/concepts/cluster-administration/kubelet-garbage-collection/#deprecation)。 {{}} 你可以使用 `--eviction-minimum-reclaim` 标志或 -[kubelet 配置文件](/zh/docs/tasks/administer-cluster/kubelet-config-file/) +[kubelet 配置文件](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) 为每个资源配置最小回收量。 当 kubelet 注意到某个资源耗尽时,它会继续回收该资源,直到回收到你所指定的数量为止。 @@ -772,7 +772,7 @@ to estimate or measure an optimal memory limit value for that container. 
* Check out the [Eviction API](/docs/reference/generated/kubernetes-api/{{}}/#create-eviction-pod-v1-core) --> * 了解 [API 发起的驱逐](/docs/reference/generated/kubernetes-api/v1.23/) -* 了解 [Pod 优先级和驱逐](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/) +* 了解 [Pod 优先级和驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) * 了解 [PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/) -* 了解[服务质量](/zh/docs/tasks/configure-pod-container/quality-service-pod/)(QoS) +* 了解[服务质量](/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/)(QoS) * 查看[驱逐 API](/docs/reference/generated/kubernetes-api/{{}}/#create-eviction-pod-v1-core) diff --git a/content/zh/docs/concepts/scheduling-eviction/pod-overhead.md b/content/zh-cn/docs/concepts/scheduling-eviction/pod-overhead.md similarity index 68% rename from content/zh/docs/concepts/scheduling-eviction/pod-overhead.md rename to content/zh-cn/docs/concepts/scheduling-eviction/pod-overhead.md index 6c9f119d4d9f2..046a27cd99f97 100644 --- a/content/zh/docs/concepts/scheduling-eviction/pod-overhead.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/pod-overhead.md @@ -18,17 +18,17 @@ weight: 30 -{{< feature-state for_k8s_version="v1.18" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} 在节点上运行 Pod 时,Pod 本身占用大量系统资源。这些是运行 Pod 内容器所需资源之外的资源。 -_POD 开销_ 是一个特性,用于计算 Pod 基础设施在容器请求和限制之上消耗的资源。 +在 Kubernetes 中,_POD 开销_ 是一种方法,用于计算 Pod 基础设施在容器请求和限制之上消耗的资源。 @@ -40,8 +40,8 @@ time according to the overhead associated with the Pod's [RuntimeClass](/docs/concepts/containers/runtime-class/). --> -在 Kubernetes 中,Pod 的开销是根据与 Pod 的 [RuntimeClass](/zh/docs/concepts/containers/runtime-class/) -相关联的开销在[准入](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)时设置的。 +在 Kubernetes 中,Pod 的开销是根据与 Pod 的 [RuntimeClass](/zh-cn/docs/concepts/containers/runtime-class/) +相关联的开销在[准入](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)时设置的。 -## 启用 Pod 开销 {#set-up} +## 配置 Pod 开销 {#set-up} -你需要确保在集群中启用了 `PodOverhead` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) -(在 1.18 默认是开启的),以及一个定义了 `overhead` 字段的 `RuntimeClass`。 +你需要确保使用一个定义了 `overhead` 字段的 `RuntimeClass`。 -要使用 PodOverhead 特性,需要一个定义了 `overhead` 字段的 RuntimeClass。 +要使用 Pod 开销,你需要一个定义了 `overhead` 字段的 RuntimeClass。 作为例子,下面的 RuntimeClass 定义中包含一个虚拟化所用的容器运行时, RuntimeClass 如下,其中每个 Pod 大约使用 120MiB 用来运行虚拟机和寄宿操作系统: ```yaml ---- -kind: RuntimeClass apiVersion: node.k8s.io/v1 +kind: RuntimeClass metadata: - name: kata-fc + name: kata-fc handler: kata-fc overhead: - podFixed: - memory: "120Mi" - cpu: "250m" + podFixed: + memory: "120Mi" + cpu: "250m" ``` -在准入阶段 RuntimeClass [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) +在准入阶段 RuntimeClass [准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/) 更新工作负载的 PodSpec 以包含 RuntimeClass 中定义的 `overhead`。如果 PodSpec 中已定义该字段,该 Pod 将会被拒绝。 在这个例子中,由于只指定了 RuntimeClass 名称,所以准入控制器更新了 Pod,使之包含 `overhead`。 @@ -141,8 +137,7 @@ RuntimeClass 中定义的 `overhead`。如果 PodSpec 中已定义该字段, -在 RuntimeClass 准入控制器之后,可以检验一下已更新的 PodSpec: - +在 RuntimeClass 准入控制器进行修改后,你可以查看更新后的 PodSpec: ```bash kubectl get pod test-pod -o jsonpath='{.spec.overhead}' ``` @@ -156,10 +151,11 @@ map[cpu:250m memory:120Mi] ``` -如果定义了 ResourceQuata, 则容器请求的总量以及 `overhead` 字段都将计算在内。 +如果定义了 [ResourceQuata](/zh-cn/docs/concepts/policy/resource-quotas/), +则容器请求的总量以及 `overhead` 字段都将计算在内。 +Once a Pod is scheduled to a node, the kubelet on that node 
creates a new {{< glossary_tooltip +text="cgroup" term_id="cgroup" >}} for the Pod. It is within this pod that the underlying +container runtime will create containers. +--> 一旦 Pod 被调度到了某个节点, 该节点上的 kubelet 将为该 Pod 新建一个 {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}。 底层容器运行时将在这个 Pod 中创建容器。 @@ -189,8 +187,8 @@ Burstable QoS),kubelet 会为与该资源(CPU 的 `cpu.cfs_quota_us` 以 相关的 Pod cgroup 设定一个上限。该上限基于 PodSpec 中定义的容器限制总量与 `overhead` 之和。 对于 CPU,如果 Pod 的 QoS 是 Guaranteed 或者 Burstable,kubelet 会基于容器请求总量与 PodSpec 中定义的 `overhead` 之和设置 `cpu.shares`。 @@ -199,6 +197,7 @@ PodSpec 中定义的 `overhead` 之和设置 `cpu.shares`。 Looking at our example, verify the container requests for the workload: --> 请看这个例子,验证工作负载的容器请求: + ```bash kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}' ``` @@ -207,6 +206,7 @@ kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}' The total container requests are 2000m CPU and 200MiB of memory: --> 容器请求总计 2000m CPU 和 200MiB 内存: + ``` map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi] ``` @@ -215,18 +215,19 @@ map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi] Check this against what is observed by the node: --> 对照从节点观察到的情况来检查一下: + ```bash kubectl describe node | grep test-pod -B2 ``` -该输出显示请求了 2250m CPU 以及 320MiB 内存,包含了 PodOverhead 在内: +The output shows requests for 2250m CPU, and for 320MiB of memory. The requests include Pod overhead: +--> +该输出显示请求了 2250m CPU 以及 320MiB 内存。请求包含了 Pod 开销在内: ``` - Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE - --------- ---- ------------ ---------- --------------- ------------- --- - default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE + --------- ---- ------------ ---------- --------------- ------------- --- + default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m ``` 执行结果的 cgroup 路径中包含了该 Pod 的 `pause` 容器。Pod 级别的 cgroup 在即上一层目录。 + ``` - "cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a" + "cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a" ``` +In this specific case, the pod cgroup path is `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. +Verify the Pod level cgroup setting for memory: +--> 在这个例子中,该 Pod 的 cgroup 路径是 `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`。 验证内存的 Pod 级别 cgroup 设置: @@ -300,6 +304,7 @@ In this specific case, the pod cgroup path is `kubepods/podd7f4b509-cf94-4951-94 This is 320 MiB, as expected: --> 和预期的一样,这一数值为 320 MiB。 + ``` 335544320 ``` @@ -310,14 +315,12 @@ This is 320 MiB, as expected: ### 可观察性 在 [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) 中可以通过 -`kube_pod_overhead` 指标来协助确定何时使用 PodOverhead +`kube_pod_overhead_*` 指标来协助确定何时使用 Pod 开销, 以及协助观察以一个既定开销运行的工作负载的稳定性。 该特性在 kube-state-metrics 的 1.9 发行版本中不可用,不过预计将在后续版本中发布。 在此之前,用户需要从源代码构建 kube-state-metrics。 @@ -325,9 +328,9 @@ from source in the meantime. 
## {{% heading "whatsnext" %}} - -* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/) -* [PodOverhead 设计](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead) +* 学习更多关于 [RuntimeClass](/zh-cn/docs/concepts/containers/runtime-class/) 的信息 +* 阅读 [PodOverhead 设计](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)增强建议以获取更多上下文 diff --git a/content/zh/docs/concepts/scheduling-eviction/pod-priority-preemption.md b/content/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption.md similarity index 96% rename from content/zh/docs/concepts/scheduling-eviction/pod-priority-preemption.md rename to content/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption.md index f22065f8f921a..afa224896ad60 100644 --- a/content/zh/docs/concepts/scheduling-eviction/pod-priority-preemption.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption.md @@ -23,7 +23,7 @@ importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible. --> -[Pod](/zh/docs/concepts/workloads/pods/) 可以有 _优先级_。 +[Pod](/zh-cn/docs/concepts/workloads/pods/) 可以有 _优先级_。 优先级表示一个 Pod 相对于其他 Pod 的重要性。 如果一个 Pod 无法被调度,调度程序会尝试抢占(驱逐)较低优先级的 Pod, 以使悬决 Pod 可以被调度。 @@ -44,7 +44,7 @@ for details. 在一个并非所有用户都是可信的集群中,恶意用户可能以最高优先级创建 Pod, 导致其他 Pod 被驱逐或者无法被调度。 管理员可以使用 ResourceQuota 来阻止用户创建高优先级的 Pod。 -参见[默认限制优先级消费](/zh/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default)。 +参见[默认限制优先级消费](/zh-cn/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default)。 {{< /warning >}} @@ -82,7 +82,7 @@ These are common classes and are used to [ensure that critical components are al --> Kubernetes 已经提供了 2 个 PriorityClass: `system-cluster-critical` 和 `system-node-critical`。 -这些是常见的类,用于[确保始终优先调度关键组件](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)。 +这些是常见的类,用于[确保始终优先调度关键组件](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)。 {{< /note >}} #### 支持 PodDisruptionBudget,但不保证 -[PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/) +[PodDisruptionBudget](/zh-cn/docs/concepts/workloads/pods/disruptions/) (PDB) 允许多副本应用程序的所有者限制因自愿性质的干扰而同时终止的 Pod 数量。 Kubernetes 在抢占 Pod 时支持 PDB,但对 PDB 的支持是基于尽力而为原则的。 调度器会尝试寻找不会因被抢占而违反 PDB 的牺牲者,但如果没有找到这样的牺牲者, @@ -639,7 +639,7 @@ exceeding its requests, it won't be evicted. Another Pod with higher priority that exceeds its requests may be evicted. --> kubelet 使用优先级来确定 -[节点压力驱逐](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/) Pod 的顺序。 +[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) Pod 的顺序。 你可以使用 QoS 类来估计 Pod 最有可能被驱逐的顺序。kubelet 根据以下因素对 Pod 进行驱逐排名: 1. 对紧俏资源的使用是否超过请求值 @@ -647,7 +647,7 @@ kubelet 使用优先级来确定 1. 
相对于请求的资源使用量 有关更多详细信息,请参阅 -[kubelet 驱逐时 Pod 的选择](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)。 +[kubelet 驱逐时 Pod 的选择](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)。 当某 Pod 的资源用量未超过其请求时,kubelet 节点压力驱逐不会驱逐该 Pod。 如果优先级较低的 Pod 没有超过其请求,则不会被驱逐。 @@ -663,7 +663,7 @@ kubelet 使用优先级来确定 * Learn about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/) --> * 阅读有关将 ResourceQuota 与 PriorityClass 结合使用的信息: - [默认限制优先级消费](/zh/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default) -* 了解 [Pod 干扰](/zh/docs/concepts/workloads/pods/disruptions/) + [默认限制优先级消费](/zh-cn/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default) +* 了解 [Pod 干扰](/zh-cn/docs/concepts/workloads/pods/disruptions/) * 了解 [API 发起的驱逐](/docs/reference/generated/kubernetes-api/v1.23/) -* 了解[节点压力驱逐](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/) +* 了解[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing.md b/content/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing.md new file mode 100644 index 0000000000000..42b35709b26c9 --- /dev/null +++ b/content/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing.md @@ -0,0 +1,358 @@ +--- +title: 扩展资源的资源装箱 +content_type: concept +weight: 80 +--- + + + + + +在 kube-scheduler 的[调度插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins) +`NodeResourcesFit` 中存在两种支持资源装箱(bin packing)的策略:`MostAllocated` 和 +`RequestedToCapacityRatio`。 + + + + +## 使用 MostAllocated 策略启用资源装箱 {#enabling-bin-packing-using-mostallocated-strategy} + +`MostAllocated` 策略基于资源的利用率来为节点计分,优选分配比率较高的节点。 +针对每种资源类型,你可以设置一个权重值以改变其对节点得分的影响。 + +要为插件 `NodeResourcesFit` 设置 `MostAllocated` 策略, +可以使用一个类似于下面这样的[调度器配置](/zh-cn/docs/reference/scheduling/config/): + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1beta3 +kind: KubeSchedulerConfiguration +profiles: +- pluginConfig: + - args: + scoringStrategy: + resources: + - name: cpu + weight: 1 + - name: memory + weight: 1 + - name: intel.com/foo + weight: 3 + - name: intel.com/bar + weight: 3 + type: MostAllocated + name: NodeResourcesFit +``` + + +要进一步了解其它参数及其默认配置,请参阅 +[`NodeResourcesFitArgs`](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs) +的 API 文档。 + + +## 使用 RequestedToCapacityRatio 策略来启用资源装箱 {#enabling-bin-packing-using-requestedtocapacityratio} + +`RequestedToCapacityRatio` 策略允许用户基于请求值与容量的比率,针对参与节点计分的每类资源设置权重。 +这一策略是的用户可以使用合适的参数来对扩展资源执行装箱操作,进而提升大规模集群中稀有资源的利用率。 +此策略根据所分配资源的一个配置函数来评价节点。 +`NodeResourcesFit` 计分函数中的 `RequestedToCapacityRatio` 可以通过字段 +[scoringStrategy](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) +来控制。 +在 `scoringStrategy` 字段中,你可以配置两个参数:`requestedToCapacityRatioParam` +和 `resources`。`requestedToCapacityRatioParam` 参数中的 `shape` +设置使得用户能够调整函数的算法,基于 `utilization` 和 `score` 值计算最少请求或最多请求。 +`resources` 参数中包含计分过程中需要考虑的资源的 `name`,以及用来设置每种资源权重的 `weight`。 + + +下面是一个配置示例,使用 `requestedToCapacityRatio` 字段为扩展资源 `intel.com/foo` +和 `intel.com/bar` 设置装箱行为: + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1beta3 +kind: KubeSchedulerConfiguration +profiles: +- pluginConfig: + - args: + scoringStrategy: + resources: + - name: intel.com/foo + weight: 3 + - name: intel.com/bar + weight: 3 + requestedToCapacityRatioParam: + 
shape: + - utilization: 0 + score: 0 + - utilization: 100 + score: 10 + type: RequestedToCapacityRatio + name: NodeResourcesFit +``` + + +使用 kube-scheduler 标志 `--config=/path/to/config/file` +引用 `KubeSchedulerConfiguration` 文件,可以将配置传递给调度器。 + + +要进一步了解其它参数及其默认配置,可以参阅 +[`NodeResourcesFitArgs`](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs) +的 API 文档。 + + +### 调整计分函数 {#tuning-the-score-function} + +`shape` 用于指定 `RequestedToCapacityRatio` 函数的行为。 + +```yaml +shape: + - utilization: 0 + score: 0 + - utilization: 100 + score: 10 +``` + + +上面的参数在 `utilization` 为 0% 时给节点评分为 0,在 `utilization` 为 +100% 时给节点评分为 10,因此启用了装箱行为。 +要启用最少请求(least requested)模式,必须按如下方式反转得分值。 + +```yaml + shape: + - utilization: 0 + score: 10 + - utilization: 100 + score: 0 +``` + + +`resources` 是一个可选参数,默认情况下设置为: + +``` yaml +resources: + - name: cpu + weight: 1 + - name: memory + weight: 1 +``` + + +它可以像下面这样用来添加扩展资源: + +```yaml +resources: + - name: intel.com/foo + weight: 5 + - name: cpu + weight: 3 + - name: memory + weight: 1 +``` + + +`weight` 参数是可选的,如果未指定,则设置为 1。 +同时,`weight` 不能设置为负值。 + + +### 节点容量分配的评分 {#node-scoring-for-capacity-allocation} + +本节适用于希望了解此功能的内部细节的人员。 +以下是如何针对给定的一组值来计算节点得分的示例。 + + +请求的资源: + +``` +intel.com/foo : 2 +memory: 256MB +cpu: 2 +``` + + +资源权重: + +``` +intel.com/foo : 5 +memory: 1 +cpu: 3 +``` + +``` +FunctionShapePoint {{0, 0}, {100, 10}} +``` + + +节点 1 配置: + +``` +可用: + intel.com/foo : 4 + memory : 1 GB + cpu: 8 + +已用: + intel.com/foo: 1 + memory: 256MB + cpu: 1 +``` + + +节点得分: + +``` +intel.com/foo = resourceScoringFunction((2+1),4) + = (100 - ((4-3)*100/4) + = (100 - 25) + = 75 # requested + used = 75% * available + = rawScoringFunction(75) + = 7 # floor(75/10) + +memory = resourceScoringFunction((256+256),1024) + = (100 -((1024-512)*100/1024)) + = 50 # requested + used = 50% * available + = rawScoringFunction(50) + = 5 # floor(50/10) + +cpu = resourceScoringFunction((2+1),8) + = (100 -((8-3)*100/8)) + = 37.5 # requested + used = 37.5% * available + = rawScoringFunction(37.5) + = 3 # floor(37.5/10) + +NodeScore = (7 * 5) + (5 * 1) + (3 * 3) / (5 + 1 + 3) + = 5 +``` + + +节点 2 配置: + +``` +可用: + intel.com/foo: 8 + memory: 1GB + cpu: 8 + +已用: + intel.com/foo: 2 + memory: 512MB + cpu: 6 +``` + + +节点得分: + +``` +intel.com/foo = resourceScoringFunction((2+2),8) + = (100 - ((8-4)*100/8) + = (100 - 50) + = 50 + = rawScoringFunction(50) + = 5 + +memory = resourceScoringFunction((256+512),1024) + = (100 -((1024-768)*100/1024)) + = 75 + = rawScoringFunction(75) + = 7 + +cpu = resourceScoringFunction((2+6),8) + = (100 -((8-8)*100/8)) + = 100 + = rawScoringFunction(100) + = 10 + +NodeScore = (5 * 5) + (7 * 1) + (10 * 3) / (5 + 1 + 3) + = 7 +``` + +## {{% heading "whatsnext" %}} + + +- 继续阅读[调度器框架](/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework/) +- 继续阅读[调度器配置](/zh-cn/docs/reference/scheduling/config/) + diff --git a/content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md similarity index 96% rename from content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md rename to content/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md index dd6e1da395f39..6ceffe84613c1 100644 --- a/content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md @@ -23,7 +23,7 @@ is the Kubernetes default scheduler. 
It is responsible for placement of Pods on Nodes in a cluster. --> 作为 kubernetes 集群的默认调度器, -[kube-scheduler](/zh/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler) +[kube-scheduler](/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler) 主要负责将 Pod 调度到集群的 Node 上。 -要修改这个值,先编辑 [kube-scheduler 的配置文件](/zh/docs/reference/config-api/kube-scheduler-config.v1beta3/) +要修改这个值,先编辑 [kube-scheduler 的配置文件](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/) 然后重启调度器。 大多数情况下,这个配置文件是 `/etc/kubernetes/config/kube-scheduler.yaml`。 @@ -128,7 +128,7 @@ stops searching for more feasible nodes and moves on to the kube-scheduler 会将它转换为节点数的整数值。在调度期间,如果 kube-scheduler 已确认的可调度节点数足以超过了配置的百分比数量, kube-scheduler 将停止继续查找可调度节点并继续进行 -[打分阶段](/zh/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation)。 +[打分阶段](/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation)。 -* 参见 [kube-scheduler 配置参考 (v1beta3)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta3/) +* 参见 [kube-scheduler 配置参考 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/) diff --git a/content/zh/docs/concepts/scheduling-eviction/scheduling-framework.md b/content/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework.md similarity index 99% rename from content/zh/docs/concepts/scheduling-eviction/scheduling-framework.md rename to content/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework.md index 9e0fee45ac81f..80cefde2e1266 100644 --- a/content/zh/docs/concepts/scheduling-eviction/scheduling-framework.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework.md @@ -451,7 +451,7 @@ enabled by default. --> 你可以在调度器配置中启用或禁用插件。 如果你在使用 Kubernetes v1.18 或更高版本,大部分调度 -[插件](/zh/docs/reference/scheduling/config/#scheduling-plugins) +[插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins) 都在使用中且默认启用。 如果你正在使用 Kubernetes v1.18 或更高版本,你可以将一组插件设置为 一个调度器配置文件,然后定义不同的配置文件来满足各类工作负载。 -了解更多关于[多配置文件](/zh/docs/reference/scheduling/config/#multiple-profiles)。 +了解更多关于[多配置文件](/zh-cn/docs/reference/scheduling/config/#multiple-profiles)。 diff --git a/content/zh/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md similarity index 91% rename from content/zh/docs/concepts/scheduling-eviction/taint-and-toleration.md rename to content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md index d1d400217c7a1..39ad284a00a0f 100644 --- a/content/zh/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -11,7 +11,7 @@ is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attrac a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a hard requirement). _Taints_ are the opposite -- they allow a node to repel a set of pods. --> -[_节点亲和性_](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) +[_节点亲和性_](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) 是 {{< glossary_tooltip text="Pod" term_id="pod" >}} 的一种属性,它使 Pod 被吸引到一类特定的{{< glossary_tooltip text="节点" term_id="node" >}} (这可能出于一种偏好,也可能是硬性要求)。 @@ -42,7 +42,7 @@ marks that the node should not accept any pods that do not tolerate the taints. You add a taint to a node using [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint). 
For example, --> -您可以使用命令 [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) 给节点增加一个污点。比如, +你可以使用命令 [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) 给节点增加一个污点。比如, ```shell kubectl taint nodes node1 key1=value1:NoSchedule @@ -75,7 +75,7 @@ You specify a toleration for a pod in the PodSpec. Both of the following tolerat taint created by the `kubectl taint` line above, and thus a pod with either toleration would be able to schedule onto `node1`: --> -您可以在 PodSpec 中定义 Pod 的容忍度。 +你可以在 PodSpec 中定义 Pod 的容忍度。 下面两个容忍度均与上面例子中使用 `kubectl taint` 命令创建的污点相匹配, 因此如果一个 Pod 拥有其中的任何一个容忍度都能够被分配到 `node1` : @@ -140,7 +140,7 @@ This is a "preference" or "soft" version of `NoSchedule` - the system will *try* pod that does not tolerate the taint on the node, but it is not required. The third kind of `effect` is `NoExecute`, described later. --> -上述例子中 `effect` 使用的值为 `NoSchedule`,您也可以使用另外一个值 `PreferNoSchedule`。 +上述例子中 `effect` 使用的值为 `NoSchedule`,你也可以使用另外一个值 `PreferNoSchedule`。 这是“优化”或“软”版本的 `NoSchedule` —— 系统会 *尽量* 避免将 Pod 调度到存在其不能容忍污点的节点上, 但这不是强制的。`effect` 的值还可以设置为 `NoExecute`,下文会详细描述这个值。 @@ -150,13 +150,12 @@ The way Kubernetes processes multiple taints and tolerations is like a filter: s with all of a node's taints, then ignore the ones for which the pod has a matching toleration; the remaining un-ignored taints have the indicated effects on the pod. In particular, --> -您可以给一个节点添加多个污点,也可以给一个 Pod 添加多个容忍度设置。 +你可以给一个节点添加多个污点,也可以给一个 Pod 添加多个容忍度设置。 Kubernetes 处理多个污点和容忍度的过程就像一个过滤器:从一个节点的所有污点开始遍历, 过滤掉那些 Pod 中存在与之相匹配的容忍度的污点。余下未被过滤的污点的 effect 值决定了 Pod 是否会被分配到该节点,特别是以下情况: -例如,假设您给一个节点添加了如下污点 +例如,假设你给一个节点添加了如下污点 ```shell kubectl taint nodes node1 key1=value1:NoSchedule @@ -253,7 +252,7 @@ taint is removed before that time, the pod will not be evicted. Taints and tolerations are a flexible way to steer pods *away* from nodes or evict pods that shouldn't be running. A few of the use cases are --> -通过污点和容忍度,可以灵活地让 Pod *避开* 某些节点或者将 Pod 从某些节点驱逐。下面是几个使用例子: +通过污点和容忍度,可以灵活地让 Pod **避开** 某些节点或者将 Pod 从某些节点驱逐。下面是几个使用例子: -* **专用节点**:如果您想将某些节点专门分配给特定的一组用户使用,您可以给这些节点添加一个污点(即, +* **专用节点**:如果你想将某些节点专门分配给特定的一组用户使用,你可以给这些节点添加一个污点(即, `kubectl taint nodes nodename dedicated=groupName:NoSchedule`), 然后给这组用户的 Pod 添加一个相对应的 toleration(通过编写一个自定义的 - [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/),很容易就能做到)。 + [准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/),很容易就能做到)。 拥有上述容忍度的 Pod 就能够被分配到上述专用节点,同时也能够被分配到集群中的其它节点。 - 如果您希望这些 Pod 只能被分配到上述专用节点,那么您还需要给这些专用节点另外添加一个和上述 - 污点类似的 label (例如:`dedicated=groupName`),同时 还要在上述准入控制器中给 Pod + 如果你希望这些 Pod 只能被分配到上述专用节点,那么你还需要给这些专用节点另外添加一个和上述 + 污点类似的 label (例如:`dedicated=groupName`),同时 还要在上述准入控制器中给 Pod 增加节点亲和性要求上述 Pod 只能被分配到添加了 `dedicated=groupName` 标签的节点上。 -前文提到过污点的 effect 值 `NoExecute`会影响已经在节点上运行的 Pod +前文提到过污点的 effect 值 `NoExecute` 会影响已经在节点上运行的 Pod * 如果 Pod 不能忍受 effect 值为 `NoExecute` 的污点,那么 Pod 将马上被驱逐 * 如果 Pod 能够忍受 effect 值为 `NoExecute` 的污点,但是在容忍度定义中没有指定 @@ -396,7 +395,7 @@ as the master becoming partitioned from the nodes. --> {{< note >}} 为了保证由于节点问题引起的 Pod 驱逐 -[速率限制](/zh/docs/concepts/architecture/nodes/)行为正常, +[速率限制](/zh-cn/docs/concepts/architecture/nodes/)行为正常, 系统实际上会以限定速率的方式添加污点。在像主控节点与工作节点间通信中断等场景下, 这样做可以避免 Pod 被大量驱逐。 {{< /note >}} @@ -462,7 +461,7 @@ Nodes for 5 minutes after one of these problems is detected. This ensures that DaemonSet pods are never evicted due to these problems. 
--> -[DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/) 中的 Pod 被创建时, +[DaemonSet](/zh-cn/docs/concepts/workloads/controllers/daemonset/) 中的 Pod 被创建时, 针对以下污点自动添加的 `NoExecute` 的容忍度将不会指定 `tolerationSeconds`: * `node.kubernetes.io/unreachable` @@ -488,7 +487,7 @@ control plane adds the `node.kubernetes.io/memory-pressure` taint. --> 控制平面使用节点{{}}自动创建 -与[节点状况](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions)对应的带有 `NoSchedule` 效应的污点。 +与[节点状况](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions)对应的带有 `NoSchedule` 效应的污点。 调度器在进行调度时检查污点,而不是检查节点状况。这确保节点状况不会直接影响调度。 例如,如果 `DiskPressure` 节点状况处于活跃状态,则控制平面 @@ -507,7 +506,7 @@ onto the affected node. --> 对于新创建的 Pod,可以通过添加相应的 Pod 容忍度来忽略节点状况。 -控制平面还在具有除 `BestEffort` 之外的 {{}}的 pod 上 +控制平面还在具有除 `BestEffort` 之外的 {{}}的 Pod 上 添加 `node.kubernetes.io/memory-pressure` 容忍度。 这是因为 Kubernetes 将 `Guaranteed` 或 `Burstable` QoS 类中的 Pod(甚至没有设置内存请求的 Pod) 视为能够应对内存压力,而新创建的 `BestEffort` Pod 不会被调度到受影响的节点上。 @@ -530,14 +529,14 @@ DaemonSet 控制器自动为所有守护进程添加如下 `NoSchedule` 容忍 * `node.kubernetes.io/disk-pressure` * `node.kubernetes.io/pid-pressure` (1.14 或更高版本) * `node.kubernetes.io/unschedulable` (1.10 或更高版本) - * `node.kubernetes.io/network-unavailable` (*只适合主机网络配置*) + * `node.kubernetes.io/network-unavailable` (**只适合主机网络配置**) -添加上述容忍度确保了向后兼容,您也可以选择自由向 DaemonSet 添加容忍度。 +添加上述容忍度确保了向后兼容,你也可以选择自由向 DaemonSet 添加容忍度。 ## {{% heading "whatsnext" %}} @@ -545,5 +544,5 @@ arbitrary tolerations to DaemonSets. * Read about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/) and how you can configure it * Read about [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/) --> -* 阅读[节点压力驱逐](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/),以及如何配置其行为 -* 阅读 [Pod 优先级](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/) +* 阅读[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/),以及如何配置其行为 +* 阅读 [Pod 优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) diff --git a/content/zh/docs/concepts/security/_index.md b/content/zh-cn/docs/concepts/security/_index.md similarity index 100% rename from content/zh/docs/concepts/security/_index.md rename to content/zh-cn/docs/concepts/security/_index.md diff --git a/content/zh/docs/concepts/security/controlling-access.md b/content/zh-cn/docs/concepts/security/controlling-access.md similarity index 88% rename from content/zh/docs/concepts/security/controlling-access.md rename to content/zh-cn/docs/concepts/security/controlling-access.md index 5946a002b6882..b8b671cb8a6ae 100644 --- a/content/zh/docs/concepts/security/controlling-access.md +++ b/content/zh-cn/docs/concepts/security/controlling-access.md @@ -1,6 +1,7 @@ --- title: Kubernetes API 访问控制 content_type: concept +weight: 50 --- @@ -28,8 +30,8 @@ authorized for API access. 
When a request reaches the API, it goes through several stages, illustrated in the following diagram: --> -用户使用 `kubectl`、客户端库或构造 REST 请求来访问 [Kubernetes API](/zh/docs/concepts/overview/kubernetes-api/)。 -人类用户和 [Kubernetes 服务账户](/zh/docs/tasks/configure-pod-container/configure-service-account/)都可以被鉴权访问 API。 +用户使用 `kubectl`、客户端库或构造 REST 请求来访问 [Kubernetes API](/zh-cn/docs/concepts/overview/kubernetes-api/)。 +人类用户和 [Kubernetes 服务账户](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)都可以被鉴权访问 API。 当请求到达 API 时,它会经历多个阶段,如下图所示: ![Kubernetes API 请求处理步骤示意图](/images/docs/admin/access-control-overview.svg) @@ -72,7 +74,7 @@ Authenticators are described in more detail in --> 如上图步骤 **1** 所示,建立 TLS 后, HTTP 请求将进入认证(Authentication)步骤。 集群创建脚本或者集群管理员配置 API 服务器,使之运行一个或多个身份认证组件。 -身份认证组件在[认证](/zh/docs/reference/access-authn-authz/authentication/)节中有更详细的描述。 +身份认证组件在[认证](/zh-cn/docs/reference/access-authn-authz/authentication/)节中有更详细的描述。 ## 准入控制 {#admission-control} @@ -223,7 +225,7 @@ for the corresponding API object, and then written to the object store (shown as 除了拒绝对象之外,准入控制器还可以为字段设置复杂的默认值。 -可用的准入控制模块在[准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/)中进行了描述。 +可用的准入控制模块在[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)中进行了描述。 请求通过所有准入控制器后,将使用检验例程检查对应的 API 对象,然后将其写入对象存储(如步骤 **4** 所示)。 @@ -241,7 +243,7 @@ For more information, see [Auditing](/docs/tasks/debug/debug-cluster/audit/). Kubernetes 审计提供了一套与安全相关的、按时间顺序排列的记录,其中记录了集群中的操作序列。 集群对用户、使用 Kubernetes API 的应用程序以及控制平面本身产生的活动进行审计。 -更多信息请参考 [审计](/zh/docs/tasks/debug/debug-cluster/audit/). +更多信息请参考 [审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/). ## API 服务器端口和 IP {#api-server-ports-and-ips} @@ -327,23 +329,23 @@ You can learn about: --> 阅读更多有关身份认证、鉴权和 API 访问控制的文档: -- [认证](/zh/docs/reference/access-authn-authz/authentication/) - - [使用 Bootstrap 令牌进行身份认证](/zh/docs/reference/access-authn-authz/bootstrap-tokens/) -- [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) - - [动态准入控制](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/) -- [鉴权](/zh/docs/reference/access-authn-authz/authorization/) - - [基于角色的访问控制](/zh/docs/reference/access-authn-authz/rbac/) - - [基于属性的访问控制](/zh/docs/reference/access-authn-authz/abac/) - - [节点鉴权](/zh/docs/reference/access-authn-authz/node/) - - [Webhook 鉴权](/zh/docs/reference/access-authn-authz/webhook/) -- [证书签名请求](/zh/docs/reference/access-authn-authz/certificate-signing-requests/) - - 包括 [CSR 认证](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#approval-rejection) - 和[证书签名](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#signing) +- [认证](/zh-cn/docs/reference/access-authn-authz/authentication/) + - [使用 Bootstrap 令牌进行身份认证](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/) +- [准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/) + - [动态准入控制](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/) +- [鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/) + - [基于角色的访问控制](/zh-cn/docs/reference/access-authn-authz/rbac/) + - [基于属性的访问控制](/zh-cn/docs/reference/access-authn-authz/abac/) + - [节点鉴权](/zh-cn/docs/reference/access-authn-authz/node/) + - [Webhook 鉴权](/zh-cn/docs/reference/access-authn-authz/webhook/) +- [证书签名请求](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/) + - 包括 [CSR 认证](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/#approval-rejection) + 
和[证书签名](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/#signing) - 服务账户 - - [开发者指导](/zh/docs/tasks/configure-pod-container/configure-service-account/) - - [管理](/zh/docs/reference/access-authn-authz/service-accounts-admin/) + - [开发者指导](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/) + - [管理](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/) 你可以了解 - Pod 如何使用 - [Secrets](/zh/docs/concepts/configuration/secret/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials) + [Secrets](/zh-cn/docs/concepts/configuration/secret/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials) 获取 API 凭证. diff --git a/content/zh/docs/concepts/security/overview.md b/content/zh-cn/docs/concepts/security/overview.md similarity index 83% rename from content/zh/docs/concepts/security/overview.md rename to content/zh-cn/docs/concepts/security/overview.md index f13c1e06f5ad2..165c339de1fc0 100644 --- a/content/zh/docs/concepts/security/overview.md +++ b/content/zh-cn/docs/concepts/security/overview.md @@ -5,6 +5,15 @@ description: > content_type: concept weight: 1 --- + -## 云原生安全的 4 个 C +## 云原生安全的 4 个 C {#the-4c-s-of-cloud-native-security} 你可以分层去考虑安全性,云原生安全的 4 个 C 分别是云(Cloud)、集群(Cluster)、容器(Container)和代码(Code)。 @@ -50,9 +59,13 @@ The Code layer benefits from strong base (Cloud, Cluster, Container) security la You cannot safeguard against poor security standards in the base layers by addressing security at the Code level. --> -云原生安全模型的每一层都是基于下一个最外层,代码层受益于强大的基础安全层(云、集群、容器)。你无法通过在代码层解决安全问题来为基础层中糟糕的安全标准提供保护。 +云原生安全模型的每一层都是基于下一个最外层,代码层受益于强大的基础安全层(云、集群、容器)。 +你无法通过在代码层解决安全问题来为基础层中糟糕的安全标准提供保护。 -## 云 + +## 云 {#cloud} -### 云提供商安全性 +### 云提供商安全性 {#cloud-provider-security} -如果您是在您自己的硬件或者其他不同的云提供商上运行 Kubernetes 集群, +如果你是在你自己的硬件或者其他不同的云提供商上运行 Kubernetes 集群, 请查阅相关文档来获取最好的安全实践。 下面是一些比较流行的云提供商的安全性文档链接: @@ -108,7 +121,7 @@ Network access to API Server (Control plane) | All access to the Kubernetes cont Network access to Nodes (nodes) | Nodes should be configured to _only_ accept connections (via network access control lists) from the control plane on the specified ports, and accept connections for services in Kubernetes of type NodePort and LoadBalancer. If possible, these nodes should not be exposed on the public internet entirely. Kubernetes access to Cloud Provider API | Each cloud provider needs to grant a different set of permissions to the Kubernetes control plane and nodes. It is best to provide the cluster with cloud provider access that follows the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) for the resources it needs to administer. The [Kops documentation](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md#iam-roles) provides information about IAM policies and roles. Access to etcd | Access to etcd (the datastore of Kubernetes) should be limited to the control plane only. Depending on your configuration, you should attempt to use etcd over TLS. More information can be found in the [etcd documentation](https://github.com/etcd-io/etcd/tree/master/Documentation). -etcd Encryption | Wherever possible it's a good practice to encrypt all drives at rest, and since etcd holds the state of the entire cluster (including Secrets) its disk should especially be encrypted at rest. 
+etcd Encryption | Wherever possible it's a good practice to encrypt all storage at rest, and since etcd holds the state of the entire cluster (including Secrets) its disk should especially be encrypted at rest. {{< /table >}} --> @@ -118,13 +131,13 @@ etcd Encryption | Wherever possible it's a good practice to encrypt all drives a {{< table caption="基础设施安全" >}} -Kubetnetes 基础架构关注领域 | 建议 | +Kubernetes 基础架构关注领域 | 建议 | --------------------------------------------- | -------------- | 通过网络访问 API 服务(控制平面)|所有对 Kubernetes 控制平面的访问不允许在 Internet 上公开,同时应由网络访问控制列表控制,该列表包含管理集群所需的 IP 地址集。| 通过网络访问 Node(节点)| 节点应配置为 _仅能_ 从控制平面上通过指定端口来接受(通过网络访问控制列表)连接,以及接受 NodePort 和 LoadBalancer 类型的 Kubernetes 服务连接。如果可能的话,这些节点不应完全暴露在公共互联网上。| Kubernetes 访问云提供商的 API | 每个云提供商都需要向 Kubernetes 控制平面和节点授予不同的权限集。为集群提供云提供商访问权限时,最好遵循对需要管理的资源的[最小特权原则](https://en.wikipedia.org/wiki/Principle_of_least_privilege)。[Kops 文档](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md#iam-roles)提供有关 IAM 策略和角色的信息。| 访问 etcd | 对 etcd(Kubernetes 的数据存储)的访问应仅限于控制平面。根据配置情况,你应该尝试通过 TLS 来使用 etcd。更多信息可以在 [etcd 文档](https://github.com/etcd-io/etcd/tree/master/Documentation)中找到。| -etcd 加密 | 在所有可能的情况下,最好对所有驱动器进行静态数据加密,并且由于 etcd 拥有整个集群的状态(包括机密信息),因此其磁盘更应该进行静态数据加密。| +etcd 加密 | 在所有可能的情况下,最好对所有存储进行静态数据加密,并且由于 etcd 拥有整个集群的状态(包括机密信息),因此其磁盘更应该进行静态数据加密。| {{< /table >}} @@ -136,7 +149,7 @@ There are two areas of concern for securing Kubernetes: * Securing the cluster components that are configurable * Securing the applications which run in the cluster --> -## 集群 +## 集群 {#cluster} 保护 Kubernetes 有两个方面需要注意: @@ -152,7 +165,7 @@ good information practices, read and follow the advice about --> ### 集群组件 {#cluster-components} -如果想要保护集群免受意外或恶意的访问,采取良好的信息管理实践,请阅读并遵循有关[保护集群](/zh/docs/tasks/administer-cluster/securing-a-cluster/)的建议。 +如果想要保护集群免受意外或恶意的访问,采取良好的信息管理实践,请阅读并遵循有关[保护集群](/zh-cn/docs/tasks/administer-cluster/securing-a-cluster/)的建议。 -### 集群中的组件(您的应用) {#cluster-applications} +### 集群中的组件(你的应用) {#cluster-applications} -根据您的应用程序的受攻击面,您可能需要关注安全性的特定面,比如: -如果您正在运行中的一个服务(A 服务)在其他资源链中很重要,并且所运行的另一工作负载(服务 B) +根据你的应用程序的受攻击面,你可能需要关注安全性的特定面,比如: +如果你正在运行中的一个服务(A 服务)在其他资源链中很重要,并且所运行的另一工作负载(服务 B) 容易受到资源枯竭的攻击,则如果你不限制服务 B 的资源的话,损害服务 A 的风险就会很高。 下表列出了安全性关注的领域和建议,用以保护 Kubernetes 中运行的工作负载: 工作负载安全性关注领域 | 建议 | ------------------------------ | --------------------- | -RBAC 授权(访问 Kubernetes API) | https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/ -认证方式 | https://kubernetes.io/zh/docs/concepts/security/controlling-access/ -应用程序 Secret 管理 (并在 etcd 中对其进行静态数据加密) | https://kubernetes.io/zh/docs/concepts/configuration/secret/
        https://kubernetes.io/zh/docs/tasks/administer-cluster/encrypt-data/ -确保 Pod 符合定义的 Pod 安全标准 | https://kubernetes.io/zh/docs/concepts/security/pod-security-standards/#policy-instantiation -服务质量(和集群资源管理)| https://kubernetes.io/zh/docs/tasks/configure-pod-container/quality-service-pod/ -网络策略 | https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/ -Kubernetes Ingress 的 TLS 支持 | https://kubernetes.io/zh/docs/concepts/services-networking/ingress/#tls +RBAC 授权(访问 Kubernetes API) | https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/ +认证方式 | https://kubernetes.io/zh-cn/docs/concepts/security/controlling-access/ +应用程序 Secret 管理 (并在 etcd 中对其进行静态数据加密) | https://kubernetes.io/zh-cn/docs/concepts/configuration/secret/
        https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/encrypt-data/ +确保 Pod 符合定义的 Pod 安全标准 | https://kubernetes.io/zh-cn/docs/concepts/security/pod-security-standards/#policy-instantiation +服务质量(和集群资源管理)| https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/ +网络策略 | https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/ +Kubernetes Ingress 的 TLS 支持 | https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/#tls -## 容器 +## 容器 {#container} 容器安全性不在本指南的探讨范围内。下面是一些探索此主题的建议和连接: 容器关注领域 | 建议 | ------------------------------ | -------------- | -容器漏洞扫描和操作系统依赖安全性 | 作为镜像构建的一部分,您应该扫描您的容器里的已知漏洞。 +容器漏洞扫描和操作系统依赖安全性 | 作为镜像构建的一部分,你应该扫描你的容器里的已知漏洞。 镜像签名和执行 | 对容器镜像进行签名,以维护对容器内容的信任。 禁止特权用户 | 构建容器时,请查阅文档以了解如何在具有最低操作系统特权级别的容器内部创建用户,以实现容器的目标。 -使用带有较强隔离能力的容器运行时 | 选择提供较强隔离能力的[容器运行时类](/zh/docs/concepts/containers/runtime-class/)。 +使用带有较强隔离能力的容器运行时 | 选择提供较强隔离能力的[容器运行时类](/zh-cn/docs/concepts/containers/runtime-class/)。 -## 代码 +## 代码 {#code} -应用程序代码是您最能够控制的主要攻击面之一,虽然保护应用程序代码不在 Kubernetes 安全主题范围内,但以下是保护应用程序代码的建议: +应用程序代码是你最能够控制的主要攻击面之一,虽然保护应用程序代码不在 Kubernetes 安全主题范围内,但以下是保护应用程序代码的建议: -### 代码安全性 +### 代码安全性 {#code-security} {{< table caption="代码安全" >}} 代码关注领域 | 建议 | -------------------------| -------------- | -仅通过 TLS 访问 | 如果您的代码需要通过 TCP 通信,请提前与客户端执行 TLS 握手。除少数情况外,请加密传输中的所有内容。更进一步,加密服务之间的网络流量是一个好主意。这可以通过被称为双向 TLS 或 [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) 的过程来完成,该过程对两个证书持有服务之间的通信执行双向验证。 | +仅通过 TLS 访问 | 如果你的代码需要通过 TCP 通信,请提前与客户端执行 TLS 握手。除少数情况外,请加密传输中的所有内容。更进一步,加密服务之间的网络流量是一个好主意。这可以通过被称为双向 TLS 或 [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) 的过程来完成,该过程对两个证书持有服务之间的通信执行双向验证。 | 限制通信端口范围 | 此建议可能有点不言自明,但是在任何可能的情况下,你都只应公开服务上对于通信或度量收集绝对必要的端口。| 第三方依赖性安全 | 最好定期扫描应用程序的第三方库以了解已知的安全漏洞。每种编程语言都有一个自动执行此检查的工具。 | -静态代码分析 | 大多数语言都提供给了一种方法,来分析代码段中是否存在潜在的不安全的编码实践。只要有可能,你都应该使用自动工具执行检查,该工具可以扫描代码库以查找常见的安全错误,一些工具可以在以下连接中找到:https://owasp.org/www-community/Source_Code_Analysis_Tools | -动态探测攻击 | 您可以对服务运行一些自动化工具,来尝试一些众所周知的服务攻击。这些攻击包括 SQL 注入、CSRF 和 XSS。[OWASP Zed Attack](https://owasp.org/www-project-zap/) 代理工具是最受欢迎的动态分析工具之一。 | +静态代码分析 | 大多数语言都提供给了一种方法,来分析代码段中是否存在潜在的不安全的编码实践。只要有可能,你都应该使用自动工具执行检查,该工具可以扫描代码库以查找常见的安全错误,一些工具可以在以下连接中找到: https://owasp.org/www-community/Source_Code_Analysis_Tools | +动态探测攻击 | 你可以对服务运行一些自动化工具,来尝试一些众所周知的服务攻击。这些攻击包括 SQL 注入、CSRF 和 XSS。[OWASP Zed Attack](https://owasp.org/www-project-zap/) 代理工具是最受欢迎的动态分析工具之一。 | {{< /table >}} @@ -270,12 +283,12 @@ Learn about related Kubernetes security topics: --> 学习了解相关的 Kubernetes 安全主题: -* [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/) -* [Pod 的网络策略](/zh/docs/concepts/services-networking/network-policies/) -* [控制对 Kubernetes API 的访问](/zh/docs/concepts/security/controlling-access/) -* [保护您的集群](/zh/docs/tasks/administer-cluster/securing-a-cluster/) -* 为控制面[加密通信中的数据](/zh/docs/tasks/tls/managing-tls-in-a-cluster/) -* [加密静止状态的数据](/zh/docs/tasks/administer-cluster/encrypt-data/) -* [Kubernetes 中的 Secret](/zh/docs/concepts/configuration/secret/) -* [运行时类](/zh/docs/concepts/containers/runtime-class) +* [Pod 安全标准](/zh-cn/docs/concepts/security/pod-security-standards/) +* [Pod 的网络策略](/zh-cn/docs/concepts/services-networking/network-policies/) +* [控制对 Kubernetes API 的访问](/zh-cn/docs/concepts/security/controlling-access/) +* [保护你的集群](/zh-cn/docs/tasks/administer-cluster/securing-a-cluster/) +* 为控制面[加密通信中的数据](/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster/) +* [加密静止状态的数据](/zh-cn/docs/tasks/administer-cluster/encrypt-data/) +* [Kubernetes 中的 
Secret](/zh-cn/docs/concepts/configuration/secret/) +* [运行时类](/zh-cn/docs/concepts/containers/runtime-class) diff --git a/content/zh/docs/concepts/security/pod-security-admission.md b/content/zh-cn/docs/concepts/security/pod-security-admission.md similarity index 79% rename from content/zh/docs/concepts/security/pod-security-admission.md rename to content/zh-cn/docs/concepts/security/pod-security-admission.md index 87fddd9ddb4c6..69f629ee682a5 100644 --- a/content/zh/docs/concepts/security/pod-security-admission.md +++ b/content/zh-cn/docs/concepts/security/pod-security-admission.md @@ -29,11 +29,11 @@ The Kubernetes [Pod Security Standards](/docs/concepts/security/pod-security-sta different isolation levels for Pods. These standards let you define how you want to restrict the behavior of pods in a clear, consistent fashion. --> -Kubernetes [Pod 安全性标准(Security Standards)](/zh/docs/concepts/security/pod-security-standards/) +Kubernetes [Pod 安全性标准(Security Standards)](/zh-cn/docs/concepts/security/pod-security-standards/) 为 Pod 定义不同的隔离级别。这些标准能够让你以一种清晰、一致的方式定义如何限制 Pod 行为。 作为一项 Beta 功能特性,Kubernetes 提供一种内置的 _Pod 安全性_ {{< glossary_tooltip text="准入控制器" term_id="admission-controller" >}}, -作为 [PodSecurityPolicies](/zh/docs/concepts/security/pod-security-policy/) +作为 [PodSecurityPolicies](/zh-cn/docs/concepts/security/pod-security-policy/) 特性的后继演化版本。Pod 安全性限制是在 Pod 被创建时在 {{< glossary_tooltip text="名字空间" term_id="namespace" >}}层面实施的。 @@ -51,45 +51,46 @@ The PodSecurityPolicy API is deprecated and will be [removed](/docs/reference/using-api/deprecation-guide/#v1-25) from Kubernetes in v1.25. --> PodSecurityPolicy API 已经被废弃,会在 Kubernetes v1.25 发行版中 -[移除](/zh/docs/reference/using-api/deprecation-guide/#v1-25)。 +[移除](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-25)。 {{< /note >}} -## 启用 `PodSecurity` 准入插件 {#enabling-the-podsecurity-admission-plugin} +## {{% heading "prerequisites" %}} -在 v1.23 中,`PodSecurity` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) -是一项 Beta 功能特性,默认被启用。 +要使用此机制,你的集群必须强制执行 Pod 安全准入。 + +### 内置 Pod 安全准入强制执行 + + +在 Kubernetes v{{< skew currentVersion >}} 中,`PodSecurity` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)是一项 Beta 特性, +默认被启用。你必须启用此功能门控。如果你运行的是不同版本的 Kubernetes,请查阅该版本的文档。 -在 v1.22 中,`PodSecurity` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) -是一项 Alpha 功能特性,必须在 `kube-apiserver` 上启用才能使用内置的准入插件。 -```shell ---feature-gates="...,PodSecurity=true" -``` -## 替代方案:安装 `PodSecurity` 准入 Webhook {#webhook} +### 替代方案:安装 `PodSecurity` 准入 Webhook {#webhook} -对于无法应用内置 `PodSecurity` 准入插件的环境,无论是因为集群版本低于 v1.22, -或者 `PodSecurity` 特性无法被启用,都可以使用 Beta 版本的 -[验证性准入 Webhook](https://git.k8s.io/pod-security-admission/webhook)。 -来使用 `PodSecurity` 准入逻辑。 +`PodSecurity` 准入逻辑也可用作[验证性准入 Webhook](https://git.k8s.io/pod-security-admission/webhook)。 +该实现也是 Beta 版本。 +对于无法启用内置 `PodSecurity` 准入插件的环境,你可以改为通过验证准入 Webhook 启用该逻辑。 所生成的证书合法期限为 2 年。在证书过期之前, -需要重新生成证书或者去掉 Webhook 以使用内置的准入查件。 +需要重新生成证书或者去掉 Webhook 以使用内置的准入插件。 {{< /note >}} + + @@ -129,11 +132,11 @@ Standards](/docs/concepts/security/pod-security-standards): `privileged`, `basel `restricted`. Refer to the [Pod Security Standards](/docs/concepts/security/pod-security-standards) page for an in-depth look at those requirements. 
--> -Pod 安全性准入插件对 Pod 的[安全性上下文](/zh/docs/tasks/configure-pod-container/security-context/) -有一定的要求,并且依据 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards) +Pod 安全性准入插件对 Pod 的[安全性上下文](/zh-cn/docs/tasks/configure-pod-container/security-context/) +有一定的要求,并且依据 [Pod 安全性标准](/zh-cn/docs/concepts/security/pod-security-standards) 所定义的三个级别(`privileged`、`baseline` 和 `restricted`)对其他字段也有要求。 关于这些需求的更进一步讨论,请参阅 -[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards/)页面。 +[Pod 安全性标准](/zh-cn/docs/concepts/security/pod-security-standards/)页面。 关于用法示例,可参阅 -[使用名字空间标签来强制实施 Pod 安全标准](/zh/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/)。 +[使用名字空间标签来强制实施 Pod 安全标准](/zh-cn/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/)。 -- [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards/) -- [强制实施 Pod 安全性标准](/zh/docs/setup/best-practices/enforcing-pod-security-standards/) -- [通过配置内置的准入控制器强制实施 Pod 安全性标准](/zh/docs/tasks/configure-pod-container/enforce-standards-admission-controller/) -- [使用名字空间标签来实施 Pod 安全性标准](/zh/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/) -- [从 PodSecurityPolicy 迁移到内置的 PodSecurity 准入控制器](/zh/docs/tasks/configure-pod-container/migrate-from-psp/) +- [Pod 安全性标准](/zh-cn/docs/concepts/security/pod-security-standards/) +- [强制实施 Pod 安全性标准](/zh-cn/docs/setup/best-practices/enforcing-pod-security-standards/) +- [通过配置内置的准入控制器强制实施 Pod 安全性标准](/zh-cn/docs/tasks/configure-pod-container/enforce-standards-admission-controller/) +- [使用名字空间标签来实施 Pod 安全性标准](/zh-cn/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/) +- [从 PodSecurityPolicy 迁移到内置的 PodSecurity 准入控制器](/zh-cn/docs/tasks/configure-pod-container/migrate-from-psp/) diff --git a/content/zh-cn/docs/concepts/security/pod-security-policy.md b/content/zh-cn/docs/concepts/security/pod-security-policy.md new file mode 100644 index 0000000000000..31137aab7f2c6 --- /dev/null +++ b/content/zh-cn/docs/concepts/security/pod-security-policy.md @@ -0,0 +1,1362 @@ +--- +title: Pod 安全策略 +content_type: concept +weight: 30 +--- + + + +{{< feature-state for_k8s_version="v1.21" state="deprecated" >}} + +{{< caution >}} + +PodSecurityPolicy 在 Kubernetes v1.21 版本中被弃用,**将在 v1.25 中删除**。 +我们建议迁移到 [Pod 安全性准入](/zh-cn/docs/concepts/security/pod-security-admission), +或者第三方的准入插件。 +若需了解迁移指南,可参阅[从 PodSecurityPolicy 迁移到内置的 PodSecurity 准入控制器](/zh-cn/docs/tasks/configure-pod-container/migrate-from-psp/)。 +关于弃用的更多信息,请查阅 [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)。 +{{< /caution >}} + + +Pod 安全策略使得对 Pod 创建和更新进行细粒度的权限控制成为可能。 + + + + +## 什么是 Pod 安全策略? 
{#what-is-a-pod-security-policy} + +_Pod 安全策略(Pod Security Policy)_ 是集群级别的资源,它能够控制 Pod 规约 +中与安全性相关的各个方面。 +[PodSecurityPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) +对象定义了一组 Pod 运行时必须遵循的条件及相关字段的默认值,只有 Pod 满足这些条件才会被系统接受。 +Pod 安全策略允许管理员控制如下方面: + + +| 控制的角度 | 字段名称 | +| ----------------------------------- | --------------------------------- | +| 运行特权容器 | [`privileged`](#privileged) | +| 使用宿主名字空间 | [`hostPID`、`hostIPC`](#host-namespaces) | +| 使用宿主的网络和端口 | [`hostNetwork`, `hostPorts`](#host-namespaces) | +| 控制卷类型的使用 | [`volumes`](#volumes-and-file-systems) | +| 使用宿主文件系统 | [`allowedHostPaths`](#volumes-and-file-systems) | +| 允许使用特定的 FlexVolume 驱动 | [`allowedFlexVolumes`](#flexvolume-drivers) | +| 分配拥有 Pod 卷的 FSGroup 账号 | [`fsGroup`](#volumes-and-file-systems) | +| 以只读方式访问根文件系统 | [`readOnlyRootFilesystem`](#volumes-and-file-systems) | +| 设置容器的用户和组 ID | [`runAsUser`, `runAsGroup`, `supplementalGroups`](#users-and-groups) | +| 限制 root 账号特权级提升 | [`allowPrivilegeEscalation`, `defaultAllowPrivilegeEscalation`](#privilege-escalation) | +| Linux 权能字(Capabilities) | [`defaultAddCapabilities`, `requiredDropCapabilities`, `allowedCapabilities`](#capabilities) | +| 设置容器的 SELinux 上下文 | [`seLinux`](#selinux) | +| 指定容器可以挂载的 proc 类型 | [`allowedProcMountTypes`](#allowedprocmounttypes) | +| 指定容器使用的 AppArmor 模版 | [annotations](#apparmor) | +| 指定容器使用的 seccomp 模版 | [annotations](#seccomp) | +| 指定容器使用的 sysctl 模版 | [`forbiddenSysctls`,`allowedUnsafeSysctls`](#sysctl) | + + +## 启用 Pod 安全策略 {#enabling-pod-security-policies} + +Pod 安全策略实现为一种可选的[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy)。 +[启用了准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in)即可强制实施 +Pod 安全策略,不过如果没有授权认可策略之前即启用准入控制器 **将导致集群中无法创建任何 Pod**。 + + +由于 Pod 安全策略 API(`policy/v1beta1/podsecuritypolicy`)是独立于准入控制器 +来启用的,对于现有集群而言,建议在启用准入控制器之前先添加策略并对其授权。 + + +## 授权策略 {#authorizing-policies} + +PodSecurityPolicy 资源被创建时,并不执行任何操作。为了使用该资源, +需要对发出请求的用户或者目标 Pod +的[服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)授权, +通过允许其对策略执行 `use` 动词允许其使用该策略。 + + +大多数 Kubernetes Pod 不是由用户直接创建的。相反,这些 Pod 是由 +[Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)、 +[ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/replicaset/) +或者经由控制器管理器模版化的控制器创建。 +赋予控制器访问策略的权限意味着对应控制器所创建的 *所有* Pod 都可访问策略。 +因此,对策略进行授权的优先方案是为 Pod 的服务账号授予访问权限 +(参见[示例](#run-another-pod))。 + + +### 通过 RBAC 授权 {#via-rbac} + +[RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 是一种标准的 Kubernetes +鉴权模式,可以很容易地用来授权策略访问。 + +首先,某 `Role` 或 `ClusterRole` 需要获得使用 `use` 访问目标策略的权限。 +访问授权的规则看起来像这样: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - <要授权的策略列表> +``` + + +接下来将该 `Role`(或 `ClusterRole`)绑定到授权的用户: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: <绑定名称> +roleRef: + kind: ClusterRole + name: <角色名称> + apiGroup: rbac.authorization.k8s.io +subjects: + # 授权命名空间下的所有服务账号(推荐): + - kind: Group + apiGroup: rbac.authorization.k8s.io + name: system:serviceaccounts: + # 授权特定的服务账号(不建议这样操作): + - kind: ServiceAccount + name: <被授权的服务账号名称> + namespace: <被授权的 Pod 名字空间> + # 授权特定的用户(不建议这样操作): + - kind: User + apiGroup: rbac.authorization.k8s.io + name: <被授权的用户名> +``` + + +如果使用的是 `RoleBinding`(而不是 `ClusterRoleBinding`),授权仅限于与该 +`RoleBinding` 
处于同一名字空间中的 Pod。 +可以考虑将这种授权模式和系统组结合,对名字空间中的所有 Pod 授予访问权限。 + +```yaml +# 授权某名字空间中所有服务账号 +- kind: Group + apiGroup: rbac.authorization.k8s.io + name: system:serviceaccounts +# 或者与此等价,授权给某名字空间中所有被认证过的用户 +- kind: Group + apiGroup: rbac.authorization.k8s.io + name: system:authenticated +``` + + +参阅[角色绑定示例](/zh-cn/docs/reference/access-authn-authz/rbac#role-binding-examples)查看 +RBAC 绑定的更多实例。 +参阅[下文](#example),查看对 PodSecurityPolicy 进行授权的完整示例。 + + +## 推荐实践 {#recommended-practice} + +PodSecurityPolicy 正在被一个新的、简化的 `PodSecurity` +{{< glossary_tooltip text="准入控制器" term_id="admission-controller" >}}替代。 +有关此变更的更多详细信息,请参阅 +[PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)。 +参照下述指导,简化从 PodSecurityPolicy 迁移到新的准入控制器步骤: + + +1. 将 PodSecurityPolicies 限制为 + [Pod 安全性标准](/zh-cn/docs/concepts/security/pod-security-standards)所定义的策略: + + - {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}} + - {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}} + - {{< example file="policy/restricted-psp.yaml" >}}Restricted{{< /example >}} + + +2. 通过配置 `system:serviceaccounts:` 组(`` 是目标名字空间), + 仅将 PSP 绑定到整个命名空间。示例: + + ```yaml + apiVersion: rbac.authorization.k8s.io/v1 + # 此集群角色绑定允许 "development" 名字空间中的所有 Pod 使用 baseline PSP。 + kind: ClusterRoleBinding + metadata: + name: psp-baseline-namespaces + roleRef: + kind: ClusterRole + name: psp-baseline + apiGroup: rbac.authorization.k8s.io + subjects: + - kind: Group + name: system:serviceaccounts:development + apiGroup: rbac.authorization.k8s.io + - kind: Group + name: system:serviceaccounts:canary + apiGroup: rbac.authorization.k8s.io + ``` + + +### 故障排查 {#troubleshooting} + +- [控制器管理器组件](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) + 必须运行在安全的 API 端口之上,并且不能拥有超级用户的访问权限。 + 参阅[控制 Kubernetes API 的访问](/zh-cn/docs/concepts/security/controlling-access)以了解 + API 服务器的访问控制。 + + 如果控制器管理器通过可信的 API 端口连接(也称作 `localhost` 监听组件), + 其请求会绕过身份认证和鉴权模块控制,从而导致所有 PodSecurityPolicy 对象都被允许, + 用户亦能授予自身创建特权容器的特权。 + + 关于配置控制器管理器鉴权的进一步细节, + 请参阅[控制器角色](/zh-cn/docs/reference/access-authn-authz/rbac/#controller-roles)。 + + +## 策略顺序 {#policy-order} + +除了限制 Pod 创建与更新,Pod 安全策略也可用来为其所控制的很多字段设置默认值。 +当存在多个策略对象时,Pod 安全策略控制器依据以下条件选择策略: + + +1. 优先考虑允许 Pod 保持原样,不会更改 Pod 字段默认值或其他配置的 PodSecurityPolicy。 + 这类非更改性质的 PodSecurityPolicy 对象之间的顺序无关紧要。 +2. 
如果必须要为 Pod 设置默认值或者其他配置,(按名称顺序)选择第一个允许 + Pod 操作的 PodSecurityPolicy 对象。 + +{{< note >}} + +在更新操作期间(这时不允许更改 Pod 规约),仅使用非更改性质的 +PodSecurityPolicy 来对 Pod 执行验证操作。 +{{< /note >}} + + +## 示例 {#example} + +本示例假定你已经有一个启动了 PodSecurityPolicy 准入控制器的集群并且你拥有集群管理员特权。 + + +### 配置 {#set-up} + +为运行此示例,配置一个名字空间和一个服务账号。我们将用这个服务账号来模拟一个非管理员账号的用户。 + +```shell +kubectl create namespace psp-example +kubectl create serviceaccount -n psp-example fake-user +kubectl create rolebinding -n psp-example fake-editor --clusterrole=edit --serviceaccount=psp-example:fake-user +``` + + +创建两个别名,以更清晰地展示我们所使用的用户账号,同时减少一些键盘输入: + +```shell +alias kubectl-admin='kubectl -n psp-example' +alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n psp-example' +``` + + +### 创建一个策略和一个 Pod {#create-a-policy-and-a-pod} + +在一个文件中定义一个示例的 PodSecurityPolicy 对象。 +这里的策略只是用来禁止创建有特权要求的 Pods。 +PodSecurityPolicy 对象的名称必须是合法的 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 + +{{< codenew file="policy/example-psp.yaml" >}} + + +使用 kubectl 执行创建操作: + +```shell +kubectl-admin create -f example-psp.yaml +``` + + +现在,作为一个非特权用户,尝试创建一个简单的 Pod: + +```shell +kubectl-user create -f- < +输出类似于: +``` +Error from server (Forbidden): error when creating "STDIN": pods "pause" is forbidden: unable to validate against any pod security policy: [] +``` + + +**发生了什么?** 尽管 PodSecurityPolicy 被创建,Pod 的服务账号或者 +`fake-user` 用户都没有使用该策略的权限。 + +```shell +kubectl-user auth can-i use podsecuritypolicy/example +``` + +``` +no +``` + +创建角色绑定,赋予 `fake-user` `use`(使用)示例策略的权限: + +{{< note >}} + +不建议使用这种方法! +欲了解优先考虑的方法,请参见[下节](#run-another-pod)。 +{{< /note >}} + +```shell +kubectl-admin create role psp:unprivileged \ + --verb=use \ + --resource=podsecuritypolicy \ + --resource-name=example +``` + +输出: + +``` +role "psp:unprivileged" created +``` + +```shell +kubectl-admin create rolebinding fake-user:psp:unprivileged \ + --role=psp:unprivileged \ + --serviceaccount=psp-example:fake-user +``` + +输出: + +``` +rolebinding "fake-user:psp:unprivileged" created +``` + +```shell +kubectl-user auth can-i use podsecuritypolicy/example +``` + +输出: + +``` +yes +``` + + +现在重试创建 Pod: + +```shell +kubectl-user create -f- < +输出类似于: + +``` +pod "pause" created +``` + + +此次尝试不出所料地成功了! +不过任何创建特权 Pod 的尝试还是会被拒绝: + +```shell +kubectl-user create -f- < + +输出类似于: +``` +Error from server (Forbidden): error when creating "STDIN": pods "privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed] +``` + + +继续此例之前先删除该 Pod: + +```shell +kubectl-user delete pod pause +``` + + +### 运行另一个 Pod {#run-another-pod} + +我们再试一次,稍微有些不同: + +```shell +kubectl-user create deployment pause --image=k8s.gcr.io/pause +``` + +输出为: + +``` +deployment "pause" created +``` + +```shell +kubectl-user get pods +``` + +输出为: + +``` +No resources found. +``` + +```shell +kubectl-user get events | head -n 2 +``` + +输出为: +``` +LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE +1m 2m 15 pause-7774d79b5 ReplicaSet Warning FailedCreate replicaset-controller Error creating: pods "pause-7774d79b5-" is forbidden: no providers available to validate pod request +``` + + +**发生了什么?** 我们已经为用户 `fake-user` 绑定了 `psp:unprivileged` 角色, +为什么还会收到错误 `Error creating: pods "pause-7774d79b5-" is +forbidden: no providers available to validate pod request +(创建错误:pods "pause-7774d79b5" 被禁止:没有可用来验证 pod 请求的驱动)`? 
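
（排查提示，非原示例步骤的示意：可以先以管理员身份确认 ReplicaSet 所创建的 Pod 将要使用的服务账号是否有权使用该策略。这里沿用前文定义的 `kubectl-admin` 别名，并假设 Pod 使用 `default` 服务账号。）

```shell
# 示意：检查 psp-example 名字空间中的 default 服务账号能否使用示例策略
kubectl-admin auth can-i use podsecuritypolicy/example \
  --as=system:serviceaccount:psp-example:default
```

正常情况下，此时的输出应为 `no`。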
+答案在于源文件 - `replicaset-controller`。 +`fake-user` 用户成功地创建了 Deployment,而后者也成功地创建了 ReplicaSet, +不过当 ReplicaSet 创建 Pod 时,发现未被授权使用示例 PodSecurityPolicy 资源。 + + +为了修复这一问题,将 `psp:unprivileged` 角色绑定到 Pod 的服务账号。 +在这里,因为我们没有给出服务账号名称,默认的服务账号是 `default`。 + +```shell +kubectl-admin create rolebinding default:psp:unprivileged \ + --role=psp:unprivileged \ + --serviceaccount=psp-example:default +``` + +输出为: + +```none +rolebinding "default:psp:unprivileged" created +``` + + +现在如果你给 ReplicaSet 控制器一分钟的时间来重试,该控制器最终将能够 +成功地创建 Pod: + +```shell +kubectl-user get pods --watch +``` + +输出类似于: + +```none +NAME READY STATUS RESTARTS AGE +pause-7774d79b5-qrgcb 0/1 Pending 0 1s +pause-7774d79b5-qrgcb 0/1 Pending 0 1s +pause-7774d79b5-qrgcb 0/1 ContainerCreating 0 1s +pause-7774d79b5-qrgcb 1/1 Running 0 2s +``` + + +### 清理 {#clean-up} + +删除名字空间即可清理大部分示例资源: + +```shell +kubectl-admin delete ns psp-example +``` + +输出类似于: + +``` +namespace "psp-example" deleted +``` + + +注意 `PodSecurityPolicy` 资源不是名字空间域的资源,必须单独清理: + +```shell +kubectl-admin delete psp example +``` + +输出类似于: + +```none +podsecuritypolicy "example" deleted +``` + + +### 示例策略 {#example-policies} + +下面是一个你可以创建的约束性非常弱的策略,其效果等价于没有使用 Pod 安全策略准入控制器: + +{{< codenew file="policy/privileged-psp.yaml" >}} + + +下面是一个具有约束性的策略,要求用户以非特权账号运行,禁止可能的向 root 权限的升级, +同时要求使用若干安全机制。 + +{{< codenew file="policy/restricted-psp.yaml" >}} + + +更多的示例可参考 +[Pod 安全标准](/zh-cn/docs/concepts/security/pod-security-standards/#policy-instantiation)。 + + +## 策略参考 {#policy-reference} + +### Privileged + +**Privileged** - 决定是否 Pod 中的某容器可以启用特权模式。 +默认情况下,容器是不可以访问宿主上的任何设备的,不过一个“privileged(特权的)” +容器则被授权访问宿主上所有设备。 +这种容器几乎享有宿主上运行的进程的所有访问权限。 +对于需要使用 Linux 权能字(如操控网络堆栈和访问设备)的容器而言是有用的。 + + +### 宿主名字空间 {#host-namespaces} + +**HostPID** - 控制 Pod 中容器是否可以共享宿主上的进程 ID 空间。 +注意,如果与 `ptrace` 相结合,这种授权可能被利用,导致向容器外的特权逃逸 +(默认情况下 `ptrace` 是被禁止的)。 + +**HostIPC** - 控制 Pod 容器是否可共享宿主上的 IPC 名字空间。 + +**HostNetwork** - 控制是否 Pod 可以使用节点的网络名字空间。 +此类授权将允许 Pod 访问本地回路(loopback)设备、在本地主机(localhost) +上监听的服务、还可能用来监听同一节点上其他 Pod 的网络活动。 + +**HostPorts** -提供可以在宿主网络名字空间中可使用的端口范围列表。 +该属性定义为一组 `HostPortRange` 对象的列表,每个对象中包含 +`min`(含)与 `max`(含)值的设置。 +默认不允许访问宿主端口。 + + +### 卷和文件系统 {#volumes-and-file-systems} + +**Volumes** - 提供一组被允许的卷类型列表。可被允许的值对应于创建卷时可以设置的卷来源。 +卷类型的完整列表可参见[卷类型](/zh-cn/docs/concepts/storage/volumes/#types-of-volumes)。 +此外,`*` 可以用来允许所有卷类型。 + +对于新的 Pod 安全策略设置而言,建议设置的卷类型的**最小列表**包含: + +- `configMap` +- `downwardAPI` +- `emptyDir` +- `persistentVolumeClaim` +- `secret` +- `projected` + +{{< warning >}} + +PodSecurityPolicy 并不限制可以被 `PersistentVolumeClaim` 所引用的 +`PersistentVolume` 对象的类型。 +此外 `hostPath` 类型的 `PersistentVolume` 不支持只读访问模式。 +应该仅赋予受信用户创建 `PersistentVolume` 对象的访问权限。 +{{< /warning >}} + + +**FSGroup** - 控制应用到某些卷上的附加用户组。 + +- *MustRunAs* - 要求至少指定一个 `range`。 + 使用范围中的最小值作为默认值。所有 range 值都会被用来执行验证。 +- *MayRunAs* - 要求至少指定一个 `range`。 + 允许不设置 `FSGroups`,且无默认值。 + 如果 `FSGroup` 被设置,则所有 range 值都会被用来执行验证检查。 +- *RunAsAny* - 不提供默认值。允许设置任意 `fsGroup` ID 值。 + + +**AllowedHostPaths** - 设置一组宿主文件目录,这些目录项可以在 `hostPath` 卷中使用。 +列表为空意味着对所使用的宿主目录没有限制。 +此选项定义包含一个对象列表,表中对象包含 `pathPrefix` 字段,用来表示允许 +`hostPath` 卷挂载以所指定前缀开头的路径。 +对象中还包含一个 `readOnly` 字段,用来表示对应的卷必须以只读方式挂载。 +例如: + + +```yaml +allowedHostPaths: + # 下面的设置允许 "/foo"、"/foo/"、"/foo/bar" 等路径,但禁止 + # "/fool"、"/etc/foo" 这些路径。 + # "/foo/../" 总会被当作非法路径。 + - pathPrefix: "/foo" + readOnly: true # 仅允许只读模式挂载 +``` + +{{< warning >}} + +容器如果对宿主文件系统拥有不受限制的访问权限,就可以有很多种方式提升自己的特权, +包括读取其他容器中的数据、滥用系统服务(如 `kubelet`)的凭据信息等。 + + +由可写入的目录所构造的 `hostPath` 卷能够允许容器写入数据到宿主文件系统, +并且在写入时避开 `pathPrefix` 所设置的目录限制。 +`readOnly: 
true` 这一设置在 Kubernetes 1.11 版本之后可用。 +必须针对 `allowedHostPaths` 中的 *所有* 条目设置此属性才能有效地限制容器只能访问 +`pathPrefix` 所指定的目录。 +{{< /warning >}} + + +**ReadOnlyRootFilesystem** - 要求容器必须以只读方式挂载根文件系统来运行 +(即不允许存在可写入层)。 + + +### FlexVolume 驱动 {#flexvolume-drivers} + +此配置指定一个可以被 FlexVolume 卷使用的驱动程序的列表。 +空的列表或者 nil 值意味着对驱动没有任何限制。 +请确保[`volumes`](#volumes-and-file-systems) 字段包含了 `flexVolume` 卷类型, +否则所有 FlexVolume 驱动都被禁止。 + + + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: allow-flex-volumes +spec: + # spec 的其他字段 + volumes: + - flexVolume + allowedFlexVolumes: + - driver: example/lvm + - driver: example/cifs +``` + + +### 用户和组 {#users-and-groups} + +**RunAsUser** - 控制使用哪个用户 ID 来运行容器。 + +- *MustRunAs* - 必须至少设置一个 `range`。使用该范围内的第一个值作为默认值。 + 所有范围值都会被验证检查。 + +- *MustRunAsNonRoot* - 要求提交的 Pod 具有非零 `runAsUser` 值,或在镜像中 + (使用 UID 数值)定义了 `USER` 环境变量。 + 如果 Pod 既没有设置 `runAsNonRoot`,也没有设置 `runAsUser`,则该 Pod + 会被修改以设置 `runAsNonRoot=true`,从而要求容器通过 `USER` 指令给出非零的数值形式的用户 ID。 + 此配置没有默认值。采用此配置时,强烈建议设置 `allowPrivilegeEscalation=false`。 +- *RunAsAny* - 没有提供默认值。允许指定任何 `runAsUser` 配置。 + + +**RunAsGroup** - 控制运行容器时使用的主用户组 ID。 + +- *MustRunAs* - 要求至少指定一个 `range` 值。第一个范围中的最小值作为默认值。 + 所有范围值都被用来执行验证检查。 +- *MayRunAs* - 不要求设置 `RunAsGroup`。 + 不过,如果指定了 `RunAsGroup` 被设置,所设置值必须处于所定义的范围内。 +- *RunAsAny* - 未指定默认值。允许 `runAsGroup` 设置任何值。 + + +**SupplementalGroups** - 控制容器可以添加的组 ID。 + +- *MustRunAs* - 要求至少指定一个 `range` 值。第一个范围中的最小值用作默认值。 + 所有范围值都被用来执行验证检查。 +- *MayRunAs* - 要求至少指定一个 `range` 值。 + 允许不指定 `supplementalGroups` 且不设置默认值。 + 如果 `supplementalGroups` 被设置,则所有 range 值都被用来执行验证检查。 +- *RunAsAny* - 未指定默认值。允许为 `supplementalGroups` 设置任何值。 + + +### 特权提升 {#privilege-escalation} + +这一组选项控制容器的 `allowPrivilegeEscalation` 属性。该属性直接决定是否为容器进程设置 +[`no_new_privs`](https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt) +参数。此参数会禁止 `setuid` 属性的可执行文件更改有效用户 ID(EUID), +并且禁止启用额外权能的文件。例如,`no_new_privs` 会禁止使用 `ping` 工具。 +如果想有效地实施 `MustRunAsNonRoot` 控制,需要配置这一选项。 + + +**AllowPrivilegeEscalation** - 决定是否用户可以将容器的安全上下文设置为 +`allowPrivilegeEscalation=true`。默认设置下,这样做是允许的, +目的是避免造成现有的 `setuid` 应用无法运行。将此选项设置为 `false` +可以确保容器的所有子进程都无法获得比父进程更多的特权。 + + +**DefaultAllowPrivilegeEscalation** - 为 `allowPrivilegeEscalation` 选项设置默认值。 +不设置此选项时的默认行为是允许特权提升,以便运行 setuid 程序。 +如果不希望运行 setuid 程序,可以使用此字段将选项的默认值设置为禁止, +同时仍然允许 Pod 显式地请求 `allowPrivilegeEscalation`。 + + +### 权能字 {#capabilities} + +Linux 权能字(Capabilities)将传统上与超级用户相关联的特权作了细粒度的分解。 +其中某些权能字可以用来提升特权,打破容器边界,可以通过 PodSecurityPolicy 来限制。 +关于 Linux 权能字的更多细节,可参阅 +[capabilities(7)](http://man7.org/linux/man-pages/man7/capabilities.7.html)。 + +下列字段都可以配置为权能字的列表。表中的每一项都是 `ALL_CAPS` 中的一个权能字名称, +只是需要去掉 `CAP_` 前缀。 + + +**AllowedCapabilities** - 给出可以被添加到容器的权能字列表。 +默认的权能字集合是被隐式允许的那些。空集合意味着只能使用默认权能字集合, +不允许添加额外的权能字。`*` 可以用来设置允许所有权能字。 + + +**RequiredDropCapabilities** - 必须从容器中去除的权能字。 +所给的权能字会从默认权能字集合中去除,并且一定不可以添加。 +`RequiredDropCapabilities` 中列举的权能字不能出现在 +`AllowedCapabilities` 或 `DefaultAddCapabilities` 所给的列表中。 + + +**DefaultAddCapabilities** - 默认添加到容器的权能字集合。 +这一集合是作为容器运行时所设值的补充。 +关于使用 Docker 容器运行引擎时默认的权能字列表, +可参阅你的容器运行时的文档来了解使用 Linux 权能字的信息。 + + +### SELinux + +- *MustRunAs* - 要求必须配置 `seLinuxOptions`。默认使用 `seLinuxOptions`。 + 针对 `seLinuxOptions` 所给值执行验证检查。 +- *RunAsAny* - 没有提供默认值。允许任意指定的 `seLinuxOptions` 选项。 + + +### AllowedProcMountTypes + +`allowedProcMountTypes` 是一组可以允许的 proc 挂载类型列表。 +空表或者 nil 值表示只能使用 `DefaultProcMountType`。 + +`DefaultProcMount` 使用容器运行时的默认值设置来决定 `/proc` 的只读挂载模式和路径屏蔽。 +大多数容器运行时都会屏蔽 `/proc` 下面的某些路径以避免特殊设备或信息被不小心暴露给容器。 +这一配置使所有 `Default` 字符串值来表示。 + +此外唯一的ProcMountType 是 
`UnmaskedProcMount`,意味着即将绕过容器运行时的路径屏蔽行为, +确保新创建的 `/proc` 不会被容器修改。此配置用字符串 `Unmasked` 来表示。 + + +### AppArmor + +通过 PodSecurityPolicy 上的注解来控制。 +详情请参阅 +[AppArmor 文档](/zh-cn/docs/tutorials/security/apparmor/#podsecuritypolicy-annotations)。 + + + +### Seccomp + +从 Kubernetes v1.19 开始,你可以使用 Pod 或容器的 `securityContext` 中的 `seccompProfile` +字段来[控制 seccomp 配置的使用](/zh-cn/docs/tutorials/security/seccomp/)。 +在更早的版本中,seccomp 是通过为 Pod 添加注解来控制的。 +相同的 PodSecurityPolicy 可以用于不同版本,进而控制如何应用对应的字段或注解。 + +**seccomp.security.alpha.kubernetes.io/defaultProfileName** - +注解用来指定为容器配置默认的 seccomp 模版。可选值为: + + +- `unconfined` - 如果没有指定其他替代方案,Seccomp 不会被应用到容器进程上 + (Kubernets 中的默认设置)。 +- `runtime/default` - 使用默认的容器运行时模版。 +- `docker/default` - 使用 Docker 的默认 seccomp 模版。自 1.11 版本废弃。 + 应改为使用 `runtime/default`。 +- `localhost/<路径名>` - 指定节点上路径 `/<路径名>` 下的一个文件作为其模版。 + 其中 `` 是通过 `kubelet` 的标志 `--seccomp-profile-root` 来指定的。 + 如果未定义 `--seccomp-profile-root` 标志,则使用默认的路径 `/seccomp`, + 其中 `` 是通过 `--root-dir` 标志来设置的。 + + {{< note >}} + + 从 Kubernetes v1.19 开始,`--seccomp-profile-root` 标志已被启用。 + 用户应尝试使用默认路径。 + {{< /note >}} + + +**seccomp.security.alpha.kubernetes.io/allowedProfileNames** - 指定可以为 +Pod seccomp 注解配置的值的注解。取值为一个可用值的列表。 +表中每项可以是上述各值之一,还可以是 `*`,用来表示允许所有的模版。 +如果没有设置此注解,意味着默认的 seccomp 模版是不可更改的。 + + +### Sysctl + +默认情况下,所有的安全的 sysctl 都是被允许的。 + + +- `forbiddenSysctls` - 用来排除某些特定的 sysctl。 + 你可以在此列表中禁止一些安全的或者不安全的 sysctl。 + 此选项设置为 `*` 意味着禁止设置所有 sysctl。 +- `allowedUnsafeSysctls` - 用来启用那些被默认列表所禁用的 sysctl, + 前提是所启用的 sysctl 没有被列在 `forbiddenSysctls` 中。 + + +参阅 [Sysctl 文档](/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/#podsecuritypolicy)。 + +## {{% heading "whatsnext" %}} + + +- 参阅 [PodSecurityPolicy Deprecation: Past, Present, and + Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/), + 了解 Pod 安全策略的未来。 + +- 参阅 [Pod 安全标准](/zh-cn/docs/concepts/security/pod-security-standards/), + 了解策略建议。 +- 阅读 [PodSecurityPolicy 参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy), + 了解 API 细节。 + diff --git a/content/zh-cn/docs/concepts/security/pod-security-standards.md b/content/zh-cn/docs/concepts/security/pod-security-standards.md new file mode 100644 index 0000000000000..243bd9529c360 --- /dev/null +++ b/content/zh-cn/docs/concepts/security/pod-security-standards.md @@ -0,0 +1,655 @@ +--- +title: Pod 安全性标准 +description: > + 详细了解 Pod 安全性标准(Pod Security Standards)中所定义的不同策略级别。 +content_type: concept +weight: 10 +--- + + + + + +Pod 安全性标准定义了三种不同的 **策略(Policy)**,以广泛覆盖安全应用场景。 +这些策略是 **叠加式的(Cumulative)**,安全级别从高度宽松至高度受限。 +本指南概述了每个策略的要求。 + + +| Profile | 描述 | +| ------ | ----------- | +| Privileged | 不受限制的策略,提供最大可能范围的权限许可。此策略允许已知的特权提升。 | +| Baseline | 限制性最弱的策略,禁止已知的策略提升。允许使用默认的(规定最少)Pod 配置。 | +| Restricted | 限制性非常强的策略,遵循当前的保护 Pod 的最佳实践。 | + + + + +## Profile 细节 {#profile-details} + +### Privileged + + +**_Privileged_ 策略是有目的地开放且完全无限制的策略。** +此类策略通常针对由特权较高、受信任的用户所管理的系统级或基础设施级负载。 + +Privileged 策略定义中限制较少。默认允许的(Allow-by-default)实施机制(例如 gatekeeper) +可以缺省设置为 Privileged。 +与此不同,对于默认拒绝(Deny-by-default)的实施机制(如 Pod 安全策略)而言, +Privileged 策略应该禁止所有限制。 + +### Baseline + + +**_Baseline_ 策略的目标是便于常见的容器化应用采用,同时禁止已知的特权提升。** +此策略针对的是应用运维人员和非关键性应用的开发人员。 +下面列举的控制应该被实施(禁止): + +{{< note >}} + +在下述表格中,通配符(`*`)意味着一个列表中的所有元素。 +例如 `spec.containers[*].securityContext` 表示 _所定义的所有容器_ 的安全性上下文对象。 +如果所列出的任一容器不能满足要求,整个 Pod 将无法通过校验。 +{{< /note >}} + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
        Baseline 策略规范
        控制(Control)策略(Policy)
        HostProcess +

        + Windows Pod 提供了运行 HostProcess 容器 的能力,这使得对 Windows 节点的特权访问成为可能。Baseline 策略中禁止对宿主的特权访问。{{< feature-state for_k8s_version="v1.23" state="beta" >}} +

        +

        限制的字段

        +
          +
        • spec.securityContext.windowsOptions.hostProcess
        • +
        • spec.containers[*].securityContext.windowsOptions.hostProcess
        • +
        • spec.initContainers[*].securityContext.windowsOptions.hostProcess
        • +
        • spec.ephemeralContainers[*].securityContext.windowsOptions.hostProcess
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • false
        • +
        +
        宿主名字空间 +

        必须禁止共享宿主上的名字空间。

        +

        限制的字段

        +
          +
        • spec.hostNetwork
        • +
        • spec.hostPID
        • +
        • spec.hostIPC
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • false
        • +
        +
        特权容器 +

        特权 Pod 会使大多数安全性机制失效,必须被禁止。

        +

        限制的字段

        +
          +
        • spec.containers[*].securityContext.privileged
        • +
        • spec.initContainers[*].securityContext.privileged
        • +
        • spec.ephemeralContainers[*].securityContext.privileged
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • false
        • +
        +
        权能 +

        必须禁止添加下面所列权能之外的其他权能。

        +

        限制的字段

        +
          +
        • spec.containers[*].securityContext.capabilities.add
        • +
        • spec.initContainers[*].securityContext.capabilities.add
        • +
        • spec.ephemeralContainers[*].securityContext.capabilities.add
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • AUDIT_WRITE
        • +
        • CHOWN
        • +
        • DAC_OVERRIDE
        • +
        • FOWNER
        • +
        • FSETID
        • +
        • KILL
        • +
        • MKNOD
        • +
        • NET_BIND_SERVICE
        • +
        • SETFCAP
        • +
        • SETGID
        • +
        • SETPCAP
        • +
        • SETUID
        • +
        • SYS_CHROOT
        • +
        +
        HostPath 卷 +

        必须禁止 HostPath 卷。

        +

        限制的字段

        +
          +
        • spec.volumes[*].hostPath
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        +
        +
        宿主端口 +

        应该禁止使用宿主端口,或者至少限制只能使用某确定列表中的端口。

        +

        限制的字段

        +
          +
        • spec.containers[*].ports[*].hostPort
        • +
        • spec.initContainers[*].ports[*].hostPort
        • +
        • spec.ephemeralContainers[*].ports[*].hostPort
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • 已知列表
        • +
        • 0
        • +
        +
        AppArmor +

        在受支持的主机上，默认应用 runtime/default AppArmor 配置。Baseline 策略应防止覆盖或禁用默认配置，或者将覆盖限制在所允许的配置集合范围之内。

        +

        限制的字段

        +
          +
        • metadata.annotations["container.apparmor.security.beta.kubernetes.io/*"]
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • runtime/default
        • +
        • localhost/*
        • +
        +
        SELinux +

        设置 SELinux 类型的操作是被限制的,设置自定义的 SELinux 用户或角色选项是被禁止的。

        +

        限制的字段

        +
          +
        • spec.securityContext.seLinuxOptions.type
        • +
        • spec.containers[*].securityContext.seLinuxOptions.type
        • +
        • spec.initContainers[*].securityContext.seLinuxOptions.type
        • +
        • spec.ephemeralContainers[*].securityContext.seLinuxOptions.type
        • +
        +

        准许的取值

        +
          +
        • 未定义、""
        • +
        • container_t
        • +
        • container_init_t
        • +
        • container_kvm_t
        • +
        +
        +

        限制的字段

        +
          +
        • spec.securityContext.seLinuxOptions.user
        • +
        • spec.containers[*].securityContext.seLinuxOptions.user
        • +
        • spec.initContainers[*].securityContext.seLinuxOptions.user
        • +
        • spec.ephemeralContainers[*].securityContext.seLinuxOptions.user
        • +
        • spec.securityContext.seLinuxOptions.role
        • +
        • spec.containers[*].securityContext.seLinuxOptions.role
        • +
        • spec.initContainers[*].securityContext.seLinuxOptions.role
        • +
        • spec.ephemeralContainers[*].securityContext.seLinuxOptions.role
        • +
        +

        准许的取值

        +
          +
        • 未定义、""
        • +
        +
        /proc挂载类型 +

        要求使用默认的 /proc 掩码以减小攻击面。

        +

        限制的字段

        +
          +
        • spec.containers[*].securityContext.procMount
        • +
        • spec.initContainers[*].securityContext.procMount
        • +
        • spec.ephemeralContainers[*].securityContext.procMount
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • Default
        • +
        +
        Seccomp +

        Seccomp 配置不可显式设置为 Unconfined。

        +

        限制的字段

        +
          +
        • spec.securityContext.seccompProfile.type
        • +
        • spec.containers[*].securityContext.seccompProfile.type
        • +
        • spec.initContainers[*].securityContext.seccompProfile.type
        • +
        • spec.ephemeralContainers[*].securityContext.seccompProfile.type
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • RuntimeDefault
        • +
        • Localhost
        • +
        +
        Sysctls +

        Sysctls 可以禁用安全机制或影响宿主上所有容器,因此除了若干“安全”的子集之外,应该被禁止。如果某 sysctl 是受容器或 Pod 的名字空间限制,且与节点上其他 Pod 或进程相隔离,可认为是安全的。

        +

        限制的字段

        +
          +
        • spec.securityContext.sysctls[*].name
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • kernel.shm_rmid_forced
        • +
        • net.ipv4.ip_local_port_range
        • +
        • net.ipv4.ip_unprivileged_port_start
        • +
        • net.ipv4.tcp_syncookies
        • +
        • net.ipv4.ping_group_range
        • +
        +
        + +### Restricted + + +**_Restricted_ 策略旨在实施当前保护 Pod 的最佳实践,尽管这样作可能会牺牲一些兼容性。** +该类策略主要针对运维人员和安全性很重要的应用的开发人员,以及不太被信任的用户。 +下面列举的控制需要被实施(禁止): + +{{< note >}} + +在下述表格中,通配符(`*`)意味着一个列表中的所有元素。 +例如 `spec.containers[*].securityContext` 表示 **所定义的所有容器** 的安全性上下文对象。 +如果所列出的任一容器不能满足要求,整个 Pod 将无法通过校验。 +{{< /note >}} + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
        Restricted 策略规范
        控制策略
        Baseline 策略的所有要求。
        卷类型 +

        除了限制 HostPath 卷之外,此类策略还限制可以通过 PersistentVolumes 定义的非核心卷类型。

        +

        限制的字段

        +
          +
        • spec.volumes[*]
        • +
        +

        准许的取值

        + spec.volumes[*] 列表中的每个条目必须将下面字段之一设置为非空值: +
          +
        • spec.volumes[*].configMap
        • +
        • spec.volumes[*].csi
        • +
        • spec.volumes[*].downwardAPI
        • +
        • spec.volumes[*].emptyDir
        • +
        • spec.volumes[*].ephemeral
        • +
        • spec.volumes[*].persistentVolumeClaim
        • +
        • spec.volumes[*].projected
        • +
        • spec.volumes[*].secret
        • +
        +
        特权提升(v1.8+) +

        禁止(通过 SetUID 或 SetGID 文件模式)获得特权提升。

        +

        限制的字段

        +
          +
        • spec.containers[*].securityContext.allowPrivilegeEscalation
        • +
        • spec.initContainers[*].securityContext.allowPrivilegeEscalation
        • +
        • spec.ephemeralContainers[*].securityContext.allowPrivilegeEscalation
        • +
        +

        准许的取值

        +
          +
        • false
        • +
        +
        以非 root 账号运行 +

        容器必须以非 root 账号运行。

        +

        限制的字段

        +
          +
        • spec.securityContext.runAsNonRoot
        • +
        • spec.containers[*].securityContext.runAsNonRoot
        • +
        • spec.initContainers[*].securityContext.runAsNonRoot
        • +
        • spec.ephemeralContainers[*].securityContext.runAsNonRoot
        • +
        +

        准许的取值

        +
          +
        • true
        • +
        + 如果 Pod 级别的 spec.securityContext.runAsNonRoot 设置为 true，则允许容器级别的安全上下文字段设置为未定义/nil。 +
        非 root 用户(v1.23+) +

        容器不可以将 runAsUser 设置为 0

        +

        限制的字段

        +
          +
        • spec.securityContext.runAsUser
        • +
        • spec.containers[*].securityContext.runAsUser
        • +
        • spec.initContainers[*].securityContext.runAsUser
        • +
        • spec.ephemeralContainers[*].securityContext.runAsUser
        • +
        +

        准许的取值

        +
          +
        • 所有的非零值
        • +
        • undefined/null
        • +
        +
        Seccomp (v1.19+) +

        Seccomp Profile 必须被显式设置成一个允许的值。禁止使用 Unconfined Profile 或者指定不存在的 Profile。

        +

        限制的字段

        +
          +
        • spec.securityContext.seccompProfile.type
        • +
        • spec.containers[*].securityContext.seccompProfile.type
        • +
        • spec.initContainers[*].securityContext.seccompProfile.type
        • +
        • spec.ephemeralContainers[*].securityContext.seccompProfile.type
        • +
        +

        准许的取值

        +
          +
        • RuntimeDefault
        • +
        • Localhost
        • +
        + 如果 Pod 级别的 spec.securityContext.seccompProfile.type 已设置得当，则容器级别的安全上下文字段可以为未定义/nil。反之，如果所有容器级别的安全上下文字段均已设置，则 Pod 级别的字段可为未定义/nil。 +
        权能(v1.22+) +

        + 容器必须丢弃 ALL 权能，并且只允许添加回 NET_BIND_SERVICE 权能。 +

        +

        限制的字段

        +
          +
        • spec.containers[*].securityContext.capabilities.drop
        • +
        • spec.initContainers[*].securityContext.capabilities.drop
        • +
        • spec.ephemeralContainers[*].securityContext.capabilities.drop
        • +
        +

        准许的取值

        +
          +
        • 包括 ALL 在内的任意权能列表。
        • +
        +
        +

        限制的字段

        +
          +
        • spec.containers[*].securityContext.capabilities.add
        • +
        • spec.initContainers[*].securityContext.capabilities.add
        • +
        • spec.ephemeralContainers[*].securityContext.capabilities.add
        • +
        +

        准许的取值

        +
          +
        • 未定义、nil
        • +
        • NET_BIND_SERVICE
        • +
        +
        + + +## 策略实例化 {#policy-instantiation} + +将策略定义从策略实例中解耦出来有助于形成跨集群的策略理解和语言陈述, +以免绑定到特定的下层实施机制。 + +随着相关机制的成熟,这些机制会按策略分别定义在下面。特定策略的实施方法不在这里定义。 + + +[**Pod 安全性准入控制器**](/zh-cn/docs/concepts/security/pod-security-admission/) + +- {{< example file="security/podsecurity-privileged.yaml" >}}Privileged 名字空间{{< /example >}} +- {{< example file="security/podsecurity-baseline.yaml" >}}Baseline 名字空间{{< /example >}} +- {{< example file="security/podsecurity-restricted.yaml" >}}Restricted 名字空间{{< /example >}} + + +[**PodSecurityPolicy**](/zh-cn/docs/concepts/security/pod-security-policy/) (已弃用) + +- {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}} +- {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}} +- {{< example file="policy/restricted-psp.yaml" >}}Restricted{{< /example >}} + + +### 替代方案 {#alternatives} + +{{% thirdparty-content %}} + + +在 Kubernetes 生态系统中还在开发一些其他的替代方案,例如: + +- [Kubewarden](https://github.com/kubewarden) +- [Kyverno](https://kyverno.io/policies/pod-security/) +- [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) + + +## 常见问题 {#faq} + +### 为什么不存在介于 Privileged 和 Baseline 之间的策略类型 + + +这里定义的三种策略框架有一个明晰的线性递进关系,从最安全(Restricted)到最不安全, +并且覆盖了很大范围的工作负载。特权要求超出 Baseline 策略者通常是特定于应用的需求, +所以我们没有在这个范围内提供标准框架。 +这并不意味着在这样的情形下仍然只能使用 Privileged 框架, +只是说处于这个范围的策略需要因地制宜地定义。 + +SIG Auth 可能会在将来考虑这个范围的框架,前提是有对其他框架的需求。 + + +### 安全策略与安全上下文的区别是什么? + +[安全上下文](/zh-cn/docs/tasks/configure-pod-container/security-context/)在运行时配置 Pod +和容器。安全上下文是在 Pod 清单中作为 Pod 和容器规约的一部分来定义的, +所代表的是传递给容器运行时的参数。 + + +安全策略则是控制面用来对安全上下文以及安全性上下文之外的参数实施某种设置的机制。 +在 2020 年 7 月, +[Pod 安全性策略](/zh-cn/docs/concepts/security/pod-security-policy/)已被废弃, +取而代之的是内置的 [Pod 安全性准入控制器](/zh-cn/docs/concepts/security/pod-security-admission/)。 + + +### 我应该为我的 Windows Pod 实施哪种框架? + +Kubernetes 中的 Windows 负载与标准的基于 Linux 的负载相比有一些局限性和区别。 +尤其是 Pod SecurityContext +字段[对 Windows 不起作用](/zh-cn/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext)。 +因此,目前没有对应的标准 Pod 安全性框架。 + + +如果你为一个 Windows Pod 应用了 Restricted 策略,**可能会** 对该 Pod 的运行时产生影响。 +Restricted 策略需要强制执行 Linux 特有的限制(如 seccomp Profile,并且禁止特权提升)。 +如果 kubelet 和/或其容器运行时忽略了 Linux 特有的值,那么应该不影响 Windows Pod 正常工作。 +然而,对于使用 Windows 容器的 Pod 来说,缺乏强制执行意味着相比于 Restricted 策略,没有任何额外的限制。 + + +你应该只在 Privileged 策略下使用 HostProcess 标志来创建 HostProcess Pod。 +在 Baseline 和 Restricted 策略下,创建 Windows HostProcess Pod 是被禁止的, +因此任何 HostProcess Pod 都应该被认为是有特权的。 + + +### 沙箱(Sandboxed)Pod 怎么处理? 
{#what-about-sandboxed-pods} + +现在还没有 API 标准来控制 Pod 是否被视作沙箱化 Pod。 +沙箱 Pod 可以通过其是否使用沙箱化运行时(如 gVisor 或 Kata Container)来辨别, +不过目前还没有关于什么是沙箱化运行时的标准定义。 + + +沙箱化负载所需要的保护可能彼此各不相同。例如,当负载与下层内核直接隔离开来时, +限制特权化操作的许可就不那么重要。这使得那些需要更多许可权限的负载仍能被有效隔离。 + +此外,沙箱化负载的保护高度依赖于沙箱化的实现方法。 +因此,现在还没有针对所有沙箱化负载的建议策略。 + diff --git a/content/zh-cn/docs/concepts/security/rbac-good-practices.md b/content/zh-cn/docs/concepts/security/rbac-good-practices.md new file mode 100644 index 0000000000000..a8ccbaca4e77e --- /dev/null +++ b/content/zh-cn/docs/concepts/security/rbac-good-practices.md @@ -0,0 +1,357 @@ +--- +title: 基于角色的访问控制良好实践 +description: > + 为集群操作人员提供的良好的 RBAC 设计原则和实践。 +content_type: concept +weight: 60 +--- + + + + + + + +Kubernetes {{< glossary_tooltip text="RBAC" term_id="rbac" >}} +是一项重要的安全控制措施,用于保证集群用户和工作负载只能访问履行自身角色所需的资源。 +在为集群用户设计权限时,请务必确保集群管理员知道可能发生特权提级的地方, +降低因过多权限而导致安全事件的风险。 + +此文档的良好实践应该与通用 +[RBAC 文档](/zh-cn/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update)一起阅读。 + + + + +## 通用的良好实践 {#general-good-practice} + +### 最小特权 {#least-privilege} + + +理想情况下,分配给用户和服务帐户的 RBAC 权限应该是最小的。 +仅应使用操作明确需要的权限,虽然每个集群会有所不同,但可以应用的一些常规规则: + + +- 尽可能在命名空间级别分配权限。授予用户在特定命名空间中的权限时使用 RoleBinding + 而不是 ClusterRoleBinding。 +- 尽可能避免通过通配符设置权限,尤其是对所有资源的权限。 + 由于 Kubernetes 是一个可扩展的系统,因此通过通配符来授予访问权限不仅会授予集群中当前的所有对象类型, + 还包含所有未来被创建的所有对象类型。 +- 管理员不应使用 `cluster-admin` 账号,除非特别需要。为低特权帐户提供 + [伪装权限](/zh-cn/docs/reference/access-authn-authz/authentication/#user-impersonation) + 可以避免意外修改集群资源。 +- 避免将用户添加到 `system:masters` 组。任何属于此组成员的用户都会绕过所有 RBAC 权限检查, + 始终具有不受限制的超级用户访问权限,并且不能通过删除 `RoleBinding` 或 `ClusterRoleBinding` + 来取消其权限。顺便说一句,如果集群是使用 Webhook 鉴权,此组的成员身份也会绕过该 + Webhook(来自属于该组成员的用户的请求永远不会发送到 Webhook)。 + + +### 最大限度地减少特权令牌的分发 {#minimize-distribution-of-privileged-tokens} + + +理想情况下,不应为 Pod 分配具有强大权限(例如,在[特权提级的风险](#privilege-escalation-risks)中列出的任一权限)的服务帐户。 +如果工作负载需要比较大的权限,请考虑以下做法: +- 限制运行此类 Pod 的节点数量。确保你运行的任何 DaemonSet 都是必需的, + 并且以最小权限运行,以限制容器逃逸的影响范围。 +- 避免将此类 Pod 与不可信任或公开的 Pod 在一起运行。 + 考虑使用[污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)、 + [节点亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)或 + [Pod 反亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)确保 + Pod 不会与不可信或不太受信任的 Pod 一起运行。 + 特别注意可信度不高的 Pod 不符合 **Restricted** Pod 安全标准的情况。 + +### 加固 {#hardening} + +Kubernetes 默认提供访问权限并非是每个集群都需要的。 +审查默认提供的 RBAC 权限为安全加固提供了机会。 +一般来说,不应该更改 `system:` 帐户的某些权限,有一些方式来强化现有集群的权限: + + +- 审查 `system:unauthenticated` 组的绑定,并在可能的情况下将其删除, + 因为这会给所有能够访问 API 服务器的人以网络级别的权限。 +- 通过设置 `automountServiceAccountToken: false` 来避免服务账号令牌的默认自动挂载, + 有关更多详细信息,请参阅[使用默认服务账号令牌](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server)。 + 此参数可覆盖 Pod 服务账号设置,而需要服务账号令牌的工作负载仍可以挂载。 + + +### 定期检查 {#periodic-review} +定期检查 Kubernetes RBAC 设置是否有冗余条目和提权可能性是至关重要的。 +如果攻击者能够创建与已删除用户同名的用户账号, +他们可以自动继承被删除用户的所有权限,尤其是分配给该用户的权限。 + + +## Kubernetes RBAC - 权限提权的风险 {#privilege-escalation-risks} + +在 Kubernetes RBAC 中有许多特权,如果被授予, +用户或服务帐户可以提升其在集群中的权限并可能影响集群外的系统。 + +本节旨在提醒集群操作员需要注意的不同领域, +以确保他们不会无意中授予超出预期的集群访问权限。 + + +### 列举 Secret {#listing-secrets} + +大家都很清楚,若允许对 Secrets 执行 `get` 访问,用户就获得了访问 Secret 内容的能力。 +同样需要注意的是:`list` 和 `watch` 访问也会授权用户获取 Secret 的内容。 +例如,当返回 List 响应时(例如,通过 +`kubectl get secrets -A -o yaml`),响应包含所有 Secret 的内容。 + + +### 工作负载的创建 {#workload-creation} + +能够创建工作负载的用户(Pod 或管理 Pod 的[工作负载资源](/zh-cn/docs/concepts/workloads/controllers/)) +能够访问下层的节点,除非基于 Kubernetes 的 +[Pod 
安全标准](/zh-cn/docs/concepts/security/pod-security-standards/)做限制。 + + +可以运行特权 Pod 的用户可以利用该访问权限获得节点访问权限, +并可能进一步提升他们的特权。如果你不完全信任某用户或其他主体, +不相信他们能够创建比较安全且相互隔离的 Pod,你应该强制实施 **Baseline** +或 **Restricted** Pod 安全标准。 +你可以使用 [Pod 安全性准入](/zh-cn/docs/concepts/security/pod-security-admission/)或其他(第三方)机制来强制实施这些限制。 + + +你还可以使用已弃用的 [PodSecurityPolicy](/zh-cn/docs/concepts/security/pod-security-policy/) +机制以限制用户创建特权 Pod 的能力 (特别注意:PodSecurityPolicy 已计划在版本 1.25 中删除)。 + + +在命名空间中创建工作负载还会授予对该命名空间中 Secret 的间接访问权限。 +在 kube-system 或类似特权的命名空间中创建 Pod +可以授予用户不需要通过 RBAC 即可获取 Secret 访问权限。 + + +### 持久卷的创建 {#persistent-volume-creation} + +如 [PodSecurityPolicy](/zh-cn/docs/concepts/security/pod-security-policy/#volumes-and-file-systems) +文档中所述,创建 PersistentVolumes 的权限可以提权访问底层主机。 +如果需要访问 PersistentVolume,受信任的管理员应该创建 `PersistentVolume`, +受约束的用户应该使用 `PersistentVolumeClaim` 访问该存储。 + + +### 访问 Node 的 `proxy` 子资源 {#access-to-proxy-subresource-of-nodes} + +有权访问 Node 对象的 proxy 子资源的用户有权访问 Kubelet API, +这允许在他们有权访问的节点上的所有 Pod 上执行命令。 +此访问绕过审计日志记录和准入控制,因此在授予对此资源的权限前应小心。 + + +### esclate 动词 {#escalate-verb} +通常,RBAC 系统会阻止用户创建比他所拥有的更多权限的 `ClusterRole`。 +而 `escalate` 动词是个例外。如 +[RBAC 文档](/zh-cn/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update) +中所述,拥有此权限的用户可以有效地提升他们的权限。 + + +### bind 动词 {#bind-verb} + +与 `escalate` 动作类似,授予此权限的用户可以绕过 Kubernetes +对权限提升的内置保护,用户可以创建并绑定尚不具有的权限的角色。 + + +### impersonate 动词 {#impersonate-verb} + +此动词允许用户伪装并获得集群中其他用户的权限。 +授予它时应小心,以确保通过其中一个伪装账号不会获得过多的权限。 + + +### CSR 和证书颁发 {#csrs-and-certificate-issuing} + +CSR API 允许用户拥有 `create` CSR 的权限和 `update` +`certificatesigningrequests/approval` 的权限, +其中签名者是 `kubernetes.io/kube-apiserver-client`, +通过此签名创建的客户端证书允许用户向集群进行身份验证。 +这些客户端证书可以包含任意的名称,包括 Kubernetes 系统组件的副本。 +这将有利于特权提级。 + + +### 令牌请求 {#token-request} + +拥有 `serviceaccounts/token` 的 `create` 权限的用户可以创建 +TokenRequest 来发布现有服务帐户的令牌。 + + +### 控制准入 Webhook {#control-admission-webhooks} + +可以控制 `validatingwebhookconfigurations` 或 `mutatingwebhookconfigurations` +的用户可以控制能读取任何允许进入集群的对象的 webhook, +并且在有变更 webhook 的情况下,还可以变更准入的对象。 + + +## Kubernetes RBAC - 拒绝服务攻击的风险 {#denial-of-service-risks} + +### 对象创建拒绝服务 {#object-creation-dos} +有权在集群中创建对象的用户根据创建对象的大小和数量可能会创建足够大的对象, +产生拒绝服务状况,如 [Kubernetes 使用的 etcd 容易受到 OOM 攻击](https://github.com/kubernetes/kubernetes/issues/107325)中的讨论。 +允许太不受信任或者不受信任的用户对系统进行有限的访问在多租户集群中是特别重要的。 + +缓解此问题的一种选择是使用[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/#object-count-quota)以限制可以创建的对象数量。 \ No newline at end of file diff --git a/content/zh-cn/docs/concepts/security/windows-security.md b/content/zh-cn/docs/concepts/security/windows-security.md new file mode 100644 index 0000000000000..3cdf1090396a1 --- /dev/null +++ b/content/zh-cn/docs/concepts/security/windows-security.md @@ -0,0 +1,109 @@ +--- +title: Windows 节点的安全性 +content_type: concept +weight: 40 +--- + + + + + +本篇介绍特定于 Windows 操作系统的安全注意事项和最佳实践。 + + + + +## 保护节点上的 Secret 数据 {#protection-for-secret-data-on-nodes} + + +在 Windows 上,来自 Secret 的数据以明文形式写入节点的本地存储 +(与在 Linux 上使用 tmpfs / 内存中文件系统不同)。 +作为集群操作员,你应该采取以下两项额外措施: + + +1. 使用文件 ACL 来保护 Secret 的文件位置。 +2. 
使用 [BitLocker](https://docs.microsoft.com/windows/security/information-protection/bitlocker/bitlocker-how-to-deploy-on-windows-server) + 进行卷级加密。 + + +## 容器用户 {#container-users} + + +可以为 Windows Pod 或容器指定 [RunAsUsername](/zh-cn/docs/tasks/configure-pod-container/configure-runasusername) +以作为特定用户执行容器进程。这大致相当于 [RunAsUser](/zh-cn/docs/concepts/security/pod-security-policy/#users-and-groups)。 + + +Windows 容器提供两个默认用户帐户,ContainerUser 和 ContainerAdministrator。 +在微软的 Windows 容器安全文档 +[何时使用 ContainerAdmin 和 ContainerUser 用户帐户](https://docs.microsoft.com/zh-cn/virtualization/windowscontainers/manage-containers/container-security#when-to-use-containeradmin-and-containeruser-user-accounts) +中介绍了这两个用户帐户之间的区别。 + + +在容器构建过程中,可以将本地用户添加到容器镜像中。 + +{{< note >}} + +* 基于 [Nano Server](https://hub.docker.com/_/microsoft-windows-nanoserver) 的镜像默认以 `ContainerUser` 运行 +* 基于 [Server Core](https://hub.docker.com/_/microsoft-windows-servercore) 的镜像默认以 `ContainerAdministrator` 运行 +{{< /note >}} + + +Windows 容器还可以通过使用[组管理的服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-gmsa/)作为 +Active Directory 身份运行。 + + +## Pod 级安全隔离 {#pod-level-security-isolation} + + +Windows 节点不支持特定于 Linux 的 Pod 安全上下文机制(例如 SELinux、AppArmor、Seccomp 或自定义 POSIX 权能字)。 + + +Windows 上[不支持](/zh-cn/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext)特权容器。 +然而,可以在 Windows 上使用 [HostProcess 容器](/zh-cn/docs/tasks/configure-pod-container/create-hostprocess-pod)来执行 +Linux 上特权容器执行的许多任务。 diff --git a/content/zh/docs/concepts/services-networking/_index.md b/content/zh-cn/docs/concepts/services-networking/_index.md similarity index 71% rename from content/zh/docs/concepts/services-networking/_index.md rename to content/zh-cn/docs/concepts/services-networking/_index.md index 01d327577fe53..80c78e4b8b419 100644 --- a/content/zh/docs/concepts/services-networking/_index.md +++ b/content/zh-cn/docs/concepts/services-networking/_index.md @@ -7,43 +7,46 @@ description: Kubernetes 网络背后的概念和资源。 ## Kubernetes 网络模型 {#the-kubernetes-network-model} -每一个 [`Pod`](/zh/docs/concepts/workloads/pods/) 都有它自己的IP地址, -这就意味着你不需要显式地在 `Pod` 之间创建链接, 你几乎不需要处理容器端口到主机端口之间的映射。 +集群中每一个 [`Pod`](/zh-cn/docs/concepts/workloads/pods/) 都会获得自己的、 +独一无二的 IP 地址, +这就意味着你不需要显式地在 `Pod` 之间创建链接,你几乎不需要处理容器端口到主机端口之间的映射。 这将形成一个干净的、向后兼容的模型;在这个模型里,从端口分配、命名、服务发现、 -[负载均衡](/zh/docs/concepts/services-networking/ingress/#load-balancing)、应用配置和迁移的角度来看, -`Pod` 可以被视作虚拟机或者物理主机。 +[负载均衡](/zh-cn/docs/concepts/services-networking/ingress/#load-balancing)、 +应用配置和迁移的角度来看,`Pod` 可以被视作虚拟机或者物理主机。 + Kubernetes 强制要求所有网络设施都满足以下基本要求(从而排除了有意隔离网络的策略): -* [节点](/zh/docs/concepts/architecture/nodes/)上的 Pod 可以不通过 NAT 和其他任何节点上的 Pod 通信 + +* Pod 能够与所有其他[节点](/zh-cn/docs/concepts/architecture/nodes/)上的 Pod 通信, + 且不需要网络地址转译(NAT) * 节点上的代理(比如:系统守护进程、kubelet)可以和节点上的所有 Pod 通信 -备注:对于支持在主机网络中运行 `Pod` 的平台(比如:Linux): - -* 运行在节点主机网络里的 Pod 可以不通过 NAT 和所有节点上的 Pod 通信 + +说明:对于支持在主机网络中运行 `Pod` 的平台(比如:Linux), +当 Pod 挂接到节点的宿主网络上时,它们仍可以不通过 NAT 和所有节点上的 Pod 通信。 Kubernetes 网络解决四方面的问题: -- 一个 Pod 中的容器之间[通过本地回路(loopback)通信](/zh/docs/concepts/services-networking/dns-pod-service/)。 + +- 一个 Pod 中的容器之间[通过本地回路(loopback)通信](/zh-cn/docs/concepts/services-networking/dns-pod-service/)。 - 集群网络在不同 pod 之间提供通信。 -- [Service 资源](/zh/docs/concepts/services-networking/service/)允许你 - [对外暴露 Pods 中运行的应用程序](/zh/docs/concepts/services-networking/connect-applications-service/), +- [Service 资源](/zh-cn/docs/concepts/services-networking/service/)允许你 + [向外暴露 Pods 中运行的应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/), 以支持来自于集群外部的访问。 -- 
可以使用 Services 来[发布仅供集群内部使用的服务](/zh/docs/concepts/services-networking/service-traffic-policy/)。 +- 可以使用 Services 来[发布仅供集群内部使用的服务](/zh-cn/docs/concepts/services-networking/service-traffic-policy/)。 diff --git a/content/zh/docs/concepts/services-networking/connect-applications-service.md b/content/zh-cn/docs/concepts/services-networking/connect-applications-service.md similarity index 97% rename from content/zh/docs/concepts/services-networking/connect-applications-service.md rename to content/zh-cn/docs/concepts/services-networking/connect-applications-service.md index d24745a3f0af1..16668c1e466be 100644 --- a/content/zh/docs/concepts/services-networking/connect-applications-service.md +++ b/content/zh-cn/docs/concepts/services-networking/connect-applications-service.md @@ -82,7 +82,7 @@ You can read more about the [Kubernetes Networking Model](/docs/concepts/cluster Pod 或节点上使用 IP 的方式访问到它们。 如果你想的话,你依然可以将宿主节点的某个端口的流量转发到 Pod 中,但是出于网络模型的原因,你不必这么做。 -如果对此好奇,请参考 [Kubernetes 网络模型](/zh/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)。 +如果对此好奇,请参考 [Kubernetes 网络模型](/zh-cn/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)。 -* 进一步了解如何[使用 Service 访问集群中的应用](/zh/docs/tasks/access-application-cluster/service-access-application-cluster/) -* 进一步了解如何[使用 Service 将前端连接到后端](/zh/docs/tasks/access-application-cluster/connecting-frontend-backend/) -* 进一步了解如何[创建外部负载均衡器](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/) +* 进一步了解如何[使用 Service 访问集群中的应用](/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster/) +* 进一步了解如何[使用 Service 将前端连接到后端](/zh-cn/docs/tasks/access-application-cluster/connecting-frontend-backend/) +* 进一步了解如何[创建外部负载均衡器](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/) diff --git a/content/zh/docs/concepts/services-networking/dns-pod-service.md b/content/zh-cn/docs/concepts/services-networking/dns-pod-service.md similarity index 69% rename from content/zh/docs/concepts/services-networking/dns-pod-service.md rename to content/zh-cn/docs/concepts/services-networking/dns-pod-service.md index 2b9dfeb36f434..9946a88eb3262 100644 --- a/content/zh/docs/concepts/services-networking/dns-pod-service.md +++ b/content/zh-cn/docs/concepts/services-networking/dns-pod-service.md @@ -15,11 +15,11 @@ weight: 20 -Kubernetes 为服务和 Pods 创建 DNS 记录。 -你可以使用一致的 DNS 名称而非 IP 地址来访问服务。 +Kubernetes 为 Service 和 Pod 创建 DNS 记录。 +你可以使用一致的 DNS 名称而非 IP 地址访问 Service。 @@ -30,40 +30,39 @@ Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service's IP to resolve DNS names. 
--> -## 介绍 +## 介绍 {#introduction} -Kubernetes DNS 在集群上调度 DNS Pod 和服务,并配置 kubelet 以告知各个容器 -使用 DNS 服务的 IP 来解析 DNS 名称。 +Kubernetes DNS 除了在集群上调度 DNS Pod 和 Service, +还配置 kubelet 以告知各个容器使用 DNS Service 的 IP 来解析 DNS 名称。 集群中定义的每个 Service (包括 DNS 服务器自身)都被赋予一个 DNS 名称。 -默认情况下,客户端 Pod 的 DNS 搜索列表会包含 Pod 自身的名字空间和集群 -的默认域。 +默认情况下,客户端 Pod 的 DNS 搜索列表会包含 Pod 自身的名字空间和集群的默认域。 -### Service 的名字空间 +### Service 的名字空间 {#namespaces-of-services} DNS 查询可能因为执行查询的 Pod 所在的名字空间而返回不同的结果。 不指定名字空间的 DNS 查询会被限制在 Pod 所在的名字空间内。 -要访问其他名字空间中的服务,需要在 DNS 查询中给出名字空间。 +要访问其他名字空间中的 Service,需要在 DNS 查询中指定名字空间。 例如,假定名字空间 `test` 中存在一个 Pod,`prod` 名字空间中存在一个服务 `data`。 @@ -73,11 +72,11 @@ Pod 查询 `data` 时没有返回结果,因为使用的是 Pod 的名字空间 Pod 查询 `data.prod` 时则会返回预期的结果,因为查询中指定了名字空间。 DNS 查询可以使用 Pod 中的 `/etc/resolv.conf` 展开。kubelet 会为每个 Pod 生成此文件。例如,对 `data` 的查询可能被展开为 `data.test.svc.cluster.local`。 @@ -91,7 +90,7 @@ options ndots:5 ``` 概括起来,名字空间 `test` 中的 Pod 可以成功地解析 `data.prod` 或者 @@ -116,7 +115,7 @@ considered implementation details and are subject to change without warning. For more up-to-date specification, see [Kubernetes DNS-Based Service Discovery](https://github.com/kubernetes/dns/blob/master/docs/specification.md). --> -以下各节详细介绍了被支持的 DNS 记录类型和被支持的布局。 +以下各节详细介绍已支持的 DNS 记录类型和布局。 其它布局、名称或者查询即使碰巧可以工作,也应视为实现细节, 将来很可能被更改而且不会因此发出警告。 有关最新规范请查看 @@ -127,28 +126,30 @@ For more up-to-date specification, see ### A/AAAA records -"Normal" (not headless) Services are assigned a DNS A or AAAA record for a name of the -form `my-svc.my-namespace.svc.cluster-domain.example`. This resolves to the cluster IP +"Normal" (not headless) Services are assigned a DNS A or AAAA record, +depending on the IP family of the Service, for a name of the form +`my-svc.my-namespace.svc.cluster-domain.example`. This resolves to the cluster IP of the Service. -"Headless" (without a cluster IP) Services are also assigned a DNS A record for -a name of the form `my-svc.my-namespace.svc.cluster-domain.example`. Unlike normal -Services, this resolves to the set of IPs of the pods selected by the Service. +"Headless" (without a cluster IP) Services are also assigned a DNS A or AAAA record, +depending on the IP family of the Service, for a name of the form +`my-svc.my-namespace.svc.cluster-domain.example`. Unlike normal +Services, this resolves to the set of IPs of the Pods selected by the Service. Clients are expected to consume the set or else use standard round-robin selection from the set. 
--> -### 服务 {#services} +### Services -#### A/AAAA 记录 +#### A/AAAA 记录 {#a-aaaa-records} -“普通” 服务(除了无头服务)会以 `my-svc.my-namespace.svc.cluster-domain.example` -这种名字的形式被分配一个 DNS A 或 AAAA 记录,取决于服务的 IP 协议族。 -该名称会解析成对应服务的集群 IP。 +“普通” Service(除了无头 Service)会以 `my-svc.my-namespace.svc.cluster-domain.example` +这种名字的形式被分配一个 DNS A 或 AAAA 记录,取决于 Service 的 IP 协议族。 +该名称会解析成对应 Service 的集群 IP。 -“无头(Headless)” 服务(没有集群 IP)也会以 +“无头(Headless)” Service (没有集群 IP)也会以 `my-svc.my-namespace.svc.cluster-domain.example` 这种名字的形式被指派一个 DNS A 或 AAAA 记录, -具体取决于服务的 IP 协议族。 -与普通服务不同,这一记录会被解析成对应服务所选择的 Pod 集合的 IP。 +具体取决于 Service 的 IP 协议族。 +与普通 Service 不同,这一记录会被解析成对应 Service 所选择的 Pod IP 的集合。 客户端要能够使用这组 IP,或者使用标准的轮转策略从这组 IP 中进行选择。 #### SRV 记录 {#srv-records} -Kubernetes 会为命名端口创建 SRV 记录,这些端口是普通服务或 -[无头服务](/zh/docs/concepts/services-networking/service/#headless-services)的一部分。 -对每个命名端口,SRV 记录具有 `_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example` 这种形式。 -对普通服务,该记录会被解析成端口号和域名:`my-svc.my-namespace.svc.cluster-domain.example`。 -对无头服务,该记录会被解析成多个结果,服务对应的每个后端 Pod 各一个; -其中包含 Pod 端口号和形为 `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example` +Kubernetes 根据普通 Service 或 +[Headless Service](/zh-cn/docs/concepts/services-networking/service/#headless-services) +中的命名端口创建 SRV 记录。每个命名端口, +SRV 记录格式为 `_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example`。 +普通 Service,该记录会被解析成端口号和域名:`my-svc.my-namespace.svc.cluster-domain.example`。 +无头 Service,该记录会被解析成多个结果,及该服务的每个后端 Pod 各一个 SRV 记录, +其中包含 Pod 端口号和格式为 `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example` 的域名。 ## Pods @@ -179,20 +181,20 @@ Kubernetes 会为命名端口创建 SRV 记录,这些端口是普通服务或 -### A/AAAA 记录 +### A/AAAA 记录 {#a-aaaa-records} 一般而言,Pod 会对应如下 DNS 名字解析: @@ -210,11 +212,11 @@ Any pods exposed by a Service have the following DNS resolution available: -### Pod 的 hostname 和 subdomain 字段 +### Pod 的 hostname 和 subdomain 字段 {#pod-s-hostname-and-subdomain-fields} 当前,创建 Pod 时其主机名取自 Pod 的 `metadata.name` 值。 @@ -288,21 +290,21 @@ spec: ``` -如果某无头服务与某 Pod 在同一个名字空间中,且它们具有相同的子域名, +如果某无头 Service 与某 Pod 在同一个名字空间中,且它们具有相同的子域名, 集群的 DNS 服务器也会为该 Pod 的全限定主机名返回 A 记录或 AAAA 记录。 例如,在同一个名字空间中,给定一个主机名为 “busybox-1”、 子域名设置为 “default-subdomain” 的 Pod,和一个名称为 “`default-subdomain`” -的无头服务,Pod 将看到自己的 FQDN 为 +的无头 Service,Pod 将看到自己的 FQDN 为 "`busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`"。 DNS 会为此名字提供一个 A 记录或 AAAA 记录,指向该 Pod 的 IP。 “`busybox1`” 和 “`busybox2`” 这两个 Pod 分别具有它们自己的 A 或 AAAA 记录。 @@ -314,18 +316,16 @@ along with its IP. Endpoints 对象可以为任何端点地址及其 IP 指定 `hostname`。 {{< note >}} -因为没有为 Pod 名称创建 A 记录或 AAAA 记录,所以要创建 Pod 的 A 记录 -或 AAAA 记录需要 `hostname`。 - +由于不是为 Pod 名称创建 A 或 AAAA 记录的,因此 Pod 的 A 或 AAAA 需要 `hostname`。 没有设置 `hostname` 但设置了 `subdomain` 的 Pod 只会为 -无头服务创建 A 或 AAAA 记录(`default-subdomain.my-namespace.svc.cluster-domain.example`) +无头 Service 创建 A 或 AAAA 记录(`default-subdomain.my-namespace.svc.cluster-domain.example`) 指向 Pod 的 IP 地址。 另外,除非在服务上设置了 `publishNotReadyAddresses=True`,否则只有 Pod 进入就绪状态 才会有与之对应的记录。 @@ -341,12 +341,13 @@ record unless `publishNotReadyAddresses=True` is set on the Service. 
{{< feature-state for_k8s_version="v1.22" state="stable" >}} -**前置条件**:`SetHostnameAsFQDN` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) -必须在 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}} -上启用。 +当 Pod 配置为具有全限定域名 (FQDN) 时,其主机名是短主机名。 + 例如,如果你有一个具有完全限定域名 `busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example` 的 Pod, + 则默认情况下,该 Pod 内的 `hostname` 命令返回 `busybox-1`,而 `hostname --fqdn` 命令返回 FQDN。 当你在 Pod 规约中设置了 `setHostnameAsFQDN: true` 时,kubelet 会将 Pod 的全限定域名(FQDN)作为该 Pod 的主机名记录到 Pod 所在名字空间。 @@ -356,7 +357,7 @@ When a Pod is configured to have fully qualified domain name (FQDN), its hostnam 在 Linux 中,内核的主机名字段(`struct utsname` 的 `nodename` 字段)限定 最多 64 个字符。 @@ -364,24 +365,24 @@ If a Pod enables this feature and its FQDN is longer than 64 character, it will 如果 Pod 启用这一特性,而其 FQDN 超出 64 字符,Pod 的启动会失败。 Pod 会一直出于 `Pending` 状态(通过 `kubectl` 所看到的 `ContainerCreating`), 并产生错误事件,例如 -"Failed to construct FQDN from pod hostname and cluster domain, FQDN +"Failed to construct FQDN from Pod hostname and cluster domain, FQDN `long-FQDN` is too long (64 characters is the max, 70 characters requested)." (无法基于 Pod 主机名和集群域名构造 FQDN,FQDN `long-FQDN` 过长,至多 64 字符,请求字符数为 70)。 对于这种场景而言,改善用户体验的一种方式是创建一个 -[准入 Webhook 控制器](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks), +[准入 Webhook 控制器](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks), 在用户创建顶层对象(如 Deployment)的时候控制 FQDN 的长度。 {{< /note >}} ### Pod 的 DNS 策略 {#pod-s-dns-policy} @@ -403,14 +405,15 @@ DNS 策略可以逐个 Pod 来设定。目前 Kubernetes 支持以下特定 Pod 这些策略可以在 Pod 规约中的 `dnsPolicy` 字段设置: - "`Default`": Pod 从运行所在的节点继承名称解析配置。参考 - [相关讨论](/zh/docs/tasks/administer-cluster/dns-custom-nameservers) + [相关讨论](/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers) 获取更多信息。 - "`ClusterFirst`": 与配置的集群域后缀不匹配的任何 DNS 查询(例如 "www.kubernetes.io") 都将转发到从节点继承的上游名称服务器。集群管理员可能配置了额外的存根域和上游 DNS 服务器。 - 参阅[相关讨论](/zh/docs/tasks/administer-cluster/dns-custom-nameservers) + 参阅[相关讨论](/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers) 了解在这些场景中如何处理 DNS 查询的信息。 - "`ClusterFirstWithHostNet`":对于以 hostNetwork 方式运行的 Pod,应显式设置其 DNS 策略 "`ClusterFirstWithHostNet`"。 + - 注意:这在 Windows 上不支持。 有关详细信息,请参见[下文](#dns-windows)。 - "`None`": 此设置允许 Pod 忽略 Kubernetes 环境中的 DNS 设置。Pod 会使用其 `dnsConfig` 字段 所提供的 DNS 设置。 参见 [Pod 的 DNS 配置](#pod-dns-config)节。 @@ -450,7 +453,7 @@ spec: ``` -输出类似于 - +输出类似于: ``` nameserver fd00:79:30::a search default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example @@ -565,6 +567,41 @@ a list of search domains of up to 2048 characters. 
如果启用 kube-apiserver 和 kubelet 的特性门控 `ExpandedDNSConfig`,Kubernetes 将可以有最多 32 个 搜索域以及一个最多 2048 个字符的搜索域列表。 + +## Windows 节点上的 DNS 解析 {#dns-windows} + +- 在 Windows 节点上运行的 Pod 不支持 ClusterFirstWithHostNet。 + Windows 将所有带有 `.` 的名称视为全限定域名(FQDN)并跳过全限定域名(FQDN)解析。 +- 在 Windows 上,可以使用的 DNS 解析器有很多。 + 由于这些解析器彼此之间会有轻微的行为差别,建议使用 + [`Resolve-DNSName`](https://docs.microsoft.com/powershell/module/dnsclient/resolve-dnsname) + powershell cmdlet 进行名称查询解析。 +- 在 Linux 上,有一个 DNS 后缀列表,当解析全名失败时可以使用。 + 在 Windows 上,你只能有一个 DNS 后缀, + 即与该 Pod 的命名空间相关联的 DNS 后缀(例如:`mydns.svc.cluster.local`)。 + Windows 可以解析全限定域名(FQDN),和使用了该 DNS 后缀的 Services 或者网络名称。 + 例如,在 `default` 命名空间中生成一个 Pod,该 Pod 会获得的 DNS 后缀为 `default.svc.cluster.local`。 + 在 Windows 的 Pod 中,你可以解析 `kubernetes.default.svc.cluster.local` 和 `kubernetes`, + 但是不能解析部分限定名称(`kubernetes.default` 和 `kubernetes.default.svc`)。 + ## {{% heading "whatsnext" %}} 有关管理 DNS 配置的指导,请查看 -[配置 DNS 服务](/zh/docs/tasks/administer-cluster/dns-custom-nameservers/) +[配置 DNS 服务](/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers/) diff --git a/content/zh/docs/concepts/services-networking/dual-stack.md b/content/zh-cn/docs/concepts/services-networking/dual-stack.md similarity index 59% rename from content/zh/docs/concepts/services-networking/dual-stack.md rename to content/zh-cn/docs/concepts/services-networking/dual-stack.md index 8b1fb7f822b06..9476214ef3c55 100644 --- a/content/zh/docs/concepts/services-networking/dual-stack.md +++ b/content/zh-cn/docs/concepts/services-networking/dual-stack.md @@ -9,18 +9,17 @@ weight: 70 --- @@ -29,14 +28,16 @@ weight: 70 {{< feature-state for_k8s_version="v1.23" state="stable" >}} IPv4/IPv6 双协议栈网络能够将 IPv4 和 IPv6 地址分配给 {{< glossary_tooltip text="Pod" term_id="pod" >}} 和 {{< glossary_tooltip text="Service" term_id="service" >}}。 从 1.21 版本开始,Kubernetes 集群默认启用 IPv4/IPv6 双协议栈网络, 以支持同时分配 IPv4 和 IPv6 地址。 @@ -54,9 +55,9 @@ IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features: Kubernetes 集群的 IPv4/IPv6 双协议栈可提供下面的功能: * 双协议栈 pod 网络 (每个 pod 分配一个 IPv4 和 IPv6 地址) * IPv4 和 IPv6 启用的服务 @@ -73,49 +74,54 @@ The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack 为了使用 IPv4/IPv6 双栈的 Kubernetes 集群,需要满足以下先决条件: * Kubernetes 1.20 版本或更高版本,有关更早 Kubernetes 版本的使用双栈服务的信息, 请参考对应版本的 Kubernetes 文档。 * 提供商支持双协议栈网络(云提供商或其他提供商必须能够为 Kubernetes 节点提供可路由的 IPv4/IPv6 网络接口) -* 支持双协议栈的网络插件(如 Kubenet 或 Calico) +* 支持双协议栈的[网络插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) ## 配置 IPv4/IPv6 双协议栈 - +如果配置 IPv4/IPv6 双栈,请分配双栈集群网络: + * kube-apiserver: * `--service-cluster-ip-range=,` * kube-controller-manager: * `--cluster-cidr=,` * `--service-cluster-ip-range=,` - * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` 对于 IPv4 默认为 /24,对于 IPv6 默认为 /64 + * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` 对于 IPv4 默认为 /24, + 对于 IPv6 默认为 /64 * kube-proxy: * `--cluster-cidr=,` * kubelet: @@ -128,7 +134,8 @@ The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack IPv4 CIDR 的一个例子:`10.244.0.0/16`(尽管你会提供你自己的地址范围)。 @@ -139,31 +146,35 @@ IPv6 CIDR 的一个例子:`fdXY:IJKL:MNOP:15::/64` -## 服务 +## 服务 {#services} 你可以使用 IPv4 或 IPv6 地址来创建 {{< glossary_tooltip text="Service" term_id="service" >}}。 + 服务的地址族默认为第一个服务集群 IP 范围的地址族(通过 kube-apiserver 的 `--service-cluster-ip-range` 参数配置)。 + 当你定义服务时,可以选择将其配置为双栈。若要指定所需的行为,你可以设置 `.spec.ipFamilyPolicy` 字段为以下值之一: - * `SingleStack`:单栈服务。控制面使用第一个配置的服务集群 IP 范围为服务分配集群 IP。 * `PreferDualStack`: * 为服务分配 IPv4 和 IPv6 集群 IP 地址。 @@ -172,14 +183,18 @@ set the `.spec.ipFamilyPolicy` field 
to one of the following values: 列表中选择 `.spec.ClusterIP` 如果你想要定义哪个 IP 族用于单栈或定义双栈 IP 族的顺序,可以通过设置 服务上的可选字段 `.spec.ipFamilies` 来选择地址族。 {{< note >}} `.spec.ipFamilies` 字段是不可变的,因为系统无法为已经存在的服务重新分配 `.spec.ClusterIP`。如果你想改变 `.spec.ipFamilies`,则需要删除并重新创建服务。 @@ -195,7 +210,6 @@ You can set `.spec.ipFamilies` to any of the following array values: - `["IPv6"]` - `["IPv4","IPv6"]` (dual stack) - `["IPv6","IPv4"]` (dual stack) - --> - `["IPv4"]` - `["IPv6"]` @@ -212,35 +226,46 @@ The first family you list is used for the legacy `.spec.ClusterIP` field. These examples demonstrate the behavior of various dual-stack Service configuration scenarios. --> -### 双栈服务配置场景 +### 双栈服务配置场景 {#dual-stack-service-configuration-scenarios} 以下示例演示多种双栈服务配置场景下的行为。 -#### 新服务的双栈选项 +#### 新服务的双栈选项 {#dual-stack-options-on-new-services} - 1. 此服务规约中没有显式设定 `.spec.ipFamilyPolicy`。当你创建此服务时,Kubernetes 从所配置的第一个 `service-cluster-ip-range` 种为服务分配一个集群IP,并设置 `.spec.ipFamilyPolicy` 为 `SingleStack`。 - ([无选择算符的服务](/zh/docs/concepts/services-networking/service/#services-without-selectors) - 和[无头服务](/zh/docs/concepts/services-networking/service/#headless-services)的行为方式 + ([无选择算符的服务](/zh-cn/docs/concepts/services-networking/service/#services-without-selectors) + 和[无头服务](/zh-cn/docs/concepts/services-networking/service/#headless-services)的行为方式 与此相同。) - + {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} 2. 此服务规约显式地将 `.spec.ipFamilyPolicy` 设置为 `PreferDualStack`。 当你在双栈集群上创建此服务时,Kubernetes 会为该服务分配 IPv4 和 IPv6 地址。 @@ -258,7 +283,10 @@ These examples demonstrate the behavior of various dual-stack Service configurat {{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}} 3. 下面的服务规约显式地在 `.spec.ipFamilies` 中指定 `IPv6` 和 `IPv4`,并 将 `.spec.ipFamilyPolicy` 设定为 `PreferDualStack`。 @@ -271,16 +299,21 @@ These examples demonstrate the behavior of various dual-stack Service configurat -#### 现有服务的双栈默认值 +#### 现有服务的双栈默认值 {#dual-stack-defaults-on-existing-services} 下面示例演示了在服务已经存在的集群上新启用双栈时的默认行为。 (将现有集群升级到 1.21 或者更高版本会启用双协议栈支持。) 1. 在集群上启用双栈时,控制面会将现有服务(无论是 `IPv4` 还是 `IPv6`)配置 `.spec.ipFamilyPolicy` 为 `SingleStack` 并设置 `.spec.ipFamilies` @@ -296,7 +329,7 @@ These examples demonstrate the default behavior when dual-stack is newly enabled ```shell kubectl get svc my-service -o yaml ``` - + ```yaml apiVersion: v1 kind: Service @@ -323,10 +356,15 @@ These examples demonstrate the default behavior when dual-stack is newly enabled ``` 2. 在集群上启用双栈时,带有选择算符的现有 - [无头服务](/zh/docs/concepts/services-networking/service/#headless-services) + [无头服务](/zh-cn/docs/concepts/services-networking/service/#headless-services) 由控制面设置 `.spec.ipFamilyPolicy` 为 `SingleStack` 并设置 `.spec.ipFamilies` 为第一个服务集群 IP 范围的地址族(通过配置 kube-apiserver 的 `--service-cluster-ip-range` 参数),即使 `.spec.ClusterIP` 的设置值为 `None` 也如此。 @@ -341,7 +379,7 @@ These examples demonstrate the default behavior when dual-stack is newly enabled ```shell kubectl get svc my-service -o yaml ``` - + ```yaml apiVersion: v1 kind: Service @@ -361,13 +399,13 @@ These examples demonstrate the default behavior when dual-stack is newly enabled protocol: TCP targetPort: 80 selector: - app: MyApp + app: MyApp ``` -#### 在单栈和双栈之间切换服务 +#### 在单栈和双栈之间切换服务 {#switching-services-between-single-stack-and-dual-stack} 1. 
要将服务从单栈更改为双栈,根据需要将 `.spec.ipFamilyPolicy` 从 `SingleStack` 改为 `PreferDualStack` 或 `RequireDualStack`。 - 当你将此服务从单栈更改为双栈时,Kubernetes 将分配缺失的地址族,以便现在 - 该服务具有 IPv4 和 IPv6 地址。 + 当你将此服务从单栈更改为双栈时,Kubernetes 将分配缺失的地址族, + 以便现在该服务具有 IPv4 和 IPv6 地址。 编辑服务规约将 `.spec.ipFamilyPolicy` 从 `SingleStack` 改为 `PreferDualStack`。 - 2. 要将服务从双栈更改为单栈,请将 `.spec.ipFamilyPolicy` 从 `PreferDualStack` 或 `RequireDualStack` 改为 `SingleStack`。 当你将此服务从双栈更改为单栈时,Kubernetes 只保留 `.spec.ClusterIPs` @@ -418,24 +462,27 @@ Services can be changed from single-stack to dual-stack and from dual-stack to s -### 无选择算符的无头服务 +### 无选择算符的无头服务 {#headless-services-without-selector} -对于[不带选择算符的无头服务](/zh/docs/concepts/services-networking/service/#without-selectors), +对于[不带选择算符的无头服务](/zh-cn/docs/concepts/services-networking/service/#without-selectors), 若没有显式设置 `.spec.ipFamilyPolicy`,则 `.spec.ipFamilyPolicy` 字段默认设置为 `RequireDualStack`。 -### LoadBalancer 类型服务 +### LoadBalancer 类型服务 {#service-type-loadbalancer} 要为你的服务提供双栈负载均衡器: @@ -444,7 +491,8 @@ To provision a dual-stack load balancer for your Service: {{< note >}} 为了使用双栈的负载均衡器类型服务,你的云驱动必须支持 IPv4 和 IPv6 的负载均衡器。 {{< /note >}} @@ -452,10 +500,14 @@ To use a dual-stack `LoadBalancer` type Service, your cloud provider must suppor -## 出站流量 +## 出站流量 {#egress-traffic} 如果你要启用出站流量,以便使用非公开路由 IPv6 地址的 Pod 到达集群外地址 (例如公网),则需要通过透明代理或 IP 伪装等机制使 Pod 使用公共路由的 @@ -470,11 +522,41 @@ Ensure your {{< glossary_tooltip text="CNI" term_id="cni" >}} provider supports 确认你的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 驱动支持 IPv6。 {{< /note >}} + +## Windows 支持 {#windows-support} + +Windows 上的 Kubernetes 不支持单栈“仅 IPv6” 网络。 然而, +对于 Pod 和节点而言,仅支持单栈形式服务的双栈 IPv4/IPv6 网络是被支持的。 + +你可以使用 `l2bridge` 网络来实现 IPv4/IPv6 双栈联网。 + +{{< note >}} + +Windows 上的 Overlay (VXLAN) 网络**不**支持双栈网络。 +{{< /note >}} + + +关于 Windows 的不同网络模式,你可以进一步阅读 +[Windows 上的网络](/zh-cn/docs/concepts/services-networking/windows-networking#network-modes)。 + ## {{% heading "whatsnext" %}} -* [验证 IPv4/IPv6 双协议栈](/zh/docs/tasks/network/validate-dual-stack)网络 -* [使用 kubeadm 启用双协议栈网络](/zh/docs/setup/production-environment/tools/kubeadm/dual-stack-support/) +* [验证 IPv4/IPv6 双协议栈](/zh-cn/docs/tasks/network/validate-dual-stack)网络 +* [使用 kubeadm 启用双协议栈网络](/zh-cn/docs/setup/production-environment/tools/kubeadm/dual-stack-support/) diff --git a/content/zh/docs/concepts/services-networking/endpoint-slices.md b/content/zh-cn/docs/concepts/services-networking/endpoint-slices.md similarity index 83% rename from content/zh/docs/concepts/services-networking/endpoint-slices.md rename to content/zh-cn/docs/concepts/services-networking/endpoint-slices.md index 447caae39c8ed..b3f094929802b 100644 --- a/content/zh/docs/concepts/services-networking/endpoint-slices.md +++ b/content/zh-cn/docs/concepts/services-networking/endpoint-slices.md @@ -52,11 +52,11 @@ significant amounts of network traffic and processing when Endpoints changed. EndpointSlices help you mitigate those issues as well as provide an extensible platform for additional features such as topological routing. 
--> -由于任一服务的所有网络端点都保存在同一个 Endpoints 资源中,这类资源可能变得 -非常巨大,而这一变化会影响到 Kubernetes 组件(比如主控组件)的性能,并 -在 Endpoints 变化时产生大量的网络流量和额外的处理。 -EndpointSlice 能够帮助你缓解这一问题,还能为一些诸如拓扑路由这类的额外 -功能提供一个可扩展的平台。 +由于任一 Service 的所有网络端点都保存在同一个 Endpoints 资源中, +这类资源可能变得非常巨大,而这一变化会影响到 Kubernetes +组件(比如主控组件)的性能,并在 Endpoints 变化时产生大量的网络流量和额外的处理。 +EndpointSlice 能够帮助你缓解这一问题, +还能为一些诸如拓扑路由这类的额外功能提供一个可扩展的平台。 -在 v1 API 中,逐个端点设置的 `topology` 实际上被去除,以鼓励使用专用 -的字段 `nodeName` 和 `zone`。 +在 v1 API 中,逐个端点设置的 `topology` 实际上被去除, +以鼓励使用专用的字段 `nodeName` 和 `zone`。 -对 `EndpointSlice` 对象的 `endpoint` 字段设置任意的拓扑结构信息这一操作已被 -废弃,不再被 v1 API 所支持。取而代之的是 v1 API 所支持的 `nodeName` 和 `zone` +对 `EndpointSlice` 对象的 `endpoint` 字段设置任意的拓扑结构信息这一操作已被废弃, +不再被 v1 API 所支持。取而代之的是 v1 API 所支持的 `nodeName` 和 `zone` 这些独立的字段。这些字段可以在不同的 API 版本之间自动完成转译。 -例如,v1beta1 API 中 `topology` 字段的 `topology.kubernetes.io/zone` 取值可以 -在 v1 API 中通过 `zone` 字段访问。 +例如,v1beta1 API 中 `topology` 字段的 `topology.kubernetes.io/zone` +取值可以在 v1 API 中通过 `zone` 字段访问。 {{< /note >}} ### 属主关系 {#ownership} -在大多数场合下,EndpointSlice 都由某个 Service 所有,(因为)该端点切片正是 -为该服务跟踪记录其端点。这一属主关系是通过为每个 EndpointSlice 设置一个 -属主(owner)引用,同时设置 `kubernetes.io/service-name` 标签来标明的, -目的是方便查找隶属于某服务的所有 EndpointSlice。 +在大多数场合下,EndpointSlice 都由某个 Service 所有, +(因为)该端点切片正是为该服务跟踪记录其端点。这一属主关系是通过为每个 EndpointSlice +设置一个属主(owner)引用,同时设置 `kubernetes.io/service-name` 标签来标明的, +目的是方便查找隶属于某 Service 的所有 EndpointSlice。 ### EndpointSlice 镜像 {#endpointslice-mirroring} -在某些场合,应用会创建定制的 Endpoints 资源。为了保证这些应用不需要并发 -的更改 Endpoints 和 EndpointSlice 资源,集群的控制面将大多数 Endpoints +在某些场合,应用会创建定制的 Endpoints 资源。为了保证这些应用不需要并发的更改 +Endpoints 和 EndpointSlice 资源,集群的控制面将大多数 Endpoints 映射到对应的 EndpointSlice 之上。 -控制面尝试尽量将 EndpointSlice 填满,不过不会主动地在若干 EndpointSlice 之间 -执行再平衡操作。这里的逻辑也是相对直接的: +控制面尝试尽量将 EndpointSlice 填满,不过不会主动地在若干 EndpointSlice +之间执行再平衡操作。这里的逻辑也是相对直接的: -1. 列举所有现有的 EndpointSlices,移除那些不再需要的端点并更新那些已经 - 变化的端点。 +1. 列举所有现有的 EndpointSlices,移除那些不再需要的端点并更新那些已经变化的端点。 2. 列举所有在第一步中被更改过的 EndpointSlices,用新增加的端点将其填满。 3. 如果还有新的端点未被添加进去,尝试将这些端点添加到之前未更改的切片中, 或者创建新切片。 @@ -403,11 +402,11 @@ this approach will create a new EndpointSlice instead of filling up the 2 existing EndpointSlices. In other words, a single EndpointSlice creation is preferrable to multiple EndpointSlice updates. 
--> -这里比较重要的是,与在 EndpointSlice 之间完成最佳的分布相比,第三步中更看重 -限制 EndpointSlice 更新的操作次数。例如,如果有 10 个端点待添加,有两个 -EndpointSlice 中各有 5 个空位,上述方法会创建一个新的 EndpointSlice 而不是 -将现有的两个 EndpointSlice 都填满。换言之,与执行多个 EndpointSlice 更新操作 -相比较,方法会优先考虑执行一个 EndpointSlice 创建操作。 +这里比较重要的是,与在 EndpointSlice 之间完成最佳的分布相比,第三步中更看重限制 +EndpointSlice 更新的操作次数。例如,如果有 10 个端点待添加,有两个 EndpointSlice +中各有 5 个空位,上述方法会创建一个新的 EndpointSlice 而不是将现有的两个 +EndpointSlice 都填满。换言之,与执行多个 EndpointSlice 更新操作相比较, +方法会优先考虑执行一个 EndpointSlice 创建操作。 -由于 kube-proxy 在每个节点上运行并监视 EndpointSlice 状态,EndpointSlice 的 -每次变更都变得相对代价较高,因为这些状态变化要传递到集群中每个节点上。 -这一方法尝试限制要发送到所有节点上的变更消息个数,即使这样做可能会导致有 -多个 EndpointSlice 没有被填满。 +由于 kube-proxy 在每个节点上运行并监视 EndpointSlice 状态,EndpointSlice +的每次变更都变得相对代价较高,因为这些状态变化要传递到集群中每个节点上。 +这一方法尝试限制要发送到所有节点上的变更消息个数,即使这样做可能会导致有多个 +EndpointSlice 没有被填满。 -在实践中,上面这种并非最理想的分布是很少出现的。大多数被 EndpointSlice 控制器 -处理的变更都是足够小的,可以添加到某已有 EndpointSlice 中去的。并且,假使无法 -添加到已有的切片中,不管怎样都会快就会需要一个新的 EndpointSlice 对象。 -Deployment 的滚动更新为重新为 EndpointSlice 打包提供了一个自然的机会,所有 -Pod 及其对应的端点在这一期间都会被替换掉。 +在实践中,上面这种并非最理想的分布是很少出现的。大多数被 EndpointSlice +控制器处理的变更都是足够小的,可以添加到某已有 EndpointSlice 中去的。 +并且,假使无法添加到已有的切片中,不管怎样都会快就会需要一个新的 +EndpointSlice 对象。Deployment 的滚动更新为重新为 EndpointSlice +打包提供了一个自然的机会,所有 Pod 及其对应的端点在这一期间都会被替换掉。 -* 阅读[使用服务连接应用](/zh/docs/concepts/services-networking/connect-applications-service/) +* 阅读[使用 Service 连接到应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/) diff --git a/content/zh/docs/concepts/services-networking/ingress-controllers.md b/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md similarity index 93% rename from content/zh/docs/concepts/services-networking/ingress-controllers.md rename to content/zh-cn/docs/concepts/services-networking/ingress-controllers.md index a85e07340400a..1e2d6ae078ba5 100644 --- a/content/zh/docs/concepts/services-networking/ingress-controllers.md +++ b/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md @@ -143,13 +143,13 @@ You may deploy any number of ingress controllers using [ingress class](/docs/con within a cluster. Note the `.metadata.name` of your ingress class resource. When you create an ingress you would need that name to specify the `ingressClassName` field on your Ingress object (refer to [IngressSpec v1 reference](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec). `ingressClassName` is a replacement of the older [annotation method](/docs/concepts/services-networking/ingress/#deprecated-annotation). 
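As a quick, hedged illustration of the `ingressClassName` mechanism discussed above, the sketch below pairs an IngressClass with an Ingress that references it by name; the class name, the controller string, and the backend Service are placeholders for this example only, not values defined elsewhere on this page.

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class               # hypothetical class name
spec:
  # must match the identifier your Ingress controller watches for;
  # this string is only a placeholder
  controller: example.com/ingress-controller
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  # replaces the deprecated kubernetes.io/ingress.class annotation
  ingressClassName: example-class
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service    # hypothetical backend Service
                port:
                  number: 80
```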
--> 你可以使用 -[Ingress 类](/zh/docs/concepts/services-networking/ingress/#ingress-class)在集群中部署任意数量的 +[Ingress 类](/zh-cn/docs/concepts/services-networking/ingress/#ingress-class)在集群中部署任意数量的 Ingress 控制器。 请注意你的 Ingress 类资源的 `.metadata.name` 字段。 当你创建 Ingress 时,你需要用此字段的值来设置 Ingress 对象的 `ingressClassName` 字段(请参考 [IngressSpec v1 reference](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec))。 `ingressClassName` -是之前的[注解](/zh/docs/concepts/services-networking/ingress/#deprecated-annotation)做法的替代。 +是之前的[注解](/zh-cn/docs/concepts/services-networking/ingress/#deprecated-annotation)做法的替代。 -如果你不为 Ingress 指定一个 IngressClass,并且你的集群中只有一个 IngressClass 被标记为了集群默认,那么 -Kubernetes 会[应用](/zh/docs/concepts/services-networking/ingress/#default-ingress-class)此默认 +如果你不为 Ingress 指定 IngressClass,并且你的集群中只有一个 IngressClass 被标记为默认,那么 +Kubernetes 会将此集群的默认 IngressClass +[应用](/zh-cn/docs/concepts/services-networking/ingress/#default-ingress-class)到 Ingress 上。 IngressClass。 你可以通过将 -[`ingressclass.kubernetes.io/is-default-class` 注解](/zh/docs/reference/labels-annotations-taints/#ingressclass-kubernetes-io-is-default-class) +[`ingressclass.kubernetes.io/is-default-class` 注解](/zh-cn/docs/reference/labels-annotations-taints/#ingressclass-kubernetes-io-is-default-class) 的值设置为 `"true"` 来将一个 IngressClass 标记为集群默认。 理想情况下,所有 Ingress 控制器都应满足此规范,但各种 Ingress 控制器的操作略有不同。 @@ -180,6 +181,6 @@ Make sure you review your ingress controller's documentation to understand the c * Learn more about [Ingress](/docs/concepts/services-networking/ingress/). * [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube). --> -* 进一步了解 [Ingress](/zh/docs/concepts/services-networking/ingress/)。 -* [在 Minikube 上使用 NGINX 控制器安装 Ingress](/zh/docs/tasks/access-application-cluster/ingress-minikube)。 +* 进一步了解 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/)。 +* [在 Minikube 上使用 NGINX 控制器安装 Ingress](/zh-cn/docs/tasks/access-application-cluster/ingress-minikube)。 diff --git a/content/zh/docs/concepts/services-networking/ingress.md b/content/zh-cn/docs/concepts/services-networking/ingress.md similarity index 91% rename from content/zh/docs/concepts/services-networking/ingress.md rename to content/zh-cn/docs/concepts/services-networking/ingress.md index 037b135c0de61..6b5bf60c38ece 100644 --- a/content/zh/docs/concepts/services-networking/ingress.md +++ b/content/zh-cn/docs/concepts/services-networking/ingress.md @@ -20,7 +20,7 @@ weight: 40 For clarity, this guide defines the following terms: --> -## 术语 +## 术语 {#terminology} 为了表达更加清晰,本指南定义了以下术语: @@ -36,7 +36,7 @@ For clarity, this guide defines the following terms: 在此示例和在大多数常见的 Kubernetes 部署环境中,集群中的节点都不在公共网络中。 * 边缘路由器(Edge Router): 在集群中强制执行防火墙策略的路由器。可以是由云提供商管理的网关,也可以是物理硬件。 * 集群网络(Cluster Network): 一组逻辑的或物理的连接,根据 Kubernetes - [网络模型](/zh/docs/concepts/cluster-administration/networking/)在集群内实现通信。 + [网络模型](/zh-cn/docs/concepts/cluster-administration/networking/)在集群内实现通信。 * 服务(Service):Kubernetes {{< glossary_tooltip term_id="service" >}}, 使用{{< glossary_tooltip text="标签" term_id="label" >}}选择器(selectors)辨认一组 Pod。 除非另有说明,否则假定服务只具有在集群网络中可路由的虚拟 IP。 @@ -48,10 +48,11 @@ For clarity, this guide defines the following terms: {{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. --> -## Ingress 是什么? +## Ingress 是什么? 
{#what-is-ingress} [Ingress](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io) -公开了从集群外部到集群内[服务](/zh/docs/concepts/services-networking/service/)的 HTTP 和 HTTPS 路由。 +公开从集群外部到集群内[服务](/zh-cn/docs/concepts/services-networking/service/)的 +HTTP 和 HTTPS 路由。 流量路由由 Ingress 资源上定义的规则控制。 下面是一个将所有流量都发送到同一 Service 的简单 Ingress 示例: -{{< mermaid >}} -graph LR; - client([客户端])-. Ingress-管理的
        负载均衡器 .->ingress[Ingress]; - ingress-->|路由规则|service[Service]; - subgraph cluster - ingress; - service-->pod1[Pod]; - service-->pod2[Pod]; - end - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class ingress,service,pod1,pod2 k8s; - class client plain; - class cluster cluster; -{{}} +{{< figure src="/zh-cn/docs/images/ingress.svg" alt="ingress-diagram" class="diagram-large" caption="图. Ingress" link="https://mermaid.live/edit#pako:eNqNkstuwyAQRX8F4U0r2VHqPlSRKqt0UamLqlnaWWAYJygYLB59KMm_Fxcix-qmGwbuXA7DwAEzzQETXKutof0Ovb4vaoUQkwKUu6pi3FwXM_QSHGBt0VFFt8DRU2OWSGrKUUMlVQwMmhVLEV1Vcm9-aUksiuXRaO_CEhkv4WjBfAgG1TrGaLa-iaUw6a0DcwGI-WgOsF7zm-pN881fvRx1UDzeiFq7ghb1kgqFWiElyTjnuXVG74FkbdumefEpuNuRu_4rZ1pqQ7L5fL6YQPaPNiFuywcG9_-ihNyUkm6YSONWkjVNM8WUIyaeOJLO3clTB_KhL8NQDmVe-OJjxgZM5FhFiiFTK5zjDkxHBQ9_4zB4a-x20EGNSZhyaKmXrg7f5hSsvufUwTMXThtMWiot5Jh6p9ffimHijIezaSVoeN0uiqcfMJvf7w" >}} Ingress 可为 Service 提供外部可访问的 URL、负载均衡流量、终止 SSL/TLS,以及基于名称的虚拟托管。 -[Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) +[Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers) 通常负责通过负载均衡器来实现 Ingress,尽管它也可以配置边缘路由器或其他前端来帮助处理流量。 Ingress 不会公开任意端口或协议。 将 HTTP 和 HTTPS 以外的服务公开到 Internet 时,通常使用 -[Service.Type=NodePort](/zh/docs/concepts/services-networking/service/#type-nodeport) -或 [Service.Type=LoadBalancer](/zh/docs/concepts/services-networking/service/#loadbalancer) +[Service.Type=NodePort](/zh-cn/docs/concepts/services-networking/service/#type-nodeport) +或 [Service.Type=LoadBalancer](/zh-cn/docs/concepts/services-networking/service/#loadbalancer) 类型的 Service。 ## 环境准备 -你必须拥有一个 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) 才能满足 Ingress 的要求。 +你必须拥有一个 [Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers) 才能满足 Ingress 的要求。 仅创建 Ingress 资源本身没有任何效果。 你可能需要部署 Ingress 控制器,例如 [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/)。 -你可以从许多 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) 中进行选择。 +你可以从许多 [Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers) 中进行选择。 Ingress 需要指定 `apiVersion`、`kind`、 `metadata`和 `spec` 字段。 -Ingress 对象的命名必须是合法的 [DNS 子域名名称](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 -关于如何使用配置文件,请参见[部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)、 -[配置容器](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)、 -[管理资源](/zh/docs/concepts/cluster-administration/manage-deployment/)。 +Ingress 对象的命名必须是合法的 [DNS 子域名名称](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +关于如何使用配置文件,请参见[部署应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/)、 +[配置容器](/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/)、 +[管理资源](/zh-cn/docs/concepts/cluster-administration/manage-deployment/)。 Ingress 经常使用注解(annotations)来配置一些选项,具体取决于 Ingress 控制器,例如[重写目标注解](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md)。 -不同的 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers)支持不同的注解。 +不同的 [Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers)支持不同的注解。 查看你所选的 Ingress 控制器的文档,以了解其支持哪些注解。 |/foo|service1[Service service1:4200]; - ingress-->|/bar|service2[Service service2:8080]; - subgraph cluster - ingress; - service1-->pod1[Pod]; - 
service1-->pod2[Pod]; - service2-->pod3[Pod]; - service2-->pod4[Pod]; - end - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s; - class client plain; - class cluster cluster; -{{}} +{{< figure src="/zh-cn/docs/images/ingressFanOut.svg" alt="ingress-fanout-diagram" class="diagram-large" caption="图. Ingress 扇出" link="https://mermaid.live/edit#pako:eNqNUslOwzAQ_RXLvYCUhMQpUFzUUzkgcUBwbHpw4klr4diR7bCo8O8k2FFbFomLPZq3jP00O1xpDpjijWHtFt09zAuFUCUFKHey8vf6NE7QrdoYsDZumGIb4Oi6NAskNeOoZJKpCgxK4oXwrFVgRyi7nCVXWZKRPMlysv5yD6Q4Xryf1Vq_WzDPooJs9egLNDbolKTpT03JzKgh3zWEztJZ0Niu9L-qZGcdmAMfj4cxvWmreba613z9C0B-AMQD-V_AdA-A4j5QZu0SatRKJhSqhZR0wjmPrDP6CeikrutQxy-Cuy2dtq9RpaU2dJKm6fzI5Glmg0VOLio4_5dLjx27hFSC015KJ2VZHtuQvY2fuHcaE43G0MaCREOow_FV5cMxHZ5-oPX75UM5avuXhXuOI9yAaZjg_aLuBl6B3RYaKDDtSw4166QrcKE-emrXcubghgunDaY1kxYizDqnH99UhakzHYykpWD9hjS--fEJoIELqQ" >}} + {{< note >}} -取决于你所使用的 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers), -你可能需要创建默认 HTTP 后端[服务](/zh/docs/concepts/services-networking/service/)。 +取决于你所使用的 [Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers), +你可能需要创建默认 HTTP 后端[服务](/zh-cn/docs/concepts/services-networking/service/)。 {{< /note >}} |Host: foo.bar.com|service1[Service service1:80]; - ingress-->|Host: bar.foo.com|service2[Service service2:80]; - subgraph cluster - ingress; - service1-->pod1[Pod]; - service1-->pod2[Pod]; - service2-->pod3[Pod]; - service2-->pod4[Pod]; - end - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s; - class client plain; - class cluster cluster; -{{}} +{{< figure src="/zh-cn/docs/images/ingressNameBased.svg" alt="ingress-namebase-diagram" class="diagram-large" caption="图. 
基于名称实现虚拟托管的 Ingress" link="https://mermaid.live/edit#pako:eNqNkl9PwyAUxb8KYS-atM1Kp05m9qSJJj4Y97jugcLtRqTQAPVPdN_dVlq3qUt8gZt7zvkBN7xjbgRgiteW1Rt0_zjLNUJcSdD-ZBn21WmcoDu9tuBcXDHN1iDQVWHnSBkmUMEU0xwsSuK5DK5l745QejFNLtMkJVmSZmT1Re9NcTz_uDXOU1QakxTMJtxUHw7ss-SQLhehQEODTsdH4l20Q-zFyc84-Y67pghv5apxHuweMuj9eS2_NiJdPhix-kMgvwQShOyYMNkJoEUYM3PuGkpUKyY1KqVSdCSEiJy35gnoqCzLvo5fpPAbOqlfI26UsXQ0Ho9nB5CnqesRGTnncPYvSqsdUvqp9KRdlI6KojjEkB0mnLgjDRONhqENBYm6oXbLV5V1y6S7-l42_LowlIN2uFm_twqOcAW2YlK0H_i9c-bYb6CCHNO2FFCyRvkc53rbWptaMA83QnpjMS2ZchBh1nizeNMcU28bGEzXkrV_pArN7Sc0rBTu" >}} 值得注意的是,尽管健康检查不是通过 Ingress 直接暴露的,在 Kubernetes 中存在并行的概念,比如 -[就绪检查](/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/), +[就绪检查](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/), 允许你实现相同的目的。 请检查特定控制器的说明文档([nginx](https://git.k8s.io/ingress-nginx/README.md)、 [GCE](https://git.k8s.io/ingress-gce/README.md#health-checks))以了解它们是怎样处理健康检查的。 @@ -1049,7 +1000,7 @@ Please check the documentation of the relevant [Ingress controller](/docs/concep ## 跨可用区失败 {#failing-across-availability-zones} 不同的云厂商使用不同的技术来实现跨故障域的流量分布。详情请查阅相关 Ingress 控制器的文档。 -请查看相关 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers)的文档以了解详细信息。 +请查看相关 [Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers)的文档以了解详细信息。 -* 使用 [Service.Type=LoadBalancer](/zh/docs/concepts/services-networking/service/#loadbalancer) -* 使用 [Service.Type=NodePort](/zh/docs/concepts/services-networking/service/#nodeport) +* 使用 [Service.Type=LoadBalancer](/zh-cn/docs/concepts/services-networking/service/#loadbalancer) +* 使用 [Service.Type=NodePort](/zh-cn/docs/concepts/services-networking/service/#nodeport) ## {{% heading "whatsnext" %}} @@ -1075,6 +1026,6 @@ You can expose a Service in multiple ways that don't directly involve the Ingres * [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube/) --> * 进一步了解 [Ingress](/docs/reference/kubernetes-api/service-resources/ingress-v1/) API -* 进一步了解 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers/) -* [使用 NGINX 控制器在 Minikube 上安装 Ingress](/zh/docs/tasks/access-application-cluster/ingress-minikube/) +* 进一步了解 [Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers/) +* [使用 NGINX 控制器在 Minikube 上安装 Ingress](/zh-cn/docs/tasks/access-application-cluster/ingress-minikube/) diff --git a/content/zh/docs/concepts/services-networking/network-policies.md b/content/zh-cn/docs/concepts/services-networking/network-policies.md similarity index 84% rename from content/zh/docs/concepts/services-networking/network-policies.md rename to content/zh-cn/docs/concepts/services-networking/network-policies.md index ff36905777dd2..88bbebdfaa417 100644 --- a/content/zh/docs/concepts/services-networking/network-policies.md +++ b/content/zh-cn/docs/concepts/services-networking/network-policies.md @@ -13,7 +13,7 @@ weight: 50 如果你希望在 IP 地址或端口层面(OSI 第 3 层或第 4 层)控制网络流量, 则你可以考虑为集群中特定应用使用 Kubernetes 网络策略(NetworkPolicy)。 @@ -21,6 +21,7 @@ NetworkPolicy 是一种以应用为中心的结构,允许你设置如何允许 {{< glossary_tooltip text="Pod" term_id="pod">}} 与网络上的各类网络“实体” (我们这里使用实体以避免过度使用诸如“端点”和“服务”这类常用术语, 这些术语在 Kubernetes 中有特定含义)通信。 +NetworkPolicies 适用于一端或两端与 Pod 的连接,与其他连接无关。 ## 前置条件 {#prerequisites} -网络策略通过[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) +网络策略通过[网络插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) 来实现。要使用网络策略,你必须使用支持 NetworkPolicy 的网络解决方案。 创建一个 
NetworkPolicy 资源对象而没有控制器来使它生效的话,是没有任何作用的。 @@ -67,7 +68,7 @@ Network policies are implemented by the [network plugin](/docs/concepts/extend-k There are two sorts of isolation for a pod: isolation for egress, and isolation for ingress. They concern what connections may be established. "Isolation" here is not absolute, rather it means "some restrictions apply". The alternative, "non-isolated for $direction", means that no restrictions apply in the stated direction. The two sorts of isolation (or not) are declared independently, and are both relevant for a connection from one pod to another. --> -## Pod 隔离的两种类型 +## Pod 隔离的两种类型 {#the-two-sorts-of-pod-isolation} Pod 有两种隔离: 出口的隔离和入口的隔离。它们涉及到可以建立哪些连接。 这里的“隔离”不是绝对的,而是意味着“有一些限制”。 @@ -90,7 +91,7 @@ By default, a pod is non-isolated for ingress; all inbound connections are allow 默认情况下,一个 Pod 对入口是非隔离的,即所有入站连接都是被允许的。如果有任何的 NetworkPolicy 选择该 Pod 并在其 `policyTypes` 中包含 “Ingress”,则该 Pod 被隔离入口, -我们称这种策略适用于该 Pod 的入口。 当一个 Pod 的入口被隔离时,唯一允许进入该 Pod +我们称这种策略适用于该 Pod 的入口。当一个 Pod 的入口被隔离时,唯一允许进入该 Pod 的连接是来自该 Pod 节点的连接和适用于入口的 Pod 的某个 NetworkPolicy 的 `ingress` 列表所允许的连接。这些 `ingress` 列表的效果是相加的。 @@ -134,7 +135,7 @@ POSTing this to the API server for your cluster will have no effect unless your __Mandatory Fields__: As with all other Kubernetes config, a NetworkPolicy needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see -[Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), +[Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), and [Object Management](/docs/concepts/overview/working-with-objects/object-management). __spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace. @@ -143,8 +144,8 @@ __podSelector__: Each NetworkPolicy includes a `podSelector` which selects the g --> __必需字段__:与所有其他的 Kubernetes 配置一样,NetworkPolicy 需要 `apiVersion`、 `kind` 和 `metadata` 字段。关于配置文件操作的一般信息,请参考 -[使用 ConfigMap 配置容器](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/), -和[对象管理](/zh/docs/concepts/overview/working-with-objects/object-management)。 +[配置 Pod 以使用 ConfigMap](/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/), +和[对象管理](/zh-cn/docs/concepts/overview/working-with-objects/object-management)。 __spec__:NetworkPolicy [规约](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 中包含了在一个名字空间中定义特定网络策略所需的所有信息。 @@ -169,7 +170,7 @@ __policyTypes__: 每个 NetworkPolicy 都包含一个 `policyTypes` 列表,其 __ingress__: 每个 NetworkPolicy 可包含一个 `ingress` 规则的白名单列表。 每个规则都允许同时匹配 `from` 和 `ports` 部分的流量。示例策略中包含一条 -简单的规则: 它匹配某个特定端口,来自三个来源中的一个,第一个通过 `ipBlock` +简单的规则:它匹配某个特定端口,来自三个来源中的一个,第一个通过 `ipBlock` 指定,第二个通过 `namespaceSelector` 指定,第三个通过 `podSelector` 指定。 __egress__: 每个 NetworkPolicy 可包含一个 `egress` 规则的白名单列表。 @@ -180,7 +181,7 @@ __egress__: 每个 NetworkPolicy 可包含一个 `egress` 规则的白名单列 So, the example NetworkPolicy: 1. isolates "role=db" pods in the "default" namespace for both ingress and egress traffic (if they weren't already isolated) -2. (Ingress rules) allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from: +2. 
(Ingress rules) allows connections to all pods in the "default" namespace with the label "role=db" on TCP port 6379 from: * any pod in the "default" namespace with the label "role=frontend" * any pod in a namespace with the label "project=myproject" @@ -200,10 +201,10 @@ See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network- * IP 地址范围为 172.17.0.0–172.17.0.255 和 172.17.2.0–172.17.255.255 (即,除了 172.17.1.0/24 之外的所有 172.17.0.0/16) -3. (Egress 规则)允许从带有 "role=db" 标签的名字空间下的任何 Pod 到 CIDR +3. (Egress 规则)允许 “default” 命名空间中任何带有标签 “role=db” 的 Pod 到 CIDR 10.0.0.0/24 下 5978 TCP 端口的连接。 -参阅[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)演练 +参阅[声明网络策略](/zh-cn/docs/tasks/administer-cluster/declare-network-policy/)演练 了解更多示例。 -### 默认拒绝所有入站流量 +### 默认拒绝所有入站流量 {#default-deny-all-ingress-traffic} + 你可以通过创建选择所有容器但不允许任何进入这些容器的入站流量的 NetworkPolicy 来为名字空间创建 “default” 隔离策略。 {{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}} -这样可以确保即使容器没有选择其他任何 NetworkPolicy,也仍然可以被隔离。 -此策略不会更改默认的出口隔离行为。 +这确保即使没有被任何其他 NetworkPolicy 选择的 Pod 仍将被隔离以进行入口。 +此策略不影响任何 Pod 的出口隔离。 -### 默认允许所有入站流量 +### 允许所有入站流量 {#allow-all-ingress-traffic} -如果要允许所有流量进入某个名字空间中的所有 Pod(即使添加了导致某些 Pod 被视为 -“隔离”的策略),则可以创建一个策略来明确允许该名字空间中的所有流量。 + +如果你想允许一个命名空间中所有 Pod 的所有入站连接,你可以创建一个明确允许的策略。 {{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}} + +有了这个策略,任何额外的策略都不会导致到这些 Pod 的任何入站连接被拒绝。 +此策略对任何 Pod 的出口隔离没有影响。 + -### 默认拒绝所有出站流量 +### 默认拒绝所有出站流量 {#default-deny-all-egress-traffic} 你可以通过创建选择所有容器但不允许来自这些容器的任何出站流量的 NetworkPolicy 来为名字空间创建 “default” 隔离策略。 @@ -358,29 +366,36 @@ You can create a "default" egress isolation policy for a namespace by creating a 此策略可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许流出流量。 -此策略不会更改默认的入站流量隔离行为。 +此策略不会更改任何 Pod 的入站流量隔离行为。 -### 默认允许所有出站流量 +### 允许所有出站流量 {#allow-all-egress-traffic} -如果要允许来自名字空间中所有 Pod 的所有流量(即使添加了导致某些 Pod 被视为“隔离”的策略), -则可以创建一个策略,该策略明确允许该名字空间中的所有出站流量。 + +如果要允许来自命名空间中所有 Pod 的所有连接, +则可以创建一个明确允许来自该命名空间中 Pod 的所有出站连接的策略。 {{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}} + +有了这个策略,任何额外的策略都不会导致来自这些 Pod 的任何出站连接被拒绝。 +此策略对进入任何 Pod 的隔离没有影响。 + -### 默认拒绝所有入口和所有出站流量 +### 默认拒绝所有入站和所有出站流量 {#default-deny-all-ingress-and-all-egress-traffic} 你可以为名字空间创建“默认”策略,以通过在该名字空间中创建以下 NetworkPolicy 来阻止所有入站和出站流量。 @@ -396,7 +411,7 @@ This ensures that even pods that aren't selected by any other NetworkPolicy will -## SCTP 支持 +## SCTP 支持 {#sctp-support} {{< feature-state for_k8s_version="v1.20" state="stable" >}} @@ -407,7 +422,7 @@ When the feature gate is enabled, you can set the `protocol` field of a NetworkP 作为一个稳定特性,SCTP 支持默认是被启用的。 要在集群层面禁用 SCTP,你(或你的集群管理员)需要为 API 服务器指定 `--feature-gates=SCTPSupport=false,...` -来禁用 `SCTPSupport` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 +来禁用 `SCTPSupport` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 启用该特性门控后,用户可以将 NetworkPolicy 的 `protocol` 字段设置为 `SCTP`。 {{< note >}} @@ -465,7 +480,10 @@ port is between the range 32000 and 32768. 你的集群所使用的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 插件 必须支持在 NetworkPolicy 规约中使用 `endPort` 字段。 -如果你的[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) +如果你的[网络插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) 不支持 `endPort` 字段,而你指定了一个包含 `endPort` 字段的 NetworkPolicy, 策略只对单个 `port` 字段生效。 {{< /note >}} @@ -512,7 +530,7 @@ While NetworkPolicy cannot target a namespace by its name with some object field standardized label to target a specific namespace. 
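To show how the automatic namespace label mentioned above can be used, here is a hedged sketch of a NetworkPolicy that only admits ingress traffic from Pods in one named namespace; the policy name, the `role=db` pod label, and the `monitoring` namespace are illustrative assumptions.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring       # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db                      # illustrative label on the protected Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              # selects the source namespace by its automatic name label
              kubernetes.io/metadata.name: monitoring
```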
--> 只要 `NamespaceDefaultLabelName` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 被启用,Kubernetes 控制面会在所有名字空间上设置一个不可变更的标签 `kubernetes.io/metadata.name`。该标签的值是名字空间的名称。 @@ -522,11 +540,11 @@ standardized label to target a specific namespace. -## 通过网络策略(至少目前还)无法完成的工作 +## 通过网络策略(至少目前还)无法完成的工作 {#what-you-can-t-do-with-network-policies-at-least-not-yet} -到 Kubernetes {{< skew latestVersion >}} 为止,NetworkPolicy API 还不支持以下功能,不过 +到 Kubernetes {{< skew currentVersion >}} 为止,NetworkPolicy API 还不支持以下功能,不过 你可能可以使用操作系统组件(如 SELinux、OpenVSwitch、IPTables 等等) 或者第七层技术(Ingress 控制器、服务网格实现)或准入控制器来实现一些 替代方案。 @@ -570,7 +588,7 @@ As of Kubernetes {{< skew latestVersion >}}, the following functionality does no walkthrough for further examples. - See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource. --> -- 参阅[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/) +- 参阅[声明网络策略](/zh-cn/docs/tasks/administer-cluster/declare-network-policy/) 演练了解更多示例; - 有关 NetworkPolicy 资源所支持的常见场景的更多信息,请参见 [此指南](https://github.com/ahmetb/kubernetes-network-policy-recipes)。 diff --git a/content/zh/docs/concepts/services-networking/service-topology.md b/content/zh-cn/docs/concepts/services-networking/service-topology.md similarity index 96% rename from content/zh/docs/concepts/services-networking/service-topology.md rename to content/zh-cn/docs/concepts/services-networking/service-topology.md index 4ab71cee415f9..e899786565dc0 100644 --- a/content/zh/docs/concepts/services-networking/service-topology.md +++ b/content/zh-cn/docs/concepts/services-networking/service-topology.md @@ -25,7 +25,7 @@ introduced in Kubernetes v1.21, provide similar functionality. --> 此功能特性,尤其是 Alpha 阶段的 `topologyKeys` API,在 Kubernetes v1.21 版本中已被废弃。Kubernetes v1.21 版本中引入的 -[拓扑感知的提示](/zh/docs/concepts/services-networking/topology-aware-hints/), +[拓扑感知的提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints/), 提供类似的功能。 {{}} @@ -104,7 +104,7 @@ as the last value in the list. 
## 使用服务拓扑 {#using-service-topology} 如果集群启用了 `ServiceTopology` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/), +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), 你就可以在 Service 规约中设定 `topologyKeys` 字段,从而控制其流量路由。 此字段是 `Node` 标签的优先顺序字段,将用于在访问这个 `Service` 时对端点进行排序。 流量会被定向到第一个标签值和源 `Node` 标签值相匹配的 `Node`。 @@ -300,6 +300,6 @@ spec: * Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology) * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) --> -* 阅读关于[启用服务拓扑](/zh/docs/tasks/administer-cluster/enabling-service-topology/) -* 阅读[用服务连接应用程序](/zh/docs/concepts/services-networking/connect-applications-service/) +* 阅读关于[启用服务拓扑](/zh-cn/docs/tasks/administer-cluster/enabling-service-topology/) +* 阅读[用服务连接应用程序](/zh-cn/docs/concepts/services-networking/connect-applications-service/) diff --git a/content/zh/docs/concepts/services-networking/service-traffic-policy.md b/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md similarity index 88% rename from content/zh/docs/concepts/services-networking/service-traffic-policy.md rename to content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md index dad9dcc79b21c..b291c17758d1b 100644 --- a/content/zh/docs/concepts/services-networking/service-traffic-policy.md +++ b/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md @@ -43,7 +43,7 @@ When the feature is enabled, you can enable the internal-only traffic policy for This tells kube-proxy to only use node local endpoints for cluster internal traffic. --> `ServiceInternalTrafficPolicy` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) 是 Beta 功能,默认启用。 +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 是 Beta 功能,默认启用。 启用该功能后,你就可以通过将 {{< glossary_tooltip text="Services" term_id="service" >}} 的 `.spec.internalTrafficPolicy` 项设置为 `Local`, 来为它指定一个内部专用的流量策略。 @@ -99,7 +99,7 @@ When the [feature gate](/docs/reference/command-line-tools-reference/feature-gat kube-proxy 基于 `spec.internalTrafficPolicy` 的设置来过滤路由的目标服务端点。 当它的值设为 `Local` 时,只选择节点本地的服务端点。 当它的值设为 `Cluster` 或缺省时,则选择所有的服务端点。 -启用[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +启用[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) `ServiceInternalTrafficPolicy` 后, `spec.internalTrafficPolicy` 的值默认设为 `Cluster`。 @@ -123,6 +123,6 @@ kube-proxy 基于 `spec.internalTrafficPolicy` 的设置来过滤路由的目标 * Read about [Service External Traffic Policy](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) --> -* 请阅读[拓扑感知提示](/zh/docs/concepts/services-networking/topology-aware-hints) -* 请阅读[Service 的外部流量策略](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) -* 请阅读[用 Service 连接应用](/zh/docs/concepts/services-networking/connect-applications-service/) +* 请阅读[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints) +* 请阅读[Service 的外部流量策略](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) +* 请阅读[用 Service 连接应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/) diff --git a/content/zh/docs/concepts/services-networking/service.md b/content/zh-cn/docs/concepts/services-networking/service.md similarity 
index 91% rename from content/zh/docs/concepts/services-networking/service.md rename to content/zh-cn/docs/concepts/services-networking/service.md index 3d2db97c3cbfb..bedb122737dd7 100644 --- a/content/zh/docs/concepts/services-networking/service.md +++ b/content/zh-cn/docs/concepts/services-networking/service.md @@ -1,5 +1,5 @@ --- -title: 服务 +title: 服务(Service) feature: title: 服务发现与负载均衡 description: > @@ -39,7 +39,7 @@ Kubernetes 为 Pods 提供自己的 IP 地址,并为一组 Pod 提供相同的 ## Motivation Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are created and destroyed -to match the state of your cluster. Pods are nonpermanent resources. +to match the desired state of your cluster. Pods are nonpermanent resources. If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app, it can create and destroy Pods dynamically. @@ -57,7 +57,7 @@ Enter _Services_. ## 动机 -创建和销毁 Kubernetes {{< glossary_tooltip term_id="pod" text="Pod" >}} 以匹配集群状态。 +创建和销毁 Kubernetes {{< glossary_tooltip term_id="pod" text="Pod" >}} 以匹配集群的期望状态。 Pod 是非永久性资源。 如果你使用 {{< glossary_tooltip term_id="deployment">}} 来运行你的应用程序,则它可以动态创建和销毁 Pod。 @@ -189,13 +189,55 @@ field. +Pod 中的端口定义是有名字的,你可以在 Service 的 `targetPort` 属性中引用这些名称。 +例如,我们可以通过以下方式将 Service 的 `targetPort` 绑定到 Pod 端口: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: nginx + labels: + app.kubernetes.io/name: proxy +spec: + containers: + - name: nginx + image: nginx:stable + ports: + - containerPort: 80 + name: http-web-svc + +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx-service +spec: + selector: + app.kubernetes.io/name: proxy + ports: + - name: name-of-service-port + protocol: TCP + port: 80 + targetPort: http-web-svc +``` + +即使 Service 中使用同一配置名称混合使用多个 Pod,各 Pod 通过不同的端口号支持相同的网络协议, +此功能也可以使用。这为 Service 的部署和演化提供了很大的灵活性。 +例如,你可以在新版本中更改 Pod 中后端软件公开的端口号,而不会破坏客户端。 + + + -Pod 中的端口定义是有名字的,你可以在服务的 `targetPort` 属性中引用这些名称。 -即使服务中使用单个配置的名称混合使用 Pod,并且通过不同的端口号提供相同的网络协议,此功能也可以使用。 -这为部署和发展服务提供了很大的灵活性。 -例如,你可以更改 Pods 在新版本的后端软件中公开的端口号,而不会破坏客户端。 + 服务的默认协议是 TCP;你还可以使用任何其他[受支持的协议](#protocol-support)。 @@ -216,9 +255,9 @@ Pod 中的端口定义是有名字的,你可以在服务的 `targetPort` 属 ### 没有选择算符的 Service {#services-without-selectors} -服务最常见的是抽象化对 Kubernetes Pod 的访问,但是它们也可以抽象化其他种类的后端。 -实例: +由于选择器的存在,服务最常见的用法是为 Kubernetes Pod 的访问提供抽象, +但是当与相应的 Endpoints 对象一起使用且没有选择器时, +服务也可以为其他类型的后端提供抽象,包括在集群外运行的后端。 +例如: * 希望在生产环境中使用外部的数据库集群,但测试环境使用自己的数据库。 * 希望服务指向另一个 {{< glossary_tooltip term_id="namespace" >}} 中或其它集群中的服务。 @@ -266,6 +307,7 @@ where it's running, by adding an Endpoints object manually: apiVersion: v1 kind: Endpoints metadata: + # 这里的 name 要与 Service 的名字相同 name: my-service subsets: - addresses: @@ -278,8 +320,17 @@ The name of the Endpoints object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). 
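For context on the Endpoints fragment shown above, the following is a hedged sketch of the selector-less Service it would typically pair with; the port and targetPort numbers are illustrative, and the essential points are simply that `spec.selector` is omitted and that the Service name matches the Endpoints name.

```yaml
apiVersion: v1
kind: Service
metadata:
  # must match the name of the manually managed Endpoints object
  name: my-service
spec:
  # no selector: Kubernetes will not create or update Endpoints for this Service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376   # illustrative backend port
```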
--> Endpoints 对象的名称必须是合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 + +当你为某个 Service 创建一个 [Endpoints](/zh-cn/docs/reference/kubernetes-api/service-resources/endpoints-v1/) +对象时,你要将新对象的名称设置为与 Service 的名称相同。 + +{{< note >}} -{{< note >}} 端点 IPs _必须不可以_ 是:本地回路(IPv4 的 127.0.0.0/8, IPv6 的 ::1/128)或 本地链接(IPv4 的 169.254.0.0/16 和 224.0.0.0/24,IPv6 的 fe80::/64)。 @@ -351,7 +401,7 @@ EndpointSlices 是一种 API 资源,可以为 Endpoints 提供更可扩展的 届时将创建其他 EndpointSlices 来存储任何其他 Endpoints。 EndpointSlices 提供了附加的属性和功能,这些属性和功能在 -[EndpointSlices](/zh/docs/concepts/services-networking/endpoint-slices/) +[EndpointSlices](/zh-cn/docs/concepts/services-networking/endpoint-slices/) 中有详细描述。 +{{< note >}} +在 Windows 上,不支持为服务设置最大会话停留时间。 +{{< /note >}} + + @@ -694,7 +752,7 @@ has local endpoints and whether or not all the local endpoints are marked as ter --> 如果你启用了 kube-proxy 的 `ProxyTerminatingEndpoints` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/), +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), kube-proxy 会检查节点是否有本地的端点,以及是否所有的本地端点都被标记为终止中。 -你可以(几乎总是应该)使用[附加组件](/zh/docs/concepts/cluster-administration/addons/) +你可以(几乎总是应该)使用[附加组件](/zh-cn/docs/concepts/cluster-administration/addons/) 为 Kubernetes 集群设置 DNS 服务。 支持集群的 DNS 服务器(例如 CoreDNS)监视 Kubernetes API 中的新服务,并为每个服务创建一组 DNS 记录。 @@ -851,7 +905,7 @@ Kubernetes 还支持命名端口的 DNS SRV(服务)记录。 Kubernetes DNS 服务器是唯一的一种能够访问 `ExternalName` 类型的 Service 的方式。 更多关于 `ExternalName` 信息可以查看 -[DNS Pod 和 Service](/zh/docs/concepts/services-networking/dns-pod-service/)。 +[DNS Pod 和 Service](/zh-cn/docs/concepts/services-networking/dns-pod-service/)。 -你也可以使用 [Ingress](/zh/docs/concepts/services-networking/ingress/) 来暴露自己的服务。 +你也可以使用 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 来暴露自己的服务。 Ingress 不是一种服务类型,但它充当集群的入口点。 它可以将路由规则整合到一个资源中,因为它可以在同一IP地址下公开多个服务。 @@ -1145,13 +1199,15 @@ securityGroupName。 #### 混合协议类型的负载均衡器 @@ -1160,14 +1216,16 @@ If the feature gate `MixedProtocolLBService` is enabled for the kube-apiserver i 默认情况下,对于 LoadBalancer 类型的服务,当定义了多个端口时,所有 端口必须具有相同的协议,并且该协议必须是受云提供商支持的协议。 -如果为 kube-apiserver 启用了 `MixedProtocolLBService` 特性门控, -则当定义了多个端口时,允许使用不同的协议。 +当服务中定义了多个端口时,特性门控 `MixedProtocolLBService`(在 kube-apiserver 1.24 版本默认为启用)允许 +LoadBalancer 类型的服务使用不同的协议。 {{< note >}} 可用于 LoadBalancer 类型服务的协议集仍然由云提供商决定。 +如果云提供商不支持混合协议,他们将只提供单一协议。 {{< /note >}} ### 禁用负载均衡器节点端口分配 {#load-balancer-nodeport-allocation} -{{< feature-state for_k8s_version="v1.20" state="alpha" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} -从 v1.20 版本开始, 你可以通过设置 `spec.allocateLoadBalancerNodePorts` 为 `false` +你可以通过设置 `spec.allocateLoadBalancerNodePorts` 为 `false` 对类型为 LoadBalancer 的服务禁用节点端口分配。 这仅适用于直接将流量路由到 Pod 而不是使用节点端口的负载均衡器实现。 默认情况下,`spec.allocateLoadBalancerNodePorts` 为 `true`, LoadBalancer 类型的服务继续分配节点端口。 如果现有服务已被分配节点端口,将参数 `spec.allocateLoadBalancerNodePorts` -设置为 `false` 时,这些服务上已分配置的节点端口不会被自动释放。 +设置为 `false` 时,这些服务上已分配置的节点端口**不会**被自动释放。 你必须显式地在每个服务端口中删除 `nodePorts` 项以释放对应端口。 -你必须启用 `ServiceLBNodePortControl` 特性门控才能使用该字段。 #### 设置负载均衡器实现的类别 {#load-balancer-class} -{{< feature-state for_k8s_version="v1.22" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} `spec.loadBalancerClass` 允许你不使用云提供商的默认负载均衡器实现,转而使用指定的负载均衡器实现。 -这个特性从 v1.21 版本开始可以使用,你在 v1.21 版本中使用这个字段必须启用 `ServiceLoadBalancerClass` -特性门控,这个特性门控从 v1.22 版本及以后默认打开。 默认情况下,`.spec.loadBalancerClass` 的取值是 `nil`,如果集群使用 `--cloud-provider` 
配置了云提供商, `LoadBalancer` 类型服务会使用云提供商的默认负载均衡器实现。 如果设置了 `.spec.loadBalancerClass`,则假定存在某个与所指定的类相匹配的 @@ -1353,6 +1407,17 @@ metadata: [...] ``` +{{% /tab %}} +{{% tab name="OCI" %}} + +```yaml +[...] +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/oci-load-balancer-internal: true +[...] +``` {{% /tab %}} {{< /tabs >}} @@ -1681,10 +1746,10 @@ groups are modified with the following IP rules: --> 为了获得均衡流量,请使用 DaemonSet 或指定 -[Pod 反亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) +[Pod 反亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) 使其不在同一节点上。 -你还可以将 NLB 服务与[内部负载平衡器](/zh/docs/concepts/services-networking/service/#internal-load-balancer) +你还可以将 NLB 服务与[内部负载平衡器](/zh-cn/docs/concepts/services-networking/service/#internal-load-balancer) 注解一起使用。 为了使客户端流量能够到达 NLB 后面的实例,使用以下 IP 规则修改了节点安全组: @@ -1972,7 +2037,8 @@ someone else's choice. That is an isolation failure. In order to allow you to choose a port number for your Services, we must ensure that no two Services can collide. Kubernetes does that by allocating each -Service its own IP address. +Service its own IP address from within the `service-cluster-ip-range` +CIDR range that is configured for the API server. To ensure each Service receives a unique IP, an internal allocator atomically updates a global allocation map in {{< glossary_tooltip term_id="etcd" >}} @@ -1992,8 +2058,9 @@ Kubernetes 最主要的哲学之一,是用户不应该暴露那些能够导致 对于 Service 资源的设计,这意味着如果用户的选择有可能与他人冲突,那就不要让用户自行选择端口号。 这是一个隔离性的失败。 -为了使用户能够为他们的 Service 选择一个端口号,我们必须确保不能有2个 Service 发生冲突。 -Kubernetes 通过为每个 Service 分配它们自己的 IP 地址来实现。 +为了使用户能够为他们的 Service 选择一个端口号,我们必须确保不能有 2 个 Service 发生冲突。 +Kubernetes 通过在为 API 服务器配置的 `service-cluster-ip-range` CIDR +范围内为每个服务分配自己的 IP 地址来实现。 为了保证每个 Service 被分配到一个唯一的 IP,需要一个内部的分配器能够原子地更新 {{< glossary_tooltip term_id="etcd" >}} 中的一个全局分配映射表, @@ -2006,6 +2073,42 @@ Kubernetes 通过为每个 Service 分配它们自己的 IP 地址来实现。 同时 Kubernetes 会通过控制器检查不合理的分配(如管理员干预导致的) 以及清理已被分配但不再被任何 Service 使用的 IP 地址。 + +#### `type: ClusterIP` 服务的 IP 地址范围 {#service-ip-static-sub-range} + +{{< feature-state for_k8s_version="v1.24" state="alpha" >}} +但是,这种 `ClusterIP` 分配策略存在一个问题,因为用户还可以[为服务选择自己的地址](#choosing-your-own-ip-address)。 +如果内部分配器为另一个服务选择相同的 IP 地址,这可能会导致冲突。 + + +如果启用 `ServiceIPStaticSubrange`[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +分配策略根据配置的 `service-cluster-ip-range` 的大小,使用以下公式 +`min(max(16, cidrSize / 16), 256)` 进行划分,该公式可描述为 +“在不小于 16 且不大于 256 之间有一个步进量(Graduated Step)”,将 +`ClusterIP` 范围分成两段。动态 IP 分配将优先从上半段地址中选择, +从而降低与下半段地址分配的 IP 冲突的风险。 +这允许用户将 `service-cluster-ip-range` 的下半段地址用于他们的服务, +与所分配的静态 IP 的冲突风险非常低。 + -* 阅读[使用服务访问应用](/zh/docs/concepts/services-networking/connect-applications-service/) -* 阅读了解 [Ingress](/zh/docs/concepts/services-networking/ingress/) -* 阅读了解[端点切片(Endpoint Slices)](/zh/docs/concepts/services-networking/endpoint-slices/) +* 阅读[使用服务访问应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/) +* 阅读了解 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) +* 阅读了解[端点切片(Endpoint Slices)](/zh-cn/docs/concepts/services-networking/endpoint-slices/) diff --git a/content/zh/docs/concepts/services-networking/topology-aware-hints.md b/content/zh-cn/docs/concepts/services-networking/topology-aware-hints.md similarity index 97% rename from content/zh/docs/concepts/services-networking/topology-aware-hints.md rename to content/zh-cn/docs/concepts/services-networking/topology-aware-hints.md index f0b28aca2ac93..a867c2a537fb3 100644 --- 
a/content/zh/docs/concepts/services-networking/topology-aware-hints.md +++ b/content/zh-cn/docs/concepts/services-networking/topology-aware-hints.md @@ -41,7 +41,7 @@ by default. To try out this feature, you have to enable the `TopologyAwareHints` {{< note >}} “拓扑感知提示”特性处于 Beta 阶段,并且默认情况下**未**启用。 要试用此特性,你必须启用 `TopologyAwareHints` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 {{< /note >}} @@ -117,7 +117,7 @@ as many endpoints to the zone with 2 CPU cores. 此特性开启后,EndpointSlice 控制器负责在 EndpointSlice 上设置提示信息。 控制器按比例给每个区域分配一定比例数量的端点。 这个比例来源于此区域中运行节点的 -[可分配](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) +[可分配](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) CPU 核心数。 例如,如果一个区域拥有 2 CPU 核心,而另一个区域只有 1 CPU 核心, 那控制器将给那个有 2 CPU 的区域分配两倍数量的端点。 @@ -292,4 +292,4 @@ Kubernetes 控制平面和每个节点上的 kube-proxy,在使用拓扑感知 * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) --> -* 参阅[通过服务连通应用](/zh/docs/concepts/services-networking/connect-applications-service/) +* 参阅[通过服务连通应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/) diff --git a/content/zh-cn/docs/concepts/services-networking/windows-networking.md b/content/zh-cn/docs/concepts/services-networking/windows-networking.md new file mode 100644 index 0000000000000..ea2df2e8154ec --- /dev/null +++ b/content/zh-cn/docs/concepts/services-networking/windows-networking.md @@ -0,0 +1,310 @@ +--- +title: Windows 网络 +content_type: concept +weight: 75 +--- + + + +Kubernetes 支持运行 Linux 或 Windows 节点。 +你可以在统一集群内混布这两种节点。 +本页提供了特定于 Windows 操作系统的网络概述。 + + + +## Windows 容器网络 {#networking} + +Windows 容器网络通过 [CNI 插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)暴露。 +Windows 容器网络的工作方式与虚拟机类似。 +每个容器都有一个连接到 Hyper-V 虚拟交换机(vSwitch)的虚拟网络适配器(vNIC)。 +主机网络服务(Host Networking Service,HNS)和主机计算服务(Host Comute Service,HCS) +协同创建容器并将容器 vNIC 挂接到网络。 +HCS 负责管理容器,而 HNS 负责管理以下网络资源: + +* 虚拟网络(包括创建 vSwitch) +* Endpoint / vNIC +* 命名空间 +* 包括数据包封装、负载均衡规则、ACL 和 NAT 规则在内的策略。 + + +Windows HNS 和 vSwitch 实现命名空间划分,且可以按需为 Pod 或容器创建虚拟 NIC。 +然而,诸如 DNS、路由和指标等许多配置将存放在 Windows 注册表数据库中, +而不是像 Linux 将这些配置作为文件存放在 `/etc` 内。 +针对容器的 Windows 注册表与主机的注册表是分开的,因此将 `/etc/resolv.conf` +从主机映射到一个容器的类似概念与 Linux 上的效果不同。 +这些必须使用容器环境中运行的 Windows API 进行配置。 +因此,实现 CNI 时需要调用 HNS,而不是依赖文件映射将网络详情传递到 Pod 或容器中。 + + +## 网络模式 {#network-mode} + +Windows 支持五种不同的网络驱动/模式:L2bridge、L2tunnel、Overlay (Beta)、Transparent 和 NAT。 +在 Windows 和 Linux 工作节点组成的异构集群中,你需要选择一个同时兼容 Windows 和 Linux 的网络方案。 +下表列出了 Windows 支持的树外插件,并给出了何时使用每种 CNI 的建议: + + +| 网络驱动 | 描述 | 容器数据包修改 | 网络插件 | 网络插件特点 | +| -------------- | ----------- | ------------------------------ | --------------- | ------------------------------ | +| L2bridge | 容器挂接到一个外部 vSwitch。容器挂接到下层网络,但物理网络不需要了解容器的 MAC,因为这些 MAC 在入站/出站时被重写。 | MAC 被重写为主机 MAC,可使用 HNS OutboundNAT 策略将 IP 重写为主机 IP。 | [win-bridge](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-bridge)、[Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md)、Flannel host-gateway 使用 win-bridge| win-bridge 使用 L2bridge 网络模式,将容器连接到主机的下层,提供最佳性能。节点间连接需要用户定义的路由(UDR)。 | +| L2Tunnel | 这是 L2bridge 的一种特例,但仅用在 Azure 上。所有数据包都会被发送到应用了 SDN 策略的虚拟化主机。 | MAC 被重写,IP 在下层网络上可见。| [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) | Azure-CNI 允许将容器集成到 Azure vNET,允许容器充分利用 [Azure 
虚拟网络](https://azure.microsoft.com/zh-cn/services/virtual-network/)所提供的能力集合。例如,安全地连接到 Azure 服务或使用 Azure NSG。参考 [azure-cni 了解有关示例](https://docs.microsoft.com/zh-cn/azure/aks/concepts-network#azure-cni-advanced-networking)。 | +| Overlay | 容器被赋予一个 vNIC,连接到外部 vSwitch。每个上层网络都有自己的 IP 子网,由自定义 IP 前缀进行定义。该上层网络驱动使用 VXLAN 封装。 | 用外部头进行封装。 | [win-overlay](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-overlay)、Flannel VXLAN(使用 win-overlay) | 当需要将虚拟容器网络与主机的下层隔离时(例如出于安全原因),应使用 win-overlay。如果你的数据中心的 IP 个数有限,可以将 IP 在不同的上层网络中重用(带有不同的 VNID 标记)。在 Windows Server 2019 上这个选项需要 [KB4489899](https://support.microsoft.com/zh-cn/help/4489899)。 | +| Transparent([ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) 的特殊用例) | 需要一个外部 vSwitch。容器挂接到一个外部 vSwitch,由后者通过逻辑网络(逻辑交换机和路由器)实现 Pod 内通信。 | 数据包通过 [GENEVE](https://datatracker.ietf.org/doc/draft-gross-geneve/) 或 [STT](https://datatracker.ietf.org/doc/draft-davie-stt/) 隧道进行封装,以到达其它主机上的 Pod。
数据包基于 OVN 网络控制器提供的隧道元数据信息被转发或丢弃。
        南北向通信使用 NAT。 | [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) | [通过 ansible 部署](https://github.com/openvswitch/ovn-kubernetes/tree/master/contrib)。通过 Kubernetes 策略可以实施分布式 ACL。支持 IPAM。无需 kube-proxy 即可实现负载均衡。无需 iptables/netsh 即可进行 NAT。 | +| NAT(**Kubernetes 中未使用**) | 容器被赋予一个 vNIC,连接到内部 vSwitch。DNS/DHCP 是使用一个名为 [WinNAT 的内部组件](https://techcommunity.microsoft.com/t5/virtualization/windows-nat-winnat-capabilities-and-limitations/ba-p/382303)实现的 | MAC 和 IP 重写为主机 MAC/IP。 | [nat](https://github.com/Microsoft/windows-container-networking/tree/master/plugins/nat) | 放在此处保持完整性。 | + + +如上所述,Windows 通过 [VXLAN 网络后端](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)(**Beta 支持**;委派给 win-overlay) +和 [host-gateway 网络后端](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw)(稳定支持;委派给 win-bridge) +也[支持](https://github.com/flannel-io/cni-plugin#windows-support-experimental) [Flannel](https://github.com/coreos/flannel) 的 [CNI 插件](https://github.com/flannel-io/cni-plugin)。 + + +此插件支持委派给参考 CNI 插件(win-overlay、win-bridge)之一,配合使用 Windows +上的 Flannel 守护程序(Flanneld),以便自动分配节点子网租赁并创建 HNS 网络。 +该插件读取自己的配置文件(cni.conf),并聚合 FlannelD 生成的 subnet.env 文件中的环境变量。 +然后,委派给网络管道的参考 CNI 插件之一,并将包含节点分配子网的正确配置发送给 IPAM 插件(例如:`host-local`)。 + + +对于 Node、Pod 和 Service 对象,TCP/UDP 流量支持以下网络流: + +* Pod → Pod(IP) +* Pod → Pod(名称) +* Pod → Service(集群 IP) +* Pod → Service(PQDN,但前提是没有 ".") +* Pod → Service(FQDN) +* Pod → 外部(IP) +* Pod → 外部(DNS) +* Node → Pod +* Pod → Node + + +## IP 地址管理(IPAM) {#ipam} + +Windows 支持以下 IPAM 选项: + +* [host-local](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/host-local) +* [azure-vnet-ipam](https://github.com/Azure/azure-container-networking/blob/master/docs/ipam.md)(仅适用于 azure-cni) +* [Windows Server IPAM](https://docs.microsoft.com/zh-cn/windows-server/networking/technologies/ipam/ipam-top)(未设置 IPAM 时的回滚选项) + + +## 负载均衡和 Service {#load-balancing-and-services} + +Kubernetes {{< glossary_tooltip text="Service" term_id="service" >}} 是一种抽象:定义了逻辑上的一组 Pod 和一种通过网络访问这些 Pod 的方式。 +在包含 Windows 节点的集群中,你可以使用以下类别的 Service: + +* `NodePort` +* `ClusterIP` +* `LoadBalancer` +* `ExternalName` + + +Windows 容器网络与 Linux 网络有着很重要的差异。 +更多细节和背景信息,参考 [Microsoft Windows 容器网络文档](https://docs.microsoft.com/zh-cn/virtualization/windowscontainers/container-networking/architecture)。 + +在 Windows 上,你可以使用以下设置来配置 Service 和负载均衡行为: + + +{{< table caption="Windows Service 设置" >}} +| 功能特性 | 描述 | 支持的 Windows 操作系统最低版本 | 启用方式 | +| ------- | ----------- | -------------------------- | ------------- | +| 会话亲和性 | 确保每次都将来自特定客户端的连接传递到同一个 Pod。 | Windows Server 2022 | 将 `service.spec.sessionAffinity` 设为 “ClientIP” | +| Direct Server Return (DSR) | 在负载均衡模式中 IP 地址修正和 LBNAT 直接发生在容器 vSwitch 端口;服务流量到达时源 IP 设置为原始 Pod IP。 | Windows Server 2019 | 在 kube-proxy 中设置以下标志:`--feature-gates="WinDSR=true" --enable-dsr=true` | +| 保留目标(Preserve-Destination) | 跳过服务流量的 DNAT,从而在到达后端 Pod 的数据包中保留目标服务的虚拟 IP。也会禁用节点间的转发。 | Windows Server,version 1903 | 在服务注解中设置 `"preserve-destination": "true"` 并在 kube-proxy 中启用 DSR。 | +| IPv4/IPv6 双栈网络 | 进出集群和集群内通信都支持原生的 IPv4 间与 IPv6 间流量 | Windows Server 2019 | 参考 [IPv4/IPv6 双栈](#ipv4ipv6-dual-stack)。 | +| 客户端 IP 保留 | 确保入站流量的源 IP 得到保留。也会禁用节点间转发。 | Windows Server 2019 | 将 `service.spec.externalTrafficPolicy` 设置为 “Local” 并在 kube-proxy 中启用 DSR。 | +{{< /table >}} + + +{{< warning >}} +如果目的地节点在运行 Windows Server 2022,则上层网络的 NodePort Service 存在已知问题。 +要完全避免此问题,可以使用 
`externalTrafficPolicy: Local` 配置服务。 + +在安装了 KB5005619 的 Windows Server 2022 或更高版本上,采用 L2bridge 网络时 +Pod 间连接存在已知问题。 +要解决此问题并恢复 Pod 间连接,你可以在 kube-proxy 中禁用 WinDSR 功能。 + +这些问题需要操作系统修复。 +有关更新,请参考 https://github.com/microsoft/Windows-Containers/issues/204。 +{{< /warning >}} + + +## 限制 {#limitations} + +Windows 节点**不支持**以下网络功能: + +* 主机网络模式 +* 从节点本身访问本地 NodePort(可以从其他节点或外部客户端进行访问) +* 为同一 Service 提供 64 个以上后端 Pod(或不同目的地址) +* 在连接到上层网络的 Windows Pod 之间使用 IPv6 通信 +* 非 DSR 模式中的本地流量策略(Local Traffic Policy) + + +* 通过 `win-overlay`、`win-bridge` 使用 ICMP 协议,或使用 Azure-CNI 插件进行出站通信。 + 具体而言,Windows 数据平面([VFP](https://www.microsoft.com/research/project/azure-virtual-filtering-platform/))不支持 ICMP 数据包转换,这意味着: + * 指向同一网络内目的地址的 ICMP 数据包(例如 Pod 间的 ping 通信)可正常工作; + * TCP/UDP 数据包可正常工作; + * 通过远程网络指向其它地址的 ICMP 数据包(例如通过 ping 从 Pod 到外部公网的通信)无法被转换, + 因此无法被路由回到这些数据包的源点; + * 由于 TCP/UDP 数据包仍可被转换,所以在调试与外界的连接时, + 你可以将 `ping ` 替换为 `curl `。 + + +其他限制: + +* 由于缺少 `CHECK` 实现,Windows 参考网络插件 win-bridge 和 win-overlay 未实现 +[CNI 规约](https://github.com/containernetworking/cni/blob/master/SPEC.md) 的 v0.4.0 版本。 +* Flannel VXLAN CNI 插件在 Windows 上有以下限制: + * 使用 Flannel v0.12.0(或更高版本)时,节点到 Pod 的连接仅适用于本地 Pod。 + * Flannel 仅限于使用 VNI 4096 和 UDP 端口 4789。 + 有关这些参数的更多详细信息,请参考官方的 [Flannel VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) 后端文档。 diff --git a/content/zh/docs/concepts/storage/_index.md b/content/zh-cn/docs/concepts/storage/_index.md similarity index 100% rename from content/zh/docs/concepts/storage/_index.md rename to content/zh-cn/docs/concepts/storage/_index.md diff --git a/content/zh/docs/concepts/storage/dynamic-provisioning.md b/content/zh-cn/docs/concepts/storage/dynamic-provisioning.md similarity index 90% rename from content/zh/docs/concepts/storage/dynamic-provisioning.md rename to content/zh-cn/docs/concepts/storage/dynamic-provisioning.md index 69b965545da3d..f256992599b8f 100644 --- a/content/zh/docs/concepts/storage/dynamic-provisioning.md +++ b/content/zh-cn/docs/concepts/storage/dynamic-provisioning.md @@ -23,7 +23,7 @@ automatically provisions storage when it is requested by users. 动态卷供应允许按需创建存储卷。 如果没有动态供应,集群管理员必须手动地联系他们的云或存储提供商来创建新的存储卷, 然后在 Kubernetes 集群创建 -[`PersistentVolume` 对象](/zh/docs/concepts/storage/persistent-volumes/)来表示这些卷。 +[`PersistentVolume` 对象](/zh-cn/docs/concepts/storage/persistent-volumes/)来表示这些卷。 动态供应功能消除了集群管理员预先配置存储的需要。 相反,它在用户请求时自动供应存储。 @@ -58,7 +58,7 @@ have the ability to select from multiple storage options. More information on storage classes can be found [here](/docs/concepts/storage/storage-classes/). --> -点击[这里](/zh/docs/concepts/storage/storage-classes/)查阅有关存储类的更多信息。 +点击[这里](/zh-cn/docs/concepts/storage/storage-classes/)查阅有关存储类的更多信息。 要启用动态供应功能,集群管理员需要为用户预先创建一个或多个 `StorageClass` 对象。 `StorageClass` 对象定义当动态供应被调用时,哪一个驱动将被使用和哪些参数将被传递给驱动。 -StorageClass 对象的名字必须是一个合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +StorageClass 对象的名字必须是一个合法的 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 以下清单创建了一个 `StorageClass` 存储类 "slow",它提供类似标准磁盘的永久磁盘。 ```yaml @@ -163,7 +163,7 @@ Dynamic provisioning can be enabled on a cluster such that all claims are dynamically provisioned if no storage class is specified. 
A cluster administrator can enable this behavior by: --> -可以在群集上启用动态卷供应,以便在未指定存储类的情况下动态设置所有声明。 +可以在集群上启用动态卷供应,以便在未指定存储类的情况下动态设置所有声明。 集群管理员可以通过以下方式启用此行为: - 标记一个 `StorageClass` 为 *默认*; -- 确保 [`DefaultStorageClass` 准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)在 API 服务端被启用。 +- 确保 [`DefaultStorageClass` 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)在 API 服务端被启用。 -请注意,群集上最多只能有一个 *默认* 存储类,否则无法创建没有明确指定 +请注意,集群上最多只能有一个 *默认* 存储类,否则无法创建没有明确指定 `storageClassName` 的 `PersistentVolumeClaim`。 -在[多区域](/zh/docs/setup/best-practices/multiple-zones/)集群中,Pod 可以被分散到多个区域。 +在[多区域](/zh-cn/docs/setup/best-practices/multiple-zones/)集群中,Pod 可以被分散到多个区域。 单区域存储后端应该被供应到 Pod 被调度到的区域。 -这可以通过设置[卷绑定模式](/zh/docs/concepts/storage/storage-classes/#volume-binding-mode)来实现。 +这可以通过设置[卷绑定模式](/zh-cn/docs/concepts/storage/storage-classes/#volume-binding-mode)来实现。 diff --git a/content/zh/docs/concepts/storage/ephemeral-volumes.md b/content/zh-cn/docs/concepts/storage/ephemeral-volumes.md similarity index 83% rename from content/zh/docs/concepts/storage/ephemeral-volumes.md rename to content/zh-cn/docs/concepts/storage/ephemeral-volumes.md index fde93e070b7ff..2e7c7980df35e 100644 --- a/content/zh/docs/concepts/storage/ephemeral-volumes.md +++ b/content/zh-cn/docs/concepts/storage/ephemeral-volumes.md @@ -23,7 +23,7 @@ with [volumes](/docs/concepts/storage/volumes/) is suggested, in particular PersistentVolumeClaim and PersistentVolume. --> 本文档描述 Kubernetes 中的 _临时卷(Ephemeral Volume)_。 -建议先了解[卷](/zh/docs/concepts/storage/volumes/),特别是 PersistentVolumeClaim 和 PersistentVolume。 +建议先了解[卷](/zh-cn/docs/concepts/storage/volumes/),特别是 PersistentVolumeClaim 和 PersistentVolume。 -临时卷在 Pod 规范中以 _内联_ 方式定义,这简化了应用程序的部署和管理。 +临时卷在 Pod 规约中以 _内联_ 方式定义,这简化了应用程序的部署和管理。 Kubernetes 为了不同的目的,支持几种不同类型的临时卷: -- [emptyDir](/zh/docs/concepts/storage/volumes/#emptydir): +- [emptyDir](/zh-cn/docs/concepts/storage/volumes/#emptydir): Pod 启动时为空,存储空间来自本地的 kubelet 根目录(通常是根磁盘)或内存 -- [configMap](/zh/docs/concepts/storage/volumes/#configmap)、 - [downwardAPI](/zh/docs/concepts/storage/volumes/#downwardapi)、 - [secret](/zh/docs/concepts/storage/volumes/#secret): +- [configMap](/zh-cn/docs/concepts/storage/volumes/#configmap)、 + [downwardAPI](/zh-cn/docs/concepts/storage/volumes/#downwardapi)、 + [secret](/zh-cn/docs/concepts/storage/volumes/#secret): 将不同类型的 Kubernetes 数据注入到 Pod 中 -- [CSI 临时卷](/zh/docs/concepts/storage/volumes/#csi-ephemeral-volumes): +- [CSI 临时卷](/zh-cn/docs/concepts/storage/volumes/#csi-ephemeral-volumes): 类似于前面的卷类型,但由专门[支持此特性](https://kubernetes-csi.github.io/docs/drivers.html) 的指定 [CSI 驱动程序](https://github.com/container-storage-interface/spec/blob/master/spec.md)提供 @@ -103,7 +103,7 @@ CSI ephemeral volumes *must* be provided by third-party CSI storage drivers. --> `emptyDir`、`configMap`、`downwardAPI`、`secret` 是作为 -[本地临时存储](/zh/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage) +[本地临时存储](/zh-cn/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage) 提供的。它们由各个节点上的 kubelet 管理。 CSI 临时卷 *必须* 由第三方 CSI 存储驱动程序提供。 @@ -144,7 +144,7 @@ shows which drivers support ephemeral volumes. 
--> 该特性需要启用参数 `CSIInlineVolume` -[特性门控(feature gate)](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 +[特性门控(feature gate)](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 该参数从 Kubernetes 1.16 开始默认启用。 {{< note >}} @@ -171,7 +171,7 @@ Here's an example manifest for a Pod that uses CSI ephemeral storage: 从概念上讲,CSI 临时卷类似于 `configMap`、`downwardAPI` 和 `secret` 类型的卷: 其存储在每个节点本地管理,并在将 Pod 调度到节点后与其他本地资源一起创建。 在这个阶段,Kubernetes 没有重新调度 Pods 的概念。卷创建不太可能失败,否则 Pod 启动将会受阻。 -特别是,这些卷 **不** 支持[感知存储容量的 Pod 调度](/zh/docs/concepts/storage/storage-capacity/)。 +特别是,这些卷 **不** 支持[感知存储容量的 Pod 调度](/zh-cn/docs/concepts/storage/storage-capacity/)。 它们目前也没包括在 Pod 的存储资源使用限制中,因为 kubelet 只能对它自己管理的存储强制执行。 下面是使用 CSI 临时存储的 Pod 的示例清单: @@ -211,43 +211,36 @@ instructions. ### CSI 驱动程序限制 {#csi-driver-restrictions} -{{< feature-state for_k8s_version="v1.21" state="deprecated" >}} +CSI 临时卷允许用户直接向 CSI 驱动程序提供 `volumeAttributes`,它会作为 Pod 规约的一部分。 +允许 `volumeAttributes` 的 CSI 驱动程序通常仅限于管理员使用,不适合在内联临时卷中使用。 +例如,通常在 StorageClass 中定义的参数不应通过使用内联临时卷向用户公开。 作为一个集群管理员,你可以使用 -[PodSecurityPolicy](/zh/docs/concepts/security/pod-security-policy/) +[PodSecurityPolicy](/zh-cn/docs/concepts/security/pod-security-policy/) 来控制在 Pod 中可以使用哪些 CSI 驱动程序, 具体则是通过 [`allowedCSIDrivers` 字段](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicyspec-v1beta1-policy) 指定。 - -{{< note >}} -PodSecurityPolicy 已弃用,并将在 Kubernetes v1.25 版本中移除。 -{{< /note >}} - - - - -{{< note >}} -CSI 临时卷仅有 CSI 驱动程序的一个子集支持。 -Kubernetes CSI [驱动列表](https://kubernetes-csi.github.io/docs/drivers.html)显示了哪些驱动程序支持临时卷。 -{{< /note >}} +如果集群管理员需要限制 CSI 驱动程序在 Pod 规约中被作为内联卷使用,可以这样做: +- 从 CSIDriver 规约的 `volumeLifecycleModes` 中删除 `Ephemeral`,这可以防止驱动程序被用作内联临时卷。 +- 使用[准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/) + 来限制如何使用此驱动程序。 -就[资源所有权](/zh/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)而言, +就[资源所有权](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)而言, 拥有通用临时存储的 Pod 是提供临时存储 (ephemeral storage) 的 PersistentVolumeClaim 的所有者。 当 Pod 被删除时,Kubernetes 垃圾收集器会删除 PVC, 然后 PVC 通常会触发卷的删除,因为存储类的默认回收策略是删除卷。 @@ -437,36 +430,21 @@ same namespace, so that these conflicts can't occur. Enabling the GenericEphemeralVolume feature allows users to create PVCs indirectly if they can create Pods, even if they do not have permission to create PVCs directly. Cluster administrators must be -aware of this. If this does not fit their security model, they have -two choices: +aware of this. If this does not fit their security model, they should +use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/) that rejects objects like Pods that have a generic ephemeral volume. 
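To ground the ownership and security discussion above, here is a hedged sketch of a Pod that requests a generic ephemeral volume through an inline `volumeClaimTemplate`; the image, storage class name, and size are placeholders. The PersistentVolumeClaim created from this template is owned by the Pod and is garbage-collected together with it, which is the behaviour the surrounding text describes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: busybox:1.36             # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /scratch
          name: scratch-volume
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-app-scratch
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: scratch-storage-class   # hypothetical class
            resources:
              requests:
                storage: 1Gi
```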
--> 启用 GenericEphemeralVolume 特性会导致那些没有 PVCs 创建权限的用户, 在创建 Pods 时,被允许间接的创建 PVCs。 集群管理员必须意识到这一点。 -如果这不符合他们的安全模型,他们有如下选择: - - -- 通过特性门控显式禁用该特性。 -- 使用一个[准入 Webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/) - 拒绝包含通用临时卷的 Pods。 -- 当 `volumes` 列表不包含 `ephemeral` 卷类型时,使用 - [Pod 安全策略](/zh/docs/concepts/policy/pod-security-policy/)。 - (这一方式在 Kubernetes 1.21 版本已经弃用) +如果这不符合他们的安全模型,他们应该使用一个[准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/) +拒绝包含通用临时卷的 Pods。 -[为 PVC 卷所设置的逐名字空间的配额](/zh/docs/concepts/policy/resource-quotas/#storage-resource-quota) +[为 PVC 卷所设置的逐名字空间的配额](/zh-cn/docs/concepts/policy/resource-quotas/#storage-resource-quota) 仍然有效,因此即使允许用户使用这种新机制,他们也不能使用它来规避其他策略。 ## {{% heading "whatsnext" %}} @@ -478,7 +456,7 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont --> ### kubelet 管理的临时卷 {#ephemeral-volumes-managed-by-kubelet} -参阅[本地临时存储](/zh/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage)。 +参阅[本地临时存储](/zh-cn/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage)。 本文描述 Kubernetes 中的 _持久卷(Persistent Volume)_ 。 -建议先熟悉[卷(Volume)](/zh/docs/concepts/storage/volumes/)的概念。 +建议先熟悉[卷(Volume)](/zh-cn/docs/concepts/storage/volumes/)的概念。 @@ -52,7 +52,7 @@ PersistentVolumeClaim。 A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. 
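As a minimal illustration of the definition above, the following statically provisioned PersistentVolume uses NFS as the backing storage; the server address, export path and class name are placeholders rather than values taken from this page.

```yaml
# Sketch of a statically provisioned PersistentVolume backed by NFS.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual          # hypothetical class name
  nfs:
    server: nfs.example.com         # placeholder NFS server
    path: /exports/data             # placeholder export path
```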
--> 持久卷(PersistentVolume,PV)是集群中的一块存储,可以由管理员事先供应,或者 -使用[存储类(Storage Class)](/zh/docs/concepts/storage/storage-classes/)来动态供应。 +使用[存储类(Storage Class)](/zh-cn/docs/concepts/storage/storage-classes/)来动态供应。 持久卷是集群资源,就像节点也是集群资源一样。PV 持久卷和普通的 Volume 一样,也是使用 卷插件来实现的,只是它们拥有独立于任何使用 PV 的 Pod 的生命周期。 此 API 对象中记述了存储的实现细节,无论其背后是 NFS、iSCSI 还是特定于云平台的存储系统。 @@ -77,7 +77,7 @@ See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-c 仅限于卷大小和访问模式,同时又不能将卷是如何实现的这些细节暴露给用户。 为了满足这类需求,就有了 _存储类(StorageClass)_ 资源。 -参见[基于运行示例的详细演练](/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)。 +参见[基于运行示例的详细演练](/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)。 为了基于存储类完成动态的存储供应,集群管理员需要在 API 服务器上启用 -`DefaultStorageClass` [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)。 +`DefaultStorageClass` [准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)。 举例而言,可以通过保证 `DefaultStorageClass` 出现在 API 服务器组件的 `--enable-admission-plugins` 标志值中实现这点;该标志的值可以是逗号 分隔的有序列表。关于 API 服务器标志的更多信息,可以参考 -[kube-apiserver](/zh/docs/reference/command-line-tools-reference/kube-apiserver/) +[kube-apiserver](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/) 文档。 不过,管理员可以按 -[参考资料](/zh/docs/reference/command-line-tools-reference/kube-controller-manager/) +[参考资料](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) 中所述,使用 Kubernetes 控制器管理器命令行参数来配置一个定制的回收器(Recycler) Pod 模板。此定制的回收器 Pod 模板必须包含一个 `volumes` 规约,如下例所示: @@ -374,6 +374,97 @@ However, the particular path specified in the custom recycler Pod template in th 定制回收器 Pod 模板中在 `volumes` 部分所指定的特定路径要替换为 正被回收的卷的路径。 + +### PersistentVolume 删除保护 finalizer {#persistentvolume-deletion-protection-finalizer} +{{< feature-state for_k8s_version="v1.23" state="alpha" >}} + +可以在 PersistentVolume 上添加终结器(Finalizers),以确保只有在删除对应的存储后才删除具有 +`Delete` 回收策略的 PersistentVolume。 + + +新引入的 `kubernetes.io/pv-controller` 和 `external-provisioner.volume.kubernetes.io/finalizer` +终结器仅会被添加到动态制备的卷上。 + +终结器 `kubernetes.io/pv-controller` 会被添加到树内插件卷上。 +下面是一个例子: + +```shell +kubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78 +Name: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78 +Labels: +Annotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner + pv.kubernetes.io/bound-by-controller: yes + pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume +Finalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller] +StorageClass: vcp-sc +Status: Bound +Claim: default/vcp-pvc-1 +Reclaim Policy: Delete +Access Modes: RWO +VolumeMode: Filesystem +Capacity: 1Gi +Node Affinity: +Message: +Source: + Type: vSphereVolume (a Persistent Disk resource in vSphere) + VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk + FSType: ext4 + StoragePolicyName: vSAN Default Storage Policy +Events: +``` + + +终结器 `external-provisioner.volume.kubernetes.io/finalizer` 会被添加到 CSI 卷上。下面是一个例子: + +```shell +Name: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d +Labels: +Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com +Finalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer] +StorageClass: fast +Status: Bound +Claim: demo-app/nginx-logs +Reclaim Policy: Delete +Access Modes: RWO +VolumeMode: Filesystem +Capacity: 200Mi +Node Affinity: +Message: +Source: + Type: CSI (a Container Storage Interface (CSI) volume source) + Driver: 
csi.vsphere.vmware.com + FSType: ext4 + VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd + ReadOnly: false + VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com + type=vSphere CNS Block Volume +Events: +``` + + +为特定的树内卷插件启用 `CSIMigration` 特性将删除 `kubernetes.io/pv-controller` 终结器, +同时添加 `external-provisioner.volume.kubernetes.io/finalizer` 终结器。 +同样,禁用 `CSIMigration` 将删除 `external-provisioner.volume.kubernetes.io/finalizer` 终结器, +同时添加 `kubernetes.io/pv-controller` 终结器。 + 绑定操作不会考虑某些卷匹配条件是否满足,包括节点亲和性等等。 控制面仍然会检查 -[存储类](/zh/docs/concepts/storage/storage-classes/)、访问模式和所请求的 +[存储类](/zh-cn/docs/concepts/storage/storage-classes/)、访问模式和所请求的 存储尺寸都是合法的。 ```yaml @@ -550,19 +641,7 @@ FlexVolume 卷(于 Kubernetes v1.23 弃用)可以在 Pod 重启期间调整 --> #### 重设使用中 PVC 申领的大小 {#resizing-an-in-use-persistentvolumevlaim} -{{< feature-state for_k8s_version="v1.15" state="beta" >}} - - -{{< note >}} -Kubernetes 从 1.15 版本开始将调整使用中 PVC 申领大小这一能力作为 Beta -特性支持;该特性在 1.11 版本以来处于 Alpha 阶段。 -`ExpandInUsePersistentVolumes` 特性必须被启用;在很多集群上,与此类似的 -Beta 阶段的特性是自动启用的。 -可参考[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) -文档了解更多信息。 -{{< /note >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} {{< note >}} Kubernetes 从 1.23 版本开始将允许用户恢复失败的 PVC 扩展这一能力作为 -alpha 特性支持。 `RecoverVolumeExpansionFailure` 必须被启用以允许使用此功能。 -可参考[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +alpha 特性支持。 `RecoverVolumeExpansionFailure` 必须被启用以允许使用此特性。 +可参考[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 文档了解更多信息。 {{< /note >}} -如果集群中的特性门控 `ExpandPersistentVolumes` 和 `RecoverVolumeExpansionFailure` -都已启用,在 PVC 的扩展发生失败时,你可以使用比先前请求的值更小的尺寸来重试扩展。 +如果集群中的特性门控 `RecoverVolumeExpansionFailure` +已启用,在 PVC 的扩展发生失败时,你可以使用比先前请求的值更小的尺寸来重试扩展。 要使用一个更小的尺寸尝试请求新的扩展,请编辑该 PVC 的 `.spec.resources` 并选择 一个比你之前所尝试的值更小的值。 如果由于容量限制而无法成功扩展至更高的值,这将很有用。 @@ -708,23 +787,23 @@ PV 持久卷是用插件的形式来实现的。Kubernetes 目前支持以下插 * [`rbd`](/docs/concepts/storage/volumes/#rbd) - Rados Block Device (RBD) volume * [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK volume --> -* [`awsElasticBlockStore`](/zh/docs/concepts/storage/volumes/#awselasticblockstore) - AWS 弹性块存储(EBS) -* [`azureDisk`](/zh/docs/concepts/storage/volumes/#azuredisk) - Azure Disk -* [`azureFile`](/zh/docs/concepts/storage/volumes/#azurefile) - Azure File -* [`cephfs`](/zh/docs/concepts/storage/volumes/#cephfs) - CephFS volume -* [`csi`](/zh/docs/concepts/storage/volumes/#csi) - 容器存储接口 (CSI) -* [`fc`](/zh/docs/concepts/storage/volumes/#fc) - Fibre Channel (FC) 存储 -* [`gcePersistentDisk`](/zh/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE 持久化盘 -* [`glusterfs`](/zh/docs/concepts/storage/volumes/#glusterfs) - Glusterfs 卷 -* [`hostPath`](/zh/docs/concepts/storage/volumes/#hostpath) - HostPath 卷 +* [`awsElasticBlockStore`](/zh-cn/docs/concepts/storage/volumes/#awselasticblockstore) - AWS 弹性块存储(EBS) +* [`azureDisk`](/zh-cn/docs/concepts/storage/volumes/#azuredisk) - Azure Disk +* [`azureFile`](/zh-cn/docs/concepts/storage/volumes/#azurefile) - Azure File +* [`cephfs`](/zh-cn/docs/concepts/storage/volumes/#cephfs) - CephFS volume +* [`csi`](/zh-cn/docs/concepts/storage/volumes/#csi) - 容器存储接口 (CSI) +* [`fc`](/zh-cn/docs/concepts/storage/volumes/#fc) - Fibre Channel (FC) 存储 +* [`gcePersistentDisk`](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE 持久化盘 +* [`glusterfs`](/zh-cn/docs/concepts/storage/volumes/#glusterfs) - Glusterfs 卷 +* 
[`hostPath`](/zh-cn/docs/concepts/storage/volumes/#hostpath) - HostPath 卷 (仅供单节点测试使用;不适用于多节点集群; 请尝试使用 `local` 卷作为替代) -* [`iscsi`](/zh/docs/concepts/storage/volumes/#iscsi) - iSCSI (SCSI over IP) 存储 -* [`local`](/zh/docs/concepts/storage/volumes/#local) - 节点上挂载的本地存储设备 -* [`nfs`](/zh/docs/concepts/storage/volumes/#nfs) - 网络文件系统 (NFS) 存储 -* [`portworxVolume`](/zh/docs/concepts/storage/volumes/#portworxvolume) - Portworx 卷 -* [`rbd`](/zh/docs/concepts/storage/volumes/#rbd) - Rados 块设备 (RBD) 卷 -* [`vsphereVolume`](/zh/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK 卷 +* [`iscsi`](/zh-cn/docs/concepts/storage/volumes/#iscsi) - iSCSI (SCSI over IP) 存储 +* [`local`](/zh-cn/docs/concepts/storage/volumes/#local) - 节点上挂载的本地存储设备 +* [`nfs`](/zh-cn/docs/concepts/storage/volumes/#nfs) - 网络文件系统 (NFS) 存储 +* [`portworxVolume`](/zh-cn/docs/concepts/storage/volumes/#portworxvolume) - Portworx 卷 +* [`rbd`](/zh-cn/docs/concepts/storage/volumes/#rbd) - Rados 块设备 (RBD) 卷 +* [`vsphereVolume`](/zh-cn/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK 卷 +Kubernetes 使用卷访问模式来匹配 PersistentVolumeClaim 和 PersistentVolume。 +在某些场合下,卷访问模式也会限制 PersistentVolume 可以挂载的位置。 +卷访问模式并**不会**在存储已经被挂载的情况下为其实施写保护。 +即使访问模式设置为 ReadWriteOnce、ReadOnlyMany 或 ReadWriteMany,它们也不会对卷形成限制。 +例如,即使某个卷创建时设置为 ReadOnlyMany,也无法保证该卷是只读的。 +如果访问模式设置为 ReadWriteOncePod,则卷会被限制起来并且只能挂载到一个 Pod 上。 +{{< /note >}} + @@ -980,7 +1076,7 @@ to PVCs that request no particular class. ### 类 {#class} 每个 PV 可以属于某个类(Class),通过将其 `storageClassName` 属性设置为某个 -[StorageClass](/zh/docs/concepts/storage/storage-classes/) 的名称来指定。 +[StorageClass](/zh-cn/docs/concepts/storage/storage-classes/) 的名称来指定。 特定类的 PV 卷只能绑定到请求该类存储卷的 PVC 申领。 未设置 `storageClassName` 的 PV 卷没有类设定,只能绑定到那些没有指定特定 存储类的 PVC 申领。 @@ -1085,11 +1181,11 @@ For most volume types, you do not need to set this field. It is automatically po --> {{< note >}} 对大多数类型的卷而言,你不需要设置节点亲和性字段。 -[AWS EBS](/zh/docs/concepts/storage/volumes/#awselasticblockstore)、 -[GCE PD](/zh/docs/concepts/storage/volumes/#gcepersistentdisk) 和 -[Azure Disk](/zh/docs/concepts/storage/volumes/#azuredisk) 卷类型都能 +[AWS EBS](/zh-cn/docs/concepts/storage/volumes/#awselasticblockstore)、 +[GCE PD](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) 和 +[Azure Disk](/zh-cn/docs/concepts/storage/volumes/#azuredisk) 卷类型都能 自动设置相关字段。 -你需要为 [local](/zh/docs/concepts/storage/volumes/#local) 卷显式地设置 +你需要为 [local](/zh-cn/docs/concepts/storage/volumes/#local) 卷显式地设置 此属性。 {{< /note >}} @@ -1125,7 +1221,7 @@ The name of a PersistentVolumeClaim object must be a valid --> 每个 PVC 对象都有 `spec` 和 `status` 部分,分别对应申领的规约和状态。 PersistentVolumeClaim 对象的名称必须是合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). ```yaml @@ -1184,7 +1280,7 @@ Claims can specify a [label selector](/docs/concepts/overview/working-with-objec --> ### 选择算符 {#selector} -申领可以设置[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors) +申领可以设置[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors) 来进一步过滤卷集合。只有标签与选择算符相匹配的卷能够绑定到申领上。 选择算符包含两个字段: @@ -1213,7 +1309,7 @@ be bound to the PVC. 
### 类 {#class} 申领可以通过为 `storageClassName` 属性设置 -[StorageClass](/zh/docs/concepts/storage/storage-classes/) 的名称来请求特定的存储类。 +[StorageClass](/zh-cn/docs/concepts/storage/storage-classes/) 的名称来请求特定的存储类。 只有所请求的类的 PV 卷,即 `storageClassName` 值与 PVC 设置相同的 PV 卷, 才能绑定到 PVC 申领。 @@ -1231,7 +1327,7 @@ PVC 申领不必一定要请求某个类。如果 PVC 的 `storageClassName` 属 存储类的 PV 卷(未设置注解或者注解值为 `""` 的 PersistentVolume(PV)对象在系统中不会被删除,因为这样做可能会引起数据丢失。 未设置 `storageClassName` 的 PVC 与此大不相同,也会被集群作不同处理。 具体筛查方式取决于 -[`DefaultStorageClass` 准入控制器插件](/zh/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) +[`DefaultStorageClass` 准入控制器插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) 是否被启用。 -卷快照(Volume Snapshot)功能的添加仅是为了支持 CSI 卷插件。 -有关细节可参阅[卷快照](/zh/docs/concepts/storage/volume-snapshots/)文档。 +卷快照(Volume Snapshot)特性的添加仅是为了支持 CSI 卷插件。 +有关细节可参阅[卷快照](/zh-cn/docs/concepts/storage/volume-snapshots/)文档。 要启用从卷快照数据源恢复数据卷的支持,可在 API 服务器和控制器管理器上启用 `VolumeSnapshotDataSource` 特性门控。 @@ -1666,7 +1757,7 @@ spec: --> ## 卷克隆 {#volume-cloning} -[卷克隆](/zh/docs/concepts/storage/volume-pvc-datasource/)功能特性仅适用于 +[卷克隆](/zh-cn/docs/concepts/storage/volume-pvc-datasource/)功能特性仅适用于 CSI 卷插件。 -* 进一步了解[创建持久卷](/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume). -* 进一步学习[创建 PVC 申领](/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim). -* 阅读[持久存储的设计文档](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md). +* 进一步了解[创建持久卷](/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume). +* 进一步学习[创建 PVC 申领](/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim). +* 阅读[持久存储的设计文档](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/persistent-storage.md). -本文档描述 Kubernet 中的*投射卷(Projected Volumes)*。 -建议先熟悉[卷](/zh/docs/concepts/storage/volumes/)概念。 +本文档描述 Kubernetes 中的*投射卷(Projected Volumes)*。 +建议先熟悉[卷](/zh-cn/docs/concepts/storage/volumes/)概念。 @@ -42,17 +42,17 @@ Currently, the following types of volume sources can be projected: 目前,以下类型的卷源可以被投射: -* [`secret`](/zh/docs/concepts/storage/volumes/#secret) -* [`downwardAPI`](/zh/docs/concepts/storage/volumes/#downwardapi) -* [`configMap`](/zh/docs/concepts/storage/volumes/#configmap) +* [`secret`](/zh-cn/docs/concepts/storage/volumes/#secret) +* [`downwardAPI`](/zh-cn/docs/concepts/storage/volumes/#downwardapi) +* [`configMap`](/zh-cn/docs/concepts/storage/volumes/#configmap) * [`serviceAccountToken`](#serviceaccounttoken) 所有的卷源都要求处于 Pod 所在的同一个名字空间内。进一步的详细信息,可参考 -[一体化卷设计文档](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/all-in-one-volume.md)。 +[一体化卷](https://github.com/kubernetes/design-proposals-archive/blob/main/node/all-in-one-volume.md)设计文档。 ## serviceAccountToken 投射卷 {#serviceaccounttoken} 当 `TokenRequestProjection` 特性被启用时,你可以将当前 -[服务账号](/zh/docs/reference/access-authn-authz/authentication/#service-account-tokens) +[服务账号](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens) 的令牌注入到 Pod 中特定路径下。例如: {{< codenew file="pods/storage/projected-service-account-token.yaml" >}} @@ -108,7 +108,7 @@ is optional and it defaults to the identifier of the API server. 
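Because the referenced manifest file itself is not visible in this diff, here is a comparable sketch of a Pod that mounts a projected service account token; the audience, expiry and paths are illustrative values, not necessarily those used by the linked example file.

```yaml
# Sketch of a projected serviceAccountToken volume.
apiVersion: v1
kind: Pod
metadata:
  name: sa-token-demo
spec:
  serviceAccountName: default
  containers:
    - name: client
      image: k8s.gcr.io/pause:3.6            # placeholder image
      volumeMounts:
        - mountPath: /var/run/secrets/tokens
          name: token-vol
  volumes:
    - name: token-vol
      projected:
        sources:
          - serviceAccountToken:
              audience: api                  # must match what the receiving service expects
              expirationSeconds: 3600        # requested token lifetime in seconds
              path: token                    # file name within the mount
```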
--> 示例 Pod 中包含一个投射卷,其中包含注入的服务账号令牌。 此 Pod 中的容器可以使用该令牌访问 Kubernetes API 服务器, 使用 -[pod 的 ServiceAccount](/zh/docs/tasks/configure-pod-container/configure-service-account/) +[pod 的 ServiceAccount](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/) 进行身份验证。`audience` 字段包含令牌所针对的受众。 收到令牌的主体必须使用令牌受众中所指定的某个标识符来标识自身,否则应该拒绝该令牌。 此字段是可选的,默认值为 API 服务器的标识。 @@ -130,7 +130,7 @@ of the projected volume. A container using a projected volume source as a [`subPath`](/docs/concepts/storage/volumes/#using-subpath) volume mount will not receive updates for those volume sources. --> -以 [`subPath`](/zh/docs/concepts/storage/volumes/#using-subpath) +以 [`subPath`](/zh-cn/docs/concepts/storage/volumes/#using-subpath) 形式使用投射卷源的容器无法收到对应卷源的更新。 {{< /note >}} @@ -140,11 +140,9 @@ volume mount will not receive updates for those volume sources. ## 与 SecurityContext 间的关系 {#securitycontext-interactions} -[关于在投射的服务账号卷中处理文件访问权限的提案](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2451-service-account-token-volumes#token-volume-projection) +关于在投射的服务账号卷中处理文件访问权限的[提案](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2451-service-account-token-volumes#proposal) 介绍了如何使得所投射的文件具有合适的属主访问权限。 ### Linux diff --git a/content/zh/docs/concepts/storage/storage-capacity.md b/content/zh-cn/docs/concepts/storage/storage-capacity.md similarity index 70% rename from content/zh/docs/concepts/storage/storage-capacity.md rename to content/zh-cn/docs/concepts/storage/storage-capacity.md index 10089b428d312..c89d98e3bce3b 100644 --- a/content/zh/docs/concepts/storage/storage-capacity.md +++ b/content/zh-cn/docs/concepts/storage/storage-capacity.md @@ -10,58 +10,66 @@ Storage capacity is limited and may vary depending on the node on which a pod runs: network-attached storage might not be accessible by all nodes, or storage is local to a node to begin with. -{{< feature-state for_k8s_version="v1.21" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} This page describes how Kubernetes keeps track of storage capacity and -how the scheduler uses that information to schedule Pods onto nodes +how the scheduler uses that information to [schedule Pods](/docs/concepts/scheduling-eviction/) onto nodes that have access to enough storage capacity for the remaining missing volumes. Without storage capacity tracking, the scheduler may choose a node that doesn't have enough capacity to provision a volume and multiple scheduling retries will be needed. - -Tracking storage capacity is supported for {{< glossary_tooltip -text="Container Storage Interface" term_id="csi" >}} (CSI) drivers and -[needs to be enabled](#enabling-storage-capacity-tracking) when installing a CSI driver. 
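Enabling capacity tracking for a driver happens on its CSIDriver object, as the sketch below shows; the driver name is only an example, and the exact steps depend on the driver's own installation manifests.

```yaml
# Sketch only: a CSIDriver object that opts in to storage capacity tracking,
# so the scheduler will consult CSIStorageCapacity objects for this driver.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: hostpath.csi.k8s.io   # example driver name
spec:
  storageCapacity: true       # scheduler considers published capacity for this driver
```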
--> 存储容量是有限的,并且会因为运行 Pod 的节点不同而变化: 网络存储可能并非所有节点都能够访问,或者对于某个节点存储是本地的。 -{{< feature-state for_k8s_version="v1.21" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} 本页面描述了 Kubernetes 如何跟踪存储容量以及调度程序如何为了余下的尚未挂载的卷使用该信息将 -Pod 调度到能够访问到足够存储容量的节点上。 +[Pod 调度](/zh-cn/docs/concepts/scheduling-eviction/)到能够访问到足够存储容量的节点上。 如果没有跟踪存储容量,调度程序可能会选择一个没有足够容量来提供卷的节点,并且需要多次调度重试。 -{{< glossary_tooltip text="容器存储接口" term_id="csi" >}}(CSI)驱动程序支持跟踪存储容量, -并且在安装 CSI 驱动程序时[需要启用](#enabling-storage-capacity-tracking)该功能。 +## {{% heading "prerequisites" %}} + + +Kubernetes v{{< skew currentVersion >}} 包含了对存储容量跟踪的集群级 API 支持。 +要使用它,你还必须使用支持容量跟踪的 CSI 驱动程序。请查阅你使用的 CSI 驱动程序的文档, +以了解此支持是否可用,如果可用,该如何使用它。如果你运行的不是 +Kubernetes v{{< skew currentVersion >}},请查看对应版本的 Kubernetes 文档。 ## API 这个特性有两个 API 扩展接口: -- CSIStorageCapacity 对象:这些对象由 CSI 驱动程序在安装驱动程序的命名空间中产生。 +- [CSIStorageCapacity](/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/) 对象:这些对象由 + CSI 驱动程序在安装驱动程序的命名空间中产生。 每个对象都包含一个存储类的容量信息,并定义哪些节点可以访问该存储。 -- [`CSIDriverSpec.StorageCapacity` 字段](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csidriverspec-v1-storage-k8s-io): +- [`CSIDriverSpec.StorageCapacity` 字段](/docs/reference/kubernetes-api/config-and-storage-resources/csi-driver-v1/#CSIDriverSpec): 设置为 true 时,Kubernetes 调度程序将考虑使用 CSI 驱动程序的卷的存储容量。 ## 限制 @@ -151,32 +156,12 @@ to handle this automatically. 当 Pod 使用多个卷时,调度可能会永久失败:一个卷可能已经在拓扑段中创建,而该卷又没有足够的容量来创建另一个卷, 要想从中恢复,必须要进行手动干预,比如通过增加存储容量或者删除已经创建的卷。 -需要[进一步工作](https://github.com/kubernetes/enhancements/pull/1703)来自动处理此问题。 - - -## 开启存储容量跟踪 - -存储容量跟踪是一个 Beta 特性,从 Kubernetes 1.21 版本起在 Kubernetes 集群 -中默认被启用。除了在集群中启用此功能特性之外,还要求 CSI 驱动支持此特性。 -请参阅驱动的文档了解详细信息。 ## {{% heading "whatsnext" %}} - 想要获得更多该设计的信息,查看 [Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1472-storage-capacity-tracking/README.md)。 -- 有关此功能的下一步开发信息,查看 - [enhancement tracking issue #1472](https://github.com/kubernetes/enhancements/issues/1472)。 -- 学习 [Kubernetes 调度器](/zh/docs/concepts/scheduling-eviction/kube-scheduler/)。 diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh-cn/docs/concepts/storage/storage-classes.md similarity index 98% rename from content/zh/docs/concepts/storage/storage-classes.md rename to content/zh-cn/docs/concepts/storage/storage-classes.md index 9fb39ed2d4547..be1f6eca0b116 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh-cn/docs/concepts/storage/storage-classes.md @@ -23,8 +23,8 @@ with [volumes](/docs/concepts/storage/volumes/) and [persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested. --> 本文描述了 Kubernetes 中 StorageClass 的概念。建议先熟悉 -[卷](/zh/docs/concepts/storage/volumes/)和 -[持久卷](/zh/docs/concepts/storage/persistent-volumes)的概念。 +[卷](/zh-cn/docs/concepts/storage/volumes/)和 +[持久卷](/zh-cn/docs/concepts/storage/persistent-volumes)的概念。 @@ -74,7 +74,7 @@ for details. --> 管理员可以为没有申请绑定到特定 StorageClass 的 PVC 指定一个默认的存储类: 更多详情请参阅 -[PersistentVolumeClaim 章节](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。 +[PersistentVolumeClaim 章节](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。 ```yaml apiVersion: storage.k8s.io/v1 @@ -249,7 +249,7 @@ the class or PV, If a mount option is invalid, the PV mount fails. 
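Tying together the default class and mount option behaviour discussed in this hunk, such a StorageClass might look like the sketch below; the class name, provisioner and NFS mount options are placeholders.

```yaml
# Sketch: a StorageClass marked as the cluster default, with mount options
# that are passed through to dynamically provisioned PersistentVolumes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                     # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/csi-driver  # placeholder provisioner
reclaimPolicy: Delete
mountOptions:
  - hard                             # example NFS mount options; an invalid
  - nfsvers=4.1                      # option makes the PV mount fail, as noted above
```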
The `volumeBindingMode` field controls when [volume binding and dynamic provisioning](/docs/concepts/storage/persistent-volumes/#provisioning) should occur. --> -`volumeBindingMode` 字段控制了[卷绑定和动态制备](/zh/docs/concepts/storage/persistent-volumes/#provisioning) +`volumeBindingMode` 字段控制了[卷绑定和动态制备](/zh-cn/docs/concepts/storage/persistent-volumes/#provisioning) 应该发生在什么时候。 -动态配置和预先创建的 PV 也支持 [CSI卷](/zh/docs/concepts/storage/volumes/#csi), +动态配置和预先创建的 PV 也支持 [CSI卷](/zh-cn/docs/concepts/storage/volumes/#csi), 但是你需要查看特定 CSI 驱动程序的文档以查看其支持的拓扑键名和例子。 {{< note >}} @@ -770,7 +770,7 @@ vSphere 存储类有两种制备器 [弃用](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi)。 更多关于 CSI 制备器的详情,请参阅 [Kubernetes vSphere CSI 驱动](https://vsphere-csi-driver.sigs.k8s.io/) -和 [vSphereVolume CSI 迁移](/zh/docs/concepts/storage/volumes/#csi-migration-5)。 +和 [vSphereVolume CSI 迁移](/zh-cn/docs/concepts/storage/volumes/#csi-migration-5)。 在存储制备期间,为挂载凭证创建一个名为 `secretName` 的 Secret。如果集群同时启用了 -[RBAC](/zh/docs/reference/access-authn-authz/rbac/) 和 -[控制器角色](/zh/docs/reference/access-authn-authz/rbac/#controller-roles), +[RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 和 +[控制器角色](/zh-cn/docs/reference/access-authn-authz/rbac/#controller-roles), 为 `system:controller:persistent-volume-binder` 的 clusterrole 添加 `Secret` 资源的 `create` 权限。 diff --git a/content/zh/docs/concepts/storage/storage-limits.md b/content/zh-cn/docs/concepts/storage/storage-limits.md similarity index 93% rename from content/zh/docs/concepts/storage/storage-limits.md rename to content/zh-cn/docs/concepts/storage/storage-limits.md index 14ffd377a6f51..6943ba3ed6a00 100644 --- a/content/zh/docs/concepts/storage/storage-limits.md +++ b/content/zh-cn/docs/concepts/storage/storage-limits.md @@ -16,22 +16,21 @@ content_type: concept - + 此页面描述了各个云供应商可关联至一个节点的最大卷数。 - - +waiting for volumes to attach. +--> 谷歌、亚马逊和微软等云供应商通常对可以关联到节点的卷数量进行限制。 Kubernetes 需要尊重这些限制。 否则,在节点上调度的 Pod 可能会卡住去等待卷的关联。 - - - - - ## Kubernetes 的默认限制 The Kubernetes 调度器对关联于一个节点的卷数有默认限制: @@ -73,20 +71,18 @@ the limit you set. The limit applies to the entire cluster, so it affects all Nodes. --> - ## 自定义限制 -您可以通过设置 `KUBE_MAX_PD_VOLS` 环境变量的值来设置这些限制,然后再启动调度器。 +你可以通过设置 `KUBE_MAX_PD_VOLS` 环境变量的值来设置这些限制,然后再启动调度器。 CSI 驱动程序可能具有不同的过程,关于如何自定义其限制请参阅相关文档。 -如果设置的限制高于默认限制,请谨慎使用。请参阅云提供商的文档以确保节点可支持您设置的限制。 +如果设置的限制高于默认限制,请谨慎使用。请参阅云提供商的文档以确保节点可支持你设置的限制。 此限制应用于整个集群,所以它会影响所有节点。 - ## 动态卷限制 {{< feature-state state="stable" for_k8s_version="v1.17" >}} @@ -99,7 +95,6 @@ Dynamic volume limits are supported for following volume types. - Azure Disk - CSI --> - 以下卷类型支持动态卷限制。 - Amazon EBS @@ -111,7 +106,6 @@ Dynamic volume limits are supported for following volume types. For volumes managed by in-tree volume plugins, Kubernetes automatically determines the Node type and enforces the appropriate maximum number of volumes for the node. 
For example: --> - 对于由内建插件管理的卷,Kubernetes 会自动确定节点类型并确保节点上可关联的卷数目合规。 例如: - * 在 Google Compute Engine环境中, [根据节点类型](https://cloud.google.com/compute/docs/disks/#pdnumberlimits)最多可以将127个卷关联到节点。 diff --git a/content/zh/docs/concepts/storage/volume-health-monitoring.md b/content/zh-cn/docs/concepts/storage/volume-health-monitoring.md similarity index 98% rename from content/zh/docs/concepts/storage/volume-health-monitoring.md rename to content/zh-cn/docs/concepts/storage/volume-health-monitoring.md index 0b7f5924a488a..bf1c732fd65dd 100644 --- a/content/zh/docs/concepts/storage/volume-health-monitoring.md +++ b/content/zh-cn/docs/concepts/storage/volume-health-monitoring.md @@ -64,7 +64,7 @@ You need to enable the `CSIVolumeHealth` [feature gate](/docs/reference/command- --> {{< note >}} 你需要启用 `CSIVolumeHealth` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/), +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), 才能在节点上使用此特性。 {{< /note >}} diff --git a/content/zh/docs/concepts/storage/volume-pvc-datasource.md b/content/zh-cn/docs/concepts/storage/volume-pvc-datasource.md similarity index 99% rename from content/zh/docs/concepts/storage/volume-pvc-datasource.md rename to content/zh-cn/docs/concepts/storage/volume-pvc-datasource.md index 06454b6234a6d..d50dcdd99c9ed 100644 --- a/content/zh/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/zh-cn/docs/concepts/storage/volume-pvc-datasource.md @@ -21,7 +21,7 @@ weight: 60 This document describes the concept of cloning existing CSI Volumes in Kubernetes. Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested. --> 本文档介绍 Kubernetes 中克隆现有 CSI 卷的概念。阅读前建议先熟悉 -[卷](/zh/docs/concepts/storage/volumes)。 +[卷](/zh-cn/docs/concepts/storage/volumes)。 diff --git a/content/zh/docs/concepts/storage/volume-snapshot-classes.md b/content/zh-cn/docs/concepts/storage/volume-snapshot-classes.md similarity index 96% rename from content/zh/docs/concepts/storage/volume-snapshot-classes.md rename to content/zh-cn/docs/concepts/storage/volume-snapshot-classes.md index 8739de2d04063..47f617fe11033 100644 --- a/content/zh/docs/concepts/storage/volume-snapshot-classes.md +++ b/content/zh-cn/docs/concepts/storage/volume-snapshot-classes.md @@ -12,8 +12,8 @@ with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and [storage classes](/docs/concepts/storage/storage-classes) is suggested. --> 本文档描述了 Kubernetes 中 VolumeSnapshotClass 的概念。建议熟悉 -[卷快照(Volume Snapshots)](/zh/docs/concepts/storage/volume-snapshots/)和 -[存储类(Storage Class)](/zh/docs/concepts/storage/storage-classes)。 +[卷快照(Volume Snapshots)](/zh-cn/docs/concepts/storage/volume-snapshots/)和 +[存储类(Storage Class)](/zh-cn/docs/concepts/storage/storage-classes)。 diff --git a/content/zh/docs/concepts/storage/volume-snapshots.md b/content/zh-cn/docs/concepts/storage/volume-snapshots.md similarity index 83% rename from content/zh/docs/concepts/storage/volume-snapshots.md rename to content/zh-cn/docs/concepts/storage/volume-snapshots.md index f4ded9d560d55..719bcfa2a323a 100644 --- a/content/zh/docs/concepts/storage/volume-snapshots.md +++ b/content/zh-cn/docs/concepts/storage/volume-snapshots.md @@ -18,7 +18,7 @@ weight: 40 In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/). 
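As a small illustration of the API introduced above, the following VolumeSnapshot requests a dynamically created snapshot of an existing claim; the snapshot class and PVC names are placeholders.

```yaml
# Sketch: dynamically snapshot an existing PersistentVolumeClaim.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: data-pvc             # existing PVC in the same namespace
```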
--> 在 Kubernetes 中,卷快照是一个存储系统上卷的快照,本文假设你已经熟悉了 Kubernetes -的 [持久卷](/zh/docs/concepts/storage/persistent-volumes/)。 +的 [持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)。 @@ -118,7 +118,7 @@ Instead of using a pre-existing snapshot, you can request that a snapshot to be #### 动态的 {#dynamic} 可以从 `PersistentVolumeClaim` 中动态获取快照,而不用使用已经存在的快照。 -在获取快照时,[卷快照类](/zh/docs/concepts/storage/volume-snapshot-classes/) +在获取快照时,[卷快照类](/zh-cn/docs/concepts/storage/volume-snapshot-classes/) 指定要用的特定于存储提供程序的参数。 `snapshotHandle` 是存储后端创建卷的唯一标识符。对于预设置快照,这个字段是必须的。它指定此 `VolumeSnapshotContent` 表示的存储系统上的 CSI 快照 id。 + +`sourceVolumeMode` 是创建快照的卷的模式。`sourceVolumeMode` 字段的值可以是 +`Filesystem` 或 `Block`。如果没有指定源卷模式,Kubernetes 会将快照视为未知的源卷模式。 + + +## 转换快照的卷模式 {#convert-volume-mode} + +如果在你的集群上安装的 `VolumeSnapshots` API 支持 `sourceVolumeMode` +字段,则该 API 可以防止未经授权的用户转换卷的模式。 + +要检查你的集群是否具有此特性的能力,可以运行如下命令: + +```yaml +$ kubectl get crd volumesnapshotcontent -o yaml +``` + + +如果你希望允许用户从现有的 `VolumeSnapshot` 创建 `PersistentVolumeClaim`, +但是使用与源卷不同的卷模式,则需要添加注解 +`snapshot.storage.kubernetes.io/allowVolumeModeChange: "true"` +到对应 `VolumeSnapshot` 的 `VolumeSnapshotContent` 中。 + +对于预配置的快照,`Spec.SourceVolumeMode` 需要由集群管理员填充。 + +启用此特性的 `VolumeSnapshotContent` 资源示例如下所示: + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshotContent +metadata: + name: new-snapshot-content-test + annotations: + - snapshot.storage.kubernetes.io/allowVolumeModeChange: "true" +spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 + sourceVolumeMode: Filesystem + volumeSnapshotRef: + name: new-snapshot-test + namespace: default +``` + @@ -284,4 +353,4 @@ For more details, see [Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support). --> 更多详细信息,请参阅 -[卷快照和从快照还原卷](/zh/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)。 +[卷快照和从快照还原卷](/zh-cn/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)。 diff --git a/content/zh/docs/concepts/storage/volumes.md b/content/zh-cn/docs/concepts/storage/volumes.md similarity index 90% rename from content/zh/docs/concepts/storage/volumes.md rename to content/zh-cn/docs/concepts/storage/volumes.md index 2d92dbf79cf7a..ebe50dd89705b 100644 --- a/content/zh/docs/concepts/storage/volumes.md +++ b/content/zh-cn/docs/concepts/storage/volumes.md @@ -25,6 +25,7 @@ but with a clean state. A second problem occurs when sharing files between containers running together in a `Pod`. The Kubernetes {{< glossary_tooltip text="volume" term_id="volume" >}} abstraction solves both of these problems. +Familiarity with [Pods](/docs/concepts/workloads/pods/) is suggested. 
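A minimal sketch of the two problems described above, using a single `emptyDir` volume: its contents outlive individual container restarts within the Pod and are visible to both containers. Image names and commands are placeholders.

```yaml
# Sketch: two containers in one Pod sharing an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
    - name: writer
      image: busybox                 # placeholder image
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox                 # placeholder image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
          readOnly: true
  volumes:
    - name: shared-data
      emptyDir: {}
```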
--> Container 中的文件在磁盘上是临时存放的,这给 Container 中运行的较重要的应用程序带来一些问题。 问题之一是当容器崩溃时文件丢失。 @@ -33,10 +34,7 @@ kubelet 会重新启动容器,但容器会以干净的状态重启。 Kubernetes {{< glossary_tooltip text="卷(Volume)" term_id="volume" >}} 这一抽象概念能够解决这两个问题。 - -阅读本文前建议你熟悉一下 [Pods](/zh/docs/concepts/workloads/pods)。 +阅读本文前建议你熟悉一下 [Pod](/zh-cn/docs/concepts/workloads/pods)。 @@ -120,15 +118,21 @@ Kubernetes supports several types of Volumes: Kubernetes 支持下列类型的卷: -### awsElasticBlockStore {#awselasticblockstore} - +### awsElasticBlockStore (已弃用) {#awselasticblockstore} + +{{< feature-state for_k8s_version="v1.17" state="deprecated" >}} + `awsElasticBlockStore` 卷将 Amazon Web服务(AWS)[EBS 卷](https://aws.amazon.com/ebs/) 挂载到你的 Pod 中。与 `emptyDir` 在 Pod 被删除时也被删除不同,EBS 卷的内容在删除 Pod 时会被保留,卷只是被卸载掉了。 @@ -239,13 +243,18 @@ and the kubelet, set the `InTreePluginAWSUnregister` flag to `true`. 要禁止控制器管理器和 kubelet 加载 `awsElasticBlockStore` 存储插件, 请将 `InTreePluginAWSUnregister` 标志设置为 `true`。 -### azureDisk {#azuredisk} - +### azureDisk (已弃用) {#azuredisk} + +{{< feature-state for_k8s_version="v1.19" state="deprecated" >}} `azureDisk` 卷类型用来在 Pod 上挂载 Microsoft Azure [数据盘(Data Disk)](https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-about-disks-vhds/) 。 @@ -256,22 +265,21 @@ For more details, see the [`azureDisk` volume plugin](https://github.com/kuberne --> #### azureDisk 的 CSI 迁移 {#azuredisk-csi-migration} -{{< feature-state for_k8s_version="v1.19" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} -启用 `azureDisk` 的 `CSIMigration` 功能后,所有插件操作从现有的树内插件重定向到 +启用 `azureDisk` 的 `CSIMigration` 特性后,所有插件操作从现有的树内插件重定向到 `disk.csi.azure.com` 容器存储接口(CSI)驱动程序。 -为了使用此功能,必须在集群中安装 +为了使用此特性,必须在集群中安装 [Azure 磁盘 CSI 驱动程序](https://github.com/kubernetes-sigs/azuredisk-csi-driver), -并且 `CSIMigration` 和 `CSIMigrationAzureDisk` 功能必须被启用。 +并且 `CSIMigration` 特性必须被启用。 +### azureFile (已弃用) {#azurefile} + +{{< feature-state for_k8s_version="v1.21" state="deprecated" >}} + `azureFile` 卷类型用来在 Pod 上挂载 Microsoft Azure 文件卷(File Volume)(SMB 2.1 和 3.0)。 更多详情请参考 [`azureFile` 卷插件](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_file/README.md)。 @@ -314,18 +328,19 @@ Driver](https://github.com/kubernetes-sigs/azurefile-csi-driver) must be installed on the cluster and the `CSIMigration` and `CSIMigrationAzureFile` [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) must be enabled. 
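One way to enable feature gates such as those named above on a node is through the kubelet configuration file, sketched below; the same gate names would also be passed to kube-controller-manager via its `--feature-gates` flag. This is a generic illustration of setting feature gates, not an Azure-specific installation procedure.

```yaml
# Sketch: enabling CSI migration feature gates in a kubelet configuration file.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true
  CSIMigrationAzureFile: true
```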
--> -启用 `azureFile` 的 `CSIMigration` 功能后,所有插件操作将从现有的树内插件重定向到 -`file.csi.azure.com` 容器存储接口(CSI)驱动程序。要使用此功能,必须在集群中安装 +启用 `azureFile` 的 `CSIMigration` 特性后,所有插件操作将从现有的树内插件重定向到 +`file.csi.azure.com` 容器存储接口(CSI)驱动程序。要使用此特性,必须在集群中安装 [Azure 文件 CSI 驱动程序](https://github.com/kubernetes-sigs/azurefile-csi-driver), 并且 `CSIMigration` 和 `CSIMigrationAzureFile` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 必须被启用。 Azure 文件 CSI 驱动尚不支持为同一卷设置不同的 fsgroup。 -如果 AzureFile CSI 迁移被启用,用不同的 fsgroup 来使用同一卷也是不被支持的。 +如果 `CSIMigrationAzureFile` 特性被启用,用不同的 fsgroup 来使用同一卷也是不被支持的。 更多信息请参考 [CephFS 示例](https://github.com/kubernetes/examples/tree/master/volumes/cephfs/)。 -### cinder {#cinder} + +### cinder (已弃用) {#cinder} + +{{< feature-state for_k8s_version="v1.18" state="deprecated" >}} +{{< note >}} -{{< note >}} Kubernetes 必须配置了 OpenStack Cloud Provider。 {{< /note >}} `cinder` 卷类型用于将 OpenStack Cinder 卷挂载到 Pod 中。 -#### Cinder 卷示例配置 + +#### Cinder 卷示例配置 {#cinder-volume-example-configuration} ```yaml apiVersion: v1 @@ -413,29 +434,31 @@ spec: --> #### OpenStack CSI 迁移 -{{< feature-state for_k8s_version="v1.21" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} -Cinder 的 `CSIMigration` 功能在 Kubernetes 1.21 版本中是默认被启用的。 +自 Kubernetes 1.21 版本起,Cinder 的 `CSIMigration` 特性是默认被启用的。 此特性会将插件的所有操作从现有的树内插件重定向到 `cinder.csi.openstack.org` 容器存储接口(CSI)驱动程序。 -为了使用此功能,必须在集群中安装 +为了使用此特性,必须在集群中安装 [OpenStack Cinder CSI 驱动程序](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md), 你可以通过设置 `CSIMigrationOpenStack` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 为 `false` 来禁止 Cinder CSI 迁移。 -如果你禁用了 `CSIMigrationOpenStack` 功能特性,则树内的 Cinder 卷插件 -会负责 Cinder 卷存储管理的方方面面。 + + +要禁止控制器管理器和 kubelet 加载树内 Cinder 插件,你可以启用 +`InTreePluginOpenStackUnregister` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 ### configMap @@ -445,7 +468,7 @@ provides a way to inject configuration data into Pods. The data stored in a ConfigMap object can be referenced in a volume of type `configMap` and then consumed by containerized applications running in a Pod. --> -[`configMap`](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) +[`configMap`](/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/) 卷提供了向 Pod 注入配置数据的方法。 ConfigMap 对象中存储的数据可以被 `configMap` 类型的卷引用,然后被 Pod 中运行的容器化应用使用。 @@ -501,7 +524,7 @@ keyed with `log_level`. * Text data is exposed as files using the UTF-8 character encoding. For other character encodings, use `binaryData`. --> {{< note >}} -* 在使用 [ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) 之前你首先要创建它。 +* 在使用 [ConfigMap](/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/) 之前你首先要创建它。 * 容器以 [subPath](#using-subpath) 卷挂载方式使用 ConfigMap 时,将无法接收 ConfigMap 的更新。 * 文本数据挂载成文件时采用 UTF-8 字符编码。如果使用其他字符编码形式,可使用 `binaryData` 字段。 @@ -527,7 +550,7 @@ receive Downward API updates. -更多详细信息请参考 [`downwardAPI` 卷示例](/zh/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/)。 +更多详细信息请参考 [`downwardAPI` 卷示例](/zh-cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/)。 ### emptyDir @@ -589,7 +612,7 @@ backed volumes are sized to 50% of the memory on a Linux host. 
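The memory-backed variant mentioned above looks like the sketch below; the size limit is an arbitrary example value, and with the `SizeMemoryBackedVolumes` feature gate enabled it also caps the size of the tmpfs mount.

```yaml
# Sketch: an emptyDir backed by memory (tmpfs) with an explicit size limit.
apiVersion: v1
kind: Pod
metadata:
  name: memory-emptydir-demo
spec:
  containers:
    - name: app
      image: k8s.gcr.io/pause:3.6   # placeholder image
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir:
        medium: Memory              # store the volume contents in RAM
        sizeLimit: 500Mi            # illustrative limit
```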
--> {{< note >}} -当启用 `SizeMemoryBackedVolumes` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +当启用 `SizeMemoryBackedVolumes` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 时,你可以为基于内存提供的卷指定大小。 如果未指定大小,则基于内存的卷的大小为 Linux 主机上内存的 50%。 {{< /note>}} @@ -783,8 +806,8 @@ within the same region. In order to use this feature, the volume must be provisi as a PersistentVolume; referencing the volume directly from a Pod is not supported. --> [区域持久盘](https://cloud.google.com/compute/docs/disks/#repds) -功能允许你创建能在同一区域的两个可用区中使用的持久盘。 -要使用这个功能,必须以持久卷(PersistentVolume)的方式提供卷;直接从 +特性允许你创建能在同一区域的两个可用区中使用的持久盘。 +要使用这个特性,必须以持久卷(PersistentVolume)的方式提供卷;直接从 Pod 引用这种卷是不可以的。 #### 手动供应基于区域 PD 的 PersistentVolume {#manually-provisioning-regional-pd-pv} -使用[为 GCE PD 定义的存储类](/zh/docs/concepts/storage/storage-classes/#gce) +使用[为 GCE PD 定义的存储类](/zh-cn/docs/concepts/storage/storage-classes/#gce) 可以实现动态供应。在创建 PersistentVolume 之前,你首先要创建 PD。 ```shell @@ -851,11 +874,11 @@ Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-drive must be installed on the cluster and the `CSIMigration` and `CSIMigrationGCE` beta features must be enabled. --> -启用 GCE PD 的 `CSIMigration` 功能后,所有插件操作将从现有的树内插件重定向到 +启用 GCE PD 的 `CSIMigration` 特性后,所有插件操作将从现有的树内插件重定向到 `pd.csi.storage.gke.io` 容器存储接口( CSI )驱动程序。 -为了使用此功能,必须在集群中上安装 +为了使用此特性,必须在集群中上安装 [GCE PD CSI驱动程序](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver), -并且 `CSIMigration` 和 `CSIMigrationGCE` Beta 功能必须被启用。 +并且 `CSIMigration` 和 `CSIMigrationGCE` Beta 特性必须被启用。 使用 `local` 卷时,建议创建一个 StorageClass 并将其 `volumeBindingMode` 设置为 `WaitForFirstConsumer`。要了解更多详细信息,请参考 -[local StorageClass 示例](/zh/docs/concepts/storage/storage-classes/#local)。 +[local StorageClass 示例](/zh-cn/docs/concepts/storage/storage-classes/#local)。 延迟卷绑定的操作可以确保 Kubernetes 在为 PersistentVolumeClaim 作出绑定决策时,会评估 Pod 可能具有的其他节点约束,例如:如节点资源需求、节点选择器、Pod亲和性和 Pod 反亲和性。 @@ -1301,7 +1324,7 @@ A `persistentVolumeClaim` volume is used to mount a are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment. --> -`persistentVolumeClaim` 卷用来将[持久卷](/zh/docs/concepts/storage/persistent-volumes/)(PersistentVolume)挂载到 Pod 中。 +`persistentVolumeClaim` 卷用来将[持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)(PersistentVolume)挂载到 Pod 中。 持久卷申领(PersistentVolumeClaim)是用户在不知道特定云环境细节的情况下“申领”持久存储(例如 GCE PersistentDisk 或者 iSCSI 卷)的一种方法。 @@ -1309,7 +1332,7 @@ GCE PersistentDisk 或者 iSCSI 卷)的一种方法。 See the [PersistentVolumes example](/docs/concepts/storage/persistent-volumes/) for more details. --> -更多详情请参考[持久卷示例](/zh/docs/concepts/storage/persistent-volumes/)。 +更多详情请参考[持久卷示例](/zh-cn/docs/concepts/storage/persistent-volumes/)。 ### portworxVolume {#portworxvolume} @@ -1372,7 +1395,7 @@ For more details, see the [Portworx volume](https://github.com/kubernetes/exampl A projected volume maps several existing volume sources into the same directory. For more details, see [projected volumes](/docs/concepts/storage/projected-volumes/). --> -投射卷能将若干现有的卷来源映射到同一目录上。更多详情请参考[投射卷](/zh/docs/concepts/storage/projected-volumes/)。 +投射卷能将若干现有的卷来源映射到同一目录上。更多详情请参考[投射卷](/zh-cn/docs/concepts/storage/projected-volumes/)。 ### quobyte (已弃用) {#quobyte} @@ -1456,11 +1479,11 @@ must be installed on the cluster and the `CSIMigration` and `csiMigrationRBD` [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) must be enabled. 
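Returning to the `persistentVolumeClaim` volume type covered earlier in this hunk, a Pod consumes an existing claim as sketched below; the claim and image names are placeholders.

```yaml
# Sketch: mounting an existing PersistentVolumeClaim into a Pod.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer-demo
spec:
  containers:
    - name: app
      image: k8s.gcr.io/pause:3.6   # placeholder image
      volumeMounts:
        - mountPath: /var/lib/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim         # existing PVC in the same namespace
```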
--> -启用 RBD 的 `CSIMigration` 功能后,所有插件操作从现有的树内插件重定向到 +启用 RBD 的 `CSIMigration` 特性后,所有插件操作从现有的树内插件重定向到 `rbd.csi.ceph.com` {{}} 驱动程序。 -要使用该功能,必须在集群内安装 +要使用该特性,必须在集群内安装 [Ceph CSI 驱动](https://github.com/ceph/ceph-csi),并启用 `CSIMigration` 和 `csiMigrationRBD` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 `secret` 卷用来给 Pod 传递敏感信息,例如密码。你可以将 Secret 存储在 Kubernetes -API 服务器上,然后以文件的形式挂在到 Pod 中,无需直接与 Kubernetes 耦合。 +API 服务器上,然后以文件的形式挂载到 Pod 中,无需直接与 Kubernetes 耦合。 `secret` 卷由 tmpfs(基于 RAM 的文件系统)提供存储,因此它们永远不会被写入非易失性(持久化的)存储器。 -更多详情请参考[配置 Secrets](/zh/docs/concepts/configuration/secret/)。 +更多详情请参考[配置 Secrets](/zh-cn/docs/concepts/configuration/secret/)。 ### storageOS (已弃用) {#storageos} @@ -1601,15 +1624,13 @@ For more information about StorageOS, dynamic provisioning, and PersistentVolume 关于 StorageOS 的进一步信息、动态供应和持久卷申领等等,请参考 [StorageOS 示例](https://github.com/kubernetes/examples/blob/master/volumes/storageos)。 -### vsphereVolume {#vspherevolume} +### vsphereVolume(弃用) {#vspherevolume} {{< note >}} -你必须配置 Kubernetes 的 vSphere 云驱动。云驱动的配置方法请参考 -[vSphere 使用指南](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)。 +建议你改用 vSphere CSI 树外驱动程序。 {{< /note >}} -{{< caution >}} -在挂载到 Pod 之前,你必须用下列方式之一创建 VMDK。 -{{< /caution >}} - - -#### 创建 VMDK 卷 {#creating-vmdk-volume} - -选择下列方式之一创建 VMDK。 - -{{< tabs name="tabs_volumes" >}} -{{% tab name="使用 vmkfstools 创建" %}} - -首先 ssh 到 ESX,然后使用下面的命令来创建 VMDK: - -```shell -vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk -``` -{{% /tab %}} -{{% tab name="使用 vmware-vdiskmanager 创建" %}} - -使用下面的命令创建 VMDK: - -```shell -vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk -``` -{{% /tab %}} - -{{< /tabs >}} - - - -#### vSphere VMDK 配置示例 {#vsphere-vmdk-configuration} - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: test-vmdk -spec: - containers: - - image: k8s.gcr.io/test-webserver - name: test-container - volumeMounts: - - mountPath: /test-vmdk - name: test-volume - volumes: - - name: test-volume - # 此 VMDK 卷必须已经存在 - vsphereVolume: - volumePath: "[DatastoreName] volumes/myDisk" - fsType: ext4 -``` - @@ -1707,13 +1666,29 @@ must be installed on the cluster and the `CSIMigration` and `CSIMigrationvSphere 为了使用此功能特性,必须在集群中安装 [vSphere CSI 驱动](https://github.com/kubernetes-sigs/vsphere-csi-driver),并启用 `CSIMigration` 和 `CSIMigrationvSphere` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 +你可以在 VMware 的文档页面 +[迁移树内 vSphere 卷插件到 vSphere 容器存储插件](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-968D421F-D464-4E22-8127-6CB9FF54423F.html) +中找到有关如何迁移的其他建议。 + -此特性还要求 vSphere vCenter/ESXi 的版本至少为 7.0u1,且 HW 版本至少为 -VM version 15。 +为了迁移到树外 CSI 驱动程序,Kubernetes v{{< skew currentVersion >}} +要求你使用 vSphere 7.0u2 或更高版本。 +如果你正在运行 v{{< skew currentVersion >}} 以外的 Kubernetes 版本, +请查阅该 Kubernetes 版本的文档。 +如果你正在运行 Kubernetes v{{< skew currentVersion >}} 和旧版本的 vSphere, +请考虑至少升级到 vSphere 7.0u2。 {{< note >}} -Kubernetes 1.23 中加入了 Portworx 的 `CSIMigration` 功能,但默认不会启用,因为该功能仍处于 alpha 阶段。 -该功能会将所有的插件操作从现有的树内插件重定向到 +Kubernetes 1.23 中加入了 Portworx 的 `CSIMigration` 特性,但默认不会启用,因为该特性仍处于 alpha 阶段。 +该特性会将所有的插件操作从现有的树内插件重定向到 `pxd.portworx.com` 容器存储接口(Container Storage Interface, CSI)驱动程序。 集群中必须安装 [Portworx CSI 驱动](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/csi/)。 
-要启用此功能,请在 kube-controller-manager 和 kubelet 中设置 `CSIMigrationPortworx=true`。 +要启用此特性,请在 kube-controller-manager 和 kubelet 中设置 `CSIMigrationPortworx=true`。 要了解如何使用资源规约来请求空间,可参考 -[如何管理资源](/zh/docs/concepts/configuration/manage-resources-containers/)。 +[如何管理资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/)。 你可以和以前一样,安装自己的 -[带有原始块卷支持的 PV/PVC](/zh/docs/concepts/storage/persistent-volumes/#raw-block-volume-support), +[带有原始块卷支持的 PV/PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#raw-block-volume-support), 采用 CSI 对此过程没有影响。 你可以直接在 Pod 规约中配置 CSI 卷。采用这种方式配置的卷都是临时卷, 无法在 Pod 重新启动后继续存在。 -进一步的信息可参阅[临时卷](/zh/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volume)。 +进一步的信息可参阅[临时卷](/zh-cn/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volume)。 有关如何开发 CSI 驱动的更多信息,请参考 [kubernetes-csi 文档](https://kubernetes-csi.github.io/docs/)。 + +#### Windows CSI 代理 {#windows-csi-proxy} + +{{< feature-state for_k8s_version="v1.22" state="stable" >}} + + +CSI 节点插件需要执行多种特权操作,例如扫描磁盘设备和挂载文件系统等。 +这些操作在每个宿主操作系统上都是不同的。对于 Linux 工作节点而言,容器化的 CSI +节点插件通常部署为特权容器。对于 Windows 工作节点而言,容器化 CSI +节点插件的特权操作是通过 [csi-proxy](https://github.com/kubernetes-csi/csi-proxy) +来支持的。csi-proxy 是一个由社区管理的、独立的可执行二进制文件, +需要被预安装到每个 Windows 节点上。 + +要了解更多的细节,可以参考你要部署的 CSI 插件的部署指南。 + @@ -2170,19 +2171,30 @@ configuration changes to existing Storage Classes, PersistentVolumes or Persiste The operations and features that are supported include: provisioning/delete, attach/detach, mount/unmount and resizing of volumes. - -In-tree plugins that support `CSIMigration` and have a corresponding CSI driver implemented -are listed in [Types of Volumes](#volume-types). --> -启用 `CSIMigration` 功能后,针对现有树内插件的操作会被重定向到相应的 CSI 插件(应已安装和配置)。 +启用 `CSIMigration` 特性后,针对现有树内插件的操作会被重定向到相应的 CSI 插件(应已安装和配置)。 因此,操作员在过渡到取代树内插件的 CSI 驱动时,无需对现有存储类、PV 或 PVC(指树内插件)进行任何配置更改。 -所支持的操作和功能包括:配备(Provisioning)/删除、挂接(Attach)/解挂(Detach)、 +所支持的操作和特性包括:配备(Provisioning)/删除、挂接(Attach)/解挂(Detach)、 挂载(Mount)/卸载(Unmount)和调整卷大小。 + 上面的[卷类型](#volume-types)节列出了支持 `CSIMigration` 并已实现相应 CSI 驱动程序的树内插件。 +下面是支持 Windows 节点上持久性存储的树内插件: + +* [`awsElasticBlockStore`](#awselasticblockstore) +* [`azureDisk`](#azuredisk) +* [`azureFile`](#azurefile) +* [`gcePersistentDisk`](#gcepersistentdisk) +* [`vsphereVolume`](#vspherevolume) + ### flexVolume {{< feature-state for_k8s_version="v1.23" state="deprecated" >}} @@ -2202,14 +2214,24 @@ FlexVolume 是一个使用基于 exec 的模型来与驱动程序对接的树外 Pod 通过 `flexvolume` 树内插件与 FlexVolume 驱动程序交互。 更多详情请参考 FlexVolume [README](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md#readme) 文档。 + +下面的 FlexVolume [插件](https://github.com/Microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows) +以 PowerShell 脚本的形式部署在宿主系统上,支持 Windows 节点: + +* [SMB](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~smb.cmd) +* [iSCSI](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~iscsi.cmd) + +{{< note >}} -{{< note >}} -FlexVolume 已弃用。推荐使用树外 CSI 驱动来将外部存储整合进 Kubernetes。 +FlexVolume 已被弃用。推荐使用树外 CSI 驱动来将外部存储整合进 Kubernetes。 FlexVolume 驱动的维护者应开发一个 CSI 驱动并帮助用户从 FlexVolume 驱动迁移到 CSI。 FlexVolume 用户应迁移工作负载以使用对等的 CSI 驱动。 @@ -2334,10 +2356,8 @@ sudo systemctl restart docker ## {{% heading "whatsnext" %}} - - -参考[使用持久卷部署 WordPress 和 MySQL](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) 示例。 +参考[使用持久卷部署 WordPress 和 
MySQL](/zh-cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) 示例。 diff --git a/content/zh-cn/docs/concepts/storage/windows-storage.md b/content/zh-cn/docs/concepts/storage/windows-storage.md new file mode 100644 index 0000000000000..697aa224775f4 --- /dev/null +++ b/content/zh-cn/docs/concepts/storage/windows-storage.md @@ -0,0 +1,132 @@ +--- +title: Windows 存储 +content_type: concept +--- + + + + +此页面提供特定于 Windows 操作系统的存储概述。 + + + +## 持久存储 {#storage} +Windows 有一个分层文件系统驱动程序用来挂载容器层和创建基于 NTFS 的文件系统拷贝。 +容器中的所有文件路径仅在该容器的上下文中解析。 + + +* 使用 Docker 时,卷挂载只能是容器中的目录,而不能是单个文件。此限制不适用于 containerd。 +* 卷挂载不能将文件或目录映射回宿主文件系统。 +* 不支持只读文件系统,因为 Windows 注册表和 SAM 数据库始终需要写访问权限。不过,Windows 支持只读的卷。 +* 不支持卷的用户掩码和访问许可,因为宿主与容器之间并不共享 SAM,二者之间不存在映射关系。 + 所有访问许可都是在容器上下文中解析的。 + + +因此,Windows 节点不支持以下存储功能: + + +* 卷子路径挂载:只能在 Windows 容器上挂载整个卷 +* Secret 的子路径挂载 +* 宿主挂载映射 +* 只读的根文件系统(映射的卷仍然支持 `readOnly`) +* 块设备映射 +* 内存作为存储介质(例如 `emptyDir.medium` 设置为 `Memory`) +* 类似 UID/GID、各用户不同的 Linux 文件系统访问许可等文件系统特性 +* 使用 [DefaultMode 设置 Secret 权限](/zh-cn/docs/concepts/configuration/secret/#secret-files-permissions) + (因为该特性依赖 UID/GID) +* 基于 NFS 的存储和卷支持 +* 扩展已挂载卷(resizefs) + + +使用 Kubernetes {{< glossary_tooltip text="卷" term_id="volume" >}}, +对数据持久性和 Pod 卷共享有需求的复杂应用也可以部署到 Kubernetes 上。 +管理与特定存储后端或协议相关的持久卷时,相关的操作包括:对卷的制备(Provisioning)、 +去配(De-provisioning)和调整大小,将卷挂接到 Kubernetes 节点或从节点上解除挂接, +将卷挂载到需要持久数据的 Pod 中的某容器上或从容器上卸载。 + + +卷管理组件作为 Kubernetes 卷[插件](/zh-cn/docs/concepts/storage/volumes/#types-of-volumes)发布。 +Windows 支持以下类型的 Kubernetes 卷插件: + + +* [`FlexVolume plugins`](/zh-cn/docs/concepts/storage/volumes/#flexVolume) + * 请注意自 1.23 版本起,FlexVolume 已被弃用 +* [`CSI Plugins`](/zh-cn/docs/concepts/storage/volumes/#csi) + + +##### 树内(In-Tree)卷插件 {#in-tree-volume-plugins} + +以下树内(In-Tree)插件支持 Windows 节点上的持久存储: + + +* [`awsElasticBlockStore`](/zh-cn/docs/concepts/storage/volumes/#awselasticblockstore) +* [`azureDisk`](/zh-cn/docs/concepts/storage/volumes/#azuredisk) +* [`azureFile`](/zh-cn/docs/concepts/storage/volumes/#azurefile) +* [`gcePersistentDisk`](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) +* [`vsphereVolume`](/zh-cn/docs/concepts/storage/volumes/#vspherevolume) \ No newline at end of file diff --git a/content/zh-cn/docs/concepts/windows/_index.md b/content/zh-cn/docs/concepts/windows/_index.md new file mode 100644 index 0000000000000..a78ddcfe75f02 --- /dev/null +++ b/content/zh-cn/docs/concepts/windows/_index.md @@ -0,0 +1,8 @@ +--- +title: "Kubernetes 中的 Windows" +weight: 50 +--- + diff --git a/content/zh-cn/docs/concepts/windows/intro.md b/content/zh-cn/docs/concepts/windows/intro.md new file mode 100644 index 0000000000000..1ea44cfc3d9be --- /dev/null +++ b/content/zh-cn/docs/concepts/windows/intro.md @@ -0,0 +1,721 @@ +--- +title: Kubernetes 中的 Windows 容器 +content_type: concept +weight: 65 +--- + + + + +在许多组织中,所运行的很大一部分服务和应用是 Windows 应用。 +[Windows 容器](https://aka.ms/windowscontainers)提供了一种封装进程和包依赖项的方式, +从而简化了 DevOps 实践,令 Windows 应用程序同样遵从云原生模式。 + +对于同时投入基于 Windows 应用和 Linux 应用的组织而言,他们不必寻找不同的编排系统来管理其工作负载, +使其跨部署的运营效率得以大幅提升,而不必关心所用的操作系统。 + + + + +## Kubernetes 中的 Windows 节点 {#windows-nodes-in-k8s} + +若要在 Kubernetes 中启用对 Windows 容器的编排,可以在现有的 Linux 集群中包含 Windows 节点。 +在 Kubernetes 上调度 {{< glossary_tooltip text="Pod" term_id="pod" >}} 中的 Windows 容器与调度基于 Linux 的容器类似。 + +为了运行 Windows 容器,你的 Kubernetes 集群必须包含多个操作系统。 +尽管你只能在 Linux 上运行{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}, +你可以部署运行 Windows 或 Linux 的工作节点。 + + +支持 Windows {{< glossary_tooltip text="节点" term_id="node" >}}的前提是操作系统为 Windows Server 
2019。 + +本文使用术语 **Windows 容器**表示具有进程隔离能力的 Windows 容器。 +Kubernetes 不支持使用 +[Hyper-V 隔离能力](https://docs.microsoft.com/zh-cn/virtualization/windowscontainers/manage-containers/hyperv-container)来运行 +Windows 容器。 + + +## 兼容性与局限性 {#limitations} + +某些节点层面的功能特性仅在使用特定[容器运行时](#container-runtime)时才可用; +另外一些特性则在 Windows 节点上不可用,包括: + +* 巨页(HugePages):Windows 容器当前不支持。 +* 特权容器:Windows 容器当前不支持。 + [HostProcess 容器](/zh-cn/docs/tasks/configure-pod-container/create-hostprocess-pod/)提供类似功能。 +* TerminationGracePeriod:需要 containerD。 + + +Windows 节点并不支持共享命名空间的所有功能特性。 +有关更多详细信息,请参考 [API 兼容性](#api)。 + +有关 Kubernetes 测试时所使用的 Windows 版本的详细信息,请参考 [Windows 操作系统版本兼容性](#windows-os-version-support)。 + +从 API 和 kubectl 的角度来看,Windows 容器的行为与基于 Linux 的容器非常相似。 +然而,在本节所概述的一些关键功能上,二者存在一些显著差异。 + + +### 与 Linux 比较 {#comparison-with-Linux-similarities} + +Kubernetes 关键组件在 Windows 上的工作方式与在 Linux 上相同。 +本节介绍几个关键的工作负载抽象及其如何映射到 Windows。 + + +* [Pod](/zh-cn/docs/concepts/workloads/pods/) + + Pod 是 Kubernetes 的基本构建块,是可以创建或部署的最小和最简单的单元。 + 你不可以在同一个 Pod 中部署 Windows 和 Linux 容器。 + Pod 中的所有容器都调度到同一 Node 上,每个 Node 代表一个特定的平台和体系结构。 + Windows 容器支持以下 Pod 能力、属性和事件: + + * 每个 Pod 有一个或多个容器,具有进程隔离和卷共享能力 + * Pod `status` 字段 + * 就绪、存活和启动探针 + * postStart 和 preStop 容器生命周期回调 + * ConfigMap 和 Secret:作为环境变量或卷 + * `emptyDir` 卷 + * 命名管道形式的主机挂载 + * 资源限制 + * 操作系统字段: + + `.spec.os.name` 字段应设置为 `windows` 以表明当前 Pod 使用 Windows 容器。 + 需要启用 `IdentifyPodOS` 特性门控才能让这个字段被识别。 + + {{< note >}} + 从 1.24 开始,`IdentifyPodOS` 特性门控进入 Beta 阶段,默认启用。 + {{< /note >}} + + 如果 `IdentifyPodOS` 特性门控已启用并且你将 `.spec.os.name` 字段设置为 `windows`, + 则你不得在对应 Pod 的 `.spec` 中设置以下字段: + + * `spec.hostPID` + * `spec.hostIPC` + * `spec.securityContext.seLinuxOptions` + * `spec.securityContext.seccompProfile` + * `spec.securityContext.fsGroup` + * `spec.securityContext.fsGroupChangePolicy` + * `spec.securityContext.sysctls` + * `spec.shareProcessNamespace` + * `spec.securityContext.runAsUser` + * `spec.securityContext.runAsGroup` + * `spec.securityContext.supplementalGroups` + * `spec.containers[*].securityContext.seLinuxOptions` + * `spec.containers[*].securityContext.seccompProfile` + * `spec.containers[*].securityContext.capabilities` + * `spec.containers[*].securityContext.readOnlyRootFilesystem` + * `spec.containers[*].securityContext.privileged` + * `spec.containers[*].securityContext.allowPrivilegeEscalation` + * `spec.containers[*].securityContext.procMount` + * `spec.containers[*].securityContext.runAsUser` + * `spec.containers[*].securityContext.runAsGroup` + + 在上述列表中,通配符(`*`)表示列表中的所有项。 + 例如,`spec.containers[*].securityContext` 指代所有容器的 SecurityContext 对象。 + 如果指定了这些字段中的任意一个,则 API 服务器不会接受此 Pod。 + + +* [工作负载资源](/zh-cn/docs/concepts/workloads/controllers/)包括: + + * ReplicaSet + * Deployment + * StatefulSet + * DaemonSet + * Job + * CronJob + * ReplicationController + +* {{< glossary_tooltip text="Services" term_id="service" >}} + + 有关更多详细信息,请参考[负载均衡和 Service](#load-balancing-and-services)。 + + +Pod、工作负载资源和 Service 是在 Kubernetes 上管理 Windows 工作负载的关键元素。 +然而,它们本身还不足以在动态的云原生环境中对 Windows 工作负载进行恰当的生命周期管理。 +Kubernetes 还支持: + +* `kubectl exec` +* Pod 和容器度量指标 +* {{< glossary_tooltip text="Pod 水平自动扩缩容" term_id="horizontal-pod-autoscaler" >}} +* {{< glossary_tooltip text="资源配额" term_id="resource-quota" >}} +* 调度器抢占 + + +### kubelet 的命令行选项 {#kubelet-compatibility} + +某些 kubelet 命令行选项在 Windows 上的行为不同,如下所述: + + +* `--windows-priorityclass` 允许你设置 kubelet 进程的调度优先级 + (参考 [CPU 资源管理](/zh-cn/docs/concepts/configuration/windows-resource-management/#resource-management-cpu))。 +* `--kubelet-reserve`、`--system-reserve` 和 
`--eviction-hard` 标志更新 + [NodeAllocatable](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)。 +* 未实现使用 `--enforce-node-allocable` 驱逐。 +* 未实现使用 `--eviction-hard` 和 `--eviction-soft` 驱逐。 +* 在 Windows 节点上运行时,kubelet 没有内存或 CPU 限制。 + `--kube-reserved` 和 `--system-reserved` 仅从 `NodeAllocatable` 中减去,并且不保证为工作负载提供的资源。 + 有关更多信息,请参考 [Windows 节点的资源管理](/zh-cn/docs/concepts/configuration/windows-resource-management/#resource-reservation)。 +* 未实现 `MemoryPressure` 条件。 +* kubelet 不会执行 OOM 驱逐操作。 + + +### API 兼容性 {#api} + +由于操作系统和容器运行时的缘故,Kubernetes API 在 Windows 上的工作方式存在细微差异。 +某些工作负载属性是为 Linux 设计的,无法在 Windows 上运行。 + +从较高的层面来看,以下操作系统概念是不同的: + + +* 身份 - Linux 使用 userID(UID)和 groupID(GID),表示为整数类型。 + 用户名和组名是不规范的,它们只是 `/etc/groups` 或 `/etc/passwd` 中的别名, + 作为 UID+GID 的后备标识。 + Windows 使用更大的二进制[安全标识符](https://docs.microsoft.com/zh-cn/windows/security/identity-protection/access-control/security-identifiers)(SID), + 存放在 Windows 安全访问管理器(Security Access Manager,SAM)数据库中。 + 此数据库在主机和容器之间或容器之间不共享。 +* 文件权限 - Windows 使用基于 SID 的访问控制列表, + 而像 Linux 使用基于对象权限和 UID+GID 的位掩码(POSIX 系统)以及**可选的**访问控制列表。 +* 文件路径 - Windows 上的约定是使用 `\` 而不是 `/`。 + Go IO 库通常接受两者,能让其正常工作,但当你设置要在容器内解读的路径或命令行时, + 可能需要用 `\`。 + + +* 信号 - Windows 交互式应用处理终止的方式不同,可以实现以下一种或多种: + * UI 线程处理包括 `WM_CLOSE` 在内准确定义的消息。 + * 控制台应用使用控制处理程序(Control Handler)处理 Ctrl-C 或 Ctrl-Break。 + * 服务会注册可接受 `SERVICE_CONTROL_STOP` 控制码的服务控制处理程序(Service Control Handler)函数。 + +容器退出码遵循相同的约定,其中 0 表示成功,非零表示失败。 +具体的错误码在 Windows 和 Linux 中可能不同。 +但是,从 Kubernetes 组件(kubelet、kube-proxy)传递的退出码保持不变。 + + +##### 容器规范的字段兼容性 {#compatibility-v1-pod-spec-containers} + +以下列表记录了 Pod 容器规范在 Windows 和 Linux 之间的工作方式差异: + +* 巨页(Huge page)在 Windows 容器运行时中未实现,且不可用。 + 巨页需要不可为容器配置的[用户特权生效](https://docs.microsoft.com/zh-cn/windows/win32/memory/large-page-support)。 +* `requests.cpu` 和 `requests.memory` - + 从节点可用资源中减去请求,因此请求可用于避免一个节点过量供应。 + 但是,请求不能用于保证已过量供应的节点中的资源。 + 如果运营商想要完全避免过量供应,则应将设置请求作为最佳实践应用到所有容器。 + +* `securityContext.allowPrivilegeEscalation` - + 不能在 Windows 上使用;所有权能字都无法生效。 +* `securityContext.capabilities` - POSIX 权能未在 Windows 上实现。 +* `securityContext.privileged` - Windows 不支持特权容器。 +* `securityContext.procMount` - Windows 没有 `/proc` 文件系统。 +* `securityContext.readOnlyRootFilesystem` - + 不能在 Windows 上使用;对于容器内运行的注册表和系统进程,写入权限是必需的。 +* `securityContext.runAsGroup` - 不能在 Windows 上使用,因为不支持 GID。 + +* `securityContext.runAsNonRoot` - + 此设置将阻止以 `ContainerAdministrator` 身份运行容器,这是 Windows 上与 root 用户最接近的身份。 +* `securityContext.runAsUser` - 改用 [`runAsUserName`](/zh-cn/docs/tasks/configure-pod-container/configure-runasusername)。 +* `securityContext.seLinuxOptions` - 不能在 Windows 上使用,因为 SELinux 特定于 Linux。 +* `terminationMessagePath` - 这个字段有一些限制,因为 Windows 不支持映射单个文件。 + 默认值为 `/dev/termination-log`,因为默认情况下它在 Windows 上不存在,所以能生效。 + + +##### Pod 规范的字段兼容性 {#compatibility-v1-pod} + +以下列表记录了 Pod 规范在 Windows 和 Linux 之间的工作方式差异: + +* `hostIPC` 和 `hostpid` - 不能在 Windows 上共享主机命名空间。 +* `hostNetwork` - Windows 操作系统不支持共享主机网络。 +* `dnsPolicy` - Windows 不支持将 Pod `dnsPolicy` 设为 `ClusterFirstWithHostNet`, + 因为未提供主机网络。Pod 始终用容器网络运行。 +* `podSecurityContext`(参见下文) +* `shareProcessNamespace` - 这是一个 beta 版功能特性,依赖于 Windows 上未实现的 Linux 命名空间。 + Windows 无法共享进程命名空间或容器的根文件系统(root filesystem)。 + 只能共享网络。 + +* `terminationGracePeriodSeconds` - 这在 Windows 上的 Docker 中没有完全实现, + 请参考 [GitHub issue](https://github.com/moby/moby/issues/25982)。 + 目前的行为是通过 CTRL_SHUTDOWN_EVENT 发送 ENTRYPOINT 进程,然后 Windows 默认等待 5 秒, + 最后使用正常的 Windows 关机行为终止所有进程。 + 5 秒默认值实际上位于[容器内](https://github.com/moby/moby/issues/25982#issuecomment-426441183)的 
Windows 注册表中, + 因此在构建容器时可以覆盖这个值。 +* `volumeDevices` - 这是一个 beta 版功能特性,未在 Windows 上实现。 + Windows 无法将原始块设备挂接到 Pod。 +* `volumes` + * 如果你定义一个 `emptyDir` 卷,则你无法将卷源设为 `memory`。 +* 你无法为卷挂载启用 `mountPropagation`,因为这在 Windows 上不支持。 + + +##### Pod 安全上下文的字段兼容性 {#compatibility-v1-pod-spec-containers-securitycontext} + +Pod 的所有 [`securityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) +字段都无法在 Windows 上生效。 + + +## 节点问题检测器 {#node-problem-detector} + +节点问题检测器(参考[节点健康监测](/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health/))初步支持 Windows。 +有关更多信息,请访问该项目的 [GitHub 页面](https://github.com/kubernetes/node-problem-detector#windows)。 + + +### Pause 容器 {#pause-container} + +在 Kubernetes Pod 中,首先创建一个基础容器或 “pause” 容器来承载容器。 +在 Linux 中,构成 Pod 的 cgroup 和命名空间维持持续存在需要一个进程; +而 pause 进程就提供了这个功能。 +属于同一 Pod 的容器(包括基础容器和工作容器)共享一个公共网络端点 +(相同的 IPv4 和/或 IPv6 地址,相同的网络端口空间)。 +Kubernetes 使用 pause 容器以允许工作容器崩溃或重启,而不会丢失任何网络配置。 + + +Kubernetes 维护一个多体系结构的镜像,包括对 Windows 的支持。 +对于 Kubernetes v{{< skew currentVersion >}},推荐的 pause 镜像为 `k8s.gcr.io/pause:3.6`。 +可在 GitHub 上获得[源代码](https://github.com/kubernetes/kubernetes/tree/master/build/pause)。 + +Microsoft 维护一个不同的多体系结构镜像,支持 Linux 和 Windows amd64, +你可以找到的镜像类似 `mcr.microsoft.com/oss/kubernetes/pause:3.6`。 +此镜像的构建与 Kubernetes 维护的镜像同源,但所有 Windows 可执行文件均由 +Microsoft 进行了[验证码签名](https://docs.microsoft.com/zh-cn/windows-hardware/drivers/install/authenticode)。 +如果你正部署到一个需要签名可执行文件的生产或类生产环境, +Kubernetes 项目建议使用 Microsoft 维护的镜像。 + + +### 容器运行时 {#container-runtime} + +你需要将{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}安装到集群中的每个节点, +这样 Pod 才能在这些节点上运行。 + +以下容器运行时适用于 Windows: + +{{% thirdparty-content %}} + + +#### ContainerD {#containerd} + +{{< feature-state for_k8s_version="v1.20" state="stable" >}} + +对于运行 Windows 的 Kubernetes 节点,你可以使用 +{{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+ 作为容器运行时。 + +学习如何[在 Windows 上安装 ContainerD](/zh-cn/docs/setup/production-environment/container-runtimes/#install-containerd)。 + + +{{< note >}} +将 GMSA 和 containerd 一起用于访问 Windows +网络共享时存在[已知限制](/zh-cn/docs/tasks/configure-pod-container/configure-gmsa/#gmsa-limitations), +这需要一个内核补丁。 +{{< /note >}} + + +#### Mirantis 容器运行时 {#mcr} + +[Mirantis 容器运行时](https://docs.mirantis.com/mcr/20.10/overview.html)(MCR) +可作为所有 Windows Server 2019 和更高版本的容器运行时。 + +有关更多信息,请参考[在 Windows Server 上安装 MCR](https://docs.mirantis.com/mcr/20.10/install/mcr-windows.html)。 + + +## Windows 操作系统版本兼容性 {#windows-os-version-support} + +在 Windows 节点上,如果主机操作系统版本必须与容器基础镜像操作系统版本匹配, +则会应用严格的兼容性规则。 +仅 Windows Server 2019 作为容器操作系统时,才能完全支持 Windows 容器。 + +对于 Kubernetes v{{< skew currentVersion >}},Windows 节点(和 Pod)的操作系统兼容性如下: + +Windows Server LTSC release +: Windows Server 2019 +: Windows Server 2022 + +Windows Server SAC release +: Windows Server version 20H2 + + +也适用 Kubernetes [版本偏差策略](/zh-cn/releases/version-skew-policy/)。 + + +## 获取帮助和故障排查 {#troubleshooting} + +对 Kubernetes 集群进行故障排查的主要帮助来源应始于[故障排查](/zh-cn/docs/tasks/debug/)页面。 + +本节包括了一些其他特定于 Windows 的故障排查帮助。 +日志是解决 Kubernetes 中问题的重要元素。 +确保在任何时候向其他贡献者寻求故障排查协助时随附了日志信息。 +遵照 SIG Windows +[日志收集贡献指南](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs)中的指示说明。 + + +### 报告问题和功能请求 {#report-issue-and-feature-request} + +如果你发现疑似 bug,或者你想提出功能请求,请按照 +[SIG Windows 贡献指南](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#reporting-issues-and-feature-requests) +新建一个 Issue。 +你应该先搜索 issue 列表,以防之前报告过这个问题,凭你对该问题的经验添加评论, +并随附日志信息。 +Kubernetes Slack 
上的 SIG Windows 频道也是一个很好的途径, +可以在创建工单之前获得一些初始支持和故障排查思路。 + +## {{% heading "whatsnext" %}} + + +### 部署工具 {#deployment-tools} + +kubeadm 工具帮助你部署 Kubernetes 集群,提供管理集群的控制平面以及运行工作负载的节点。 +[添加 Windows 节点](/zh-cn/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/)阐述了如何使用 +kubeadm 将 Windows 节点部署到你的集群。 + +Kubernetes [集群 API](https://cluster-api.sigs.k8s.io/) 项目也提供了自动部署 Windows 节点的方式。 + + +### Windows 分发渠道 {#windows-distribution-channels} + +有关 Windows 分发渠道的详细阐述,请参考 +[Microsoft 文档](https://docs.microsoft.com/zh-cn/windows-server/get-started-19/servicing-channels-19)。 + +有关支持模型在内的不同 Windows Server 服务渠道的信息,请参考 +[Windows Server 服务渠道](https://docs.microsoft.com/zh-cn/windows-server/get-started/servicing-channels-comparison)。 diff --git a/content/zh/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/zh-cn/docs/concepts/windows/user-guide.md similarity index 50% rename from content/zh/docs/setup/production-environment/windows/user-guide-windows-containers.md rename to content/zh-cn/docs/concepts/windows/user-guide.md index f13822c0bf7d6..49d89b946193d 100644 --- a/content/zh/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/zh-cn/docs/concepts/windows/user-guide.md @@ -1,14 +1,13 @@ --- -title: Kubernetes 中 Windows 容器的调度指南 +title: Kubernetes 中的 Windows 容器调度指南 content_type: concept weight: 75 --- - - -Windows 应用程序构成了许多组织中运行的服务和应用程序的很大一部分。 -本指南将引导您完成在 Kubernetes 中配置和部署 Windows 容器的步骤。 +在许多组织中运行的服务和应用程序中,Windows 应用程序构成了很大一部分。 +本指南将引导你完成在 Kubernetes 中配置和部署 Windows 容器的步骤。 - -## 目标 +## 目标 {#objectives} -* 配置一个示例 deployment 以在 Windows 节点上运行 Windows 容器 -* (可选)使用组托管服务帐户(GMSA)为您的 Pod 配置 Active Directory 身份 +* 配置 Deployment 样例以在 Windows 节点上运行 Windows 容器 +* 在 Kubernetes 中突出 Windows 特定的功能 - -## 在你开始之前 +## 在你开始之前 {#before-you-begin} -* 创建一个 Kubernetes 集群,其中包括一个控制平面和 - [运行 Windows 服务器的工作节点](/zh/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) -* 重要的是要注意,对于 Linux 和 Windows 容器,在 Kubernetes - 上创建和部署服务和工作负载的行为几乎相同。 - 与集群接口的 [kubectl 命令](/zh/docs/reference/kubectl/overview/)相同。 - 提供以下部分中的示例只是为了快速启动 Windows 容器的使用体验。 +* 创建一个 Kubernetes 集群,其中包含一个控制平面和一个[运行 Windows Server 的工作节点](/zh-cn/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) +* 务必请注意,在 Kubernetes 上创建和部署服务和工作负载的行为方式与 Linux 和 Windows 容器的行为方式大致相同。 + 与集群交互的 [kubectl 命令](/zh-cn/docs/reference/kubectl/)是一致的。 + 下一小节的示例旨在帮助你快速开始使用 Windows 容器。 - -## 入门:部署 Windows 容器 +## 快速开始:部署 Windows 容器 {#getting-started-deploying-a-windows-container} + +以下示例 YAML 文件部署了一个在 Windows 容器内运行的简单 Web 服务器的应用程序。 -要在 Kubernetes 上部署 Windows 容器,您必须首先创建一个示例应用程序。 -下面的示例 YAML 文件创建了一个简单的 Web 服务器应用程序。 -创建一个名为 `win-webserver.yaml` 的服务规约,其内容如下: +创建一个名为 `win-webserver.yaml` 的 Service 规约,其内容如下: ```yaml apiVersion: v1 @@ -77,7 +74,7 @@ metadata: app: win-webserver spec: ports: - # the port that this service should serve on + # 此 Service 服务的端口 - port: 80 targetPort: 80 selector: @@ -112,35 +109,43 @@ spec: kubernetes.io/os: windows ``` - {{< note >}} -端口映射也是支持的,但为简单起见,在此示例中容器端口 80 直接暴露给服务。 + +端口映射也是支持的,但为简单起见,此示例将容器的端口 80 直接暴露给服务。 {{< /note >}} - +1. 检查所有节点是否健康 - ```bash - kubectl get nodes - ``` + ```bash + kubectl get nodes + ``` + +1. 部署 Service 并监视 Pod 更新: - ```bash - kubectl apply -f win-webserver.yaml - kubectl get pods -o wide -w - ``` + ```bash + kubectl apply -f win-webserver.yaml + kubectl get pods -o wide -w + ``` - When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C. + + 当 Service 被正确部署时,两个 Pod 都被标记为就绪(Ready)。要退出 watch 命令,请按 Ctrl+C。 + -1. 
检查所有节点是否健康: - - ```bash - kubectl get nodes - ``` - -1. 部署服务并观察 pod 更新: - - ```bash - kubectl apply -f win-webserver.yaml - kubectl get pods -o wide -w - ``` - - 正确部署服务后,两个 Pod 都标记为“Ready”。要退出 watch 命令,请按 Ctrl + C。 +1. 检查部署是否成功。请验证: + + * 使用 `kubectl get pods` 从 Linux 控制平面节点能够列出两个 Pod + * 跨网络的节点到 Pod 通信,从 Linux 控制平面节点上执行 `curl` 访问 + Pod IP 的 80 端口以检查 Web 服务器响应 + * Pod 间通信,使用 docker exec 或 kubectl exec + 在 Pod 之间(以及跨主机,如果你有多个 Windows 节点)互 ping + * Service 到 Pod 的通信,在 Linux 控制平面节点以及独立的 Pod 中执行 `curl` + 访问虚拟的服务 IP(在 `kubectl get services` 下查看) + * 服务发现,使用 Kubernetes [默认 DNS 后缀](/zh-cn/docs/concepts/services-networking/dns-pod-service/#services)的服务名称, + 用 `curl` 访问服务名称 + * 入站连接,在 Linux 控制平面节点或集群外的机器上执行 `curl` 来访问 NodePort 服务 + * 出站连接,使用 kubectl exec,从 Pod 内部执行 `curl` 访问外部 IP -1. 检查部署是否成功。验证: - - * Windows 节点上每个 Pod 有两个容器,使用 `docker ps` - * Linux 控制平面节点列出两个 Pod,使用 `kubectl get pods` - * 跨网络的节点到 Pod 通信,从 Linux 控制平面节点 `curl` 您的 pod IPs 的端口80,以检查 Web 服务器响应 - * Pod 到 Pod 的通信,使用 docker exec 或 kubectl exec 在 Pod 之间 - (以及跨主机,如果你有多个 Windows 节点)进行 ping 操作 - * 服务到 Pod 的通信,从 Linux 控制平面节点和各个 Pod 中 `curl` 虚拟服务 IP - (在 `kubectl get services` 下可见) - * 服务发现,使用 Kubernetes `curl` 服务名称 - [默认 DNS 后缀](/zh/docs/concepts/services-networking/dns-pod-service/#services) - * 入站连接,从 Linux 控制平面节点或集群外部的计算机 `curl` NodePort - * 出站连接,使用 kubectl exec 从 Pod 内部 curl 外部 IP - - -{{< note >}} -由于当前平台对 Windows 网络堆栈的限制,Windows 容器主机无法访问在其上调度的服务的 IP。只有 Windows pods 才能访问服务 IP。 +由于当前 Windows 平台的网络堆栈限制,Windows 容器主机无法访问调度到其上的 Service 的 IP。 +只有 Windows Pod 能够访问 Service IP。 {{< /note >}} - -## 可观测性 {#observability} - -### 抓取来自工作负载的日志 - -日志是可观测性的重要一环;使用日志用户可以获得对负载运行状况的洞察, -因而日志是故障排查的一个重要手法。 -因为 Windows 容器中的 Windows 容器和负载与 Linux 容器的行为不同, -用户很难收集日志,因此运行状态的可见性很受限。 -例如,Windows 工作负载通常被配置为将日志输出到 Windows 事件跟踪 -(Event Tracing for Windows,ETW),或者将日志条目推送到应用的事件日志中。 -[LogMonitor](https://github.com/microsoft/windows-container-tools/tree/master/LogMonitor) -是 Microsoft 提供的一个开源工具,是监视 Windows 容器中所配置的日志源 -的推荐方式。 -LogMonitor 支持监视时间日志、ETW 提供者模块以及自定义的应用日志, -并使用管道的方式将其输出到标准输出(stdout),以便 `kubectl logs ` -这类命令能够读取这些数据。 - -请遵照 LogMonitor GitHub 页面上的指令,将其可执行文件和配置文件复制到 -你的所有容器中,并为其添加必要的入口点(Entrypoint),以便 LogMonitor -能够将你的日志输出推送到标准输出(stdout)。 +## 可观察性 {#observability} +### 捕捉来自工作负载的日志 {#capturing-logs-from-workloads} - -## 使用可配置的容器用户名 +## 配置容器用户 {#configuring-container-user} + +### 使用可配置的容器用户名 {#using-configurable-container-usernames} -从 Kubernetes v1.16 开始,可以为 Windows 容器配置与其镜像默认值不同的用户名 -来运行其入口点和进程。 -此能力的实现方式和 Linux 容器有些不同。 -在[此处](/zh/docs/tasks/configure-pod-container/configure-runasusername/) -可了解更多信息。 +Windows 容器可以配置为使用不同于镜像默认值的用户名来运行其入口点和进程。 +[在这里](/zh-cn/docs/tasks/configure-pod-container/configure-runasusername/)了解更多信息。 - -## 使用组托管服务帐户管理工作负载身份 +### 使用组托管服务帐户(GMSA)管理工作负载身份 {#managing-workload-identity-with-group-managed-service-accounts} -从 Kubernetes v1.14 开始,可以将 Windows 容器工作负载配置为使用组托管服务帐户(GMSA)。 -组托管服务帐户是 Active Directory 帐户的一种特定类型,它提供自动密码管理, -简化的服务主体名称(SPN)管理以及将管理委派给跨多台服务器的其他管理员的功能。 -配置了 GMSA 的容器可以访问外部 Active Directory 域资源,同时携带通过 GMSA 配置的身份。 -在[此处](/zh/docs/tasks/configure-pod-container/configure-gmsa/)了解有关为 -Windows 容器配置和使用 GMSA 的更多信息。 +Windows 容器工作负载可以配置为使用组托管服务帐户(Group Managed Service Accounts,GMSA)。 +组托管服务帐户是一种特定类型的活动目录(Active Directory)帐户,可提供自动密码管理、 +简化的服务主体名称(Service Principal Name,SPN)管理,以及将管理委派给多个服务器上的其他管理员的能力。 +配置了 GMSA 的容器可以携带使用 GMSA 配置的身份访问外部活动目录域资源。 +在[此处](/zh-cn/docs/tasks/configure-pod-container/configure-gmsa/)了解有关为 Windows 容器配置和使用 GMSA 的更多信息。 - -## 污点和容忍度 - -目前,用户需要将 Linux 和 Windows 工作负载运行在各自特定的操作系统的节点上, -因而需要结合使用污点和节点选择算符。 这可能仅给 Windows 
用户造成不便。 -推荐的方法概述如下,其主要目标之一是该方法不应破坏与现有 Linux 工作负载的兼容性。 - +## 污点和容忍度 {#taints-and-tolerations} + +用户需要使用某种污点(Taint)和节点选择器的组合,以便将 Linux 和 Windows 工作负载各自调度到特定操作系统的节点。 +下面概述了推荐的方法,其主要目标之一是该方法不应破坏现有 Linux 工作负载的兼容性。 +如果启用了 `IdentifyPodOS` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +你可以(并且应该)将 Pod 的 `.spec.os.name` 设置为该 Pod 中的容器设计所用于的操作系统。 +对于运行 Linux 容器的 Pod,将 `.spec.os.name` 设置为 `linux`。 +对于运行 Windows 容器的 Pod,将 `.spec.os.name` 设置为 `Windows`。 + +{{< note >}} + +从 1.24 开始,`IdentifyPodOS` 特性处于 Beta 阶段,默认启用。 +{{< /note >}} + + - {{< note >}} -如果 `IdentifyPodOS` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)是启用的, -你可以(并且应该)为 Pod 设置 `.spec.os.name` 以表明该 Pod -中的容器所针对的操作系统。 对于运行 Linux 容器的 Pod,设置 -`.spec.os.name` 为 `linux`。 对于运行 Windows 容器的 Pod,设置 `.spec.os.name` -为 `Windows`。 - -在将 Pod 分配给节点时,调度程序不使用 `.spec.os.name` 的值。你应该使用正常的 Kubernetes -机制[将 Pod 分配给节点](/zh/docs/concepts/scheduling-eviction/assign-pod-node/), -确保集群的控制平面将 Pod 放置到适合运行的操作系统。 -对 Windows Pod 的调度没有影响,因此仍然需要污点、容忍度以及节点选择器, -以确保 Windows Pod 调度至合适的 Windows 节点。 - {{< /note >}} - -### 确保特定操作系统的工作负载落在适当的容器主机上 +调度器在将 Pod 分配到节点时并不使用 `.spec.os.name` 的值。 +你应该使用正常的 Kubernetes 机制[将 Pod 分配给节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/), +以确保集群的控制平面将 Pod 放置到运行适当操作系统的节点上。 + +`.spec.os.name` 值对 Windows Pod 的调度没有影响, +因此仍然需要污点和容忍以及节点选择器来确保 Windows Pod 落在适当的 Windows 节点。 + + -用户可以使用污点和容忍度确保 Windows 容器可以调度在适当的主机上。目前所有 Kubernetes 节点都具有以下默认标签: +### 确保特定于操作系统的工作负载落到合适的容器主机上 {#ensuring-os-specific-workloads-land-on-the-appropriate-container-host} + +用户可以使用污点(Taint)和容忍度(Toleration)确保将 Windows 容器调度至合适的主机上。 +现在,所有的 Kubernetes 节点都有以下默认标签: * kubernetes.io/os = [windows|linux] * kubernetes.io/arch = [amd64|arm64|...] - -如果 Pod 规范未指定诸如 `"kubernetes.io/os": windows` 之类的 nodeSelector,则该 Pod -可能会被调度到任何主机(Windows 或 Linux)上。 -这是有问题的,因为 Windows 容器只能在 Windows 上运行,而 Linux 容器只能在 Linux 上运行。 +如果 Pod 规约没有指定像 `"kubernetes.io/os": windows` 这样的 nodeSelector, +则 Pod 可以被调度到任何主机上,Windows 或 Linux。 +这可能会有问题,因为 Windows 容器只能在 Windows 上运行,而 Linux 容器只能在 Linux 上运行。 最佳实践是使用 nodeSelector。 - -但是,我们了解到,在许多情况下,用户都有既存的大量的 Linux 容器部署,以及一个现成的配置生态系统, -例如社区 Helm charts,以及程序化 Pod 生成案例,例如 Operators。 -在这些情况下,您可能会不愿意更改配置添加 nodeSelector。替代方法是使用污点。 -由于 kubelet 可以在注册期间设置污点,因此可以轻松修改它,使其仅在 Windows 上运行时自动添加污点。 +但是,我们了解到,在许多情况下,用户已经预先存在大量 Linux 容器部署, +以及现成配置的生态系统,例如社区中的 Helm Chart 包和程序化的 Pod 生成案例,例如 Operator。 +在这些情况下,你可能不愿更改配置来添加节点选择器。 +另一种方法是使用污点。因为 kubelet 可以在注册过程中设置污点, +所以可以很容易地修改为,当只能在 Windows 上运行时,自动添加污点。 - -例如:`--register-with-taints='os=windows:NoSchedule'` - -向所有 Windows 节点添加污点后,Kubernetes 将不会在它们上调度任何负载(包括现有的 Linux Pod)。 -为了使某 Windows Pod 调度到 Windows 节点上,该 Pod 需要 nodeSelector 和合适的匹配的容忍度设置来选择 Windows, +例如:`--register-with-taints='os=windows:NoSchedule'` + +通过向所有 Windows 节点添加污点,任何负载都不会被调度到这些节点上(包括现有的 Linux Pod)。 +为了在 Windows 节点上调度 Windows Pod,它需要 nodeSelector 和匹配合适的容忍度来选择 Windows。 ```yaml nodeSelector: - kubernetes.io/os: windows - node.kubernetes.io/windows-build: '10.0.17763' + kubernetes.io/os: windows + node.kubernetes.io/windows-build: '10.0.17763' tolerations: - - key: "os" - operator: "Equal" - value: "windows" - effect: "NoSchedule" + - key: "os" + operator: "Equal" + value: "windows" + effect: "NoSchedule" ``` - -### 处理同一集群中的多个 Windows 版本 - -每个 Pod 使用的 Windows Server 版本必须与该节点的 Windows Server 版本相匹配。 -如果要在同一集群中使用多个 Windows Server 版本,则应该设置其他节点标签和 -nodeSelector。 - -Kubernetes 1.17 自动添加了一个新标签 `node.kubernetes.io/windows-build` 来简化此操作。 -如果您运行的是旧版本,则建议手动将此标签添加到 Windows 节点。 - -此标签反映了需要兼容的 Windows 主要、次要和内部版本号。以下是当前每个 -Windows Server 版本使用的值。 
+### 处理同一集群中的多个 Windows 版本 {#handling-multiple-windows-versions-in-the-same-cluster} + +每个 Pod 使用的 Windows Server 版本必须与节点的版本匹配。 +如果要在同一个集群中使用多个 Windows Server 版本,则应设置额外的节点标签和节点选择器。 + +Kubernetes 1.17 自动添加了一个新标签 `node.kubernetes.io/windows-build` 来简化这一点。 +如果你运行的是旧版本,则建议手动将此标签添加到 Windows 节点。 -| 产品名称 | 内部编号 | +此标签反映了需要匹配以实现兼容性的 Windows 主要、次要和内部版本号。 +以下是目前用于每个 Windows Server 版本的值。 + + +| 产品名称 | 构建号 | |--------------------------------------|------------------------| | Windows Server 2019 | 10.0.17763 | -| Windows Server version 1809 | 10.0.17763 | -| Windows Server version 1903 | 10.0.18362 | +| Windows Server, Version 20H2 | 10.0.19042 | +| Windows Server 2022 | 10.0.20348 | - - -### 使用 RuntimeClass 简化 - -[RuntimeClass](/zh/docs/concepts/containers/runtime-class/) 可用于 -简化使用污点和容忍度的过程。 -集群管理员可以创建 `RuntimeClass` 对象,用于封装这些污点和容忍度。 - -1. 将此文件保存到 `runtimeClasses.yml` 文件。 - 它包括适用于 Windows 操作系统、体系结构和版本的 `nodeSelector`。 - - ```yaml - apiVersion: node.k8s.io/v1 - kind: RuntimeClass - metadata: - name: windows-2019 - handler: 'docker' - scheduling: - nodeSelector: - kubernetes.io/os: 'windows' - kubernetes.io/arch: 'amd64' - node.kubernetes.io/windows-build: '10.0.17763' - tolerations: - - effect: NoSchedule - key: os - operator: Equal - value: "windows" - ``` +### 使用 RuntimeClass 进行简化 {#simplifying-with-runtimeclass} - -2. 集群管理员执行 `kubectl create -f runtimeClasses.yml` 操作 -3. 根据需要向 Pod 规约中添加 `runtimeClassName: windows-2019` - +1. 以集群管理员身份运行 `kubectl create -f runtimeClasses.yml` +1. 根据情况,向 Pod 规约中添加 `runtimeClassName: windows-2019` + 例如: ```yaml @@ -495,3 +478,5 @@ spec: selector: app: iis-2019 ``` + +[RuntimeClass]: https://kubernetes.io/docs/concepts/containers/runtime-class/ diff --git a/content/zh/docs/concepts/workloads/_index.md b/content/zh-cn/docs/concepts/workloads/_index.md similarity index 84% rename from content/zh/docs/concepts/workloads/_index.md rename to content/zh-cn/docs/concepts/workloads/_index.md index a61176f6609ee..5d895415d34ab 100644 --- a/content/zh/docs/concepts/workloads/_index.md +++ b/content/zh-cn/docs/concepts/workloads/_index.md @@ -26,7 +26,7 @@ Pod is running means that all the Pods on that node fail. Kubernetes treats that of failure as final: you would need to create a new Pod even if the node later recovers. --> 无论你的负载是单一组件还是由多个一同工作的组件构成,在 Kubernetes 中你 -可以在一组 [Pods](/zh/docs/concepts/workloads/pods) 中运行它。 +可以在一组 [Pods](/zh-cn/docs/concepts/workloads/pods) 中运行它。 在 Kubernetes 中,Pod 代表的是集群上处于运行状态的一组 {{< glossary_tooltip text="容器" term_id="container" >}}。 @@ -37,7 +37,7 @@ For example, once a pod is running in your cluster then a critical fault on the all the pods on that node fail. Kubernetes treats that level of failure as final: you would need to create a new `Pod` to recover, even if the node later becomes healthy. --> -Kubernetes Pods 有[确定的生命周期](/zh/docs/concepts/workloads/pods/pod-lifecycle/)。 +Kubernetes Pods 有[确定的生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 例如,当某 Pod 在你的集群中运行时,Pod 运行所在的 {{< glossary_tooltip text="节点" term_id="node" >}} 出现致命错误时, 所有该节点上的 Pods 都会失败。Kubernetes 将这类失败视为最终状态: @@ -72,15 +72,15 @@ Kubernetes 提供若干种内置的工作负载资源: `Pods` for that `StatefulSet`, can replicate data to other `Pods` in the same `StatefulSet` to improve overall resilience. 
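下面给出一个最小的 StatefulSet 草例,用来补充说明通过 `volumeClaimTemplates` 为每个 Pod 关联一个 PersistentVolume 的做法。其中的名称、镜像和存储大小均为假设,仅作示意,并非文档的正式示例;实际使用时还需要一个与 `serviceName` 对应的无头服务。

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                 # 假设的名称,仅为示意
spec:
  serviceName: "web"        # 假设已存在同名的无头服务
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2   # 示意用镜像
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:       # 为每个 Pod 各创建一个 PersistentVolumeClaim
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```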
--> -* [Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 和 - [ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/) +* [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/) 和 + [ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/replicaset/) (替换原来的资源 {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}})。 `Deployment` 很适合用来管理你的集群上的无状态应用,`Deployment` 中的所有 `Pod` 都是相互等价的,并且在需要的时候被换掉。 -* [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/) +* [StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/) 让你能够运行一个或者多个以某种方式跟踪应用状态的 Pods。 例如,如果你的负载会将数据作持久存储,你可以运行一个 `StatefulSet`,将每个 - `Pod` 与某个 [`PersistentVolume`](/zh/docs/concepts/storage/persistent-volumes/) + `Pod` 与某个 [`PersistentVolume`](/zh-cn/docs/concepts/storage/persistent-volumes/) 对应起来。你在 `StatefulSet` 中各个 `Pod` 内运行的代码可以将数据复制到同一 `StatefulSet` 中的其它 `Pod` 中以提高整体的服务可靠性。 -* [DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/) +* [DaemonSet](/zh-cn/docs/concepts/workloads/controllers/daemonset/) 定义提供节点本地支撑设施的 `Pods`。这些 Pods 可能对于你的集群的运维是 非常重要的,例如作为网络链接的辅助工具或者作为网络 {{< glossary_tooltip text="插件" term_id="addons" >}} 的一部分等等。每次你向集群中添加一个新节点时,如果该节点与某 `DaemonSet` 的规约匹配,则控制面会为该 `DaemonSet` 调度一个 `Pod` 到该新节点上运行。 -* [Job](/zh/docs/concepts/workloads/controllers/job/) 和 - [CronJob](/zh/docs/concepts/workloads/controllers/cron-jobs/)。 +* [Job](/zh-cn/docs/concepts/workloads/controllers/job/) 和 + [CronJob](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/)。 定义一些一直运行到结束并停止的任务。`Job` 用来表达的是一次性的任务,而 `CronJob` 会根据其时间规划反复运行。 @@ -117,7 +117,7 @@ then you can implement or install an extension that does provide that feature. --> 在庞大的 Kubernetes 生态系统中,你还可以找到一些提供额外操作的第三方 工作负载资源。通过使用 -[定制资源定义(CRD)](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/), +[定制资源定义(CRD)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/), 你可以添加第三方工作负载资源,以完成原本不是 Kubernetes 核心功能的工作。 例如,如果你希望运行一组 `Pods`,但要求所有 Pods 都可用时才执行操作 (比如针对某种高吞吐量的分布式任务),你可以实现一个能够满足这一需求 @@ -135,18 +135,18 @@ As well as reading about each resource, you can learn about specific tasks that --> 除了阅读了解每类资源外,你还可以了解与这些资源相关的任务: -* [使用 Deployment 运行一个无状态的应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/) -* 以[单实例](/zh/docs/tasks/run-application/run-single-instance-stateful-application/) - 或者[多副本集合](/zh/docs/tasks/run-application/run-replicated-stateful-application/) +* [使用 Deployment 运行一个无状态的应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/) +* 以[单实例](/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/) + 或者[多副本集合](/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/) 的形式运行有状态的应用; -* [使用 `CronJob` 运行自动化的任务](/zh/docs/tasks/job/automated-tasks-with-cron-jobs/) +* [使用 `CronJob` 运行自动化的任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/) 要了解 Kubernetes 将代码与配置分离的实现机制,可参阅 -[配置部分](/zh/docs/concepts/configuration/)。 +[配置部分](/zh-cn/docs/concepts/configuration/)。 关于 Kubernetes 如何为应用管理 Pods,还有两个支撑概念能够提供相关背景信息: -* [垃圾收集](/zh/docs/concepts/workloads/controllers/garbage-collection/)机制负责在 +* [垃圾收集](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/)机制负责在 对象的 _属主资源_ 被删除时在集群中清理这些对象。 -* [_Time-to-Live_ 控制器](/zh/docs/concepts/workloads/controllers/ttlafterfinished/) +* [_Time-to-Live_ 控制器](/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/) 会在 Job 结束之后的指定时间间隔之后删除它们。 一旦你的应用处于运行状态,你就可能想要以 -[`Service`](/zh/docs/concepts/services-networking/service/) 
+[`Service`](/zh-cn/docs/concepts/services-networking/service/) 的形式使之可在互联网上访问;或者对于 Web 应用而言,使用 -[`Ingress`](/zh/docs/concepts/services-networking/ingress) 资源将其暴露到互联网上。 +[`Ingress`](/zh-cn/docs/concepts/services-networking/ingress) 资源将其暴露到互联网上。 diff --git a/content/zh/docs/concepts/workloads/controllers/_index.md b/content/zh-cn/docs/concepts/workloads/controllers/_index.md similarity index 100% rename from content/zh/docs/concepts/workloads/controllers/_index.md rename to content/zh-cn/docs/concepts/workloads/controllers/_index.md diff --git a/content/zh/docs/concepts/workloads/controllers/cron-jobs.md b/content/zh-cn/docs/concepts/workloads/controllers/cron-jobs.md similarity index 73% rename from content/zh/docs/concepts/workloads/controllers/cron-jobs.md rename to content/zh-cn/docs/concepts/workloads/controllers/cron-jobs.md index e31fa8edc36b9..ad93bcd604d5c 100644 --- a/content/zh/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/cron-jobs.md @@ -54,7 +54,7 @@ recommended in a production cluster. --> {{< caution >}} -如 [v1 CronJob API](/zh/docs/reference/kubernetes-api/workload-resources/cron-job-v1/) 所述,官方并不支持设置时区。 +如 [v1 CronJob API](/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1/) 所述,官方并不支持设置时区。 Kubernetes 项目官方并不支持设置如 `CRON_TZ` 或者 `TZ` 等变量。 `CRON_TZ` 或者 `TZ` 是用于解析和计算下一个 Job 创建时间所使用的内部库中一个实现细节。 @@ -69,7 +69,7 @@ append 11 characters to the job name provided and there is a constraint that the maximum length of a Job name is no more than 63 characters. --> 为 CronJob 资源创建清单时,请确保所提供的名称是一个合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 名称不能超过 52 个字符。 这是因为 CronJob 控制器将自动在提供的 Job 名称后附加 11 个字符,并且存在一个限制, 即 Job 名称的最大长度不能超过 63 个字符。 @@ -84,7 +84,7 @@ report generation, and so on. Each of those tasks should be configured to recur indefinitely (for example: once a day / week / month); you can define the point in time within that interval when the job should start. 
--> -## CronJob +## CronJob {#cronjob} CronJob 用于执行周期性的动作,例如备份、报告生成等。 这些任务中的每一个都应该配置为周期性重复的(例如:每天/每周/每月一次); @@ -95,19 +95,19 @@ CronJob 用于执行周期性的动作,例如备份、报告生成等。 This example CronJob manifest prints the current time and a hello message every minute: --> -### 示例 +### 示例 {#example} 下面的 CronJob 示例清单会在每分钟打印出当前时间和问候消息: {{< codenew file="application/job/cronjob.yaml" >}} -[使用 CronJob 运行自动化任务](/zh/docs/tasks/job/automated-tasks-with-cron-jobs/) +[使用 CronJob 运行自动化任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/) 一文会为你详细讲解此例。 -### Cron 时间表语法 +### Cron 时间表语法 {#cron-schedule-syntax} ``` # ┌───────────── 分钟 (0 - 59) @@ -150,6 +150,69 @@ To generate CronJob schedule expressions, you can also use web tools like [cront --> 要生成 CronJob 时间表表达式,你还可以使用 [crontab.guru](https://crontab.guru/) 之类的 Web 工具。 + + +## 时区 {#time-zones} +对于没有指定时区的 CronJob,kube-controller-manager 基于本地时区解释排期表(Schedule)。 + +{{< feature-state for_k8s_version="v1.24" state="alpha" >}} + +如果启用了 `CronJobTimeZone` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +你可以为 CronJob 指定一个时区(如果你没有启用该特性门控,或者你使用的是不支持试验性时区功能的 +Kubernetes 版本,集群中所有 CronJob 的时区都是未指定的)。 + +启用该特性后,你可以将 `spec.timeZone` +设置为有效[时区](https://zh.wikipedia.org/wiki/%E6%97%B6%E5%8C%BA%E4%BF%A1%E6%81%AF%E6%95%B0%E6%8D%AE%E5%BA%93)名称。 +例如,设置 `spec.timeZone: "Etc/UTC"` 指示 Kubernetes 采用 UTC 来解释排期表。 + +Go 标准库中的时区数据库包含在二进制文件中,并用作备用数据库,以防系统上没有可用的外部数据库。 + + + +## 时区 {#time-zones} +对于没有指定时区的 CronJob,kube-controller-manager 会根据其本地时区来解释其排期表(schedule)。 + +{{< feature-state for_k8s_version="v1.24" state="alpha" >}} + + +如果启用 `CronJobTimeZone` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +你可以为 CronJob 指定时区(如果你不启用该特性门控,或者如果你使用的 Kubernetes 版本不支持实验中的时区特性, +则集群中的所有 CronJob 都属于未指定时区)。 + + +当你启用该特性时,你可以将 `spec.timeZone` 设置为有效的[时区](https://zh.wikipedia.org/wiki/%E6%97%B6%E5%8C%BA%E4%BF%A1%E6%81%AF%E6%95%B0%E6%8D%AE%E5%BA%93)名称。 +例如,设置 `spec.timeZone: "Etc/UTC"` 表示 Kubernetes +使用协调世界时(Coordinated Universal Time)进行解释排期表。 + +Go 标准库中的时区数据库包含在二进制文件中,并用作备用数据库,以防系统上没有外部数据库可用。 -## CronJob 限制 {#cron-job-limitations} +## CronJob 限制 {#cronjob-limitations} CronJob 根据其计划编排,在每次该执行任务的时候大约会创建一个 Job。 我们之所以说 "大约",是因为在某些情况下,可能会创建两个 Job,或者不会创建任何 Job。 @@ -242,12 +305,12 @@ and use the original CronJob controller instead, one pass the `CronJobController flag to the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}, and set this flag to `false`. For example: --> -## 控制器版本 {#new-controller} +## 控制器版本 {#new-controller} 从 Kubernetes v1.21 版本开始,CronJob 控制器的第二个版本被用作默认实现。 要禁用此默认 CronJob 控制器而使用原来的 CronJob 控制器,请在 {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} -中设置[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +中设置[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) `CronJobControllerV2`,将此标志设置为 `false`。例如: ``` @@ -271,11 +334,11 @@ and set this flag to `false`. For example: object definition to understand the API for Kubernetes cron jobs. 
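作为补充,下面给出一个带 `spec.timeZone` 的 CronJob 草例,对应上文关于时区的讨论。其中的名称、镜像与排期均为假设,仅作示意;`timeZone` 字段需要启用前述的 `CronJobTimeZone` 特性门控才能生效,未设置时排期表按 kube-controller-manager 的本地时区解释。

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report        # 假设的名称,仅为示意
spec:
  schedule: "30 2 * * *"      # 每天 02:30 运行
  timeZone: "Etc/UTC"         # 需要启用 CronJobTimeZone 特性门控
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: busybox:1.28   # 示意用镜像
            command: ["/bin/sh", "-c", "date; echo 生成每日报告"]
          restartPolicy: OnFailure
```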
--> -* 了解 CronJob 所依赖的 [Pods](/zh/docs/concepts/workloads/pods/) 与 [Job](/zh/docs/concepts/workloads/controllers/job/) 的概念。 +* 了解 CronJob 所依赖的 [Pod](/zh-cn/docs/concepts/workloads/pods/) 与 [Job](/zh-cn/docs/concepts/workloads/controllers/job/) 的概念。 * 阅读 CronJob `.spec.schedule` 字段的[格式](https://pkg.go.dev/github.com/robfig/cron/v3#hdr-CRON_Expression_Format)。 * 有关创建和使用 CronJob 的说明及示例规约文件,请参见 - [使用 CronJob 运行自动化任务](/zh/docs/tasks/job/automated-tasks-with-cron-jobs/)。 -* 有关自动清理失败或完成作业的说明,请参阅[自动清理作业](/zh/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) + [使用 CronJob 运行自动化任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/)。 +* 有关自动清理失败或完成作业的说明,请参阅[自动清理作业](/zh-cn/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) * `CronJob` 是 Kubernetes REST API 的一部分, 阅读 {{< api-reference page="workload-resources/cron-job-v1" >}} 对象定义以了解关于该资源的 API。 diff --git a/content/zh/docs/concepts/workloads/controllers/daemonset.md b/content/zh-cn/docs/concepts/workloads/controllers/daemonset.md similarity index 89% rename from content/zh/docs/concepts/workloads/controllers/daemonset.md rename to content/zh-cn/docs/concepts/workloads/controllers/daemonset.md index d8acb65797732..40927843c6597 100644 --- a/content/zh/docs/concepts/workloads/controllers/daemonset.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/daemonset.md @@ -91,13 +91,13 @@ section. 和所有其他 Kubernetes 配置一样,DaemonSet 需要 `apiVersion`、`kind` 和 `metadata` 字段。 有关配置文件的基本信息,参见 -[部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)、 -[配置容器](/zh/docs/tasks/)和 -[使用 kubectl 进行对象管理](/zh/docs/concepts/overview/working-with-objects/object-management/) +[部署应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/)、 +[配置容器](/zh-cn/docs/tasks/)和 +[使用 kubectl 进行对象管理](/zh-cn/docs/concepts/overview/working-with-objects/object-management/) 文档。 DaemonSet 对象的名称必须是一个合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 DaemonSet 也需要一个 [`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 配置段。 @@ -120,14 +120,14 @@ A Pod Template in a DaemonSet must have a [`RestartPolicy`](/docs/concepts/workl `.spec` 中唯一必需的字段是 `.spec.template`。 -`.spec.template` 是一个 [Pod 模板](/zh/docs/concepts/workloads/pods/#pod-templates)。 +`.spec.template` 是一个 [Pod 模板](/zh-cn/docs/concepts/workloads/pods/#pod-templates)。 除了它是嵌套的,因而不具有 `apiVersion` 或 `kind` 字段之外,它与 {{< glossary_tooltip text="Pod" term_id="pod" >}} 具有相同的 schema。 除了 Pod 必需字段外,在 DaemonSet 中的 Pod 模板必须指定合理的标签(查看 [Pod 选择算符](#pod-selector))。 在 DaemonSet 中的 Pod 模板必须具有一个值为 `Always` 的 -[`RestartPolicy`](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)。 +[`RestartPolicy`](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)。 当该值未指定时,默认是 `Always`。 -* `matchLabels` - 与 [ReplicationController](/zh/docs/concepts/workloads/controllers/replicationcontroller/) +* `matchLabels` - 与 [ReplicationController](/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/) 的 `.spec.selector` 的作用相同。 * `matchExpressions` - 允许构建更加复杂的选择器,可以通过指定 key、value 列表以及将 key 和 value 列表关联起来的 operator。 @@ -192,9 +192,9 @@ If you do not specify either, then the DaemonSet controller will create Pods on ### 仅在某些节点上运行 Pod {#running-pods-on-only-some-nodes} 如果指定了 `.spec.template.spec.nodeSelector`,DaemonSet 控制器将在能够与 -[Node 
选择算符](/zh/docs/concepts/scheduling-eviction/assign-pod-node/) 匹配的节点上创建 Pod。 +[Node 选择算符](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/) 匹配的节点上创建 Pod。 类似这种情况,可以指定 `.spec.template.spec.affinity`,之后 DaemonSet 控制器 -将在能够与[节点亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/) +将在能够与[节点亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/) 匹配的节点上创建 Pod。 如果根本就没有指定,则 DaemonSet Controller 将在所有节点上创建 Pod。 @@ -228,7 +228,7 @@ DaemonSet 确保所有符合条件的节点都运行该 Pod 的一个副本。 * Pod 行为的不一致性:正常 Pod 在被创建后等待调度时处于 `Pending` 状态, DaemonSet Pods 创建后不会处于 `Pending` 状态下。这使用户感到困惑。 -* [Pod 抢占](/zh/docs/concepts/configuration/pod-priority-preemption/) +* [Pod 抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) 由默认调度器处理。启用抢占后,DaemonSet 控制器将在不考虑 Pod 优先级和抢占 的情况下制定调度决策。 @@ -242,7 +242,7 @@ taken into account before selecting the target host). The DaemonSet controller o performs these operations when creating or modifying DaemonSet pods, and no changes are made to the `spec.template` of the DaemonSet. --> -`ScheduleDaemonSetPods` 允许您使用默认调度器而不是 DaemonSet 控制器来调度 DaemonSets, +`ScheduleDaemonSetPods` 允许你使用默认调度器而不是 DaemonSet 控制器来调度 DaemonSets, 方法是将 `NodeAffinity` 条件而不是 `.spec.nodeName` 条件添加到 DaemonSet Pods。 默认调度器接下来将 Pod 绑定到目标主机。 如果 DaemonSet Pod 的节点亲和性配置已存在,则被替换 @@ -279,7 +279,7 @@ the related features. --> ### 污点和容忍度 {#taint-and-toleration} -尽管 Daemon Pods 遵循[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration) +尽管 Daemon Pods 遵循[污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration) 规则,根据相关特性,控制器会自动将以下容忍度添加到 DaemonSet Pod: | 容忍度键名 | 效果 | 版本 | 描述 | @@ -320,7 +320,7 @@ Some possible patterns for communicating with Pods in a DaemonSet are: 访问到 Pod。客户端能通过某种方法获取节点 IP 列表,并且基于此也可以获取到相应的端口。 - **DNS**:创建具有相同 Pod 选择算符的 - [无头服务](/zh/docs/concepts/services-networking/service/#headless-services), + [无头服务](/zh-cn/docs/concepts/services-networking/service/#headless-services), 通过使用 `endpoints` 资源或从 DNS 中检索到多个 A 记录来发现 DaemonSet。 - **Service**:创建具有相同 Pod 选择算符的服务,并使用该服务随机访问到某个节点上的 @@ -352,12 +352,12 @@ them according to its `updateStrategy`. You can [perform a rolling update](/docs/tasks/manage-daemon/update-daemon-set/) on a DaemonSet. 
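作为补充,下面用几条命令示意如何触发并观察 DaemonSet 的滚动更新。示例中的 DaemonSet 名称、名字空间与镜像均为假设,仅作示意:

```bash
# 假设 kube-system 名字空间中存在名为 fluentd-elasticsearch 的 DaemonSet。
# 更新其容器镜像会按照 .spec.updateStrategy 触发滚动更新:
kubectl set image ds/fluentd-elasticsearch \
  fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 \
  -n kube-system

# 观察滚动更新的进度:
kubectl rollout status ds/fluentd-elasticsearch -n kube-system
```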
--> -您可以删除一个 DaemonSet。如果使用 `kubectl` 并指定 `--cascade=orphan` 选项, +你可以删除一个 DaemonSet。如果使用 `kubectl` 并指定 `--cascade=orphan` 选项, 则 Pod 将被保留在节点上。接下来如果创建使用相同选择算符的新 DaemonSet, 新的 DaemonSet 会收养已有的 Pod。 如果有 Pod 需要被替换,DaemonSet 会根据其 `updateStrategy` 来替换。 -你可以对 DaemonSet [执行滚动更新](/zh/docs/tasks/manage-daemon/update-daemon-set/)操作。 +你可以对 DaemonSet [执行滚动更新](/zh-cn/docs/tasks/manage-daemon/update-daemon-set/)操作。 ### Deployments -DaemonSet 与 [Deployments](/zh/docs/concepts/workloads/controllers/deployment/) 非常类似, +DaemonSet 与 [Deployments](/zh-cn/docs/concepts/workloads/controllers/deployment/) 非常类似, 它们都能创建 Pod,并且 Pod 中的进程都不希望被终止(例如,Web 服务器、存储服务器)。 建议为无状态的服务使用 Deployments,比如前端服务。 @@ -446,7 +446,7 @@ DaemonSet 与 [Deployments](/zh/docs/concepts/workloads/controllers/deployment/) 当需要 Pod 副本总是运行在全部或特定主机上,并且当该 DaemonSet 提供了节点级别的功能(允许其他 Pod 在该特定节点上正确运行)时, 应该使用 DaemonSet。 -例如,[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)通常包含一个以 DaemonSet 运行的组件。 +例如,[网络插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)通常包含一个以 DaemonSet 运行的组件。 这个 DaemonSet 组件确保它所在的节点的集群网络正常工作。 ## {{% heading "whatsnext" %}} @@ -465,14 +465,14 @@ DaemonSet 与 [Deployments](/zh/docs/concepts/workloads/controllers/deployment/) Read the {{< api-reference page="workload-resources/daemon-set-v1" >}} object definition to understand the API for daemon sets. --> -* 了解 [Pods](/zh/docs/concepts/workloads/pods)。 +* 了解 [Pods](/zh-cn/docs/concepts/workloads/pods)。 * 了解[静态 Pod](#static-pods),这对运行 Kubernetes {{< glossary_tooltip text="控制面" term_id="control-plane" >}}组件有帮助。 * 了解如何使用 DaemonSet - * [对 DaemonSet 执行滚动更新](/zh/docs/tasks/manage-daemon/update-daemon-set/) - * [对 DaemonSet 执行回滚](/zh/docs/tasks/manage-daemon/rollback-daemon-set/)(例如:新的版本没有达到你的预期) -* 理解[Kubernetes 如何将 Pod 分配给节点](/zh/docs/concepts/scheduling-eviction/assign-pod-node/)。 -* 了解[设备插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)和 - [扩展(Addons)](/zh/docs/concepts/cluster-administration/addons/),它们常以 DaemonSet 运行。 + * [对 DaemonSet 执行滚动更新](/zh-cn/docs/tasks/manage-daemon/update-daemon-set/) + * [对 DaemonSet 执行回滚](/zh-cn/docs/tasks/manage-daemon/rollback-daemon-set/)(例如:新的版本没有达到你的预期) +* 理解[Kubernetes 如何将 Pod 分配给节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)。 +* 了解[设备插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)和 + [扩展(Addons)](/zh-cn/docs/concepts/cluster-administration/addons/),它们常以 DaemonSet 运行。 * `DaemonSet` 是 Kubernetes REST API 中的顶级资源。阅读 {{< api-reference page="workload-resources/daemon-set-v1" >}} 对象定义理解关于该资源的 API。 diff --git a/content/zh/docs/concepts/workloads/controllers/deployment.md b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md similarity index 98% rename from content/zh/docs/concepts/workloads/controllers/deployment.md rename to content/zh-cn/docs/concepts/workloads/controllers/deployment.md index f5249b60d6043..15d5382f93fad 100644 --- a/content/zh/docs/concepts/workloads/controllers/deployment.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md @@ -1063,7 +1063,7 @@ Assuming [horizontal Pod autoscaling](/docs/tasks/run-application/horizontal-pod in your cluster, you can setup an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods. 
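下面给出一个简单的示意命令(假设集群中已有名为 `nginx-deployment` 的 Deployment,并且已部署 metrics-server 等指标支持),演示如何为 Deployment 创建自动缩放器:

```bash
# 基于 CPU 利用率,将副本数维持在 10 到 15 之间
kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80
```

该命令会创建一个 HorizontalPodAutoscaler 对象,可以通过 `kubectl get hpa` 查看其状态。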
--> -假设集群启用了[Pod 的水平自动缩放](/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/), +假设集群启用了[Pod 的水平自动缩放](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/), 你可以为 Deployment 设置自动缩放器,并基于现有 Pod 的 CPU 利用率选择要运行的 Pod 个数下限和上限。 @@ -1230,7 +1230,7 @@ apply multiple fixes in between pausing and resuming without triggering unnecess 在你更新一个 Deployment 的时候,或者计划更新它的时候, 你可以在触发一个或多个更新之前暂停 Deployment 的上线过程。 -当你准备行应用这些变更时,你可以重新恢复 Deployment 上线过程。 +当你准备应用这些变更时,你可以重新恢复 Deployment 上线过程。 这样做使得你能够在暂停和恢复执行之间应用多个修补程序,而不会触发不必要的上线操作。 检测此状况的一种方法是在 Deployment 规约中指定截止时间参数: -([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds))。 +([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds))。 `.spec.progressDeadlineSeconds` 给出的是一个秒数值,Deployment 控制器在(通过 Deployment 状态) 标示 Deployment 进展停滞之前,需要等待所给的时长。 @@ -1854,7 +1854,7 @@ can create multiple Deployments, one for each release, following the canary patt ## 金丝雀部署 {#canary-deployment} 如果要使用 Deployment 向用户子集或服务器子集上线版本, -则可以遵循[资源管理](/zh/docs/concepts/cluster-administration/manage-deployment/#canary-deployments) +则可以遵循[资源管理](/zh-cn/docs/concepts/cluster-administration/manage-deployment/#canary-deployments) 所描述的金丝雀模式,创建多个 Deployment,每个版本一个。 Deployment 对象的名称必须是合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 Deployment 还需要 [`.spec` 部分](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。 -`.spec.template` 是一个 [Pod 模板](/zh/docs/concepts/workloads/pods/#pod-templates)。 +`.spec.template` 是一个 [Pod 模板](/zh-cn/docs/concepts/workloads/pods/#pod-templates)。 它和 {{< glossary_tooltip text="Pod" term_id="pod" >}} 的语法规则完全相同。 只是这里它是嵌套的,因此不需要 `apiVersion` 或 `kind`。 @@ -1909,7 +1909,7 @@ labels and an appropriate restart policy. For labels, make sure not to overlap w Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified. --> -只有 [`.spec.template.spec.restartPolicy`](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) +只有 [`.spec.template.spec.restartPolicy`](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) 等于 `Always` 才是被允许的,这也是在没有指定时的默认设置。 -如果一个 [HorizontalPodAutoscaler](/zh/docs/tasks/run-application/horizontal-pod-autoscale/) +如果一个 [HorizontalPodAutoscaler](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/) (或者其他执行水平扩缩操作的类似 API)在管理 Deployment 的扩缩, 则不要设置 `.spec.replicas`。 @@ -1960,7 +1960,7 @@ for the Pods targeted by this Deployment. 
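下面用一个小的 Deployment 片段示意 Pod 模板标签、选择算符与 `restartPolicy` 之间的关系:模板中的标签要与 `.spec.selector` 匹配,而 `restartPolicy` 只能是(或缺省为)`Always`。其中的名称与镜像均为假设,仅作示意:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment      # 假设的名称,仅为示意
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx              # 必须与下面 Pod 模板中的标签一致
  template:
    metadata:
      labels:
        app: nginx
    spec:
      restartPolicy: Always   # 默认值;Deployment 只允许 Always
      containers:
      - name: nginx
        image: nginx:1.14.2   # 示意用镜像
        ports:
        - containerPort: 80
```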
### 选择算符 {#selector} `.spec.selector` 是指定本 Deployment 的 Pod -[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)的必需字段。 +[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)的必需字段。 `.spec.selector` 必须匹配 `.spec.template.metadata.labels`,否则请求会被 API 拒绝。 @@ -2033,7 +2033,7 @@ replacement will be created immediately (even if the old Pod is still in a Termi 才会创建新版本的 Pod。如果你手动删除一个 Pod,其生命周期是由 ReplicaSet 来控制的, 后者会立即创建一个替换 Pod(即使旧的 Pod 仍然处于 Terminating 状态)。 如果你需要一种“最多 n 个”的 Pod 个数保证,你需要考虑使用 -[StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/)。 +[StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/)。 {{< /note >}} -* 了解 [Pod](/zh/docs/concepts/workloads/pods)。 -* [使用 Deployment 运行一个无状态应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)。 +* 了解 [Pod](/zh-cn/docs/concepts/workloads/pods)。 +* [使用 Deployment 运行一个无状态应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/)。 * `Deployment` 是 Kubernetes REST API 中的一个顶层资源。 阅读 {{< api-reference page="workload-resources/deployment-v1" >}} 对象定义,以了解 Deployment 的 API 细节。 -* 阅读 [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/) +* 阅读 [PodDisruptionBudget](/zh-cn/docs/concepts/workloads/pods/disruptions/) 了解如何使用它来在可能出现干扰的情况下管理应用的可用性。 diff --git a/content/zh/docs/concepts/workloads/controllers/job.md b/content/zh-cn/docs/concepts/workloads/controllers/job.md similarity index 90% rename from content/zh/docs/concepts/workloads/controllers/job.md rename to content/zh-cn/docs/concepts/workloads/controllers/job.md index 99edfabf76823..fa647a7067284 100644 --- a/content/zh/docs/concepts/workloads/controllers/job.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/job.md @@ -33,6 +33,9 @@ The Job object will start a new Pod if the first Pod fails or is deleted (for ex due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel. + +If you want to run a Job (either a single task, or several in parallel) on a schedule, +see [CronJob](/docs/concepts/workloads/controllers/cron-jobs/). 
--> Job 会创建一个或者多个 Pods,并将继续重试 Pods 的执行,直到指定数量的 Pods 成功终止。 随着 Pods 成功结束,Job 跟踪记录成功完成的 Pods 个数。 @@ -46,7 +49,11 @@ Job 会创建一个或者多个 Pods,并将继续重试 Pods 的执行,直 你也可以使用 Job 以并行的方式运行多个 Pod。 +如果你想按某种排期表(Schedule)运行 Job(单个任务或多个并行任务),请参阅 +[CronJob](/docs/concepts/workloads/controllers/cron-jobs/)。 + + + 你可以使用下面的命令来运行此示例: ```shell kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml ``` + 输出类似于: ``` job.batch/pi created ``` - + 使用 `kubectl` 来检查 Job 的状态: ```shell kubectl describe jobs/pi ``` + 输出类似于: ``` @@ -132,6 +149,9 @@ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].met echo $pods ``` + 输出类似于: ``` @@ -139,7 +159,7 @@ pi-5rwd7 ``` + 输出类似于: ``` 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 ``` + -## 编写 Job 规约 +## 编写 Job 规约 {#writing-a-job-spec} 与 Kubernetes 中其他资源的配置类似,Job 也需要 `apiVersion`、`kind` 和 `metadata` 字段。 -Job 的名字必须是合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +Job 的名字必须是合法的 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 -Job 配置还需要一个[`.spec` 节](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。 +Job 配置还需要一个 [`.spec` 节](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。 -### Pod 模版 +### Pod 模版 {#pod-template} Job 的 `.spec` 中只有 `.spec.template` 是必需的字段。 -字段 `.spec.template` 的值是一个 [Pod 模版](/zh/docs/concepts/workloads/pods/#pod-templates)。 +字段 `.spec.template` 的值是一个 [Pod 模版](/zh-cn/docs/concepts/workloads/pods/#pod-templates)。 其定义规范与 {{< glossary_tooltip 
text="Pod" term_id="pod" >}} 完全相同,只是其中不再需要 `apiVersion` 或 `kind` 字段。 除了作为 Pod 所必需的字段之外,Job 中的 Pod 模版必需设置合适的标签 -(参见[Pod 选择算符](#pod-selector))和合适的重启策略。 +(参见 [Pod 选择算符](#pod-selector))和合适的重启策略。 -Job 中 Pod 的 [`RestartPolicy`](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) +Job 中 Pod 的 [`RestartPolicy`](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) 只能设置为 `Never` 或 `OnFailure` 之一。 ### 完成模式 {#completion-mode} -{{< feature-state for_k8s_version="v1.22" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} -## 处理 Pod 和容器失效 +## 处理 Pod 和容器失效 {#handling-pod-and-container-failures} Pod 中的容器可能因为多种不同原因失效,例如因为其中的进程退出时返回值非零, 或者容器因为超出内存约束而被杀死等等。 @@ -386,7 +410,7 @@ Pod 则继续留在当前节点,但容器会被重新运行。 因此,你的程序需要能够处理在本地被重启的情况,或者要设置 `.spec.template.spec.restartPolicy = "Never"`。 关于 `restartPolicy` 的更多信息,可参阅 -[Pod 生命周期](/zh/docs/concepts/workloads/pods/pod-lifecycle/#example-states)。 +[Pod 生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#example-states)。 -### Pod 回退失效策略 +### Pod 回退失效策略 {#pod-backoff-failure-policy} 在有些情形下,你可能希望 Job 在经历若干次重试之后直接进入失败状态,因为这很 可能意味着遇到了配置错误。 @@ -438,7 +472,16 @@ other Pods for the Job failing around that time. 失效回退的限制值默认为 6。 与 Job 相关的失效的 Pod 会被 Job 控制器重建,回退重试时间将会按指数增长 (从 10 秒、20 秒到 40 秒)最多至 6 分钟。 -当 Job 的 Pod 被删除时,或者 Pod 成功时没有其它 Pod 处于失败状态,失效回退的次数也会被重置(为 0)。 + +计算重试次数有以下两种方法: +- 计算 `.status.phase = "Failed"` 的 Pod 数量。 +- 当 Pod 的 `restartPolicy = "OnFailure"` 时,针对 `.status.phase` 等于 `Pending` 或 + `Running` 的 Pod,计算其中所有容器的重试次数。 + +如果两种方式其中一个的值达到 `.spec.backoffLimit`,则 Job 被判定为失败。 + +当 [`JobTrackingWithFinalizers`](#job-tracking-with-finalizers) 特性被禁用时, +失败的 Pod 数目仅基于 API 中仍然存在的 Pod。 -## Job 终止与清理 +## Job 终止与清理 {#clean-up-finished-jobs-automatically} Job 完成时不会再创建新的 Pod,不过已有的 Pod [通常](#pod-backoff-failure-policy)也不会被删除。 保留这些 Pod 使得你可以查看已完成的 Pod 的日志输出,以便检查错误、警告 @@ -531,7 +574,7 @@ Keep in mind that the `restartPolicy` applies to the Pod, and not to the Job its That is, the Job termination mechanisms activated with `.spec.activeDeadlineSeconds` and `.spec.backoffLimit` result in a permanent Job failure that requires manual intervention to resolve. --> 注意 Job 规约和 Job 中的 -[Pod 模版规约](/zh/docs/concepts/workloads/pods/init-containers/#detailed-behavior) +[Pod 模版规约](/zh-cn/docs/concepts/workloads/pods/init-containers/#detailed-behavior) 都有 `activeDeadlineSeconds` 字段。 请确保你在合适的层次设置正确的字段。 @@ -556,7 +599,7 @@ cleaned up by CronJobs based on the specified capacity-based cleanup policy. 完成的 Job 通常不需要留存在系统中。在系统中一直保留它们会给 API 服务器带来额外的压力。 如果 Job 由某种更高级别的控制器来管理,例如 -[CronJobs](/zh/docs/concepts/workloads/controllers/cron-jobs/), +[CronJobs](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/), 则 Job 可以被 CronJob 基于特定的根据容量裁定的清理策略清理掉。 ### 已完成 Job 的 TTL 机制 {#ttl-mechanisms-for-finished-jobs} @@ -578,7 +621,7 @@ be honored. For example: --> 自动清理已完成 Job (状态为 `Complete` 或 `Failed`)的另一种方式是使用由 -[TTL 控制器](/zh/docs/concepts/workloads/controllers/ttlafterfinished/)所提供 +[TTL 控制器](/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/)所提供 的 TTL 机制。 通过设置 Job 的 `.spec.ttlSecondsAfterFinished` 字段,可以让该控制器清理掉 已结束的资源。 @@ -677,10 +720,10 @@ The pattern names are also links to examples and more detailed description. | 模式 | 单个 Job 对象 | Pods 数少于工作条目数? | 直接使用应用无需修改? 
| | ----- |:-------------:|:-----------------------:|:---------------------:| -| [每工作条目一 Pod 的队列](/zh/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | 有时 | -| [Pod 数量可变的队列](/zh/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | -| [静态任务分派的带索引的 Job](/zh/docs/tasks/job/indexed-parallel-processing-static) | ✓ | | ✓ | -| [Job 模版扩展](/zh/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | +| [每工作条目一 Pod 的队列](/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | 有时 | +| [Pod 数量可变的队列](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | +| [静态任务分派的带索引的 Job](/zh-cn/docs/tasks/job/indexed-parallel-processing-static) | ✓ | | ✓ | +| [Job 模版扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | -该特性在 Kubernetes 1.21 版本中是 Alpha 阶段,启用该特性需要额外的步骤; -请确保你正在阅读[与集群版本一致的文档](/zh/docs/home/supported-doc-versions/)。 -{{< /note >}} {{< note >}} -为了使用此功能,你必须在 [API 服务器](/zh/docs/reference/command-line-tools-reference/kube-apiserver/)上启用 -`JobMutableNodeSchedulingDirectives` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 +为了使用此功能,你必须在 [API 服务器](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)上启用 +`JobMutableNodeSchedulingDirectives` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 默认情况下启用。 {{< /note >}} @@ -920,7 +954,7 @@ Job 的 Pod 模板中可以更新的字段是节点亲和性、节点选择器 输出类似于: ```yaml @@ -1008,7 +1045,7 @@ the selector that the system normally generates for you automatically. 它们也会被名为 `new` 的 Job 所控制。 你需要在新 Job 中设置 `manualSelector: true`,因为你并未使用系统通常自动为你 -生成的选择算符。 +生成的选择算符。 ```yaml kind: Job @@ -1034,29 +1071,31 @@ mismatch. +### 使用 Finalizer 追踪 Job {#job-tracking-with-finalizers} + +{{< feature-state for_k8s_version="v1.23" state="beta" >}} +{{< note >}} + +要使用该行为,你必须为 [API 服务器](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/) +和[控制器管理器](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) +启用 `JobTrackingWithFinalizers` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 +默认是启用的。 + -### 使用 Finalizer 追踪 Job {#job-tracking-with-finalizers} - -{{< feature-state for_k8s_version="v1.23" state="beta" >}} - -{{< note >}} -要使用该行为,你必须为 [API 服务器](/zh/docs/reference/command-line-tools-reference/kube-apiserver/) -和[控制器管理器](/zh/docs/reference/command-line-tools-reference/kube-controller-manager/) -启用 `JobTrackingWithFinalizers` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 -默认是启用的。 - 启用后,控制面基于下述行为追踪新的 Job。在启用该特性之前创建的 Job 不受影响。 作为用户,你会看到的唯一区别是控制面对 Job 完成情况的跟踪更加准确。 {{< /note >}} @@ -1126,7 +1165,7 @@ Job 会重新创建新的 Pod 来替代已终止的 Pod。 ### 副本控制器 {#replication-controller} -Job 与[副本控制器](/zh/docs/concepts/workloads/controllers/replicationcontroller/)是彼此互补的。 +Job 与[副本控制器](/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/)是彼此互补的。 副本控制器管理的是那些不希望被终止的 Pod (例如,Web 服务器), Job 管理的是那些希望被终止的 Pod(例如,批处理作业)。 -正如在 [Pod 生命期](/zh/docs/concepts/workloads/pods/pod-lifecycle/) 中讨论的, +正如在 [Pod 生命期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/) 中讨论的, `Job` 仅适合于 `restartPolicy` 设置为 `OnFailure` 或 `Never` 的 Pod。 注意:如果 `restartPolicy` 未设置,其默认值是 `Always`。 @@ -1151,7 +1190,7 @@ Another pattern is for a single Job to create a Pod which then creates other Pod of custom controller for those Pods. This allows the most flexibility, but may be somewhat complicated to get started with and offers less integration with Kubernetes. 
--> -### 单个 Job 启动控制器 Pod +### 单个 Job 启动控制器 Pod {#single-job-starts-controller-pod} 另一种模式是用唯一的 Job 来创建 Pod,而该 Pod 负责启动其他 Pod,因此扮演了一种 后启动 Pod 的控制器的角色。 @@ -1189,15 +1228,15 @@ object, but maintains complete control over what Pods are created and how work i object definition to understand the API for jobs. * Read about [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/), which you can use to define a series of Jobs that will run based on a schedule, similar to - the Unix tool `cron`. + the UNIX tool `cron`. --> -* 了解 [Pods](/zh/docs/concepts/workloads/pods)。 +* 了解 [Pods](/zh-cn/docs/concepts/workloads/pods)。 * 了解运行 Job 的不同的方式: - * [使用工作队列进行粗粒度并行处理](/zh/docs/tasks/job/coarse-parallel-processing-work-queue/) - * [使用工作队列进行精细的并行处理](/zh/docs/tasks/job/fine-parallel-processing-work-queue/) - * [使用索引作业完成静态工作分配下的并行处理](/zh/docs/tasks/job/indexed-parallel-processing-static/)(Beta 阶段) - * 基于一个模板运行多个 Job:[使用展开的方式进行并行处理](/zh/docs/tasks/job/parallel-processing-expansion/) + * [使用工作队列进行粗粒度并行处理](/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/) + * [使用工作队列进行精细的并行处理](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/) + * [使用索引作业完成静态工作分配下的并行处理](/zh-cn/docs/tasks/job/indexed-parallel-processing-static/)(Beta 阶段) + * 基于一个模板运行多个 Job:[使用展开的方式进行并行处理](/zh-cn/docs/tasks/job/parallel-processing-expansion/) * 跟随[自动清理完成的 Job](#clean-up-finished-jobs-automatically) 文中的链接,了解你的集群如何清理完成和失败的任务。 * `Job` 是 Kubernetes REST API 的一部分。阅读 {{< api-reference page="workload-resources/job-v1" >}} 对象定义理解关于该资源的 API。 -* 阅读 [`CronJob`](/zh/docs/concepts/workloads/controllers/cron-jobs/),它允许你定义一系列定期运行的 Job,类似于 Unix 工具 `cron`。 +* 阅读 [`CronJob`](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/),它允许你定义一系列定期运行的 Job,类似于 UNIX 工具 `cron`。 diff --git a/content/zh/docs/concepts/workloads/controllers/replicaset.md b/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md similarity index 78% rename from content/zh/docs/concepts/workloads/controllers/replicaset.md rename to content/zh-cn/docs/concepts/workloads/controllers/replicaset.md index ca92f8378edaa..b55efa8345584 100644 --- a/content/zh/docs/concepts/workloads/controllers/replicaset.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md @@ -48,7 +48,7 @@ ReplicaSet's identifying information within their ownerReferences field. It's th knows of the state of the Pods it is maintaining and plans accordingly. --> ReplicaSet 通过 Pod 上的 -[metadata.ownerReferences](/zh/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents) +[metadata.ownerReferences](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents) 字段连接到附属 Pod,该字段给出当前对象的属主资源。 ReplicaSet 所获得的 Pod 都在其 ownerReferences 字段中包含了属主 ReplicaSet 的标识信息。正是通过这一连接,ReplicaSet 知道它所维护的 Pod 集合的状态, @@ -56,10 +56,11 @@ ReplicaSet 所获得的 Pod 都在其 ownerReferences 字段中包含了属主 R ReplicaSet 使用其选择算符来辨识要获得的 Pod 集合。如果某个 Pod 没有 -OwnerReference 或者其 OwnerReference 不是一个 +OwnerReference 或者其 OwnerReference 不是一个 {{< glossary_tooltip text="控制器" term_id="controller" >}},且其匹配到 某 ReplicaSet 的选择算符,则该 Pod 立即被此 ReplicaSet 获得。 @@ -68,14 +69,14 @@ OwnerReference 或者其 OwnerReference 不是一个 A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and -provides declarative updates to pods along with a lot of other useful features. +provides declarative updates to Pods along with a lot of other useful features. 
Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all. This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section. --> -## 何时使用 ReplicaSet +## 何时使用 ReplicaSet {#when-to-use-a-replicaset} ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行。 然而,Deployment 是一个更高级的概念,它管理 ReplicaSet,并向 Pod @@ -89,17 +90,16 @@ Deployment,并在 spec 部分定义你的应用。 -## 示例 +## 示例 {#example} {{< codenew file="controllers/frontend.yaml" >}} 将此清单保存到 `frontend.yaml` 中,并将其提交到 Kubernetes 集群, -应该就能创建 yaml 文件所定义的 ReplicaSet 及其管理的 Pod。 - +就能创建 yaml 文件所定义的 ReplicaSet 及其管理的 Pod。 ```shell kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml @@ -139,25 +139,25 @@ And you will see output similar to: 你会看到类似如下的输出: ``` -Name: frontend -Namespace: default -Selector: tier=frontend -Labels: app=guestbook - tier=frontend +Name: frontend +Namespace: default +Selector: tier=frontend +Labels: app=guestbook + tier=frontend Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"apps/v1","kind":"ReplicaSet","metadata":{"annotations":{},"labels":{"app":"guestbook","tier":"frontend"},"name":"frontend",... -Replicas: 3 current / 3 desired -Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed +Replicas: 3 current / 3 desired +Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: - Labels: tier=frontend + Labels: tier=frontend Containers: php-redis: - Image: gcr.io/google_samples/gb-frontend:v3 + Image: gcr.io/google_samples/gb-frontend:v3 Port: Host Port: Environment: - Mounts: - Volumes: + Mounts: + Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- @@ -226,26 +226,19 @@ metadata: -## 非模板 Pod 的获得 +## 非模板 Pod 的获得 {#non-template-pod-acquisitions} -尽管你完全可以直接创建裸的 Pods,强烈建议你确保这些裸的 Pods 并不包含可能与你 -的某个 ReplicaSet 的选择算符相匹配的标签。原因在于 ReplicaSet 并不仅限于拥有 -在其模板中设置的 Pods,它还可以像前面小节中所描述的那样获得其他 Pods。 +尽管你完全可以直接创建裸的 Pod,强烈建议你确保这些裸的 Pod 并不包含可能与你的某个 +ReplicaSet 的选择算符相匹配的标签。原因在于 ReplicaSet 并不仅限于拥有在其模板中设置的 +Pod,它还可以像前面小节中所描述的那样获得其他 Pod。 {{< codenew file="pods/pod-rs.yaml" >}} @@ -256,11 +249,10 @@ ReplicaSet, they will immediately be acquired by it. Suppose you create the Pods after the frontend ReplicaSet has been deployed and has set up its initial Pod replicas to fulfill its replica count requirement: --> -由于这些 Pod 没有控制器(Controller,或其他对象)作为其属主引用,并且 -其标签与 frontend ReplicaSet 的选择算符匹配,它们会立即被该 ReplicaSet -获取。 +由于这些 Pod 没有控制器(Controller,或其他对象)作为其属主引用, +并且其标签与 frontend ReplicaSet 的选择算符匹配,它们会立即被该 ReplicaSet 获取。 -假定你在 frontend ReplicaSet 已经被部署之后创建 Pods,并且你已经在 ReplicaSet +假定你在 frontend ReplicaSet 已经被部署之后创建 Pod,并且你已经在 ReplicaSet 中设置了其初始的 Pod 副本数以满足其副本计数需要: ```shell @@ -273,8 +265,8 @@ its desired count. Fetching the Pods: --> -新的 Pods 会被该 ReplicaSet 获取,并立即被 ReplicaSet 终止,因为 -它们的存在会使得 ReplicaSet 中 Pod 个数超出其期望值。 +新的 Pod 会被该 ReplicaSet 获取,并立即被 ReplicaSet 终止, +因为它们的存在会使得 ReplicaSet 中 Pod 个数超出其期望值。 取回 Pods: @@ -286,9 +278,9 @@ kubectl get pods -输出显示新的 Pods 或者已经被终止,或者处于终止过程中: +输出显示新的 Pod 或者已经被终止,或者处于终止过程中: -```shell +``` NAME READY STATUS RESTARTS AGE frontend-b2zdv 1/1 Running 0 10m frontend-vcmts 1/1 Running 0 10m @@ -319,9 +311,9 @@ kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. 
As fetching the Pods: --> -你会看到 ReplicaSet 已经获得了该 Pods,并仅根据其规约创建新的 Pods,直到 -新的 Pods 和原来的 Pods 的总数达到其预期个数。 -这时取回 Pods: +你会看到 ReplicaSet 已经获得了该 Pod,并仅根据其规约创建新的 Pod, +直到新的 Pod 和原来的 Pod 的总数达到其预期个数。 +这时取回 Pod 列表: ```shell kubectl get pods @@ -339,10 +331,13 @@ pod1 1/1 Running 0 36s pod2 1/1 Running 0 36s ``` + 采用这种方式,一个 ReplicaSet 中可以包含异质的 Pods 集合。 -## 编写 ReplicaSet 的 spec +## 编写 ReplicaSet 的清单 {#writing-a-replicaset-manifest} 与所有其他 Kubernetes API 对象一样,ReplicaSet 也需要 `apiVersion`、`kind`、和 `metadata` 字段。 对于 ReplicaSets 而言,其 `kind` 始终是 ReplicaSet。 ReplicaSet 对象的名称必须是合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 -ReplicaSet 也需要 [`.spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) +ReplicaSet 也需要 +[`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 部分。 -### Pod 模版 +### Pod 模版 {#pod-template} -`.spec.template` 是一个[Pod 模版](/zh/docs/concepts/workloads/pods/#pod-templates), +`.spec.template` 是一个 [Pod 模版](/zh-cn/docs/concepts/workloads/pods/#pod-templates), 要求设置标签。在 `frontend.yaml` 示例中,我们指定了标签 `tier: frontend`。 注意不要将标签与其他控制器的选择算符重叠,否则那些控制器会尝试收养此 Pod。 -对于模板的[重启策略](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) +对于模板的[重启策略](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) 字段,`.spec.template.spec.restartPolicy`,唯一允许的取值是 `Always`,这也是默认值. ### Pod 选择算符 {#pod-selector} -`.spec.selector` 字段是一个[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)。 +`.spec.selector` 字段是一个[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)。 如前文中[所讨论的](#how-a-replicaset-works),这些是用来标识要被获取的 Pods 的标签。在签名的 `frontend.yaml` 示例中,选择算符为: @@ -408,17 +404,16 @@ matchLabels: tier: frontend ``` -在 ReplicaSet 中,`.spec.template.metadata.labels` 的值必须与 `spec.selector` 值 -相匹配,否则该配置会被 API 拒绝。 +在 ReplicaSet 中,`.spec.template.metadata.labels` 的值必须与 `spec.selector` +值相匹配,否则该配置会被 API 拒绝。 {{< note >}} -对于设置了相同的 `.spec.selector`,但 -`.spec.template.metadata.labels` 和 `.spec.template.spec` 字段不同的 -两个 ReplicaSet 而言,每个 ReplicaSet 都会忽略被另一个 ReplicaSet 所 -创建的 Pods。 +对于设置了相同的 `.spec.selector`,但 +`.spec.template.metadata.labels` 和 `.spec.template.spec` 字段不同的两个 +ReplicaSet 而言,每个 ReplicaSet 都会忽略被另一个 ReplicaSet 所创建的 Pods。 {{< /note >}} -## 使用 ReplicaSets +## 使用 ReplicaSets {#working-with-replicasets} -### 删除 ReplicaSet 和它的 Pod +### 删除 ReplicaSet 和它的 Pod {#deleting-a-replicaset-and-its-pods} 要删除 ReplicaSet 和它的所有 Pod,使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 命令。 -默认情况下,[垃圾收集器](/zh/docs/concepts/workloads/controllers/garbage-collection/) +默认情况下,[垃圾收集器](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/) 自动删除所有依赖的 Pod。 -当使用 REST API 或 `client-go` 库时,你必须在删除选项中将 `propagationPolicy` +当使用 REST API 或 `client-go` 库时,你必须在 `-d` 选项中将 `propagationPolicy` 设置为 `Background` 或 `Foreground`。例如: ```shell kubectl proxy --port=8080 curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \ - -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \ - -H "Content-Type: application/json" + -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \ + -H "Content-Type: application/json" ``` -### 只删除 ReplicaSet +### 只删除 ReplicaSet {#deleting-just-a-replicaset} 你可以只删除 ReplicaSet 而不影响它的 Pods,方法是使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) @@ 
-489,8 +486,8 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron @@ -498,20 +495,19 @@ To update Pods to a new spec in a controlled way, use a 由于新旧 ReplicaSet 的 `.spec.selector` 是相同的,新的 ReplicaSet 将接管老的 Pod。 但是,它不会努力使现有的 Pod 与新的、不同的 Pod 模板匹配。 若想要以可控的方式更新 Pod 的规约,可以使用 -[Deployment](/zh/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) +[Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) 资源,因为 ReplicaSet 并不直接支持滚动更新。 -### 将 Pod 从 ReplicaSet 中隔离 +### 将 Pod 从 ReplicaSet 中隔离 {#isolating-pods-from-a-replicaset} -可以通过改变标签来从 ReplicaSet 的目标集中移除 Pod。 +可以通过改变标签来从 ReplicaSet 中移除 Pod。 这种技术可以用来从服务中去除 Pod,以便进行排错、数据恢复等。 以这种方式移除的 Pod 将被自动替换(假设副本的数量没有改变)。 @@ -519,9 +515,9 @@ from service for debugging, data recovery, etc. Pods that are removed in this wa ### Scaling a ReplicaSet A ReplicaSet can be easily scaled up or down by simply updating the `.spec.replicas` field. The ReplicaSet controller -ensures that a desired number of pods with a matching label selector are available and operational. +ensures that a desired number of Pods with a matching label selector are available and operational. --> -### 缩放 RepliaSet +### 缩放 RepliaSet {#scaling-a-replicaset} 通过更新 `.spec.replicas` 字段,ReplicaSet 可以被轻松的进行缩放。ReplicaSet 控制器能确保匹配标签选择器的数量的 Pod 是可用的和可操作的。 @@ -547,11 +543,10 @@ prioritize scaling down pods based on the following general algorithm: 较小的优先被裁减掉 3. 所处节点上副本个数较多的 Pod 优先于所处节点上副本较少者 4. 如果 Pod 的创建时间不同,最近创建的 Pod 优先于早前创建的 Pod 被裁减。 - (当 `LogarithmicScaleDown` 这一 - [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) + (当 `LogarithmicScaleDown` 这一[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 被启用时,创建时间是按整数幂级来分组的)。 -如果以上比较结果都相同,则随机选择。 +如果以上比较结果都相同,则随机选择。 -通过使用 [`controller.kubernetes.io/pod-deletion-cost`](/zh/docs/reference/labels-annotations-taints/#pod-deletion-cost) +通过使用 [`controller.kubernetes.io/pod-deletion-cost`](/zh-cn/docs/reference/labels-annotations-taints/#pod-deletion-cost) 注解,用户可以对 ReplicaSet 缩容时要先删除哪些 Pods 设置偏好。 此功能特性处于 Beta 阶段,默认被禁用。你可以通过为 kube-apiserver 和 -kube-controller-manager 设置 -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +kube-controller-manager 设置[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) `PodDeletionCost` 来启用此功能。 {{< note >}} @@ -601,7 +595,7 @@ kube-controller-manager 设置 --> - 此机制实施时仅是尽力而为,并不能对 Pod 的删除顺序作出任何保证; - 用户应避免频繁更新注解值,例如根据某观测度量值来更新此注解值是应该避免的。 - 这样做会在 API 服务器上产生大量的 Pod 更新操作。 + 这样做会在 API 服务器上产生大量的 Pod 更新操作。 {{< /note >}} -#### 使用场景示例 +#### 使用场景示例 {#example-use-case} 同一应用的不同 Pods 可能其利用率是不同的。在对应用执行缩容操作时,可能 希望移除利用率较低的 Pods。为了避免频繁更新 Pods,应用应该在执行缩容 @@ -623,17 +617,16 @@ the down scaling; for example, the driver pod of a Spark deployment. 是可以起作用的。 -### ReplicaSet 作为水平的 Pod 自动缩放器目标 +### ReplicaSet 作为水平的 Pod 自动缩放器目标 {#replicaset-as-a-horizontal-pod-autoscaler-target} -ReplicaSet 也可以作为 -[水平的 Pod 缩放器 (HPA)](/zh/docs/tasks/run-application/horizontal-pod-autoscale/) +ReplicaSet 也可以作为[水平的 Pod 缩放器 (HPA)](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/) 的目标。也就是说,ReplicaSet 可以被 HPA 自动缩放。 以下是 HPA 以我们在前一个示例中创建的副本集为目标的示例。 @@ -642,7 +635,7 @@ ReplicaSet 也可以作为 将这个列表保存到 `hpa-rs.yaml` 并提交到 Kubernetes 集群,就能创建它所定义的 HPA,进而就能根据复制的 Pod 的 CPU 利用率对目标 ReplicaSet进行自动缩放。 @@ -655,7 +648,7 @@ kubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml Alternatively, you can use the `kubectl autoscale` command to accomplish the same (and it's easier!) --> -或者,可以使用 `kubectl autoscale` 命令完成相同的操作。 (而且它更简单!) 
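As a sketch of the `controller.kubernetes.io/pod-deletion-cost` annotation discussed in the hunk above: the Pod name, labels and image below are hypothetical, and the feature only has an effect when the `PodDeletionCost` feature gate is enabled. Pods with a lower deletion cost are preferred for removal when the owning ReplicaSet scales down.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-low-value            # hypothetical Pod owned by a ReplicaSet
  labels:
    tier: frontend
  annotations:
    # Lower values are removed first on scale-down; the value is a string holding an int32.
    controller.kubernetes.io/pod-deletion-cost: "-100"
spec:
  containers:
  - name: app
    image: registry.example/app:1.0   # placeholder image
```

How the application decides which replica is "low value" (for example, by measured utilization) is workload-specific and outside the scope of this sketch.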
+或者,可以使用 `kubectl autoscale` 命令完成相同的操作。(而且它更简单!) ```shell kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50 @@ -664,7 +657,7 @@ kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50 -## ReplicaSet 的替代方案 +## ReplicaSet 的替代方案 {#alternatives-to-replicaset} -### Deployment (推荐) +### Deployment(推荐) {#deployment-recommended} -[`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/) 是一个 -可以拥有 ReplicaSet 并使用声明式方式在服务器端完成对 Pods 滚动更新的对象。 -尽管 ReplicaSet 可以独立使用,目前它们的主要用途是提供给 Deployment 作为 -编排 Pod 创建、删除和更新的一种机制。当使用 Deployment 时,你不必关心 -如何管理它所创建的 ReplicaSet,Deployment 拥有并管理其 ReplicaSet。 +[`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/) 是一个可以拥有 +ReplicaSet 并使用声明式方式在服务器端完成对 Pods 滚动更新的对象。 +尽管 ReplicaSet 可以独立使用,目前它们的主要用途是提供给 Deployment 作为编排 +Pod 创建、删除和更新的一种机制。当使用 Deployment 时,你不必关心如何管理它所创建的 +ReplicaSet,Deployment 拥有并管理其 ReplicaSet。 因此,建议你在需要 ReplicaSet 时使用 Deployment。 -### 裸 Pod +### 裸 Pod {#bare-pods} 与用户直接创建 Pod 的情况不同,ReplicaSet 会替换那些由于某些原因被删除或被终止的 Pod,例如在节点故障或破坏性的节点维护(如内核升级)的情况下。 因为这个原因,我们建议你使用 ReplicaSet,即使应用程序只需要一个 Pod。 想像一下,ReplicaSet 类似于进程监视器,只不过它在多个节点上监视多个 Pod, 而不是在单个节点上监视单个进程。 -ReplicaSet 将本地容器重启的任务委托给了节点上的某个代理(例如,Kubelet 或 Docker)去完成。 +ReplicaSet 将本地容器重启的任务委托给了节点上的某个代理(例如,Kubelet)去完成。 - ### Job -使用[`Job`](/zh/docs/concepts/workloads/controllers/job/) 代替ReplicaSet, +使用[`Job`](/zh-cn/docs/concepts/workloads/controllers/job/) 代替 ReplicaSet, 可以用于那些期望自行终止的 Pod。 ### DaemonSet 对于管理那些提供主机级别功能(如主机监控和主机日志)的容器, -就要用 [`DaemonSet`](/zh/docs/concepts/workloads/controllers/daemonset/) +就要用 [`DaemonSet`](/zh-cn/docs/concepts/workloads/controllers/daemonset/) 而不用 ReplicaSet。 这些 Pod 的寿命与主机寿命有关:这些 Pod 需要先于主机上的其他 Pod 运行, 并且在机器准备重新启动/关闭时安全地终止。 @@ -734,9 +726,9 @@ The two serve the same purpose, and behave similarly, except that a ReplicationC selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors). As such, ReplicaSets are preferred over ReplicationControllers --> -ReplicaSet 是 [ReplicationController](/zh/docs/concepts/workloads/controllers/replicationcontroller/) +ReplicaSet 是 [ReplicationController](/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/) 的后继者。二者目的相同且行为类似,只是 ReplicationController 不支持 -[标签用户指南](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors) +[标签用户指南](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors) 中讨论的基于集合的选择算符需求。 因此,相比于 ReplicationController,应优先考虑 ReplicaSet。 @@ -753,9 +745,13 @@ ReplicaSet 是 [ReplicationController](/zh/docs/concepts/workloads/controllers/r * Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions. 
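Since the section above recommends reaching for a Deployment rather than a bare ReplicaSet, a quick illustration of that ownership relationship may help; the names and image are placeholders.

```shell
# Hypothetical example; a Deployment creates and owns a ReplicaSet for you.
kubectl create deployment demo --image=registry.example/demo:1.0 --replicas=3

# The ReplicaSet managed by the Deployment; you normally never edit it directly.
kubectl get replicaset -l app=demo
```

Updating the Deployment (for example changing its image) causes it to roll out a new ReplicaSet and scale the old one down, which is the orchestration a standalone ReplicaSet does not provide.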
--> -* 了解 [Pods](/zh/docs/concepts/workloads/pods)。 -* 了解 [Deployments](/zh/docs/concepts/workloads/controllers/deployment/)。 -* [使用 Deployment 运行一个无状态应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/),它依赖于 ReplicaSet。 -* `ReplicaSet` 是 Kubernetes REST API 中的顶级资源。阅读 {{< api-reference page="workload-resources/replica-set-v1" >}} - 对象定义理解关于该资源的 API。 -* 阅读[Pod 干扰预算(Disruption Budget)](/zh/docs/concepts/workloads/pods/disruptions/),了解如何在干扰下运行高度可用的应用。 +* 了解 [Pod](/zh-cn/docs/concepts/workloads/pods)。 +* 了解 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)。 +* [使用 Deployment 运行一个无状态应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/), + 它依赖于 ReplicaSet。 +* `ReplicaSet` 是 Kubernetes REST API 中的顶级资源。阅读 + {{< api-reference page="workload-resources/replica-set-v1" >}} + 对象定义理解关于该资源的 API。 +* 阅读 [Pod 干扰预算(Disruption Budget)](/zh-cn/docs/concepts/workloads/pods/disruptions/), + 了解如何在干扰下运行高度可用的应用。 + diff --git a/content/zh/docs/concepts/workloads/controllers/replicationcontroller.md b/content/zh-cn/docs/concepts/workloads/controllers/replicationcontroller.md similarity index 92% rename from content/zh/docs/concepts/workloads/controllers/replicationcontroller.md rename to content/zh-cn/docs/concepts/workloads/controllers/replicationcontroller.md index f83b5d46867b2..851dafd0a0027 100644 --- a/content/zh/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/replicationcontroller.md @@ -7,7 +7,7 @@ feature: 重新启动失败的容器,在节点死亡时替换并重新调度容器,杀死不响应用户定义的健康检查的容器,并且在它们准备好服务之前不会将它们公布给客户端。 content_type: concept weight: 90 ---- +--- {{< note >}} -现在推荐使用配置 [`ReplicaSet`](/zh/docs/concepts/workloads/controllers/replicaset/) 的 -[`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/) 来建立副本管理机制。 +现在推荐使用配置 [`ReplicaSet`](/zh-cn/docs/concepts/workloads/controllers/replicaset/) 的 +[`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/) 来建立副本管理机制。 {{< /note >}} ### Pod 选择算符 {#pod-selector} -`.spec.selector` 字段是一个[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)。 +`.spec.selector` 字段是一个[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors)。 ReplicationController 管理标签与选择算符匹配的所有 Pod。 它不区分它创建或删除的 Pod 和其他人或进程创建或删除的 Pod。 这允许在不影响正在运行的 Pod 的情况下替换 ReplicationController。 @@ -299,7 +299,7 @@ If you do not specify `.spec.replicas`, then it defaults to 1. 
你可以通过设置 `.spec.replicas` 来指定应该同时运行多少个 Pod。 在任何时候,处于运行状态的 Pod 个数都可能高于或者低于设定值。例如,副本个数刚刚被增加或减少时,或者一个 Pod 处于优雅终止过程中而其替代副本已经提前开始创建时。 -如果你没有指定 `.spec.replicas` ,那么它默认是 1。 +如果你没有指定 `.spec.replicas`,那么它默认是 1。 ### Deployment (推荐) -[`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/) 是一种更高级别的 API 对象,用于更新其底层 ReplicaSet 及其 Pod。 +[`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/) 是一种更高级别的 API 对象,用于更新其底层 ReplicaSet 及其 Pod。 如果你想要这种滚动更新功能,那么推荐使用 Deployment,因为它们是声明式的、服务端的,并且具有其它特性。 ## {{% heading "whatsnext" %}} -- 了解 [Pods](/zh/docs/concepts/workloads/pods)。 -- 了解 [Depolyment](/zh/docs/concepts/workloads/controllers/deployment/),ReplicationController 的替代品。 +- 了解 [Pods](/zh-cn/docs/concepts/workloads/pods)。 +- 了解 [Depolyment](/zh-cn/docs/concepts/workloads/controllers/deployment/),ReplicationController 的替代品。 - `ReplicationController` 是 Kubernetes REST API 的一部分,阅读 {{< api-reference page="workload-resources/replication-controller-v1" >}} 对象定义以了解 replication controllers 的 API。 diff --git a/content/zh/docs/concepts/workloads/controllers/statefulset.md b/content/zh-cn/docs/concepts/workloads/controllers/statefulset.md similarity index 62% rename from content/zh/docs/concepts/workloads/controllers/statefulset.md rename to content/zh-cn/docs/concepts/workloads/controllers/statefulset.md index 26f00ae25c09d..8897131ba1d91 100644 --- a/content/zh/docs/concepts/workloads/controllers/statefulset.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/statefulset.md @@ -27,7 +27,7 @@ StatefulSet 是用来管理有状态应用的工作负载 API 对象。 StatefulSets are valuable for applications that require one or more of the following. --> -## 使用 StatefulSets +## 使用 StatefulSets {#using-statefulsets} StatefulSets 对于需要满足以下一个或多个需求的应用程序很有价值: @@ -53,8 +53,8 @@ that provides a set of stateless replicas. 
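To make the "stable identity plus stable storage" requirements above concrete, here is a minimal, hypothetical StatefulSet sketch with the headless Service it relies on; every name, label and image is a placeholder rather than part of the page's own examples.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-headless            # headless Service that provides per-Pod DNS identity
spec:
  clusterIP: None                # Pods get stable names such as demo-0.demo-headless
  selector:
    app: demo-stateful
  ports:
  - name: web
    port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo
spec:
  serviceName: demo-headless     # must reference the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: demo-stateful
  template:
    metadata:
      labels:
        app: demo-stateful
    spec:
      containers:
      - name: web
        image: registry.example/web:1.0   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/demo
  volumeClaimTemplates:          # each replica gets its own PVC (data-demo-0, data-demo-1, ...)
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

If none of this identity or storage stability is needed, the stateless controllers mentioned above (Deployment or ReplicaSet) remain the simpler choice.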
在上面描述中,“稳定的”意味着 Pod 调度或重调度的整个过程是有持久性的。 如果应用程序不需要任何稳定的标识符或有序的部署、删除或伸缩,则应该使用 由一组无状态的副本控制器提供的工作负载来部署应用程序,比如 -[Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 或者 -[ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/) +[Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/) 或者 +[ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/replicaset/) 可能更适用于你的无状态应用部署需要。 -## Pod 选择算符 {#pod-selector} +### Pod 选择算符 {#pod-selector} 你必须设置 StatefulSet 的 `.spec.selector` 字段,使之匹配其在 -`.spec.template.metadata.labels` 中设置的标签。在 Kubernetes 1.8 版本之前, -被忽略 `.spec.selector` 字段会获得默认设置值。 -在 1.8 和以后的版本中,未指定匹配的 Pod 选择器将在创建 StatefulSet 期间导致验证错误。 +`.spec.template.metadata.labels` 中设置的标签。 +未指定匹配的 Pod 选择器将在创建 StatefulSet 期间导致验证错误。 + + +### 卷申领模版 {#volume-claim-templates} + +你可以设置 `.spec.volumeClaimTemplates`, +它可以使用 PersistentVolume 制备程序所准备的 +[PersistentVolumes](/zh-cn/docs/concepts/storage/persistent-volumes/) 来提供稳定的存储。 + + +### 最短就绪秒数 {#minimum-ready-seconds} + +{{< feature-state for_k8s_version="v1.23" state="beta" >}} + + +`.spec.minReadySeconds` 是一个可选字段, +它指定新创建的 Pod 应该准备好且其任何容器不崩溃的最小秒数,以使其被视为可用。 +请注意,此功能是测试版,默认启用。如果你不希望启用此功能, +请通过取消设置 StatefulSetMinReadySeconds 标志来选择退出。 +该字段默认为 0(Pod 准备就绪后将被视为可用)。 +要了解有关何时认为 Pod 准备就绪的更多信息, +请参阅[容器探针](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)。 {{< note >}} -集群域会被设置为 `cluster.local`,除非有[其他配置](/zh/docs/concepts/services-networking/dns-pod-service/)。 +集群域会被设置为 `cluster.local`,除非有[其他配置](/zh-cn/docs/concepts/services-networking/dns-pod-service/)。 {{< /note >}} StatefulSet 不应将 `pod.Spec.TerminationGracePeriodSeconds` 设置为 0。 这种做法是不安全的,要强烈阻止。更多的解释请参考 -[强制删除 StatefulSet Pod](/zh/docs/tasks/run-application/force-delete-stateful-set-pod/)。 +[强制删除 StatefulSet Pod](/zh-cn/docs/tasks/run-application/force-delete-stateful-set-pod/)。 在上面的 nginx 示例被创建后,会按照 web-0、web-1、web-2 的顺序部署三个 Pod。 -在 web-0 进入 [Running 和 Ready](/zh/docs/concepts/workloads/pods/pod-lifecycle/) +在 web-0 进入 [Running 和 Ready](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/) 状态前不会部署 web-1。在 web-1 进入 Running 和 Ready 状态前不会部署 web-2。 如果 web-1 已经处于 Running 和 Ready 状态,而 web-2 尚未部署,在此期间发生了 web-0 运行失败,那么 web-2 将不会被部署,要等到 web-0 部署完成并进入 Running 和 @@ -372,12 +403,12 @@ until web-0 is Running and Ready. ### Pod 管理策略 {#pod-management-policies} -在 Kubernetes 1.7 及以后的版本中,StatefulSet 允许你放宽其排序保证, +StatefulSet 允许你放宽其排序保证, 同时通过它的 `.spec.podManagementPolicy` 域保持其唯一性和身份保证。 -#### OrderedReady Pod 管理 +#### OrderedReady Pod 管理 {#orderedready-pod-management} `OrderedReady` Pod 管理是 StatefulSet 的默认设置。它实现了 [上面](#deployment-and-scaling-guarantees)描述的功能。 @@ -436,7 +467,7 @@ StatefulSet 的 `.spec.updateStrategy` 字段让 `RollingUpdate` : `RollingUpdate` 更新策略对 StatefulSet 中的 Pod 执行自动的滚动更新。这是默认的更新策略。 - + ## 滚动更新 {#rolling-updates} @@ -482,6 +515,47 @@ update, roll out a canary, or perform a phased roll out. 
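For the partitioned rolling update described above, one way to stage a canary is to patch the update strategy; the StatefulSet name and replica count below are hypothetical.

```shell
# Assumes a StatefulSet named "demo" with 5 replicas already exists (hypothetical).
# Only Pods with an ordinal >= 3 (demo-3, demo-4) pick up a new .spec.template;
# demo-0 through demo-2 stay on the previous revision until the partition is lowered.
kubectl patch statefulset demo -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'

# Once the canary Pods look healthy, roll the update out to the rest by lowering the partition.
kubectl patch statefulset demo -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
```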
在大多数情况下,你不需要使用分区,但如果你希望进行阶段更新、执行金丝雀或执行 分阶段上线,则这些分区会非常有用。 + +### 最大不可用 Pod {#maximum-unavailable-pods} + +{{< feature-state for_k8s_version="v1.24" state="alpha" >}} + + +你可以通过指定 `.spec.updateStrategy.rollingUpdate.maxUnavailable` +字段来控制更新期间不可用的 Pod 的最大数量。 +该值可以是绝对值(例如,“5”)或者是期望 Pod 个数的百分比(例如,`10%`)。 +绝对值是根据百分比值四舍五入计算的。 +该字段不能为 0。默认设置为 1。 + + +该字段适用于 `0` 到 `replicas - 1` 范围内的所有 Pod。 +如果在 `0` 到 `replicas - 1` 范围内存在不可用 Pod,这类 Pod 将被计入 `maxUnavailable` 值。 + + +{{< note >}} +`maxUnavailable` 字段处于 Alpha 阶段,仅当 API 服务器启用了 `MaxUnavailableStatefulSet` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)时才起作用。 +{{< /note >}} + +## PersistentVolumeClaim 保留 {#persistentvolumeclaim-retention} + +{{< feature-state for_k8s_version="v1.23" state="alpha" >}} + +在 StatefulSet 的生命周期中,可选字段 +`.spec.persistentVolumeClaimRetentionPolicy` 控制是否删除以及如何删除 PVC。 +使用该字段,你必须启用 `StatefulSetAutoDeletePVC` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 +启用后,你可以为每个 StatefulSet 配置两个策略: -`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly -created Pod should be ready without any of its containers crashing, for it to be considered available. -This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when -a Pod is considered ready, see [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes). + -### 最短就绪秒数 {#minimum-ready-seconds} +`whenDeleted` +: 配置删除 StatefulSet 时应用的卷保留行为 -{{< feature-state for_k8s_version="v1.22" state="alpha" >}} +`whenScaled` +: 配置当 StatefulSet 的副本数减少时应用的卷保留行为;例如,缩小集合时。 -`.spec.minReadySeconds` 是一个可选字段,用于指定新创建的 Pod 就绪(没有任何容器崩溃)后被认为可用的最小秒数。 -默认值是 0(Pod 就绪时就被认为可用)。要了解 Pod 何时被认为已就绪,请参阅[容器探针](/zh/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)。 +对于你可以配置的每个策略,你可以将值设置为 `Delete` 或 `Retain`。 -请注意只有当你启用 `StatefulSetMinReadySeconds` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)时,该字段才会生效。 + +`Delete` +: 对于受策略影响的每个 Pod,基于 StatefulSet 的 `volumeClaimTemplate` 字段创建的 PVC 都会被删除。 + 使用 `whenDeleted` 策略,所有来自 `volumeClaimTemplate` 的 PVC 在其 Pod 被删除后都会被删除。 + 使用 `whenScaled` 策略,只有与被缩减的 Pod 副本对应的 PVC 在其 Pod 被删除后才会被删除。 + + +`Retain`(默认) +: 来自 `volumeClaimTemplate` 的 PVC 在 Pod 被删除时不受影响。这是此新功能之前的行为。 + + +请记住,这些策略**仅**适用于由于 StatefulSet 被删除或被缩小而被删除的 Pod。 +例如,如果与 StatefulSet 关联的 Pod 由于节点故障而失败, +并且控制平面创建了替换 Pod,则 StatefulSet 保留现有的 PVC。 +现有卷不受影响,集群会将其附加到新 Pod 即将启动的节点上。 + +策略的默认值为 `Retain`,与此新功能之前的 StatefulSet 行为相匹配。 + +这是一个示例策略。 + +```yaml +apiVersion: apps/v1 +kind: StatefulSet +... +spec: + persistentVolumeClaimRetentionPolicy: + whenDeleted: Retain + whenScaled: Delete +... 
+``` + + +StatefulSet {{}}为其 PVC 添加了 +[属主引用](/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents/#owner-references-in-object-specifications), +这些 PVC 在 Pod 终止后被{{}}删除。 +这使 Pod 能够在删除 PVC 之前(以及在删除后备 PV 和卷之前,取决于保留策略)干净地卸载所有卷。 +当你设置 `whenDeleted` 删除策略,对 StatefulSet 实例的属主引用放置在与该 StatefulSet 关联的所有 PVC 上。 + + +`whenScaled` 策略必须仅在 Pod 缩减时删除 PVC,而不是在 Pod 因其他原因被删除时删除。 +执行协调操作时,StatefulSet 控制器将其所需的副本数与集群上实际存在的 Pod 进行比较。 +对于 StatefulSet 中的所有 Pod 而言,如果其 ID 大于副本数,则将被废弃并标记为需要删除。 +如果 `whenScaled` 策略是 `Delete`,则在删除 Pod 之前, +首先将已销毁的 Pod 设置为与 StatefulSet 模板 对应的 PVC 的属主。 +这会导致 PVC 仅在已废弃的 Pod 终止后被垃圾收集。 + + +这意味着如果控制器崩溃并重新启动,在其属主引用更新到适合策略的 Pod 之前,不会删除任何 Pod。 +如果在控制器关闭时强制删除了已废弃的 Pod,则属主引用可能已被设置,也可能未被设置,具体取决于控制器何时崩溃。 +更新属主引用可能需要几个协调循环,因此一些已废弃的 Pod 可能已经被设置了属主引用,而其他可能没有。 +出于这个原因,我们建议等待控制器恢复,控制器将在终止 Pod 之前验证属主引用。 +如果这不可行,则操作员应验证 PVC 上的属主引用,以确保在强制删除 Pod 时删除预期的对象。 + + +### 副本数 {#replicas} + +`.spec.replicas` 是一个可选字段,用于指定所需 Pod 的数量。它的默认值为 1。 + +如果你手动扩缩已部署的负载,例如通过 `kubectl scale statefulset statefulset --replicas=X`, +然后根据清单更新 StatefulSet(例如:通过运行 `kubectl apply -f statefulset.yaml`), +那么应用该清单的操作会覆盖你之前所做的手动缩放。 + + +如果 [HorizontalPodAutoscaler](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/) +(或任何类似的水平缩放 API)正在管理 Statefulset 的缩放, +请不要设置 `.spec.replicas`。 +相反,允许 Kubernetes 控制平面自动管理 `.spec.replicas` 字段。 ## {{% heading "whatsnext" %}} @@ -556,15 +779,15 @@ Please note that this field only works if you enable the `StatefulSetMinReadySec * Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions. --> -* 了解 [Pods](/zh/docs/concepts/workloads/pods)。 +* 了解 [Pod](/zh-cn/docs/concepts/workloads/pods)。 * 了解如何使用 StatefulSet - * 跟随示例[部署有状态应用](/zh/docs/tutorials/stateful-application/basic-stateful-set/)。 - * 跟随示例[使用 StatefulSet 部署 Cassandra](/zh/docs/tutorials/stateful-application/cassandra/)。 - * 跟随示例[运行多副本的有状态应用程序](/zh/docs/tasks/run-application/run-replicated-stateful-application/)。 - * 了解如何[扩缩 StatefulSet](/zh/docs/tasks/run-application/scale-stateful-set/)。 - * 了解[删除 StatefulSet](/zh/docs/tasks/run-application/delete-stateful-set/)涉及到的操作。 - * 了解如何[配置 Pod 以使用卷进行存储](/zh/docs/tasks/configure-pod-container/configure-volume-storage/)。 - * 了解如何[配置 Pod 以使用 PersistentVolume 作为存储](/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)。 + * 跟随示例[部署有状态应用](/zh-cn/docs/tutorials/stateful-application/basic-stateful-set/)。 + * 跟随示例[使用 StatefulSet 部署 Cassandra](/zh-cn/docs/tutorials/stateful-application/cassandra/)。 + * 跟随示例[运行多副本的有状态应用程序](/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/)。 + * 了解如何[扩缩 StatefulSet](/zh-cn/docs/tasks/run-application/scale-stateful-set/)。 + * 了解[删除 StatefulSet](/zh-cn/docs/tasks/run-application/delete-stateful-set/)涉及到的操作。 + * 了解如何[配置 Pod 以使用卷进行存储](/zh-cn/docs/tasks/configure-pod-container/configure-volume-storage/)。 + * 了解如何[配置 Pod 以使用 PersistentVolume 作为存储](/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)。 * `StatefulSet` 是 Kubernetes REST API 中的顶级资源。阅读 {{< api-reference page="workload-resources/stateful-set-v1" >}} 对象定义理解关于该资源的 API。 -* 阅读[Pod 干扰预算(Disruption Budget)](/zh/docs/concepts/workloads/pods/disruptions/),了解如何在干扰下运行高度可用的应用。 +* 阅读 [Pod 干扰预算(Disruption Budget)](/zh-cn/docs/concepts/workloads/pods/disruptions/),了解如何在干扰下运行高度可用的应用。 diff --git a/content/zh/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished.md similarity index 91% rename 
from content/zh/docs/concepts/workloads/controllers/ttlafterfinished.md rename to content/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished.md index 0d02b7a04be73..7887151b9aa82 100644 --- a/content/zh/docs/concepts/workloads/controllers/ttlafterfinished.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished.md @@ -37,7 +37,7 @@ up finished Jobs (either `Complete` or `Failed`) automatically by specifying the TTL-after-finished 控制器只支持 Job。集群操作员可以通过指定 Job 的 `.spec.ttlSecondsAfterFinished` 字段来自动清理已结束的作业(`Complete` 或 `Failed`),如 -[示例](/zh/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) +[示例](/zh-cn/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) 所示。 * 在作业清单(manifest)中指定此字段,以便 Job 在完成后的某个时间被自动清除。 * 将此字段设置为现有的、已完成的作业,以采用此新功能。 -* 在创建作业时使用 [mutating admission webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) +* 在创建作业时使用 [mutating admission webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) 动态设置该字段。集群管理员可以使用它对完成的作业强制执行 TTL 策略。 -* 使用 [mutating admission webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) +* 使用 [mutating admission webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) 在作业完成后动态设置该字段,并根据作业状态、标签等选择不同的 TTL 值。 -* [自动清理 Job](/zh/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) +* [自动清理 Job](/zh-cn/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) * [设计文档](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md) diff --git a/content/zh/docs/concepts/workloads/pods/_index.md b/content/zh-cn/docs/concepts/workloads/pods/_index.md similarity index 95% rename from content/zh/docs/concepts/workloads/pods/_index.md rename to content/zh-cn/docs/concepts/workloads/pods/_index.md index 5eec324a1a0e3..12063e1e843c1 100644 --- a/content/zh/docs/concepts/workloads/pods/_index.md +++ b/content/zh-cn/docs/concepts/workloads/pods/_index.md @@ -52,8 +52,8 @@ during Pod startup. You can also inject for debugging if your cluster offers this. --> 除了应用容器,Pod 还可以包含在 Pod 启动期间运行的 -[Init 容器](/zh/docs/concepts/workloads/pods/init-containers/)。 -你也可以在集群中支持[临时性容器](/zh/docs/concepts/workloads/pods/ephemeral-containers/) +[Init 容器](/zh-cn/docs/concepts/workloads/pods/init-containers/)。 +你也可以在集群中支持[临时性容器](/zh-cn/docs/concepts/workloads/pods/ephemeral-containers/) 的情况下,为调试的目的注入临时性容器。 @@ -271,7 +271,7 @@ When you create the manifest for a Pod object, make sure the name specified is a [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). 
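As a sketch of the `ttlSecondsAfterFinished` field discussed in the `ttlafterfinished.md` hunk above, the following hypothetical Job is cleaned up by the TTL-after-finished controller shortly after it completes or fails; the Job name and command are placeholders.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-ttl-job              # hypothetical name
spec:
  ttlSecondsAfterFinished: 100    # the Job (and its Pods) is deleted ~100s after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: work
        image: busybox:1.36
        command: ["sh", "-c", "echo 'work done'"]
```

Omitting the field leaves the finished Job in place until it is deleted by some other means, which matches the default behaviour described above.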
--> 当你为 Pod 对象创建清单时,要确保所指定的 Pod 名称是合法的 -[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 Pod 中的容器所看到的系统主机名与为 Pod 配置的 `name` 属性值相同。 -[网络](/zh/docs/concepts/cluster-administration/networking/)部分提供了更多有关此内容的信息。 +[网络](/zh-cn/docs/concepts/cluster-administration/networking/)部分提供了更多有关此内容的信息。 -* 了解 [Pod 生命周期](/zh/docs/concepts/workloads/pods/pod-lifecycle/) -* 了解 [RuntimeClass](/zh/docs/concepts/containers/runtime-class/),以及如何使用它 +* 了解 [Pod 生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/) +* 了解 [RuntimeClass](/zh-cn/docs/concepts/containers/runtime-class/),以及如何使用它 来配置不同的 Pod 使用不同的容器运行时配置 -* 了解 [Pod 拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/) -* 了解 [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/),以及你 +* 了解 [Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +* 了解 [PodDisruptionBudget](/zh-cn/docs/concepts/workloads/pods/disruptions/),以及你 如何可以利用它在出现干扰因素时管理应用的可用性。 * Pod 在 Kubernetes REST API 中是一个顶层资源。 {{< api-reference page="workload-resources/pod-v1" >}} diff --git a/content/zh/docs/concepts/workloads/pods/disruptions.md b/content/zh-cn/docs/concepts/workloads/pods/disruptions.md similarity index 94% rename from content/zh/docs/concepts/workloads/pods/disruptions.md rename to content/zh-cn/docs/concepts/workloads/pods/disruptions.md index 6ebb4928cac4a..24e7ac25dc418 100644 --- a/content/zh/docs/concepts/workloads/pods/disruptions.md +++ b/content/zh-cn/docs/concepts/workloads/pods/disruptions.md @@ -60,7 +60,7 @@ an application. Examples are: - 云提供商或虚拟机管理程序中的故障导致的虚拟机消失 - 内核错误 - 节点由于集群网络隔离从集群中消失 -- 由于节点[资源不足](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/)导致 pod 被驱逐。 +- 由于节点[资源不足](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)导致 pod 被驱逐。 集群管理员操作包括: -- [排空(drain)节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)进行修复或升级。 +- [排空(drain)节点](/zh-cn/docs/tasks/administer-cluster/safely-drain-node/)进行修复或升级。 - 从集群中排空节点以缩小集群(了解[集群自动扩缩](https://github.com/kubernetes/autoscaler/#readme))。 - 从节点中移除一个 Pod,以允许其他 Pod 使用该节点。 @@ -145,13 +145,13 @@ and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) or across zones (if using a [multi-zone cluster](/docs/setup/multiple-zones).) 
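The disruption hunks above and below refer to PodDisruptionBudgets; a minimal, hypothetical sketch of one is shown here. The name and label selector are placeholders and should match the labels of the application you actually want to protect.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-pdb                  # hypothetical name
spec:
  minAvailable: 2                 # voluntary evictions (for example during `kubectl drain`)
                                  # are refused if fewer than 2 matching Pods would remain
  selector:
    matchLabels:
      app: demo
```

A `maxUnavailable` value can be used instead of `minAvailable` when expressing the budget as a ceiling on simultaneous disruptions is more natural.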
--> -- 确保 Pod 在请求中给出[所需资源](/zh/docs/tasks/configure-pod-container/assign-memory-resource/)。 +- 确保 Pod 在请求中给出[所需资源](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/)。 - 如果需要更高的可用性,请复制应用程序。 - (了解有关运行多副本的[无状态](/zh/docs/tasks/run-application/run-stateless-application-deployment/) - 和[有状态](/zh/docs/tasks/run-application/run-replicated-stateful-application/)应用程序的信息。) + (了解有关运行多副本的[无状态](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/) + 和[有状态](/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/)应用程序的信息。) - 为了在运行复制应用程序时获得更高的可用性,请跨机架(使用 - [反亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) - 或跨区域(如果使用[多区域集群](/zh/docs/setup/best-practices/multiple-zones/))扩展应用程序。 + [反亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) + 或跨区域(如果使用[多区域集群](/zh-cn/docs/setup/best-practices/multiple-zones/))扩展应用程序。 集群管理员和托管提供商应该使用遵循 PodDisruptionBudgets 的接口 -(通过调用[Eviction API](/zh/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api)), +(通过调用[Eviction API](/zh-cn/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api)), 而不是直接删除 Pod 或 Deployment。 当使用驱逐 API 驱逐 Pod 时,Pod 会被体面地 -[终止](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination),期间会 +[终止](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination),期间会 参考 [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) 中的 `terminationGracePeriodSeconds` 配置值。 @@ -315,7 +315,7 @@ Both pods go into the `terminating` state at the same time. This puts the cluster in this state: --> -例如,假设集群管理员想要重启系统,升级内核版本来修复内核中的权限。 +例如,假设集群管理员想要重启系统,升级内核版本来修复内核中的缺陷。 集群管理员首先使用 `kubectl drain` 命令尝试排空 `node-1` 节点。 命令尝试驱逐 `pod-a` 和 `pod-x`。操作立即就成功了。 两个 Pod 同时进入 `terminating` 状态。这时的集群处于下面的状态: @@ -506,7 +506,7 @@ the nodes in your cluster, such as a node or system software upgrade, here are s * Learn about [updating a deployment](/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) including steps to maintain its availability during the rollout. --> -* 参考[配置 Pod 干扰预算](/zh/docs/tasks/run-application/configure-pdb/)中的方法来保护你的应用。 -* 进一步了解[排空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)的信息。 -* 了解[更新 Deployment](/zh/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) +* 参考[配置 Pod 干扰预算](/zh-cn/docs/tasks/run-application/configure-pdb/)中的方法来保护你的应用。 +* 进一步了解[排空节点](/zh-cn/docs/tasks/administer-cluster/safely-drain-node/)的信息。 +* 了解[更新 Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) 的过程,包括如何在其进程中维持应用的可用性 diff --git a/content/zh/docs/concepts/workloads/pods/ephemeral-containers.md b/content/zh-cn/docs/concepts/workloads/pods/ephemeral-containers.md similarity index 96% rename from content/zh/docs/concepts/workloads/pods/ephemeral-containers.md rename to content/zh-cn/docs/concepts/workloads/pods/ephemeral-containers.md index 0535b4afe3366..a4649913043ad 100644 --- a/content/zh/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/zh-cn/docs/concepts/workloads/pods/ephemeral-containers.md @@ -124,11 +124,11 @@ sharing](/docs/tasks/configure-pod-container/share-process-namespace/) so you can view processes in other containers. 
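For the ephemeral-container debugging workflow mentioned above, a typical invocation looks like the following; the Pod name `mypod` and the container name `app` are hypothetical, and the cluster must support ephemeral containers.

```shell
# --target shares the process namespace of the named container, so its processes
# are visible from the debugging shell started by this command.
kubectl debug -it mypod --image=busybox:1.36 --target=app
```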
--> 使用临时容器时,启用 -[进程名字空间共享](/zh/docs/tasks/configure-pod-container/share-process-namespace/) +[进程名字空间共享](/zh-cn/docs/tasks/configure-pod-container/share-process-namespace/) 很有帮助,可以查看其他容器中的进程。 {{% heading "whatsnext" %}} -* 了解如何[使用临时调试容器来进行调试](/zh/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container) +* 了解如何[使用临时调试容器来进行调试](/zh-cn/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container) diff --git a/content/zh/docs/concepts/workloads/pods/init-containers.md b/content/zh-cn/docs/concepts/workloads/pods/init-containers.md similarity index 95% rename from content/zh/docs/concepts/workloads/pods/init-containers.md rename to content/zh-cn/docs/concepts/workloads/pods/init-containers.md index 198111fe044db..18782f28cef39 100644 --- a/content/zh/docs/concepts/workloads/pods/init-containers.md +++ b/content/zh-cn/docs/concepts/workloads/pods/init-containers.md @@ -145,8 +145,8 @@ have some advantages for start-up related code: * Init 容器能以不同于 Pod 内应用容器的文件系统视图运行。因此,Init 容器可以访问 应用容器不能访问的 {{< glossary_tooltip text="Secret" term_id="secret" >}} 的权限。 -* 由于 Init 容器必须在应用容器启动之前运行完成,因此 Init 容器 - 提供了一种机制来阻塞或延迟应用容器的启动,直到满足了一组先决条件。 +* 由于 Init 容器必须在应用容器启动之前运行完成,因此 Init + 容器提供了一种机制来阻塞或延迟应用容器的启动,直到满足了一组先决条件。 一旦前置条件满足,Pod 内的所有的应用容器会并行启动。 ### 示例 {#examples} 下面是一些如何使用 Init 容器的想法: -* 等待一个 Service 完成创建,通过类似如下 shell 命令: +* 等待一个 Service 完成创建,通过类似如下 Shell 命令: ```shell for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1 ``` + * 注册这个 Pod 到远程服务器,通过在命令中调用 API,类似如下: ```shell - curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register \ - -d 'instance=$()&ip=$()' + curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()' ``` + * 在启动应用容器之前等一段时间,使用类似命令: ```shell sleep 60 ``` + * 克隆 Git 仓库到{{< glossary_tooltip text="卷" term_id="volume" >}}中。 * 将配置值放到配置文件中,运行模板工具为主应用容器动态地生成配置文件。 @@ -249,6 +242,7 @@ kubectl apply -f myapp.yaml The output is similar to this: --> 输出类似于: + ``` pod/myapp-pod created ``` @@ -261,10 +255,12 @@ And check on its status with: ```shell kubectl get -f myapp.yaml ``` + 输出类似于: + ``` NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 6m @@ -278,10 +274,12 @@ or for more details: ```shell kubectl describe -f myapp.yaml ``` + 输出类似于: + ``` Name: myapp-pod Namespace: default @@ -408,13 +406,26 @@ init containers. [What's next](#what-s-next) contains a link to a more detailed During Pod startup, the kubelet delays running init containers until the networking and storage are ready. Then the kubelet runs the Pod's init containers in the order they appear in the Pod's spec. +--> +## 具体行为 {#detailed-behavior} + +在 Pod 启动过程中,每个 Init 容器会在网络和数据卷初始化之后按顺序启动。 +kubelet 运行依据 Init 容器在 Pod 规约中的出现顺序依次运行之。 + +每个 Init 容器成功退出后才会启动下一个 Init 容器。 +如果某容器因为容器运行时的原因无法启动,或以错误状态退出,kubelet 会根据 +Pod 的 `restartPolicy` 策略进行重试。 +然而,如果 Pod 的 `restartPolicy` 设置为 "Always",Init 容器失败时会使用 +`restartPolicy` 的 "OnFailure" 策略。 + -## 具体行为 {#detailed-behavior} - -在 Pod 启动过程中,每个 Init 容器会在网络和数据卷初始化之后按顺序启动。 -kubelet 运行依据 Init 容器在 Pod 规约中的出现顺序依次运行之。 - -每个 Init 容器成功退出后才会启动下一个 Init 容器。 -如果某容器因为容器运行时的原因无法启动,或以错误状态退出,kubelet 会根据 -Pod 的 `restartPolicy` 策略进行重试。 -然而,如果 Pod 的 `restartPolicy` 设置为 "Always",Init 容器失败时会使用 -`restartPolicy` 的 "OnFailure" 策略。 - 在所有的 Init 容器没有成功之前,Pod 将不会变成 `Ready` 状态。 Init 容器的端口将不会在 Service 中进行聚集。正在初始化中的 Pod 处于 `Pending` 状态, 但会将状况 `Initializing` 设置为 false。 @@ -446,11 +446,6 @@ Altering an init container image field is equivalent to restarting the Pod. 
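Condensing the "wait for a Service" idea from the init-container examples above into a single sketch: the Pod, Service (`demo-db`) and application image below are hypothetical and are not the `myapp.yaml` example referenced by the page.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                      # hypothetical Pod
spec:
  initContainers:
  - name: wait-for-db                 # must exit successfully before the app container starts
    image: busybox:1.36
    command: ['sh', '-c', 'until nslookup demo-db; do echo waiting for demo-db; sleep 2; done']
  containers:
  - name: app
    image: registry.example/app:1.0   # placeholder image
```

Because init containers can be re-run, the wait loop above is naturally idempotent, which is the property the section below asks for.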
Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that writes to files on `EmptyDirs` should be prepared for the possibility that an output file already exists. - -Init containers have all of the fields of an app container. However, Kubernetes -prohibits `readinessProbe` from being used because init containers cannot -define readiness distinct from completion. This is enforced during validation. - --> 对 Init 容器规约的修改仅限于容器的 `image` 字段。 更改 Init 容器的 `image` 字段,等同于重启该 Pod。 @@ -458,6 +453,11 @@ define readiness distinct from completion. This is enforced during validation. 因为 Init 容器可能会被重启、重试或者重新执行,所以 Init 容器的代码应该是幂等的。 特别地,基于 `emptyDirs` 写文件的代码,应该对输出文件可能已经存在做好准备。 + Init 容器具有应用容器的所有字段。然而 Kubernetes 禁止使用 `readinessProbe`, 因为 Init 容器不能定义不同于完成态(Completion)的就绪态(Readiness)。 Kubernetes 会在校验时强制执行此检查。 @@ -487,31 +487,36 @@ Init 容器一直重复失败。 Given the ordering and execution for init containers, the following rules for resource usage apply: +--> +### 资源 {#resources} + +在给定的 Init 容器执行顺序下,资源使用适用于如下规则: + +* 所有 Init 容器上定义的任何特定资源的 limit 或 request 的最大值,作为 + Pod **有效初始 request/limit**。 + 如果任何资源没有指定资源限制,这被视为最高限制。 +* Pod 对资源的 **有效 limit/request** 是如下两者中的较大者: + * 所有应用容器对某个资源的 limit/request 之和 + * 对某个资源的有效初始 limit/request + + -### 资源 {#resources} - -在给定的 Init 容器执行顺序下,资源使用适用于如下规则: - -* 所有 Init 容器上定义的任何特定资源的 limit 或 request 的最大值,作为 Pod *有效初始 request/limit*。 - 如果任何资源没有指定资源限制,这被视为最高限制。 -* Pod 对资源的 *有效 limit/request* 是如下两者的较大者: - * 所有应用容器对某个资源的 limit/request 之和 - * 对某个资源的有效初始 limit/request * 基于有效 limit/request 完成调度,这意味着 Init 容器能够为初始化过程预留资源, 这些资源在 Pod 生命周期过程中并没有被使用。 -* Pod 的 *有效 QoS 层* ,与 Init 容器和应用容器的一样。 +* Pod 的 **有效 QoS 层** ,与 Init 容器和应用容器的一样。 +### Pod 重启的原因 {#pod-restart-reasons} + +Pod 重启会导致 Init 容器重新执行,主要有如下几个原因: + -### Pod 重启的原因 {#pod-restart-reasons} - -Pod 重启会导致 Init 容器重新执行,主要有如下几个原因: - * Pod 的基础设施容器 (译者注:如 `pause` 容器) 被重启。这种情况不多见, 必须由具备 root 权限访问节点的人员来完成。 @@ -549,8 +555,8 @@ applies for Kubernetes v1.20 and later. If you are using an earlier version of Kubernetes, consult the documentation for the version you are using. 
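To illustrate the effective request/limit rules stated above, here is a hypothetical Pod with arbitrary resource numbers; the comments work through the calculation, and the images are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resources-demo                   # hypothetical Pod
spec:
  initContainers:
  - name: init-migrate
    image: busybox:1.36
    command: ['sh', '-c', 'echo migrating && sleep 5']
    resources:
      requests:
        cpu: "500m"                      # highest init-container request
        memory: "256Mi"
  containers:
  - name: app
    image: registry.example/app:1.0      # placeholder image
    resources:
      requests:
        cpu: "200m"
        memory: "128Mi"
  - name: sidecar
    image: registry.example/sidecar:1.0  # placeholder image
    resources:
      requests:
        cpu: "100m"
        memory: "64Mi"
# Effective CPU request used for scheduling:    max(500m, 200m + 100m) = 500m
# Effective memory request used for scheduling: max(256Mi, 128Mi + 64Mi) = 256Mi
```

In this sketch the init container reserves more than the running containers ever use together, which is exactly the "resources reserved for initialization but unused afterwards" situation the text describes.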
--> 当 Init 容器的镜像发生改变或者 Init 容器的完成记录因为垃圾收集等原因被丢失时, -Pod 不会被重启。这一行为适用于 Kubernetes v1.20 及更新版本。如果你在使用较早 -版本的 Kubernetes,可查阅你所使用的版本对应的文档。 +Pod 不会被重启。这一行为适用于 Kubernetes v1.20 及更新版本。 +如果你在使用较早版本的 Kubernetes,可查阅你所使用的版本对应的文档。 ## {{% heading "whatsnext" %}} @@ -558,5 +564,6 @@ Pod 不会被重启。这一行为适用于 Kubernetes v1.20 及更新版本。 * Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) * Learn how to [debug init containers](/docs/tasks/debug/debug-application/debug-init-containers/) --> -* 阅读[创建包含 Init 容器的 Pod](/zh/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) -* 学习如何[调试 Init 容器](/zh/docs/tasks/debug/debug-application/debug-init-containers/) +* 阅读[创建包含 Init 容器的 Pod](/zh-cn/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) +* 学习如何[调试 Init 容器](/zh-cn/docs/tasks/debug/debug-application/debug-init-containers/) + diff --git a/content/zh/docs/concepts/workloads/pods/pod-lifecycle.md b/content/zh-cn/docs/concepts/workloads/pods/pod-lifecycle.md similarity index 96% rename from content/zh/docs/concepts/workloads/pods/pod-lifecycle.md rename to content/zh-cn/docs/concepts/workloads/pods/pod-lifecycle.md index 345879957334b..4c120be05a3f8 100644 --- a/content/zh/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/zh-cn/docs/concepts/workloads/pods/pod-lifecycle.md @@ -45,7 +45,7 @@ or is [terminated](#pod-termination). Pod 对象的状态包含了一组 [Pod 状况(Conditions)](#pod-conditions)。 如果应用需要的话,你也可以向其中注入[自定义的就绪性信息](#pod-readiness-gate)。 -Pod 在其生命周期中只会被[调度](/zh/docs/concepts/scheduling-eviction/)一次。 +Pod 在其生命周期中只会被[调度](/zh-cn/docs/concepts/scheduling-eviction/)一次。 一旦 Pod 被调度(分派)到某个节点,Pod 会一直在该节点运行,直到 Pod 停止或者 被[终止](#pod-termination)。 @@ -66,7 +66,7 @@ are [scheduled for deletion](#pod-garbage-collection) after a timeout period. 
和一个个独立的应用容器一样,Pod 也被认为是相对临时性(而不是长期存在)的实体。 Pod 会被创建、赋予一个唯一的 -ID([UID](/zh/docs/concepts/overview/working-with-objects/names/#uids)), +ID([UID](/zh-cn/docs/concepts/overview/working-with-objects/names/#uids)), 并被调度到节点,并在终止(根据重启策略)或删除之前一直运行在该节点。 如果一个{{< glossary_tooltip text="节点" term_id="node" >}}死掉了,调度到该节点 @@ -182,7 +182,7 @@ There are three possible container states: `Waiting`, `Running`, and `Terminated ## 容器状态 {#container-states} Kubernetes 会跟踪 Pod 中每个容器的状态,就像它跟踪 Pod 总体上的[阶段](#pod-phase)一样。 -你可以使用[容器生命周期回调](/zh/docs/concepts/containers/container-lifecycle-hooks/) +你可以使用[容器生命周期回调](/zh-cn/docs/concepts/containers/container-lifecycle-hooks/) 来在容器生命周期中的特定时间点触发事件。 一旦{{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}}将 Pod @@ -306,7 +306,7 @@ Pod 有一个 PodStatus 对象,其中包含一个 --> * `PodScheduled`:Pod 已经被调度到某节点; * `ContainersReady`:Pod 中所有容器都已就绪; -* `Initialized`:所有的 [Init 容器](/zh/docs/concepts/workloads/pods/init-containers/) +* `Initialized`:所有的 [Init 容器](/zh-cn/docs/concepts/workloads/pods/init-containers/) 都已成功完成; * `Ready`:Pod 可以为请求提供服务,并且应该被添加到对应服务的负载均衡池中。 @@ -382,8 +382,8 @@ status: -你所添加的 Pod 状况名称必须满足 Kubernetes -[标签键名格式](/zh/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)。 +你所添加的 Pod 状况名称必须满足 Kubernetes +[标签键名格式](/zh-cn/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)。 ## 容器探针 {#container-probes} -probe 是由 [kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) 对容器执行的定期诊断。 +probe 是由 [kubelet](/zh-cn/docs/reference/command-line-tools-reference/kubelet/) 对容器执行的定期诊断。 要执行诊断,kubelet 既可以在容器内执行代码,也可以发出一个网络请求。 如欲了解如何设置存活态、就绪态和启动探针的进一步细节,可以参阅 -[配置存活态、就绪态和启动探针](/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)。 +[配置存活态、就绪态和启动探针](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)。 -如果容器中的进程能够在遇到问题或不健康的情况下自行崩溃,则不一定需要存活态探针; -`kubelet` 将根据 Pod 的`restartPolicy` 自动执行修复操作。 +如果容器中的进程能够在遇到问题或不健康的情况下自行崩溃,则不一定需要存活态探针; +`kubelet` 将根据 Pod 的 `restartPolicy` 自动执行修复操作。 如果你希望容器在探测失败时被杀死并重新启动,那么请指定一个存活态探针, -并指定`restartPolicy` 为 "`Always`" 或 "`OnFailure`"。 +并指定 `restartPolicy` 为 "`Always`" 或 "`OnFailure`"。 1. 
如果 Pod 中的容器之一定义了 `preStop` - [回调](/zh/docs/concepts/containers/container-lifecycle-hooks), + [回调](/zh-cn/docs/concepts/containers/container-lifecycle-hooks), `kubelet` 开始在容器内运行该回调逻辑。如果超出体面终止限期时,`preStop` 回调逻辑 仍在运行,`kubelet` 会请求给予该 Pod 的宽限期一次性增加 2 秒钟。 @@ -874,7 +874,7 @@ API 服务器直接删除 Pod 对象,这样新的与之同名的 Pod 即可以 在节点侧,被设置为立即终止的 Pod 仍然会在被强行杀死之前获得一点点的宽限时间。 如果你需要强制删除 StatefulSet 的 Pod,请参阅 -[从 StatefulSet 中删除 Pod](/zh/docs/tasks/run-application/force-delete-stateful-set-pod/) +[从 StatefulSet 中删除 Pod](/zh-cn/docs/tasks/run-application/force-delete-stateful-set-pod/) 的任务文档。 -* 动手实践[为容器生命周期时间关联处理程序](/zh/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)。 -* 动手实践[配置存活态、就绪态和启动探针](/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)。 -* 进一步了解[容器生命周期回调](/zh/docs/concepts/containers/container-lifecycle-hooks/)。 +* 动手实践[为容器生命周期时间关联处理程序](/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)。 +* 动手实践[配置存活态、就绪态和启动探针](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)。 +* 进一步了解[容器生命周期回调](/zh-cn/docs/concepts/containers/container-lifecycle-hooks/)。 * 关于 API 中定义的有关 Pod 和容器状态的详细规范信息, 可参阅 API 参考文档中 Pod 的 [`.status`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodStatus) 字段。 diff --git a/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints.md similarity index 82% rename from content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md rename to content/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index ab91304c04e71..40ef623ad6b2a 100644 --- a/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -10,19 +10,14 @@ content_type: concept weight: 40 --> -{{< feature-state for_k8s_version="v1.19" state="stable" >}} - - 你可以使用 _拓扑分布约束(Topology Spread Constraints)_ 来控制 -{{< glossary_tooltip text="Pods" term_id="Pod" >}} 在集群内故障域 -之间的分布,例如区域(Region)、可用区(Zone)、节点和其他用户自定义拓扑域。 +{{< glossary_tooltip text="Pod" term_id="Pod" >}} 在集群内故障域之间的分布, +例如区域(Region)、可用区(Zone)、节点和其他用户自定义拓扑域。 这样做有助于实现高可用并提升资源利用率。 @@ -81,7 +76,7 @@ graph TB -你可以复用在大多数集群上自动创建和填充的[常用标签](/zh/docs/reference/labels-annotations-taints/), +你可以复用在大多数集群上自动创建和填充的[常用标签](/zh-cn/docs/reference/labels-annotations-taints/), 而不是手动添加标签。 + +- **maxSkew** 描述 Pod 分布不均的程度。这是给定拓扑类型中任意两个拓扑域中匹配的 + Pod 之间的最大允许差值。它必须大于零。取决于 `whenUnsatisfiable` 的取值, + 其语义会有不同。 + - 当 `whenUnsatisfiable` 等于 "DoNotSchedule" 时,`maxSkew` 是目标拓扑域中匹配的 + Pod 数与全局最小值(一个拓扑域中与标签选择器匹配的 Pod 的最小数量。例如,如果你有 + 3 个区域,分别具有 0 个、2 个 和 3 个匹配的 Pod,则全局最小值为 0。)之间可存在的差异。 + - 当 `whenUnsatisfiable` 等于 "ScheduleAnyway" 时,调度器会更为偏向能够降低偏差值的拓扑域。 + + +- **minDomains** 表示符合条件的域的最小数量。域是拓扑的一个特定实例。 + 符合条件的域是其节点与节点选择器匹配的域。 + + - 指定的 `minDomains` 的值必须大于 0。 + - 当符合条件的、拓扑键匹配的域的数量小于 `minDomains` 时,Pod 拓扑分布将“全局最小值” + (global minimum)设为 0,然后进行 `skew` 计算。“全局最小值”是一个符合条件的域中匹配 + Pod 的最小数量,如果符合条件的域的数量小于 `minDomains`,则全局最小值为零。 + - 当符合条件的拓扑键匹配域的个数等于或大于 `minDomains` 时,该值对调度没有影响。 + - 当 `minDomains` 为 nil 时,约束的行为等于 `minDomains` 为 1。 + - 当 `minDomains` 不为 nil 时,`whenUnsatisfiable` 的值必须为 "`DoNotSchedule`" 。 + + {{< note >}} + + `minDomains` 字段是在 1.24 版本中新增的 alpha 字段。你必须启用 + `MinDomainsInPodToplogySpread` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)才能使用它。 + {{< /note >}} + + - -- **maxSkew** 描述 Pod 分布不均的程度。这是给定拓扑类型中任意两个拓扑域中 - 匹配的 pod 之间的最大允许差值。它必须大于零。取决于 `whenUnsatisfiable` 
的 - 取值,其语义会有不同。 - - 当 `whenUnsatisfiable` 等于 "DoNotSchedule" 时,`maxSkew` 是目标拓扑域 - 中匹配的 Pod 数与全局最小值之间可存在的差异。 - - 当 `whenUnsatisfiable` 等于 "ScheduleAnyway" 时,调度器会更为偏向能够降低 - 偏差值的拓扑域。 - **topologyKey** 是节点标签的键。如果两个节点使用此键标记并且具有相同的标签值, - 则调度器会将这两个节点视为处于同一拓扑域中。调度器试图在每个拓扑域中放置数量 - 均衡的 Pod。 + 则调度器会将这两个节点视为处于同一拓扑域中。调度器试图在每个拓扑域中放置数量均衡的 Pod。 + - **whenUnsatisfiable** 指示如果 Pod 不满足分布约束时如何处理: - `DoNotSchedule`(默认)告诉调度器不要调度。 - - `ScheduleAnyway` 告诉调度器仍然继续调度,只是根据如何能将偏差最小化来对 - 节点进行排序。 -- **labelSelector** 用于查找匹配的 pod。匹配此标签的 Pod 将被统计,以确定相应 - 拓扑域中 Pod 的数量。 - 有关详细信息,请参考[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)。 + - `ScheduleAnyway` 告诉调度器仍然继续调度,只是根据如何能将偏差最小化来对节点进行排序。 + +- **labelSelector** 用于查找匹配的 Pod。匹配此标签的 Pod 将被统计, + 以确定相应拓扑域中 Pod 的数量。 + 有关详细信息,请参考[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors)。 -你可以执行 `kubectl explain Pod.spec.topologySpreadConstraints` 命令以 -了解关于 topologySpreadConstraints 的更多信息。 +你可以执行 `kubectl explain Pod.spec.topologySpreadConstraints` +命令以了解关于 topologySpreadConstraints 的更多信息。 `topologyKey: zone` 意味着均匀分布将只应用于存在标签键值对为 -"zone:<any value>" 的节点。 +"zone:<任何值>" 的节点。 `whenUnsatisfiable: DoNotSchedule` 告诉调度器如果新的 Pod 不满足约束, 则让它保持悬决状态。 -如果调度器将新的 Pod 放入 "zoneA",Pods 分布将变为 [3, 1],因此实际的偏差 -为 2(3 - 1)。这违反了 `maxSkew: 1` 的约定。此示例中,新 Pod 只能放置在 +如果调度器将新的 Pod 放入 "zoneA",Pods 分布将变为 [3, 1],因此实际的偏差为 +2(3 - 1)。这违反了 `maxSkew: 1` 的约定。此示例中,新 Pod 只能放置在 "zoneB" 上: {{}} @@ -313,8 +351,8 @@ You can use 2 TopologySpreadConstraints to control the Pods spreading on both zo In this case, to match the first constraint, the incoming Pod can only be placed onto "zoneB"; while in terms of the second constraint, the incoming Pod can only be placed onto "node4". Then the results of 2 constraints are ANDed, so the only viable option is to place on "node4". --> 在这种情况下,为了匹配第一个约束,新的 Pod 只能放置在 "zoneB" 中;而在第二个约束中, -新的 Pod 只能放置在 "node4" 上。最后两个约束的结果加在一起,唯一可行的选择是放置 -在 "node4" 上。 +新的 Pod 只能放置在 "node4" 上。最后两个约束的结果加在一起,唯一可行的选择是放置在 +"node4" 上。 如果对集群应用 "two-constraints.yaml",会发现 "mypod" 处于 `Pending` 状态。 这是因为:为了满足第一个约束,"mypod" 只能放在 "zoneB" 中,而第二个约束要求 @@ -406,7 +444,7 @@ class zoneC cluster; 而且你知道 "zoneC" 必须被排除在外。在这种情况下,可以按如下方式编写 YAML, 以便将 "mypod" 放置在 "zoneB" 上,而不是 "zoneC" 上。同样,`spec.nodeSelector` @@ -437,7 +475,7 @@ There are some implicit conventions worth noting here: - The scheduler will bypass the nodes without `topologySpreadConstraints[*].topologyKey` present. This implies that: 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA". - 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". + 2. the incoming Pod has no chances to be scheduled onto such nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". 
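Pulling the fields discussed above together, a minimal Pod sketch with a single topology spread constraint might look like the following; it assumes nodes carry a `zone` label as in the page's examples, and the Pod name, label and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo                 # hypothetical Pod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1                      # matching Pods may differ by at most 1 between zones
    topologyKey: zone               # assumes nodes are labelled with a "zone" key
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar                    # Pods carrying foo=bar are counted when computing skew
  containers:
  - name: app
    image: registry.example/app:1.0 # placeholder image
```

With `DoNotSchedule`, a placement that would push the skew above 1 leaves the Pod pending, which is the behaviour worked through in the zoneA/zoneB example above.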
--> - 只有与新的 Pod 具有相同命名空间的 Pod 才能作为匹配候选者。 - 调度器会忽略没有 `topologySpreadConstraints[*].topologyKey` 的节点。这意味着: @@ -452,8 +490,8 @@ There are some implicit conventions worth noting here: -- 注意,如果新 Pod 的 `topologySpreadConstraints[*].labelSelector` 与自身的 - 标签不匹配,将会发生什么。 +- 注意,如果新 Pod 的 `topologySpreadConstraints[*].labelSelector` + 与自身的标签不匹配,将会发生什么。 在上面的例子中,如果移除新 Pod 上的标签,Pod 仍然可以调度到 "zoneB",因为约束仍然满足。 然而,在调度之后,集群的不平衡程度保持不变。zoneA 仍然有 2 个带有 {foo:bar} 标签的 Pod, zoneB 有 1 个带有 {foo:bar} 标签的 Pod。 @@ -471,8 +509,8 @@ topology spread constraints are applied to a Pod if, and only if: --> ### 集群级别的默认约束 {#cluster-level-default-constraints} -为集群设置默认的拓扑分布约束也是可能的。默认拓扑分布约束在且仅在以下条件满足 -时才会应用到 Pod 上: +为集群设置默认的拓扑分布约束也是可能的。 +默认拓扑分布约束在且仅在以下条件满足时才会被应用到 Pod 上: - Pod 没有在其 `.spec.topologySpreadConstraints` 设置任何约束; - Pod 隶属于某个服务、副本控制器、ReplicaSet 或 StatefulSet。 @@ -486,7 +524,7 @@ replication controllers, replica sets or stateful sets that the Pod belongs to. An example configuration might look like follows: --> -你可以在 [调度方案(Scheduling Profile)](/zh/docs/reference/scheduling/config/#profiles) +你可以在 [调度方案(Scheduling Profile)](/zh-cn/docs/reference/scheduling/config/#profiles) 中将默认约束作为 `PodTopologySpread` 插件参数的一部分来设置。 约束的设置采用[如前所述的 API](#api),只是 `labelSelector` 必须为空。 选择算符是根据 Pod 所属的服务、副本控制器、ReplicaSet 或 StatefulSet 来设置的。 @@ -511,16 +549,12 @@ profiles: {{< note >}} -默认调度约束所生成的评分可能与 -[`SelectorSpread` 插件](/zh/docs/reference/scheduling/config/#scheduling-plugins) -所生成的评分有冲突。 -建议你在为 `PodTopologySpread` 设置默认约束是禁用调度方案中的该插件。 +[`SelectorSpread` 插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins)默认是被禁用的。 +建议使用 `PodTopologySpread` 来实现类似的行为。 {{< /note >}} #### 内部默认约束 {#internal-default-constraints} -{{< feature-state for_k8s_version="v1.20" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} -当你使用了默认启用的 `DefaultPodTopologySpread` 特性门控时,原来的 -`SelectorSpread` 插件会被禁用。 -kube-scheduler 会使用下面的默认拓扑约束作为 `PodTopologySpread` 插件的 -配置: +如果你没有为 Pod 拓扑分布配置任何集群级别的默认约束, +kube-scheduler 的行为就像你指定了以下默认拓扑约束一样: ```yaml defaultConstraints: @@ -553,9 +583,9 @@ defaultConstraints: -此外,原来用于提供等同行为的 `SelectorSpread` 插件也会被禁用。 +此外,原来用于提供等同行为的 `SelectorSpread` 插件默认被禁用。 {{< note >}} + + + + +*Kubernetes 欢迎来自所有贡献者的改进,无论你是新人和有经验的贡献者!* + + +{{< note >}} +要了解有关为 Kubernetes 做出贡献的更多信息,请参阅 +[贡献者文档](https://www.kubernetes.dev/docs/)。 + +你还可以阅读 +{{< glossary_tooltip text="CNCF" term_id="cncf" >}} +关于为 Kubernetes 做贡献的[页面](https://contribute.cncf.io/contributors/projects/#kubernetes)。 + + +本网站由 [Kubernetes SIG Docs](/zh-cn/docs/contribute/#get-involved-with-SIG-Docs)(文档特别兴趣小组)维护。 + +Kubernetes 文档项目的贡献者: + +- 改进现有内容 +- 创建新内容 +- 翻译文档 +- 管理并发布 Kubernetes 周期性发行版的文档 + + + + +## 入门 {#getting-started} + +任何人都可以提出文档方面的问题(issue),或贡献一个变更,用拉取请求(PR)的方式提交到 +[GitHub 上的 `kubernetes/website` 仓库](https://github.com/kubernetes/website)。 +当然你需要熟练使用 [git](https://git-scm.com/) 和 [GitHub](https://lab.github.com/) 才能在 Kubernetes 社区中有效工作。 + + +如何参与文档编制: + +1. 签署 CNCF 的[贡献者许可协议](https://github.com/kubernetes/community/blob/master/CLA.md)。 +2. 熟悉[文档仓库](https://github.com/kubernetes/website)和网站的[静态站点生成器](https://gohugo.io)。 +3. 确保理解[发起 PR](/zh-cn/docs/contribute/new-content/open-a-pr/) 和[审查变更](/zh-cn/docs/contribute/review/reviewing-prs/)的基本流程。 + + + + +{{< mermaid >}} +flowchart TB +subgraph third[发起 PR] +direction TB +U[ ] -.- +Q[改进现有内容] --- N[创建新内容] +N --- O[翻译文档] +O --- P[管理并发布 K8s
        周期性发行版的文档] + +end + +subgraph second[评审] +direction TB + T[ ] -.- + D[仔细查看
        K8s/website
        仓库] --- E[下载安装 Hugo
        静态站点
        生成器] + E --- F[了解基本的
        GitHub 命令] + F --- G[评审待处理的 PR
        并遵从变更审查
        流程] +end + +subgraph first[注册] + direction TB + S[ ] -.- + B[签署 CNCF
        贡献者
        许可协议] --- C[加入 sig-docs
        Slack 频道] + C --- V[加入 kubernetes-sig-docs
        邮件列表] + V --- M[参加每周的
        sig-docs 电话会议
        或 slack 会议] +end + +A([fa:fa-user 新的
        贡献者]) --> first +A --> second +A --> third +A --> H[提出问题!!!] + + +classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px; +classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold +classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000 +class A,B,C,D,E,F,G,H,M,Q,N,O,P,V grey +class S,T,U spacewhite +class first,second,third white +{{}} + + +图 1. 新手入门指示。 + +图 1 概述了新贡献者的路线图。 +你可以遵从“注册”和“评审”所述的某些或全部步骤。 +至此,你完成了发起 PR 的准备工作, +可以通过“发起 PR” 列出的事项实现你的贡献目标。 +再次重申,欢迎随时提出问题! + + +有些任务要求 Kubernetes 组织内更高的信任级别和访问权限。 +阅读[参与 SIG Docs 工作](/zh-cn/docs/contribute/participate/),获取角色和权限的更多细节。 + + +## 第一次贡献 {#your-first-contribution} + +你可以提前查阅几个步骤,来准备你的第一次贡献。 +图 2 概述了后续的步骤和细节。 + + + + +{{< mermaid >}} +flowchart LR + subgraph second[第一次贡献] + direction TB + S[ ] -.- + G[查阅其他 K8s
        成员发起的 PR] --> + A[检索 K8s/website
        问题列表是否有
        good first 一类的 PR] --> B[发起一个 PR!!] + end + subgraph first[建议的准备工作] + direction TB + T[ ] -.- + D[阅读贡献概述] -->E[阅读 K8s 内容
        和风格指南] + E --> F[了解 Hugo 页面
        内容类型
        和短代码] + end + + + first ----> second + + +classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px; +classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold +classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000 +class A,B,D,E,F,G grey +class S,T spacewhite +class first,second white +{{}} + + +图 2. 第一次贡献的准备工作。 + + +- 通读[贡献概述](/zh-cn/docs/contribute/new-content/),了解参与贡献的不同方式。 +- 查看 [`kubernetes/website` 问题列表](https://github.com/kubernetes/website/issues/), + 检索最适合作为切入点的问题。 +- 在现有文档上,[使用 GitHub 提交 PR](/zh-cn/docs/contribute/new-content/open-a-pr/#changes-using-github), + 掌握在 GitHub 上登记 Issue 的方法。 +- Kubernetes 社区其他成员会[评审 PR ](/zh-cn/docs/contribute/review/reviewing-prs/), + 以确保文档精准和语言流畅。 +- 阅读 kubernetes 的[内容指南](/zh-cn/docs/contribute/style/content-guide/)和 + [风格指南](/zh-cn/docs/contribute/style/style-guide/),以发表有见地的评论。 +- 了解[页面内容类型](/zh-cn/docs/contribute/style/page-content-types/)和 + [Hugo 短代码](/zh-cn/docs/contribute/style/hugo-shortcodes/)。 + + +## 下一步 {#next-teps} + +- 学习在仓库的[本地克隆中工作](/zh-cn/docs/contribute/new-content/open-a-pr/#fork-the-repo)。 +- 为[发行版的特性](/zh-cn/docs/contribute/new-content/new-features/)编写文档。 +- 加入 [SIG Docs](/zh-cn/docs/contribute/participate/),并成为[成员或评审者](/zh-cn/docs/contribute/participate/roles-and-responsibilities/)。 + +- 开始或帮助[本地化](/zh-cn/docs/contribute/localization/) 工作。 + + +## 参与 SIG Docs 工作 {#get-involved-with-SIG-Docs} + +[SIG Docs](/zh-cn/docs/contribute/participate/) 是负责发布、维护 Kubernetes 文档的贡献者团体。 +参与 SIG Docs 是 Kubernetes 贡献者(开发者和其他人员)对 Kubernetes 项目产生重大影响力的好方式。 + +SIG Docs 的几种沟通方式: + +- [加入 Kubernetes 在 Slack 上的`#sig-docs` 频道](https://slack.k8s.io/)。 + 一定记得自我介绍! +- [加入 `kubernetes-sig-docs` 邮件列表](https://groups.google.com/forum/#!forum/kubernetes-sig-docs), + 这里有更广泛的讨论,和官方决策的记录。 +- 参加每两周召开一次的 [SIG Docs 视频会议](https://github.com/kubernetes/community/tree/master/sig-docs)。 + 会议总是在 `#sig-docs` 上发出公告,同时添加到 + [Kubernetes 社区会议日历](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles)。 + 你需要下载 [Zoom 客户端软件](https://zoom.us/download),或电话拨号接入。 +- 如果有几周未召开实况 Zoom 视频会议,请参加 SIG Docs 异步 Slack 站会。 + 会议总是在 `#sig-docs` 上发出公告。 + 你可以在会议公告后 24 小时内为其中任一议题做贡献。 + + +## 其他贡献方式 {#other-ways-to-contribute} + +- 访问 [Kubernetes 社区网站](/zh-cn/community/)。 + 参与 Twitter 或 Stack Overflow,了解当地的 Kubernetes 会议和活动等等。 +- 阅读[贡献者备忘单](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet), + 参与 Kubernetes 功能开发。 +- 访问贡献者网站,进一步了解有关 [Kubernetes 贡献者](https://www.kubernetes.dev/) + 和[更多贡献者资源](https://www.kubernetes.dev/resources/)的信息。 +- 提交一篇[博客文章或案例研究](/zh-cn/docs/contribute/new-content/blogs-case-studies/)。 diff --git a/content/zh/docs/contribute/advanced.md b/content/zh-cn/docs/contribute/advanced.md similarity index 78% rename from content/zh/docs/contribute/advanced.md rename to content/zh-cn/docs/contribute/advanced.md index ccbbd6b3a0ade..7f49ce5bf8f63 100644 --- a/content/zh/docs/contribute/advanced.md +++ b/content/zh-cn/docs/contribute/advanced.md @@ -21,20 +21,21 @@ to learn about more ways to contribute. You need to use the Git command line client and other tools for some of these tasks. 
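Since the steps above assume familiarity with basic Git/GitHub commands, here is a hedged sketch of a typical fork-and-branch flow; it assumes you have already forked `kubernetes/website` and cloned your fork, and the branch name and commit message are placeholders.

```shell
# Run these from inside your local clone of your fork.
git remote add upstream https://github.com/kubernetes/website.git
git checkout -b my-docs-fix            # topic branch for your change
# ...edit content, then:
git add .
git commit -m "Clarify wording on a concepts page"
git push origin my-docs-fix            # then open a pull request from this branch on GitHub
```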
--> -如果你已经了解如何[贡献新内容](/zh/docs/contribute/new-content/overview/)和 -[评阅他人工作](/zh/docs/contribute/review/reviewing-prs/),并准备了解更多贡献的途径, -请阅读此文。您需要使用 Git 命令行工具和其他工具做这些工作。 +如果你已经了解如何[贡献新内容](/zh-cn/docs/contribute/new-content/)和 +[评阅他人工作](/zh-cn/docs/contribute/review/reviewing-prs/),并准备了解更多贡献的途径, +请阅读此文。你需要使用 Git 命令行工具和其他工具做这些工作。 -## 提出改进建议 +## 提出改进建议 {#propose-improvements} -SIG Docs 的 [成员](/zh/docs/contribute/participate/roles-and-responsibilities/#members) 可以提出改进建议。 +SIG Docs 的[成员](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#members) 可以提出改进建议。 -在对 Kubernetes 文档贡献了一段时间后,你可能会对[样式指南](/zh/docs/contribute/style/style-guide/)、 -[内容指南](/zh/docs/contribute/style/content-guide/)、用于构建文档的工具链、网站样式、 +在对 Kubernetes 文档贡献了一段时间后,你可能会对[样式指南](/zh-cn/docs/contribute/style/style-guide/)、 +[内容指南](/zh-cn/docs/contribute/style/content-guide/)、用于构建文档的工具链、网站样式、 评审和合并 PR 的流程或者文档的其他方面产生改进的想法。 为了尽可能透明化,这些提议都需要在 SIG Docs 会议或 [kubernetes-sig-docs 邮件列表](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)上讨论。 @@ -66,18 +67,19 @@ appropriate. For instance, an update to the style guide or the website's functionality might involve opening a pull request, while a change related to documentation testing might involve working with sig-testing. --> -在进行了讨论并且 SIG 就期望的结果达成一致之后,你就能以最合理的方式处理改进建议了。例如,样式指南或网站功能的更新可能涉及 PR 的新增,而与文档测试相关的更改可能涉及 sig-testing。 +在进行了讨论并且 SIG 就期望的结果达成一致之后,你就能以最合理的方式处理改进建议了。 +例如,样式指南或网站功能的更新可能涉及 PR 的新增,而与文档测试相关的更改可能涉及 sig-testing。 -## 为 Kubernetes 版本发布协调文档工作 +## 为 Kubernetes 版本发布协调文档工作 {#coordinate-docs-for-a-kubernetes-release} -SIG Docs 的[批准者(approvers)](/zh/docs/contribute/participating/#approvers) 可以为 -Kubernetes 版本发布协调文档工作。 +SIG Docs 的[批准者(approvers)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers) +可以为 Kubernetes 版本发布协调文档工作。 -## 担任新的贡献者大使 +## 担任新的贡献者大使 {#serve-as-a-new-contributor-ambassador} -SIG Docs [批准人(Approvers)](/zh/docs/contribute/participating/#approvers) +SIG Docs [批准人(Approvers)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers) 可以担任新的贡献者大使。 -新的贡献者大使共同努力欢迎 SIG-Docs 的新贡献者,对新贡献者的 PR 提出建议, +新的贡献者大使欢迎 SIG-Docs 的新贡献者,对新贡献者的 PR 提出建议, 以及在前几份 PR 提交中指导新贡献者。 新的贡献者大使的职责包括: - 监听 [Kubernetes #sig-docs 频道](https://kubernetes.slack.com) 上新贡献者的 Issue。 -- 与 PR 管理者合作为新参与者寻找[合适的第一个 issues](https://kubernetes.dev/docs/guide/help-wanted/#good-first-issue) 。 -- 通过前几个 PR 指导新贡献者为文档存储库作贡献。 +- 与 PR 管理者合作为新参与者寻找[合适的第一个 issues](https://kubernetes.dev/docs/guide/help-wanted/#good-first-issue)。 +- 通过前几个 PR 指导新贡献者为文档存储库作贡献。 - 帮助新的贡献者创建成为 Kubernetes 成员所需的更复杂的 PR。 - [为贡献者提供保荐](#sponsor-a-new-contributor),使其成为 Kubernetes 成员。 - 每月召开一次会议,帮助和指导新的贡献者。 @@ -172,22 +174,23 @@ Current New Contributor Ambassadors are announced at each SIG-Docs meeting and i ## 为新的贡献者提供保荐 {#sponsor-a-new-contributor} -SIG Docs 的[评审人(Reviewers)](/zh/docs/contribute/participating/#reviewers) 可以为新的贡献者提供保荐。 +SIG Docs 的[评审人(Reviewers)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#reviewers) +可以为新的贡献者提供保荐。 新的贡献者针对一个或多个 Kubernetes 项目仓库成功提交了 5 个实质性 PR 之后, -就有资格申请 Kubernetes 组织的[成员身份](/zh/docs/contribute/participate/roles-and-responsibilities/#members)。 +就有资格申请 Kubernetes 组织的[成员身份](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#members)。 贡献者的成员资格需要同时得到两位评审人的保荐。 -## 担任 SIG 联合主席 +## 担任 SIG 联合主席 {#sponsor-a-new-contributor} -SIG Docs [成员(Members)](/zh/docs/contribute/participate/roles-and-responsibilities/#members) +SIG Docs [成员(Members)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#members) 可以担任 SIG Docs 的联合主席。 -### 前提条件 +### 前提条件 
{#prerequisites} -### 职责范围 +### 职责范围 {#responsibilities} 联合主席主要提供以下服务: 联合主席负责处理流程和政策、时间安排和召开会议、安排 PR 管理员、以及一些其他人不想做的事情,目的是增长贡献者团队。 @@ -256,7 +264,7 @@ Responsibilities include: - 保持 SIG Docs 专注于通过出色的文档最大限度地提高开发人员的满意度 -- 以身作则,践行[社区行为准则](https://github.com/cncf/foundation/blob/master/code-of-conduct.md), +- 以身作则,践行[社区行为准则](https://github.com/cncf/foundation/blob/main/code-of-conduct.md), 并要求 SIG 成员对自身行为负责 - 通过更新贡献指南,为 SIG 学习并设置最佳实践 - 安排和举行 SIG 会议:每周状态更新,每季度回顾/计划会议以及其他需要的会议 @@ -278,15 +286,15 @@ Responsibilities include: To schedule and run effective meetings, these guidelines show what to do, how to do it, and why. -**Uphold the [community code of conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)**: +**Uphold the [community code of conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)**: - Hold respectful, inclusive discussions with respectful, inclusive language. --> -### 召开高效的会议 +### 召开高效的会议 {#running-effective-meetings} 为了安排和召开高效的会议,这些指南说明了如何做、怎样做以及原因。 -**坚持[社区行为准则](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)**: +**坚持[社区行为准则](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)**: - 相互尊重地、包容地进行讨论。 @@ -332,7 +340,7 @@ For weekly meetings, copypaste the previous week's notes into the "Past meetings **Honor folks' time**: -- Begin and end meetings punctually +Begin and end meetings on time. --> **根据需要来进行协调**: @@ -341,19 +349,19 @@ For weekly meetings, copypaste the previous week's notes into the "Past meetings **尊重大家的时间**: -- 准时开始和结束会议 +按时开始和结束会议 **有效利用 Zoom**: -- 熟悉 [ Kubernetes Zoom 指南](https://github.com/kubernetes/community/blob/master/communication/zoom-guidelines.md) +- 熟悉 [ Kubernetes Zoom 指南](https://github.com/kubernetes/community/blob/main/communication/zoom-guidelines.md) - 输入主持人密钥登录时声明主持人角色 声明 Zoom 角色 @@ -362,12 +370,12 @@ For weekly meetings, copypaste the previous week's notes into the "Past meetings ### Recording meetings on Zoom When you're ready to start the recording, click Record to Cloud. - + When you're ready to stop recording, click Stop. The video uploads automatically to YouTube. 
--> -### 录制 Zoom 会议 +### 录制 Zoom 会议 {#recording-meetings-on-zoom} 准备开始录制时,请单击“录制到云”。 diff --git a/content/zh/docs/contribute/analytics.md b/content/zh-cn/docs/contribute/analytics.md similarity index 100% rename from content/zh/docs/contribute/analytics.md rename to content/zh-cn/docs/contribute/analytics.md diff --git a/content/zh/docs/contribute/generate-ref-docs/_index.md b/content/zh-cn/docs/contribute/generate-ref-docs/_index.md similarity index 85% rename from content/zh/docs/contribute/generate-ref-docs/_index.md rename to content/zh-cn/docs/contribute/generate-ref-docs/_index.md index fc73d84b29c84..87f08c5f6d75a 100644 --- a/content/zh/docs/contribute/generate-ref-docs/_index.md +++ b/content/zh-cn/docs/contribute/generate-ref-docs/_index.md @@ -22,5 +22,5 @@ To build the reference documentation, see the following guide: 本节的主题是描述如何生成 Kubernetes 参考指南。 要生成参考文档,请参考下面的指南: -* [生成参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/) +* [生成参考文档快速入门](/zh-cn/docs/contribute/generate-ref-docs/quickstart/) diff --git a/content/zh/docs/contribute/generate-ref-docs/contribute-upstream.md b/content/zh-cn/docs/contribute/generate-ref-docs/contribute-upstream.md similarity index 90% rename from content/zh/docs/contribute/generate-ref-docs/contribute-upstream.md rename to content/zh-cn/docs/contribute/generate-ref-docs/contribute-upstream.md index 4bf2591aba900..cbbc13cb9b27b 100644 --- a/content/zh/docs/contribute/generate-ref-docs/contribute-upstream.md +++ b/content/zh-cn/docs/contribute/generate-ref-docs/contribute-upstream.md @@ -27,10 +27,10 @@ API or the `kube-*` components from the upstream code, see the following instruc - [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) - [Generating Reference Documentation for the Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) --> -如果您仅想从上游代码重新生成 Kubernetes API 或 `kube-*` 组件的参考文档。请参考以下说明: +如果你仅想从上游代码重新生成 Kubernetes API 或 `kube-*` 组件的参考文档。请参考以下说明: -- [生成 Kubernetes API 的参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) -- [生成 Kubernetes 组件和工具的参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/) +- [生成 Kubernetes API 的参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api/) +- [生成 Kubernetes 组件和工具的参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components/) ## {{% heading "prerequisites" %}} @@ -65,7 +65,7 @@ You need to have these tools installed: [Creating a Pull Request](https://help.github.com/articles/creating-a-pull-request/) and [GitHub Standard Fork & Pull Request Workflow](https://gist.github.com/Chaser324/ce0505fbed06b947d962). --> -- 您需要知道如何创建对 GitHub 代码仓库的拉取请求(Pull Request)。 +- 你需要知道如何创建对 GitHub 代码仓库的拉取请求(Pull Request)。 通常,这涉及创建代码仓库的派生副本。 要获取更多的信息请参考[创建 PR](https://help.github.com/articles/creating-a-pull-request/) 和 [GitHub 标准派生和 PR 工作流程](https://gist.github.com/Chaser324/ce0505fbed06b947d962)。 @@ -87,7 +87,7 @@ creating a patch to fix it in the upstream project. 
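按照上述前提条件准备环境时,可以先在本地快速确认相关工具是否已经安装。下面是一个示意脚本;具体需要哪些工具及版本,请以上文的前提条件部分和各仓库内的说明为准:

```bash
# 示意:打印几种常见依赖的版本,便于对照前提条件自查
go version
git --version
make --version | head -n 1
docker version --format '{{.Client.Version}}' 2>/dev/null || echo "Docker 未运行或未安装"
```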
Kubernetes API 和 `kube-*` 组件(例如 `kube-apiserver`、`kube-controller-manager`)的参考文档 是根据[上游 Kubernetes](https://github.com/kubernetes/kubernetes/) 中的源代码自动生成的。 -当您在生成的文档中看到错误时,您可能需要考虑创建一个 PR 用来在上游项目中对其进行修复。 +当你在生成的文档中看到错误时,你可能需要考虑创建一个 PR 用来在上游项目中对其进行修复。 ## 克隆 Kubernetes 代码仓库 -如果您还没有 kubernetes/kubernetes 代码仓库,请参照下列命令获取: +如果你还没有 kubernetes/kubernetes 代码仓库,请参照下列命令获取: ```shell mkdir $GOPATH/src @@ -111,7 +111,7 @@ For example, if you followed the preceding step to get the repository, your base directory is `$GOPATH/src/github.com/kubernetes/kubernetes.` The remaining steps refer to your base directory as ``. --> -确定您的 [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) 代码仓库克隆的根目录。 +确定你的 [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) 代码仓库克隆的根目录。 例如,如果按照前面的步骤获取代码仓库,则你的根目录为 `$GOPATH/src/github.com/kubernetes/kubernetes`。 接下来其余步骤将你的根目录称为 ``。 @@ -122,7 +122,7 @@ For example, if you followed the preceding step to get the repository, your base directory is `$GOPATH/src/github.com/kubernetes-sigs/reference-docs.` The remaining steps refer to your base directory as ``. --> -确定您的 [kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs) +确定你的 [kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs) 代码仓库克隆的根目录。 例如,如果按照前面的步骤获取代码仓库,则你的根目录为 `$GOPATH/src/github.com/kubernetes-sigs/reference-docs`。 @@ -146,7 +146,7 @@ The documentation for the `kube-*` components is also generated from the upstrea source code. You must change the code related to the component you want to fix in order to fix the generated documentation. --> -`kube-*` 组件的文档也是从上游源代码生成的。您必须更改与要修复的组件相关的代码,才能修复生成的文档。 +`kube-*` 组件的文档也是从上游源代码生成的。你必须更改与要修复的组件相关的代码,才能修复生成的文档。 以下在 Kubernetes 源代码中编辑注释的示例。 -在您本地的 kubernetes/kubernetes 代码仓库中,检出默认分支,并确保它是最新的: +在你本地的 kubernetes/kubernetes 代码仓库中,检出默认分支,并确保它是最新的: ```shell cd @@ -200,7 +200,7 @@ git status The output shows that you are on the master branch, and that the `types.go` source file has been modified: --> -输出显示您在 master 分支上,`types.go` 源文件已被修改: +输出显示你在 master 分支上,`types.go` 源文件已被修改: ```shell On branch master @@ -217,7 +217,7 @@ you will do a second commit. 
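后续步骤会反复引用两个仓库克隆副本的根目录。为了减少敲错路径的机会,可以把它们保存为环境变量;下面只是一种示意做法,变量名与具体路径都是为演示而假设的,请按你的实际环境调整:

```bash
# 示意:记录 kubernetes/kubernetes 与 kubernetes-sigs/reference-docs 克隆副本的根目录
export K8S_ROOT="$GOPATH/src/github.com/kubernetes/kubernetes"
export RDOCS_ROOT="$GOPATH/src/github.com/kubernetes-sigs/reference-docs"

# 简单验证两个路径是否存在
ls -d "$K8S_ROOT" "$RDOCS_ROOT"
```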
It is important to keep your changes separated into ### 提交已编辑的文件 运行 `git add` 和 `git commit` 命令提交到目前为止所做的更改。 -在下一步中,您将进行第二次提交,将更改分成两个提交很重要。 +在下一步中,你将进行第二次提交,将更改分成两个提交很重要。 查看 `api/openapi-spec/swagger.json` 的内容,以确保拼写错误已经被修正。 -例如,您可以运行 `git diff -a api/openapi-spec/swagger.json` 命令。 +例如,你可以运行 `git diff -a api/openapi-spec/swagger.json` 命令。 这很重要,因为 `swagger.json` 是文档生成过程中第二阶段的输入。 -运行 `git add` 和 `git commit` 命令来提交您的更改。现在您有两个提交(commits): +运行 `git add` 和 `git commit` 命令来提交你的更改。现在你有两个提交(commits): 一种包含编辑的 `types.go` 文件,另一种包含生成的 OpenAPI 规范和相关文件。 -将这两个提交分开独立。也就是说,不要 squash 您的提交。 +将这两个提交分开独立。也就是说,不要 squash 你的提交。 -将您的更改作为 [PR](https://help.github.com/articles/creating-a-pull-request/) +将你的更改作为 [PR](https://help.github.com/articles/creating-a-pull-request/) 提交到 [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) 代码仓库的 master 分支。 -关注您的 PR,并根据需要回复 reviewer 的评论。继续关注您的 PR,直到 PR 被合并为止。 +关注你的 PR,并根据需要回复 reviewer 的评论。继续关注你的 PR,直到 PR 被合并为止。 {{< note >}} 确定要更改的正确源文件可能很棘手。在前面的示例中,官方的源文件位于 `kubernetes/kubernetes` -代码仓库的 `staging` 目录中。但是根据您的情况,`staging` 目录可能不是找到官方源文件的地方。 +代码仓库的 `staging` 目录中。但是根据你的情况,`staging` 目录可能不是找到官方源文件的地方。 如果需要帮助,请阅读 [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes/tree/master/staging) 代码仓库和相关代码仓库 @@ -330,7 +330,7 @@ commit into the release-{{< skew prevMinorVersion >}} branch. The idea is to che that edited `types.go`, but not the commit that has the results of running the scripts. For instructions, see [Propose a Cherry Pick](https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md). --> -回想一下,您的 PR 有两个提交:一个用于编辑 `types.go`,一个用于由脚本生成的文件。 +回想一下,你的 PR 有两个提交:一个用于编辑 `types.go`,一个用于由脚本生成的文件。 下一步是将你的第一次提交 cherrypick 到 release-{{< skew prevMinorVersion >}} 分支。 这样做的原因是仅 cherrypick 编辑了 types.go 的提交, 而不是具有脚本运行结果的提交。 @@ -366,7 +366,7 @@ Now add a commit to your cherry-pick pull request that has the recently generate and related files. Monitor your pull request until it gets merged into the release-{{< skew prevMinorVersion >}} branch. 
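上文强调要把手工编辑的源文件与脚本生成的文件分成两个独立的提交,并且在 PR 合并前不要将它们 squash。下面用一段简化的命令序列示意这一流程;其中的文件路径和提交信息只是占位示例,重新生成 OpenAPI 规范所用的脚本请以上文及仓库内的说明为准:

```bash
# 第一个提交:只包含手工编辑的源文件(路径仅为示例)
git add staging/src/k8s.io/api/apps/v1/types.go
git commit -m "Fix API doc comment for apps/v1"

# (在这里按上文说明运行重新生成 OpenAPI 规范的脚本)

# 确认 swagger.json 中的变更符合预期
git diff api/openapi-spec/swagger.json

# 第二个提交:只包含脚本生成的文件
git add api/openapi-spec/swagger.json
git commit -m "Regenerate OpenAPI spec and related files"

# 此时应看到两个相互独立的提交
git log --oneline -2
```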
--> -现在将提交添加到您的 Cherry-Pick PR 中,该 PR 中包含最新生成的 OpenAPI 规范和相关文件。 +现在将提交添加到你的 Cherry-Pick PR 中,该 PR 中包含最新生成的 OpenAPI 规范和相关文件。 关注你的 PR,直到其合并到 release-{{< skew prevMinorVersion >}} 分支中为止。 -现在,您可以按照 -[生成 Kubernetes API 的参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) +现在,你可以按照 +[生成 Kubernetes API 的参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api/) 指南来生成 [已发布的 Kubernetes API 参考文档](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)。 @@ -417,7 +417,7 @@ You are now ready to follow the [Generating Reference Documentation for the Kube * [Generating Reference Docs for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/) * [Generating Reference Documentation for kubectl Commands](/docs/home/contribute/generated-reference/kubectl/) --> -* [生成 Kubernetes API 的参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) -* [为 Kubernetes 组件和工具生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/) -* [生成 kubectl 命令的参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/) +* [生成 Kubernetes API 的参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api/) +* [为 Kubernetes 组件和工具生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components/) +* [生成 kubectl 命令的参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubectl/) diff --git a/content/zh/docs/contribute/generate-ref-docs/kubectl.md b/content/zh-cn/docs/contribute/generate-ref-docs/kubectl.md similarity index 96% rename from content/zh/docs/contribute/generate-ref-docs/kubectl.md rename to content/zh-cn/docs/contribute/generate-ref-docs/kubectl.md index cf376a312f7ed..957ca8b347ebd 100644 --- a/content/zh/docs/contribute/generate-ref-docs/kubectl.md +++ b/content/zh-cn/docs/contribute/generate-ref-docs/kubectl.md @@ -36,7 +36,7 @@ reference page, see 生成参考文档,如 [kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) 和 [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint)。 本主题没有讨论如何生成 [kubectl](/docs/reference/generated/kubectl/kubectl-commands/) 组件选项的参考页面。 -相关说明请参见[为 Kubernetes 组件和工具生成参考页面](/zh/docs/contribute/generate-ref-docs/kubernetes-components/)。 +相关说明请参见[为 Kubernetes 组件和工具生成参考页面](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components/)。 {{< /note >}} ## {{% heading "prerequisites" %}} @@ -72,7 +72,7 @@ go get -u kubernetes-incubator/reference-docs -如果您还没有获取过 `kubernetes/website` 仓库,现在获取之: +如果你还没有获取过 `kubernetes/website` 仓库,现在获取之: ```shell git clone https://github.com//website $GOPATH/src/github.com//website @@ -242,7 +242,7 @@ For example, update the following variables: * 设置 `K8S_ROOT` 为 ``。 * 设置 `K8S_WEBROOT` 为 ``。 * 设置 `K8S_RELEASE` 为要构建文档的版本。 - 例如,如果您想为 Kubernetes {{< skew prevMinorVersion >}} 构建文档, + 例如,如果你想为 Kubernetes {{< skew prevMinorVersion >}} 构建文档, 请将 `K8S_RELEASE` 设置为 {{< skew prevMinorVersion >}}。 例如: @@ -416,7 +416,7 @@ topics will be visible in the 对 `kubernetes/website` 仓库创建 PR。跟踪你的 PR,并根据需要回应评审人的评论。 继续跟踪你的 PR,直到它被合入。 -在 PR 合入的几分钟后,你更新的参考主题将出现在[已发布文档](/zh/docs/home/)中。 +在 PR 合入的几分钟后,你更新的参考主题将出现在[已发布文档](/zh-cn/docs/home/)中。 ## {{% heading "whatsnext" %}} @@ -425,7 +425,7 @@ topics will be visible in the * [Generating Reference Documentation for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) * [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) --> -* [生成参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/) -* [为 Kubernetes 
组件和工具生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/) -* [为 Kubernetes API 生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) +* [生成参考文档快速入门](/zh-cn/docs/contribute/generate-ref-docs/quickstart/) +* [为 Kubernetes 组件和工具生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components/) +* [为 Kubernetes API 生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api/) diff --git a/content/zh/docs/contribute/generate-ref-docs/kubernetes-api.md b/content/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api.md similarity index 93% rename from content/zh/docs/contribute/generate-ref-docs/kubernetes-api.md rename to content/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api.md index 5563e174a0e2f..74fc1fe897a51 100644 --- a/content/zh/docs/contribute/generate-ref-docs/kubernetes-api.md +++ b/content/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api.md @@ -31,9 +31,9 @@ Kubernetes API 参考文档是从 构建的, 且使用[kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs) 生成代码。 -如果您在生成的文档中发现错误,则需要[在上游修复](/zh/docs/contribute/generate-ref-docs/contribute-upstream/)。 +如果你在生成的文档中发现错误,则需要[在上游修复](/zh-cn/docs/contribute/generate-ref-docs/contribute-upstream/)。 -如果您只需要从 [OpenAPI](https://github.com/OAI/OpenAPI-Specification) 规范中重新生成参考文档,请继续阅读此页。 +如果你只需要从 [OpenAPI](https://github.com/OAI/OpenAPI-Specification) 规范中重新生成参考文档,请继续阅读此页。 ## {{% heading "prerequisites" %}} @@ -135,7 +135,7 @@ Go to ``, and open the `Makefile` for editing: * 设置 `K8S_ROOT` 为 ``. * 设置 `K8S_WEBROOT` 为 ``. * 设置 `K8S_RELEASE` 为要构建的文档的版本。 - 例如,如果您想为 Kubernetes 1.17.0 构建文档,请将 `K8S_RELEASE` 设置为 1.17.0。 + 例如,如果你想为 Kubernetes 1.17.0 构建文档,请将 `K8S_RELEASE` 设置为 1.17.0。 -基于你所生成的更改[创建 PR](/zh/docs/contribute/new-content/open-a-pr/), +基于你所生成的更改[创建 PR](/zh-cn/docs/contribute/new-content/open-a-pr/), 提交到 [kubernetes/website](https://github.com/kubernetes/website) 仓库。 -监视您提交的 PR,并根据需要回复 reviewer 的评论。继续监视您的 PR,直到合并为止。 +监视你提交的 PR,并根据需要回复 reviewer 的评论。继续监视你的 PR,直到合并为止。 ## {{% heading "whatsnext" %}} @@ -316,7 +316,7 @@ to monitor your pull request until it has been merged. * [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) * [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) --> -* [生成参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/) -* [为 Kubernetes 组件和工具生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/) -* [为 kubectl 命令集生成参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/) +* [生成参考文档快速入门](/zh-cn/docs/contribute/generate-ref-docs/quickstart/) +* [为 Kubernetes 组件和工具生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components/) +* [为 kubectl 命令集生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubectl/) diff --git a/content/zh/docs/contribute/generate-ref-docs/kubernetes-components.md b/content/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components.md similarity index 67% rename from content/zh/docs/contribute/generate-ref-docs/kubernetes-components.md rename to content/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components.md index 0ca9d48df6fce..a89e8d655eee8 100644 --- a/content/zh/docs/contribute/generate-ref-docs/kubernetes-components.md +++ b/content/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components.md @@ -23,7 +23,7 @@ This page shows how to build the Kubernetes component and tool reference pages. 
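在基于生成结果向 `kubernetes/website` 仓库发起 PR 之前,建议先在本地确认这次构建到底改动了哪些文件。下面是一个示意;`<web-base>` 泛指你的 website 仓库克隆副本所在目录,只是占位写法:

```bash
# 示意:查看参考文档构建过程在 website 仓库中生成或更新的文件
cd <web-base>
git status
git diff --stat
```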
Start with the [Prerequisites section](/docs/contribute/generate-ref-docs/quickstart/#before-you-begin) in the Reference Documentation Quickstart guide. --> -阅读参考文档快速入门指南中的[准备工作](/zh/docs/contribute/generate-ref-docs/quickstart/#before-you-begin)节。 +阅读参考文档快速入门指南中的[准备工作](/zh-cn/docs/contribute/generate-ref-docs/quickstart/#before-you-begin)节。 @@ -31,7 +31,7 @@ in the Reference Documentation Quickstart guide. Follow the [Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) to generate the Kubernetes component and tool reference pages. --> -按照[参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/) +按照[参考文档快速入门](/zh-cn/docs/contribute/generate-ref-docs/quickstart/) 指引,生成 Kubernetes 组件和工具的参考文档。 ## {{% heading "whatsnext" %}} @@ -43,8 +43,8 @@ to generate the Kubernetes component and tool reference pages. * [Contributing to the Upstream Kubernetes Project for Documentation](/docs/contribute/generate-ref-docs/contribute-upstream/) --> -* [生成参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/) -* [为 kubectll 命令生成参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/) -* [为 Kubernetes API 生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) -* [为上游 Kubernetes 项目做贡献以改进文档](/zh/docs/contribute/generate-ref-docs/contribute-upstream/) +* [生成参考文档快速入门](/zh-cn/docs/contribute/generate-ref-docs/quickstart/) +* [为 kubectll 命令生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubectl/) +* [为 Kubernetes API 生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api/) +* [为上游 Kubernetes 项目做贡献以改进文档](/zh-cn/docs/contribute/generate-ref-docs/contribute-upstream/) diff --git a/content/zh/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md b/content/zh-cn/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md similarity index 96% rename from content/zh/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md rename to content/zh-cn/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md index 5d5cde99f8155..7954f939c003e 100644 --- a/content/zh/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md +++ b/content/zh-cn/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md @@ -41,5 +41,5 @@ - 你需要知道如何为一个 GitHub 仓库创建拉取请求(PR)。 这牵涉到创建仓库的派生(fork)副本。 - 有关信息可进一步查看[基于本地副本开展工作](/zh/docs/contribute/new-content/open-a-pr/#fork-the-repo)。 + 有关信息可进一步查看[基于本地副本开展工作](/zh-cn/docs/contribute/new-content/open-a-pr/#fork-the-repo)。 diff --git a/content/zh/docs/contribute/generate-ref-docs/quickstart.md b/content/zh-cn/docs/contribute/generate-ref-docs/quickstart.md similarity index 96% rename from content/zh/docs/contribute/generate-ref-docs/quickstart.md rename to content/zh-cn/docs/contribute/generate-ref-docs/quickstart.md index 521b834153e17..7c4336aae50d3 100644 --- a/content/zh/docs/contribute/generate-ref-docs/quickstart.md +++ b/content/zh-cn/docs/contribute/generate-ref-docs/quickstart.md @@ -57,7 +57,7 @@ see the [contributing upstream guide](/docs/contribute/generate-ref-docs/contrib {{< note>}} 如果你希望更改构建工具和 API 参考资料,可以阅读 -[上游贡献指南](/zh/docs/contribute/generate-ref-docs/contribute-upstream). +[上游贡献指南](/zh-cn/docs/contribute/generate-ref-docs/contribute-upstream). 
{{< /note >}} 通过工具导入的单页面的 Markdown 文档必须遵从 -[文档样式指南](/zh/docs/contribute/style/style-guide/)。 +[文档样式指南](/zh-cn/docs/contribute/style/style-guide/)。 要手动设置所需的构造仓库,执行构建目标,以生成各个参考文档,可参考下面的指南: -* [为 Kubernetes 组件和工具生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/) -* [为 kubectl 命令生成参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/) -* [为 Kubernetes API 生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) +* [为 Kubernetes 组件和工具生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components/) +* [为 kubectl 命令生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubectl/) +* [为 Kubernetes API 生成参考文档](/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api/) diff --git a/content/zh/docs/contribute/localization.md b/content/zh-cn/docs/contribute/localization.md similarity index 88% rename from content/zh/docs/contribute/localization.md rename to content/zh-cn/docs/contribute/localization.md index 5c70568251946..9c21b55e2aa42 100644 --- a/content/zh/docs/contribute/localization.md +++ b/content/zh-cn/docs/contribute/localization.md @@ -7,7 +7,7 @@ card: weight: 50 title: 翻译文档 --- - - 此页面描述如何为其他语言的文档提供 [本地化](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)版本。 - -## 为现有的本地化做出贡献 +## 为现有的本地化做出贡献 {#contribute-to-an-existing-localization} 你可以帮助添加或改进现有本地化的内容。在 [Kubernetes Slack](https://slack.k8s.io/) 中, 你能找到每个本地化的频道。还有一个通用的 @@ -44,7 +44,7 @@ You can help add or improve content to an existing localization. In [Kubernetes 你可以在这里打个招呼。 {{< note >}} - -### 找到两个字母的语言代码 +### 找到两个字母的语言代码 {#find-your-two-letter-language-code} 首先,有关本地化的两个字母的语言代码,请参考 [ISO 639-1 标准](https://www.loc.gov/standards/iso639-2/php/code_list.php)。 @@ -73,7 +73,7 @@ The website content directory includes sub-directories for each language. The lo ### 派生(fork)并且克隆仓库 {#fork-and-clone-the-repo} 首先,为 [kubernetes/website](https://github.com/kubernetes/website) 仓库 -[创建你自己的副本](/zh/docs/contribute/new-content/open-a-pr/#fork-the-repo)。 +[创建你自己的副本](/zh-cn/docs/contribute/new-content/open-a-pr/#fork-the-repo)。 网站内容目录包括每种语言的子目录。你想要助力的本地化位于 `content/` 中。 - -## 开始新的本地化 +## 开始新的本地化 {#start-a-new-localization} 如果你希望将 Kubernetes 文档本地化为一种新语言,你需要执行以下操作。 @@ -137,7 +140,7 @@ it's up to you to translate it and keep existing localized content current. 所有本地化团队都必须能够自我维持。 Kubernetes 网站很乐意托管你的作品,但要由你来翻译它并使现有的本地化内容保持最新。 - -### 找到社区 +### 找到社区 {#find-community} 让 Kubernetes SIG Docs 知道你有兴趣创建本地化! 
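无论是参与现有本地化还是筹备新的本地化,通常都需要先把你派生(fork)的 website 仓库克隆到本地并关联上游仓库。下面是一个示意,`<username>` 是占位的 GitHub 用户名:

```bash
# 示意:克隆你派生的仓库,并把官方仓库添加为 upstream
git clone git@github.com:<username>/website.git
cd website
git remote add upstream https://github.com/kubernetes/website.git

# 引入 Docsy 主题等子模块,否则本地无法构建站点
git submodule update --init --recursive --depth 1
```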
加入 [SIG Docs Slack 频道](https://kubernetes.slack.com/messages/sig-docs) 和 [SIG Docs Localizations Slack 频道](https://kubernetes.slack.com/messages/sig-docs-localizations)。 其他本地化团队很乐意帮助你入门并回答你的任何问题。 - -### 加入到 Kubernetes GitHub 组织 +### 加入到 Kubernetes GitHub 组织 {#join-the-kubernetes-github-organization} 提交本地化 PR 后,你可以成为 Kubernetes GitHub 组织的成员。 团队中的每个人都需要在 `kubernetes/org` 仓库中创建自己的 [组织成员申请](https://github.com/kubernetes/org/issues/new/choose)。 - ### 在 GitHub 中添加你的本地化团队 {#add-your-localization-team-in-github} @@ -217,24 +221,24 @@ The `@kubernetes/sig-docs-**-reviews` team automates review assignment for new P `@kubernetes/sig-docs-**-owners` 成员可以批准更改对应本地化目录 `/content/**/` 中内容的 PR,并仅限这类 PR。 -`@kubernetes/sig-docs-**-reviews` 团队被自动分派新 PR 的审阅任务。 +对于每个本地化,`@kubernetes/sig-docs-**-reviews` 团队被自动分派新 PR 的审阅任务。 - `@kubernetes/website-maintainers` 成员可以创建新的本地化分支来协调翻译工作。 `@kubernetes/website-milestone-maintainers` 成员可以使用 `/milestone` [Prow 命令](https://prow.k8s.io/command-help)为 issues 或 PR 设定里程碑。 - - ### 配置工作流程 {#configure-the-workflow} @@ -246,16 +250,14 @@ For an example of adding a label, see the PR for adding the [Italian language la 你还可以在 `kubernetes/community` 仓库中为你的本地化创建一个 Slack 频道。 有关添加 Slack 频道的示例,请参见[为印尼语和葡萄牙语添加频道](https://github.com/kubernetes/community/pull/3605)的 PR。 - -## 最低要求内容 {#minimum-required-content} - -### 修改站点配置 +### 修改站点配置 {#configure-the-workflow} Kubernetes 网站使用 Hugo 作为其 Web 框架。网站的 Hugo 配置位于 [`config.toml`](https://github.com/kubernetes/website/tree/main/config.toml)文件中。 @@ -275,7 +277,7 @@ weight = 8 ``` `languageName` 的值将列在语言选择栏中。 将 `languageName` 赋值为“本地脚本中的语言名称(拉丁脚本中的语言名称)”。 @@ -284,21 +286,21 @@ The value for `languageName` will be listed in language selection bar. Assign "l 将 `languageNameLatinScript` 赋值为“拉丁脚本中的语言名称”。 例如,`languageNameLatinScript ="Korean"`。 - 为你的语言块分配一个 `weight` 参数时,找到权重最高的语言块并将其加 1。 有关 Hugo 多语言支持的更多信息,请参阅"[多语言模式](https://gohugo.io/content-management/multilingual/)"。 - -### 添加一个新的本地化目录 +### 添加一个新的本地化目录 {#add-a-new-localization-directory} 将特定语言的子目录添加到仓库中的 [`content`](https://github.com/kubernetes/website/tree/main/content) 文件夹下。 @@ -308,7 +310,7 @@ Add a language-specific subdirectory to the [`content`](https://github.com/kuber mkdir content/de ``` - -### 本地化社区行为准则 +### 本地化社区行为准则 {#localize-the-community-code-of-conduct} 在 [`cncf/foundation`](https://github.com/cncf/foundation/tree/master/code-of-conduct-languages) 仓库提交 PR,添加你所用语言版本的行为准则。 --> - -### 设置 OWNERS 文件 +### 设置 OWNERS 文件 {#setting-up-the-owners-files} 要设置每个对本地化做出贡献用户的角色,请在特定于语言的子目录内创建一个 `OWNERS` 文件,其中: @@ -362,10 +364,10 @@ To set the roles of each user contributing to the localization, create an `OWNER - **labels**: 可以自动应用于 PR 的 GitHub 标签列表,在本例中为 [配置工作流程](#configure-the-workflow)中创建的语言标签。 - 有关 `OWNERS` 文件的更多信息,请访问[go.k8s.io/owners](https://go.k8s.io/owners)。 @@ -386,12 +388,12 @@ approvers: labels: - language/es -``` +``` - 添加了特定语言的 OWNERS 文件之后,使用新的 Kubernetes 本地化团队、 `sig-docs-**-owners` 和 `sig-docs-**-reviews` 列表更新 @@ -421,7 +423,7 @@ For each team, add the list of GitHub users requested in [Add your localization - remyleone ``` - ### 打开拉取请求 {#open-a-pull-request} -接下来,[打开拉取请求](/zh/docs/contribute/new-content/open-a-pr/#open-a-pr)(PR) +接下来,[打开拉取请求](/zh-cn/docs/contribute/new-content/open-a-pr/#open-a-pr)(PR) 将本地化添加到 `kubernetes/website` 存储库。 PR 必须包含所有[最低要求内容](#minimum-required-content)才能获得批准。 @@ -440,7 +442,7 @@ PR 必须包含所有[最低要求内容](#minimum-required-content)才能获得 有关添加新本地化的示例, 请参阅 PR 以启用[法语文档](https://github.com/kubernetes/website/pull/12548)。 - 
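在发起(或评审)启用新本地化的 PR 之前,先在本地把站点构建起来,确认新增语言能够正常渲染,往往可以减少往返修改。下面给出两种常见方式的示意,命令与本仓库其他文档中介绍的本地预览方式一致:

```bash
# 示意:直接用 Hugo 在本地预览
hugo server

# 或者使用容器方式构建并启动站点
make container-image
make container-serve
```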
-### 添加本地化的 README 文件 +### 添加本地化的 README 文件 {#add-a-localized-readme-file} 为了指导其他本地化贡献者,请在 [k/website](https://github.com/kubernetes/website/) 的根目录添加一个新的 [`README-**.md`](https://help.github.com/articles/about-readmes/), @@ -461,14 +463,14 @@ Provide guidance to localization contributors in the localized `README-**.md` fi - 本地化项目的联系人 - 任何特定于本地化的信息 - 创建本地化的 README 文件后,请在英语版文件 `README.md` 中添加指向该文件的链接, 并给出英文形式的联系信息。你可以提供 GitHub ID、电子邮件地址、 [Slack 频道](https://slack.com/)或其他联系方式。你还必须提供指向本地化的社区行为准则的链接。 - -### 启动你的新本地化 +### 启动你的新本地化 {#add-a-localized-readme-file} 一旦本地化满足工作流程和最小输出的要求,SIG Docs 将: @@ -484,20 +486,25 @@ Once a localization meets requirements for workflow and minimum output, SIG Docs - 通过[云原生计算基金会](https://www.cncf.io/about/)(CNCF)渠道, 包括 [Kubernetes 博客](https://kubernetes.io/blog/),来宣传本地化的可用性。 - ## 翻译文档 {#translating-content} 本地化*所有* Kubernetes 文档是一项艰巨的任务。从小做起,循序渐进。 + +### 最低要求内容 {#minimum-required-content} + 所有本地化至少必须包括: - 描述 | 网址 -----|----- -主页 | [所有标题和副标题网址](/zh/docs/home/) -安装 | [所有标题和副标题网址](/zh/docs/setup/) -教程 | [Kubernetes 基础](/zh/docs/tutorials/kubernetes-basics/), [Hello Minikube](/zh/docs/tutorials/hello-minikube/) +主页 | [所有标题和副标题网址](/zh-cn/docs/home/) +安装 | [所有标题和副标题网址](/zh-cn/docs/setup/) +教程 | [Kubernetes 基础](/zh-cn/docs/tutorials/kubernetes-basics/), [Hello Minikube](/zh-cn/docs/tutorials/hello-minikube/) 网站字符串 | [所有网站字符串](#Site-strings-in-i18n) 发行版本 | [所有标题和副标题 URL](/releases) - 翻译后的文档必须保存在自己的 `content/**/` 子目录中,否则将遵循与英文源相同的 URL 路径。 -例如,要准备将 [Kubernetes 基础](/zh/docs/tutorials/kubernetes-basics/) 教程翻译为德语, +例如,要准备将 [Kubernetes 基础](/zh-cn/docs/tutorials/kubernetes-basics/) 教程翻译为德语, 请在 `content/de/` 文件夹下创建一个子文件夹并复制英文源: ```shell @@ -525,24 +533,24 @@ mkdir -p content/de/docs/tutorials cp content/en/docs/tutorials/kubernetes-basics.md content/de/docs/tutorials/kubernetes-basics.md ``` - 翻译工具可以加快翻译过程。例如,某些编辑器提供了用于快速翻译文本的插件。 - {{< caution >}} -机器生成的翻译不能达到最低质量标准,需要进行大量人工审查才能达到该标准。 +机器生成的翻译本身是不够的,本地化需要广泛的人工审核才能满足最低质量标准。 {{< /caution >}} - 为了确保语法和含义的准确性,本地化团队的成员应在发布之前仔细检查所有由机器生成的翻译。 - -### 源文件 +### 源文件 {#source-files} 本地化必须基于本地化团队所针对的特定发行版本中的英文文件。 每个本地化团队可以决定要针对哪个发行版本,在下文中称作目标版本(target version)。) @@ -580,27 +588,27 @@ The `master` branch holds content for the current release `{{< latest-version >} 发行团队会在下一个发行版本 v{{< skew nextMinorVersion >}} 出现之前创建 `{{< release-branch >}}` 分支。 - ### i18n/ 中的网站字符串 {#site-strings-in-i18n} 本地化必须在新的语言特定文件中包含 -[`data/i18n/en/en.toml`](https://github.com/kubernetes/website/blob/master/i18n/en.toml) +[`data/i18n/en/en.toml`](https://github.com/kubernetes/website/blob/main/data/i18n/en/en.toml) 的内容。以德语为例:`data/i18n/de/de.toml`。 -将新的本地化文件添加到 `i18n/`。例如德语 (`de`): +将新的本地化文件和目录添加到 `data/i18n/`。例如德语 (`de`): ```bash mkdir -p data/i18n/de cp data/i18n/en/en.toml data/i18n/de/de.toml ``` - 本地化网站字符串允许你自定义网站范围的文本和特性:例如,每个页面页脚中的合法版权文本。 - -### 特定语言的样式指南和词汇表 +### 特定语言的样式指南和词汇表 {#language-specific-style-guide-and-glossary} 一些语言团队有自己的特定语言样式指南和词汇表。 -例如,请参见[中文本地化指南](/zh/docs/contribute/localization_zh/)。 +例如,请参见[中文本地化指南](/zh-cn/docs/contribute/localization_zh/)。 -### 特定语言的 Zoom 会议 +### 特定语言的 Zoom 会议 {#language-specific-zoom-meetings} -如果本地化项目需要单独的会议时间, -请联系 SIG Docs 联合主席或技术主管以创建新的重复 Zoom 会议和日历邀请。 +如果本地化项目需要单独的会议时间, +请联系 SIG Docs 联合主席或技术主管以创建新的重复 Zoom 会议和日历邀请。 仅当团队维持在足够大的规模并需要单独的会议时才需要这样做。 根据 CNCF 政策,本地化团队必须将他们的会议上传到 SIG Docs YouTube 播放列表。 SIG Docs 联合主席或技术主管可以帮助完成该过程,直到 SIG Docs 实现自动化。 - ### 分支策略 {#branching-strategy} @@ -662,10 +670,10 @@ To collaborate on a localization branch: 在本地化分支上协作需要: - 2. 
个人贡献者基于本地化分支创建新的特性分支 @@ -703,13 +711,13 @@ To collaborate on a localization branch: 4. 批准人会定期发起并批准新的 PR,将本地化分支合并到其源分支。 在批准 PR 之前,请确保先 squash commits。 - 根据需要重复步骤 1-4,直到完成本地化工作。例如,随后的德语本地化分支将是: `dev-1.12-de.2`、`dev-1.12-de.3`,等等。 - +--> 在团队每个里程碑的开始时段,创建一个 issue 来比较先前的本地化分支 和当前的本地化分支之间的上游变化很有帮助。 现在有两个脚本用来比较上游的变化。 @@ -751,13 +759,13 @@ While only approvers can open a new localization branch and merge pull requests, 虽然只有批准人才能创建新的本地化分支并合并 PR,任何人都可以 为新的本地化分支提交一个拉取请求(PR)。不需要特殊权限。 - 有关基于派生或直接从仓库开展工作的更多信息,请参见 ["派生和克隆"](#fork-and-clone-the-repo)。 - diff --git a/content/zh/docs/contribute/localization_zh.md b/content/zh-cn/docs/contribute/localization_zh.md similarity index 88% rename from content/zh/docs/contribute/localization_zh.md rename to content/zh-cn/docs/contribute/localization_zh.md index c55d03941fcda..56c2432416342 100644 --- a/content/zh/docs/contribute/localization_zh.md +++ b/content/zh-cn/docs/contribute/localization_zh.md @@ -6,7 +6,7 @@ content_type: concept 本节详述文档中文本地化过程中须注意的事项。 -这里列举的内容包含了*中文本地化小组*早期给出的指导性建议和后续实践过程中积累的经验。 +这里列举的内容包含了**中文本地化小组**早期给出的指导性建议和后续实践过程中积累的经验。 在阅读、贡献、评阅中文本地化文档的过程中,如果对本文的指南有任何改进建议, 都请直接提出 PR。我们欢迎任何形式的补充和更正! @@ -158,28 +158,19 @@ weight: 30 通过 HTML 注释的短代码仍会被运行,因此需要额外小心。建议处理方式: ``` - {{}} -中文译文 -{{}} -``` - -评阅人应该不难理解中英文段落的对应关系。但是如果采用下面的方式, -则会出现两个 `note`,因此需要避免。这是因为被注释起来的短代码仍会起作用! - -``` -{{}} 中文译文 {{}} ``` +{{< note >}} +现行风格与之前风格有些不同,这是因为较新的 Hugo 版本已经能够正确处理短代码中的注释段落。 +保持注释掉的英文与译文都在短代码内更便于维护。 +{{< /note >}} + ### 译与不译 #### 资源名称或字段不译 @@ -198,13 +189,13 @@ deployment 来表示名为 "Deployment" 的 API 资源类型和对象实例。 #### 代码中的注释 -一般而言,代码中的注释需要翻译,包括存放在 `content/zh/examples/` +一般而言,代码中的注释需要翻译,包括存放在 `content/zh-cn/examples/` 目录下的清单文件中的注释。 #### 出站链接 -如果超级链接的目标是 Kubernetes 网站之外的纯英文网页,链接中的内容*可以*不翻译。 +如果超级链接的目标是 Kubernetes 网站之外的纯英文网页,链接中的内容**可以**不翻译。 例如: ``` @@ -214,16 +205,18 @@ Please check [installation caveats](https://acme.com/docs/v1/caveats) ... 请参阅 [installation caveats](https://acme.com/docs/v1/caveats) ... ``` +{{< note >}} 注意,这里的 `installation` 与 `参阅` 之间留白,因为解析后属于中英文混排的情况。 +{{< /note >}} ### 标点符号 -译文中标点符号要使用全角字符,除非以下两种情况: +1. 译文中标点符号要使用全角字符,除非以下两种情况: -- 标点符号是英文命令的一部分; -- 标点符号是 Markdown 语法的一部分。 + - 标点符号是英文命令的一部分; + - 标点符号是 Markdown 语法的一部分。 -英文排比句式中采用的逗号,在译文中要使用顿号代替,以便符合中文书写习惯。 +1. 英文排比句式中采用的逗号,在译文中要使用顿号代替,以便符合中文书写习惯。 ## 更新译文 @@ -233,7 +226,7 @@ Please check [installation caveats](https://acme.com/docs/v1/caveats) ... 为确保准确跟踪中文化版本与英文版本之间的差异,中文内容的 PR 所包含的每个页面都必须是“最新的”。 这里的“最新”指的是对应的英文页面中的更改已全部同步到中文页面。 -如果某中文 PR 中包含对 `content/zh/docs/foo/bar.md` 的更改,且文件 `bar.md` +如果某中文 PR 中包含对 `content/zh-cn/docs/foo/bar.md` 的更改,且文件 `bar.md` 的上次更改日期是 `2020-10-01 01:02:03 UTC`,对应 GIT 标签 `abcd1234`, 则 `bar.md` 应包含自 `abcd1234` 以来 `content/en/docs/foo/bar.md` 的所有变更, 否则视此 PR 为不完整 PR,会破坏我们对上游变更的跟踪。 @@ -242,7 +235,7 @@ Please check [installation caveats](https://acme.com/docs/v1/caveats) ... `bar.md` 上次提交以来发生的所有变更,可使用: ``` -./scripts/lsync.sh content/zh/docs/foo/bar.md +./scripts/lsync.sh content/zh-cn/docs/foo/bar.md ``` ## 关于链接 @@ -273,18 +266,18 @@ Please check [installation caveats](https://acme.com/docs/v1/caveats) ... For more information, please check [volumes](/docs/concepts/storage/) ... 
--> -更多的信息可参考[卷](/zh/docs/concepts/storage/)页面。 +更多的信息可参考[卷](/zh-cn/docs/concepts/storage/)页面。 ``` 如果对应目标页面尚未本地化,建议登记一个 Issue。 {{< note >}} Website 的仓库中 `scripts/linkchecker.py` 是一个工具,可用来检查页面中的链接。 -例如,下面的命令检查中文本地化目录 `/content/zh/docs/concepts/containers/` +例如,下面的命令检查中文本地化目录 `/content/zh-cn/docs/concepts/containers/` 中所有 Markdown 文件中的链接合法性: -``` -./scripts/linkchecker.py -l zh -f /docs/concepts/containers/**/*.md +```shell +./scripts/linkchecker.py -l zh-cn -f /docs/concepts/containers/**/*.md ``` {{< /note >}} @@ -293,15 +286,22 @@ Website 的仓库中 `scripts/linkchecker.py` 是一个工具,可用来检查 以下为译文 Markdown 排版格式要求: - 中英文之间留一个空格 + * 这里的“英文”包括以英文呈现的超级链接 - * 这里的中文、英文都不包括标点符号 + * 这里的中文、英文都**不包括**标点符号 + - 译文 Markdown 中不要使用长行,应适当断行。 + * 可根据需要在 80-120 列断行 * 最好结合句子的边界断行,即一句话在一行,不必留几个字转到下一行 * 不要在两个中文字符中间断行,因为这样会造成中文字符中间显示一个多余空格, 如果句子太长,可以从中文与非中文符号之间断行 * 超级链接文字一般较长,可独立成行 +- 英文原文中可能通过 `_text_` 或者 `*text*` 的形式用斜体突出部分字句。 + 考虑到中文斜体字非常不美观,在译文中应改为 `**译文**` 形式, + 即用双引号语法生成加粗字句,实现突出显示效果。 + {{< warning >}} 我们注意到有些贡献者可能使用了某种自动化工具,在 Markdown 英文原文中自动添加空格。 虽然这些工具可一定程度提高效率,仍然需要提请作者注意,某些工具所作的转换可能是不对的, @@ -309,11 +309,10 @@ Website 的仓库中 `scripts/linkchecker.py` 是一个工具,可用来检查 甚至将超级链接中的半角井号(`#`)转换为全角,导致链接失效。 {{< /warning >}} -英文中 "you" 翻译成 "你" 不必是 “您"。 -文章内的链接用英文,例如 (#deploying),在对应的标题上后面加上 {#deploying} +## 特殊词汇 -## 术语 +英文中 "you" 翻译成 "你",不必翻译为 "您" 以表现尊敬或谦卑。 ### 术语拼写 @@ -324,14 +323,14 @@ Website 的仓库中 `scripts/linkchecker.py` 是一个工具,可用来检查 列举所有 Pod,查看其创建时间 ... [Yes] ``` -*第一次*使用首字母缩写时,应标注其全称和中文译文。例如: +**第一次**使用首字母缩写时,应标注其全称和中文译文。例如: ``` 你可以创建一个 Pod 干扰预算(Pod Disruption Budget,PDB)来解决这一问题。 所谓 PDB 实际上是 ... ``` -对于某些特定于 Kubernetes 语境的术语,也应在*第一次*出现在页面中时给出其英文原文, +对于某些特定于 Kubernetes 语境的术语,也应在**第一次**出现在页面中时给出其英文原文, 以便读者对照阅读。例如: ``` diff --git a/content/zh-cn/docs/contribute/new-content/_index.md b/content/zh-cn/docs/contribute/new-content/_index.md new file mode 100644 index 0000000000000..e37acc6428c4a --- /dev/null +++ b/content/zh-cn/docs/contribute/new-content/_index.md @@ -0,0 +1,200 @@ +--- +title: 贡献新内容 +content_type: 概念 +main_menu: true +weight: 20 +--- + + + + + + +本节包含你在贡献新内容之前需要知晓的信息。 + + + + +{{< mermaid >}} +flowchart LR + subgraph second[开始之前] + direction TB + S[ ] -.- + A[签署 CNCF CLA] --> B[选择 Git 分支] + B --> C[每个 PR 一种语言] + C --> F[检查贡献者工具] + end + subgraph first[基本知识] + direction TB + T[ ] -.- + D[用 markdown 编写文档
        并用 Hugo 构建网站] --- E[GitHub 源代码] + E --- G['/content/../docs' 文件夹包含
        多语言文档] + G --- H[评审 Hugo 页面内容
        类型和短代码] + end + + + first ----> second + + +classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px; +classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold +classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000 +class A,B,C,D,E,F,G,H grey +class S,T spacewhite +class first,second white +{{}} + + + +***插图 - 贡献新内容准备工作*** + +上图描述了你在提交新内容之前需要知晓的信息。 +详细信息见下文。 + + + + +## 基本知识 + +- 使用 Markdown 编写 Kubernetes 文档并使用 [Hugo](https://gohugo.io/) 构建网站。 +- Kubernetes 文档使用 [CommonMark](https://commonmark.org/) 作为 Markdown 的风格。 +- 源代码位于 [GitHub](https://github.com/kubernetes/website) 仓库中。 + 你可以在 `/content/zh-cn/docs/` 目录下找到 Kubernetes 文档。 + 某些参考文档是使用位于 `update-imported-docs/` 目录下的脚本自动生成的。 +- [页面内容类型](/zh-cn/docs/contribute/style/page-content-types/)使用 Hugo 描述文档内容的呈现。 + + + +- 你可以使用 [Docsy 短代码](https://www.docsy.dev/docs/adding-content/shortcodes/) + 或[定制的 Hugo 短代码](/zh-cn/docs/contribute/style/hugo-shortcodes/)贡献 Kubernetes 文档。 +- 除了标准的 Hugo 短代码外, + 我们还在文档中使用一些[定制的 Hugo 短代码](/zh-cn/docs/contribute/style/hugo-shortcodes/)来控制内容的呈现。 +- 文档的源代码有多种语言形式,位于 `/content/` 目录下。 + 每种语言都有一个自己的目录,用两个字母表示,这两个字母是基于 + [ISO 639-1 标准](https://www.loc.gov/standards/iso639-2/php/code_list.php)来确定的。 + 例如,英语文档的源代码位于 `/content/en/docs/` 目录下。 +- 关于为多语言文档做贡献以及如何开始新翻译的详细信息, + 可参考[本地化文档](/zh-cn/docs/contribute/localization)。 + + + +## 开始之前 {#before-you-begin} + +### 签署 CNCF CLA {#sign-the-cla} + +所有 Kubernetes 贡献者**必须**阅读[贡献者指南](https://github.com/kubernetes/community/blob/master/contributors/guide/README.md) +并[签署贡献者授权同意书 (Contributor License Agreement, CLA)](https://github.com/kubernetes/community/blob/master/CLA.md)。 + +若贡献者尚未签署 CLA,其发起的 PR 将无法通过自动化测试。 +你所提供的姓名和邮件地址必须与 `git config` 中配置的完全相同, +而且你的 git 用户名和邮件地址必须与用来签署 CNCF CLA 的信息一致。 + + + +### 选择要使用的 Git 分支 + +在发起 PR 时,你需要预先知道基于哪个分支来开展工作。 + +场景 | 分支 +:---------|:------------ +针对当前发行版本的,对现有英文内容的修改或新的英文内容 | `main` + 针对功能特性变更的内容 | 分支对应于功能特性变更的主要和次要版本,分支名称采用 `dev-` 的模式。例如,如果某功能特性在 `v{{< skew nextMinorVersion >}}` 版本发生变化,则对应的文档变化要添加到 `dev-{{< skew nextMinorVersion >}}` 分支。 + 其他语言的内容(本地化) | 基于本地化团队的约定。参见[本地化分支策略](/zh-cn/docs/contribute/localization/#branching-strategy)了解更多信息。 + +如果你仍不能确定要选择哪个分支,请在 Slack 的 `#sig-docs` 频道上提出问题。 + + + +{{< note >}} +如果你已经提交了 PR,并且发现所针对的分支选错了,你(且只有作为提交人的你)可以更改分支。 +{{< /note >}} + + + +### 每个 PR 牵涉的语言 + +请确保每个 PR 仅涉及一种语言。 +如果你需要对多种语言下的同一代码示例进行相同的修改,也请为每种语言发起一个独立的 PR。 + + + +## 为贡献者提供的工具 + +`kubernetes/website` 仓库的[文档贡献者工具](https://github.com/kubernetes/website/tree/main/content/zh-cn/docs/doc-contributor-tools)目录中包含了一些工具, +有助于使你的贡献过程更为顺畅。 diff --git a/content/zh/docs/contribute/new-content/blogs-case-studies.md b/content/zh-cn/docs/contribute/new-content/blogs-case-studies.md similarity index 71% rename from content/zh/docs/contribute/new-content/blogs-case-studies.md rename to content/zh-cn/docs/contribute/new-content/blogs-case-studies.md index 71ebf83930768..aa425e2bbb753 100644 --- a/content/zh/docs/contribute/new-content/blogs-case-studies.md +++ b/content/zh-cn/docs/contribute/new-content/blogs-case-studies.md @@ -34,14 +34,78 @@ Anyone can write a blog post and submit it for review. 
--> ## Kubernetes 博客 -Kubernetes 博客用于项目发布新功能特性、社区报告以及其他一些可能对整个社区 -很重要的新闻。 +Kubernetes 博客用于项目发布新功能特性、 +社区报告以及其他一些可能对整个社区很重要的新闻。 其读者包括最终用户和开发人员。 -大多数博客的内容是关于核心项目中正在发生的事情,不过我们也鼓励你提交一些 -关于生态系统中其他地方发生的事情的博客。 +大多数博客的内容是关于核心项目中正在发生的事情, +不过我们也鼓励你提交一些有关生态系统中其他时事的博客。 任何人都可以撰写博客并提交评阅。 + +### 提交博文 + +博文不应该是商业性质的,应该包含广泛适用于 Kubernetes 社区的原创内容。 +合适的博客内容包括: + +- Kubernetes 新能力 +- Kubernetes 项目更新信息 +- 来自特别兴趣小组(Special Interest Groups, SIG)的更新信息 +- 教程和演练 +- 有关 Kubernetes 的纲领性理念 +- Kubernetes 合作伙伴 OSS 集成信息 +- **仅限原创内容** + + + +不合适的博客内容包括: + +- 供应商产品推介 +- 不含集成信息和客户故事的合作伙伴更新信息 +- 已发表的博文(可刊登博文译稿) + + +要提交博文,你可以遵从以下步骤: + +1. 如果你还未签署 CLA,请先[签署 CLA](https://kubernetes.io/docs/contribute/start/#sign-the-cla)。 +2. 查阅[网站仓库](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts)中现有博文的 Markdown 格式。 +3. 在你所选的文本编辑器中撰写你的博文。 +4. 在第 2 步的同一链接上,点击 **Create new file** 按钮。 + 将你的内容粘贴到编辑器中。为文件命名,使其与提议的博文标题一致, + 但不要在文件名中写日期。 + 博客评阅者将与你一起确定最终的文件名和发表博客的日期。 +5. 保存文件时,GitHub 将引导你完成 PR 流程。 +6. 博客评阅者将评阅你提交的内容,并与你一起处理反馈和最终细节。 + 当博文被批准后,博客将排期发表。 + - 博客内容应该对 Kubernetes 用户有用。 - 与参与 Kubernetes SIGs 活动相关,或者与这类活动的结果相关的主题通常是切题的。 @@ -98,6 +163,7 @@ Kubernetes 博客用于项目发布新功能特性、社区报告以及其他一 - 很多 CNCF 项目有自己的博客。这些博客通常是更好的选择。 有些时候,某个 CNCF 项目的主要功能特性或者里程碑的变化可能是用户有兴趣在 Kubernetes 博客上阅读的内容。 + - 关于为 Kubernetes 项目做贡献的博客内容应该放在 [Kubernetes 贡献者站点](https://kubernetes.dev)上。 ### 提交博客的技术考虑 -所提交的内容应该是 Markdown 格式的,以便能够被[Hugo](https://gohugo.io/) 生成器来处理。 +所提交的内容应该是 Markdown 格式的,以便能够被 [Hugo](https://gohugo.io/) 生成器来处理。 关于如何使用相关技术,有[很多可用的资源](https://gohugo.io/documentation/)。 我们知道这一需求可能给那些对此过程不熟悉的朋友们带来不便, @@ -141,7 +207,6 @@ To submit a blog post follow these directions: SIG Docs [博客子项目](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject) 负责管理博客的评阅过程。 更多信息可参考[提交博文](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject#submit-a-post)。 - 要提交博文,你可以遵从以下指南: -- [发起一个包含博文的 PR](/zh/docs/contribute/new-content/open-a-pr/#fork-the-repo)。 +- [发起一个包含新博文的 PR](/zh-cn/docs/contribute/new-content/open-a-pr/#fork-the-repo)。 新博文要创建于 [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/main/content/en/blog/_posts) 目录下。 - 确保你的博文遵从合适的命名规范,并带有下面的引言(元数据)信息: @@ -198,6 +263,30 @@ SIG Docs [博客子项目](https://github.com/kubernetes/community/tree/master/s - 博客团队会对 PR 内容进行评阅,为你提供一些评语以便修订。 之后,机器人会将你的博文合并并发表。 + + + - 如果博文的内容仅包含预期无需更新就能对读者保持精准的内容, + 则可以将这篇博文标记为长期有效(evergreen), + 且免除添加博文发表一年后内容过期的自动警告。 + - 要将一篇博文标记为长期有效,请在引言部分添加以下标记: + + ```yaml + evergreen: true + ``` + - 不应标记为长期有效的内容示例: + - 仅适用于特定发行版或版本而不是所有未来版本的**教程** + - 对非正式发行(Pre-GA)API 或功能特性的引用 + 如果你在处理的功能特性处于 Alpha 或 Beta 阶段并由某特性门控控制, 请确保在你的 PR 中,该特性门控被添加到 -[Alpha/Beta 特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) +[Alpha/Beta 特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) 表格中。对于新的特性门控选项,需要为该特性门控提供一段描述。 如果所处理的功能特性已经进入正式发布(GA)状态或者被废弃, 请确保将其从上述表格中迁移到 -[已毕业或废弃的特性](/zh/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features) +[已毕业或废弃的特性](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features) 表格中,并确保迁移后保留其 Alpha、Beta 版本变迁历史。 {{< note >}} -**代码开发者们**:如果你在为下一个 Kubernetes 发行版本中的某功能特性 -撰写文档,请参考[为新功能撰写文档](/zh/docs/contribute/new-content/new-features/)。 +**代码开发者们**:如果你在为下一个 Kubernetes 发行版本中的某功能特性撰写文档, +请参考[为发行版本撰写功能特性文档](/zh-cn/docs/contribute/new-content/new-features/)。 {{< /note >}} 
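上述要求中包括签署 CLA,而且提交所使用的 git 姓名与邮箱必须与签署 CLA 时填写的信息一致。发起 PR 之前可以先在本地自查一下;下面的命令只是示意:

```bash
# 示意:确认本地 git 配置的身份信息与签署 CNCF CLA 时一致
git config --get user.name
git config --get user.email
```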
要贡献新的内容页面或者改进已有内容页面,请发起拉取请求(PR)。 -请确保你满足了[开始之前](/zh/docs/contribute/new-content/overview/#before-you-begin) -节中所列举的所有要求。 +请确保你满足了[开始之前](/zh-cn/docs/contribute/new-content/#before-you-begin)一节中所列举的所有要求。 - +## 使用 GitHub 提交变更 {#changes-using-github} +如果你在 git 工作流方面欠缺经验,这里有一种发起拉取请求的更为简单的方法。 +下图勾勒了后续的步骤和细节。 + + + + +{{< mermaid >}} +flowchart LR +A([fa:fa-user 新的
        贡献者]) --- id1[(K8s/Website
        GitHub)] +subgraph tasks[使用 GitHub 提交变更] +direction TB + 0[ ] -.- + 1[1. 编辑此页] --> 2[2. 使用 GitHub markdown
        编辑器进行修改] + 2 --> 3[3. 填写 Propose file change] + +end +subgraph tasks2[ ] +direction TB +4[4. 选择 Propose file change] --> 5[5. 选择 Create pull request] --> 6[6. 填写 Open a pull request] +6 --> 7[7. 选择 Create pull request] +end + +id1 --> tasks --> tasks2 + +classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px; +classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff; +classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000 +class A,1,2,3,4,5,6,7 grey +class 0 spacewhite +class tasks,tasks2 white +class id1 k8s +{{}} + +***插图 - 使用 GitHub 发起一个 PR 的步骤*** + + -## 使用 GitHub 提交变更 {#changes-using-github} - -如果你在 git 工作流方面欠缺经验,这里有一种发起拉取请求的更为简单的方法。 -1. 在你发现问题的网页,选择右上角的铅笔图标。你也可以滚动到页面底端,选择 - **编辑此页面**。 +1. 在你发现问题的网页,选择右上角的铅笔图标。 + 你也可以滚动到页面底端,选择**编辑此页**。 2. 在 GitHub 的 Markdown 编辑器中修改内容。 -3. 在编辑器的下方,填写 **建议文件变更** 表单。 +3. 在编辑器的下方,填写 **Propose file change** 表单。 在第一个字段中,为你的提交消息取一个标题。 在第二个字段中,为你的提交写一些描述文字。 - + {{< note >}} - 不要在提交消息中使用 [GitHub 关键词](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) + 不要在提交消息中使用 [GitHub 关键词](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword)。 你可以在后续的 PR 描述中使用这些关键词。 {{< /note >}} -4. 选择 **Propose File Change**. -5. 选择 **Create pull request**. -6. 在 **Open a pull request** 屏幕上填写表单: +4. 选择 **Propose File Change**。 +5. 选择 **Create pull request**。 +6. 出现 **Open a pull request** 界面。填写表单: - **Subject** 字段默认为提交的概要信息。你可以根据需要修改它。 - - **Body** 字段包含更为详细的提交消息,如果你之前有填写过的话,以及一些模板文字。 - 填写模板所要求的详细信息,之后删除多余的模板文字。 + - **Body** 字段包含更为详细的提交消息,如果你之前有填写过的话, + 以及一些模板文字。填写模板所要求的详细信息, + 之后删除多余的模板文字。 - 确保 **Allow edits from maintainers** 复选框被勾选。 {{< note >}} PR 描述信息是帮助 PR 评阅人了解你所提议的变更的重要途径。 - 更多信息请参考[发起一个 PR](#open-a-pr). + 更多信息请参考[发起一个 PR](#open-a-pr)。 {{< /note >}} -7. 选择 **Create pull request**. +7. 选择 **Create pull request**。 ## 基于本地克隆副本开展工作 {#work-from-a-local-fork} @@ -168,6 +211,42 @@ Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installi 首先要确保你在本地计算机上安装了 [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)。 你也可以使用 git 的带用户界面的应用。 +下图显示了基于本地克隆副本开展工作的步骤。 +每个步骤的细节如下。 + + + + +{{< mermaid >}} +flowchart LR +1[派生 K8s/website
        仓库] --> 2[创建本地克隆副本
        并指定 upstream 仓库] +subgraph changes[你的变更] +direction TB +S[ ] -.- +3[创建一个分支
        例如: my_new_branch] --> 3a[使用文本编辑器
        进行修改] --> 4["使用 Hugo 在本地
        预览你的变更
        (localhost:1313)
        或构建容器镜像"] +end +subgraph changes2[提交 / 推送] +direction TB +T[ ] -.- +5[提交你的变更] --> 6[将提交推送到
        origin/my_new_branch] +end + +2 --> changes --> changes2 + +classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px; +classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff; +classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000 +class 1,2,3,3a,4,5,6 grey +class S,T spacewhite +class changes,changes2 white +{{}} + + +***插图 - 使用本地克隆副本进行修改*** + ### 创建一个本地克隆副本并指定 upstream 仓库 -3. 打开终端窗口,克隆你所派生的副本: +3. 打开终端窗口,克隆你所派生的副本,并更新 [Docsy Hugo 主题](https://github.com/google/docsy#readme): ```bash git clone git@github.com//website + cd website + git submodule update --init --recursive --depth 1 ``` {{< note >}} 此工作流程与 [Kubernetes 社区 GitHub 工作流](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md)有所不同。 @@ -262,12 +345,12 @@ Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installi 1. 决定你要基于哪个分支来开展工作: - - 针对已有内容的改进,请使用 `upstream/main`; - - 针对已有功能特性的新文档内容,请使用 `upstream/main`; + - 针对已有内容的改进,请使用 `upstream/main`。 + - 针对已有功能特性的新文档内容,请使用 `upstream/main`。 - 对于本地化内容,请基于本地化的约定。 - 可参考[对 Kubernetes 文档进行本地化](/zh/docs/contribute/localization/)了解详细信息。 + 可参考[本地化 Kubernetes 文档](/zh-cn/docs/contribute/localization/)了解详细信息。 - 对于在下一个 Kubernetes 版本中新功能特性的文档,使用独立的功能特性分支。 - 参考[为发行版本功能特性撰写文档](/zh/docs/contribute/new-content/new-features/)了解更多信息。 + 参考[为发行版本撰写功能特性文档](/zh-cn/docs/contribute/new-content/new-features/)了解更多信息。 - 对于很多 SIG Docs 共同参与的,需较长时间才完成的任务,例如内容的重构, 请使用为该任务创建的特性分支。 @@ -276,7 +359,7 @@ Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installi -2. 基于第一步中选定的分支,创建新分支。 +2. 基于第 1 步中选定的分支,创建新分支。 下面的例子假定基础分支是 `upstream/main`: ```bash @@ -285,11 +368,13 @@ Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installi -3. 使用文本编辑器开始构造变更。 + +3. 使用文本编辑器进行修改。 + 在任何时候,都可以使用 `git status` 命令查看你所改变了的文件列表。 ### 在本地预览你的变更 {#preview-locally} -在推送变更或者发起 PR 之前在本地查看一下预览是个不错的注意。 +在推送变更或者发起 PR 之前在本地查看一下预览是个不错的主意。 通过预览你可以发现构建错误或者 Markdown 格式问题。 -你可以构造网站的容器镜像或者在本地运行 Hugo。 -构造容器镜像的方式比较慢,不过能够显示 [Hugo 短代码(shortcodes)](/zh/docs/contribute/style/hugo-shortcodes/), +你可以构建网站的容器镜像或者在本地运行 Hugo。 +构建容器镜像的方式比较慢,不过能够显示 [Hugo 短代码(shortcodes)](/zh-cn/docs/contribute/style/hugo-shortcodes/), 因此对于调试是很有用的。 {{< tabs name="tab_with_hugo" >}} {{% tab name="在容器内执行 Hugo" %}} {{< note >}} 下面的命令中使用 Docker 作为默认的容器引擎。 -如果需要重载这一行为,可以设置 `CONTAINER_ENGINE`。 +如果需要重载这一行为,可以设置 `CONTAINER_ENGINE` 环境变量。 {{< /note >}} -1. 在本地构造镜像; +1. 在本地构建镜像: ```bash # 使用 docker (默认) @@ -441,7 +527,7 @@ You can set up the `CONTAINER_ENGINE` to override this behavior. -2. 在本地构造了 `kubernetes-hugo` 镜像之后,可以构造并启动网站: +2. 在本地构建了 `kubernetes-hugo` 镜像之后,可以构建并启动网站: ```bash # 使用 docker (默认) @@ -473,24 +559,37 @@ Alternately, install and use the `hugo` command on your computer: 1. 安装 [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/main/netlify.toml) 文件中指定的 [Hugo](https://gohugo.io/getting-started/installing/) 版本。 -2. 启动一个终端窗口,进入 Kubernetes 网站仓库目录,启动 Hugo 服务器: +2. 如果你尚未更新你的网站仓库,则 `website/themes/docsy` 目录是空的。 + 如果本地缺少主题的副本,则该站点无法构建。 + 要更新网站主题,运行以下命令: + + ```bash + git submodule update --init --recursive --depth 1 + ``` + +3. 启动一个终端窗口,进入 Kubernetes 网站仓库目录,启动 Hugo 服务器: ```bash cd /website hugo server ``` -3. 在浏览器的地址栏输入: `https://localhost:1313`。 -4. 要停止本地 Hugo 实例,返回到终端窗口并输入 `Ctrl+C` 或者关闭终端窗口。 +4. 在浏览器的地址栏输入: `https://localhost:1313`。 + Hugo 会监测文件的变更并根据需要重新构建网站。 +5. 
要停止本地 Hugo 实例,返回到终端窗口并输入 `Ctrl+C` 或者关闭终端窗口。 + {{% /tab %}} {{< /tabs >}} @@ -499,6 +598,42 @@ Alternately, install and use the `hugo` command on your computer: --> ### 从你的克隆副本向 kubernetes/website 发起拉取请求(PR) {#open-a-pr} + +下图显示了从你的克隆副本向 K8s/website 发起 PR 的步骤。 +详细信息如下。 + + + + +{{< mermaid >}} +flowchart LR +subgraph first[ ] +direction TB +1[1. 前往 K8s/website 仓库] --> 2[2. 选择 New Pull Request] +2 --> 3[3. 选择 compare across forks] +3 --> 4[4. 从 head repository 下拉菜单
        选择你的克隆副本] +end +subgraph second [ ] +direction TB +5[5. 从 compare 下拉菜单
        选择你的分支] --> 6[6. 选择 Create Pull Request] +6 --> 7[7. 为你的 PR
        添加一个描述] +7 --> 8[8. 选择 Create pull request] +end + +first --> second + +classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px; +classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold +class 1,2,3,4,5,6,7,8 grey +class first,second white +{{}} + +***插图 - 从你的克隆副本向 K8s/website 发起一个 PR 的步骤*** + 2. 如果有必要,更新你的提交消息; -3. 使用 `git push origin ` 来推送你的变更,重新出发 Netlify 测试。 +3. 使用 `git push origin ` 来推送你的变更,重新触发 Netlify 测试。 {{< note >}} 如果你使用 `git commit -m` 而不是增补参数,在 PR 最终合并之前你必须 @@ -626,7 +762,7 @@ For more information, see [Git Branching - Basic Branching and Merging](https:// {{< note >}} 要了解更多信息,可参考 [Git 分支管理 - 基本分支和合并](https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging#_basic_merge_conflicts)、 -[高级合并](https://git-scm.com/book/en/v2/Git-Tools-Advanced-Merging)、 +[高级合并](https://git-scm.com/book/en/v2/Git-Tools-Advanced-Merging), 或者在 `#sig-docs` Slack 频道寻求帮助。 {{< /note >}} @@ -730,7 +866,9 @@ If another contributor commits changes to the same file in another PR, it can cr ### 压缩(Squashing)提交 {#squashing-commits} {{< note >}} 要了解更多信息,可参看 @@ -793,7 +931,9 @@ If your PR has multiple commits, you must squash them into a single commit befor 就重设基线操作本身,我们关注 `squash` 和 `pick` 选项。 {{< note >}} 进一步的详细信息可参考 [Interactive Mode](https://git-scm.com/docs/git-rebase#_interactive_mode)。 @@ -866,17 +1006,19 @@ Most repositories use issue and PR templates. Have a look through some open issues and PRs to get a feel for that team's processes. Make sure to fill out the templates with as much detail as possible when you file issues or PRs. --> -每个仓库有其自己的流程和过程。在登记 Issue 或者发起 PR 之前,记得阅读仓库的 -`README.md`、`CONTRIBUTING.md` 和 `code-of-conduct.md` 文件,如果有的话。 +每个仓库有其自己的流程和过程。在登记 Issue 或者发起 PR 之前, +记得阅读仓库可能存在的 `README.md`、`CONTRIBUTING.md` 和 +`code-of-conduct.md` 文件。 -大多数仓库都有自己的 Issue 和 PR 模版。通过查看一些待解决的 Issues 和 -PR,也可以添加对它们的链接。你可以多少了解该团队的流程。 -在登记 Issue 或提出 PR 时,务必尽量填充所给的模版,多提供详细信息。 +大多数仓库都有自己的 Issue 和 PR 模板。 +通过查看一些待解决的 Issue 和 PR, +你可以大致了解协作的流程。 +在登记 Issue 或提出 PR 时,务必尽量填充所给的模板,多提供详细信息。 ## {{% heading "whatsnext" %}} -- 阅读[评阅](/zh/docs/contribute/review/reviewing-prs)节,学习评阅过程。 +- 阅读[评阅](/zh-cn/docs/contribute/review/reviewing-prs)节,学习评阅过程。 diff --git a/content/zh/docs/contribute/participate/_index.md b/content/zh-cn/docs/contribute/participate/_index.md similarity index 88% rename from content/zh/docs/contribute/participate/_index.md rename to content/zh-cn/docs/contribute/participate/_index.md index ba6ab0c738a97..42b4df0ef61c3 100644 --- a/content/zh/docs/contribute/participate/_index.md +++ b/content/zh-cn/docs/contribute/participate/_index.md @@ -40,8 +40,8 @@ SIG Docs 欢迎所有贡献者提供内容和审阅。任何人可以提交拉 欢迎所有人对文档内容创建 Issue 和对正在处理中的 PR 进行评论。 -你也可以成为[成员(member)](/docs/contribute/participating/roles-and-responsibilities/#members)、 -[评阅人(reviewer)](/docs/contribute/participating/roles-and-responsibilities/#reviewers) 或者 -[批准人(approver)](/docs/contribute/participating/roles-and-responsibilities/#approvers)。 +你也可以成为[成员(member)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#members)、 +[评阅人(reviewer)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#reviewers) 或者 +[批准人(approver)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers)。 这些角色拥有更高的权限,且需要承担批准和提交变更的责任。 有关 Kubernetes 社区中的成员如何工作的更多信息,请参见 [社区成员身份](https://github.com/kubernetes/community/blob/master/community-membership.md)。 @@ -72,7 +72,7 @@ of the Kubernetes project as a whole and how SIG Docs works within it. 
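上文要求在 PR 合并之前把多个提交压缩(squash)成一个。下面用交互式变基给出一个最小示意;提交数量和分支名都只是演示用的假设值:

```bash
# 示意:把最近的 3 个提交压缩成一个(数量按实际情况调整)
git rebase -i HEAD~3
# 在弹出的编辑器中保留第一行的 pick,把其余各行改为 squash(或 s),然后保存退出

# 变基会改写历史,需要强制更新你派生仓库中的对应分支
git push --force-with-lease origin my_new_branch
```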
See [Leadership](https://github.com/kubernetes/community/tree/master/sig-docs#leadership) for the current list of chairpersons. --> -## SIG Docs 主席 +## SIG Docs 主席 {#sig-docs-chairperson} 每个 SIG,包括 SIG Docs,都会选出一位或多位成员作为主席。 主席会成为 SIG Docs 和其他 Kubernetes 组织的联络接口人。 @@ -125,7 +125,7 @@ related to GitHub issues and pull requests. The [Kubernetes website repository](https://github.com/kubernetes/website) uses two [prow plugins](https://github.com/kubernetes/test-infra/blob/master/prow/plugins): --> -### OWNERS 文件和扉页 +### OWNERS 文件和扉页 {#owners-files-and-front-matter} Kubernetes 项目使用名为 prow 的自动化工具来自动处理 GitHub issue 和 PR。 [Kubernetes website 仓库](https://github.com/kubernetes/website) 使用了两个 @@ -144,7 +144,7 @@ how prow works within the repository. 这两个插件使用位于 `kubernetes/website` 仓库顶层的 [OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) 文件和 [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) -文件来控制 prow 在仓库范围的工作方式。 +文件来控制 prow 在仓库范围的工作方式。 -OWNERS 文件包含 SIG Docs 评阅人和批准人的列表。 +OWNERS 文件包含 SIG Docs 评阅人和批准人的列表。 OWNERS 文件也可以存在于子目录中,可以在子目录层级重新设置哪些人可以作为评阅人和 -批准人,并将这一设定传递到下层子目录。 +批准人,并将这一设定传递到下层子目录。 关于 OWNERS 的更多信息,请参考 [OWNERS](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md) 文档。 @@ -206,7 +206,7 @@ SIG Docs 批准人。下面是合并的工作机制: - 所有 Kubernetes 成员可以通过 `/lgtm` 评论添加 `lgtm` 标签。 - 只有 SIG Docs 批准人可以通过评论 `/approve` 合并 PR。 某些批准人还会执行一些其他角色,例如 - [PR 管理者](/zh/docs/contribute/participate/pr-wranglers/) 或 + [PR 管理者](/zh-cn/docs/contribute/participate/pr-wranglers/) 或 [SIG Docs 主席](#sig-docs-chairperson)等。 ## {{% heading "whatsnext" %}} @@ -220,6 +220,6 @@ For more information about contributing to the Kubernetes documentation, see: --> 关于贡献 Kubernetes 文档的更多信息,请参考: -- [贡献新内容](/zh/docs/contribute/new-content/overview/) -- [评阅内容](/zh/docs/contribute/review/reviewing-prs) -- [文档样式指南](/zh/docs/contribute/style/) +- [贡献新内容](/zh-cn/docs/contribute/new-content/) +- [评阅内容](/zh-cn/docs/contribute/review/reviewing-prs) +- [文档样式指南](/zh-cn/docs/contribute/style/) diff --git a/content/zh/docs/contribute/participate/pr-wranglers.md b/content/zh-cn/docs/contribute/participate/pr-wranglers.md similarity index 67% rename from content/zh/docs/contribute/participate/pr-wranglers.md rename to content/zh-cn/docs/contribute/participate/pr-wranglers.md index bf08aa770ae0a..9ec889d8cdaab 100644 --- a/content/zh/docs/contribute/participate/pr-wranglers.md +++ b/content/zh-cn/docs/contribute/participate/pr-wranglers.md @@ -11,15 +11,15 @@ weight: 20 -SIG Docs 的[批准人(Approvers)](/zh/docs/contribute/participate/roles-and-responsibilites/#approvers)们每周轮流负责 -[管理仓库的 PRs](https://github.com/kubernetes/website/wiki/PR-Wranglers)。 +SIG Docs 的[批准人(Approvers)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers)们每周轮流负责 +[管理仓库的 PR](https://github.com/kubernetes/website/wiki/PR-Wranglers)。 -本节介绍 PR 管理者的职责。关于如何提供较好的评审意见,可参阅 -[评审变更](/zh/docs/contribute/review/). +本节介绍 PR 管理者的职责。关于如何提供较好的评审意见, +可参阅[评审变更](/zh-cn/docs/contribute/review/)。 @@ -31,42 +31,57 @@ Each day in a week-long shift as PR Wrangler: - Triage and tag incoming issues daily. See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata. 
- Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality and adherence to the [Style](/docs/contribute/style/style-guide/) and [Content](/docs/contribute/style/content-guide/) guides. - Start with the smallest PRs (`size/XS`) first, and end with the largest (`size/XXL`). Review as many PRs as you can. -- Make sure PR contributors sign the [CLA](https://github.com/kubernetes/community/blob/master/CLA.md). - - Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors that haven’t signed the CLA to do so. -- Provide feedback on changes and ask for technical reviews from members of other SIGs. - - Provide inline suggestions on the PR for the proposed content changes. - - If you need to verify content, comment on the PR and request more details. - - Assign relevant `sig/` label(s). - - If needed, assign reviewers from the `reviewers:` block in the file's front matter. -- Use the `/approve` comment to approve a PR for merging. Merge the PR when ready. - - PRs should have a `/lgtm` comment from another member before merging. - - Consider accepting technically accurate content that doesn't meet the [style guidelines](/docs/contribute/style/style-guide/). Open a new issue with the label `good first issue` to address style concerns. --> ## 职责 {#duties} 在为期一周的轮值期内,PR 管理者要: - 每天对新增的 Issues 判定和打标签。参见 - [对 Issues 进行判定和分类](/zh/docs/contribute/review/for-approvers/#triage-and-categorize-issues) + [对 Issues 进行判定和分类](/zh-cn/docs/contribute/review/for-approvers/#triage-and-categorize-issues) 以了解 SIG Docs 如何使用元数据的详细信息。 - 检查[悬决的 PR](https://github.com/kubernetes/website/pulls) 的质量并确保它们符合 - [样式指南](/zh/docs/contribute/style/style-guide/)和 - [内容指南](/zh/docs/contribute/style/content-guide/)要求。 + [样式指南](/zh-cn/docs/contribute/style/style-guide/)和 + [内容指南](/zh-cn/docs/contribute/style/content-guide/)要求。 - 首先查看最小的 PR(`size/XS`),然后逐渐扩展到最大的 PR(`size/XXL`),尽可能多地评审 PR。 + - 确保贡献者完成 [CLA](https://github.com/kubernetes/community/blob/master/CLA.md) 签署。 - 使用[此脚本](https://github.com/zparnold/k8s-docs-pr-botherer)自动提醒尚未签署 CLA 的贡献者签署 CLA。 - 针对提供提供反馈,请求其他 SIG 的成员进行技术审核。 - 为 PR 所建议的内容更改提供就地反馈。 - - 如果您需要验证内容,请在 PR 上发表评论并要求贡献者提供更多细节。 + - 如果你需要验证内容,请在 PR 上发表评论并要求贡献者提供更多细节。 - 设置相关的 `sig/` 标签。 - - 如果需要,从文件开头的 `reviewers:` 块中指派评阅人。 + - 如果需要,根据文件开头的 `reviewers:` 块来指派评审人。 + - 你也可以通过在 PR 上作出 `@kubernetes/-pr-reviews` 的评论以标记需要某个 + [SIG](https://github.com/kubernetes/community/blob/master/sig-list.md) 来评审。 + - 使用 `/approve` 评论来批准可以合并的 PR,在 PR 就绪时将其合并。 - PR 在被合并之前,应该有来自其他成员的 `/lgtm` 评论。 - 可以考虑接受那些技术上准确,但文风上不满足 - [风格指南](/zh/docs/contribute/style/style-guide/)要求的 PR。 - 可以登记一个新的 Issue 来解决文档风格问题,并将其标记为 `good first issue`。 + [风格指南](/zh-cn/docs/contribute/style/style-guide/)要求的 PR。 + 批准变更时,可以登记一个新的 Issue 来解决文档风格问题。 + 你通常可以将这些风格修复问题标记为 `good first issue`。 + - 将风格修复事项标记为 `good first issue` 可以很好地确保向新加入的贡献者分派一些比较简单的任务, + 这有助于接纳新的贡献者。 -### 对于管理人有用的 GitHub 查询 +### 对管理者有用的 GitHub 查询 执行管理操作时,以下查询很有用。完成以下这些查询后,剩余的要审阅的 PR 列表通常很小。 这些查询都不包含本地化的 PR,并仅包含主分支上的 PR(除了最后一个查询)。 @@ -164,11 +179,46 @@ To close a pull request, leave a `/close` comment on the PR. 
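除了在浏览器中使用上述 GitHub 查询,也可以借助 GitHub CLI(`gh`,需要单独安装)在终端中做类似的筛选。下面是一个示意,标签名请以仓库中的实际标签为准:

```bash
# 示意:列出 kubernetes/website 上带有 size/XS 标签、仍处于打开状态的 PR
gh pr list --repo kubernetes/website --label "size/XS" --state open

# 示意:列出被标记为尚未签署 CLA 的 PR(标签名仅作演示)
gh pr list --repo kubernetes/website --label "cncf-cla: no" --state open
```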
要关闭 PR,请在 PR 上输入 `/close` 评论。 {{< note >}} -一个名为 [`fejta-bot`](https://github.com/fejta-bot) 的自动服务会在 Issue 停滞 90 +一个名为 [`k8s-ci-robot`](https://github.com/k8s-ci-robot) 的自动服务会在 Issue 停滞 90 天后自动将其标记为过期;然后再等 30 天,如果仍然无人过问,则将其关闭。 PR 管理者应该在 issues 处于无人过问状态 14-30 天后关闭它们。 {{< /note >}} + +## PR 管理者影子计划 + +2021 下半年,SIG Docs 推出了 PR 管理者影子计划(PR Wrangler Shadow Program)。 +该计划旨在帮助新的贡献者们了解 PR 管理流程。 + + +### 成为一名影子 + +- 如果你有兴趣成为一名 PR 管理者的影子,请访问 [PR 管理者维基页面](https://github.com/kubernetes/website/wiki/PR-Wranglers)查看今年的 + PR 管理轮值表,然后注册报名。 + +- Kubernetes 组织成员可以编辑 [PR 管理者维基页面](https://github.com/kubernetes/website/wiki/PR-Wranglers), + 注册成为一名现有 PR 管理者一周内的影子。 + +- 其他人可以通过 [#sig-docs Slack 频道](https://kubernetes.slack.com/messages/sig-docs)申请成为指定 + PR 管理者某一周的影子。可以随时咨询 (`@bradtopol`) 或某一位 + [SIG Docs 联席主席/主管](https://github.com/kubernetes/community/tree/master/sig-docs#leadership)。 + +- 注册成为一名 PR 管理者的影子时, + 请你在 [Kubernetes Slack](https://slack.k8s.io) 向这名 PR 管理者做一次自我介绍。 diff --git a/content/zh/docs/contribute/participate/roles-and-responsibilities.md b/content/zh-cn/docs/contribute/participate/roles-and-responsibilities.md similarity index 91% rename from content/zh/docs/contribute/participate/roles-and-responsibilities.md rename to content/zh-cn/docs/contribute/participate/roles-and-responsibilities.md index 7739af7584436..fc410d58456d5 100644 --- a/content/zh/docs/contribute/participate/roles-and-responsibilities.md +++ b/content/zh-cn/docs/contribute/participate/roles-and-responsibilities.md @@ -62,12 +62,12 @@ For more information, see [contributing new content](/docs/contribute/new-conten [SIG Docs 邮件列表](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) 上提出改进建议。 -在[签署了 CLA](/zh/docs/contribute/new-content/overview/#sign-the-cla) 之后,任何人还可以: +在[签署了 CLA](/zh-cn/docs/contribute/new-content/#sign-the-cla) 之后,任何人还可以: - 发起拉取请求(PR),改进现有内容、添加新内容、撰写博客或者案例分析 - 创建示意图、图形资产或者嵌入式的截屏和视频内容 -进一步的详细信息,可参见[贡献新内容](/zh/docs/contribute/new-content/)。 +进一步的详细信息,可参见[贡献新内容](/zh-cn/docs/contribute/new-content/)。 1. 找到两个[评审人](#reviewers)或[批准人](#approvers)为你的成员身份提供 - [担保](/zh/docs/contribute/advanced#sponsor-a-new-contributor)。 + [担保](/zh-cn/docs/contribute/advanced#sponsor-a-new-contributor)。 通过 [Kubernetes Slack 上的 #sig-docs 频道](https://kubernetes.slack.com) 或者 [SIG Docs 邮件列表](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) @@ -146,7 +146,7 @@ After submitting at least 5 substantial pull requests and meeting the other [req {{< /note >}} 2. 在 [`kubernetes/org`](https://github.com/kubernetes/org/) 仓库 - 使用 **Organization Membership Request** Issue 模版登记一个 Issue。 + 使用 **Organization Membership Request** Issue 模板登记一个 Issue。 1. 发起 PR,将你的 GitHub 用户名添加到 `kubernetes/website` 仓库中 [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) - 文件的特定节。 + 文件的对应节区。 {{< note >}} 如果你不确定要添加到哪个位置,可以将自己添加到 `sig-docs-en-reviews`。 @@ -292,8 +292,8 @@ If approved, a SIG Docs lead adds you to the appropriate GitHub team. Once added 2. 
将 PR 指派给一个或多个 SIG Docs 批准人(`sig-docs-{language}-owners` 下列举的用户名)。 -请求被批准之后,SIG Docs Leads 之一会将你添加到合适的 GitHub 团队。 -一旦添加完成, [@k8s-ci-robot](https://github.com/kubernetes/test-infra/tree/master/prow#bots-home) +申请被批准之后,SIG Docs Leads 之一会将你添加到合适的 GitHub 团队。 +一旦添加完成,[@k8s-ci-robot](https://github.com/kubernetes/test-infra/tree/master/prow#bots-home) 会在处理未来的 PR 时,将 PR 指派给你或者建议你来评审某 PR。 -- 阅读[管理 PR](/zh/docs/contribute/participate/pr-wranglers/),了解所有批准人轮值的一个角色。 +- 阅读 [PR 管理者](/zh-cn/docs/contribute/participate/pr-wranglers/),了解所有批准人轮值的角色。 diff --git a/content/zh/docs/contribute/review/_index.md b/content/zh-cn/docs/contribute/review/_index.md similarity index 100% rename from content/zh/docs/contribute/review/_index.md rename to content/zh-cn/docs/contribute/review/_index.md diff --git a/content/zh/docs/contribute/review/for-approvers.md b/content/zh-cn/docs/contribute/review/for-approvers.md similarity index 97% rename from content/zh/docs/contribute/review/for-approvers.md rename to content/zh-cn/docs/contribute/review/for-approvers.md index c5763a378c6b2..3192603db1538 100644 --- a/content/zh/docs/contribute/review/for-approvers.md +++ b/content/zh-cn/docs/contribute/review/for-approvers.md @@ -27,8 +27,8 @@ In addition to the rotation, a bot assigns reviewers and approvers for the PR based on the owners for the affected files. --> SIG Docs -[评阅人(Reviewers)](/zh/docs/contribute/participate/roles-and-responsibilities/#reviewers) -和[批准人(Approvers)](/zh/docs/contribute/participate/roles-and-responsibilities/#approvers) +[评阅人(Reviewers)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#reviewers) +和[批准人(Approvers)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers) 在对变更进行评审时需要做一些额外的事情。 每周都有一个特定的文档批准人自愿负责对 PR 进行分类和评阅。 @@ -51,7 +51,7 @@ Everything described in [Reviewing a pull request](/docs/contribute/review/revie Kubernetes 文档遵循 [Kubernetes 代码评阅流程](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process)。 -[评阅 PR](/zh/docs/contribute/review/reviewing-prs/) 文档中所描述的所有规程都适用, +[评阅 PR](/zh-cn/docs/contribute/review/reviewing-prs/) 文档中所描述的所有规程都适用, 不过评阅人和批准人还要做以下工作: + -任何人均可评阅文档的拉取请求。访问 Kubernetes 网站仓库的 -[pull requests](https://github.com/kubernetes/website/pulls) -部分可以查看所有待处理的拉取请求(PRs)。 +任何人均可评审文档的拉取请求。 +访问 Kubernetes 网站仓库的 [pull requests](https://github.com/kubernetes/website/pulls) 部分, +可以查看所有待处理的拉取请求(PR)。 -评阅文档 PR 是将你自己介绍给 Kubernetes 社区的一种很好的方式。 +评审文档 PR 是将你自己介绍给 Kubernetes 社区的一种很好的方式。 它将有助于你学习代码库并与其他贡献者之间建立相互信任关系。 -在评阅之前,可以考虑: +在评审之前,可以考虑: -- 阅读[内容指南](/zh/docs/contribute/style/content-guide/)和 - [样式指南](/zh/docs/contribute/style/style-guide/)以便给出有价值的评论。 -- 了解 Kubernetes 文档社区中不同的[角色和职责](/zh/docs/contribute/participate/roles-and-responsibilities/)。 +- 阅读[内容指南](/zh-cn/docs/contribute/style/content-guide/)和 + [样式指南](/zh-cn/docs/contribute/style/style-guide/)以便给出有价值的评论。 +- 了解 Kubernetes 文档社区中不同的[角色和职责](/zh-cn/docs/contribute/participate/roles-and-responsibilities/)。 + ## 准备工作 {#before-you-begin} -在你开始评阅之前: +在你开始评审之前: -- 阅读 [CNCF 行为准则](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) - 确保你会始终遵从其中约定; -- 保持有礼貌、体谅他人,怀助人为乐初心; -- 评论时若给出修改建议,也要兼顾 PR 的积极方面 -- 保持同理心,多考虑他人收到评阅意见时的可能反应 -- 假定大家都是好意的,通过问问题澄清意图 -- 如果你是有经验的贡献者,请考虑和新贡献者一起合作,提高其产出质量 +- 阅读 [CNCF 行为准则](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)。 + 确保你会始终遵从其中约定。 +- 保持有礼貌、体谅他人,怀助人为乐初心。 +- 评论时若给出修改建议,也要兼顾 PR 的积极方面。 +- 保持同理心,多考虑他人收到评审意见时的可能反应。 +- 假定大家都是好意的,通过问问题澄清意图。 +- 
如果你是有经验的贡献者,请考虑和新贡献者一起合作,提高其产出质量。 - +## 评审过程 {#review-process} + +一般而言,应该使用英语来评审 PR 的内容和样式。 +图 1 概述了评审流程的各个步骤。 +每个步骤的详细信息如下。 + + + + +{{< mermaid >}} +flowchart LR + subgraph fourth[开始评审] + direction TB + S[ ] -.- + M[添加评论] --> N[评审变更] + N --> O[新手应该
        选择 Comment] + end + subgraph third[选择 PR] + direction TB + T[ ] -.- + J[阅读描述
        和评论]--> K[通过 Netlify 预览构建
        来预览变更] + end + + A[查阅待处理的 PR 清单]--> B[通过标签过滤
        待处理的 PR] + B --> third --> fourth + + +classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px; +classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold +classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000 +class A,B,J,K,M,N,O grey +class S,T spacewhite +class third,fourth white +{{}} + + +图 1. 评审流程步骤。 + -## 评阅过程 {#review-process} - -一般而言,应该使用英语来评阅 PR 的内容和样式。 - 1. 前往 [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls), - 你会看到所有针对 Kubernetes 网站和文档的待处理 PRs。 + 你会看到所有针对 Kubernetes 网站和文档的待处理 PR。 -2. 使用以下标签(组合)对待处理 PRs 进行过滤: +2. 使用以下标签(组合)对待处理 PR 进行过滤: - - `cncf-cla: yes` (建议):由尚未签署 CLA 的贡献者所发起的 PRs 不可以合并。 - 参考[签署 CLA](/zh/docs/contribute/new-content/overview/#sign-the-cla) 以了解更多信息。 - - `language/en` (建议):仅查看英语语言的 PRs。 - - `size/<尺寸>`:过滤特定尺寸(规模)的 PRs。如果你刚入门,可以从较小的 PR 开始。 + - `cncf-cla: yes` (建议):由尚未签署 CLA 的贡献者所发起的 PR 不可以合并。 + 参考[签署 CLA](/zh-cn/docs/contribute/new-content/#sign-the-cla) 以了解更多信息。 + - `language/en` (建议):仅查看英语语言的 PR。 + - `size/<尺寸>`:过滤特定尺寸(规模)的 PR。 + 如果你刚入门,可以从较小的 PR 开始。 此外,确保 PR 没有标记为尚未完成(Work in Progress)。 - 包含 `work in progress` 的 PRs 通常还没准备好被评阅。 + 包含 `work in progress` 的 PR 通常还没准备好被评审。 - +3. 选定 PR 评审之后,可以通过以下方式理解所作的变更: + + - 阅读 PR 描述以理解所作变更,并且阅读所有关联的 Issues。 + - 阅读其他评审人给出的评论。 + - 点击 **Files changed** Tab 页面,查看被改变的文件和代码行。 + - 滚动到 **Conversation** Tab 页面下端的 PR 构建检查节区, + 预览 Netlify 预览构建中的变更。 + 以下是一个屏幕截图(这显示了 GitHub 的桌面版外观; + 如果你在平板电脑或智能手机设备上进行评审, + GitHub 的 Web UI 会略有不同): + {{< figure src="/images/docs/github_netlify_deploy_preview.png" alt="GitHub PR 详细信息,包括 Netlify 预览链接" >}} + 要打开预览,请点击 **deploy/netlify** 行的 **Details** 链接。 + -3. 选定 PR 评阅之后,可以通过以下方式理解所作的变更: - - - 阅读 PR 描述以理解所作变更,并且阅读所有关联的 Issues - - 阅读其他评阅人给出的评论 - - 点击 **Files changed** Tab 页面,查看被改变的文件和代码行 - - 滚动到 **Conversation** Tab 页面下端的 PR 构建检查节区,点击 - **deploy/netlify** 行的 **Details** 链接,预览 Netlify - 预览构建所生成的结果 - -4. 前往 **Files changed** Tab 页面,开始你的评阅工作 - - 1. 点击你希望评论的行旁边的 `+` 号 - 2. 填写你对该行的评论,之后或者选择**Add single comment** (如果你只有一条评论) - 或者 **Start a review** (如果你还有其他评论要添加) - 3. 评论结束时,点击页面顶部的 **Review changes**。这里你可以添加你的评论结语 - (记得留下一些正能量的评论!)、根据需要批准 PR、请求作者进一步修改等等。 +4. 前往 **Files changed** Tab 页面,开始你的评审工作。 + + 1. 点击你希望评论的行旁边的 `+` 号。 + 2. 填写你对该行的评论, + 之后选择 **Add single comment**(如果你只有一条评论) + 或者 **Start a review**(如果你还有其他评论要添加)。 + 3. 评论结束时,点击页面顶部的 **Review changes**。 + 这里你可以添加你的评论结语(记得留下一些正能量的评论!)、 + 根据需要批准 PR、请求作者进一步修改等等。 新手应该选择 **Comment**。 - +## 评审清单 {#reviewing-checklist} + +评审 PR 时可以从下面的条目入手。 + -## 评阅清单 {#reviewing-checklist} - -评阅 PR 时可以从下面的条目入手。 - ### 语言和语法 {#language-and-grammar} - 是否存在明显的语言或语法错误?对某事的描述有更好的方式? - 是否存在一些过于复杂晦涩的用词,本可以用简单词汇来代替? - 是否有些用词、术语或短语可以用不带歧视性的表达方式代替? -- 用词和大小写方面是否遵从了[样式指南](/zh/docs/contribute/style/style-guide/)? +- 用词和大小写方面是否遵从了[样式指南](/zh-cn/docs/contribute/style/style-guide/)? - 是否有些句子太长,可以改得更短、更简单? - 是否某些段落过长,可以考虑使用列表或者表格来表达? @@ -177,10 +231,6 @@ When reviewing, use the following as a starting point. - Does the page appear correctly in the section's side navigation (or at all)? - Should the page appear on the [Docs Home](/docs/home/) listing? - Do the changes show up in the Netlify preview? Be particularly vigilant about lists, code blocks, tables, notes and images. - -### Other - -For small issues with a PR, like typos or whitespace, prefix your comments with `nit:`. This lets the author know the issue is non-critical. 
--> ### 网站 {#Website} @@ -188,15 +238,20 @@ For small issues with a PR, like typos or whitespace, prefix your comments with 如果是这样,PR 是否会导致出现新的失效链接? 是否有其他的办法,比如改变页面标题但不改变其 slug? - PR 是否引入新的页面?如果是: - - 该页面是否使用了正确的[页面内容类型](/zh/docs/contribute/style/page-content-types/) + - 该页面是否使用了正确的[页面内容类型](/zh-cn/docs/contribute/style/page-content-types/) 及相关联的 Hugo 短代码(shortcodes)? - 该页面能否在对应章节的侧面导航中显示?显示得正确么? - - 该页面是否应出现在[网站主页面](/zh/docs/home/)的列表中? + - 该页面是否应出现在[网站主页面](/zh-cn/docs/home/)的列表中? - 变更是否正确出现在 Netlify 预览中了? - 要对列表、代码段、表格、注释和图像等元素格外留心 + 要对列表、代码段、表格、注释和图像等元素格外留心。 + ### 其他 {#other} -对于 PR 中的小问题,例如拼写错误或者空格问题,可以在你的评论前面加上 `nit:`。 +对于 PR 中的小问题,例如拼写错误或者空格问题, +可以在你的评论前面加上 `nit:`。 这样做可以让作者知道该问题不是一个不得了的大问题。 - diff --git a/content/zh/docs/contribute/style/_index.md b/content/zh-cn/docs/contribute/style/_index.md similarity index 100% rename from content/zh/docs/contribute/style/_index.md rename to content/zh-cn/docs/contribute/style/_index.md diff --git a/content/zh/docs/contribute/style/content-guide.md b/content/zh-cn/docs/contribute/style/content-guide.md similarity index 79% rename from content/zh/docs/contribute/style/content-guide.md rename to content/zh-cn/docs/contribute/style/content-guide.md index 1f787a1b93ec0..d506a90975064 100644 --- a/content/zh/docs/contribute/style/content-guide.md +++ b/content/zh-cn/docs/contribute/style/content-guide.md @@ -15,22 +15,22 @@ weight: 10 本页包含 Kubernetes 文档的一些指南。 -如果你不清楚哪些事情是可以做的,请加入到 -[Kubernetes Slack](http://slack.k8s.io/) 的 `#sig-docs` 频道提问! -你可以在 http://slack.k8s.io 注册到 Kubernetes Slack。 +如果你不清楚哪些事情是可以做的,请加入到 +[Kubernetes Slack](https://slack.k8s.io/) 的 `#sig-docs` 频道提问! +你可以在 https://slack.k8s.io 注册到 Kubernetes Slack。 关于为 Kubernetes 文档创建新内容的更多信息,可参考 -[样式指南](/zh/docs/contribute/style/style-guide)。 +[样式指南](/zh-cn/docs/contribute/style/style-guide)。 @@ -42,7 +42,7 @@ Source for the Kubernetes website, including the docs, resides in the Located in the `kubernetes/website/content//docs` folder, the majority of Kubernetes documentation is specific to the [Kubernetes -project](https://github.com/kubernetes/kubernetes). +project](https://github.com/kubernetes/kubernetes). ## What's allowed @@ -72,12 +72,12 @@ Kubernetes 网站(包括其文档)源代码位于 ### Third party content Kubernetes documentation includes applied examples of projects in the Kubernetes project—projects that live in the [kubernetes](https://github.com/kubernetes) and -[kubernetes-sigs](https://github.com/kubernetes-sigs) GitHub organizations. +[kubernetes-sigs](https://github.com/kubernetes-sigs) GitHub organizations. -Links to active content in the Kubernetes project are always allowed. +Links to active content in the Kubernetes project are always allowed. -Kubernetes requires some third party content to function. Examples include container runtimes (containerd, CRI-O, Docker), -[networking policy](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (CNI plugins), [Ingress controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/), and [logging](https://kubernetes.io/docs/concepts/cluster-administration/logging/). +Kubernetes requires some third party content to function. Examples include container runtimes (containerd, CRI-O, Docker), +[networking policy](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (CNI plugins), [Ingress controllers](/docs/concepts/services-networking/ingress-controllers/), and [logging](/docs/concepts/cluster-administration/logging/). 
Docs can link to third-party open source software (OSS) outside the Kubernetes project only if it's necessary for Kubernetes to function. --> @@ -92,9 +92,9 @@ Kubernetes 文档包含 Kubernetes 项目下的多个项目的应用示例。 Kubernetes 需要某些第三方内容才能正常工作。例如 容器运行时(containerd、CRI-O、Docker), -[联网策略](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) -(CNI 插件),[Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers/) -以及[日志](https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/)等。 +[联网策略](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) +(CNI 插件),[Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers/) +以及[日志](/zh-cn/docs/concepts/cluster-administration/logging/)等。 只有对应的第三方开源软件(OSS)是运行 Kubernetes 所必需的,才可以在文档中包含 指向这些 Kubernetes 项目之外的软件的链接。 @@ -109,7 +109,8 @@ Dual-sourced content requires double the effort (or more!) to maintain and grows stale more quickly. {{< note >}} -If you're a maintainer for a Kubernetes project and need help hosting your own docs, + +If you're a maintainer for a Kubernetes project and need help hosting your own docs, ask for help in [#sig-docs on Kubernetes Slack](https://kubernetes.slack.com/messages/C1J0BPD2M/). {{< /note >}} --> @@ -128,15 +129,13 @@ ask for help in [#sig-docs on Kubernetes Slack](https://kubernetes.slack.com/mes ### 更多信息 {#more-information} -如果你对允许出现的内容有疑问,请加入到 [Kubernetes Slack](http://slack.k8s.io/) +如果你对允许出现的内容有疑问,请加入到 [Kubernetes Slack](https://slack.k8s.io/) 的 `#sig-docs` 频道提问! ## {{% heading "whatsnext" %}} -* 阅读[样式指南](/zh/docs/contribute/style/style-guide)。 - - +* 阅读[样式指南](/zh-cn/docs/contribute/style/style-guide)。 diff --git a/content/zh/docs/contribute/style/content-organization.md b/content/zh-cn/docs/contribute/style/content-organization.md similarity index 96% rename from content/zh/docs/contribute/style/content-organization.md rename to content/zh-cn/docs/contribute/style/content-organization.md index eb35c80b5fb2c..247d016057a95 100644 --- a/content/zh/docs/contribute/style/content-organization.md +++ b/content/zh-cn/docs/contribute/style/content-organization.md @@ -159,7 +159,7 @@ One example is [Custom Hugo Shortcodes](/docs/contribute/style/hugo-shortcodes/) 除了独立的内容页面(Markdown 文件),Hugo 还支持 [页面包](https://gohugo.io/content-management/page-bundles/)。 -一个例子是[定制的 Hugo 短代码(shortcodes)](/zh/docs/contribute/style/hugo-shortcodes/)。 +一个例子是[定制的 Hugo 短代码(shortcodes)](/zh-cn/docs/contribute/style/hugo-shortcodes/)。 它被认为是 `leaf bundle`(叶子包)。 目录下的所有内容,包括 `index.md`,都是包的一部分。此外还包括页面间相对链接、可被处理的图像等: @@ -222,7 +222,7 @@ The `SASS` source of the stylesheets for this site is stored below `src/sass` an * Learn about the [Content guide](/docs/contribute/style/content-guide) --> -* 了解[定制 Hugo 短代码](/zh/docs/contribute/style/hugo-shortcodes/) -* 了解[样式指南](/zh/docs/contribute/style/style-guide) -* 了解[内容指南](/zh/docs/contribute/style/content-guide) +* 了解[定制 Hugo 短代码](/zh-cn/docs/contribute/style/hugo-shortcodes/) +* 了解[样式指南](/zh-cn/docs/contribute/style/style-guide) +* 了解[内容指南](/zh-cn/docs/contribute/style/content-guide) diff --git a/content/zh/docs/contribute/style/diagram-guide.md b/content/zh-cn/docs/contribute/style/diagram-guide.md similarity index 96% rename from content/zh/docs/contribute/style/diagram-guide.md rename to content/zh-cn/docs/contribute/style/diagram-guide.md index 7a88803e45b43..010901f6dd3e6 100644 --- a/content/zh/docs/contribute/style/diagram-guide.md +++ b/content/zh-cn/docs/contribute/style/diagram-guide.md @@ -85,10 +85,10 @@ All you need to begin working 
with Mermaid is the following: * 对 Markdown 有一个基本的了解 * 使用 Mermaid 在线编辑器 -* 使用 [Hugo 短代码(shortcode)](/zh/docs/contribute/style/hugo-shortcodes/) +* 使用 [Hugo 短代码(shortcode)](/zh-cn/docs/contribute/style/hugo-shortcodes/) * 使用 [Hugo {{}} 短代码](https://gohugo.io/content-management/shortcodes/#figure) -* 执行 [Hugo 本地预览](/zh/docs/contribute/new-content/open-a-pr/#preview-locally) -* 熟悉[贡献新内容](/zh/docs/contribute/new-content/)的流程 +* 执行 [Hugo 本地预览](/zh-cn/docs/contribute/new-content/open-a-pr/#preview-locally) +* 熟悉[贡献新内容](/zh-cn/docs/contribute/new-content/)的流程 {{< note >}} -你应该使用[本地](/zh/docs/contribute/new-content/open-a-pr/#preview-locally)和 Netlify +你应该使用[本地](/zh-cn/docs/contribute/new-content/open-a-pr/#preview-locally)和 Netlify 预览来验证图表是可以正常渲染的。 {{< caution >}} @@ -564,7 +564,7 @@ Be sure to check that your diagram renders properly using the [local](/docs/contribute/new-content/open-a-pr/#preview-locally) and Netlify previews. --> -要使用[本地](/zh/docs/contribute/new-content/open-a-pr/#preview-locally)和 +要使用[本地](/zh-cn/docs/contribute/new-content/open-a-pr/#preview-locally)和 Netlify 预览来检查你的图表可以正常渲染。 ### 示例 1 - Pod 拓扑分布约束 -图 6 展示的是 [Pod 拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/#node-labels) +图 6 展示的是 [Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/#node-labels) 页面所出现的图表。 {{< mermaid >}} @@ -792,7 +792,7 @@ Figure 7 shows the diagram appearing in the [What is Ingress](/docs/concepts/ser --> ### 示例 2 - Ingress -图 7 显示的是 [Ingress 是什么](/zh/docs/concepts/services-networking/ingress/#what-is-ingress) +图 7 显示的是 [Ingress 是什么](/zh-cn/docs/concepts/services-networking/ingress/#what-is-ingress) 页面所出现的图表。 {{< mermaid >}} @@ -860,7 +860,7 @@ K8s components to start a container. 图 8 给出的是一个 Mermaid 时序图,展示启动容器时 K8s 组件间的控制流。 -{{< figure src="/zh/docs/images/diagram-guide-example-3.svg" alt="K8s system flow diagram" class="diagram-large" caption="Figure 8. 
K8s system flow diagram" link="https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiJSV7aW5pdDp7XCJ0aGVtZVwiOlwibmV1dHJhbFwifX0lJVxuc2VxdWVuY2VEaWFncmFtXG4gICAgYWN0b3IgbWVcbiAgICBwYXJ0aWNpcGFudCBhcGlTcnYgYXMgY29udHJvbCBwbGFuZTxicj48YnI-YXBpLXNlcnZlclxuICAgIHBhcnRpY2lwYW50IGV0Y2QgYXMgY29udHJvbCBwbGFuZTxicj48YnI-ZXRjZCBkYXRhc3RvcmVcbiAgICBwYXJ0aWNpcGFudCBjbnRybE1nciBhcyBjb250cm9sIHBsYW5lPGJyPjxicj5jb250cm9sbGVyPGJyPm1hbmFnZXJcbiAgICBwYXJ0aWNpcGFudCBzY2hlZCBhcyBjb250cm9sIHBsYW5lPGJyPjxicj5zY2hlZHVsZXJcbiAgICBwYXJ0aWNpcGFudCBrdWJlbGV0IGFzIG5vZGU8YnI-PGJyPmt1YmVsZXRcbiAgICBwYXJ0aWNpcGFudCBjb250YWluZXIgYXMgbm9kZTxicj48YnI-Y29udGFpbmVyPGJyPnJ1bnRpbWVcbiAgICBtZS0-PmFwaVNydjogMS4ga3ViZWN0bCBjcmVhdGUgLWYgcG9kLnlhbWxcbiAgICBhcGlTcnYtLT4-ZXRjZDogMi4gc2F2ZSBuZXcgc3RhdGVcbiAgICBjbnRybE1nci0-PmFwaVNydjogMy4gY2hlY2sgZm9yIGNoYW5nZXNcbiAgICBzY2hlZC0-PmFwaVNydjogNC4gd2F0Y2ggZm9yIHVuYXNzaWduZWQgcG9kcyhzKVxuICAgIGFwaVNydi0-PnNjaGVkOiA1LiBub3RpZnkgYWJvdXQgcG9kIHcgbm9kZW5hbWU9XCIgXCJcbiAgICBzY2hlZC0-PmFwaVNydjogNi4gYXNzaWduIHBvZCB0byBub2RlXG4gICAgYXBpU3J2LS0-PmV0Y2Q6IDcuIHNhdmUgbmV3IHN0YXRlXG4gICAga3ViZWxldC0-PmFwaVNydjogOC4gbG9vayBmb3IgbmV3bHkgYXNzaWduZWQgcG9kKHMpXG4gICAgYXBpU3J2LT4-a3ViZWxldDogOS4gYmluZCBwb2QgdG8gbm9kZVxuICAgIGt1YmVsZXQtPj5jb250YWluZXI6IDEwLiBzdGFydCBjb250YWluZXJcbiAgICBrdWJlbGV0LT4-YXBpU3J2OiAxMS4gdXBkYXRlIHBvZCBzdGF0dXNcbiAgICBhcGlTcnYtLT4-ZXRjZDogMTIuIHNhdmUgbmV3IHN0YXRlIiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjp0cnVlfQ" >}} +{{< figure src="/zh-cn/docs/images/diagram-guide-example-3.svg" alt="K8s system flow diagram" class="diagram-large" caption="Figure 8. K8s system flow diagram" link="https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiJSV7aW5pdDp7XCJ0aGVtZVwiOlwibmV1dHJhbFwifX0lJVxuc2VxdWVuY2VEaWFncmFtXG4gICAgYWN0b3IgbWVcbiAgICBwYXJ0aWNpcGFudCBhcGlTcnYgYXMgY29udHJvbCBwbGFuZTxicj48YnI-YXBpLXNlcnZlclxuICAgIHBhcnRpY2lwYW50IGV0Y2QgYXMgY29udHJvbCBwbGFuZTxicj48YnI-ZXRjZCBkYXRhc3RvcmVcbiAgICBwYXJ0aWNpcGFudCBjbnRybE1nciBhcyBjb250cm9sIHBsYW5lPGJyPjxicj5jb250cm9sbGVyPGJyPm1hbmFnZXJcbiAgICBwYXJ0aWNpcGFudCBzY2hlZCBhcyBjb250cm9sIHBsYW5lPGJyPjxicj5zY2hlZHVsZXJcbiAgICBwYXJ0aWNpcGFudCBrdWJlbGV0IGFzIG5vZGU8YnI-PGJyPmt1YmVsZXRcbiAgICBwYXJ0aWNpcGFudCBjb250YWluZXIgYXMgbm9kZTxicj48YnI-Y29udGFpbmVyPGJyPnJ1bnRpbWVcbiAgICBtZS0-PmFwaVNydjogMS4ga3ViZWN0bCBjcmVhdGUgLWYgcG9kLnlhbWxcbiAgICBhcGlTcnYtLT4-ZXRjZDogMi4gc2F2ZSBuZXcgc3RhdGVcbiAgICBjbnRybE1nci0-PmFwaVNydjogMy4gY2hlY2sgZm9yIGNoYW5nZXNcbiAgICBzY2hlZC0-PmFwaVNydjogNC4gd2F0Y2ggZm9yIHVuYXNzaWduZWQgcG9kcyhzKVxuICAgIGFwaVNydi0-PnNjaGVkOiA1LiBub3RpZnkgYWJvdXQgcG9kIHcgbm9kZW5hbWU9XCIgXCJcbiAgICBzY2hlZC0-PmFwaVNydjogNi4gYXNzaWduIHBvZCB0byBub2RlXG4gICAgYXBpU3J2LS0-PmV0Y2Q6IDcuIHNhdmUgbmV3IHN0YXRlXG4gICAga3ViZWxldC0-PmFwaVNydjogOC4gbG9vayBmb3IgbmV3bHkgYXNzaWduZWQgcG9kKHMpXG4gICAgYXBpU3J2LT4-a3ViZWxldDogOS4gYmluZCBwb2QgdG8gbm9kZVxuICAgIGt1YmVsZXQtPj5jb250YWluZXI6IDEwLiBzdGFydCBjb250YWluZXJcbiAgICBrdWJlbGV0LT4-YXBpU3J2OiAxMS4gdXBkYXRlIHBvZCBzdGF0dXNcbiAgICBhcGlTcnYtLT4-ZXRjZDogMTIuIHNhdmUgbmV3IHN0YXRlIiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjp0cnVlfQ" >}} 运行 Kubernetes 需要第三方软件。例如:你通常需要将 -[DNS 服务器](/zh/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction) +[DNS 服务器](/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction) 添加到集群中,以便名称解析工作。 -当我们链接到第三方软件或以其他方式提及它时,我们会遵循[内容指南](/zh/docs/contribute/style/content-guide/) 
+当我们链接到第三方软件或以其他方式提及它时,我们会遵循[内容指南](/zh-cn/docs/contribute/style/content-guide/) 并标记这些第三方项目。 * 了解 [Hugo](https://gohugo.io/)。 -* 了解[撰写新的话题](/zh/docs/contribute/style/write-new-topic/)。 -* 了解[使用页面内容类型](/zh/docs/contribute/style/page-content-types/)。 -* 了解[发起 PR](/zh/docs/contribute/new-content/open-a-pr/)。 -* 了解[进阶贡献](/zh/docs/contribute/advanced/)。 +* 了解[撰写新的话题](/zh-cn/docs/contribute/style/write-new-topic/)。 +* 了解[使用页面内容类型](/zh-cn/docs/contribute/style/page-content-types/)。 +* 了解[发起 PR](/zh-cn/docs/contribute/new-content/open-a-pr/)。 +* 了解[进阶贡献](/zh-cn/docs/contribute/advanced/)。 diff --git a/content/zh/docs/contribute/style/hugo-shortcodes/podtemplate.json b/content/zh-cn/docs/contribute/style/hugo-shortcodes/podtemplate.json similarity index 100% rename from content/zh/docs/contribute/style/hugo-shortcodes/podtemplate.json rename to content/zh-cn/docs/contribute/style/hugo-shortcodes/podtemplate.json diff --git a/content/zh/docs/contribute/style/page-content-types.md b/content/zh-cn/docs/contribute/style/page-content-types.md similarity index 95% rename from content/zh/docs/contribute/style/page-content-types.md rename to content/zh-cn/docs/contribute/style/page-content-types.md index de72b6b4c4421..2c4f11b51d8e1 100644 --- a/content/zh/docs/contribute/style/page-content-types.md +++ b/content/zh-cn/docs/contribute/style/page-content-types.md @@ -198,7 +198,7 @@ published example of a concept page. - 在 `body` 节中,详细解释对应概念; - 对于 `whatsnext` 节,提供一个项目符号列表(最多 5 个),帮助读者进一步学习掌握概念 -[注解](/zh/docs/concepts/overview/working-with-objects/annotations/)页面是一个已经 +[注解](/zh-cn/docs/concepts/overview/working-with-objects/annotations/)页面是一个已经 上线的概念页面的例子。 -- 了解[样式指南](/zh/docs/contribute/style/style-guide/) -- 了解[内容指南](/zh/docs/contribute/style/content-guide/) -- 了解[内容组织](/zh/docs/contribute/style/content-organization/) +- 了解[样式指南](/zh-cn/docs/contribute/style/style-guide/) +- 了解[内容指南](/zh-cn/docs/contribute/style/content-guide/) +- 了解[内容组织](/zh-cn/docs/contribute/style/content-organization/) diff --git a/content/zh/docs/contribute/style/style-guide.md b/content/zh-cn/docs/contribute/style/style-guide.md similarity index 96% rename from content/zh/docs/contribute/style/style-guide.md rename to content/zh-cn/docs/contribute/style/style-guide.md index 7b51525d09dfe..83c9056f22bd3 100644 --- a/content/zh/docs/contribute/style/style-guide.md +++ b/content/zh-cn/docs/contribute/style/style-guide.md @@ -30,7 +30,7 @@ discussion. 你可以自行决定,且欢迎使用 PR 来为此文档提供修改意见。 关于为 Kubernetes 文档贡献新内容的更多信息,可以参考 -[文档内容指南](/zh/docs/contribute/style/content-guide/)。 +[文档内容指南](/zh-cn/docs/contribute/style/content-guide/)。 样式指南的变更是 SIG Docs 团队集体决定。 如要提议更改或新增条目,请先将其添加到下一次 SIG Docs 例会的 @@ -46,7 +46,7 @@ and representing feature state. --> {{< note >}} Kubernetes 文档使用带调整的 [Goldmark Markdown 解释器](https://github.com/yuin/goldmark/) -和一些 [Hugo 短代码](/zh/docs/contribute/style/hugo-shortcodes/) 来支持词汇表项、Tab +和一些 [Hugo 短代码](/zh-cn/docs/contribute/style/hugo-shortcodes/) 来支持词汇表项、Tab 页以及特性门控标注。 {{< /note >}} @@ -68,7 +68,7 @@ Kubernetes 文档已经被翻译为多个语种 (参见 [本地化 READMEs](https://github.com/kubernetes/website/blob/main/README.md#localization-readmemds))。 为文档提供一种新的语言翻译的途径可以在 -[本地化 Kubernetes 文档](/zh/docs/contribute/localization/)中找到。 +[本地化 Kubernetes 文档](/zh-cn/docs/contribute/localization/)中找到。 英语文档使用美国英语的拼写和语法。 @@ -93,8 +93,8 @@ The following examples focus on capitalization. 
For more information about forma ### 对 API 对象使用大写驼峰式命名法 {#use-upper-camel-case-for-api-objects} -当你与指定的 API 对象进行交互时,使用 [大写驼峰式命名法](https://en.wikipedia.org/wiki/Camel_case),也被称为帕斯卡拼写法(PascalCase). -你可能在 [API 参考](/docs/reference/kubernetes-api/) 中看到不同的大小写形式, +当你与指定的 API 对象进行交互时,使用[大写驼峰式命名法](https://en.wikipedia.org/wiki/Camel_case),也被称为帕斯卡拼写法(PascalCase)。 +你可能在 [API 参考](/zh-cn/docs/reference/kubernetes-api/)中看到不同的大小写形式, 例如 "configMap"。在一般性的文档中,最好使用大写驼峰形式,将之称作 "ConfigMap"。 在一般性地讨论 API 对象时,使用 @@ -103,8 +103,8 @@ The following examples focus on capitalization. For more information about forma 你可以使用“资源”、“API”或者“对象”这类词汇来进一步在句子中明确所指的是 一个 Kubernetes 资源类型。 -不要将 API 对象的名称切分成多个单词。例如,使用 PodTemplateList,不要 -使用 Pod Template List。 +不要将 API 对象的名称切分成多个单词。例如,使用 PodTemplateList, +不要使用 Pod Template List。 下面的例子关注的是大小写问题。关于如何格式化 API 对象的名称, 有关详细细节可参考相关的[代码风格](#code-style-inline-code)指南。 @@ -459,8 +459,8 @@ To specify the Kubernetes version for a task or tutorial page, include `min-kube 代码示例或者配置示例如果包含版本信息,应该与对应的文字描述一致。 如果所给的信息是特定于具体版本的,需要在 -[任务模版](/zh/docs/contribute/style/page-content-types/#task) -或[教程模版](/zh/docs/contribute/style/page-content-types/#tutorial) +[任务模版](/zh-cn/docs/contribute/style/page-content-types/#task) +或[教程模版](/zh-cn/docs/contribute/style/page-content-types/#tutorial) 的 `prerequisites` 小节定义 Kubernetes 版本。 页面保存之后,`prerequisites` 小节会显示为 **开始之前**。 @@ -952,7 +952,7 @@ Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes]( 可以 | 不可以 :--| :----- 插入超级链接时给出它们所链接到的目标内容的上下文。例如:你的机器上某些端口处于开放状态。参见检查所需端口了解更详细信息。| 使用有二义性的术语,如“点击这里”。例如:你的机器上某些端口处于打开状态。参见这里了解详细信息。 -编写 Markdown 风格的链接:`[链接文本](URL)`。例如:`[Hugo 短代码](/zh/docs/contribute/style/hugo-shortcodes/#table-captions)`,输出是[Hugo 短代码](/zh/docs/contribute/style/hugo-shortcodes/#table-captions). | 编写 HTML 风格的超级链接:`访问我们的教程!`,或者创建会打开新 Tab 页或新窗口的链接。例如:`[网站示例](https://example.com){target="_blank"}`。 +编写 Markdown 风格的链接:`[链接文本](URL)`。例如:`[Hugo 短代码](/zh-cn/docs/contribute/style/hugo-shortcodes/#table-captions)`,输出是[Hugo 短代码](/zh-cn/docs/contribute/style/hugo-shortcodes/#table-captions). | 编写 HTML 风格的超级链接:`访问我们的教程!`,或者创建会打开新 Tab 页或新窗口的链接。例如:`[网站示例](https://example.com){target="_blank"}`。 {{< /table >}} -如[发起 PR](/zh/docs/contribute/new-content/open-a-pr/)中所述,创建 Kubernetes 文档库的派生副本。 +如[发起 PR](/zh-cn/docs/contribute/new-content/open-a-pr/)中所述,创建 Kubernetes 文档库的派生副本。 @@ -46,8 +46,8 @@ Tutorial | A tutorial page shows how to accomplish a goal that ties together sev {{< table caption = "选择页面类型的说明" >}} 类型 | 描述 :--- | :---------- -概念(Concept) | 概念页面负责解释 Kubernetes 的某方面。例如,概念页面可以描述 Kubernetes Deployment 对象,并解释当部署、扩展和更新时,它作为应用程序所扮演的角色。一般来说,概念页面不包括步骤序列,而是提供任务或教程的链接。概念主题的示例可参见 节点。 -任务(Task) | 任务页面展示如何完成特定任务。其目的是给读者提供一系列的步骤,让他们在阅读时可以实际执行。任务页面可长可短,前提是它始终围绕着某个主题展开。在任务页面中,可以将简短的解释与要执行的步骤混合在一起。如果需要提供较长的解释,则应在概念主题中进行。相关联的任务和概念主题应该相互链接。一个简短的任务页面的实例可参见 配置 Pod 使用卷存储。一个较长的任务页面的实例可参见 配置活跃性和就绪性探针。 +概念(Concept) | 概念页面负责解释 Kubernetes 的某方面。例如,概念页面可以描述 Kubernetes Deployment 对象,并解释当部署、扩展和更新时,它作为应用程序所扮演的角色。一般来说,概念页面不包括步骤序列,而是提供任务或教程的链接。概念主题的示例可参见 节点。 +任务(Task) | 任务页面展示如何完成特定任务。其目的是给读者提供一系列的步骤,让他们在阅读时可以实际执行。任务页面可长可短,前提是它始终围绕着某个主题展开。在任务页面中,可以将简短的解释与要执行的步骤混合在一起。如果需要提供较长的解释,则应在概念主题中进行。相关联的任务和概念主题应该相互链接。一个简短的任务页面的实例可参见 配置 Pod 使用卷存储。一个较长的任务页面的实例可参见 配置活跃性和就绪性探针。 教程(Tutorial) | 教程页面展示如何实现某个目标,该目标将若干 Kubernetes 功能特性联系在一起。教程可能提供一些步骤序列,读者可以在阅读页面时实际执行这些步骤。或者它可以提供相关代码片段的解释。例如,教程可以提供代码示例的讲解。教程可以包括对 Kubernetes 几个关联特性的简要解释,但有关更深入的特性解释应该链接到相关概念主题。 {{< /table >}} @@ -56,7 +56,7 @@ Use a [content type](/docs/contribute/style/page-content-types/) for each new pa that you write. 
Using page type helps ensure consistency among topics of a given type. --> -为每个新页面选择其[内容类型](/zh/docs/contribute/style/page-content-types/)。 +为每个新页面选择其[内容类型](/zh-cn/docs/contribute/style/page-content-types/)。 使用页面类型有助于确保给定类型的各主题之间保持一致。 有关使用此技术的主题的示例,请参见 -[运行单实例有状态的应用](/zh/docs/tasks/run-application/run-single-instance-stateful-application/)。 +[运行单实例有状态的应用](/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/)。 -* 了解[使用页面内容类型](/zh/docs/contribute/style/page-content-types/). -* 了解[创建 PR](/zh/docs/contribute/new-content/open-a-pr/). +* 了解[使用页面内容类型](/zh-cn/docs/contribute/style/page-content-types/). +* 了解[创建 PR](/zh-cn/docs/contribute/new-content/open-a-pr/). diff --git a/content/zh/docs/contribute/suggesting-improvements.md b/content/zh-cn/docs/contribute/suggesting-improvements.md similarity index 100% rename from content/zh/docs/contribute/suggesting-improvements.md rename to content/zh-cn/docs/contribute/suggesting-improvements.md diff --git a/content/zh/docs/doc-contributor-tools/linkchecker/README.md b/content/zh-cn/docs/doc-contributor-tools/linkchecker/README.md similarity index 87% rename from content/zh/docs/doc-contributor-tools/linkchecker/README.md rename to content/zh-cn/docs/doc-contributor-tools/linkchecker/README.md index 86d1c81683750..6f31aed3ab7ad 100644 --- a/content/zh/docs/doc-contributor-tools/linkchecker/README.md +++ b/content/zh-cn/docs/doc-contributor-tools/linkchecker/README.md @@ -1,24 +1,26 @@ - # 内置链接检查工具 - -你可以使用 [htmltest](https://github.com/wjdp/htmltest) 来检查 [`/content/en/`](https://git.k8s.io/website/content/en/) 下面的失效链接。这在重构章节内容、移动页面或者重命名文件或页眉时非常有用。 +你可以使用 [htmltest](https://github.com/wjdp/htmltest) 来检查 +[`/content/en/`](https://git.k8s.io/website/content/en/) 下面的失效链接。 +这在重构章节内容、移动页面或者重命名文件或页眉时非常有用。 - ## 工作原理 - `htmltest` 会扫描 kubernetes website 仓库构建生成的 HTML 文件。通过执行 `make` 命令进行了下列操作: - ## 哪些链接不会检查 - -该链接检查器扫描生成的 HTML 文件,而非原始的 Markdown. 该 htmltest 工具依赖于一个配置文件,[`.htmltest.yml`](https://git.k8s.io/website/.htmltest.yml),来决定检查哪些内容。 +该链接检查器扫描生成的 HTML 文件,而非原始的 Markdown. 该 htmltest 工具依赖于配置文件 +[`.htmltest.yml`](https://git.k8s.io/website/.htmltest.yml),来决定检查哪些内容。 该链接检查器扫描以下内容: - 该链接检查器不会扫描以下内容: - - 包含在顶部和侧边导航栏的链接,以及页脚链接或者页面的 `` 部分中的链接,例如 CSS 样式表、脚本以及元信息的链接。 -- 顶级页面及其子页面,例如: `/training`, `/community`, `/case-studies/adidas` +- 顶级页面及其子页面,例如:`/training`、`/community`、`/case-studies/adidas` - 博客文章 -- API 参考文档,例如:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/ +- API 参考文档,例如: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/ - 本地化内容 - ## 先决条件以及安装说明 - 必须安装: * [Docker](https://docs.docker.com/get-docker/) * [make](https://www.gnu.org/software/make/) - - ## 运行链接检查器 - 运行链接检查器需要: - ## 理解输出的内容 - 如果链接检查器发现了失效链接,则输出内容类似如下: @@ -125,7 +128,7 @@ tasks/access-kubernetes-api/custom-resources/index.html hash does not exist --- tasks/access-kubernetes-api/custom-resources/index.html --> #preserving-unknown-fields ``` - 这是一系列失效链接。该日志附带了每个页面下的失效链接。 -在这部分输出中,包含失效链接的文件是 `tasks/access-kubernetes-api/custom-resources.md`. +在这部分输出中,包含失效链接的文件是 `tasks/access-kubernetes-api/custom-resources.md`。 -该工具给出了一个理由:`hash does not exist`. 在大部分情况下,你可以忽略这个。 +该工具给出了一个理由:`hash does not exist`,在大部分情况下,你可以忽略这个。 -目标链接是 `#preserving-unknown-fields`. 
+目标链接是 `#preserving-unknown-fields`。 修复这个问题的一种方式是: - 运行 htmltest 来验证失效链接是否已修复。 \ No newline at end of file diff --git a/content/zh/docs/home/_index.md b/content/zh-cn/docs/home/_index.md similarity index 93% rename from content/zh/docs/home/_index.md rename to content/zh-cn/docs/home/_index.md index ee0aaf68d7c53..738ee19bb4e3f 100644 --- a/content/zh/docs/home/_index.md +++ b/content/zh-cn/docs/home/_index.md @@ -62,7 +62,7 @@ overview: # title: K8s Release Notes # description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes. # button: "Download Kubernetes" -# button_path: "/zh/docs/setup/release/notes" +# button_path: "/zh-cn/docs/setup/release/notes" # - name: about # title: About the documentation # description: This website contains documentation for the current and previous 4 versions of Kubernetes. @@ -71,37 +71,37 @@ cards: title: "了解 Kubernetes" description: "了解 Kubernetes 和其基础概念。" button: "查看概念" - button_path: "/zh/docs/concepts" + button_path: "/zh-cn/docs/concepts" - name: tutorials title: "尝试 Kubernetes" description: "按照教程学习如何在 Kubernetes 上部署应用。" button: "查看教程" - button_path: "/zh/docs/tutorials" + button_path: "/zh-cn/docs/tutorials" - name: setup title: "设置 K8s 集群" description: "按照你的资源情况和需求运行 Kubernetes。" button: "设置 Kubernetes" - button_path: "/zh/docs/setup" + button_path: "/zh-cn/docs/setup" - name: tasks title: "了解如何使用 Kubernetes" description: "查看常见任务以及如何使用简单步骤执行它们。" button: "查看任务" - button_path: "/zh/docs/tasks" + button_path: "/zh-cn/docs/tasks" - name: training title: "培训" description: "通过 Kubernetes 认证,助你的云原生项目成功!" button: "查看培训" - button_path: "/zh/training" + button_path: "/zh-cn/training" - name: reference title: 查阅参考信息 description: 浏览术语、命令行语法、API 资源类型和安装工具文档。 button: 查看参考 - button_path: /zh/docs/reference + button_path: /zh-cn/docs/reference - name: contribute title: 为文档作贡献 description: 任何人,无论对该项目熟悉与否,都能贡献自己的力量。 button: 为文档作贡献 - button_path: /zh/docs/contribute + button_path: /zh-cn/docs/contribute - name: release-notes title: K8s 发布说明 description: 如果你正在安装或升级 Kubernetes,最好参考最新的发布说明。 diff --git a/content/zh-cn/docs/home/supported-doc-versions.md b/content/zh-cn/docs/home/supported-doc-versions.md new file mode 100644 index 0000000000000..6fc4dcf33987c --- /dev/null +++ b/content/zh-cn/docs/home/supported-doc-versions.md @@ -0,0 +1,34 @@ +--- +title: Kubernetes 文档支持的版本 +content_type: custom +layout: supported-versions +card: + name: about + weight: 10 + title: Kubernetes 文档支持的版本 +--- + + + + + +本网站包含当前版本和之前四个版本的 Kubernetes 文档。 + +Kubernetes 版本的文档可用性与当前是否支持该版本是分开的。 +阅读[支持期限](/zh-cn/releases/patch-releases/#support-period),了解官方支持 Kubernetes 的哪些版本,以及支持多长时间。 diff --git a/content/zh/docs/images/diagram-guide-example-3.svg b/content/zh-cn/docs/images/diagram-guide-example-3.svg similarity index 100% rename from content/zh/docs/images/diagram-guide-example-3.svg rename to content/zh-cn/docs/images/diagram-guide-example-3.svg diff --git a/content/zh/docs/images/ha-control-plane.svg b/content/zh-cn/docs/images/ha-control-plane.svg similarity index 100% rename from content/zh/docs/images/ha-control-plane.svg rename to content/zh-cn/docs/images/ha-control-plane.svg diff --git a/content/zh-cn/docs/images/ingress.svg b/content/zh-cn/docs/images/ingress.svg new file mode 100644 index 0000000000000..450a0aae9b4fa --- /dev/null +++ b/content/zh-cn/docs/images/ingress.svg @@ -0,0 +1 @@ +
[SVG diagram text labels: cluster, Ingress-managed load balancer, routing rule, Ingress, Service, Pod, client]
        \ No newline at end of file diff --git a/content/zh-cn/docs/images/ingressFanOut.svg b/content/zh-cn/docs/images/ingressFanOut.svg new file mode 100644 index 0000000000000..a6bf202635164 --- /dev/null +++ b/content/zh-cn/docs/images/ingressFanOut.svg @@ -0,0 +1 @@ +
[SVG diagram text labels: cluster, Ingress-managed load balancer, /foo, /bar, Ingress 178.91.123.132, Service service1:4200, Service service2:8080, Pod, client]
        \ No newline at end of file diff --git a/content/zh-cn/docs/images/ingressNameBased.svg b/content/zh-cn/docs/images/ingressNameBased.svg new file mode 100644 index 0000000000000..7e1d7be98c60f --- /dev/null +++ b/content/zh-cn/docs/images/ingressNameBased.svg @@ -0,0 +1 @@ +
[SVG diagram text labels: cluster, Ingress-managed load balancer, Host: foo.bar.com, Host: bar.foo.com, Ingress 178.91.123.132, Service service1:80, Service service2:80, Pod, client]
        \ No newline at end of file diff --git a/content/zh-cn/docs/images/tutor-service-nodePort-fig01.svg b/content/zh-cn/docs/images/tutor-service-nodePort-fig01.svg new file mode 100644 index 0000000000000..bb4d866f853f3 --- /dev/null +++ b/content/zh-cn/docs/images/tutor-service-nodePort-fig01.svg @@ -0,0 +1 @@ +
[SVG diagram text labels: SNAT, client, Node 1, Node 2, Endpoint]
        \ No newline at end of file diff --git a/content/zh-cn/docs/images/tutor-service-nodePort-fig02.svg b/content/zh-cn/docs/images/tutor-service-nodePort-fig02.svg new file mode 100644 index 0000000000000..1a891575e5f58 --- /dev/null +++ b/content/zh-cn/docs/images/tutor-service-nodePort-fig02.svg @@ -0,0 +1 @@ +
[SVG diagram text labels: client, Node 1, Node 2, endpoint]
        \ No newline at end of file diff --git a/content/zh/docs/reference/_index.md b/content/zh-cn/docs/reference/_index.md similarity index 93% rename from content/zh/docs/reference/_index.md rename to content/zh-cn/docs/reference/_index.md index 108277d9eaa37..af2a04118b625 100644 --- a/content/zh/docs/reference/_index.md +++ b/content/zh-cn/docs/reference/_index.md @@ -64,7 +64,7 @@ client libraries: --> ## 官方支持的客户端库 -如果您需要通过编程语言调用 Kubernetes API,您可以使用 +如果你需要通过编程语言调用 Kubernetes API,你可以使用 [客户端库](/zh/docs/reference/using-api/client-libraries/)。以下是官方支持的客户端库: - [Kubernetes Go 语言客户端库](https://github.com/kubernetes/client-go/) @@ -137,9 +137,11 @@ operator to use or manage a cluster. * [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/) * [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/) * [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/) +* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1/) * [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) and [kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/) * [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/) +* [kubelet credential providers (v1beta1)](/docs/reference/config-api/kubelet-credentialprovider.v1beta1/) * [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/) and [kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) * [kube-proxy configuration (v1alpha1)](/docs/reference/config-api/kube-proxy-config.v1alpha1/) @@ -147,6 +149,7 @@ operator to use or manage a cluster. * [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) and [Client authentication API (v1)](/docs/reference/config-api/client-authentication.v1/) * [WebhookAdmission configuration (v1)](/docs/reference/config-api/apiserver-webhookadmission.v1/) +* [ImagePolicy API (v1alpha1)](/docs/reference/config-api/imagepolicy.v1alpha1/) --> ## 配置 API @@ -157,9 +160,11 @@ operator to use or manage a cluster. * [kube-apiserver 配置 (v1alpha1)](/zh/docs/reference/config-api/apiserver-config.v1alpha1/) * [kube-apiserver 配置 (v1)](/zh/docs/reference/config-api/apiserver-config.v1/) * [kube-apiserver 加密 (v1)](/zh/docs/reference/config-api/apiserver-encryption.v1/) +* [kube-apiserver 事件速率限制 (v1alpha1)](/zh/docs/reference/config-api/apiserver-eventratelimit.v1/) * [kubelet 配置 (v1alpha1)](/zh/docs/reference/config-api/kubelet-config.v1alpha1/) 和 [kubelet 配置 (v1beta1)](/zh/docs/reference/config-api/kubelet-config.v1beta1/) * [kubelet 凭据驱动 (v1alpha1)](/zh/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/) +* [kubelet 凭据驱动 (v1beta1)](/zh/docs/reference/config-api/kubelet-credentialprovider.v1beta1/) * [kube-scheduler 配置 (v1beta2)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta2/) 和 [kube-scheduler 配置 (v1beta3)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta3/) * [kube-proxy 配置 (v1alpha1)](/zh/docs/reference/config-api/kube-proxy-config.v1alpha1/) @@ -167,6 +172,7 @@ operator to use or manage a cluster. 
* [客户端认证 API (v1beta1)](/zh/docs/reference/config-api/client-authentication.v1beta1/) 和 [客户端认证 API (v1)](/zh/docs/reference/config-api/client-authentication.v1/) * [WebhookAdmission 配置 (v1)](/zh/docs/reference/config-api/apiserver-webhookadmission.v1/) +* [ImagePolicy API (v1alpha1)](/zh/docs/reference/config-api/imagepolicy.v1alpha1/) - [身份认证](/zh/docs/reference/access-authn-authz/authentication/) - [使用启动引导令牌来执行身份认证](/zh/docs/reference/access-authn-authz/bootstrap-tokens/) @@ -53,4 +55,5 @@ Reference documentation: - 服务账号 - [开发者指南](/zh/docs/tasks/configure-pod-container/configure-service-account/) - [管理文档](/zh/docs/reference/access-authn-authz/service-accounts-admin/) - +- [Kubelet 认证和鉴权](/zh/docs/reference/access-authn-authz/kubelet-authn-authz/) + - 包括 kubelet [TLS 启动引导](/zh/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) diff --git a/content/zh/docs/reference/access-authn-authz/abac.md b/content/zh-cn/docs/reference/access-authn-authz/abac.md similarity index 90% rename from content/zh/docs/reference/access-authn-authz/abac.md rename to content/zh-cn/docs/reference/access-authn-authz/abac.md index 1c8a308556a07..6d1c6b6a172b8 100644 --- a/content/zh/docs/reference/access-authn-authz/abac.md +++ b/content/zh-cn/docs/reference/access-authn-authz/abac.md @@ -1,15 +1,9 @@ --- -approvers: -- erictune -- lavalamp -- deads2k -- liggitt title: 使用 ABAC 鉴权 content_type: concept +weight: 80 --- - @@ -26,7 +19,8 @@ weight: 80 -基于属性的访问控制(Attribute-based access control - ABAC)定义了访问控制范例,其中通过使用将属性组合在一起的策略来向用户授予访问权限。 +基于属性的访问控制(Attribute-based access control - ABAC)定义了访问控制范例, +其中通过使用将属性组合在一起的策略来向用户授予访问权限。 @@ -68,8 +62,7 @@ properties: - `/foo/*` matches all subpaths of `/foo/`. - `readonly`, type boolean, when true, means that the Resource-matching policy only applies to get, list, and watch operations, Non-resource-matching policy only applies to get operation. --> - -## 策略文件格式 +## 策略文件格式 {#policy-file-format} 基于 `ABAC` 模式,可以这样指定策略文件 `--authorization-policy-file=SOME_FILENAME`。 @@ -78,15 +71,16 @@ properties: 每一行都是一个策略对象,策略对象是具有以下属性的映射: - 版本控制属性: - - `apiVersion`,字符串类型:有效值为`abac.authorization.kubernetes.io/v1beta1`,允许对策略格式进行版本控制和转换。 + - `apiVersion`,字符串类型:有效值为 `abac.authorization.kubernetes.io/v1beta1`,允许对策略格式进行版本控制和转换。 - `kind`,字符串类型:有效值为 `Policy`,允许对策略格式进行版本控制和转换。 - `spec` 配置为具有以下映射的属性: - 主体匹配属性: - `user`,字符串类型;来自 `--token-auth-file` 的用户字符串,如果你指定 `user`,它必须与验证用户的用户名匹配。 - - `group`,字符串类型;如果指定 `group`,它必须与经过身份验证的用户的一个组匹配,`system:authenticated`匹配所有经过身份验证的请求。`system:unauthenticated`匹配所有未经过身份验证的请求。 + - `group`,字符串类型;如果指定 `group`,它必须与经过身份验证的用户的一个组匹配,`system:authenticated` 匹配所有经过身份验证的请求。 + `system:unauthenticated` 匹配所有未经过身份验证的请求。 - 资源匹配属性: - `apiGroup`,字符串类型;一个 API 组。 - - 例: `apps`, `networking.k8s.io` + - 例:`apps`, `networking.k8s.io` - 通配符:`*`匹配所有 API 组。 - `namespace`,字符串类型;一个命名空间。 - 例如:`kube-system` @@ -96,7 +90,7 @@ properties: - 通配符:`*`匹配所有资源请求。 - 非资源匹配属性: - `nonResourcePath`,字符串类型;非资源请求路径。 - - 例如:`/version`或 `/apis` + - 例如:`/version` 或 `/apis` - 通配符: - `*` 匹配所有非资源请求。 - `/foo/*` 匹配 `/foo/` 的所有子路径。 @@ -142,7 +136,7 @@ To permit a user to do anything, write a policy with the apiGroup, namespace, resource, and nonResourcePath properties set to `"*"`. --> -## 鉴权算法 +## 鉴权算法 {#authorization-algorithm} 请求具有与策略对象的属性对应的属性。 @@ -154,10 +148,7 @@ resource, and nonResourcePath properties set to `"*"`. 
要允许任何经过身份验证的用户执行某些操作,请将策略组属性设置为 `"system:authenticated"`。 -要允许任何未经身份验证的用户执行某些操作,请将策略组属性设置为 `"system:authentication"`。 - -要允许用户执行任何操作,请使用 apiGroup,命名空间, -资源和 nonResourcePath 属性设置为 `"*"` 的策略。 +要允许任何未经身份验证的用户执行某些操作,请将策略组属性设置为 `"system:unauthenticated"`。 要允许用户执行任何操作,请使用设置为 `"*"` 的 apiGroup,namespace,resource 和 nonResourcePath 属性编写策略。 @@ -181,17 +172,18 @@ up the verbosity: kubectl --v=8 version --> -## Kubectl +## kubectl -Kubectl 使用 api-server 的 `/api` 和 `/apis` 端点来发现服务资源类型,并使用位于 `/openapi/v2` 的模式信息来验证通过创建/更新操作发送到 API 的对象。 +kubectl 使用 api-server 的 `/api` 和 `/apis` 端点来发现服务资源类型, +并使用位于 `/openapi/v2` 的模式信息来验证通过创建/更新操作发送到 API 的对象。 当使用 ABAC 鉴权时,这些特殊资源必须显式地通过策略中的 `nonResourcePath` 属性暴露出来(参见下面的 [示例](#examples)): -* `/api`,`/api/*`,`/apis`和 `/apis/*` 用于 API 版本协商。 +* `/api`,`/api/*`,`/apis` 和 `/apis/*` 用于 API 版本协商。 * `/version` 通过 `kubectl version` 检索服务器版本。 * `/swaggerapi/*` 用于创建 / 更新操作。 -要检查涉及到特定 kubectl 操作的 HTTP 调用,您可以调整详细程度: +要检查涉及到特定 kubectl 操作的 HTTP 调用,你可以调整详细程度: kubectl --v=8 version - ## 例子 {#examples} 1. Alice 可以对所有资源做任何事情: @@ -221,12 +212,12 @@ Kubectl 使用 api-server 的 `/api` 和 `/apis` 端点来发现服务资源类 ```json {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}} ``` -2. Kubelet 可以读取任何 pod: +2. kubelet 可以读取任何 pod: ```json {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}} ``` -3. Kubelet 可以读写事件: +3. kubelet 可以读写事件: ```json {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}} @@ -245,8 +236,8 @@ Kubectl 使用 api-server 的 `/api` 和 `/apis` 端点来发现服务资源类 {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "readonly": true, "nonResourcePath": "*"}} ``` --> - 4. Bob 可以在命名空间 `projectCaribou` 中读取 pod: + ```json {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}} ``` @@ -269,10 +260,9 @@ system:serviceaccount:: ``` --> - [完整文件示例](https://releases.k8s.io/{{< param "fullversion" >}}/pkg/auth/authorizer/abac/example_policy_file.jsonl) -## 服务帐户的快速说明 +## 服务帐户的快速说明 {#a-quick-note-on-service-accounts} 服务帐户自动生成用户。用户名是根据命名约定生成的: diff --git a/content/zh/docs/reference/access-authn-authz/admission-controllers.md b/content/zh-cn/docs/reference/access-authn-authz/admission-controllers.md similarity index 58% rename from content/zh/docs/reference/access-authn-authz/admission-controllers.md rename to content/zh-cn/docs/reference/access-authn-authz/admission-controllers.md index b5f0976fcdb12..a64cd6d2524e2 100644 --- a/content/zh/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/zh-cn/docs/reference/access-authn-authz/admission-controllers.md @@ -21,14 +21,14 @@ weight: 30 -此页面概述了准入控制器。 +此页面提供准入控制器(Admission Controllers)的概述。 -## 什么是准入控制插件? +## 什么是准入控制插件? 
{#what-are-they} 准入控制器是一段代码,它会在请求通过认证和授权之后、对象被持久化之前拦截到达 API 服务器的请求。控制器由下面的[列表](#what-does-each-admission-controller-do)组成, -并编译进 `kube-apiserver` 二进制文件,并且只能由集群管理员配置。 +并编译进 `kube-apiserver` 可执行文件,并且只能由集群管理员配置。 在该列表中,有两个特殊的控制器:MutatingAdmissionWebhook 和 ValidatingAdmissionWebhook。 它们根据 API 中的配置,分别执行变更和验证 [准入控制 webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)。 @@ -64,14 +64,14 @@ If any of the controllers in either phase reject the request, the entire request is rejected immediately and an error is returned to the end-user. --> 准入控制器可以执行 “验证(Validating)” 和/或 “变更(Mutating)” 操作。 -变更(mutating)控制器可以根据被其接受的请求修改相关对象;验证(validating)控制器则不行。 +变更(mutating)控制器可以根据被其接受的请求更改相关对象;验证(validating)控制器则不行。 准入控制器限制创建、删除、修改对象或连接到代理的请求,不限制读取对象的请求。 准入控制过程分为两个阶段。第一阶段,运行变更准入控制器。第二阶段,运行验证准入控制器。 再次提醒,某些控制器既是变更准入控制器又是验证准入控制器。 -如果任何一个阶段的任何控制器拒绝了该请求,则整个请求将立即被拒绝,并向终端用户返回一个错误。 +如果两个阶段之一的任何一个控制器拒绝了某请求,则整个请求将立即被拒绝,并向最终用户返回错误。 -最后,除了对对象进行变更外,准入控制器还可以有其它作用:将相关资源作为请求处理的一部分进行变更。 -增加使用配额就是一个典型的示例,说明了这样做的必要性。 +最后,除了对对象进行变更外,准入控制器还可能有其它副作用:将相关资源作为请求处理的一部分进行变更。 +增加配额用量就是一个典型的示例,说明了这样做的必要性。 此类用法都需要相应的回收或回调过程,因为任一准入控制器都无法确定某个请求能否通过所有其它准入控制器。 -## 为什么需要准入控制器? +## 为什么需要准入控制器? {#why-do-i-need-them} Kubernetes 的许多高级功能都要求启用一个准入控制器,以便正确地支持该特性。 -因此,没有正确配置准入控制器的 Kubernetes API 服务器是不完整的,它无法支持你期望的所有特性。 +因此,没有正确配置准入控制器的 Kubernetes API 服务器是不完整的,它无法支持你所期望的所有特性。 -## 如何启用一个准入控制器? +## 如何启用一个准入控制器? {#how-do-i-turn-on-an-admission-controller} -Kubernetes API 服务器的 `enable-admission-plugins` 标志接受一个用于在集群修改对象之前 -调用的(以逗号分隔的)准入控制插件顺序列表。 +Kubernetes API 服务器的 `enable-admission-plugins` 标志接受一个(以逗号分隔的)准入控制插件列表, +这些插件会在集群修改对象之前被调用。 -例如,下面的命令就启用了 `NamespaceLifecycle` 和 `LimitRanger` 准入控制插件: +例如,下面的命令启用 `NamespaceLifecycle` 和 `LimitRanger` 准入控制插件: ```shell kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ... @@ -128,7 +128,7 @@ have to modify the systemd unit file if the API server is deployed as a systemd service, you may modify the manifest file for the API server if Kubernetes is deployed in a self-hosted way. --> -根据你 Kubernetes 集群的部署方式以及 API 服务器的启动方式的不同,你可能需要以不同的方式应用设置。 +根据你 Kubernetes 集群的部署方式以及 API 服务器的启动方式,你可能需要以不同的方式应用设置。 例如,如果将 API 服务器部署为 systemd 服务,你可能需要修改 systemd 单元文件; 如果以自托管方式部署 Kubernetes,你可能需要修改 API 服务器的清单文件。 {{< /note >}} @@ -138,7 +138,7 @@ in a self-hosted way. The Kubernetes API server flag `disable-admission-plugins` takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default. --> -## 怎么关闭准入控制器? +## 怎么关闭准入控制器? {#how-do-i-turn-off-an-admission-controller} Kubernetes API 服务器的 `disable-admission-plugins` 标志,会将传入的(以逗号分隔的) 准入控制插件列表禁用,即使是默认启用的插件也会被禁用。 @@ -152,9 +152,9 @@ kube-apiserver --disable-admission-plugins=PodNodeSelector,AlwaysDeny ... To see which admission plugins are enabled: --> -## 哪些插件是默认启用的? +## 哪些插件是默认启用的? 
{#which-plugins-are-enabled-by-default} -下面的命令可以查看哪些插件是默认启用的: +要查看哪些插件是被启用的: ```shell kube-apiserver -h | grep enable-admission-plugins @@ -164,26 +164,25 @@ kube-apiserver -h | grep enable-admission-plugins In the current version, the default ones are: --> -在目前版本中,它们是: +在目前版本中,默认启用的插件有: -```shell -CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook +``` +CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook ``` -## 每个准入控制器的作用是什么? +## 每个准入控制器的作用是什么? {#what-does-each-admission-controller-do} -### AlwaysAdmit {#alwaysadmit} +### AlwaysAdmit {#alwaysadmit} {{< feature-state for_k8s_version="v1.13" state="deprecated" >}} -该准入控制器会允许所有的 pod 接入集群。已废弃,因为它的行为根本就和没有准入控制器一样。 + +该准入控制器允许所有的 Pod 进入集群。此插件已被弃用,因其行为与没有准入控制器一样。 ### AlwaysDeny {#alwaysdeny} @@ -192,7 +191,7 @@ This admission controller allows all pods into the cluster. It is deprecated bec -拒绝所有的请求。由于它没有实际意义,已废弃。 +拒绝所有的请求。由于它没有实际意义,已被弃用。 ### AlwaysPullImages {#alwayspullimages} @@ -205,58 +204,58 @@ scheduled onto the right node), without any authorization check against the imag is enabled, images are always pulled prior to starting containers, which means valid credentials are required. 
--> -该准入控制器会修改每一个新创建的 Pod 的镜像拉取策略为 Always 。 +该准入控制器会修改每个新创建的 Pod,将其镜像拉取策略设置为 Always。 这在多租户集群中是有用的,这样用户就可以放心,他们的私有镜像只能被那些有凭证的人使用。 -如果没有这个准入控制器,一旦镜像被拉取到节点上,任何用户的 Pod 都可以通过已了解到的镜像 -的名称(假设 Pod 被调度到正确的节点上)来使用它,而不需要对镜像进行任何授权检查。 -当启用这个准入控制器时,总是在启动容器之前拉取镜像,这意味着需要有效的凭证。 +如果没有这个准入控制器,一旦镜像被拉取到节点上,任何用户的 Pod 都可以通过已了解到的镜像的名称 +(假设 Pod 被调度到正确的节点上)来使用它,而不需要对镜像进行任何鉴权检查。 +启用这个准入控制器之后,启动容器之前必须拉取镜像,这意味着需要有效的凭证。 -### CertificateApproval +### CertificateApproval {#certificateapproval} - -此准入控制器获取“审批” CertificateSigningRequest 资源的请求并执行额外的授权检查, -以确保审批请求的用户有权限审批 `spec.signerName` 请求 CertificateSigningRequest 资源的证书请求。 +此准入控制器获取“审批” CertificateSigningRequest 资源的请求并执行额外的鉴权检查, +以确保针对设置了 `spec.signerName` 的 CertificateSigningRequest 资源而言, +审批请求的用户有权限对证书请求执行 `approve` 操作。 +有关对 CertificateSigningRequest 资源执行不同操作所需权限的详细信息, +请参阅[证书签名请求](/zh/docs/reference/access-authn-authz/certificate-signing-requests/)。 -有关对证书签名请求资源执行不同操作所需权限的详细信息, -请参阅[证书签名请求](/zh/docs/reference/access-authn-authz/certificate-signing-requests/) - -### CertificateSigning +### CertificateSigning {#certificatesigning} -此准入控制器获取 CertificateSigningRequest 资源的 `status.certificate` 字段更新请求并执行额外的授权检查, -以确保签发证书的用户有权限为 `spec.signerName` 请求 CertificateSigningRequest 资源的证书请求`签发`证书。 +此准入控制器监视对 CertificateSigningRequest 资源的 `status.certificate` 字段的更新请求, +并执行额外的鉴权检查,以确保针对设置了 `spec.signerName` 的 CertificateSigningRequest 资源而言, +签发证书的用户有权限对证书请求执行 `sign` 操作。 -有关对证书签名请求资源执行不同操作所需权限的详细信息, -请参阅[证书签名请求](/zh/docs/reference/access-authn-authz/certificate-signing-requests/) +有关对 CertificateSigningRequest 资源执行不同操作所需权限的详细信息, +请参阅[证书签名请求](/zh/docs/reference/access-authn-authz/certificate-signing-requests/)。 -### CertificateSubjectRestrictions +### CertificateSubjectRestriction {#certificatesubjectrestriction} -此准入控制器获取具有 `kubernetes.io/kube-apiserver-client` 的 `spec.signerName` 的 -CertificateSigningRequest 资源创建请求, -它拒绝任何包含了 `system:masters` 一个“组”(或者“组织”)的请求。 +此准入控制器监视 `spec.signerName` 被设置为 `kubernetes.io/kube-apiserver-client` 的 +CertificateSigningRequest 资源创建请求,并拒绝所有将 “group”(或 “organization attribute”) +设置为 `system:masters` 的请求。 ### DefaultIngressClass {#defaultingressclass} @@ -266,8 +265,8 @@ ingress class and automatically adds a default ingress class to them. This way, request any special ingress class do not need to care about them at all and they will get the default one. --> -该准入控制器监测没有请求任何特定 Ingress 类的 `Ingress` 对象的创建,并自动向其添加默认 Ingress 类。 -这样,没有任何特殊 Ingress 类需求的用户根本不需要关心它们,它们将获得默认 Ingress 类。 +该准入控制器监测没有请求任何特定 Ingress 类的 `Ingress` 对象创建请求,并自动向其添加默认 Ingress 类。 +这样,没有任何特殊 Ingress 类需求的用户根本不需要关心它们,他们将被设置为默认 Ingress 类。 -当未配置默认 Ingress 类时,此准入控制器不执行任何操作。如果将多个 Ingress 类标记为默认 Ingress 类, -它将拒绝任何创建 `Ingress` 的操作,并显示错误。 -要修复此错误,管理员必须重新检查其 `IngressClass` 对象,并仅将其中一个标记为默认(通过注解 -"ingressclass.kubernetes.io/is-default-class")。 -此准入控制器会忽略所有 `Ingress` 更新操作,仅响应创建操作。 +当未配置默认 Ingress 类时,此准入控制器不执行任何操作。如果有多个 Ingress 类被标记为默认 Ingress 类, +此控制器将拒绝所有创建 `Ingress` 的操作,并返回错误信息。 +要修复此错误,管理员必须重新检查其 `IngressClass` 对象,并仅将其中一个标记为默认 +(通过注解 "ingressclass.kubernetes.io/is-default-class")。 +此准入控制器会忽略所有 `Ingress` 更新操作,仅处理创建操作。 关于 Ingress 类以及如何将 Ingress 类标记为默认的更多信息,请参见 -[ingress](/zh/docs/concepts/services-networking/ingress/)。 +[Ingress](/zh/docs/concepts/services-networking/ingress/) 页面。 ### DefaultStorageClass {#defaultstorageclass} @@ -297,9 +296,9 @@ and automatically adds a default storage class to them. This way, users that do not request any special storage class do not need to care about them at all and they will get the default one. 
--> -该准入控制器监测没有请求任何特定存储类的 `PersistentVolumeClaim` 对象的创建, +此准入控制器监测没有请求任何特定存储类的 `PersistentVolumeClaim` 对象的创建请求, 并自动向其添加默认存储类。 -这样,没有任何特殊存储类需求的用户根本不需要关心它们,它们将获得默认存储类。 +这样,没有任何特殊存储类需求的用户根本不需要关心它们,它们将被设置为使用默认存储类。 当未配置默认存储类时,此准入控制器不执行任何操作。如果将多个存储类标记为默认存储类, -它将拒绝任何创建 `PersistentVolumeClaim` 的操作,并显示错误。 -要修复此错误,管理员必须重新访问其 `StorageClass` 对象,并仅将其中一个标记为默认。 -此准入控制器会忽略所有 `PersistentVolumeClaim` 更新操作,仅响应创建操作。 +此控制器将拒绝所有创建 `PersistentVolumeClaim` 的请求,并返回错误信息。 +要修复此错误,管理员必须重新检查其 `StorageClass` 对象,并仅将其中一个标记为默认。 +此准入控制器会忽略所有 `PersistentVolumeClaim` 更新操作,仅处理创建操作。 -关于持久化卷和存储类,以及如何将存储类标记为默认,请参见 -[持久化卷](/zh/docs/concepts/storage/persistent-volumes/)。 +关于持久卷申领和存储类,以及如何将存储类标记为默认,请参见[持久卷](/zh/docs/concepts/storage/persistent-volumes/)页面。 ### DefaultTolerationSeconds {#defaulttolerationseconds} -该准入控制器基于 k8s-apiserver 输入参数 `default-not-ready-toleration-seconds` 和 +此准入控制器基于 k8s-apiserver 的输入参数 `default-not-ready-toleration-seconds` 和 `default-unreachable-toleration-seconds` 为 Pod 设置默认的容忍度,以容忍 `notready:NoExecute` 和 -`unreachable:NoExecute` 污点。 +`unreachable:NoExecute` 污点 (如果 Pod 尚未容忍 `node.kubernetes.io/not-ready:NoExecute` 和 -`node.kubernetes.io/unreachable:NoExecute` 污点的话) -`default-not-ready-toleration-seconds` 和 `default-unreachable-toleration-seconds` 的默认值是 5 分钟。 +`node.kubernetes.io/unreachable:NoExecute` 污点的话)。 +`default-not-ready-toleration-seconds` 和 `default-unreachable-toleration-seconds` +的默认值是 5 分钟。 ### DenyEscalatingExec {#denyescalatingexec} @@ -345,9 +344,9 @@ This admission controller will deny exec and attach commands to pods that run wi allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace. --> -该准入控制器将拒绝在由于拥有升级特权,而具备访问宿主机能力的 Pod 中执行 exec 和 -attach 命令。这包括在特权模式运行的 Pod,可以访问主机 IPC 名字空间的 Pod, -和访问主机 PID 名字空间的 Pod 。 +此准入控制器将拒绝在由于拥有提级特权而具备访问宿主机能力的 Pod 中执行 exec 和 +attach 命令。这类 Pod 包括在特权模式运行的 Pod、可以访问主机 IPC 名字空间的 Pod、 +和访问主机 PID 名字空间的 Pod。 -DenyExecOnPrivileged 准入插件已被废弃。 +DenyEscalatingExec 准入插件已被弃用。 建议使用基于策略的准入插件(例如 [PodSecurityPolicy](#podsecuritypolicy) 和自定义准入插件), -该插件可以针对特定用户或名字空间,还可以防止创建权限过高的 Pod。 +这类插件可以针对特定用户或名字空间,还可以防止创建权限过高的 Pod。 ### DenyExecOnPrivileged {#denyexeconprivileged} @@ -368,14 +367,14 @@ DenyExecOnPrivileged 准入插件已被废弃。 -如果一个 pod 拥有一个特权容器,该准入控制器将拦截所有在该 pod 中执行 exec 命令的请求。 +如果一个 Pod 中存在特权容器,该准入控制器将拦截所有在该 Pod 中执行 exec 命令的请求。 此功能已合并至 [DenyEscalatingExec](#denyescalatingexec)。 -而 DenyExecOnPrivileged 准入插件已被废弃。 +而 DenyExecOnPrivileged 准入插件已被弃用。 建议使用基于策略的准入插件(例如 [PodSecurityPolicy](#podsecuritypolicy) 和自定义准入插件), -该插件可以针对特定用户或名字空间,还可以防止创建权限过高的 Pod。 +这类插件可以针对特定用户或名字空间,还可以防止创建权限过高的 Pod。 -### DenyServiceExternalIPs +### DenyServiceExternalIPs {#denyserviceexternalips} -该准入控制器拒绝 `Service` 字段 `externalIPs` 的所有新规使用。 此功能非常强大(允许网络流量拦截), -并且无法很好地受策略控制。 启用后,群集用户将无法创建使用 `externalIPs` 的新服务,也无法在现有 -`Service` 对象上向 `externalIPs` 添加新值。 `externalIPs` 的现有使用不受影响,用户可以从现有 -`Service` 对象上的 `externalIPs` 中删除值。 +此准入控制器拒绝新的 `Service` 中使用字段 `externalIPs`。 +此功能非常强大(允许网络流量拦截),并且无法很好地受策略控制。 +启用后,集群用户将无法创建使用 `externalIPs` 的新 `Service`,也无法在现有 +`Service` 对象上为 `externalIPs` 添加新值。 +`externalIPs` 的现有使用不受影响,用户可以在现有 `Service` 对象上从 +`externalIPs` 中删除值。 -大多数用户根本不需要此功能,集群管理员应考虑将其禁用。 -确实需要使用此功能的集群应考虑使用一些自定义策略来管理其的使用。 +大多数用户根本不需要此特性,集群管理员应考虑将其禁用。 +确实需要使用此特性的集群应考虑使用一些自定义策略来管理 `externalIPs` 的使用。 ### EventRateLimit {#eventratelimit} @@ -416,45 +417,30 @@ of it. This admission controller mitigates the problem where the API server gets flooded by event requests. 
The cluster admin can specify event rate limits by: --> -该准入控制器缓解了事件请求淹没 API 服务器的问题。集群管理员可以通过以下方式指定事件速率限制: +此准入控制器缓解了事件请求淹没 API 服务器的问题。集群管理员可以通过以下方式指定事件速率限制: * 启用 `EventRateLimit` 准入控制器; -* 从文件中引用 `EventRateLimit` 配置文件,并提供给 API 服务器命令的 - `--admission-control-config-file` 标志: +* 在通过 API 服务器的命令行标志 `--admission-control-config-file` 设置的文件中, + 引用 `EventRateLimit` 配置文件: -{{< tabs name="eventratelimit_example" >}} -{{% tab name="apiserver.config.k8s.io/v1" %}} -```yaml -apiVersion: apiserver.config.k8s.io/v1 -kind: AdmissionConfiguration -plugins: -- name: EventRateLimit - path: eventconfig.yaml -... -``` -{{% /tab %}} -{{% tab name="apiserver.k8s.io/v1alpha1" %}} -```yaml -# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 -apiVersion: apiserver.k8s.io/v1alpha1 -kind: AdmissionConfiguration -plugins: -- name: EventRateLimit - path: eventconfig.yaml -... -``` -{{% /tab %}} -{{< /tabs >}} + ```yaml + apiVersion: apiserver.config.k8s.io/v1 + kind: AdmissionConfiguration + plugins: + - name: EventRateLimit + path: eventconfig.yaml + ... + ``` -可以在配置中指定四种类型的限制: +可以在配置中指定的限制有四种类型: * `Server`: API 服务器收到的所有事件请求共享一个桶。 -* `Namespace`: 每个名字空间都有一个专用的桶。 -* `User`: 给每个用户都分配一个桶。 -* `SourceAndObject`: 根据事件的源和涉及对象的每种组合分配桶。 +* `Namespace`: 每个名字空间都对应一个专用的桶。 +* `User`: 为每个用户分配一个桶。 +* `SourceAndObject`: 根据事件的来源和涉及对象的各种组合分配桶。 -下面是一个配置示例 `eventconfig.yaml`: +下面是一个针对此配置的 `eventconfig.yaml` 示例: ```yaml apiVersion: eventratelimit.admission.k8s.io/v1alpha1 kind: Configuration limits: -- type: Namespace - qps: 50 - burst: 100 - cacheSize: 2000 -- type: User - qps: 10 - burst: 50 + - type: Namespace + qps: 50 + burst: 100 + cacheSize: 2000 + - type: User + qps: 10 + burst: 50 ``` 详情请参见 -[事件速率限制提案](https://git.k8s.io/community/contributors/design-proposals/api-machinery/admission_control_event_rate_limit.md)。 +[EventRateLimit 配置 API 文档(v1alpha1)](/zh/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/)。 ### ExtendedResourceToleration {#extendedresourcetoleration} @@ -503,30 +489,29 @@ name as the key. This admission controller, if enabled, automatically adds tolerations for such taints to pods requesting extended resources, so users don't have to manually add these tolerations. --> -该插件有助于创建可扩展资源的专用节点。 -如果运营商想创建可扩展资源的专用节点(如 GPU、FPGA 等), -那他们应该以扩展资源名称作为键名, +此插件有助于创建带有扩展资源的专用节点。 +如果运维人员想要创建带有扩展资源(如 GPU、FPGA 等)的专用节点,他们应该以扩展资源名称作为键名, [为节点设置污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 -如果启用了该准入控制器,会将此类污点的容忍自动添加到请求扩展资源的 Pod 中, -用户不必再手动添加这些容忍。 +如果启用了此准入控制器,会将此类污点的容忍度自动添加到请求扩展资源的 Pod 中, +用户不必再手动添加这些容忍度。 ### ImagePolicyWebhook {#imagepolicywebhook} -ImagePolicyWebhook 准入控制器允许使用一个后端的 webhook 做出准入决策。 +ImagePolicyWebhook 准入控制器允许使用后端 Webhook 做出准入决策。 -#### 配置文件格式 +#### 配置文件格式 {#configuration-file-format} -ImagePolicyWebhook 使用配置文件来为后端行为设置配置选项。该文件可以是 JSON 或 YAML, +ImagePolicyWebhook 使用配置文件来为后端行为设置选项。该文件可以是 JSON 或 YAML, 并具有以下格式: ```yaml @@ -545,11 +530,9 @@ imagePolicy: -从文件中引用 ImagePolicyWebhook 的配置文件,并将其提供给 API 服务器命令标志 -`--admission-control-config-file`: +在通过命令行标志 `--admission-control-config-file` 为 API 服务器提供的文件中, +引用 ImagePolicyWebhook 配置文件: -{{< tabs name="imagepolicywebhook_example1" >}} -{{% tab name="apiserver.config.k8s.io/v1" %}} ```yaml apiVersion: apiserver.config.k8s.io/v1 kind: AdmissionConfiguration @@ -558,27 +541,12 @@ plugins: path: imagepolicyconfig.yaml ... 
``` -{{% /tab %}} -{{% tab name="apiserver.k8s.io/v1alpha1" %}} -```yaml -# v1.17 中已废弃以鼓励使用 apiserver.config.k8s.io/v1 -apiVersion: apiserver.k8s.io/v1alpha1 -kind: AdmissionConfiguration -plugins: -- name: ImagePolicyWebhook - path: imagepolicyconfig.yaml -... -``` -{{% /tab %}} -{{< /tabs >}} -或者,你也可以直接将配置嵌入到文件中: +或者,你也可以直接将配置嵌入到该文件中: -{{< tabs name="imagepolicywebhook_example2" >}} -{{% tab name="apiserver.config.k8s.io/v1" %}} ```yaml apiVersion: apiserver.config.k8s.io/v1 kind: AdmissionConfiguration @@ -592,24 +560,6 @@ plugins: retryBackoff: 500 defaultAllow: true ``` -{{% /tab %}} -{{% tab name="apiserver.k8s.io/v1alpha1" %}} -```yaml -# v1.17 中已废弃以鼓励使用 apiserver.config.k8s.io/v1 -apiVersion: apiserver.k8s.io/v1alpha1 -kind: AdmissionConfiguration -plugins: -- name: ImagePolicyWebhook - configuration: - imagePolicy: - kubeConfigFile: - allowTTL: 50 - denyTTL: 50 - retryBackoff: 500 - defaultAllow: true -``` -{{% /tab %}} -{{< /tabs >}} ImagePolicyWebhook 的配置文件必须引用 [kubeconfig](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) -格式的文件;该文件设置了到后端的连接参数。 -要求后端使用 TLS 进行通信。 +格式的文件;该文件用来设置与后端的连接。要求后端使用 TLS 进行通信。 -kubeconfig 文件的 cluster 字段需要指向远端服务,user 字段需要包含已返回的授权者。 +kubeconfig 文件的 `clusters` 字段需要指向远端服务,`users` 字段需要包含已返回的授权者。 #### 请求载荷 当面对一个准入决策时,API 服务器发送一个描述操作的 JSON 序列化的 `imagepolicy.k8s.io/v1alpha1` `ImageReview` 对象。 -该对象包含描述被审核容器的字段,以及所有匹配 `*.image-policy.k8s.io/*` 的 -Pod 注解。 +该对象包含描述被准入容器的字段,以及与 `*.image-policy.k8s.io/*` 匹配的所有 Pod 注解。 +{{ note }} 注意,Webhook API 对象与其他 Kubernetes API 对象一样受制于相同的版本控制兼容性规则。 -实现者应该知道对 alpha 对象的更宽松的兼容性,并检查请求的 "apiVersion" 字段, +实现者应该知道对 alpha 对象兼容性是相对宽松的,并检查请求的 "apiVersion" 字段, 以确保正确的反序列化。 此外,API 服务器必须启用 `imagepolicy.k8s.io/v1alpha1` API 扩展组 (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`)。 +{{ /note }} 远程服务将填充请求的 `ImageReviewStatus` 字段,并返回允许或不允许访问的响应。 -响应体的 "spec" 字段会被忽略,并且可以省略。一个允许访问应答会返回: +响应体的 `spec` 字段会被忽略,并且可以被省略。一个允许访问应答会返回: ```json { @@ -750,22 +711,25 @@ To disallow access, the service would return: ``` -更多的文档,请参阅 `imagepolicy.v1alpha1` API 对象和 -`plugin/pkg/admission/imagepolicy/admission.go`。 +更多的文档,请参阅 [`imagepolicy.v1alpha1` API](/zh/docs/reference/config-api/imagepolicy.v1alpha1/)。 -#### 使用注解进行扩展 +#### 使用注解进行扩展 {#extending-with-annotations} 一个 Pod 中匹配 `*.image-policy.k8s.io/*` 的注解都会被发送给 Webhook。 -这样做使得了解后端镜像策略的用户可以向它发送额外的信息,并为不同的后端实现 -接收不同的信息。 +这样做使得了解后端镜像策略的用户可以向它发送额外的信息, +并让不同的后端实现接收不同的信息。 -* 在紧急情况下,请求 "break glass" 覆盖一个策略。 -* 从一个记录了 break-glass 的请求的 ticket 系统得到的一个 ticket 号码。 -* 向策略服务器提供一个提示,用于提供镜像的 imageID,以方便它进行查找。 +* 在紧急情况下,请求破例覆盖某个策略。 +* 从一个记录了破例的请求的工单(Ticket)系统得到的一个工单号码。 +* 向策略服务器提供提示信息,用于提供镜像的 imageID,以方便它进行查找。 在任何情况下,注解都是由用户提供的,并不会被 Kubernetes 以任何方式进行验证。 -在将来,如果一个注解确定将被广泛使用,它可能会被提升为 ImageReviewSpec 的一个命名字段。 -### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology} +### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology} -该准入控制器拒绝(定义了 `AntiAffinity` 拓扑键的)任何 Pod -(`requiredDuringSchedulingRequiredDuringExecution` 中的 -`kubernetes.io/hostname` 除外)。 +此准入控制器拒绝定义了 `AntiAffinity` 拓扑键的任何 Pod +(`requiredDuringSchedulingRequiredDuringExecution` 中的 `kubernetes.io/hostname` 除外)。 ### LimitRanger {#limitranger} -该准入控制器会观察传入的请求,并确保它不会违反 `Namespace` 中 `LimitRange` -对象枚举的任何约束。 -如果你在 Kubernetes 部署中使用了 `LimitRange` 对象,则必须使用此准入控制器来 -执行这些约束。 -LimitRanger 还可以用于将默认资源请求应用到没有指定任何内容的 Pod; -当前,默认的 LimitRanger 对 `default` 名字空间中的所有 Pod 都应用了 -0.1 CPU 的需求。 +此准入控制器会监测传入的请求,并确保请求不会违反 `Namespace` 中 `LimitRange` 对象所设置的任何约束。 +如果你在 Kubernetes 部署中使用了 `LimitRange` 对象,则必须使用此准入控制器来执行这些约束。 +LimitRanger 
还可以用于将默认资源请求应用到没有设定资源约束的 Pod; +当前,默认的 LimitRanger 对 `default` 名字空间中的所有 Pod 都设置 0.1 CPU 的需求。 请查看 -[limitRange 设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) -和 [LimitRange 例子](/zh/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -以了解更多细节。 +[limitRange API 文档](/zh/docs/reference/kubernetes-api/policy-resources/limit-range-v1/)和 +[LimitRange 例子](/zh/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)以了解更多细节。 -### MutatingAdmissionWebhook {#mutatingadmissionwebhook} +### MutatingAdmissionWebhook {#mutatingadmissionwebhook} -该准入控制器调用任何与请求匹配的变更 Webhook。匹配的 Webhook 将被串行调用。 -每一个 Webhook 都可以根据需要修改对象。 +此准入控制器调用任何与请求匹配的变更(Mutating) Webhook。匹配的 Webhook 将被顺序调用。 +每一个 Webhook 都可以自由修改对象。 `MutatingAdmissionWebhook`,顾名思义,仅在变更阶段运行。 @@ -841,42 +801,41 @@ If a webhook called by this has side effects (for example, decrementing quota) i *must* have a reconciliation system, as it is not guaranteed that subsequent webhooks or validating admission controllers will permit the request to finish. --> -如果由此准入控制器调用的 Webhook 有副作用(如降低配额), -则它 *必须* 具有协调系统,因为不能保证后续的 Webhook 和验证准入控制器都会允许完成请求。 +如果由此准入控制器调用的 Webhook 有副作用(如:减少配额), +则它 **必须** 具有协调系统,因为不能保证后续的 Webhook 和验证准入控制器都会允许完成请求。 如果你禁用了 MutatingAdmissionWebhook,那么还必须使用 `--runtime-config` 标志禁止 -`admissionregistration.k8s.io/v1` 组/版本中的 `MutatingWebhookConfiguration` -对象(版本 >=1.9 时,这两个对象都是默认启用的)。 +`admissionregistration.k8s.io/v1` 组/版本中的 `MutatingWebhookConfiguration`, +二者都是默认启用的。 -#### 谨慎编写和安装变更 webhook +#### 谨慎编写和安装变更 webhook {#use-caution-when-authoring-and-installing-mutating-webhooks} * 当用户尝试创建的对象与返回的对象不同时,用户可能会感到困惑。 -* 当它们回读的对象与尝试创建的对象不同,内建的控制环可能会出问题。 +* 当他们读回的对象与尝试创建的对象不同,内建的控制回路可能会出问题。 * 与覆盖原始请求中设置的字段相比,使用原始请求未设置的字段会引起问题的可能性较小。 - 应尽量避免前面那种方式。 -* 内建资源和第三方资源的控制回路未来可能会受到破坏性的更改,使现在运行良好的 Webhook - 无法再正常运行。即使完成了 Webhook API 安装,也不代表会为该 webhook 提供无限期的支持。 + 应尽量避免覆盖原始请求中的字段设置。 +* 内建资源和第三方资源的控制回路未来可能会出现破坏性的变更,使现在运行良好的 Webhook + 无法再正常运行。即使完成了 Webhook API 安装,也不代表该 Webhook 会被提供无限期的支持。 ### NamespaceAutoProvision {#namespaceautoprovision} @@ -887,9 +846,9 @@ It creates a namespace if it cannot be found. This admission controller is useful in deployments that do not want to restrict creation of a namespace prior to its usage. --> -该准入控制器会检查名字空间资源上的所有传入请求,并检查所引用的名字空间是否确实存在。 -如果找不到,它将创建一个名字空间。 -此准入控制器对于不想要求名字空间必须先创建后使用的集群部署中很有用。 +此准入控制器会检查针对名字空间域资源的所有传入请求,并检查所引用的名字空间是否确实存在。 +如果找不到所引用的名字空间,控制器将创建一个名字空间。 +此准入控制器对于不想要求名字空间必须先创建后使用的集群部署很有用。 ### NamespaceExists {#namespaceexists} @@ -897,26 +856,28 @@ a namespace prior to its usage. This admission controller checks all requests on namespaced resources other than `Namespace` itself. If the namespace referenced from a request doesn't exist, the request is rejected. --> -该准入控制器检查除 `Namespace` 以外的名字空间作用域资源上的所有请求。 +此准入控制器检查针对名字空间作用域的资源(除 `Namespace` 自身)的所有请求。 如果请求引用的名字空间不存在,则拒绝该请求。 ### NamespaceLifecycle {#namespacelifecycle} -该准入控制器禁止在一个正在被终止的 `Namespace` 中创建新对象,并确保 -使用不存在的 `Namespace` 的请求被拒绝。 +该准入控制器禁止在一个正在被终止的 `Namespace` 中创建新对象,并确保针对不存在的 +`Namespace` 的请求被拒绝。 该准入控制器还会禁止删除三个系统保留的名字空间,即 `default`、 `kube-system` 和 `kube-public`。 -删除 `Namespace` 会触发删除该名字空间中所有对象(Pod、Service 等)的一系列操作。 +`Namespace` 的删除操作会触发一系列删除该名字空间中所有对象(Pod、Service 等)的操作。 为了确保这个过程的完整性,我们强烈建议启用这个准入控制器。 ### NodeRestriction {#noderestriction} @@ -926,51 +887,52 @@ This admission controller limits the `Node` and `Pod` objects a kubelet can modi kubelets must use credentials in the `system:nodes` group, with a username in the form `system:node:`. 
Such kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node. --> -该准入控制器限制了 kubelet 可以修改的 `Node` 和 `Pod` 对象。 +该准入控制器限制了某 kubelet 可以修改的 `Node` 和 `Pod` 对象。 为了受到这个准入控制器的限制,kubelet 必须使用在 `system:nodes` 组中的凭证, 并使用 `system:node:` 形式的用户名。 -这样,kubelet 只可修改自己的 `Node` API 对象,只能修改绑定到节点本身的 Pod 对象。 +这样,kubelet 只可修改自己的 `Node` API 对象,只能修改绑定到自身节点的 Pod 对象。 -在 Kubernetes 1.11+ 的版本中,不允许 kubelet 从 `Node` API 对象中更新或删除污点。 +不允许 kubelet 更新或删除 `Node` API 对象的污点。 -在 Kubernetes 1.13+ 的版本中,`NodeRestriction` 准入插件可防止 kubelet 删除 -`Node` API 对象,并对 `kubernetes.io/` 或 `k8s.io/` 前缀标签的 kubelet -强制进行如下修改: +`NodeRestriction` 准入插件可防止 kubelet 删除其 `Node` API 对象, +并对前缀为 `kubernetes.io/` 或 `k8s.io/` 的标签的修改对 kubelet 作如下限制: -* **防止** kubelet 添加/删除/更新带有 `node-restriction.kubernetes.io/` 前缀的标签。 - 保留此前缀的标签,供管理员用来标记 Node 对象以隔离工作负载,并且不允许 kubelet +* **禁止** kubelet 添加、删除或更新前缀为 `node-restriction.kubernetes.io/` 的标签。 + 这类前缀的标签时保留给管理员的,用以为 `Node` 对象设置标签以隔离工作负载,而不允许 kubelet 修改带有该前缀的标签。 -* **允许** kubelet 添加/删除/更新这些和这些前缀的标签: +* **允许** kubelet 添加、删除、更新以下标签: * `kubernetes.io/hostname` * `kubernetes.io/arch` * `kubernetes.io/os` * `beta.kubernetes.io/instance-type` * `node.kubernetes.io/instance-type` * `failure-domain.beta.kubernetes.io/region` (已弃用) - * `failure-domain.beta.kubernetes.io/zone` (已弃用) + * `failure-domain.beta.kubernetes.io/zone` (已弃用) * `topology.kubernetes.io/region` * `topology.kubernetes.io/zone` - * `kubelet.kubernetes.io/`-prefixed labels - * `node.kubernetes.io/`-prefixed labels + * `kubelet.kubernetes.io/` 为前缀的标签 + * `node.kubernetes.io/` 为前缀的标签 -kubelet 保留 `kubernetes.io` 或 `k8s.io` 前缀的所有标签,并且将来可能会被 +以 `kubernetes.io` 或 `k8s.io` 为前缀的所有其他标签都限制 kubelet 使用,并且将来可能会被 `NodeRestriction` 准入插件允许或禁止。 将来的版本可能会增加其他限制,以确保 kubelet 具有正常运行所需的最小权限集。 @@ -984,39 +946,32 @@ This admission controller also protects the access to `metadata.ownerReferences[ of an object, so that only users with "update" permission to the `finalizers` subresource of the referenced *owner* can change it. --> -该准入控制器保护对 `metadata.ownerReferences` 对象的访问,以便只有对该对象具有 -“删除” 权限的用户才能对其进行更改。 +此准入控制器保护对对象的 `metadata.ownerReferences` 的访问,以便只有对该对象具有 +“delete” 权限的用户才能对其进行更改。 该准入控制器还保护对 `metadata.ownerReferences[x].blockOwnerDeletion` 对象的访问, -以便只有对所引用的 **属主(owner)** 的 `finalizers` 子资源具有 “更新” +以便只有对所引用的 **属主(owner)** 的 `finalizers` 子资源具有 “update” 权限的用户才能对其进行更改。 ### PersistentVolumeClaimResize {#persistentvolumeclaimresize} - -该准入控制器检查传入的 `PersistentVolumeClaim` 调整大小请求,对其执行额外的验证操作。 +{{< feature-state for_k8s_version="v1.24" state="stable" >}} -{{< note >}} -对调整卷大小的支持是一种 Beta 特性。作为集群管理员,你必须确保特性门控 `ExpandPersistentVolumes` -设置为 `true` 才能启用调整大小。 -{{< /note >}} +此准入控制器检查传入的 `PersistentVolumeClaim` 调整大小请求,对其执行额外的验证检查操作。 -启用 `ExpandPersistentVolumes` 特性门控之后,建议将 `PersistentVolumeClaimResize` -准入控制器也启用。除非 PVC 的 `StorageClass` 明确地将 `allowVolumeExpansion` 设置为 -`true` 来显式启用调整大小。否则,默认情况下该准入控制器会阻止所有对 PVC 大小的调整。 +建议启用 `PersistentVolumeClaimResize` 准入控制器。除非 PVC 的 `StorageClass` 明确地将 +`allowVolumeExpansion` 设置为 `true` 来显式启用调整大小。 +否则,默认情况下该准入控制器会阻止所有对 PVC 大小的调整。 例如:由以下 `StorageClass` 创建的所有 `PersistentVolumeClaim` 都支持卷容量扩充: @@ -1038,7 +993,7 @@ allowVolumeExpansion: true For more information about persistent volume claims, see [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims). 
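作为补充,下面是一个引用了可扩容存储类的 PersistentVolumeClaim 草稿;其中 `example-pvc` 与 `example-expandable-sc` 均为占位符,后者假定是一个已将 `allowVolumeExpansion` 设置为 `true` 的 StorageClass(类似上面的示例)。要请求扩容时,只需在 PVC 创建之后调大 `spec.resources.requests.storage` 的取值。

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc                        # 名称仅为示例
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-expandable-sc  # 假定该存储类允许卷扩容
  resources:
    requests:
      storage: 10Gi                        # 之后可将此值调大以请求扩容
```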
--> 关于持久化卷申领的更多信息,请参见 -[PersistentVolumeClaims](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。 +[PersistentVolumeClaim](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。 ### PersistentVolumeLabel {#persistentvolumelabel} @@ -1052,15 +1007,15 @@ region and/or zone. If the admission controller doesn't support automatic labelling your PersistentVolumes, you may need to add the labels manually to prevent pods from mounting volumes from a different zone. PersistentVolumeLabel is DEPRECATED and labeling persistent volumes has been taken over by -[cloud controller manager](/docs/tasks/administer-cluster/running-cloud-controller/). +the {{< glossary_tooltip text="cloud-controller-manager" term_id="cloud-controller-manager" >}}. Starting from 1.11, this admission controller is disabled by default. --> -该准入控制器会自动将区(region)或区域(zone)标签附加到由云提供商(如 GCE、AWS) -定义的 PersistentVolume。这有助于确保 Pod 和 PersistentVolume 位于相同的区或区域。 +此准入控制器会自动将由云提供商(如 GCE、AWS)定义的区(region)或区域(zone) +标签附加到 PersistentVolume 上。这有助于确保 Pod 和 PersistentVolume 位于相同的区或区域。 如果准入控制器不支持为 PersistentVolumes 自动添加标签,那你可能需要手动添加标签, 以防止 Pod 挂载其他区域的卷。 -PersistentVolumeLabel 已被废弃,标记持久卷已由 -[云管理控制器](/zh/docs/tasks/administer-cluster/running-cloud-controller/)接管。 +PersistentVolumeLabel 已被弃用,为持久卷添加标签的操作已由 +{{< glossary_tooltip text="云管理控制器" term_id="cloud-controller-manager" >}}接管。 从 1.11 开始,默认情况下禁用此准入控制器。 ### PodNodeSelector {#podnodeselector} @@ -1068,70 +1023,56 @@ PersistentVolumeLabel 已被废弃,标记持久卷已由 {{< feature-state for_k8s_version="v1.5" state="alpha" >}} -该准入控制器通过读取名字空间注解和全局配置,来为名字空间中可以使用的节点选择器 -设置默认值并实施限制。 +此准入控制器通过读取名字空间注解和全局配置,来为名字空间中可以使用的节点选择器设置默认值并实施限制。 -#### 配置文件格式 +#### 配置文件格式 {#configuration-file-format-podnodeselector} -`PodNodeSelector` 使用配置文件来设置后端行为的选项。 -请注意,配置文件格式将在将来某个版本中改为版本化文件。 +`PodNodeSelector` 使用配置文件来设置后端行为的选项。请注意,配置文件格式将在将来某个版本中改为版本化文件。 该文件可以是 JSON 或 YAML,格式如下: ```yaml podNodeSelectorPluginConfig: - clusterDefaultNodeSelector: name-of-node-selector - namespace1: name-of-node-selector - namespace2: name-of-node-selector + clusterDefaultNodeSelector: name-of-node-selector + namespace1: name-of-node-selector + namespace2: name-of-node-selector ``` -基于提供给 API 服务器命令行标志 `--admission-control-config-file` 的文件名, -从文件中引用 `PodNodeSelector` 配置文件: +通过 API 服务器命令行标志 `--admission-control-config-file` 为 API 服务器提供的文件中, +需要引用 `PodNodeSelector` 配置文件: -{{< tabs name="podnodeselector_example1" >}} -{{% tab name="apiserver.config.k8s.io/v1" %}} ```yaml apiVersion: apiserver.config.k8s.io/v1 kind: AdmissionConfiguration plugins: -- name: PodNodeSelector - path: podnodeselector.yaml + - name: PodNodeSelector + path: podnodeselector.yaml ... ``` -{{% /tab %}} -{{% tab name="apiserver.k8s.io/v1alpha1" %}} -```yaml -# 在 v1.17 中废弃,以鼓励使用 apiserver.config.k8s.io/v1 -apiVersion: apiserver.k8s.io/v1alpha1 -kind: AdmissionConfiguration -plugins: -- name: PodNodeSelector - path: podnodeselector.yaml -... -``` -{{% /tab %}} -{{< /tabs >}} -#### 配置注解格式 +#### 配置注解格式 {#configuration-annotation-format} -`PodNodeSelector` 使用键为 `scheduler.alpha.kubernetes.io/node-selector` 的注解 -为名字空间设置节点选择算符。 +`PodNodeSelector` 使用键为 `scheduler.alpha.kubernetes.io/node-selector` +的注解为名字空间设置节点选择算符。 ```yaml apiVersion: v1 @@ -1144,28 +1085,30 @@ metadata: -#### 内部行为 +#### 内部行为 {#internal-behavior} -该准入控制器行为如下: +此准入控制器行为如下: 1. 如果 `Namespace` 的注解带有键 `scheduler.alpha.kubernetes.io/node-selector`, 则将其值用作节点选择算符。 2. 如果名字空间缺少此类注解,则使用 `PodNodeSelector` 插件配置文件中定义的 `clusterDefaultNodeSelector` 作为节点选择算符。 -3. 评估 Pod 节点选择算符和名字空间节点选择算符是否存在冲突。存在冲突将导致拒绝。 +3. 
评估 Pod 节点选择算符和名字空间节点选择算符是否存在冲突。存在冲突将拒绝 Pod。 4. 评估 Pod 节点选择算符和特定于名字空间的被允许的选择算符所定义的插件配置文件是否存在冲突。 - 存在冲突将导致拒绝。 + 存在冲突将导致拒绝 Pod。 {{< note >}} -这是下节已被废弃的 [PodSecurityPolicy](#podsecuritypolicy) 准入控制器的替代品。 -此准入控制器负责在创建和修改 Pod 时根据请求的安全上下文和 +这是下节所讨论的已被废弃的 [PodSecurityPolicy](#podsecuritypolicy) 准入控制器的替代品。 +此准入控制器负责在创建和修改 Pod 时,根据请求的安全上下文和 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/) 来确定是否可以执行请求。 @@ -1212,31 +1155,33 @@ See also the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) d for more information. --> 查看 [Pod 安全策略文档](/zh/docs/concepts/security/pod-security-policy/) -了解更多细节。 +进一步了解其间细节。 ### PodTolerationRestriction {#podtolerationrestriction} {{< feature-state for_k8s_version="v1.7" state="alpha" >}} -准入控制器 PodTolerationRestriction 检查 Pod 的容忍度与其名字空间的容忍度之间 -是否存在冲突。如果存在冲突,则拒绝 Pod 请求。 -然后,它将名字空间的容忍度合并到 Pod 的容忍度中,之后根据名字空间的容忍度 -白名单检查所得到的容忍度结果。如果检查成功,则将接受 Pod 请求,否则拒绝该请求。 +准入控制器 PodTolerationRestriction 检查 Pod 的容忍度与其名字空间的容忍度之间是否存在冲突。 +如果存在冲突,则拒绝 Pod 请求。 +控制器接下来会将名字空间的容忍度合并到 Pod 的容忍度中, +根据名字空间的容忍度白名单检查所得到的容忍度结果。 +如果检查成功,则将接受 Pod 请求,否则拒绝该请求。 -如果 Pod 的名字空间没有任何关联的默认容忍度或容忍度白名单,则使用集群级别的 -默认容忍度或容忍度白名单(如果有的话)。 +如果 Pod 的名字空间没有任何关联的默认容忍度或容忍度白名单, +则使用集群级别的默认容忍度或容忍度白名单(如果有的话)。 -名字空间的容忍度通过注解健 `scheduler.alpha.kubernetes.io/defaultTolerations` +名字空间的容忍度通过注解键 `scheduler.alpha.kubernetes.io/defaultTolerations` 来设置。可接受的容忍度可以通过 `scheduler.alpha.kubernetes.io/tolerationsWhitelist` 注解键来添加。 @@ -1263,7 +1208,9 @@ metadata: ### 优先级 {#priority} @@ -1273,21 +1220,22 @@ The priority admission controller uses the `priorityClassName` field and populat ### ResourceQuota {#resourcequota} -该准入控制器会监测传入的请求,并确保它不违反任何一个 `Namespace` 中的 `ResourceQuota` -对象中枚举出来的约束。 -如果你在 Kubernetes 部署中使用了 `ResourceQuota`,你必须使用这个准入控制器来强制 -执行配额限制。 +此准入控制器会监测传入的请求,并确保它不违反任何一个 `Namespace` 中的 `ResourceQuota` +对象中列举的约束。如果你在 Kubernetes 部署中使用了 `ResourceQuota`, +则必须使用这个准入控制器来强制执行配额限制。 -请查看 -[resourceQuota 设计文档](https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md)和 [Resource Quota 例子](/zh/docs/concepts/policy/resource-quotas/) -了解更多细节。 +请参阅 +[resourceQuota API 参考](/zh/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/) +和 [Resource Quota 例子](/zh/docs/concepts/policy/resource-quotas/)了解更多细节。 ### RuntimeClass {#runtimeclass} -+{{< feature-state for_k8s_version="v1.20" state="stable" >}} +{{< feature-state for_k8s_version="v1.20" state="stable" >}} -如果你开启 `PodOverhead` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/), -并且通过 [Pod 开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/) -配置来定义一个 RuntimeClass,这个准入控制器会检查新的 Pod。 -当启用的时候,这个准入控制器会拒绝任何 overhead 字段已经设置的 Pod。 +如果你所定义的 RuntimeClass 包含 [Pod 开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/), +这个准入控制器会检查新的 Pod。被启用后,此准入控制器会拒绝所有已经设置了 overhead 字段的 Pod 创建请求。 对于配置了 RuntimeClass 并在其 `.spec` 中选定 RuntimeClass 的 Pod, 此准入控制器会根据相应 RuntimeClass 中定义的值为 Pod 设置 `.spec.overhead`。 -{{< note >}} -Pod 的 `.spec.overhead` 字段和 RuntimeClass 的 `.overhead` 字段均为处于 beta 版本。 -如果你未启用 `PodOverhead` 特性门控,则所有 Pod 均被视为未设置 `.spec.overhead`。 -{{< /note >}} - 详情请参见 [Pod 开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/)。 ### SecurityContextDeny {#securitycontextdeny} @@ -1340,14 +1280,13 @@ then you could use this admission controller to restrict the set of values a sec See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for more context on restricting pod privileges. 
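下面是一个示意性的 Pod 清单,其中显式设置了 `runAsUser` 和 `seLinuxOptions` 这类安全上下文字段;在启用了此类基于安全上下文的限制后,像这样的请求可能会被拒绝。具体哪些字段会触发拒绝取决于集群中所用控制器的实现,此示例仅用于说明,名称和镜像均为占位符。

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: securitycontext-demo          # 名称仅为示例
spec:
  securityContext:
    runAsUser: 1000                   # 在 Pod 层面显式设置 runAsUser
  containers:
    - name: app
      image: registry.example/app:1.0 # 镜像仅为占位符
      securityContext:
        seLinuxOptions:               # 在容器层面显式设置 SELinux 选项
          level: "s0:c123,c456"
```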
--> -该准入控制器将拒绝任何试图设置特定提升 +此准入控制器将拒绝任何试图设置特定提升 [SecurityContext](/zh/docs/tasks/configure-pod-container/security-context/) -字段的 Pod,正如任务 -[为 Pod 或 Container 配置安全上下文](/zh/docs/tasks/configure-pod-container/security-context/) -中所展示的那样。 -如果集群没有使用 [Pod 安全性准入](/zh/docs/concepts/security/pod-security-admission/)、 -[PodSecurityPolicies](/zh/docs/concepts/security/pod-security-policy/), -也没有任何外部执行机制,那么你可以使用此准入控制器来限制安全上下文所能获取的值集。 +中某些字段的 Pod,正如任务[为 Pod 或 Container 配置安全上下文](/zh/docs/tasks/configure-pod-container/security-context/) +中所展示的那样。如果集群没有使用 +[Pod 安全性准入](/zh/docs/concepts/security/pod-security-admission/)、 +[PodSecurityPolicy](/zh/docs/concepts/security/pod-security-policy/), +也没有任何外部强制机制,那么你可以使用此准入控制器来限制安全上下文所能获取的值集。 有关限制 Pod 权限的更多内容,请参阅 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)。 @@ -1357,14 +1296,15 @@ pod privileges. 此准入控制器实现了 [ServiceAccount](/zh/docs/tasks/configure-pod-container/configure-service-account/) 的自动化。 如果你打算使用 Kubernetes 的 ServiceAccount 对象,我们强烈建议你使用这个准入控制器。 -### StorageObjectInUseProtection +### StorageObjectInUseProtection {#storageobjectinuseprotection} `StorageObjectInUseProtection` 插件将 `kubernetes.io/pvc-protection` 或 -`kubernetes.io/pv-protection` finalizers 添加到新创建的持久化卷声明(PVC) -或持久化卷(PV)中。 -如果用户尝试删除 PVC/PV,除非 PVC/PV 的保护控制器移除 finalizers,否则 -PVC/PV 不会被删除。 -有关更多详细信息,请参考 +`kubernetes.io/pv-protection` finalizers 添加到新创建的持久卷申领(PVC) +或持久卷(PV)中。如果用户尝试删除 PVC/PV,除非 PVC/PV 的保护控制器移除 finalizers, +否则 PVC/PV 不会被删除。有关更多详细信息,请参考 [保护使用中的存储对象](/zh/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection)。 -### TaintNodesByCondition {#taintnodesbycondition} +### TaintNodesByCondition {#taintnodesbycondition} {{< feature-state for_k8s_version="v1.17" state="stable" >}} -该准入控制器为新创建的节点添加 `NotReady` 和 `NoSchedule` -{{< glossary_tooltip text="污点" term_id="taint" >}}。 -这些污点能够避免一些竞态条件的发生,这类静态条件可能导致 Pod 在更新节点污点以准确 -反映其所报告状况之前,就被调度到新节点上。 +该准入控制器为新创建的节点添加 `NotReady` 和 `NoSchedule` {{< glossary_tooltip text="污点" term_id="taint" >}}。 +这些污点能够避免一些竞态条件的发生,而这类竞态条件可能导致 Pod +在更新节点污点以准确反映其所报告状况之前,就被调度到新节点上。 ### ValidatingAdmissionWebhook {#validatingadmissionwebhook} @@ -1403,18 +1343,18 @@ webhooks are called in parallel; if any of them rejects the request, the request fails. This admission controller only runs in the validation phase; the webhooks it calls may not mutate the object, as opposed to the webhooks called by the `MutatingAdmissionWebhook` admission controller. --> -该准入控制器调用与请求匹配的所有验证 Webhook。 +此准入控制器调用与请求匹配的所有验证性 Webhook。 匹配的 Webhook 将被并行调用。如果其中任何一个拒绝请求,则整个请求将失败。 -该准入控制器仅在验证(Validating)阶段运行;与 `MutatingAdmissionWebhook` 准入控制器 -所调用的 Webhook 相反,它调用的 Webhook 应该不会使对象出现变更。 +该准入控制器仅在验证(Validating)阶段运行;与 `MutatingAdmissionWebhook` +准入控制器所调用的 Webhook 相反,它调用的 Webhook 不可以变更对象。 -如果以此方式调用的 Webhook 有其它作用(如,降低配额),则它必须具有协调机制。 -这是因为无法保证后续的 Webhook 或其他有效的准入控制器都允许请求完成。 +如果以此方式调用的 Webhook 有其它副作用(如:减少配额),则它必须具有协调机制。 +这是因为无法保证后续的 Webhook 或其他验证性准入控制器都允许请求完成。 如果你禁用了 ValidatingAdmissionWebhook,还必须通过 `--runtime-config` 标志来禁用 -`admissionregistration.k8s.io/v1` 组/版本中的 `ValidatingWebhookConfiguration` -对象(默认情况下在 1.9 版和更高版本中均处于启用状态)。 - +`admissionregistration.k8s.io/v1` 组/版本中的 `ValidatingWebhookConfiguration` +对象(默认情况下在 v1.9 和更高版本中均处于启用状态)。 ## 有推荐的准入控制器吗? @@ -1439,10 +1382,3 @@ Yes. 
The recommended admission controllers are enabled by default (shown [here]( 因此,你无需显式指定它们。 你可以使用 `--enable-admission-plugins` 标志( **顺序不重要** )来启用默认设置以外的其他准入控制器。 -{{< note >}} - -`--admission-control` 在 1.10 中已废弃,由 `--enable-admission-plugins` 取代。 -{{< /note >}} - diff --git a/content/zh/docs/reference/access-authn-authz/authentication.md b/content/zh-cn/docs/reference/access-authn-authz/authentication.md similarity index 89% rename from content/zh/docs/reference/access-authn-authz/authentication.md rename to content/zh-cn/docs/reference/access-authn-authz/authentication.md index 5bde8a2a4098b..c9b05082b0906 100644 --- a/content/zh/docs/reference/access-authn-authz/authentication.md +++ b/content/zh-cn/docs/reference/access-authn-authz/authentication.md @@ -47,7 +47,7 @@ Kubernetes 假定普通用户是由一个与集群无关的服务通过以下方 - 类似 Keystone 或者 Google Accounts 这类用户数据库 - 包含用户名和密码列表的文件 -有鉴于此,_Kubernetes 并不包含用来代表普通用户账号的对象_。 +有鉴于此,**Kubernetes 并不包含用来代表普通用户账号的对象**。 普通用户的信息无法通过 API 调用添加到集群中。 -尽管无法通过 API 调用来添加普通用户,Kubernetes 仍然认为能够提供由集群的证书 -机构签名的合法证书的用户是通过身份认证的用户。基于这样的配置,Kubernetes -使用证书中的 'subject' 的通用名称(Common Name)字段(例如,"/CN=bob")来 -确定用户名。接下来,基于角色访问控制(RBAC)子系统会确定用户是否有权针对 -某资源执行特定的操作。进一步的细节可参阅 -[证书请求](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#normal-user) +尽管无法通过 API 调用来添加普通用户, +Kubernetes 仍然认为能够提供由集群的证书机构签名的合法证书的用户是通过身份认证的用户。 +基于这样的配置,Kubernetes 使用证书中的 'subject' 的通用名称(Common Name)字段 +(例如,"/CN=bob")来确定用户名。 +接下来,基于角色访问控制(RBAC)子系统会确定用户是否有权针对某资源执行特定的操作。 +进一步的细节可参阅 +[证书请求](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/#normal-user) 下普通用户主题。 与此不同,服务账号是 Kubernetes API 所管理的用户。它们被绑定到特定的名字空间, -或者由 API 服务器自动创建,或者通过 API 调用创建。服务账号与一组以 Secret 保存 -的凭据相关,这些凭据会被挂载到 Pod 中,从而允许集群内的进程访问 Kubernetes -API。 +或者由 API 服务器自动创建,或者通过 API 调用创建。服务账号与一组以 Secret +保存的凭据相关,这些凭据会被挂载到 Pod 中,从而允许集群内的进程访问 Kubernetes API。 -API 请求则或者与某普通用户相关联,或者与某服务账号相关联,亦或者被视作 -[匿名请求](#anonymous-requests)。这意味着集群内外的每个进程在向 API 服务器发起 -请求时都必须通过身份认证,否则会被视作匿名用户。这里的进程可以是在某工作站上 -输入 `kubectl` 命令的操作人员,也可以是节点上的 `kubelet` 组件,还可以是控制面 -的成员。 +API 请求则或者与某普通用户相关联,或者与某服务账号相关联, +亦或者被视作[匿名请求](#anonymous-requests)。这意味着集群内外的每个进程在向 API +服务器发起请求时都必须通过身份认证,否则会被视作匿名用户。这里的进程可以是在某工作站上输入 +`kubectl` 命令的操作人员,也可以是节点上的 `kubelet` 组件,还可以是控制面的成员。 -所有(属性)值对于身份认证系统而言都是不透明的,只有被 -[鉴权组件](/zh/docs/reference/access-authn-authz/authorization/) -解释过之后才有意义。 +所有(属性)值对于身份认证系统而言都是不透明的, +只有被[鉴权组件](/zh-cn/docs/reference/access-authn-authz/authorization/)解释过之后才有意义。 你可以同时启用多种身份认证方法,并且你通常会至少使用两种方法: @@ -148,14 +146,14 @@ Integrations with other authentication protocols (LDAP, SAML, Kerberos, alternat can be accomplished using an [authenticating proxy](#authenticating-proxy) or the [authentication webhook](#webhook-token-authentication). 
--> -当集群中启用了多个身份认证模块时,第一个成功地对请求完成身份认证的模块会 -直接做出评估决定。API 服务器并不保证身份认证模块的运行顺序。 +当集群中启用了多个身份认证模块时,第一个成功地对请求完成身份认证的模块会直接做出评估决定。 +API 服务器并不保证身份认证模块的运行顺序。 对于所有通过身份认证的用户,`system:authenticated` 组都会被添加到其组列表中。 -与其它身份认证协议(LDAP、SAML、Kerberos、X509 的替代模式等等)都可以通过 -使用一个[身份认证代理](#authenticating-proxy)或 -[身份认证 Webhoook](#webhook-token-authentication)来实现。 +与其它身份认证协议(LDAP、SAML、Kerberos、X509 的替代模式等等) +都可以通过使用一个[身份认证代理](#authenticating-proxy)或[身份认证 Webhoook](#webhook-token-authentication) +来实现。 ### 静态令牌文件 {#static-token-file} -当 API 服务器的命令行设置了 `--token-auth-file=SOMEFILE` 选项时,会从文件中 -读取持有者令牌。目前,令牌会长期有效,并且在不重启 API 服务器的情况下 -无法更改令牌列表。 +当 API 服务器的命令行设置了 `--token-auth-file=SOMEFILE` 选项时,会从文件中读取持有者令牌。 +目前,令牌会长期有效,并且在不重启 API 服务器的情况下无法更改令牌列表。 令牌文件是一个 CSV 文件,包含至少 3 个列:令牌、用户名和用户的 UID。 其余列被视为可选的组名。 +{{< note >}} -{{< note >}} 如果要设置的组名不止一个,则对应的列必须用双引号括起来,例如 ```conf @@ -241,12 +234,10 @@ header as shown below. --> #### 在请求中放入持有者令牌 {#putting-a-bearer-token-in-a-request} -当使用持有者令牌来对某 HTTP 客户端执行身份认证时,API 服务器希望看到 -一个名为 `Authorization` 的 HTTP 头,其值格式为 `Bearer `。 -持有者令牌必须是一个可以放入 HTTP 头部值字段的字符序列,至多可使用 -HTTP 的编码和引用机制。 -例如:如果持有者令牌为 `31ada4fd-adec-460c-809a-9e56ceb75269`,则其 -出现在 HTTP 头部时如下所示: +当使用持有者令牌来对某 HTTP 客户端执行身份认证时,API 服务器希望看到一个名为 +`Authorization` 的 HTTP 头,其值格式为 `Bearer `。 +持有者令牌必须是一个可以放入 HTTP 头部值字段的字符序列,至多可使用 HTTP 的编码和引用机制。 +例如:如果持有者令牌为 `31ada4fd-adec-460c-809a-9e56ceb75269`,则其出现在 HTTP 头部时如下所示: ```http Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269 @@ -267,7 +258,7 @@ dynamically managed and created. Controller Manager contains a TokenCleaner controller that deletes bootstrap tokens as they expire. --> 为了支持平滑地启动引导新的集群,Kubernetes 包含了一种动态管理的持有者令牌类型, -称作 *启动引导令牌(Bootstrap Token)*。 +称作 **启动引导令牌(Bootstrap Token)**。 这些令牌以 Secret 的形式保存在 `kube-system` 名字空间中,可以被动态管理和创建。 控制器管理器包含的 `TokenCleaner` 控制器能够在启动引导令牌过期时将其删除。 @@ -276,8 +267,8 @@ The tokens are of the form `[a-z0-9]{6}.[a-z0-9]{16}`. The first component is a Token ID and the second component is the Token Secret. You specify the token in an HTTP header as follows: --> -这些令牌的格式为 `[a-z0-9]{6}.[a-z0-9]{16}`。第一个部分是令牌的 ID;第二个部分 -是令牌的 Secret。你可以用如下所示的方式来在 HTTP 头部设置令牌: +这些令牌的格式为 `[a-z0-9]{6}.[a-z0-9]{16}`。第一个部分是令牌的 ID; +第二个部分是令牌的 Secret。你可以用如下所示的方式来在 HTTP 头部设置令牌: ```http Authorization: Bearer 781292.db7bc3a58fc5f07e @@ -290,8 +281,7 @@ the TokenCleaner controller via the `-controllers` flag on the Controller Manager. This is done with something like `-controllers=*,tokencleaner`. `kubeadm` will do this for you if you are using it to bootstrap a cluster. --> -你必须在 API 服务器上设置 `--enable-bootstrap-token-auth` 标志来启用基于启动 -引导令牌的身份认证组件。 +你必须在 API 服务器上设置 `--enable-bootstrap-token-auth` 标志来启用基于启动引导令牌的身份认证组件。 你必须通过控制器管理器的 `--controllers` 标志来启用 TokenCleaner 控制器; 这可以通过类似 `--controllers=*,tokencleaner` 这种设置来做到。 如果你使用 `kubeadm` 来启动引导新的集群,该工具会帮你完成这些设置。 @@ -306,17 +296,16 @@ cluster. --> 身份认证组件的认证结果为 `system:bootstrap:<令牌 ID>`,该用户属于 `system:bootstrappers` 用户组。 -这里的用户名和组设置都是有意设计成这样,其目的是阻止用户在启动引导集群之后 -继续使用这些令牌。 -这里的用户名和组名可以用来(并且已经被 `kubeadm` 用来)构造合适的鉴权 -策略,以完成启动引导新集群的工作。 +这里的用户名和组设置都是有意设计成这样,其目的是阻止用户在启动引导集群之后继续使用这些令牌。 +这里的用户名和组名可以用来(并且已经被 `kubeadm` 用来)构造合适的鉴权策略, +以完成启动引导新集群的工作。 -请参阅[启动引导令牌](/zh/docs/reference/access-authn-authz/bootstrap-tokens/) +请参阅[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/) 以了解关于启动引导令牌身份认证组件与控制器的更深入的信息,以及如何使用 `kubeadm` 来管理这些令牌。 @@ -332,8 +321,8 @@ If unspecified, the API server's TLS private key will be used. 
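作为一个简单的草稿示例,下面的 Pod 通过 `spec.serviceAccountName` 显式关联到某个服务账号;这里引用的 `jenkins` 对应下文示例中创建的服务账号,Pod 名称和镜像均为占位符。

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                   # 名称仅为示例
spec:
  serviceAccountName: jenkins         # 显式指定 Pod 所使用的服务账号
  containers:
    - name: app
      image: registry.example/app:1.0 # 镜像仅为占位符
```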
--> ### 服务账号令牌 {#service-account-tokens} -服务账号(Service Account)是一种自动被启用的用户认证机制,使用经过签名的 -持有者令牌来验证请求。该插件可接受两个可选参数: +服务账号(Service Account)是一种自动被启用的用户认证机制,使用经过签名的持有者令牌来验证请求。 +该插件可接受两个可选参数: * `--service-account-key-file` 一个包含用来为持有者令牌签名的 PEM 编码密钥。 若未指定,则使用 API 服务器的 TLS 私钥。 @@ -348,15 +337,15 @@ talk to the API server. Accounts may be explicitly associated with pods using th `serviceAccountName` field of a `PodSpec`. --> 服务账号通常由 API 服务器自动创建并通过 `ServiceAccount` -[准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) +[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/) 关联到集群中运行的 Pod 上。 持有者令牌会挂载到 Pod 中可预知的位置,允许集群内进程与 API 服务器通信。 服务账号也可以使用 Pod 规约的 `serviceAccountName` 字段显式地关联到 Pod 上。 +{{< note >}} -{{< note >}} `serviceAccountName` 通常会被忽略,因为关联关系是自动建立的。 {{< /note >}} @@ -385,10 +374,10 @@ Kubernetes API. To manually create a service account, use the `kubectl create serviceaccount (NAME)` command. This creates a service account in the current namespace and an associated secret. --> -在集群外部使用服务账号持有者令牌也是完全合法的,且可用来为长时间运行的、需要与 -Kubernetes API 服务器通信的任务创建标识。要手动创建服务账号,可以使用 -`kubectl create serviceaccount <名称>` 命令。此命令会在当前的名字空间中生成一个 -服务账号和一个与之关联的 Secret。 +在集群外部使用服务账号持有者令牌也是完全合法的,且可用来为长时间运行的、需要与 Kubernetes +API 服务器通信的任务创建标识。要手动创建服务账号,可以使用 +`kubectl create serviceaccount <名称>` 命令。 +此命令会在当前的名字空间中生成一个服务账号和一个与之关联的 Secret。 ```bash kubectl create serviceaccount jenkins @@ -420,8 +409,7 @@ secrets: The created secret holds the public CA of the API server and a signed JSON Web Token (JWT). --> -所创建的 Secret 中会保存 API 服务器的公开的 CA 证书和一个已签名的 JSON Web -令牌(JWT)。 +所创建的 Secret 中会保存 API 服务器的公开的 CA 证书和一个已签名的 JSON Web 令牌(JWT)。 ```bash kubectl get secret jenkins-token-1yvwg -o yaml @@ -452,10 +440,10 @@ metadata: type: kubernetes.io/service-account-token ``` +{{< note >}} -{{< note >}} 字段值是按 Base64 编码的,这是因为 Secret 数据总是采用 Base64 编码来存储。 {{< /note >}} @@ -481,8 +469,8 @@ when granting permissions to service accounts and read capabilities for secrets. 服务账号被身份认证后,所确定的用户名为 `system:serviceaccount:<名字空间>:<服务账号>`, 并被分配到用户组 `system:serviceaccounts` 和 `system:serviceaccounts:<名字空间>`。 -警告:由于服务账号令牌保存在 Secret 对象中,任何能够读取这些 Secret 的用户 -都可以被认证为对应的服务账号。在为用户授予访问服务账号的权限时,以及对 Secret +警告:由于服务账号令牌保存在 Secret 对象中,任何能够读取这些 Secret +的用户都可以被认证为对应的服务账号。在为用户授予访问服务账号的权限时,以及对 Secret 的读权限时,要格外小心。 要识别用户,身份认证组件使用 OAuth2 -[令牌响应](https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse) -中的 `id_token`(而非 `access_token`)作为持有者令牌。 +[令牌响应](https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse)中的 +`id_token`(而非 `access_token`)作为持有者令牌。 关于如何在请求中设置令牌,可参见[前文](#putting-a-bearer-token-in-a-request)。 {{< mermaid >}} @@ -626,9 +614,8 @@ wish to utilize multiple OAuth clients should explore providers which support th `azp` (authorized party) claim, a mechanism for allowing one client to issue tokens on behalf of another. --> -很重要的一点是,API 服务器并非一个 OAuth2 客户端,相反,它只能被配置为 -信任某一个令牌发放者。这使得使用公共服务(如 Google)的用户可以不信任发放给 -第三方的凭据。 +很重要的一点是,API 服务器并非一个 OAuth2 客户端,相反,它只能被配置为信任某一个令牌发放者。 +这使得使用公共服务(如 Google)的用户可以不信任发放给第三方的凭据。 如果管理员希望使用多个 OAuth 客户端,他们应该研究一下那些支持 `azp` (Authorized Party,被授权方)申领的服务。 `azp` 是一种允许某客户端代替另一客户端发放令牌的机制。 @@ -643,8 +630,8 @@ CloudFoundry [UAA](https://github.com/cloudfoundry/uaa), or Tremolo Security's [OpenUnison](https://openunison.github.io/). 
--> Kubernetes 并未提供 OpenID Connect 的身份服务。 -你可以使用现有的公共的 OpenID Connect 身份服务(例如 Google 或者 -[其他服务](https://connect2id.com/products/nimbus-oauth-openid-connect-sdk/openid-connect-providers))。 +你可以使用现有的公共的 OpenID Connect 身份服务 +(例如 Google 或者[其他服务](https://connect2id.com/products/nimbus-oauth-openid-connect-sdk/openid-connect-providers))。 或者,你也可以选择自己运行一个身份服务,例如 CoreOS [dex](https://github.com/coreos/dex)、 [Keycloak](https://github.com/keycloak/keycloak)、 @@ -672,12 +659,10 @@ Or you can use [this similar script](https://raw.githubusercontent.com/TremoloSe 关于上述第三条需求,即要求具备 CA 签名的证书,有一些额外的注意事项。 如果你部署了自己的身份服务,而不是使用云厂商(如 Google 或 Microsoft)所提供的服务, 你必须对身份服务的 Web 服务器证书进行签名,签名所用证书的 `CA` 标志要设置为 -`TRUE`,即使用的是自签名证书。这是因为 GoLang 的 TLS 客户端实现对证书验证 -标准方面有非常严格的要求。如果你手头没有现成的 CA 证书,可以使用 CoreOS +`TRUE`,即使用的是自签名证书。这是因为 GoLang 的 TLS 客户端实现对证书验证标准方面有非常严格的要求。如果你手头没有现成的 CA 证书,可以使用 CoreOS 团队所开发的[这个脚本](https://github.com/dexidp/dex/blob/master/examples/k8s/gencert.sh) 来创建一个简单的 CA 和被签了名的证书与密钥对。 -或者你也可以使用 -[这个类似的脚本](https://raw.githubusercontent.com/TremoloSecurity/openunison-qs-kubernetes/master/src/main/bash/makessl.sh), +或者你也可以使用[这个类似的脚本](https://raw.githubusercontent.com/TremoloSecurity/openunison-qs-kubernetes/master/src/main/bash/makessl.sh), 生成一个合法期更长、密钥尺寸更大的 SHA256 证书。 #### 使用 kubectl {#using-kubectl} -##### 选项一 - OIDC 身份认证组件 +##### 选项一:OIDC 身份认证组件 -第一种方案是使用 kubectl 的 `oidc` 身份认证组件,该组件将 `id_token` 设置 -为所有请求的持有者令牌,并且在令牌过期时自动刷新。在你登录到你的身份服务之后, +第一种方案是使用 kubectl 的 `oidc` 身份认证组件,该组件将 `id_token` 设置为所有请求的持有者令牌, +并且在令牌过期时自动刷新。在你登录到你的身份服务之后, 可以使用 kubectl 来添加你的 `id_token`、`refresh_token`、`client_id` 和 `client_secret`,以配置该插件。 @@ -769,7 +754,7 @@ Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` The `kubectl` command lets you pass in a token using the `--token` option. Copy and paste the `id_token` into this option: --> -##### 选项二 - 使用 `--token` 选项 +##### 选项二:使用 `--token` 选项 `kubectl` 命令允许你使用 `--token` 选项传递一个令牌。 你可以将 `id_token` 的内容复制粘贴过来,作为此标志的取值: @@ -790,8 +775,8 @@ Webhook authentication is a hook for verifying bearer tokens. Webhook 身份认证是一种用来验证持有者令牌的回调机制。 -* `--authentication-token-webhook-config-file` 指向一个配置文件,其中描述 - 如何访问远程的 Webhook 服务。 +* `--authentication-token-webhook-config-file` 指向一个配置文件, + 其中描述如何访问远程的 Webhook 服务。 * `--authentication-token-webhook-cache-ttl` 用来设定身份认证决定的缓存时间。 默认时长为 2 分钟。 @@ -800,7 +785,7 @@ The configuration file uses the [kubeconfig](/docs/concepts/configuration/organi file format. Within the file, `clusters` refers to the remote service and `users` refers to the API server webhook. An example would be: --> -配置文件使用 [kubeconfig](/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig/) +配置文件使用 [kubeconfig](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/) 文件的格式。文件中,`clusters` 指代远程服务,`users` 指代远程 API 服务 Webhook。下面是一个例子: @@ -865,11 +850,10 @@ contexts: When a client attempts to authenticate with the API server using a bearer token as discussed [above](#putting-a-bearer-token-in-a-request), the authentication webhook POSTs a JSON-serialized `TokenReview` object containing the token to the remote service. 
--> -当客户端尝试在 API 服务器上使用持有者令牌完成身份认证( -如[前](#putting-a-bearer-token-in-a-request)所述)时, +当客户端尝试在 API 服务器上使用持有者令牌完成身份认证 +(如[前](#putting-a-bearer-token-in-a-request)所述)时, 身份认证 Webhook 会用 POST 请求发送一个 JSON 序列化的对象到远程服务。 -该对象是 `TokenReview` 对象, -其中包含持有者令牌。 +该对象是 `TokenReview` 对象,其中包含持有者令牌。 Kubernetes 不会强制请求提供此 HTTP 头部。 -要注意的是,Webhook API 对象和其他 Kubernetes API 对象一样,也要受到同一 -[版本兼容规则](/zh/docs/concepts/overview/kubernetes-api/)约束。 +要注意的是,Webhook API 对象和其他 Kubernetes API 对象一样, +也要受到同一[版本兼容规则](/zh-cn/docs/concepts/overview/kubernetes-api/)约束。 实现者应检查请求的 `apiVersion` 字段以确保正确的反序列化, -并且**必须**以与请求相同版本的 `TokenReview` 对象进行响应。 +并且 **必须** 以与请求相同版本的 `TokenReview` 对象进行响应。 {{< tabs name="TokenReview_request" >}} @@ -891,7 +875,8 @@ The Kubernetes API server defaults to sending `authentication.k8s.io/v1beta1` to To opt into receiving `authentication.k8s.io/v1` token reviews, the API server must be started with `--authentication-token-webhook-version=v1`. --> Kubernetes API 服务器默认发送 `authentication.k8s.io/v1beta1` 令牌以实现向后兼容性。 -要选择接收 `authentication.k8s.io/v1` 令牌认证,API 服务器必须以 `--authentication-token-webhook-version=v1` 启动。 +要选择接收 `authentication.k8s.io/v1` 令牌认证,API 服务器必须带着参数 +`--authentication-token-webhook-version=v1` 启动。 {{< /note >}} ```yaml @@ -944,7 +929,7 @@ A successful validation of the bearer token would return: 远程服务预计会填写请求的 `status` 字段以指示登录成功。 响应正文的 `spec` 字段被忽略并且可以省略。 远程服务必须使用它收到的相同 `TokenReview` API 版本返回响应。 -承载令牌的成功验证将返回: +持有者令牌的成功验证将返回: {{< tabs name="TokenReview_response_success" >}} {{% tab name="authentication.k8s.io/v1" %}} @@ -1066,22 +1051,24 @@ API 服务器可以配置成从请求的头部字段值(如 `X-Remote-User`) * `--requestheader-group-headers` 1.6+. Optional, case-insensitive. "X-Remote-Group" is suggested. Header names to check, in order, for the user's groups. All values in all specified headers are used as group names. * `--requestheader-extra-headers-prefix` 1.6+. Optional, case-insensitive. "X-Remote-Extra-" is suggested. Header prefixes to look for to determine extra information about the user (typically used by the configured authorization plugin). Any headers beginning with any of the specified prefixes have the prefix removed. The remainder of the header name is lowercased and [percent-decoded](https://tools.ietf.org/html/rfc3986#section-2.1) and becomes the extra key, and the header value is the extra value. --> -* `--requestheader-username-headers` 必需字段,大小写不敏感。用来设置要获得用户身份所要检查的头部字段名称列表(有序)。第一个包含数值的字段会被用来提取用户名。 +* `--requestheader-username-headers` 必需字段,大小写不敏感。 + 用来设置要获得用户身份所要检查的头部字段名称列表(有序)。 + 第一个包含数值的字段会被用来提取用户名。 * `--requestheader-group-headers` 可选字段,在 Kubernetes 1.6 版本以后支持,大小写不敏感。 建议设置为 "X-Remote-Group"。用来指定一组头部字段名称列表,以供检查用户所属的组名称。 所找到的全部头部字段的取值都会被用作用户组名。 * `--requestheader-extra-headers-prefix` 可选字段,在 Kubernetes 1.6 版本以后支持,大小写不敏感。 - 建议设置为 "X-Remote-Extra-"。用来设置一个头部字段的前缀字符串,API 服务器会基于所给 - 前缀来查找与用户有关的一些额外信息。这些额外信息通常用于所配置的鉴权插件。 - API 服务器会将与所给前缀匹配的头部字段过滤出来,去掉其前缀部分,将剩余部分 - 转换为小写字符串并在必要时执行[百分号解码](https://tools.ietf.org/html/rfc3986#section-2.1) - 后,构造新的附加信息字段键名。原来的头部字段值直接作为附加信息字段的值。 + 建议设置为 "X-Remote-Extra-"。用来设置一个头部字段的前缀字符串, + API 服务器会基于所给前缀来查找与用户有关的一些额外信息。这些额外信息通常用于所配置的鉴权插件。 + API 服务器会将与所给前缀匹配的头部字段过滤出来,去掉其前缀部分,将剩余部分转换为小写字符串, + 并在必要时执行[百分号解码](https://tools.ietf.org/html/rfc3986#section-2.1)后, + 构造新的附加信息字段键名。原来的头部字段值直接作为附加信息字段的值。 +{{< note >}} -{{< note >}} 在 1.13.3 版本之前(包括 1.10.7、1.9.11),附加字段的键名只能包含 [HTTP 头部标签的合法字符](https://tools.ietf.org/html/rfc7230#section-3.2.6)。 {{< /note >}} @@ -1137,17 +1124,16 @@ the risks and the mechanisms to protect the CA's usage. * `--requestheader-allowed-names` Optional. 
List of Common Name values (CNs). If set, a valid client certificate with a CN in the specified list must be presented before the request headers are checked for user names. If empty, any CN is allowed. --> 为了防范头部信息侦听,在请求中的头部字段被检视之前, -身份认证代理需要向 API 服务器提供一份合法的客户端证书, -供后者使用所给的 CA 来执行验证。 -警告:*不要* 在不同的上下文中复用 CA 证书,除非你清楚这样做的风险是什么以及 -应如何保护 CA 用法的机制。 +身份认证代理需要向 API 服务器提供一份合法的客户端证书,供后者使用所给的 CA 来执行验证。 +警告:**不要** 在不同的上下文中复用 CA 证书,除非你清楚这样做的风险是什么以及应如何保护 +CA 用法的机制。 * `--requestheader-client-ca-file` 必需字段,给出 PEM 编码的证书包。 在检查请求的头部字段以提取用户名信息之前,必须提供一个合法的客户端证书, 且该证书要能够被所给文件中的机构所验证。 * `--requestheader-allowed-names` 可选字段,用来给出一组公共名称(CN)。 - 如果此标志被设置,则在检视请求中的头部以提取用户信息之前,必须提供 - 包含此列表中所给的 CN 名的、合法的客户端证书。 + 如果此标志被设置,则在检视请求中的头部以提取用户信息之前, + 必须提供包含此列表中所给的 CN 名的、合法的客户端证书。 ## 匿名请求 {#anonymous-requests} -启用匿名请求支持之后,如果请求没有被已配置的其他身份认证方法拒绝,则被视作 -匿名请求(Anonymous Requests)。这类请求获得用户名 `system:anonymous` 和 -对应的用户组 `system:unauthenticated`。 +启用匿名请求支持之后,如果请求没有被已配置的其他身份认证方法拒绝, +则被视作匿名请求(Anonymous Requests)。这类请求获得用户名 `system:anonymous` +和对应的用户组 `system:unauthenticated`。 -例如,在一个配置了令牌身份认证且启用了匿名访问的服务器上,如果请求提供了非法的 -持有者令牌,则会返回 `401 Unauthorized` 错误。 -如果请求没有提供持有者令牌,则被视为匿名请求。 +例如,在一个配置了令牌身份认证且启用了匿名访问的服务器上,如果请求提供了非法的持有者令牌, +则会返回 `401 Unauthorized` 错误。如果请求没有提供持有者令牌,则被视为匿名请求。 在 1.5.1-1.5.x 版本中,匿名访问默认情况下是被禁用的,可以通过为 API 服务器设定 `--anonymous-auth=true` 来启用。 @@ -1186,8 +1171,8 @@ that grant access to the `*` user or `*` group do not include anonymous users. --> 在 1.6 及之后版本中,如果所使用的鉴权模式不是 `AlwaysAllow`,则匿名访问默认是被启用的。 从 1.6 版本开始,ABAC 和 RBAC 鉴权模块要求对 `system:anonymous` 用户或者 -`system:unauthenticated` 用户组执行显式的权限判定,所以之前的为 `*` 用户或 -`*` 用户组赋予访问权限的策略规则都不再包含匿名用户。 +`system:unauthenticated` 用户组执行显式的权限判定,所以之前的为用户 `*` 或用户组 +`*` 赋予访问权限的策略规则都不再包含匿名用户。 -带伪装的请求首先会被身份认证识别为发出请求的用户,之后会切换到使用被伪装的用户 -的用户信息。 +带伪装的请求首先会被身份认证识别为发出请求的用户, +之后会切换到使用被伪装的用户的用户信息。 -* 用户发起 API 调用时 _同时_ 提供自身的凭据和伪装头部字段信息 +* 用户发起 API 调用时 **同时** 提供自身的凭据和伪装头部字段信息 * API 服务器对用户执行身份认证 * API 服务器确认通过认证的用户具有伪装特权 * 请求用户的信息被替换成伪装字段的值 @@ -1238,19 +1223,17 @@ The following HTTP headers can be used to performing an impersonation request: 可选字段;要求 "Impersonate-User" 必须被设置。 * `Impersonate-Extra-<附加名称>`:一个动态的头部字段,用来设置与用户相关的附加字段。 此字段可选;要求 "Impersonate-User" 被设置。为了能够以一致的形式保留, - `<附加名称>`部分必须是小写字符,如果有任何字符不是 - [合法的 HTTP 头部标签字符](https://tools.ietf.org/html/rfc7230#section-3.2.6), + `<附加名称>`部分必须是小写字符, + 如果有任何字符不是[合法的 HTTP 头部标签字符](https://tools.ietf.org/html/rfc7230#section-3.2.6), 则必须是 utf8 字符,且转换为[百分号编码](https://tools.ietf.org/html/rfc3986#section-2.1)。 * `Impersonate-Uid`:一个唯一标识符,用来表示所伪装的用户。此头部可选。 - 如果设置,则要求 "Impersonate-User" 也存在。 - Kubernetes 对此字符串没有格式要求。 + 如果设置,则要求 "Impersonate-User" 也存在。Kubernetes 对此字符串没有格式要求。 +{{< note >}} -{{< note >}} -在 1.11.3 版本之前(以及 1.10.7、1.9.11),`<附加名称>` 只能包含 -合法的 HTTP 标签字符。 +在 1.11.3 版本之前(以及 1.10.7、1.9.11),`<附加名称>` 只能包含合法的 HTTP 标签字符。 {{< /note >}} {{< note >}} @@ -1379,31 +1362,45 @@ kind: ClusterRole metadata: name: limited-impersonator rules: -# 可以伪装成用户 "jane.doe@example.com" -- apiGroups: [""] - resources: ["users"] - verbs: ["impersonate"] - resourceNames: ["jane.doe@example.com"] - -# 可以伪装成用户组 "developers" 和 "admins" -- apiGroups: [""] - resources: ["groups"] - verbs: ["impersonate"] - resourceNames: ["developers","admins"] - -# 可以将附加字段 "scopes" 伪装成 "view" 和 "development" -- apiGroups: ["authentication.k8s.io"] - resources: ["userextras/scopes"] - verbs: ["impersonate"] - resourceNames: ["view", "development"] - -# 可以伪装 UID "06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b" -- apiGroups: ["authentication.k8s.io"] - resources: ["uids"] - verbs: ["impersonate"] - resourceNames: 
["06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b"] + # 可以伪装成用户 "jane.doe@example.com" + - apiGroups: [""] + resources: ["users"] + verbs: ["impersonate"] + resourceNames: ["jane.doe@example.com"] + + # 可以伪装成用户组 "developers" 和 "admins" + - apiGroups: [""] + resources: ["groups"] + verbs: ["impersonate"] + resourceNames: ["developers","admins"] + + # 可以将附加字段 "scopes" 伪装成 "view" 和 "development" + - apiGroups: ["authentication.k8s.io"] + resources: ["userextras/scopes"] + verbs: ["impersonate"] + resourceNames: ["view", "development"] + + # 可以伪装 UID "06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b" + - apiGroups: ["authentication.k8s.io"] + resources: ["uids"] + verbs: ["impersonate"] + resourceNames: ["06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b"] ``` +{{< note >}} + +基于伪装成一个用户或用户组的能力,你可以执行任何操作,好像你就是那个用户或用户组一样。 +出于这一原因,伪装操作是不受名字空间约束的。 +如果你希望允许使用 Kubernetes RBAC 来执行身份伪装,就需要使用 `ClusterRole` +和 `ClusterRoleBinding`,而不是 `Role` 或 `RoleBinding`。 +{{< /note >}} + @@ -1421,11 +1418,11 @@ protocol specific logic, then returns opaque credentials to use. Almost all cred use cases require a server side component with support for the [webhook token authenticator](#webhook-token-authentication) to interpret the credential format produced by the client plugin. --> -`k8s.io/client-go` 及使用它的工具(如 `kubectl` 和 `kubelet`)可以执行某个外部 -命令来获得用户的凭据信息。 +`k8s.io/client-go` 及使用它的工具(如 `kubectl` 和 `kubelet`) +可以执行某个外部命令来获得用户的凭据信息。 -这一特性的目的是便于客户端与 `k8s.io/client-go` 并不支持的身份认证协议(LDAP、 -Kerberos、OAuth2、SAML 等)继承。 +这一特性的目的是便于客户端与 `k8s.io/client-go` 并不支持的身份认证协议 +(LDAP、Kerberos、OAuth2、SAML 等)继承。 插件实现特定于协议的逻辑,之后返回不透明的凭据以供使用。 几乎所有的凭据插件使用场景中都需要在服务器端存在一个支持 [Webhook 令牌身份认证组件](#webhook-token-authentication)的模块, @@ -1441,10 +1438,10 @@ to install a credential plugin on their workstation. --> ### 示例应用场景 {#example-use-case} -在一个假想的应用场景中,某组织运行这一个外部的服务,能够将特定用户的已签名的 -令牌转换成 LDAP 凭据。此服务还能够对 -[Webhook 令牌身份认证组件](#webhook-token-authentication)的请求做出响应以 -验证所提供的令牌。用户需要在自己的工作站上安装一个凭据插件。 +在一个假想的应用场景中,某组织运行这一个外部的服务,能够将特定用户的已签名的令牌转换成 +LDAP 凭据。此服务还能够对 +[Webhook 令牌身份认证组件](#webhook-token-authentication)的请求做出响应以验证所提供的令牌。 +用户需要在自己的工作站上安装一个凭据插件。 ### 配置 {#configuration} -凭据插件通过 [kubectl 配置文件](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +凭据插件通过 [kubectl 配置文件](/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) 来作为 user 字段的一部分设置。 {{< tabs name="exec_plugin_kubeconfig_example_1" >}} @@ -1562,7 +1559,6 @@ users: command: "example-client-go-exec-plugin" # 解析 ExecCredentials 资源时使用的 API 版本。必需。 - # # 插件返回的 API 版本必需与这里列出的版本匹配。 # # 要与支持多个版本的工具(如 client.authentication.k8s.io/v1beta1)集成, @@ -1706,7 +1702,6 @@ users: command: "example-client-go-exec-plugin" # 解析 ExecCredentials 资源时使用的 API 版本。必需。 - # # 插件返回的 API 版本必需与这里列出的版本匹配。 # # 要与支持多个版本的工具(如 client.authentication.k8s.io/v1)集成, @@ -1823,7 +1818,7 @@ and required in `client.authentication.k8s.io/v1`. 输入对象中的 `spec.interactive` 字段来确定是否提供了 `stdin`。 插件的 `stdin` 需求(即,为了能够让插件成功运行,是否 `stdin` 是可选的、 必须提供的或者从不会被使用的)是通过 -[kubeconfig](/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig/) +[kubeconfig](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/) 中的 `user.exec.interactiveMode` 来声明的(参见下面的表格了解合法值)。 字段 `user.exec.interactiveMode` 在 `client.authentication.k8s.io/v1beta1` 中是可选的,在 `client.authentication.k8s.io/v1` 中是必需的。 @@ -1848,7 +1843,7 @@ and required in `client.authentication.k8s.io/v1`. 
To use bearer token credentials, the plugin returns a token in the status of the [`ExecCredential`](/docs/reference/config-api/client-authentication.v1beta1/#client-authentication-k8s-io-v1beta1-ExecCredential) --> -与使用持有者令牌凭据,插件在 [`ExecCredential`](/zh/docs/reference/config-api/client-authentication.v1beta1/#client-authentication-k8s-io-v1beta1-ExecCredential) +与使用持有者令牌凭据,插件在 [`ExecCredential`](/zh-cn/docs/reference/config-api/client-authentication.v1beta1/#client-authentication-k8s-io-v1beta1-ExecCredential) 的状态中返回一个令牌: {{< tabs name="exec_plugin_ExecCredential_example_1" >}} @@ -1933,8 +1928,7 @@ Presence or absence of an expiry has the following impact: - If an expiry is omitted, the bearer token and TLS credentials are cached until the server responds with a 401 HTTP status code or until the process exits. --> -作为一种可选方案,响应中还可以包含以 -[RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339) +作为一种可选方案,响应中还可以包含以 [RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339) 时间戳格式给出的证书到期时间。 证书到期时间的有无会有如下影响: @@ -1979,7 +1973,7 @@ credential acquisition logic. The following `ExecCredential` manifest describes a cluster information sample. --> 为了让 exec 插件能够获得特定与集群的信息,可以在 -[kubeconfig](/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig/) +[kubeconfig](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/) 中的 `user.exec` 设置 `provideClusterInfo`。 这一特定于集群的信息就会通过 `KUBERNETES_EXEC_INFO` 环境变量传递给插件。 此环境变量中的信息可以用来执行特定于集群的凭据获取逻辑。 @@ -2034,6 +2028,6 @@ The following `ExecCredential` manifest describes a cluster information sample. * Read the [client authentication reference (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) * Read the [client authentication reference (v1)](/docs/reference/config-api/client-authentication.v1/) --> -* 阅读[客户端认证参考文档 (v1beta1)](/zh/docs/reference/config-api/client-authentication.v1beta1/) -* 阅读[客户端认证参考文档 (v1)](/zh/docs/reference/config-api/client-authentication.v1/) +* 阅读[客户端认证参考文档 (v1beta1)](/zh-cn/docs/reference/config-api/client-authentication.v1beta1/) +* 阅读[客户端认证参考文档 (v1)](/zh-cn/docs/reference/config-api/client-authentication.v1/) diff --git a/content/zh/docs/reference/access-authn-authz/authorization.md b/content/zh-cn/docs/reference/access-authn-authz/authorization.md similarity index 100% rename from content/zh/docs/reference/access-authn-authz/authorization.md rename to content/zh-cn/docs/reference/access-authn-authz/authorization.md diff --git a/content/zh/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens.md similarity index 92% rename from content/zh/docs/reference/access-authn-authz/bootstrap-tokens.md rename to content/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens.md index 0960a967165ab..961517d5ad815 100644 --- a/content/zh/docs/reference/access-authn-authz/bootstrap-tokens.md +++ b/content/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -21,14 +21,14 @@ creating new clusters or joining new nodes to an existing cluster. It was built to support [kubeadm](/docs/reference/setup-tools/kubeadm/), but can be used in other contexts for users that wish to start clusters without `kubeadm`. It is also built to work, via RBAC policy, with the -[Kubelet TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) system. +[Kubelet TLS Bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) system. 
--> 启动引导令牌是一种简单的持有者令牌(Bearer Token),这种令牌是在新建集群 或者在现有集群中添加新节点时使用的。 -它被设计成能够支持 [`kubeadm`](/zh/docs/reference/setup-tools/kubeadm/), +它被设计成能够支持 [`kubeadm`](/zh-cn/docs/reference/setup-tools/kubeadm/), 但是也可以被用在其他的案例中以便用户在不使用 `kubeadm` 的情况下启动集群。 它也被设计成可以通过 RBAC 策略,结合 -[Kubelet TLS 启动引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) +[Kubelet TLS 启动引导](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) 系统进行工作。 @@ -108,12 +108,16 @@ controller on the controller manager. 过期的令牌可以通过启用控制器管理器中的 `tokencleaner` 控制器来删除。 +``` +--controllers=*,tokencleaner +``` + @@ -121,7 +125,7 @@ Here is what the secret looks like. 每个合法的令牌背后对应着 `kube-system` 名字空间中的某个 Secret 对象。 你可以从 -[这里](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md) +[这里](https://github.com/kubernetes/design-proposals-archive/blob/main/cluster-lifecycle/bootstrap-discovery.md) 找到完整设计文档。 这是 Secret 看起来的样子。 @@ -142,10 +146,11 @@ stringData: # 令牌 ID 和秘密信息,必需。 token-id: 07401b - token-secret: base64(f395accd246ae52d) + token-secret: f395accd246ae52d # 可选的过期时间字段 - expiration: "2017-03-10T03:22:11Z" + expiration: 2017-03-10T03:22:11Z + # 允许的用法 usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" @@ -197,7 +202,7 @@ You can use the `kubeadm` tool to manage tokens on a running cluster. See the ## 使用 `kubeadm` 管理令牌 {#token-management-with-kubeadm} 你可以使用 `kubeadm` 工具管理运行中集群上的令牌。 -参见 [kubeadm token 文档](/zh/docs/reference/setup-tools/kubeadm/kubeadm-token/) +参见 [kubeadm token 文档](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-token/) 以了解详细信息。 @@ -280,7 +285,7 @@ is used. `kubeconfig` 的载荷进行编码。完成编码的载荷会被插入到两个句点中间,形成完整的 JWS。你可以使用完整的令牌(比如 `07401b.f395accd246ae52d`)作为共享密钥, 通过 `HS256` 方式 (HMAC-SHA256) 对 JWS 进行校验。 -用户 _必须_ 确保使用了 HS256。 +用户**必须**确保使用了 HS256。 {{< warning >}} -参考 [kubeadm 实现细节](/zh/docs/reference/setup-tools/kubeadm/implementation-details/) +参考 [kubeadm 实现细节](/zh-cn/docs/reference/setup-tools/kubeadm/implementation-details/) 了解更多信息。 diff --git a/content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests.md similarity index 100% rename from content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md rename to content/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests.md diff --git a/content/zh/docs/reference/access-authn-authz/extensible-admission-controllers.md b/content/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers.md similarity index 100% rename from content/zh/docs/reference/access-authn-authz/extensible-admission-controllers.md rename to content/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers.md diff --git a/content/zh/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md b/content/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz.md similarity index 100% rename from content/zh/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md rename to content/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz.md diff --git a/content/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md similarity index 94% rename from content/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md rename to 
content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md index 62aff4d4c20db..05a3a8d81ba21 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md +++ b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md @@ -132,7 +132,7 @@ In the bootstrap initialization process, the following occurs: 9. Certificate is created for the kubelet --> 1. kubelet 启动 -2. kubelet 看到自己 *没有* 对应的 `kubeconfig` 文件 +2. kubelet 看到自己**没有**对应的 `kubeconfig` 文件 3. kubelet 搜索并发现 `bootstrap-kubeconfig` 文件 4. kubelet 读取该启动引导文件,从中获得 API 服务器的 URL 和用途有限的 一个“令牌(Token)” @@ -302,10 +302,10 @@ requests related to certificate provisioning. With RBAC in place, scoping the tokens to a group allows for great flexibility. For example, you could disable a particular bootstrap group's access when you are done provisioning the nodes. --> -随着这个功能特性的逐渐成熟,你需要确保令牌绑定到某基于角色的的访问控制(RBAC) +随着这个功能特性的逐渐成熟,你需要确保令牌绑定到某基于角色的访问控制(RBAC) 策略上,从而严格限制请求(使用[启动引导令牌](/zh/docs/reference/access-authn-authz/bootstrap-tokens/)) -仅限于客户端申请提供证书。当 RBAC 被配置启用时,可以将令牌限制到某个组,从而 -提高灵活性。例如,你可以在准备节点期间禁止某特定启动引导组的访问。 +仅限于客户端申请提供证书。当 RBAC 被配置启用时,可以将令牌限制到某个组, +从而提高灵活性。例如,你可以在准备节点期间禁止某特定启动引导组的访问。 从 kubelet 的角度,所有令牌看起来都很像,没有特别的含义。 从 kube-apiserver 服务器的角度,启动引导令牌是很特殊的。 -根据其 `type`、`namespace` 和 `name`,kube-apiserver 能够将认作特殊的令牌, +根据其 `type`、`namespace` 和 `name`,kube-apiserver 能够将其认作特殊的令牌, 并授予携带该令牌的任何人特殊的启动引导权限,换言之,将其视为 `system:bootstrappers` 组的成员。这就满足了 TLS 启动引导的基本需求。 @@ -366,8 +366,8 @@ systems). There are multiple ways you can generate a token. For example: #### 令牌认证文件 {#token-authentication-file} kube-apiserver 能够将令牌视作身份认证依据。 -这些令牌可以是任意数据,但必须表示为基于某安全的随机数生成器而得到的 -至少 128 位混沌数据。这里的随机数生成器可以是现代 Linux 系统上的 +这些令牌可以是任意数据,但必须表示为基于某安全的随机数生成器而得到的至少 +128 位混沌数据。这里的随机数生成器可以是现代 Linux 系统上的 `/dev/urandom`。生成令牌的方式有很多种。例如: ```shell @@ -380,10 +380,10 @@ will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`. 
The token file should look like the following example, where the first three values can be anything and the quoted group name should be as depicted: --> -上面的命令会生成类似于 `02b50b05283e98dd0fd71db496ef01e8` 这样的令牌。 +上面的命令会生成类似于 `02b50b05283e98dd0fd71db496ef01e8` 这样的令牌。 -令牌文件看起来是下面的例子这样,其中前面三个值可以是任何值,用引号括起来 -的组名称则只能用例子中给的值。 +令牌文件看起来是下面的例子这样,其中前面三个值可以是任何值, +用引号括起来的组名称则只能用例子中给的值。 ```console 02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers" @@ -413,7 +413,7 @@ To do this, you only need to create a `ClusterRoleBinding` that binds the `syste --> ### 授权 kubelet 创建 CSR {#authorize-kubelet-to-create-csr} -现在启动引导节点被身份认证为 `system:bootstrapping` 组的成员,它需要被 _授权_ +现在启动引导节点被身份认证为 `system:bootstrapping` 组的成员,它需要被**授权** 创建证书签名请求(CSR)并在证书被签名之后将其取回。 幸运的是,Kubernetes 提供了一个 `ClusterRole`,其中精确地封装了这些许可, `system:node-bootstrapper`。 @@ -491,7 +491,7 @@ To provide the Kubernetes CA key and certificate to kube-controller-manager, use 由于这些被签名的证书反过来会被 kubelet 用来在 kube-apiserver 执行普通的 kubelet 身份认证,很重要的一点是为控制器管理器所提供的 CA 也被 kube-apiserver 信任用来执行身份认证。CA 密钥和证书是通过 kube-apiserver 的标志 -`--client-ca-file=FILENAME`(例如,`--client-ca-file=/var/lib/kubernetes/ca.pem`), +`--client-ca-file=FILENAME`(例如,`--client-ca-file=/var/lib/kubernetes/ca.pem`), 来设定的,正如 kube-apiserver 配置节所述。 要将 Kubernetes CA 密钥和证书提供给 kube-controller-manager,可使用以下标志: @@ -593,18 +593,18 @@ roleRef: -作为 [kube-controller-manager](/zh/docs/reference/generated/kube-controller-manager/) +作为 [kube-controller-manager](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) 的一部分的 `csrapproving` 控制器是自动被启用的。 该控制器使用 [`SubjectAccessReview` API](/zh/docs/reference/access-authn-authz/authorization/#checking-api-access) -来确定是否某给定用户被授权请求 CSR,之后基于鉴权结果执行批复操作。 +来确定给定用户是否被授权请求 CSR,之后基于鉴权结果执行批复操作。 为了避免与其它批复组件发生冲突,内置的批复组件不会显式地拒绝任何 CSRs。 该组件仅是忽略未被授权的请求。 控制器也会作为垃圾收集的一部分清除已过期的证书。 @@ -612,7 +612,7 @@ collection. ## kubelet 配置 {#kubelet-configuration} @@ -678,7 +678,7 @@ The important elements to note are: * `certificate-authority`:指向 CA 文件的路径,用来对 kube-apiserver 所出示 的服务器证书进行验证 -* `server`: 用来访问 kube-apiserver 的 URL +* `server`:用来访问 kube-apiserver 的 URL * `token`:要使用的令牌 ### 客户和服务证书 {#client-and-serving-certificates} -前文所述的内容都与 kubelet _客户端_ 证书相关,尤其是 kubelet 用来向 +前文所述的内容都与 kubelet **客户端**证书相关,尤其是 kubelet 用来向 kube-apiserver 认证自身身份的证书。 -kubelet 也可以使用 _服务(Serving)_ 证书。kubelet 自身向外提供一个 +kubelet 也可以使用**服务(Serving)**证书。kubelet 自身向外提供一个 HTTPS 末端,包含若干功能特性。要保证这些末端的安全性,kubelet 可以执行以下操作 之一: @@ -814,9 +814,9 @@ controller, or manually approve the serving certificate requests. 
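下面是一个简化的 KubeletConfiguration 片段,示意如何让 kubelet 通过 API 申请并轮换其服务证书(这些 CSR 仍需由运维人员或某个自定义控制器来批复)。字段名称基于 `kubelet.config.k8s.io/v1beta1`,仅作草稿示意,具体以所用 kubelet 版本的配置 API 为准。

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 让 kubelet 以 CertificateSigningRequest 的方式申请服务证书
serverTLSBootstrap: true
# 证书临近过期时自动发起轮换
rotateCertificates: true
```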
--> Kubernetes 核心中所实现的 CSR 批复控制器出于 [安全原因](https://github.com/kubernetes/community/pull/1982) -并不会自动批复节点的 _服务_ 证书。 -要使用 `RotateKubeletServerCertificate` 功能特性,集群运维人员需要运行一个 -定制的控制器或者手动批复服务证书的请求。 +并不会自动批复节点的**服务**证书。 +要使用 `RotateKubeletServerCertificate` 功能特性, +集群运维人员需要运行一个定制的控制器或者手动批复服务证书的请求。 ## 其它身份认证组件 {#other-authenticating-components} -本文所描述的所有 TLS 启动引导内容都与 kubelet 相关。不过,其它组件也可能需要 -直接与 kube-apiserver 直接通信。容易想到的是 kube-proxy,同样隶属于 -Kubernetes 的控制面并且运行在所有节点之上,不过也可能包含一些其它负责 -监控或者联网的组件。 +本文所描述的所有 TLS 启动引导内容都与 kubelet 相关。不过,其它组件也可能需要直接与 +kube-apiserver 直接通信。容易想到的是 kube-proxy,同样隶属于 +Kubernetes 的控制面并且运行在所有节点之上,不过也可能包含一些其它负责监控或者联网的组件。 -* 较老的方式:和 kubelet 在 TLS 启动引导之前所做的一样,用类似的方式 - 创建和分发证书 +* 较老的方式:和 kubelet 在 TLS 启动引导之前所做的一样,用类似的方式创建和分发证书。 * DaemonSet:由于 kubelet 自身被加载到所有节点之上,并且有足够能力来启动基本服务, 你可以运行将 kube-proxy 和其它特定节点的服务作为 `kube-system` 名字空间中的 - DaemonSet 来执行,而不是独立的进程。由于 DaemonSet 位于集群内部,你可以为其 - 指派一个合适的服务账户,使之具有适当的访问权限来完成其使命。这也许是配置此类 - 服务的最简单的方法。 + DaemonSet 来执行,而不是独立的进程。由于 DaemonSet 位于集群内部, + 你可以为其指派一个合适的服务账户,使之具有适当的访问权限来完成其使命。 + 这也许是配置此类服务的最简单的方法。 -签名控制器并不会立即对所有证书请求执行签名操作。相反,它会等待这些请求被某 -具有适当特权的用户标记为 “Approved(已批准)”状态。 -这一流程有意允许由外部批复控制器来自动执行的批复,或者由控制器管理器内置的 -批复控制器来自动批复。 +签名控制器并不会立即对所有证书请求执行签名操作。相反, +它会等待这些请求被某具有适当特权的用户标记为 “Approved(已批准)”状态。 +这一流程有意允许由外部批复控制器来自动执行的批复, +或者由控制器管理器内置的批复控制器来自动批复。 不过,集群管理员也可以使用 `kubectl` 来手动批准证书请求。 管理员可以通过 `kubectl get csr` 来列举所有的 CSR,使用 `kubectl descsribe csr ` 来描述某个 CSR 的细节。 管理员可以使用 `kubectl certificate approve ` 来拒绝某 CSR。 - diff --git a/content/zh/docs/reference/access-authn-authz/node.md b/content/zh-cn/docs/reference/access-authn-authz/node.md similarity index 97% rename from content/zh/docs/reference/access-authn-authz/node.md rename to content/zh-cn/docs/reference/access-authn-authz/node.md index 72f8545fe62c4..2a9753860dc46 100644 --- a/content/zh/docs/reference/access-authn-authz/node.md +++ b/content/zh-cn/docs/reference/access-authn-authz/node.md @@ -90,12 +90,12 @@ have the minimal set of permissions required to operate correctly. --> 为了获得节点鉴权器的授权,kubelet 必须使用一个凭证以表示它在 `system:nodes` 组中,用户名为 `system:node:`。 -上述的组名和用户名格式要与 [kubelet TLS 启动引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)过程中为每个 kubelet 创建的标识相匹配。 +上述的组名和用户名格式要与 [kubelet TLS 启动引导](/zh/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)过程中为每个 kubelet 创建的标识相匹配。 要启用节点授权器,请使用 `--authorization-mode = Node` 启动 apiserver。 diff --git a/content/zh/docs/reference/access-authn-authz/psp-to-pod-security-standards.md b/content/zh-cn/docs/reference/access-authn-authz/psp-to-pod-security-standards.md similarity index 99% rename from content/zh/docs/reference/access-authn-authz/psp-to-pod-security-standards.md rename to content/zh-cn/docs/reference/access-authn-authz/psp-to-pod-security-standards.md index 97f66636298b3..67c7ccd03e883 100644 --- a/content/zh/docs/reference/access-authn-authz/psp-to-pod-security-standards.md +++ b/content/zh-cn/docs/reference/access-authn-authz/psp-to-pod-security-standards.md @@ -20,7 +20,7 @@ The tables below enumerate the configuration parameters on and/or validates pods, and how the configuration values map to the [Pod Security Standards](/docs/concepts/security/pod-security-standards/). 
--> -下面的表格列举了[PodSecurityPolicy](/zh/docs/concepts/policy/pod-security-policy/) +下面的表格列举了 [PodSecurityPolicy](/zh/docs/concepts/security/pod-security-policy/) 对象上的配置参数,这些字段是否会变更或检查 Pod 配置,以及这些配置值如何映射到 [Pod 安全性标准(Pod Security Standards)](/zh/docs/concepts/security/pod-security-standards/) 之上。 @@ -30,7 +30,7 @@ For each applicable parameter, the allowed values for the [Baseline](/docs/concepts/security/pod-security-standards/#baseline) and [Restricted](/docs/concepts/security/pod-security-standards/#restricted) profiles are listed. Anything outside the allowed values for those profiles would fall under the -[Privileged](/docs/concepts/security/pod-security-standards/#priveleged) profile. "No opinion" +[Privileged](/docs/concepts/security/pod-security-standards/#privileged) profile. "No opinion" means all values are allowed under all Pod Security Standards. --> 对于每个可应用的参数,表格中给出了 @@ -38,7 +38,7 @@ means all values are allowed under all Pod Security Standards. [Restricted](/zh/docs/concepts/security/pod-security-standards/#restricted) 配置下可接受的取值。 对这两种配置而言不可接受的取值均归入 -[Privileged](/zh/docs/concepts/security/pod-security-standards/#priveleged) +[Privileged](/zh/docs/concepts/security/pod-security-standards/#privileged) 配置下。“无意见”意味着对所有 Pod 安全性标准而言所有取值都可接受。 @@ -25,7 +27,7 @@ network resources based on the roles of individual users within your organizatio RBAC 鉴权机制使用 `rbac.authorization.k8s.io` @@ -34,12 +36,17 @@ RBAC 鉴权机制使用 `rbac.authorization.k8s.io` 要启用 RBAC,在启动 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}} 时将 `--authorization-mode` 参数设置为一个逗号分隔的列表并确保其中包含 `RBAC`。 + ```shell kube-apiserver --authorization-mode=Example,RBAC --<其他选项> --<其他选项> ``` @@ -123,11 +130,24 @@ ClusterRole 有若干用法。你可以用它来: Here's an example Role in the "default" namespace that can be used to grant read access to {{< glossary_tooltip text="pods" term_id="pod" >}}: --> -#### Role 示例 +#### Role 示例 {#role-example} 下面是一个位于 "default" 名字空间的 Role 的示例,可用来授予对 {{< glossary_tooltip text="pods" term_id="pod" >}} 的读访问权限: + ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role @@ -144,16 +164,16 @@ rules: #### ClusterRole example A ClusterRole can be used to grant the same permissions as a Role. -Because they are cluster-scoped, you can also use them to grant access to: +Because ClusterRoles are cluster-scoped, you can also use them to grant access to: * cluster-scoped resources (like {{< glossary_tooltip text="nodes" term_id="node" >}}) * non-resource endpoints (like `/healthz`) * namespaced resources (like Pods), across all namespaces For example: you can use a ClusterRole to allow a particular user to run - `kubectl get pods -all-namespaces` + `kubectl get pods --all-namespaces` --> -### ClusterRole 示例 +### ClusterRole 示例 {#clusterrole-example} ClusterRole 可以和 Role 相同完成授权。 因为 ClusterRole 属于集群范围,所以它也可以为以下资源授予访问权限: @@ -173,6 +193,22 @@ or across all namespaces (depending on how it is [bound](#rolebinding-and-cluste {{< glossary_tooltip text="Secret" term_id="secret" >}} 授予读访问权限, 或者跨名字空间的访问权限(取决于该角色是如何[绑定](#rolebinding-and-clusterrolebinding)的): + ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole @@ -181,7 +217,7 @@ metadata: name: secret-reader rules: - apiGroups: [""] - # 在 HTTP 层面,用来访问 Secret 对象的资源的名称为 "secrets" + # 在 HTTP 层面,用来访问 Secret 资源的名称为 "secrets" resources: ["secrets"] verbs: ["get", "watch", "list"] ``` @@ -236,9 +272,31 @@ This allows "jane" to read pods in the "default" namespace. 
下面的例子中的 RoleBinding 将 "pod-reader" Role 授予在 "default" 名字空间中的用户 "jane"。 这样,用户 "jane" 就具有了读取 "default" 名字空间中 pods 的权限。 + ```yaml apiVersion: rbac.authorization.k8s.io/v1 # 此角色绑定允许 "jane" 读取 "default" 名字空间中的 Pods +# 你需要在该命名空间中有一个名为 “pod-reader” 的 Role kind: RoleBinding metadata: name: read-pods @@ -251,7 +309,7 @@ subjects: roleRef: # "roleRef" 指定与某 Role 或 ClusterRole 的绑定关系 kind: Role # 此字段必须是 Role 或 ClusterRole - name: pod-reader # 此字段必须与你要绑定的 Role 或 ClusterRole 的名称匹配 + name: pod-reader # 此字段必须与你要绑定的 Role 或 ClusterRole 的名称匹配 apiGroup: rbac.authorization.k8s.io ``` @@ -273,6 +331,28 @@ RoleBinding 所在名字空间的资源。这种引用使得你可以跨整个 区分大小写)只能访问 "development" 名字空间中的 Secrets 对象,因为 RoleBinding 所在的名字空间(由其 metadata 决定)是 "development"。 + ```yaml apiVersion: rbac.authorization.k8s.io/v1 # 此角色绑定使得用户 "dave" 能够读取 "development" 名字空间中的 Secrets @@ -306,6 +386,23 @@ secrets in any namespace. 下面的 ClusterRoleBinding 允许 "manager" 组内的所有用户访问任何名字空间中的 Secrets。 + ```yaml apiVersion: rbac.authorization.k8s.io/v1 # 此集群角色绑定允许 “manager” 组中的任何人访问任何名字空间中的 secrets @@ -337,20 +434,24 @@ There are two reasons for this restriction: 这种限制有两个主要原因: +1. 将 `roleRef` 设置为不可以改变,这使得可以为用户授予对现有绑定对象的 `update` 权限, + 这样可以让他们管理主体列表,同时不能更改被授予这些主体的角色。 + 1. 针对不同角色的绑定是完全不一样的绑定。要求通过删除/重建绑定来更改 `roleRef`, 这样可以确保要赋予绑定的所有主体会被授予新的角色(而不是在允许或者不小心修改 了 `roleRef` 的情况下导致所有现有主体未经验证即被授予新角色对应的权限)。 -1. 将 `roleRef` 设置为不可以改变,这使得可以为用户授予对现有绑定对象的 `update` 权限, - 这样可以让他们管理主体列表,同时不能更改被授予这些主体的角色。 +### 对资源的引用 {#referring-to-resources} + -### 对资源的引用 {#referring-to-resources} - 在 Kubernetes API 中,大多数资源都是使用对象名称的字符串表示来呈现与访问的。 例如,对于 Pod 应使用 "pods"。 RBAC 使用对应 API 端点的 URL 中呈现的名字来引用资源。 @@ -415,6 +517,7 @@ Here is an example that restricts its subject to only `get` or `update` a 下面的例子中限制可以 "get" 和 "update" 一个名为 `my-configmap` 的 {{< glossary_tooltip term_id="ConfigMap" >}}: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + namespace: default + name: configmap-updater +rules: +- apiGroups: [""] + # 在 HTTP 层面,用来访问 ConfigMap 资源的名称为 "configmaps" resources: ["configmaps"] resourceNames: ["my-configmap"] verbs: ["update", "get"] @@ -465,6 +584,19 @@ Here is an example aggregated ClusterRole: 下面是一个聚合 ClusterRole 的示例: + ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole @@ -488,6 +620,22 @@ ClusterRole labeled `rbac.example.com/aggregate-to-monitoring: true`. 下面的例子中,通过创建一个标签同样为 `rbac.example.com/aggregate-to-monitoring: true` 的 ClusterRole,新的规则可被添加到 "monitoring" ClusterRole 中。 + ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole @@ -513,7 +661,7 @@ For example: the following ClusterRoles let the "admin" and "edit" default roles named CronTab, whereas the "view" role can perform only read actions on CronTab resources. You can assume that CronTab objects are named `"crontabs"` in URLs as seen by the API server. 
--> -默认的[面向用户的角色](#default-roles-and-role-bindings) 使用 ClusterRole 聚合。 +默认的[面向用户的角色](#default-roles-and-role-bindings)使用 ClusterRole 聚合。 这使得作为集群管理员的你可以为扩展默认规则,包括为定制资源设置规则, 比如通过 CustomResourceDefinitions 或聚合 API 服务器提供的定制资源。 @@ -521,6 +669,34 @@ You can assume that CronTab objects are named `"crontabs"` in URLs as seen by th "view" 角色对 CronTab 资源拥有读操作权限。 你可以假定 CronTab 对象在 API 服务器所看到的 URL 中被命名为 `"crontabs"`。 + ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole @@ -549,7 +725,7 @@ rules: ``` +```yaml +rules: +- apiGroups: [""] + # 在 HTTP 层面,用来访问 Pod 资源的名称为 "pods" resources: ["pods"] verbs: ["get", "list", "watch"] ``` @@ -576,12 +763,25 @@ rules: Allow reading/writing Deployments (at the HTTP level: objects with `"deployments"` in the resource part of their URL) in the `"apps"` API groups: --> -允许读/写在 `"apps"` API 组中的 Deployment(在 HTTP 层面,对应 -URL 中资源部分为 "deployments"): +允许在 `"apps"` API 组中读/写 Deployment(在 HTTP 层面,对应 URL +中资源部分为 `"deployments"`): + ```yaml rules: - apiGroups: ["apps"] + # + # 在 HTTP 层面,用来访问 Deployment 资源的名称为 "deployments" resources: ["deployments"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] ``` @@ -590,15 +790,33 @@ rules: Allow reading Pods in the core API group, as well as reading or writing Job resources in the `"batch"` API group: --> -允许读取核心 API 组中的 "pods" 和读/写 `"batch"` API 组中的 -"jobs": +允许读取核心 API 组中的 Pod 和读/写 `"batch"` API 组中的 Job 资源: + ```yaml rules: - apiGroups: [""] + # 在 HTTP 层面,用来访问 Pod 资源的名称为 "pods" resources: ["pods"] verbs: ["get", "list", "watch"] - apiGroups: ["batch"] + # 在 HTTP 层面,用来访问 Job 资源的名称为 "jobs" resources: ["jobs"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] ``` @@ -610,9 +828,22 @@ RoleBinding to limit to a single ConfigMap in a single namespace): 允许读取名称为 "my-config" 的 ConfigMap(需要通过 RoleBinding 绑定以 限制为某名字空间中特定的 ConfigMap): + ```yaml rules: - apiGroups: [""] + # 在 HTTP 层面,用来访问 ConfigMap 资源的名称为 "configmaps" resources: ["configmaps"] resourceNames: ["my-config"] verbs: ["get"] @@ -623,12 +854,24 @@ Allow reading the resource `"nodes"` in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective): --> -允许读取在核心组中的 "nodes" 资源(因为 `Node` 是集群作用域的,所以需要 +允许读取在核心组中的 `"nodes"` 资源(因为 `Node` 是集群作用域的,所以需要 ClusterRole 绑定到 ClusterRoleBinding 才生效): + ```yaml rules: - apiGroups: [""] + # 在 HTTP 层面,用来访问 Node 资源的名称为 "nodes" resources: ["nodes"] verbs: ["get", "list", "watch"] ``` @@ -641,14 +884,21 @@ to be effective): 允许针对非资源端点 `/healthz` 和其子路径上发起 GET 和 POST 请求 (必须在 ClusterRole 绑定 ClusterRoleBinding 才生效): + ```yaml rules: - - nonResourceURLs: ["/healthz", "/healthz/*"] # nonResourceURL 中的 '*' 是一个全局通配符 - verbs: ["get", "post"] +- nonResourceURLs: ["/healthz", "/healthz/*"] # nonResourceURL 中的 '*' 是一个全局通配符 + verbs: ["get", "post"] ``` ### 对主体的引用 {#referring-to-subjects} -RoleBinding 或者 ClusterRoleBinding 可绑定角色到某 *主体(Subject)*上。 +RoleBinding 或者 ClusterRoleBinding 可绑定角色到某 **主体(Subject)** 上。 主体可以是组,用户或者 {{< glossary_tooltip text="服务账户" term_id="service-account" >}}。 @@ -692,8 +942,8 @@ In Kubernetes, Authenticator modules provide group information. Groups, like users, are represented as strings, and that string has no format requirements, other than that the prefix `system:` is reserved. -[Service Accounts](/docs/tasks/configure-pod-container/configure-service-account/) have usernames with the `system:serviceaccount:` prefix and belong -to groups with the `system:serviceaccounts:` prefix. 
+[ServiceAccounts](/docs/tasks/configure-pod-container/configure-service-account/) have names prefixed +with `system:serviceaccount:`, and belong to groups that have names prefixed with `system:serviceaccounts:`. --> 在 Kubernetes 中,鉴权模块提供用户组信息。 与用户名一样,用户组名也用字符串来表示,而且对该字符串没有格式要求, @@ -713,7 +963,7 @@ to groups with the `system:serviceaccounts:` prefix. {{< /note >}} -## 默认 Roles 和 Role Bindings +## 默认 Roles 和 Role Bindings {#default-roles-and-role-bindings} API 服务器创建一组默认的 ClusterRole 和 ClusterRoleBinding 对象。 这其中许多是以 `system:` 为前缀的,用以标识对应资源是直接由集群控制面管理的。 @@ -844,7 +1094,7 @@ Modifications to these resources can result in non-functional clusters. --> 在修改名称包含 `system:` 前缀的 ClusterRole 和 ClusterRoleBinding 时要格外小心。 -对这些资源的更改可能导致集群无法继续工作。 +对这些资源的更改可能导致集群无法正常运作。 {{< /caution >}} ### API 发现角色 {#discovery-roles} 无论是经过身份验证的还是未经过身份验证的用户,默认的角色绑定都授权他们读取被认为 -是可安全地公开访问的 API( 包括 CustomResourceDefinitions)。 +是可安全地公开访问的 API(包括 CustomResourceDefinitions)。 如果要禁用匿名的未经过身份验证的用户访问,请在 API 服务器配置中中添加 `--anonymous-auth=false` 的配置选项。 @@ -900,19 +1150,17 @@ If you edit that ClusterRole, your changes will be overwritten on API server res via [auto-reconciliation](#auto-reconciliation). To avoid that overwriting, either do not manually edit the role, or disable auto-reconciliation. --> -如果你编辑该 ClusterRole,你所作的变更会被 API 服务器在重启时自动覆盖,这是通过 -[自动协商](#auto-reconciliation)机制完成的。要避免这类覆盖操作, +如果你编辑该 ClusterRole,你所作的变更会被 API 服务器在重启时自动覆盖, +这是通过[自动协商](#auto-reconciliation)机制完成的。要避免这类覆盖操作, 要么不要手动编辑这些角色,要么禁止自动协商机制。 {{< /note >}} - - + + - - @@ -1003,7 +1251,7 @@ metadata: ```
        -Kubernetes RBAC API 发现角色 -Kubernetes RBAC API 发现角色
        system:authenticated - -允许用户以只读的方式去访问他们自己的基本信息。在 1.14 版本之前,这个角色在默认情况下也绑定在 system:unauthenticated 上。 +允许用户以只读的方式去访问他们自己的基本信息。在 v1.14 版本之前,这个角色在默认情况下也绑定在 system:unauthenticated 上。
        system:discovery system:authenticated - 允许以只读方式访问 API 发现端点,这些端点用来发现和协商 API 级别。 -在 1.14 版本之前,这个角色在默认情况下绑定在 system:unauthenticated 上。 +在 v1.14 版本之前,这个角色在默认情况下绑定在 system:unauthenticated 上。
        system:public-info-viewer system:authenticatedsystem:unauthenticated - -允许对集群的非敏感信息进行只读访问,它是在 1.14 版本中引入的。 +允许对集群的非敏感信息进行只读访问,它是在 v1.14 版本中引入的。
        - + - - - - - - - - @@ -73,7 +61,7 @@ Should CIDRs for Pods be allocated and set on the cloud provider. - + - - - - - - - @@ -232,7 +208,7 @@ Path to the file containing Azure container registry configuration information. - + + + + + + + + @@ -676,7 +665,7 @@ Interval between starting controller managers. - + - - - - - - - @@ -822,19 +799,6 @@ The length of endpoint slice updates batching period. Processing of pod changes - - - - - - - @@ -864,96 +828,99 @@ APIServerIdentity=true|false (ALPHA - default=false)
        APIServerTracing=true|false (ALPHA - default=false)
        AllAlpha=true|false (ALPHA - default=false)
        AllBeta=true|false (BETA - default=false)
        -AnyVolumeDataSource=true|false (ALPHA - default=false)
        +AnyVolumeDataSource=true|false (BETA - default=true)
        AppArmor=true|false (BETA - default=true)
        CPUManager=true|false (BETA - default=true)
        -CPUManagerPolicyOptions=true|false (ALPHA - default=false)
        +CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
        +CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
        +CPUManagerPolicyOptions=true|false (BETA - default=true)
        CSIInlineVolume=true|false (BETA - default=true)
        CSIMigration=true|false (BETA - default=true)
        -CSIMigrationAWS=true|false (BETA - default=false)
        -CSIMigrationAzureDisk=true|false (BETA - default=false)
        -CSIMigrationAzureFile=true|false (BETA - default=false)
        -CSIMigrationGCE=true|false (BETA - default=false)
        -CSIMigrationOpenStack=true|false (BETA - default=true)
        +CSIMigrationAWS=true|false (BETA - default=true)
        +CSIMigrationAzureFile=true|false (BETA - default=true)
        +CSIMigrationGCE=true|false (BETA - default=true)
        +CSIMigrationPortworx=true|false (ALPHA - default=false)
        +CSIMigrationRBD=true|false (ALPHA - default=false)
        CSIMigrationvSphere=true|false (BETA - default=false)
        -CSIStorageCapacity=true|false (BETA - default=true)
        -CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
        CSIVolumeHealth=true|false (ALPHA - default=false)
        -CSRDuration=true|false (BETA - default=true)
        -ConfigurableFSGroupPolicy=true|false (BETA - default=true)
        -ControllerManagerLeaderMigration=true|false (BETA - default=true)
        +ContextualLogging=true|false (ALPHA - default=false)
        +CronJobTimeZone=true|false (ALPHA - default=false)
        CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
        +CustomResourceValidationExpressions=true|false (ALPHA - default=false)
        DaemonSetUpdateSurge=true|false (BETA - default=true)
        -DefaultPodTopologySpread=true|false (BETA - default=true)
        -DelegateFSGroupToCSIDriver=true|false (ALPHA - default=false)
        +DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
        DevicePlugins=true|false (BETA - default=true)
        DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
        DisableCloudProviders=true|false (ALPHA - default=false)
        -DownwardAPIHugePages=true|false (BETA - default=false)
        -EfficientWatchResumption=true|false (BETA - default=true)
        +DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
        +DownwardAPIHugePages=true|false (BETA - default=true)
        EndpointSliceTerminatingCondition=true|false (BETA - default=true)
        -EphemeralContainers=true|false (ALPHA - default=false)
        -ExpandCSIVolumes=true|false (BETA - default=true)
        -ExpandInUsePersistentVolumes=true|false (BETA - default=true)
        -ExpandPersistentVolumes=true|false (BETA - default=true)
        +EphemeralContainers=true|false (BETA - default=true)
        ExpandedDNSConfig=true|false (ALPHA - default=false)
        ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
        -GenericEphemeralVolume=true|false (BETA - default=true)
        +GRPCContainerProbe=true|false (BETA - default=true)
        GracefulNodeShutdown=true|false (BETA - default=true)
        +GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
        HPAContainerMetrics=true|false (ALPHA - default=false)
        HPAScaleToZero=true|false (ALPHA - default=false)
        -IPv6DualStack=true|false (BETA - default=true)
        +HonorPVReclaimPolicy=true|false (ALPHA - default=false)
        +IdentifyPodOS=true|false (BETA - default=true)
        InTreePluginAWSUnregister=true|false (ALPHA - default=false)
        InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
        InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
        InTreePluginGCEUnregister=true|false (ALPHA - default=false)
        InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
        +InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
        +InTreePluginRBDUnregister=true|false (ALPHA - default=false)
        InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
        -IndexedJob=true|false (BETA - default=true)
        -IngressClassNamespacedParams=true|false (BETA - default=true)
        -JobTrackingWithFinalizers=true|false (ALPHA - default=false)
        -KubeletCredentialProviders=true|false (ALPHA - default=false)
        +JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
        +JobReadyPods=true|false (BETA - default=true)
        +JobTrackingWithFinalizers=true|false (BETA - default=false)
        +KubeletCredentialProviders=true|false (BETA - default=true)
        KubeletInUserNamespace=true|false (ALPHA - default=false)
        KubeletPodResources=true|false (BETA - default=true)
        -KubeletPodResourcesGetAllocatable=true|false (ALPHA - default=false)
        +KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
        +LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
        LocalStorageCapacityIsolation=true|false (BETA - default=true)
        LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
        LogarithmicScaleDown=true|false (BETA - default=true)
        +MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
        MemoryManager=true|false (BETA - default=true)
        MemoryQoS=true|false (ALPHA - default=false)
        -MixedProtocolLBService=true|false (ALPHA - default=false)
        +MinDomainsInPodTopologySpread=true|false (ALPHA - default=false)
        +MixedProtocolLBService=true|false (BETA - default=true)
        NetworkPolicyEndPort=true|false (BETA - default=true)
        +NetworkPolicyStatus=true|false (ALPHA - default=false)
        +NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
        NodeSwap=true|false (ALPHA - default=false)
        -NonPreemptingPriority=true|false (BETA - default=true)
        -PodAffinityNamespaceSelector=true|false (BETA - default=true)
        +OpenAPIEnums=true|false (BETA - default=true)
        +OpenAPIV3=true|false (BETA - default=true)
        +PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
        PodDeletionCost=true|false (BETA - default=true)
        -PodOverhead=true|false (BETA - default=true)
        -PodSecurity=true|false (ALPHA - default=false)
        -PreferNominatedNode=true|false (BETA - default=true)
        +PodSecurity=true|false (BETA - default=true)
        ProbeTerminationGracePeriod=true|false (BETA - default=false)
        ProcMountType=true|false (ALPHA - default=false)
        ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
        QOSReserved=true|false (ALPHA - default=false)
        ReadWriteOncePod=true|false (ALPHA - default=false)
        +RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
        RemainingItemCount=true|false (BETA - default=true)
        -RemoveSelfLink=true|false (BETA - default=true)
        RotateKubeletServerCertificate=true|false (BETA - default=true)
        SeccompDefault=true|false (ALPHA - default=false)
        +ServerSideFieldValidation=true|false (ALPHA - default=false)
        +ServiceIPStaticSubrange=true|false (ALPHA - default=false)
        ServiceInternalTrafficPolicy=true|false (BETA - default=true)
        -ServiceLBNodePortControl=true|false (BETA - default=true)
        -ServiceLoadBalancerClass=true|false (BETA - default=true)
        SizeMemoryBackedVolumes=true|false (BETA - default=true)
        -StatefulSetMinReadySeconds=true|false (ALPHA - default=false)
        +StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
        +StatefulSetMinReadySeconds=true|false (BETA - default=true)
        StorageVersionAPI=true|false (ALPHA - default=false)
        StorageVersionHash=true|false (BETA - default=true)
        -SuspendJob=true|false (BETA - default=true)
        -TTLAfterFinished=true|false (BETA - default=true)
        -TopologyAwareHints=true|false (ALPHA - default=false)
        +TopologyAwareHints=true|false (BETA - default=true)
        TopologyManager=true|false (BETA - default=true)
        VolumeCapacityPriority=true|false (ALPHA - default=false)
        WinDSR=true|false (ALPHA - default=false)
        WinOverlay=true|false (BETA - default=true)
        -WindowsHostProcessContainers=true|false (ALPHA - default=false) +WindowsHostProcessContainers=true|false (BETA - default=true) --> 一组 key=value 对,用来描述测试性/试验性功能的特性门控(Feature Gate)。可选项有: APIListChunking=true|false (BETA - 默认值=true)
        @@ -963,96 +930,99 @@ APIServerIdentity=true|false (ALPHA - 默认值=false)
        APIServerTracing=true|false (ALPHA - 默认值=false)
        AllAlpha=true|false (ALPHA - 默认值=false)
        AllBeta=true|false (BETA - 默认值=false)
        -AnyVolumeDataSource=true|false (ALPHA - 默认值=false)
        +AnyVolumeDataSource=true|false (BETA - 默认值=true)
        AppArmor=true|false (BETA - 默认值=true)
        CPUManager=true|false (BETA - 默认值=true)
        -CPUManagerPolicyOptions=true|false (ALPHA - 默认值=false)
        +CPUManagerPolicyAlphaOptions=true|false (ALPHA - 默认值=false)
        +CPUManagerPolicyBetaOptions=true|false (BETA - 默认值=true)
        +CPUManagerPolicyOptions=true|false (BETA - 默认值=true)
        CSIInlineVolume=true|false (BETA - 默认值=true)
        CSIMigration=true|false (BETA - 默认值=true)
        -CSIMigrationAWS=true|false (BETA - 默认值=false)
        -CSIMigrationAzureDisk=true|false (BETA - 默认值=false)
        -CSIMigrationAzureFile=true|false (BETA - 默认值=false)
        -CSIMigrationGCE=true|false (BETA - 默认值=false)
        -CSIMigrationOpenStack=true|false (BETA - 默认值=true)
        +CSIMigrationAWS=true|false (BETA - 默认值=true)
        +CSIMigrationAzureFile=true|false (BETA - 默认值=true)
        +CSIMigrationGCE=true|false (BETA - 默认值=true)
        +CSIMigrationPortworx=true|false (ALPHA - 默认值=false)
        +CSIMigrationRBD=true|false (ALPHA - 默认值=false)
        CSIMigrationvSphere=true|false (BETA - 默认值=false)
        -CSIStorageCapacity=true|false (BETA - 默认值=true)
        -CSIVolumeFSGroupPolicy=true|false (BETA - 默认值=true)
        CSIVolumeHealth=true|false (ALPHA - 默认值=false)
        -CSRDuration=true|false (BETA - 默认值=true)
        -ConfigurableFSGroupPolicy=true|false (BETA - 默认值=true)
        -ControllerManagerLeaderMigration=true|false (BETA - 默认值=true)
        +ContextualLogging=true|false (ALPHA - 默认值=false)
        +CronJobTimeZone=true|false (ALPHA - 默认值=false)
        CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值=false)
        +CustomResourceValidationExpressions=true|false (ALPHA - 默认值=false)
        DaemonSetUpdateSurge=true|false (BETA - 默认值=true)
        -默认值PodTopologySpread=true|false (BETA - 默认值=true)
        -DelegateFSGroupToCSIDriver=true|false (ALPHA - 默认值=false)
        +DelegateFSGroupToCSIDriver=true|false (BETA - 默认值=true)
        DevicePlugins=true|false (BETA - 默认值=true)
        DisableAcceleratorUsageMetrics=true|false (BETA - 默认值=true)
        DisableCloudProviders=true|false (ALPHA - 默认值=false)
        -DownwardAPIHugePages=true|false (BETA - 默认值=false)
        -EfficientWatchResumption=true|false (BETA - 默认值=true)
        +DisableKubeletCloudCredentialProviders=true|false (ALPHA - 默认值=false)
        +DownwardAPIHugePages=true|false (BETA - 默认值=true)
        EndpointSliceTerminatingCondition=true|false (BETA - 默认值=true)
        -EphemeralContainers=true|false (ALPHA - 默认值=false)
        -ExpandCSIVolumes=true|false (BETA - 默认值=true)
        -ExpandInUsePersistentVolumes=true|false (BETA - 默认值=true)
        -ExpandPersistentVolumes=true|false (BETA - 默认值=true)
        +EphemeralContainers=true|false (BETA - 默认值=true)
        ExpandedDNSConfig=true|false (ALPHA - 默认值=false)
        -ExperimentalHostUserNamespace默认值ing=true|false (BETA - 默认值=false)
        -GenericEphemeralVolume=true|false (BETA - 默认值=true)
        +ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值=false)
        +GRPCContainerProbe=true|false (BETA - 默认值=true)
        GracefulNodeShutdown=true|false (BETA - 默认值=true)
        +GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - 默认值=true)
        HPAContainerMetrics=true|false (ALPHA - 默认值=false)
        HPAScaleToZero=true|false (ALPHA - 默认值=false)
        -IPv6DualStack=true|false (BETA - 默认值=true)
        +HonorPVReclaimPolicy=true|false (ALPHA - 默认值=false)
        +IdentifyPodOS=true|false (BETA - 默认值=true)
        InTreePluginAWSUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginGCEUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值=false)
        +InTreePluginPortworxUnregister=true|false (ALPHA - 默认值=false)
        +InTreePluginRBDUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginvSphereUnregister=true|false (ALPHA - 默认值=false)
        -IndexedJob=true|false (BETA - 默认值=true)
        -IngressClassNamespacedParams=true|false (BETA - 默认值=true)
        -JobTrackingWithFinalizers=true|false (ALPHA - 默认值=false)
        -KubeletCredentialProviders=true|false (ALPHA - 默认值=false)
        +JobMutableNodeSchedulingDirectives=true|false (BETA - 默认值=true)
        +JobReadyPods=true|false (BETA - 默认值=true)
        +JobTrackingWithFinalizers=true|false (BETA - 默认值=false)
        +KubeletCredentialProviders=true|false (BETA - 默认值=true)
        KubeletInUserNamespace=true|false (ALPHA - 默认值=false)
        KubeletPodResources=true|false (BETA - 默认值=true)
        -KubeletPodResourcesGetAllocatable=true|false (ALPHA - 默认值=false)
        +KubeletPodResourcesGetAllocatable=true|false (BETA - 默认值=true)
        +LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - 默认值=true)
        LocalStorageCapacityIsolation=true|false (BETA - 默认值=true)
        LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值=false)
        LogarithmicScaleDown=true|false (BETA - 默认值=true)
        +MaxUnavailableStatefulSet=true|false (ALPHA - 默认值=false)
        MemoryManager=true|false (BETA - 默认值=true)
        MemoryQoS=true|false (ALPHA - 默认值=false)
        -MixedProtocolLBService=true|false (ALPHA - 默认值=false)
        +MinDomainsInPodTopologySpread=true|false (ALPHA - 默认值=false)
        +MixedProtocolLBService=true|false (BETA - 默认值=true)
        NetworkPolicyEndPort=true|false (BETA - 默认值=true)
        +NetworkPolicyStatus=true|false (ALPHA - 默认值=false)
        +NodeOutOfServiceVolumeDetach=true|false (ALPHA - 默认值=false)
        NodeSwap=true|false (ALPHA - 默认值=false)
        -NonPreemptingPriority=true|false (BETA - 默认值=true)
        -PodAffinityNamespaceSelector=true|false (BETA - 默认值=true)
        +OpenAPIEnums=true|false (BETA - 默认值=true)
        +OpenAPIV3=true|false (BETA - 默认值=true)
        +PodAndContainerStatsFromCRI=true|false (ALPHA - 默认值=false)
        PodDeletionCost=true|false (BETA - 默认值=true)
        -PodOverhead=true|false (BETA - 默认值=true)
        -PodSecurity=true|false (ALPHA - 默认值=false)
        -PreferNominatedNode=true|false (BETA - 默认值=true)
        +PodSecurity=true|false (BETA - 默认值=true)
        ProbeTerminationGracePeriod=true|false (BETA - 默认值=false)
        ProcMountType=true|false (ALPHA - 默认值=false)
        ProxyTerminatingEndpoints=true|false (ALPHA - 默认值=false)
        QOSReserved=true|false (ALPHA - 默认值=false)
        ReadWriteOncePod=true|false (ALPHA - 默认值=false)
        +RecoverVolumeExpansionFailure=true|false (ALPHA - 默认值=false)
        RemainingItemCount=true|false (BETA - 默认值=true)
        -RemoveSelfLink=true|false (BETA - 默认值=true)
        RotateKubeletServerCertificate=true|false (BETA - 默认值=true)
        -Seccomp默认值=true|false (ALPHA - 默认值=false)
        +SeccompDefault=true|false (ALPHA - 默认值=false)
        +ServerSideFieldValidation=true|false (ALPHA - 默认值=false)
        +ServiceIPStaticSubrange=true|false (ALPHA - 默认值=false)
        ServiceInternalTrafficPolicy=true|false (BETA - 默认值=true)
        -ServiceLBNodePortControl=true|false (BETA - 默认值=true)
        -ServiceLoadBalancerClass=true|false (BETA - 默认值=true)
        SizeMemoryBackedVolumes=true|false (BETA - 默认值=true)
        -StatefulSetMinReadySeconds=true|false (ALPHA - 默认值=false)
        +StatefulSetAutoDeletePVC=true|false (ALPHA - 默认值=false)
        +StatefulSetMinReadySeconds=true|false (BETA - 默认值=true)
        StorageVersionAPI=true|false (ALPHA - 默认值=false)
        StorageVersionHash=true|false (BETA - 默认值=true)
        -SuspendJob=true|false (BETA - 默认值=true)
        -TTLAfterFinished=true|false (BETA - 默认值=true)
        -TopologyAwareHints=true|false (ALPHA - 默认值=false)
        +TopologyAwareHints=true|false (BETA - 默认值=true)
        TopologyManager=true|false (BETA - 默认值=true)
        VolumeCapacityPriority=true|false (ALPHA - 默认值=false)
        WinDSR=true|false (ALPHA - 默认值=false)
        WinOverlay=true|false (BETA - 默认值=true)
        -WindowsHostProcessContainers=true|false (ALPHA - 默认值=false) +WindowsHostProcessContainers=true|false (BETA - 默认值=true)

        @@ -1181,7 +1151,7 @@ Content type of requests sent to apiserver. - + @@ -1326,56 +1296,6 @@ Path to the config file for controller leader migration, or empty to use the val

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -1394,31 +1314,19 @@ Maximum number of seconds between log flushes - - - - - - - @@ -1492,10 +1400,10 @@ EndpointSlice 更改的处理将延迟此持续时间, @@ -1548,15 +1456,15 @@ Mask size for IPv6 node cidr in dual-stack cluster. Default is 64. - + @@ -1601,29 +1509,17 @@ Amount of time which we allow starting Node to be unresponsive before marking it - - - - - - - @@ -1637,7 +1533,7 @@ If true, SO_REUSEADDR will be used when binding the port. This allows binding to If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false] --> 如果为 true,则在绑定端口时将使用 SO_REUSEPORT, -这允许多个实例在同一地址和端口上进行绑定。 +这允许多个实例在同一地址和端口上进行绑定。[默认值=false]。 @@ -1722,7 +1618,7 @@ The file path to a pod definition used as a template for HostPath persistent vol @@ -1759,7 +1655,8 @@ The period for syncing persistent volumes and persistent volume claims - - - - - - - - - - - - - - - - - - - - - @@ -1992,11 +1853,11 @@ File containing the default x509 Certificate for HTTPS. (CA cert, if any, concat @@ -2026,12 +1887,12 @@ File containing the default x509 private key matching --tls-cert-file. - + - + diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-proxy.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md similarity index 68% rename from content/zh/docs/reference/command-line-tools-reference/kube-proxy.md rename to content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md index 0eff062d25f5d..3484f8c4da843 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kube-proxy.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md @@ -54,15 +54,16 @@ kube-proxy [flags] - + +如果为 true,将文件目录添加到日志消息的头部 +

        + @@ -73,35 +74,22 @@ If true, adds the file directory to the header of the log messages -将日志输出到文件时也输出到标准错误输出(stderr)。 -

        - - - - - - - - + +代理服务器的 IP 地址(所有 IPv4 接口设置为 “0.0.0.0”,所有 IPv6 接口设置为 “::”)。 +如果配置文件由 --config 指定,则忽略此参数。 +

        @@ -112,20 +100,17 @@ The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 i -若此标志为 true,kube-proxy 会将无法绑定端口的失败操作视为致命错误并退出。 +如果为 true,kube-proxy 会将无法绑定端口的失败操作视为致命错误并退出。

        - + @@ -148,11 +133,14 @@ If true cleanup iptables and ipvs rules and exit. @@ -243,20 +231,23 @@ Idle timeout for established TCP connections (0 to leave as-is) - + @@ -463,13 +460,12 @@ WindowsHostProcessContainers=true|false (ALPHA - 默认值=false) @@ -699,72 +695,62 @@ Path to kubeconfig file with authorization information (the master location is s - + - - + + +如果非空,则在此目录中写入日志文件 +

        - + - + - - + - + @@ -826,25 +811,23 @@ metrics 服务器要使用的 IP 地址和端口 - + @@ -854,38 +837,68 @@ If true, only write logs to their native severity level (vs also writing to each + + + + + + + + + + + + + + - + @@ -911,38 +924,35 @@ Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusi - + - + @@ -951,10 +961,8 @@ If true, avoid headers when opening log files @@ -977,10 +985,8 @@ How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be @@ -998,15 +1004,12 @@ Print version information and quit - + diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md similarity index 60% rename from content/zh/docs/reference/command-line-tools-reference/kube-scheduler.md rename to content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md index dbb9eb3cfa211..28fb4f2a61bc0 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kube-scheduler.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -2,7 +2,6 @@ title: kube-scheduler content_type: tool-reference weight: 30 -auto_generated: true --- Kubernetes 调度器是一个控制面进程,负责将 Pods 指派到节点上。 调度器基于约束和可用资源为调度队列中每个 Pod 确定其可合法放置的节点。 调度器之后对所有合法的节点进行排序,将 Pod 绑定到一个合适的节点。 在同一个集群中可以使用多个不同的调度器;kube-scheduler 是其参考实现。 -参阅[调度](/zh/docs/concepts/scheduling-eviction/) -以获得关于调度和 kube-scheduler 组件的更多信息。 +参阅[调度](/zh/docs/concepts/scheduling-eviction/)以获得关于调度和 +kube-scheduler 组件的更多信息。 ``` kube-scheduler [flags] @@ -44,71 +43,23 @@ kube-scheduler [flags] - - - - - - - - - - - - - - - - - - +默认值:[] - - - - - - - @@ -168,7 +119,7 @@ If true, failures to look up missing authentication configuration from the clust -在授权过程中跳过的 HTTP 路径列表,即在不联系 'core' kubernetes 服务器的情况下被授权的 HTTP 路径。 +在授权过程中跳过的 HTTP 路径列表,即在不联系 “core” kubernetes 服务器的情况下被授权的 HTTP 路径。 @@ -194,7 +145,7 @@ Kubernetes 核心服务器的 kubeconfig 文件。这是可选的。 -缓存来自 Webhook 授权者的 'authorized' 响应的持续时间。 +缓存来自 Webhook 授权者的 “authorized” 响应的持续时间。 @@ -206,7 +157,7 @@ The duration to cache 'authorized' responses from the webhook authorizer. -缓存来自 Webhook 授权者的 'unauthorized' 响应的持续时间。 +缓存来自 Webhook 授权者的 “unauthorized” 响应的持续时间。 @@ -232,7 +183,7 @@ The IP address on which to listen for the --secure-port port. The associated int --> 监听 --secure-port 端口的 IP 地址。 集群的其余部分以及 CLI/ Web 客户端必须可以访问关联的接口。 -如果为空,将使用所有接口(0.0.0.0 表示使用所有 IPv4 接口,"::" 表示使用所有 IPv6 接口)。 +如果为空,将使用所有接口(0.0.0.0 表示使用所有 IPv4 接口,“::” 表示使用所有 IPv6 接口)。 如果为空或未指定地址 (0.0.0.0 或 ::),所有接口将被使用。 @@ -269,22 +220,14 @@ If set, any request presenting a client certificate signed by one of the authori - + - - - - - - @@ -334,213 +264,204 @@ APIListChunking=true|false (BETA - default=true)
        APIPriorityAndFairness=true|false (BETA - default=true)
        APIResponseCompression=true|false (BETA - default=true)
        APIServerIdentity=true|false (ALPHA - default=false)
        +APIServerTracing=true|false (ALPHA - default=false)
        AllAlpha=true|false (ALPHA - default=false)
        AllBeta=true|false (BETA - default=false)
        -AnyVolumeDataSource=true|false (ALPHA - default=false)
        +AnyVolumeDataSource=true|false (BETA - default=true)
        AppArmor=true|false (BETA - default=true)
        -BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
        -BoundServiceAccountTokenVolume=true|false (BETA - default=true)
        CPUManager=true|false (BETA - default=true)
        +CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
        +CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
        +CPUManagerPolicyOptions=true|false (BETA - default=true)
        CSIInlineVolume=true|false (BETA - default=true)
        CSIMigration=true|false (BETA - default=true)
        CSIMigrationAWS=true|false (BETA - default=false)
        -CSIMigrationAzureDisk=true|false (BETA - default=false)
        CSIMigrationAzureFile=true|false (BETA - default=false)
        -CSIMigrationGCE=true|false (BETA - default=false)
        -CSIMigrationOpenStack=true|false (BETA - default=true)
        +CSIMigrationGCE=true|false (BETA - default=true)
        +CSIMigrationPortworx=true|false (ALPHA - default=false)
        +CSIMigrationRBD=true|false (ALPHA - default=false)
        CSIMigrationvSphere=true|false (BETA - default=false)
        -CSIMigrationvSphereComplete=true|false (BETA - default=false)
        -CSIServiceAccountToken=true|false (BETA - default=true)
        -CSIStorageCapacity=true|false (BETA - default=true)
        -CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
        CSIVolumeHealth=true|false (ALPHA - default=false)
        -ConfigurableFSGroupPolicy=true|false (BETA - default=true)
        -ControllerManagerLeaderMigration=true|false (ALPHA - default=false)
        -CronJobControllerV2=true|false (BETA - default=true)
        +ContextualLogging=true|false (ALPHA - default=false)
        +CronJobTimeZone=true|false (ALPHA - default=false)
        CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
        -DaemonSetUpdateSurge=true|false (ALPHA - default=false)
        -DefaultPodTopologySpread=true|false (BETA - default=true)
        +CustomResourceValidationExpressions=true|false (ALPHA - default=false)
        +DaemonSetUpdateSurge=true|false (BETA - default=true)
        +DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
        DevicePlugins=true|false (BETA - default=true)
        DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
        -DownwardAPIHugePages=true|false (BETA - default=false)
        -DynamicKubeletConfig=true|false (BETA - default=true)
        -EfficientWatchResumption=true|false (BETA - default=true)
        -EndpointSliceProxying=true|false (BETA - default=true)
        -EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)
        -EphemeralContainers=true|false (ALPHA - default=false)
        -ExpandCSIVolumes=true|false (BETA - default=true)
        -ExpandInUsePersistentVolumes=true|false (BETA - default=true)
        -ExpandPersistentVolumes=true|false (BETA - default=true)
        +DisableCloudProviders=true|false (ALPHA - default=false)
        +DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
        +DownwardAPIHugePages=true|false (BETA - default=true)
        +EndpointSliceTerminatingCondition=true|false (BETA - default=true)
        +EphemeralContainers=true|false (BETA - default=true)
        +ExpandedDNSConfig=true|false (ALPHA - default=false)
        ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
        -GenericEphemeralVolume=true|false (BETA - default=true)
        +GRPCContainerProbe=true|false (BETA - default=true)
        GracefulNodeShutdown=true|false (BETA - default=true)
        +GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
        HPAContainerMetrics=true|false (ALPHA - default=false)
        HPAScaleToZero=true|false (ALPHA - default=false)
        -HugePageStorageMediumSize=true|false (BETA - default=true)
        -IPv6DualStack=true|false (BETA - default=true)
        +HonorPVReclaimPolicy=true|false (ALPHA - default=false)
        +IdentifyPodOS=true|false (BETA - default=true)
        InTreePluginAWSUnregister=true|false (ALPHA - default=false)
        InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
        InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
        InTreePluginGCEUnregister=true|false (ALPHA - default=false)
        InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
        +InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
        +InTreePluginRBDUnregister=true|false (ALPHA - default=false)
        InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
        -IndexedJob=true|false (ALPHA - default=false)
        -IngressClassNamespacedParams=true|false (ALPHA - default=false)
        -KubeletCredentialProviders=true|false (ALPHA - default=false)
+JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)<br/>
        +JobReadyPods=true|false (BETA - default=true)
        +JobTrackingWithFinalizers=true|false (BETA - default=false)
        +KubeletCredentialProviders=true|false (BETA - default=true)
        +KubeletInUserNamespace=true|false (ALPHA - default=false)
        KubeletPodResources=true|false (BETA - default=true)
        -KubeletPodResourcesGetAllocatable=true|false (ALPHA - default=false)
        +KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
        +LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
        LocalStorageCapacityIsolation=true|false (BETA - default=true)
        LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
        -LogarithmicScaleDown=true|false (ALPHA - default=false)
        -MemoryManager=true|false (ALPHA - default=false)
        -MixedProtocolLBService=true|false (ALPHA - default=false)
        -NamespaceDefaultLabelName=true|false (BETA - default=true)
        -NetworkPolicyEndPort=true|false (ALPHA - default=false)
        -NonPreemptingPriority=true|false (BETA - default=true)
        -PodAffinityNamespaceSelector=true|false (ALPHA - default=false)
        -PodDeletionCost=true|false (ALPHA - default=false)
        -PodOverhead=true|false (BETA - default=true)
        -PreferNominatedNode=true|false (ALPHA - default=false)
        -ProbeTerminationGracePeriod=true|false (ALPHA - default=false)
        +LogarithmicScaleDown=true|false (BETA - default=true)
        +MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
        +MemoryManager=true|false (BETA - default=true)
        +MemoryQoS=true|false (ALPHA - default=false)
        +MinDomainsInPodTopologySpread=true|false (ALPHA - default=false)
        +MixedProtocolLBService=true|false (BETA - default=true)
        +NetworkPolicyEndPort=true|false (BETA - default=true)
        +NetworkPolicyStatus=true|false (ALPHA - default=false)
        +NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
        +NodeSwap=true|false (ALPHA - default=false)
        +OpenAPIEnums=true|false (BETA - default=true)
        +OpenAPIV3=true|false (BETA - default=true)
        +PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
        +PodDeletionCost=true|false (BETA - default=true)
        +PodSecurity=true|false (BETA - default=true)
        +ProbeTerminationGracePeriod=true|false (BETA - default=false)
        ProcMountType=true|false (ALPHA - default=false)
        +ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
        QOSReserved=true|false (ALPHA - default=false)
        +ReadWriteOncePod=true|false (ALPHA - default=false)
        +RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
        RemainingItemCount=true|false (BETA - default=true)
        -RemoveSelfLink=true|false (BETA - default=true)
        RotateKubeletServerCertificate=true|false (BETA - default=true)
        -ServerSideApply=true|false (BETA - default=true)
        -ServiceInternalTrafficPolicy=true|false (ALPHA - default=false)
        -ServiceLBNodePortControl=true|false (ALPHA - default=false)
        -ServiceLoadBalancerClass=true|false (ALPHA - default=false)
        -ServiceTopology=true|false (ALPHA - default=false)
        -SetHostnameAsFQDN=true|false (BETA - default=true)
        -SizeMemoryBackedVolumes=true|false (ALPHA - default=false)
        +SeccompDefault=true|false (ALPHA - default=false)
        +ServerSideFieldValidation=true|false (ALPHA - default=false)
        +ServiceIPStaticSubrange=true|false (ALPHA - default=false)
        +ServiceInternalTrafficPolicy=true|false (BETA - default=true)
        +SizeMemoryBackedVolumes=true|false (BETA - default=true)
        +StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
        +StatefulSetMinReadySeconds=true|false (BETA - default=true)
        StorageVersionAPI=true|false (ALPHA - default=false)
        StorageVersionHash=true|false (BETA - default=true)
        -SuspendJob=true|false (ALPHA - default=false)
        -TTLAfterFinished=true|false (BETA - default=true)
        -TopologyAwareHints=true|false (ALPHA - default=false)
        +TopologyAwareHints=true|false (BETA - default=true)
        TopologyManager=true|false (BETA - default=true)
        -ValidateProxyRedirects=true|false (BETA - default=true)
        VolumeCapacityPriority=true|false (ALPHA - default=false)
        -WarningHeaders=true|false (BETA - default=true)
        WinDSR=true|false (ALPHA - default=false)
        WinOverlay=true|false (BETA - default=true)
        -WindowsEndpointSliceProxying=true|false (BETA - default=true) +WindowsHostProcessContainers=true|false (BETA - default=true) --> 一组 key=value 对,描述了 alpha/experimental 特征开关。选项包括:
        -A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
        -APIListChunking=true|false (BETA - 默认值=true)
        -APIPriorityAndFairness=true|false (BETA - 默认值=true)
        -APIResponseCompression=true|false (BETA - 默认值=true)
        -APIServerIdentity=true|false (ALPHA - 默认值=false)
        -AllAlpha=true|false (ALPHA - 默认值=false)
        -AllBeta=true|false (BETA - 默认值=false)
        -AnyVolumeDataSource=true|false (ALPHA - 默认值=false)
        -AppArmor=true|false (BETA - 默认值=true)
        -BalanceAttachedNodeVolumes=true|false (ALPHA - 默认值=false)
        -BoundServiceAccountTokenVolume=true|false (BETA - 默认值=true)
        -CPUManager=true|false (BETA - 默认值=true)
        -CSIInlineVolume=true|false (BETA - 默认值=true)
        -CSIMigration=true|false (BETA - 默认值=true)
        -CSIMigrationAWS=true|false (BETA - 默认值=false)
        -CSIMigrationAzureDisk=true|false (BETA - 默认值=false)
        -CSIMigrationAzureFile=true|false (BETA - 默认值=false)
        -CSIMigrationGCE=true|false (BETA - 默认值=false)
        -CSIMigrationOpenStack=true|false (BETA - 默认值=true)
        -CSIMigrationvSphere=true|false (BETA - 默认值=false)
        -CSIMigrationvSphereComplete=true|false (BETA - 默认值=false)
        -CSIServiceAccountToken=true|false (BETA - 默认值=true)
        -CSIStorageCapacity=true|false (BETA - 默认值=true)
        -CSIVolumeFSGroupPolicy=true|false (BETA - 默认值=true)
        -CSIVolumeHealth=true|false (ALPHA - 默认值=false)
        -ConfigurableFSGroupPolicy=true|false (BETA - 默认值=true)
        -ControllerManagerLeaderMigration=true|false (ALPHA - 默认值=false)
        -CronJobControllerV2=true|false (BETA - 默认值=true)
        -CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值=false)
        -DaemonSetUpdateSurge=true|false (ALPHA - 默认值=false)
        -DefaultPodTopologySpread=true|false (BETA - 默认值=true)
        -DevicePlugins=true|false (BETA - 默认值=true)
        -DisableAcceleratorUsageMetrics=true|false (BETA - 默认值=true)
        -DownwardAPIHugePages=true|false (BETA - 默认值=false)
        -DynamicKubeletConfig=true|false (BETA - 默认值=true)
        -EfficientWatchResumption=true|false (BETA - 默认值=true)
        -EndpointSliceProxying=true|false (BETA - 默认值=true)
        -EndpointSliceTerminatingCondition=true|false (ALPHA - 默认值=false)
        -EphemeralContainers=true|false (ALPHA - 默认值=false)
        -ExpandCSIVolumes=true|false (BETA - 默认值=true)
        -ExpandInUsePersistentVolumes=true|false (BETA - 默认值=true)
        -ExpandPersistentVolumes=true|false (BETA - 默认值=true)
        -ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值=false)
        -GenericEphemeralVolume=true|false (BETA - 默认值=true)
        -GracefulNodeShutdown=true|false (BETA - 默认值=true)
        -HPAContainerMetrics=true|false (ALPHA - 默认值=false)
        -HPAScaleToZero=true|false (ALPHA - 默认值=false)
        -HugePageStorageMediumSize=true|false (BETA - 默认值=true)
        -IPv6DualStack=true|false (BETA - 默认值=true)
        -InTreePluginAWSUnregister=true|false (ALPHA - 默认值=false)
        -InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值=false)
        -InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值=false)
        -InTreePluginGCEUnregister=true|false (ALPHA - 默认值=false)
        -InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值=false)
        -InTreePluginvSphereUnregister=true|false (ALPHA - 默认值=false)
        -IndexedJob=true|false (ALPHA - 默认值=false)
        -IngressClassNamespacedParams=true|false (ALPHA - 默认值=false)
        -KubeletCredentialProviders=true|false (ALPHA - 默认值=false)
        -KubeletPodResources=true|false (BETA - 默认值=true)
        -KubeletPodResourcesGetAllocatable=true|false (ALPHA - 默认值=false)
        -LocalStorageCapacityIsolation=true|false (BETA - 默认值=true)
        -LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值=false)
        -LogarithmicScaleDown=true|false (ALPHA - 默认值=false)
        -MemoryManager=true|false (ALPHA - 默认值=false)
        -MixedProtocolLBService=true|false (ALPHA - 默认值=false)
        -NamespaceDefaultLabelName=true|false (BETA - 默认值=true)
        -NetworkPolicyEndPort=true|false (ALPHA - 默认值=false)
        -NonPreemptingPriority=true|false (BETA - 默认值=true)
        -PodAffinityNamespaceSelector=true|false (ALPHA - 默认值=false)
        -PodDeletionCost=true|false (ALPHA - 默认值=false)
        -PodOverhead=true|false (BETA - 默认值=true)
        -PreferNominatedNode=true|false (ALPHA - 默认值=false)
        -ProbeTerminationGracePeriod=true|false (ALPHA - 默认值=false)
        -ProcMountType=true|false (ALPHA - 默认值=false)
        -QOSReserved=true|false (ALPHA - 默认值=false)
        -RemainingItemCount=true|false (BETA - 默认值=true)
        -RemoveSelfLink=true|false (BETA - 默认值=true)
        -RotateKubeletServerCertificate=true|false (BETA - 默认值=true)
        -ServerSideApply=true|false (BETA - 默认值=true)
        -ServiceInternalTrafficPolicy=true|false (ALPHA - 默认值=false)
        -ServiceLBNodePortControl=true|false (ALPHA - 默认值=false)
        -ServiceLoadBalancerClass=true|false (ALPHA - 默认值=false)
        -ServiceTopology=true|false (ALPHA - 默认值=false)
        -SetHostnameAsFQDN=true|false (BETA - 默认值=true)
        -SizeMemoryBackedVolumes=true|false (ALPHA - 默认值=false)
        -StorageVersionAPI=true|false (ALPHA - 默认值=false)
        -StorageVersionHash=true|false (BETA - 默认值=true)
        -SuspendJob=true|false (ALPHA - 默认值=false)
        -TTLAfterFinished=true|false (BETA - 默认值=true)
        -TopologyAwareHints=true|false (ALPHA - 默认值=false)
        -TopologyManager=true|false (BETA - 默认值=true)
        -ValidateProxyRedirects=true|false (BETA - 默认值=true)
        -VolumeCapacityPriority=true|false (ALPHA - 默认值=false)
        -WarningHeaders=true|false (BETA - 默认值=true)
        -WinDSR=true|false (ALPHA - 默认值=false)
        -WinOverlay=true|false (BETA - 默认值=true)
        -WindowsEndpointSliceProxying=true|false (BETA - 默认值=true) - - - - - - - - @@ -595,7 +516,7 @@ DEPRECATED: content type of requests sent to apiserver. This parameter is ignore - + @@ -736,55 +656,6 @@ DEPRECATED: define the namespace of the lock object. Will be removed in favor of - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -804,19 +675,19 @@ Maximum number of seconds between log flushes @@ -845,18 +716,6 @@ Kubernetes API 服务器的地址(覆盖 kubeconfig 中的任何值)。 - - - - - - - @@ -868,7 +727,7 @@ If true, SO_REUSEADDR will be used when binding the port. This allows binding to 如果为 true,在绑定端口时将使用 SO_REUSEADDR。 这将允许同时绑定诸如 0.0.0.0 这类通配符 IP和特定 IP, 并且它避免等待内核释放处于 TIME_WAIT 状态的套接字。 -默认值: false +默认值:false @@ -881,69 +740,30 @@ If true, SO_REUSEADDR will be used when binding the port. This allows binding to If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false] --> 如果此标志为 true,在绑定端口时会使用 SO_REUSEPORT,从而允许不止一个 -实例绑定到同一地址和端口。 +实例绑定到同一地址和端口。 默认值:false - - - - - - - - - - - - - - + - +已弃用:Pod 可以在 unschedulablePods 中停留的最长时间。 +如果 Pod 在 unschedulablePods 中停留的时间超过此值,则该 pod 将被从 +unschedulablePods 移动到 backoffQ 或 activeQ。 +此标志已弃用,将在 1.2 中删除。 - - - - - - + +默认值:"x-remote-extra-" +默认值:"x-remote-group" +默认值:"x-remote-user" - - - - - - - @@ -1065,41 +870,6 @@ The previous version for which you want to show hidden metrics. Only the previou - - - - - - - - - - - - - - - - - - - - @@ -1109,7 +879,7 @@ logs at or above this threshold go to stderr -包含默认的 HTTPS x509 证书的文件。(CA证书(如果有)在服务器证书之后并置)。 +包含默认的 HTTPS x509 证书的文件。(如果有 CA 证书,在服务器证书之后并置)。 如果启用了 HTTPS 服务,并且未提供 --tls-cert-file--tls-private-key-file,则会为公共地址生成一个自签名证书和密钥, 并将其保存到 --cert-dir 指定的目录中。 @@ -1174,20 +944,7 @@ A pair of x509 certificate and private key file paths, optionally suffixed with 如果未提供域名匹配模式,则提取证书名称。 非通配符匹配优先于通配符匹配,显式域名匹配优先于提取而来的名称。 若有多个密钥/证书对,可多次使用 --tls-sni-cert-key。 -例子: "example.crt,example.key" 或者 "foo.crt,foo.key:*.foo.com,foo.com"。 - - - - - - - - @@ -1216,14 +973,14 @@ Print version information and quit - + @@ -1235,7 +992,7 @@ comma-separated list of pattern=N settings for file-filtered logging -如果已设置,将配置值写入此文件并退出。 +如果设置此参数,将配置值写入此文件并退出。 @@ -1244,5 +1001,3 @@ If set, write the configuration values to this file and exit. 
- - diff --git a/content/zh/docs/reference/command-line-tools-reference/kubelet.md b/content/zh-cn/docs/reference/command-line-tools-reference/kubelet.md similarity index 66% rename from content/zh/docs/reference/command-line-tools-reference/kubelet.md rename to content/zh-cn/docs/reference/command-line-tools-reference/kubelet.md index 7d127e479f7ba..fc7f1f31ef796 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kubelet.md @@ -8,15 +8,19 @@ weight: 28 kubelet 是在每个 Node 节点上运行的主要 “节点代理”。它可以使用以下之一向 apiserver 注册: 主机名(hostname);覆盖主机名的参数;某云驱动的特定逻辑。 kubelet 是基于 PodSpec 来工作的。每个 PodSpec 是一个描述 Pod 的 YAML 或 JSON 对象。 kubelet 接受通过各种机制(主要是通过 apiserver)提供的一组 PodSpec,并确保这些 @@ -24,27 +28,32 @@ PodSpec 中描述的容器处于运行状态且运行状况良好。 kubelet 不管理不是由 Kubernetes 创建的容器。 除了来自 apiserver 的 PodSpec 之外,还可以通过以下三种方式将容器清单(manifest)提供给 kubelet。 -文件(File):利用命令行参数传递路径。kubelet 周期性地监视此路径下的文件是否有更新。 -监视周期默认为 20s,且可通过参数进行配置。 +- 文件(File):利用命令行参数传递路径。kubelet 周期性地监视此路径下的文件是否有更新。 + 监视周期默认为 20s,且可通过参数进行配置。 -HTTP 端点(HTTP endpoint):利用命令行参数指定 HTTP 端点。 -此端点的监视周期默认为 20 秒,也可以使用参数进行配置。 +- HTTP 端点(HTTP endpoint):利用命令行参数指定 HTTP 端点。 + 此端点的监视周期默认为 20 秒,也可以使用参数进行配置。 -HTTP 服务器(HTTP server):kubelet 还可以侦听 HTTP 并响应简单的 API -(目前没有完整规范)来提交新的清单。 +- HTTP 服务器(HTTP server):kubelet 还可以侦听 HTTP 并响应简单的 API + (目前没有完整规范)来提交新的清单。 ``` kubelet [flags] @@ -65,9 +74,10 @@ kubelet [flags] @@ -77,11 +87,12 @@ If true, adds the file directory to the header @@ -91,11 +102,11 @@ kubelet 用来提供服务的 IP 地址(设置为0.0.0.0 表示 @@ -105,9 +116,10 @@ Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in @@ -117,12 +129,12 @@ log to standard error as well as files @@ -132,11 +144,11 @@ Enables anonymous requests to the Kubelet server. Requests that are not rejected @@ -146,11 +158,11 @@ Use the TokenReview API to determine authentication for bearer tokens. (DEPRECAT @@ -160,13 +172,13 @@ The duration to cache responses from the webhook token authenticator. (default 2 @@ -176,11 +188,11 @@ kubelet 服务器的鉴权模式。可选值包括:AlwaysAllow、 @@ -190,12 +202,12 @@ The duration to cache 'authorized' responses from the webhook authorizer. (DEPRE @@ -246,12 +258,12 @@ TLS 证书所在的目录。如果设置了 --tls-cert-file @@ -261,12 +273,12 @@ kubelet 用来操作本机 cgroup 时使用的驱动程序。支持的选项包 @@ -276,24 +288,11 @@ Optional root cgroup to use for pods. This is handled by the container runtime o - - - - - - - @@ -303,12 +302,12 @@ If > 0.0, introduce random client errors and latency. Intended for testing. ( @@ -345,15 +344,15 @@ The provider for cloud services. Set to empty string for running with no cloud p @@ -363,13 +362,13 @@ DNS 服务器的 IP 地址,以逗号分隔。此标志值用于 Pod 中设置 @@ -379,11 +378,12 @@ Domain for this cluster. If set, kubelet will configure all containers to search @@ -393,10 +393,11 @@ kubelet 将在所指定路径中搜索 CNI 插件的可执行文件。 @@ -406,10 +407,11 @@ kubelet 将在所指定路径中搜索 CNI 插件的可执行文件。 @@ -433,12 +435,12 @@ kubelet 将从此标志所指的文件中加载其初始配置。此路径可以 @@ -448,12 +450,12 @@ Set the maximum number of container log files that can be present for a containe @@ -490,11 +492,11 @@ Windows 系统上的 npipe 和 TCP 端点。例如: @@ -504,11 +506,11 @@ Enable lock contention profiling, if profiling is enabled (DEPRECATED: This para @@ -518,11 +520,11 @@ Enable CPU CFS quota enforcement for containers that specify CPU limits (DEPRECA @@ -532,11 +534,11 @@ Sets CPU CFS quota period value, cpu.cfs_period_us, defaults to Lin @@ -546,12 +548,12 @@ CPU Manager policy to use. Possible values: 'none', 'static'. Default: 'none' (d @@ -561,10 +563,11 @@ CPU Manager policy to use. 
Possible values: 'none', 'static'. Default: 'none' (d @@ -594,9 +597,11 @@ kubelet 使用此目录来保存所下载的配置,跟踪配置运行状况。 @@ -606,11 +611,11 @@ Enables the Attach/Detach controller to manage attachment/detachment of volumes @@ -620,11 +625,11 @@ Enables server endpoints for log collection and local running of containers and @@ -634,16 +639,16 @@ Enable the Kubelet's server. (DEPRECATED: This parameter should be set via the c @@ -653,12 +658,12 @@ A comma separated list of levels of node allocatable enforcement to be enforced @@ -668,11 +673,11 @@ Maximum size of a bursty event records, temporarily allows event records to burs @@ -682,13 +687,13 @@ If > 0, limit event creations per second to this value. If @@ -698,12 +703,12 @@ A set of eviction thresholds (e.g. memory.available<1Gi) that if me @@ -713,12 +718,12 @@ Maximum allowed grace period (in seconds) to use when terminating pods in respon @@ -728,11 +733,11 @@ A set of minimum reclaims (e.g. imagefs.available=2Gi) that describ @@ -742,12 +747,12 @@ kubelet 在驱逐压力状况解除之前的最长等待时间。 @@ -757,12 +762,12 @@ A set of eviction thresholds (e.g. memory.available>1.5Gi) that if @@ -793,18 +798,6 @@ When set to true, Hard eviction thresholds will be ignored while ca - - - - - - - @@ -825,13 +818,13 @@ When set to true, Hard eviction thresholds will be ignored while ca @@ -841,13 +834,13 @@ If enabled, the kubelet will integrate with the kernel memcg notification to det @@ -869,11 +862,11 @@ If enabled, the kubelet will integrate with the kernel memcg notification to det @@ -970,7 +963,7 @@ WinDSR=true|false (ALPHA - default=false)
        WinOverlay=true|false (BETA - default=true)
        WindowsHostProcessContainers=true|false (BETA - default=true)
        csiMigrationRBD=true|false (ALPHA - default=false)
        -(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --> 用于 alpha 实验性特性的特性开关组,每个开关以 key=value 形式表示。当前可用开关包括:
        APIListChunking=true|false (BETA - 默认值为 true)
        @@ -1072,11 +1065,11 @@ csiMigrationRBD=true|false (ALPHA - 默认值为 false)
        @@ -1086,13 +1079,13 @@ Duration between checking config files for new data. (DEPRECATED: This parameter @@ -1102,12 +1095,12 @@ How should the kubelet setup hairpin NAT. This allows endpoints of a Service to @@ -1117,11 +1110,11 @@ The IP address for the healthz server to serve on (set to 0.0.0.0 f @@ -1151,29 +1144,17 @@ If non-empty, will use this string as identification instead of the actual hostn - - - - - - - @@ -1206,12 +1187,12 @@ The path to the credential provider plugin config file. @@ -1221,12 +1202,12 @@ The percent of disk usage after which image garbage collection is always run. Va @@ -1236,10 +1217,11 @@ The percent of disk usage before which image garbage collection is never run. Lo @@ -1263,11 +1245,11 @@ If no pulling progress is made before this deadline, the image pulling will be c @@ -1277,12 +1259,12 @@ The bit of the fwmark space to mark packets for dropping. Must be w @@ -1305,12 +1287,12 @@ Keep terminated pod volumes mounted to the node after the pod terminates. Can be @@ -1320,11 +1302,11 @@ If enabled, the kubelet will integrate with the kernel memcg notification to det @@ -1334,11 +1316,11 @@ Burst to use while talking with kubernetes apiserver. (DEPRECATED: This paramete @@ -1348,13 +1330,13 @@ Content type of requests sent to apiserver. (default "application/vnd.kubernetes @@ -1364,14 +1346,14 @@ QPS to use while talking with kubernetes API server. The number must be >= 0. @@ -1381,12 +1363,12 @@ kubernetes 系统预留的资源配置,以一组 资源名称=资源数 @@ -1409,11 +1391,11 @@ kubeconfig 配置文件的路径,指定如何连接到 API 服务器。 @@ -1435,11 +1417,11 @@ Optional absolute name of cgroups to create and run the Kubelet in. (DEPRECATED: @@ -1449,10 +1431,10 @@ When logging hits line :, emit a stack trace. (DEPRECATED: @@ -1462,9 +1444,10 @@ If non-empty, write log files in this directory. (DEPRECATED: will be removed in @@ -1474,10 +1457,10 @@ If non-empty, use this log file @@ -1499,12 +1482,12 @@ Maximum number of seconds between log flushes @@ -1514,12 +1497,12 @@ Maximum number of seconds between log flushes @@ -1529,16 +1512,16 @@ Maximum number of seconds between log flushes @@ -1548,11 +1531,11 @@ Sets the log format. Permitted formats: text, json.
        @@ -1562,11 +1545,11 @@ log to standard error instead of files. (DEPRECATED: will be removed in a future @@ -1576,11 +1559,11 @@ If true, kubelet will ensure iptables utility rules are present on @@ -1590,15 +1573,16 @@ URL for accessing additional Pod specifications to run (DEPRECATED: This paramet + @@ -1618,11 +1602,11 @@ kubelet 向 Pod 注入 Kubernetes 主控服务信息时使用的命名空间。 @@ -1632,11 +1616,11 @@ kubelet 进程可以打开的最大文件数量。 @@ -1650,8 +1634,8 @@ Maximum number of old instances of containers to retain globally. Each container --> 设置全局可保留的已停止容器实例个数上限。 每个实例会占用一些磁盘空间。要禁用,请设置为负数。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +已弃用:改用 --eviction-hard--eviction-soft。 +此标志将在未来的版本中删除。 @@ -1661,11 +1645,11 @@ Maximum number of old instances of containers to retain globally. Each container @@ -1675,11 +1659,11 @@ Maximum number of old instances to retain per container. Each container takes up @@ -1689,10 +1673,10 @@ Memory Manager policy to use. Possible values: 'None', 'Stati @@ -1704,12 +1688,12 @@ Minimum age for a finished container before it is garbage collected. Examples: @@ -1719,10 +1703,11 @@ Minimum age for an unused image before it is garbage collected. Examples: @@ -1782,11 +1767,11 @@ IP address (or comma-separated dual-stack IP addresses) of the node. If unset, k @@ -1796,12 +1781,12 @@ The maximum number of images to report in node.status.images. If @@ -1815,8 +1800,7 @@ Traffic to IPs outside this range will use IP masquerade. Set to '0.0.0.0/0' to --> kubelet 向该 IP 段之外的 IP 地址发送的流量将使用 IP 伪装技术。 设置为 0.0.0.0/0 则不使用伪装。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:将在未来的版本中删除。) @@ -1826,12 +1810,12 @@ kubelet 向该 IP 段之外的 IP 地址发送的流量将使用 IP 伪装技术 @@ -1841,11 +1825,11 @@ If true, only write logs to their native severity level (vs also writing to each @@ -1855,12 +1839,12 @@ kubelet 进程的 oom-score-adj 参数值。有效范围为 [-1000,1000] @@ -1885,12 +1869,12 @@ The CIDR to use for pod IP addresses, only used in standalone mode. In cluster m @@ -1900,11 +1884,11 @@ Path to the directory containing static pod files to run, or the path to a singl @@ -1914,13 +1898,13 @@ Set the maximum number of processes per pod. If -1, the kubelet de @@ -1930,11 +1914,11 @@ kubelet 在每个处理器核上可运行的 Pod 数量。此 kubelet 上的 Pod @@ -1944,12 +1928,12 @@ kubelet 服务监听的本机端口号。 @@ -1959,11 +1943,11 @@ kubelet 默认值不同时,kubelet 都会出错。 @@ -1973,13 +1957,13 @@ Unique identifier for identifying the node in a machine database, i.e cloud prov @@ -1989,11 +1973,11 @@ Unique identifier for identifying the node in a machine database, i.e cloud prov @@ -2016,12 +2000,12 @@ If true, when panics occur crash. Intended for testing. (DEPRECATED: will be rem @@ -2044,11 +2028,12 @@ Register the node as schedulable. Won't have any effect if --register-node @@ -2058,12 +2043,12 @@ Register the node with the given list of taints (comma separated = @@ -2073,11 +2058,11 @@ Maximum size of a bursty pulls, temporarily allows pulls to burst to this number @@ -2087,13 +2072,13 @@ If > 0, limit registry pull QPS to this value. If 0, unlimited. @@ -2103,13 +2088,13 @@ A comma-separated list of CPUs or CPU ranges that are reserved for system and ku @@ -2119,11 +2104,11 @@ A comma-separated list of memory reservations for NUMA nodes. (e.g. --rese @@ -2145,12 +2130,12 @@ Directory path for managing kubelet files (volume mounts, etc). @@ -2160,13 +2145,13 @@ Directory path for managing kubelet files (volume mounts, etc). 
@@ -2176,12 +2161,12 @@ Auto-request and rotate the kubelet serving certificates by requesting new certi @@ -2203,26 +2188,25 @@ Optional absolute name of cgroups to create and run the runtime in. - + @@ -2232,12 +2216,12 @@ Timeout of all runtime requests except long running request - pull, @@ -2247,10 +2231,10 @@ Pull images one at a time. We recommend *not* changing the default value on node @@ -2260,10 +2244,10 @@ If true, avoid header prefixes in the log messages. (DEPRECATED: will be removed @@ -2273,10 +2257,10 @@ If true, avoid headers when opening log files. (DEPRECATED: will be removed in a @@ -2286,13 +2270,13 @@ logs at or above this threshold go to stderr. (DEPRECATED: will be removed in a @@ -2302,11 +2286,11 @@ Maximum time a streaming connection can be idle before the connection is automat @@ -2316,12 +2300,12 @@ Max period between synchronizing running containers and config. (DEPRECATED: Thi @@ -2331,15 +2315,15 @@ Optional absolute name of cgroups in which to place all non-kernel processes tha @@ -2349,13 +2333,13 @@ A set of = (e.g. cpu=200m,m @@ -2365,15 +2349,15 @@ Absolute name of the top level cgroup that is used to manage non-kubernetes comp @@ -2388,15 +2372,15 @@ Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384
        Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. -(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) --> 服务器端加密算法列表,以逗号分隔。如果不设置,则使用 Go 语言加密包的默认算法列表。
        首选算法: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384
        不安全算法: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。) @@ -2406,12 +2390,12 @@ TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_E
        @@ -2421,11 +2405,11 @@ Minimum TLS version supported. Possible values: VersionTLS10, @@ -2435,12 +2419,12 @@ File containing x509 private key matching --tls-cert-file. (DEPRECA @@ -2450,13 +2434,13 @@ Topology Manager policy to use. Possible values: none, best-e @@ -2502,11 +2486,11 @@ Comma-separated list of pattern=N settings for file-filtered loggin @@ -2516,12 +2500,12 @@ The full path of the directory in which to search for additional third party vol diff --git a/content/zh/docs/reference/config-api/_index.md b/content/zh-cn/docs/reference/config-api/_index.md similarity index 100% rename from content/zh/docs/reference/config-api/_index.md rename to content/zh-cn/docs/reference/config-api/_index.md diff --git a/content/zh/docs/reference/config-api/apiserver-audit.v1.md b/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md similarity index 83% rename from content/zh/docs/reference/config-api/apiserver-audit.v1.md rename to content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md index 245096e999ba1..e7bc19c4f68d0 100644 --- a/content/zh/docs/reference/config-api/apiserver-audit.v1.md +++ b/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md @@ -34,7 +34,9 @@ auto_generated: true +

        Event 结构包含可出现在 API 审计日志中的所有信息。 +

        system:masters - 允许超级用户在平台上的任何资源上执行所有操作。 当在 ClusterRoleBinding 中使用时,可以授权对集群中以及所有名字空间中的全部资源进行完全控制。 -当在 RoleBinding 中使用时,可以授权控制 RoleBinding 所在名字空间中的所有资源,包括名字空间本身。 +当在 RoleBinding 中使用时,可以授权控制角色绑定所在名字空间中的所有资源,包括名字空间本身。
        admin - - - @@ -1102,7 +1350,7 @@ It does not allow viewing roles or rolebindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount -in the namespace (a form of privilege escalation). +in the namespace (a form of privilege escalation). --> 此角色不允许查看 Secrets,因为读取 Secret 的内容意味着可以访问名字空间中 ServiceAccount 的凭据信息,进而允许利用名字空间中任何 ServiceAccount 的 @@ -1118,7 +1366,7 @@ ServiceAccount 的凭据信息,进而允许利用名字空间中任何 Service ### 核心组件角色 {#core-component-roles} - + - - - @@ -1187,17 +1435,17 @@ Allows access to resources required by the kubelet, including read access to --> 允许访问 kubelet 所需要的资源,包括对所有 Secret 的读操作和对所有 Pod 状态对象的写操作。 - -你应该使用 Node 鉴权组件 和 -NodeRestriction 准入插件 -而不是 system:node 角色。同时基于 kubelet 上调度执行的 Pod 来授权 +你应该使用 Node 鉴权组件和 +NodeRestriction 准入插件而不是 +system:node 角色。同时基于 kubelet 上调度执行的 Pod 来授权 kubelet 对 API 的访问。 - system:node 角色的意义仅是为了与从 v1.8 之前版本升级而来的集群兼容。 @@ -1220,13 +1468,13 @@ The system:node role only exists for compatibility with Kubernetes clus ### 其他组件角色 {#other-component-roles}
        system:kube-scheduler 用户 - 允许访问 {{< glossary_tooltip term_id="kube-scheduler" text="scheduler" >}} @@ -1148,8 +1396,8 @@ Allows access to the resources required by the {{< glossary_tooltip term_id="kub
        system:volume-scheduler system:kube-scheduler 用户 @@ -1161,23 +1409,23 @@ Allows access to the volume resources required by the kube-scheduler component.
        system:kube-controller-manager system:kube-controller-manager 用户 - 允许访问{{< glossary_tooltip term_id="kube-controller-manager" text="控制器管理器" >}} 组件所需要的资源。 -各个控制回路所需要的权限在控制器角色 详述。 +各个控制回路所需要的权限在控制器角色详述。
        system:node
        - + - @@ -1236,12 +1484,12 @@ The system:node role only exists for compatibility with Kubernetes clus - - - @@ -1273,52 +1521,50 @@ Role for the Heapster compo - + - + - - + - - - +动态卷驱动 +所需要的资源。 + - - - - - - - - @@ -96,12 +84,11 @@ the host's default interface will be used. The map from metric-label to value allow-list of this label. The key's format is <MetricName>,<LabelName>. The value's format is <allowed_value>,<allowed_value>...e.g. metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3' metric2,label1='v1,v2,v3'. --> 允许使用的指标标签到指标值的映射列表。键的格式为 <MetricName>,<LabelName>. -值的格式为 <allowed_value>,<allowed_value>...。 +值的格式为 <allowed_value>,<allowed_value>...。 例如:metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3' metric2,label1='v1,v2,v3'

        -
        @@ -110,19 +97,7 @@ The map from metric-label to value allow-list of this label. The key's format is -如果为 true, 将允许特权容器。[默认值=false] - - - - - - - - @@ -156,27 +131,13 @@ of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL. --> -API 的标识符。 -服务帐户令牌验证者将验证针对 API 使用的令牌是否已绑定到这些受众中的至少一个。 +API 的标识符。 +服务帐户令牌验证者将验证针对 API 使用的令牌是否已绑定到这些受众中的至少一个。 如果配置了 --service-account-issuer 标志,但未配置此标志, 则此字段默认为包含发布者 URL 的单个元素列表。 - - - - - - - @@ -299,9 +260,10 @@ The maximum number of days to retain old audit log files based on the timestamp @@ -630,7 +592,7 @@ The API version of the authentication.k8s.io TokenReview to send to and expect f - + @@ -871,7 +833,7 @@ that do not have a default watch size set. - + - @@ -926,7 +887,7 @@ File with apiserver egress selector configuration. - + @@ -1159,18 +1121,6 @@ Amount of time to retain events. - - - - - - - @@ -1198,96 +1148,99 @@ APIServerIdentity=true|false (ALPHA - default=false)
        APIServerTracing=true|false (ALPHA - default=false)
        AllAlpha=true|false (ALPHA - default=false)
        AllBeta=true|false (BETA - default=false)
        -AnyVolumeDataSource=true|false (ALPHA - default=false)
        +AnyVolumeDataSource=true|false (BETA - default=true)
        AppArmor=true|false (BETA - default=true)
        CPUManager=true|false (BETA - default=true)
        -CPUManagerPolicyOptions=true|false (ALPHA - default=false)
        +CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
        +CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
        +CPUManagerPolicyOptions=true|false (BETA - default=true)
        CSIInlineVolume=true|false (BETA - default=true)
        CSIMigration=true|false (BETA - default=true)
        -CSIMigrationAWS=true|false (BETA - default=false)
        -CSIMigrationAzureDisk=true|false (BETA - default=false)
        -CSIMigrationAzureFile=true|false (BETA - default=false)
        -CSIMigrationGCE=true|false (BETA - default=false)
        -CSIMigrationOpenStack=true|false (BETA - default=true)
        +CSIMigrationAWS=true|false (BETA - default=true)
        +CSIMigrationAzureFile=true|false (BETA - default=true)
        +CSIMigrationGCE=true|false (BETA - default=true)
        +CSIMigrationPortworx=true|false (ALPHA - default=false)
        +CSIMigrationRBD=true|false (ALPHA - default=false)
        CSIMigrationvSphere=true|false (BETA - default=false)
        -CSIStorageCapacity=true|false (BETA - default=true)
        -CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
        CSIVolumeHealth=true|false (ALPHA - default=false)
        -CSRDuration=true|false (BETA - default=true)
        -ConfigurableFSGroupPolicy=true|false (BETA - default=true)
        -ControllerManagerLeaderMigration=true|false (BETA - default=true)
        +ContextualLogging=true|false (ALPHA - default=false)
        +CronJobTimeZone=true|false (ALPHA - default=false)
        CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
        +CustomResourceValidationExpressions=true|false (ALPHA - default=false)
        DaemonSetUpdateSurge=true|false (BETA - default=true)
        -DefaultPodTopologySpread=true|false (BETA - default=true)
        -DelegateFSGroupToCSIDriver=true|false (ALPHA - default=false)
        +DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
        DevicePlugins=true|false (BETA - default=true)
        DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
        DisableCloudProviders=true|false (ALPHA - default=false)
        -DownwardAPIHugePages=true|false (BETA - default=false)
        -EfficientWatchResumption=true|false (BETA - default=true)
        +DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
        +DownwardAPIHugePages=true|false (BETA - default=true)
        EndpointSliceTerminatingCondition=true|false (BETA - default=true)
        -EphemeralContainers=true|false (ALPHA - default=false)
        -ExpandCSIVolumes=true|false (BETA - default=true)
        -ExpandInUsePersistentVolumes=true|false (BETA - default=true)
        -ExpandPersistentVolumes=true|false (BETA - default=true)
        +EphemeralContainers=true|false (BETA - default=true)
        ExpandedDNSConfig=true|false (ALPHA - default=false)
        ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
        -GenericEphemeralVolume=true|false (BETA - default=true)
        +GRPCContainerProbe=true|false (BETA - default=true)
        GracefulNodeShutdown=true|false (BETA - default=true)
        +GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
        HPAContainerMetrics=true|false (ALPHA - default=false)
        HPAScaleToZero=true|false (ALPHA - default=false)
        -IPv6DualStack=true|false (BETA - default=true)
        +HonorPVReclaimPolicy=true|false (ALPHA - default=false)
        +IdentifyPodOS=true|false (BETA - default=true)
        InTreePluginAWSUnregister=true|false (ALPHA - default=false)
        InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
        -InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
        -InTreePluginGCEUnregister=true|false (ALPHA - default=false)
        +InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
+InTreePluginGCEUnregister=true|false (ALPHA - default=false)<br/>
        InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
        +InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
        +InTreePluginRBDUnregister=true|false (ALPHA - default=false)
        InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
        -IndexedJob=true|false (BETA - default=true)
        -IngressClassNamespacedParams=true|false (BETA - default=true)
        -JobTrackingWithFinalizers=true|false (ALPHA - default=false)
        -KubeletCredentialProviders=true|false (ALPHA - default=false)
        +JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
        +JobReadyPods=true|false (BETA - default=true)
        +JobTrackingWithFinalizers=true|false (BETA - default=false)
        +KubeletCredentialProviders=true|false (BETA - default=true)
        KubeletInUserNamespace=true|false (ALPHA - default=false)
        KubeletPodResources=true|false (BETA - default=true)
        -KubeletPodResourcesGetAllocatable=true|false (ALPHA - default=false)
        +KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
        +LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
        LocalStorageCapacityIsolation=true|false (BETA - default=true)
        LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
        LogarithmicScaleDown=true|false (BETA - default=true)
        +MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
        MemoryManager=true|false (BETA - default=true)
        MemoryQoS=true|false (ALPHA - default=false)
        -MixedProtocolLBService=true|false (ALPHA - default=false)
        +MinDomainsInPodTopologySpread=true|false (ALPHA - default=false)
        +MixedProtocolLBService=true|false (BETA - default=true)
        NetworkPolicyEndPort=true|false (BETA - default=true)
        +NetworkPolicyStatus=true|false (ALPHA - default=false)
        +NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
        NodeSwap=true|false (ALPHA - default=false)
        -NonPreemptingPriority=true|false (BETA - default=true)
        -PodAffinityNamespaceSelector=true|false (BETA - default=true)
        +OpenAPIEnums=true|false (BETA - default=true)
        +OpenAPIV3=true|false (BETA - default=true)
        +PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
        PodDeletionCost=true|false (BETA - default=true)
        -PodOverhead=true|false (BETA - default=true)
        -PodSecurity=true|false (ALPHA - default=false)
        -PreferNominatedNode=true|false (BETA - default=true)
        +PodSecurity=true|false (BETA - default=true)
        ProbeTerminationGracePeriod=true|false (BETA - default=false)
        ProcMountType=true|false (ALPHA - default=false)
        ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
        QOSReserved=true|false (ALPHA - default=false)
        ReadWriteOncePod=true|false (ALPHA - default=false)
        +RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
        RemainingItemCount=true|false (BETA - default=true)
        -RemoveSelfLink=true|false (BETA - default=true)
        RotateKubeletServerCertificate=true|false (BETA - default=true)
        SeccompDefault=true|false (ALPHA - default=false)
        +ServerSideFieldValidation=true|false (ALPHA - default=false)
        +ServiceIPStaticSubrange=true|false (ALPHA - default=false)
        ServiceInternalTrafficPolicy=true|false (BETA - default=true)
        -ServiceLBNodePortControl=true|false (BETA - default=true)
        -ServiceLoadBalancerClass=true|false (BETA - default=true)
        SizeMemoryBackedVolumes=true|false (BETA - default=true)
        -StatefulSetMinReadySeconds=true|false (ALPHA - default=false)
        +StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
        +StatefulSetMinReadySeconds=true|false (BETA - default=true)
        StorageVersionAPI=true|false (ALPHA - default=false)
        StorageVersionHash=true|false (BETA - default=true)
        -SuspendJob=true|false (BETA - default=true)
        -TTLAfterFinished=true|false (BETA - default=true)
        -TopologyAwareHints=true|false (ALPHA - default=false)
        +TopologyAwareHints=true|false (BETA - default=true)
        TopologyManager=true|false (BETA - default=true)
        VolumeCapacityPriority=true|false (ALPHA - default=false)
        WinDSR=true|false (ALPHA - default=false)
        WinOverlay=true|false (BETA - default=true)
        -WindowsHostProcessContainers=true|false (ALPHA - default=false) +WindowsHostProcessContainers=true|false (BETA - default=true) -->

        一组 key=value 对,用来描述测试性/试验性功能的特性门控。可选项有: APIListChunking=true|false (BETA - 默认值=true)
        @@ -1297,96 +1250,99 @@ APIServerIdentity=true|false (ALPHA - 默认值=false)
        APIServerTracing=true|false (ALPHA - 默认值=false)
        AllAlpha=true|false (ALPHA - 默认值=false)
        AllBeta=true|false (BETA - 默认值=false)
        -AnyVolumeDataSource=true|false (ALPHA - 默认值=false)
        +AnyVolumeDataSource=true|false (BETA - 默认值=true)
        AppArmor=true|false (BETA - 默认值=true)
        CPUManager=true|false (BETA - 默认值=true)
        -CPUManagerPolicyOptions=true|false (ALPHA - 默认值=false)
        +CPUManagerPolicyAlphaOptions=true|false (ALPHA - 默认值=false)
        +CPUManagerPolicyBetaOptions=true|false (BETA - 默认值=true)
        +CPUManagerPolicyOptions=true|false (BETA - 默认值=true)
        CSIInlineVolume=true|false (BETA - 默认值=true)
        CSIMigration=true|false (BETA - 默认值=true)
        -CSIMigrationAWS=true|false (BETA - 默认值=false)
        -CSIMigrationAzureDisk=true|false (BETA - 默认值=false)
        -CSIMigrationAzureFile=true|false (BETA - 默认值=false)
        -CSIMigrationGCE=true|false (BETA - 默认值=false)
        -CSIMigrationOpenStack=true|false (BETA - 默认值=true)
        +CSIMigrationAWS=true|false (BETA - 默认值=true)
        +CSIMigrationAzureFile=true|false (BETA - 默认值=true)
        +CSIMigrationGCE=true|false (BETA - 默认值=true)
        +CSIMigrationPortworx=true|false (ALPHA - 默认值=false)
        +CSIMigrationRBD=true|false (ALPHA - 默认值=false)
        CSIMigrationvSphere=true|false (BETA - 默认值=false)
        -CSIStorageCapacity=true|false (BETA - 默认值=true)
        -CSIVolumeFSGroupPolicy=true|false (BETA - 默认值=true)
        CSIVolumeHealth=true|false (ALPHA - 默认值=false)
        -CSRDuration=true|false (BETA - 默认值=true)
        -ConfigurableFSGroupPolicy=true|false (BETA - 默认值=true)
        -ControllerManagerLeaderMigration=true|false (BETA - 默认值=true)
        +ContextualLogging=true|false (ALPHA - 默认值=false)
        +CronJobTimeZone=true|false (ALPHA - 默认值=false)
        CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值=false)
        +CustomResourceValidationExpressions=true|false (ALPHA - 默认值=false)
        DaemonSetUpdateSurge=true|false (BETA - 默认值=true)
        -默认值PodTopologySpread=true|false (BETA - 默认值=true)
        -DelegateFSGroupToCSIDriver=true|false (ALPHA - 默认值=false)
        +DelegateFSGroupToCSIDriver=true|false (BETA - 默认值=true)
        DevicePlugins=true|false (BETA - 默认值=true)
        DisableAcceleratorUsageMetrics=true|false (BETA - 默认值=true)
        DisableCloudProviders=true|false (ALPHA - 默认值=false)
        -DownwardAPIHugePages=true|false (BETA - 默认值=false)
        -EfficientWatchResumption=true|false (BETA - 默认值=true)
        +DisableKubeletCloudCredentialProviders=true|false (ALPHA - 默认值=false)
        +DownwardAPIHugePages=true|false (BETA - 默认值=true)
        EndpointSliceTerminatingCondition=true|false (BETA - 默认值=true)
        -EphemeralContainers=true|false (ALPHA - 默认值=false)
        -ExpandCSIVolumes=true|false (BETA - 默认值=true)
        -ExpandInUsePersistentVolumes=true|false (BETA - 默认值=true)
        -ExpandPersistentVolumes=true|false (BETA - 默认值=true)
        +EphemeralContainers=true|false (BETA - 默认值=true)
        ExpandedDNSConfig=true|false (ALPHA - 默认值=false)
        -ExperimentalHostUserNamespace默认值ing=true|false (BETA - 默认值=false)
        -GenericEphemeralVolume=true|false (BETA - 默认值=true)
        +ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值=false)
        +GRPCContainerProbe=true|false (BETA - 默认值=true)
        GracefulNodeShutdown=true|false (BETA - 默认值=true)
        +GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - 默认值=true)
        HPAContainerMetrics=true|false (ALPHA - 默认值=false)
        HPAScaleToZero=true|false (ALPHA - 默认值=false)
        -IPv6DualStack=true|false (BETA - 默认值=true)
        +HonorPVReclaimPolicy=true|false (ALPHA - 默认值=false)
        +IdentifyPodOS=true|false (BETA - 默认值=true)
        InTreePluginAWSUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值=false)
        -InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值=false)
        -InTreePluginGCEUnregister=true|false (ALPHA - 默认值=false)
        +InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值=false)
+InTreePluginGCEUnregister=true|false (ALPHA - 默认值=false)<br/>
        InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值=false)
        +InTreePluginPortworxUnregister=true|false (ALPHA - 默认值=false)
        +InTreePluginRBDUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginvSphereUnregister=true|false (ALPHA - 默认值=false)
        -IndexedJob=true|false (BETA - 默认值=true)
        -IngressClassNamespacedParams=true|false (BETA - 默认值=true)
        -JobTrackingWithFinalizers=true|false (ALPHA - 默认值=false)
        -KubeletCredentialProviders=true|false (ALPHA - 默认值=false)
        +JobMutableNodeSchedulingDirectives=true|false (BETA - 默认值=true)
        +JobReadyPods=true|false (BETA - 默认值=true)
        +JobTrackingWithFinalizers=true|false (BETA - 默认值=false)
        +KubeletCredentialProviders=true|false (BETA - 默认值=true)
        KubeletInUserNamespace=true|false (ALPHA - 默认值=false)
        KubeletPodResources=true|false (BETA - 默认值=true)
        -KubeletPodResourcesGetAllocatable=true|false (ALPHA - 默认值=false)
        +KubeletPodResourcesGetAllocatable=true|false (BETA - 默认值=true)
        +LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - 默认值=true)
        LocalStorageCapacityIsolation=true|false (BETA - 默认值=true)
        LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值=false)
        LogarithmicScaleDown=true|false (BETA - 默认值=true)
        +MaxUnavailableStatefulSet=true|false (ALPHA - 默认值=false)
        MemoryManager=true|false (BETA - 默认值=true)
        MemoryQoS=true|false (ALPHA - 默认值=false)
        -MixedProtocolLBService=true|false (ALPHA - 默认值=false)
        +MinDomainsInPodTopologySpread=true|false (ALPHA - 默认值=false)
        +MixedProtocolLBService=true|false (BETA - 默认值=true)
        NetworkPolicyEndPort=true|false (BETA - 默认值=true)
        +NetworkPolicyStatus=true|false (ALPHA - 默认值=false)
        +NodeOutOfServiceVolumeDetach=true|false (ALPHA - 默认值=false)
        NodeSwap=true|false (ALPHA - 默认值=false)
        -NonPreemptingPriority=true|false (BETA - 默认值=true)
        -PodAffinityNamespaceSelector=true|false (BETA - 默认值=true)
        +OpenAPIEnums=true|false (BETA - 默认值=true)
        +OpenAPIV3=true|false (BETA - 默认值=true)
        +PodAndContainerStatsFromCRI=true|false (ALPHA - 默认值=false)
        PodDeletionCost=true|false (BETA - 默认值=true)
        -PodOverhead=true|false (BETA - 默认值=true)
        -PodSecurity=true|false (ALPHA - 默认值=false)
        -PreferNominatedNode=true|false (BETA - 默认值=true)
        +PodSecurity=true|false (BETA - 默认值=true)
        ProbeTerminationGracePeriod=true|false (BETA - 默认值=false)
        ProcMountType=true|false (ALPHA - 默认值=false)
        ProxyTerminatingEndpoints=true|false (ALPHA - 默认值=false)
        QOSReserved=true|false (ALPHA - 默认值=false)
        ReadWriteOncePod=true|false (ALPHA - 默认值=false)
        +RecoverVolumeExpansionFailure=true|false (ALPHA - 默认值=false)
        RemainingItemCount=true|false (BETA - 默认值=true)
        -RemoveSelfLink=true|false (BETA - 默认值=true)
        RotateKubeletServerCertificate=true|false (BETA - 默认值=true)
SeccompDefault=true|false (ALPHA - 默认值=false)<br/>
        +ServerSideFieldValidation=true|false (ALPHA - 默认值=false)
        +ServiceIPStaticSubrange=true|false (ALPHA - 默认值=false)
        ServiceInternalTrafficPolicy=true|false (BETA - 默认值=true)
        -ServiceLBNodePortControl=true|false (BETA - 默认值=true)
        -ServiceLoadBalancerClass=true|false (BETA - 默认值=true)
        SizeMemoryBackedVolumes=true|false (BETA - 默认值=true)
        -StatefulSetMinReadySeconds=true|false (ALPHA - 默认值=false)
        +StatefulSetAutoDeletePVC=true|false (ALPHA - 默认值=false)
        +StatefulSetMinReadySeconds=true|false (BETA - 默认值=true)
        StorageVersionAPI=true|false (ALPHA - 默认值=false)
        StorageVersionHash=true|false (BETA - 默认值=true)
        -SuspendJob=true|false (BETA - 默认值=true)
        -TTLAfterFinished=true|false (BETA - 默认值=true)
        -TopologyAwareHints=true|false (ALPHA - 默认值=false)
        +TopologyAwareHints=true|false (BETA - 默认值=true)
        TopologyManager=true|false (BETA - 默认值=true)
        VolumeCapacityPriority=true|false (ALPHA - 默认值=false)
        WinDSR=true|false (ALPHA - 默认值=false)
        WinOverlay=true|false (BETA - 默认值=true)
        -WindowsHostProcessContainers=true|false (ALPHA - 默认值=false)

        +WindowsHostProcessContainers=true|false (BETA - 默认值=true)

        @@ -1407,10 +1363,10 @@ Max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point. --> 为防止 HTTP/2 客户端卡在单个 API 服务器上,可启用随机关闭连接(GOAWAY)。 客户端的其他运行中请求将不会受到影响,并且客户端将重新连接, -可能会在再次通过负载平衡器后登陆到其他 API 服务器上。 -此参数设置将发送 GOAWAY 的请求的比例。 -具有单个 API 服务器或不使用负载平衡器的群集不应启用此功能。 -最小值为0(关闭),最大值为 .02(1/50 请求); 建议使用 .001(1/1000)。 +可能会在再次通过负载平衡器后登陆到其他 API 服务器上。 +此参数设置将发送 GOAWAY 的请求的比例。 +具有单个 API 服务器或不使用负载平衡器的集群不应启用此功能。 +最小值为0(关闭),最大值为 .02(1/50 请求);建议使用 .001(1/1000)。 @@ -1573,56 +1529,6 @@ post-start hooks will complete successfully and therefore return true. -
        - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -1641,26 +1547,14 @@ Maximum number of seconds between log flushes - - - - - - - @@ -1823,11 +1717,11 @@ Repeat this flag to specify multiple claims. @@ -1865,20 +1759,7 @@ If not provided, username claims other than 'email' are prefixed - - - - - - - - + - + @@ -2123,9 +2004,9 @@ and all are used to determine which issuers are accepted. 颁发者将在已办法令牌的 "iss" 声明中检查此标识符。 此值为字符串或 URI。 如果根据 OpenID Discovery 1.0 规范检查此选项不是有效的 URI,则即使特性门控设置为 true, -ServiceAccountIssuerDiscovery 功能也将保持禁用状态。 -强烈建议该值符合 OpenID 规范:https://openid.net/specs/openid-connect-discovery-1_0.html。 -实践中,这意味着 service-account-issuer 取值必须是 HTTPS URL。 +ServiceAccountIssuerDiscovery 功能也将保持禁用状态。 +强烈建议该值符合 OpenID 规范: https://openid.net/specs/openid-connect-discovery-1_0.html 。 +实践中,这意味着 service-account-issuer 取值必须是 HTTPS URL。 还强烈建议此 URL 能够在 {service-account-issuer}/.well-known/openid-configuration 处提供 OpenID 发现文档。 当此值被多次指定时,第一次的值用于生成令牌,所有的值用于确定接受哪些发行人。 @@ -2141,13 +2022,11 @@ ServiceAccountIssuerDiscovery 功能也将保持禁用状态。 Overrides the URI for the JSON Web Key Set in the discovery doc served at /.well-known/openid-configuration. This flag is useful if the discovery docand key set are served to relying parties from a URL other than the -API server's external (as auto-detected or overridden with external-hostname). -Only valid if the ServiceAccountIssuerDiscovery feature gate is enabled. +API server's external (as auto-detected or overridden with external-hostname). --> 覆盖 /.well-known/openid-configuration 提供的发现文档中 JSON Web 密钥集的 URI。 如果发现文档和密钥集是通过 API 服务器外部 (而非自动检测到或被外部主机名覆盖)之外的 URL 提供给依赖方的,则此标志很有用。 -仅在启用 ServiceAccountIssuerDiscovery 特性门控的情况下有效。 @@ -2161,12 +2040,12 @@ File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. -Must be specified when --service-account-signing-key is provided +Must be specified when --service-account-signing-key-file is provided --> 包含 PEM 编码的 x509 RSA 或 ECDSA 私钥或公钥的文件,用于验证 ServiceAccount 令牌。 指定的文件可以包含多个键,并且可以使用不同的文件多次指定标志。 如果未指定,则使用 --tls-private-key-file。 -提供 --service-account-signing-key 时必须指定。 +提供 --service-account-signing-key-file 时必须指定。 @@ -2279,38 +2158,18 @@ This can be used to allow load balancer to stop sending traffic to this server. - - - - - - - - - - - - - - - + @@ -2350,11 +2209,10 @@ List of directives for HSTS, comma separated. If this list is empty, then HSTS d --> 为 HSTS 所设置的指令列表,用逗号分隔。 如果此列表为空,则不会添加 HSTS 指令。 -例如: 'max-age=31536000,includeSubDomains,preload' +例如:'max-age=31536000,includeSubDomains,preload'

        -
        @@ -2383,15 +2241,17 @@ the public address and saved to the directory specified by --cert-dir. Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
        Preferred values: -TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
        +TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, +TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
        Insecure values: -TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. +TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. --> 服务器的密码套件的列表,以逗号分隔。如果省略,将使用默认的 Go 密码套件。
        首选值: -TLS_AES_128_GCM_SHA256、TLS_AES_256_GCM_SHA384、TLS_CHACHA20_POLY1305_SHA256、TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA、TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256、TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA、TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384、TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305、TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256、TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256、TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384、TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305、TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256、TLS_RSA_WITH_3DES_EDE_CBC_SHA、TLS_RSA_WITH_AES_128_CBC_SHA、TLS_RSA_WITH_AES_128_GCM_SHA256、 TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. +TLS_AES_128_GCM_SHA256、TLS_AES_256_GCM_SHA384、TLS_CHACHA20_POLY1305_SHA256、TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA、 +TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256、TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA、TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384、TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305、TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256、TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256、TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384、TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305、TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256、TLS_RSA_WITH_AES_128_CBC_SHA、TLS_RSA_WITH_AES_128_GCM_SHA256、TLS_RSA_WITH_AES_256_CBC_SHA、TLS_RSA_WITH_AES_256_GCM_SHA384。 不安全的值有: -TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256、TLS_ECDHE_ECDSA_WITH_RC4_128_SHA、TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256、TLS_ECDHE_RSA_WITH_RC4_128_SHA、TLS_RSA_WITH_AES_128_CBC_SHA256、TLS_RSA_WITH_RC4_128_SHA。 +TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256、TLS_ECDHE_ECDSA_WITH_RC4_128_SHA、TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256、TLS_ECDHE_RSA_WITH_RC4_128_SHA、TLS_RSA_WITH_3DES_EDE_CBC_SHA、TLS_RSA_WITH_AES_128_CBC_SHA256、TLS_RSA_WITH_RC4_128_SHA。 @@ -2420,7 +2280,7 @@ File containing the default x509 private key matching --tls-cert-file.
        - + - + @@ -2547,4 +2408,3 @@ heuristics, others default to default-watch-cache-size
        默认 ClusterRole 默认 ClusterRoleBinding
        system:auth-delegator - @@ -1251,8 +1499,8 @@ This is commonly used by add-on API servers for unified authentication and autho
        system:heapster @@ -1264,8 +1512,8 @@ Role for the Heapster compo
        system:kube-aggregator
        system:kube-dns -在 kube-system 名字空间中的 kube-dns 服务账户kube-system 名字空间中的 kube-dns 服务账户 kube-dns 组件定义的角色。 -kube-dns 组件定义的角色。
        system:kubelet-api-admin - 允许 kubelet API 的完全访问权限。
        system:node-bootstrapper - 允许访问执行 -kubelet TLS 启动引导 +kubelet TLS 启动引导 所需要的资源。
        system:node-problem-detector -node-problem-detector 组件定义的角色。 @@ -1326,30 +1572,30 @@ Role for the node-
        system:persistent-volume-provisioner - 允许访问大部分 -动态卷驱动 - -所需要的资源。
        system:monitoringsystem:monitoring @@ -1368,10 +1614,10 @@ Allows read access to control-plane monitoring endpoints The Kubernetes {{< glossary_tooltip term_id="kube-controller-manager" text="controller manager" >}} runs {{< glossary_tooltip term_id="controller" text="controllers" >}} that are built in to the Kubernetes control plane. -When invoked with `-use-service-account-credentials`, kube-controller-manager starts each controller +When invoked with `--use-service-account-credentials`, kube-controller-manager starts each controller using a separate service account. Corresponding roles exist for each built-in controller, prefixed with `system:controller:`. -If the controller manager is not started with `-use-service-account-credentials`, it runs all control loops +If the controller manager is not started with `--use-service-account-credentials`, it runs all control loops using its own credential, which must be granted all the relevant roles. These roles include: --> @@ -1379,12 +1625,12 @@ These roles include: Kubernetes {{< glossary_tooltip term_id="kube-controller-manager" text="控制器管理器" >}} 运行内建于 Kubernetes 控制面的{{< glossary_tooltip term_id="controller" text="控制器" >}}。 -当使用 `--use-service-account-credentials` 参数启动时, kube-controller-manager +当使用 `--use-service-account-credentials` 参数启动时,kube-controller-manager 使用单独的服务账户来启动每个控制器。 每个内置控制器都有相应的、前缀为 `system:controller:` 的角色。 如果控制管理器启动时未设置 `--use-service-account-credentials`, 它使用自己的身份凭据来运行所有的控制器,该身份必须被授予所有相关的角色。 -这些角色包括: +这些角色包括: * `system:controller:attachdetach-controller` * `system:controller:certificate-controller` @@ -1415,12 +1661,12 @@ Kubernetes {{< glossary_tooltip term_id="kube-controller-manager" text="控制 * `system:controller:ttl-controller` -## 初始化与预防权限提升 +## 初始化与预防权限提升 {#privilege-escalation-prevention-and-bootstrapping} RBAC API 会阻止用户通过编辑角色或者角色绑定来提升权限。 由于这一点是在 API 级别实现的,所以在 RBAC 鉴权组件未启用的状态下依然可以正常工作。 @@ -1434,7 +1680,7 @@ You can only create/update a role if at least one of the following things is tru (cluster-wide for a ClusterRole, within the same namespace or cluster-wide for a Role). 2. You are granted explicit permission to perform the `escalate` verb on the `roles` or `clusterroles` resource in the `rbac.authorization.k8s.io` API group. --> -### 对角色创建或更新的限制 +### 对角色创建或更新的限制 {#restrictions-on-role-creation-or-update} 只有在符合下列条件之一的情况下,你才能创建/更新角色: @@ -1470,7 +1716,7 @@ You can only create/update a role binding if you already have all the permission For example, if `user-1` does not have the ability to list Secrets cluster-wide, they cannot create a ClusterRoleBinding to a role that grants that permission. To allow a user to create/update role bindings: --> -### 对角色绑定创建或更新的限制 +### 对角色绑定创建或更新的限制 {#restrictions-on-role-binding-creation-or-update} 只有你已经具有了所引用的角色中包含的全部权限时,或者你被授权在所引用的角色上执行 `bind` 动词时,你才可以创建或更新角色绑定。这里的权限与角色绑定的作用域相同。 @@ -1495,6 +1741,37 @@ For example, this ClusterRole and RoleBinding would allow `user-1` to grant othe 例如,下面的 ClusterRole 和 RoleBinding 将允许用户 `user-1` 把名字空间 `user-1-namespace` 中的 `admin`、`edit` 和 `view` 角色赋予其他用户: + ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole @@ -1529,33 +1806,32 @@ subjects: When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have. To bootstrap initial roles and role bindings: -* Use a credential with the `system:masters` group, which is bound to the `cluster-admin` super-user role by the default bindings. 
-* If your API server runs with the insecure port enabled (`-insecure-port`), you can also make API calls via that port, which does not enforce authentication or authorization. +* Use a credential with the "system:masters" group, which is bound to the "cluster-admin" super-user role by the default bindings. +* If your API server runs with the insecure port enabled (`--insecure-port`), you can also make API calls via that port, which does not enforce authentication or authorization. --> 当启动引导第一个角色和角色绑定时,需要为初始用户授予他们尚未拥有的权限。 对初始角色和角色绑定进行初始化时需要: * 使用用户组为 `system:masters` 的凭据,该用户组由默认绑定关联到 `cluster-admin` 这个超级用户角色。 -* 如果你的 API 服务器启动时启用了不安全端口(使用 `--insecure-port`), 你也可以通过 - 该端口调用 API ,这样的操作会绕过身份验证或鉴权。 +* 如果你的 API 服务器启动时启用了不安全端口(使用 `--insecure-port`),你也可以通过 + 该端口调用 API,这样的操作会绕过身份验证或鉴权。 +## 一些命令行工具 {#command-line-utilities} ### `kubectl create role` -Creates a `Role` object defining permissions within a single namespace. Examples: + -## 一些命令行工具 - -### `kubectl create role` - 创建 Role 对象,定义在某一名字空间中的权限。例如: -* 创建名称为 "pod-reader" 的 Role 对象,允许用户对 Pods 执行 `get`、`watch` 和 `list` 操作: +* 创建名称为 “pod-reader” 的 Role 对象,允许用户对 Pods 执行 `get`、`watch` 和 `list` 操作: ```shell kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods @@ -1564,16 +1840,16 @@ Creates a `Role` object defining permissions within a single namespace. Examples -* 创建名称为 "pod-reader" 的 Role 对象并指定 `resourceNames`: +* 创建名称为 “pod-reader” 的 Role 对象并指定 `resourceNames`: ```shell kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod ``` -* 创建名为 "foo" 的 Role 对象并指定 `apiGroups`: +* 创建名为 “foo” 的 Role 对象并指定 `apiGroups`: ```shell kubectl create role foo --verb=get,list,watch --resource=replicasets.apps @@ -1582,7 +1858,7 @@ Creates a `Role` object defining permissions within a single namespace. Examples -* 创建名为 "foo" 的 Role 对象并指定子资源权限: +* 创建名为 “foo” 的 Role 对象并指定子资源权限: ```shell kubectl create role foo --verb=get,list,watch --resource=pods,pods/status @@ -1591,7 +1867,7 @@ Creates a `Role` object defining permissions within a single namespace. Examples -* 创建名为 "my-component-lease-holder" 的 Role 对象,使其具有对特定名称的 +* 创建名为 “my-component-lease-holder” 的 Role 对象,使其具有对特定名称的 资源执行 get/update 的权限: ```shell @@ -1607,7 +1883,7 @@ Creates a ClusterRole. Examples: --> 创建 ClusterRole 对象。例如: -* 创建名称为 "pod-reader" 的 ClusterRole`对象,允许用户对 Pods 对象执行 `get`、 +* 创建名称为 “pod-reader” 的 ClusterRole 对象,允许用户对 Pods 对象执行 `get`、 `watch` 和 `list` 操作: ```shell @@ -1617,7 +1893,7 @@ Creates a ClusterRole. Examples: -* 创建名为 "pod-reader" 的 ClusterRole 对象并指定 `resourceNames`: +* 创建名为 “pod-reader” 的 ClusterRole 对象并指定 `resourceNames`: ```shell kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod @@ -1626,7 +1902,7 @@ Creates a ClusterRole. Examples: -* 创建名为 "foo" 的 ClusterRole 对象并指定 `apiGroups`: +* 创建名为 “foo” 的 ClusterRole 对象并指定 `apiGroups`: ```shell kubectl create clusterrole foo --verb=get,list,watch --resource=replicasets.apps @@ -1635,7 +1911,7 @@ Creates a ClusterRole. Examples: -* 创建名为 "foo" 的 ClusterRole 对象并指定子资源: +* 创建名为 “foo” 的 ClusterRole 对象并指定子资源: ```shell kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status @@ -1644,7 +1920,7 @@ Creates a ClusterRole. Examples: -* 创建名为 "foo" 的 ClusterRole 对象并指定 `nonResourceURL`: +* 创建名为 “foo” 的 ClusterRole 对象并指定 `nonResourceURL`: ```shell kubectl create clusterrole "foo" --verb=get --non-resource-url=/logs/* @@ -1653,7 +1929,7 @@ Creates a ClusterRole. 
Examples: -* 创建名为 "monitoring" 的 ClusterRole 对象并指定 `aggregationRule`: +* 创建名为 “monitoring” 的 ClusterRole 对象并指定 `aggregationRule`: ```shell kubectl create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" @@ -1668,7 +1944,7 @@ Grants a Role or ClusterRole within a specific namespace. Examples: --> 在特定的名字空间中对 `Role` 或 `ClusterRole` 授权。例如: -* 在名字空间 "acme" 中,将名为 `admin` 的 ClusterRole 中的权限授予名称 "bob" 的用户: +* 在名字空间 “acme” 中,将名为 `admin` 的 ClusterRole 中的权限授予名称 “bob” 的用户: ```shell kubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme @@ -1677,7 +1953,7 @@ Grants a Role or ClusterRole within a specific namespace. Examples: -* 在名字空间 "acme" 中,将名为 `view` 的 ClusterRole 中的权限授予名字空间 "acme" +* 在名字空间 “acme” 中,将名为 `view` 的 ClusterRole 中的权限授予名字空间 “acme” 中名为 `myapp` 的服务账户: ```shell @@ -1687,8 +1963,8 @@ Grants a Role or ClusterRole within a specific namespace. Examples: -* 在名字空间 "acme" 中,将名为 `view` 的 ClusterRole 对象中的权限授予名字空间 - "myappnamespace" 中名称为 `myapp` 的服务账户: +* 在名字空间 “acme” 中,将名为 `view` 的 ClusterRole 对象中的权限授予名字空间 + “myappnamespace” 中名称为 `myapp` 的服务账户: ```shell kubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme @@ -1704,7 +1980,7 @@ Grants a ClusterRole across the entire cluster (all namespaces). Examples: 在整个集群(所有名字空间)中用 ClusterRole 授权。例如: * 在整个集群范围,将名为 `cluster-admin` 的 ClusterRole 中定义的权限授予名为 - "root" 用户: + “root” 用户: ```shell kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root @@ -1714,7 +1990,7 @@ Grants a ClusterRole across the entire cluster (all namespaces). Examples: * Across the entire cluster, grant the permissions in the "system:node-proxier" ClusterRole to a user named "system:kube-proxy": --> * 在整个集群范围内,将名为 `system:node-proxier` 的 ClusterRole 的权限授予名为 - "system:kube-proxy" 的用户: + “system:kube-proxy” 的用户: ```shell kubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy @@ -1723,8 +1999,8 @@ Grants a ClusterRole across the entire cluster (all namespaces). 
Examples: -* 在整个集群范围内,将名为 `view` 的 ClusterRole 中定义的权限授予 "acme" 名字空间中 - 名为 "myapp" 的服务账户: +* 在整个集群范围内,将名为 `view` 的 ClusterRole 中定义的权限授予 “acme” 名字空间中 + 名为 “myapp” 的服务账户: ```shell kubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp @@ -1762,7 +2038,7 @@ Examples: * 测试应用 RBAC 对象的清单文件,显示将要进行的更改: ```shell - kubectl auth reconcile -f my-rbac-rules.yaml --dry-run + kubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client ``` -* 应用 RBAC 对象的清单文件, 删除角色中的额外权限和绑定中的其他主体: +* 应用 RBAC 对象的清单文件,删除角色中的额外权限和绑定中的其他主体: ```shell kubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions ``` -查看 CLI 帮助获取详细的用法。 - - ## 服务账户权限 {#service-account-permissions} @@ -1805,9 +2077,9 @@ Broader grants can give unnecessary (and potentially escalating) API access to s 但是不会对 `kube-system` 名字空间之外的服务账户授予权限。 (除了授予所有已认证用户的发现权限) -这使得你可以根据需要向特定服务账户授予特定权限。 +这使得你可以根据需要向特定 ServiceAccount 授予特定权限。 细粒度的角色绑定可带来更好的安全性,但需要更多精力管理。 -粗粒度的授权可能导致服务账户被授予不必要的 API 访问权限(甚至导致潜在的权限提升), +粗粒度的授权可能导致 ServiceAccount 被授予不必要的 API 访问权限(甚至导致潜在的权限提升), 但更易于管理。 这要求应用在其 Pod 规约中指定 `serviceAccountName`, - 并额外创建服务账户(包括通过 API、应用程序清单、`kubectl create serviceaccount` 等)。 + 并额外创建服务账户(包括通过 API、应用程序清单、`kubectl create serviceaccount` 等)。 - 例如,在名字空间 "my-namespace" 中授予服务账户 "my-sa" 只读权限: + 例如,在名字空间 “my-namespace” 中授予服务账户 “my-sa” 只读权限: ```shell kubectl create rolebinding my-sa-view \ @@ -1840,7 +2112,7 @@ In order from most secure to least secure, the approaches are: -2. 将角色授予某名字空间中的 "default" 服务账户 +2. 将角色授予某名字空间中的 “default” 服务账户 - 如果某应用没有指定 `serviceAccountName`,那么它将使用 "default" 服务账户。 + 如果某应用没有指定 `serviceAccountName`,那么它将使用 “default” 服务账户。 {{< note >}} "default" 服务账户所具有的权限会被授予给名字空间中所有未指定 @@ -1874,20 +2146,20 @@ In order from most secure to least secure, the approaches are: To allow those add-ons to run with super-user access, grant cluster-admin permissions to the "default" service account in the `kube-system` namespace. - {{< note >}} + {{< caution >}} Enabling this means the `kube-system` namespace contains Secrets - that grant super-user access to the API. - {{< /note >}} + that grant super-user access to your cluster's API. + {{< /caution >}} --> - 许多[插件组件](/zh/docs/concepts/cluster-administration/addons/) 在 `kube-system` - 名字空间以 "default" 服务账户运行。 + 许多[插件组件](/zh/docs/concepts/cluster-administration/addons/)在 `kube-system` + 名字空间以 “default” 服务账户运行。 要允许这些插件组件以超级用户权限运行,需要将集群的 `cluster-admin` 权限授予 - `kube-system` 名字空间中的 "default" 服务账户。 + `kube-system` 名字空间中的 “default” 服务账户。 - {{< note >}} - 启用这一配置意味着在 `kube-system` 名字空间中包含以超级用户账号来访问 API + {{< caution >}} + 启用这一配置意味着在 `kube-system` 名字空间中包含以超级用户账号来访问集群 API 的 Secrets。 - {{< /note >}} + {{< /caution >}} ```shell kubectl create clusterrolebinding add-on-cluster-admin \ @@ -1907,7 +2179,7 @@ In order from most secure to least secure, the approaches are: 如果你想要名字空间中所有应用都具有某角色,无论它们使用的什么服务账户, 可以将角色授予该名字空间的服务账户组。 - 例如,在名字空间 "my-namespace" 中的只读权限授予该名字空间中的所有服务账户: + 例如,在名字空间 “my-namespace” 中的只读权限授予该名字空间中的所有服务账户: ```shell kubectl create rolebinding serviceaccounts-view \ @@ -1949,7 +2221,7 @@ In order from most secure to least secure, the approaches are: --> 5. 授予超级用户访问权限给集群范围内的所有服务帐户(强烈不鼓励) - 如果你不关心如何区分权限,你可以将超级用户访问权限授予所有服务账户。 + 如果你不在乎如何区分权限,你可以将超级用户访问权限授予所有服务账户。 {{< warning >}} 这样做会允许所有应用都对你的集群拥有完全的访问权限,并将允许所有能够读取 @@ -1978,19 +2250,16 @@ guidance for restricting this access in existing clusters. 
If you want new clusters to retain this level of access in the aggregated roles, you can create the following ClusterRole: - -{{< codenew file="access/endpoints-aggregated.yaml" >}} --> ## Endpoints 写权限 {#write-access-for-endpoints} 在 Kubernetes v1.22 之前版本创建的集群里, -"edit" 和 "admin" 聚合角色包含对 Endpoints 的写权限。 +“edit” 和 “admin” 聚合角色包含对 Endpoints 的写权限。 作为 [CVE-2021-25740](https://github.com/kubernetes/kubernetes/issues/103675) 的缓解措施, 此访问权限不包含在 Kubernetes 1.22 以及更高版本集群的聚合角色里。 升级到 Kubernetes v1.22 版本的现有集群不会包括此变化。 -[CVE 公告](https://github.com/kubernetes/kubernetes/issues/103675) -包含了在现有集群里限制此访问权限的指引。 +[CVE 公告](https://github.com/kubernetes/kubernetes/issues/103675)包含了在现有集群里限制此访问权限的指引。 如果你希望在新集群的聚合角色里保留此访问权限,你可以创建下面的 ClusterRole: @@ -2010,7 +2279,7 @@ and controllers, but grant *no permissions* to service accounts outside the `kub While far more secure, this can be disruptive to existing workloads expecting to automatically receive API permissions. Here are two approaches for managing this transition: --> -## 从 ABAC 升级 +## 从 ABAC 升级 {#upgrading-from-abac} 原来运行较老版本 Kubernetes 的集群通常会使用限制宽松的 ABAC 策略, 包括授予所有服务帐户全权访问 API 的能力。 @@ -2023,19 +2292,19 @@ Here are two approaches for managing this transition: 这里有两种方法来完成这种转换: ### 并行鉴权 {#parallel-authorizers} -同时运行 RBAC 和 ABAC 鉴权模式, 并指定包含 +同时运行 RBAC 和 ABAC 鉴权模式,并指定包含 [现有的 ABAC 策略](/zh/docs/reference/access-authn-authz/abac/#policy-file-format) 的策略文件: ```shell ---authorization-mode=RBAC,ABAC --authorization-policy-file=mypolicy.json +--authorization-mode=...,RBAC,ABAC --authorization-policy-file=mypolicy.json ``` -如果 API 服务器启动时,RBAC 组件的日志级别为 5 或更高(`--vmodule=rbac*=5` 或 `--v=5`), -你可以在 API 服务器的日志中看到 RBAC 的细节 (前缀 `RBAC:`) +如果 kube-apiserver 启动时,RBAC 组件的日志级别为 5 或更高(`--vmodule=rbac*=5` 或 `--v=5`), +你可以在 API 服务器的日志中看到 RBAC 拒绝的细节(前缀 `RBAC`) 你可以使用这些信息来确定需要将哪些角色授予哪些用户、组或服务帐户。 -一旦你[将角色授予服务账户](#service-account-permissions) ,工作负载运行时 -在服务器日志中没有出现 RBAC 拒绝消息,就可以删除 ABAC 鉴权器。 +一旦你[将角色授予服务账户](#service-account-permissions)且工作负载运行时, +服务器日志中没有出现 RBAC 拒绝消息,就可以删除 ABAC 鉴权器。 ### 宽松的 RBAC 权限 {#permissive-rbac-permissions} -你可以使用 RBAC 角色绑定在多个场合使用宽松的策略。 +你可以使用 RBAC 角色绑定复制宽松的 ABAC 策略。 {{< warning >}} 在你完成到 RBAC 的迁移后,应该调整集群的访问控制,确保相关的策略满足你的信息安全需求。 - diff --git a/content/zh/docs/reference/access-authn-authz/service-accounts-admin.md b/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md similarity index 100% rename from content/zh/docs/reference/access-authn-authz/service-accounts-admin.md rename to content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md diff --git a/content/zh/docs/reference/access-authn-authz/webhook.md b/content/zh-cn/docs/reference/access-authn-authz/webhook.md similarity index 83% rename from content/zh/docs/reference/access-authn-authz/webhook.md rename to content/zh-cn/docs/reference/access-authn-authz/webhook.md index 0ef1e6a18d19c..29032a7353b72 100644 --- a/content/zh/docs/reference/access-authn-authz/webhook.md +++ b/content/zh-cn/docs/reference/access-authn-authz/webhook.md @@ -1,15 +1,9 @@ --- -reviewers: -- erictune -- lavalamp -- deads2k -- liggitt title: Webhook 模式 content_type: concept weight: 95 --- @@ -38,7 +31,7 @@ service when determining user privileges. 
-## 配置文件格式 +## 配置文件格式 {#configuration-file-format} -配置文件的格式使用 [kubeconfig](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)。在文件中,"users" 代表着 API 服务器的 webhook,而 "cluster" 代表着远程服务。 +配置文件的格式使用 [kubeconfig](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)。 +在该文件中,“users” 代表着 API 服务器的 webhook,而 “cluster” 代表着远程服务。 -## 请求载荷 +## 请求载荷 {#request-payloads} -在做认证决策时,API 服务器会 POST 一个 JSON 序列化的 `authorization.k8s.io/v1beta1` `SubjectAccessReview` 对象来描述这个动作。这个对象包含了描述用户请求的字段,同时也包含了需要被访问资源或请求特征的具体信息。 +在做认证决策时,API 服务器会 POST 一个 JSON 序列化的 `authorization.k8s.io/v1beta1` `SubjectAccessReview` +对象来描述这个动作。这个对象包含了描述用户请求的字段,同时也包含了需要被访问资源或请求特征的具体信息。 -需要注意的是 webhook API 对象与其他 Kubernetes API 对象一样都同样都服从[版本兼容规则](/zh/docs/concepts/overview/kubernetes-api/)。实施人员应该了解 beta 对象的更宽松的兼容性承诺,同时确认请求的 "apiVersion" 字段能被正确地反序列化。此外,API 服务器还必须启用 `authorization.k8s.io/v1beta1` API 扩展组 (`--runtime-config=authorization.k8s.io/v1beta1=true`)。 +需要注意的是 webhook API 对象与其他 Kubernetes API 对象一样都同样都遵从[版本兼容规则](/zh/docs/concepts/overview/kubernetes-api/)。 +实施人员应该了解 beta 对象的更宽松的兼容性承诺,同时确认请求的 "apiVersion" 字段能被正确地反序列化。 +此外,API 服务器还必须启用 `authorization.k8s.io/v1beta1` API 扩展组 (`--runtime-config=authorization.k8s.io/v1beta1=true`)。 期待远程服务填充请求的 `status` 字段并响应允许或禁止访问。响应主体的 `spec` 字段被忽略,可以省略。允许的响应将返回: + ```json { "apiVersion": "authorization.k8s.io/v1beta1", @@ -195,7 +193,8 @@ authorizers are configured, they are given a chance to allow the request. If there are no other authorizers, or none of them allow the request, the request is forbidden. The webhook would return: --> -在大多数情况下,第一种方法是首选方法,它指示授权 webhook 不允许或对请求"无意见",但是,如果配置了其他授权者,则可以给他们机会允许请求。如果没有其他授权者,或者没有一个授权者,则该请求被禁止。webhook 将返回: +在大多数情况下,第一种方法是首选方法,它指示授权 webhook 不允许或对请求 “无意见”。 +但是,如果配置了其他授权者,则可以给他们机会允许请求。如果没有其他授权者,或者没有一个授权者,则该请求被禁止。webhook 将返回: ```json { @@ -214,7 +213,7 @@ configured authorizers. This should only be used by webhooks that have detailed knowledge of the full authorizer configuration of the cluster. The webhook would return: --> -第二种方法立即拒绝其他配置的授权者进行短路评估。仅应由对集群的完整授权者配置有详细了解的 webhook 使用。webhook 将返回: +第二种方法立即拒绝其他配置的授权者进行短路评估。仅应由对集群的完整授权者配置有详细了解的 webhook 使用。webhook 将返回: ```json { @@ -252,16 +251,16 @@ Access to non-resource paths are sent as: ``` -非资源类的路径包括:`/api`, `/apis`, `/metrics`, `/resetMetrics`, -`/logs`, `/debug`, `/healthz`, `/swagger-ui/`, `/swaggerapi/`, `/ui`, 和 -`/version`。客户端需要访问 `/api`, `/api/*`, `/apis`, `/apis/*`, 和 `/version` 以便 +非资源类的路径包括:`/api`、`/apis`、`/metrics`、`/logs`、`/debug`、 +`/healthz`、`/livez`、`/openapi/v2`、`/readyz`、和 `/version`。 +客户端需要访问 `/api`、`/api/*`、`/apis`、`/apis/*` 和 `/version` 以便 能发现服务器上有什么资源和版本。对于其他非资源类的路径访问在没有 REST API 访问限制的情况下拒绝。 - `CSIVolumeFSGroupPolicy`:允许 CSIDrivers 使用 `fsGroupPolicy` 字段. 
该字段能控制由 CSIDriver 创建的卷在挂载这些卷时是否支持卷所有权和权限修改。 @@ -1003,10 +1044,13 @@ Each feature gate is designed for enabling/disabling a specific feature: - `ConfigurableFSGroupPolicy`:在 Pod 中挂载卷时,允许用户为 fsGroup 配置卷访问权限和属主变更策略。请参见 [为 Pod 配置卷访问权限和属主变更策略](/zh/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)。 +- `ContextualLogging`:当你启用这个特性门控,支持日志上下文记录的 Kubernetes + 组件会为日志输出添加额外的详细内容。 - `ControllerManagerLeaderMigration`:为 `kube-controller-manager` 和 `cloud-controller-manager` 开启领导者迁移功能。 - `CronJobControllerV2`:使用 {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} 控制器的一种替代实现。否则,系统会选择同一控制器的 v1 版本。 +- `CronJobTimeZone`:允许在 [CronJobs](/zh/docs/concepts/workloads/controllers/cron-jobs/) 中使用 `timeZone` 可选字段。 -- `DynamicKubeletConfig`:启用 kubelet 的动态配置。请参阅 - [重新配置 kubelet](/zh/docs/tasks/administer-cluster/reconfigure-kubelet/)。 +- `DynamicKubeletConfig`:启用 kubelet 的动态配置。 + 除偏差策略场景外,不再支持该功能。该特性门控在 kubelet 1.24 版本中已被移除。 + 请参阅[重新配置 kubelet](/zh/docs/tasks/administer-cluster/reconfigure-kubelet/)。 - `DynamicProvisioningScheduling`:扩展默认调度器以了解卷拓扑并处理 PV 配置。 此特性已在 v1.12 中完全被 `VolumeScheduling` 特性取代。 - `DynamicVolumeProvisioning`:启用持久化卷到 Pod @@ -1221,6 +1267,9 @@ Each feature gate is designed for enabling/disabling a specific feature: when shutting down a node gracefully. - `GRPCContainerProbe`: Enables the gRPC probe method for {Liveness,Readiness,Startup}Probe. See [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe). - `HonorPVReclaimPolicy`: Honor persistent volume reclaim policy when it is `Delete` irrespective of PV-PVC deletion ordering. +For more details, check the + [PersistentVolume deletion protection finalizer](/docs/concepts/storage/persistent-volumes/#persistentvolume-deletion-protection-finalizer) + documentation. --> - `GracefulNodeShutdownBasedOnPodPriority`:允许 kubelet 在体面终止节点时检查 Pod 的优先级。 @@ -1228,6 +1277,7 @@ Each feature gate is designed for enabling/disabling a specific feature: 参阅[配置活跃态、就绪态和启动探针](/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)。 - `HonorPVReclaimPolicy`:无论 PV 和 PVC 的删除顺序如何,当持久卷申领的策略为 `Delete` 时,确保这种策略得到处理。 + 更多详细信息,请参阅 [PersistentVolume 删除保护 finalizer](/zh/docs/concepts/storage/persistent-volumes/#persistentvolume-deletion-protection-finalizer)文档。 - `KubeletPodResources`:启用 kubelet 上 Pod 资源 GRPC 端点。更多详细信息, 请参见[支持设备监控](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)。 @@ -1370,6 +1422,8 @@ Each feature gate is designed for enabling/disabling a specific feature: - `LegacyNodeRoleBehavior`:禁用此门控时,服务负载均衡器中和节点干扰中的原先行为会忽略 `node-role.kubernetes.io/master` 标签,使用 `NodeDisruptionExclusion` 和 `ServiceNodeExclusion` 对应特性所提供的标签。 +- `LegacyServiceAccountTokenNoAutoGeneration`:停止基于 Secret 的自动生成 + [服务账号令牌](/zh/docs/reference/access-authn-authz/authentication/#service-account-tokens). 
- `LogarithmicScaleDown`:启用 Pod 的半随机(semi-random)选择,控制器将根据 Pod 时间戳的对数桶按比例缩小去驱逐 Pod。 -- `MemoryManager`: 允许基于 NUMA 拓扑为容器设置内存亲和性。 -- `MemoryQoS`: 使用 cgroup v2 内存控制器在 pod / 容器上启用内存保护和使用限制。 +- `MaxUnavailableStatefulSet`:启用为 StatefulSet + 的[滚动更新策略](/zh/docs/concepts/workloads/controllers/statefulset/#rolling-updates)设置 + `maxUnavailable` 字段。该字段指定更新过程中不可用 Pod 个数的上限。 +- `MemoryManager`:允许基于 NUMA 拓扑为容器设置内存亲和性。 +- `MemoryQoS`:使用 cgroup v2 内存控制器在 pod / 容器上启用内存保护和使用限制。 +- `MinDomainsInPodTopologySpread`:启用 Pod 的 `minDomains` + [拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/). - `MixedProtocolLBService`:允许在同一 `LoadBalancer` 类型的 Service 实例中使用不同的协议。 - `MountContainers`:允许使用主机上的工具容器作为卷挂载程序。 +- `NodeOutOfServiceVolumeDetach`:当使用 `node.kubernetes.io/out-of-service` + 污点将节点标记为停止服务时,节点上不能容忍这个污点的 Pod 将被强制删除, + 并且该在节点上被终止的 Pod 将立即进行卷分离操作。 - `NodeSwap`: 启用 kubelet 为节点上的 Kubernetes 工作负载分配交换内存的能力。 必须将 `KubeletConfiguration.failSwapOn` 设置为 false 的情况下才能使用。 更多详细信息,请参见[交换内存](/zh/docs/concepts/architecture/nodes/#swap-memory)。 @@ -1524,8 +1600,10 @@ Each feature gate is designed for enabling/disabling a specific feature: - `RemainingItemCount`: Allow the API servers to show a count of remaining items in the response to a [chunking list request](/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks). -- `RemoveSelfLink`: Deprecates and removes `selfLink` from ObjectMeta and - ListMeta. +- `RemoveSelfLink`: Sets the `.metadata.selfLink` field to blank (empty string) for all + objects and collections. This field has been deprecated since the Kubernetes v1.16 + release. When this feature is enabled, the `.metadata.selfLink` field remains part of + the Kubernetes API, but is always unset. - `RequestManagement`: Enables managing request concurrency with prioritization and fairness at each API server. Deprecated by `APIPriorityAndFairness` since 1.17. 
--> @@ -1535,7 +1613,9 @@ Each feature gate is designed for enabling/disabling a specific feature: - `RemainingItemCount`:允许 API 服务器在 [分块列表请求](/zh/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks) 的响应中显示剩余条目的个数。 -- `RemoveSelfLink`:将 ObjectMeta 和 ListMeta 中的 `selfLink` 字段废弃并删除。 +- `RemoveSelfLink`:将所有对象和集合的 `.metadata.selfLink` 字段设置为空(空字符串)。 + 该字段自 Kubernetes v1.16 版本以来已被弃用。 + 启用此功能后,`.metadata.selfLink` 字段仍然是 Kubernetes API 的一部分,但始终未设置。 - `RequestManagement`:允许在每个 API 服务器上通过优先级和公平性管理请求并发性。 自 1.17 以来已被 `APIPriorityAndFairness` 替代。 - `RotateKubeletClientCertificate`:在 kubelet 上启用客户端 TLS 证书的轮换。 更多详细信息,请参见 - [kubelet 配置](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)。 + [kubelet 配置](/zh/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration)。 - `RotateKubeletServerCertificate`:在 kubelet 上启用服务器 TLS 证书的轮换。 更多详细信息,请参见 - [kubelet 配置](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)。 + [kubelet 配置](/zh/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration)。 - `RunAsGroup`:启用对容器初始化过程中设置的主要组 ID 的控制。 - `ServiceLoadBalancerClass`: 为服务启用 `loadBalancerClass` 字段。 有关更多信息,请参见[指定负载均衡器实现类](/zh/docs/concepts/services-networking/service/#load-balancer-class)。 @@ -1636,6 +1722,9 @@ Each feature gate is designed for enabling/disabling a specific feature: 标签,则可以排除该节点。 - `ServiceTopology`:启用服务拓扑可以让一个服务基于集群的节点拓扑进行流量路由。 有关更多详细信息,请参见[服务拓扑](/zh/docs/concepts/services-networking/service-topology/)。 +- `ServiceIPStaticSubrange`:启用服务 ClusterIP 分配策略,从而细分 ClusterIP 范围。 + 动态分配的 ClusterIP 地址将优先从较高范围分配,以低冲突风险允许用户从较低范围分配静态 ClusterIP。 + 更多详细信息请参阅[避免冲突](/zh/docs/concepts/services-networking/service/#avoiding-collisions) * Kubernetes 的[弃用策略](/zh/docs/reference/using-api/deprecation-policy/) 介绍了项目针对已移除特性和组件的处理方法。 - +* 从 Kubernetes 1.24 开始,默认不启用新的 beta API。 + 启用 beta 功能时,还需要启用所有关联的 API 资源。 + 例如:要启用一个特定资源,如 `storage.k8s.io/v1beta1/csistoragecapacities`, + 请设置 `--runtime-config=storage.k8s.io/v1beta1/csistoragecapacities`。 + 有关命令行标志的更多详细信息,请参阅 [API 版本控制](/zh/docs/reference/using-api/#api-versioning)。 diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver.md similarity index 86% rename from content/zh/docs/reference/command-line-tools-reference/kube-apiserver.md rename to content/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver.md index 74e3868e4b3e6..26bef87ba7cd3 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kube-apiserver.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver.md @@ -10,7 +10,7 @@ The file is auto-generated from the Go source code of the component using a gene [generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how to generate the reference documentation, please read [Contributing to the reference documentation](/docs/contribute/generate-ref-docs/). -To update the reference conent, please follow the +To update the reference conent, please follow the [Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/) guide. You can file document formatting bugs against the [reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project. @@ -18,15 +18,15 @@ guide. 
You can file document formatting bugs against the ## {{% heading "synopsis" %}} - - + Kubernetes API 服务器验证并配置 API 对象的数据, -这些对象包括 pods、services、replicationcontrollers 等。 +这些对象包括 pods、services、replicationcontrollers 等。 API 服务器为 REST 操作提供服务,并为集群的共享状态提供前端, 所有其他组件都通过该前端进行交互。 @@ -43,18 +43,6 @@ kube-apiserver [flags]
        --add-dir-header
        - -

        如果为 true,则将文件目录添加到日志消息的标题中

        -
        --admission-control-config-file string
        --allow-privileged
        --alsologtostderr
        - -在向文件输出日志的同时,也将日志写到标准输出。 +如果为 true,将允许特权容器。[默认值=false]
        --apiserver-count int     默认值:1
        - -集群中运行的 API 服务器数量,必须为正数。 -(在启用 --endpoint-reconciler-type=master-count 时使用。) -
        --audit-log-batch-buffer-size int     默认值:10000
        要保留的旧的审计日志文件个数上限。 +将值设置为 0 表示对文件个数没有限制。
        --authorization-mode stringSlice     默认值:"AlwaysAllow"--authorization-mode strings     默认值:"AlwaysAllow"
        @@ -837,7 +799,7 @@ CORS 允许的来源清单,以逗号分隔。 -对污点 NotReady:NoExecute 的容忍时长(以秒计)。 +对污点 NotReady:NoExecute 的容忍时长(以秒计)。 默认情况下这一容忍度会被添加到尚未具有此容忍度的每个 pod 中。
        --delete-collection-workers int     默认值: 1--delete-collection-workers int     默认值:1
        @@ -912,7 +874,6 @@ This flag provides an escape hatch for misbehaving metrics. You must provide the
        --egress-selector-config-file string
        --enable-admission-plugins stringSlice--enable-admission-plugins strings
        @@ -1015,9 +976,10 @@ The file containing configuration for encryption providers to be used for storin
        使用端点协调器(master-countleasenone)。 +master-count 已弃用,并将在未来版本中删除。
        --experimental-logging-sanitization
        - -[试验性功能] 启用此标志时,被标记为敏感的字段(密码、密钥、令牌)都不会被日志输出。
        -运行时的日志清理可能会引入相当程度的计算开销,因此不应该在产品环境中启用。 -
        --external-hostname string
        --log-backtrace-at traceLocation     默认值::0
        - -当日志机制执行到'文件 :N'时,生成堆栈跟踪。 -
        --log-dir string
        - -如果为非空,则在此目录中写入日志文件。 -
        --log-file string
        - -如果为非空,使用此值作为日志文件。 -
        --log-file-max-size uint     默认值:1800
        - -定义日志文件可以增长到的最大大小。单位为兆字节。 -如果值为 0,则最大文件大小为无限制。 -
        --log-flush-frequency duration     默认值:5s
        -设置日志格式。允许的格式:"text"。
        -非默认格式不支持以下标志:--add-dir-header--alsologtostderr--log-backtrace-at--log-dir--log-file--log-file-max-size--logtostderr--one-output-skip-headers-skip-log-headers--stderrthreshold-vmodule--log-flush-frequency
        +设置日志格式。允许的格式:"text"。
        +非默认格式不支持以下标志:--add-dir-header--alsologtostderr--log-backtrace-at--log-dir--log-file--log-file-max-size--logtostderr--one-output-skip-headers-skip-log-headers--stderrthreshold-vmodule
        当前非默认选择为 alpha,会随时更改而不会发出警告。
        --logtostderr     默认值:true
        - -在标准错误而不是文件中输出日志记录。 -
        --master-service-namespace string     默认值:"default"
        允许的 JOSE 非对称签名算法的逗号分隔列表。 -若 JWT 所带的 "alg" 标头值不在列表中,则该 JWT 将被拒绝。 +受支持的 "alg" 标头值包括:RS256、RS384、RS512、ES256、ES384、ES512、PS256、PS384、PS512。 取值依据 RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1 定义。
        --one-output
        - -此标志为真时,日志只会被写入到其原生的严重性级别中(而不是同时写到所有较低 -严重性级别中)。 -
        --permit-address-sharing     默认值:false--permit-address-sharing

        @@ -1891,7 +1772,7 @@ If true, only write logs to their native severity level (vs also writing to each

        --permit-port-sharing     默认值:false--permit-port-sharing
        @@ -1966,7 +1847,7 @@ open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. --> -可选字段,指示处理程序在超时之前必须保持打开请求的持续时间。 +可选字段,指示处理程序在超时之前必须保持打开请求的持续时间。 这是请求的默认请求超时,但对于特定类型的请求,可能会被 --min-request-timeout等标志覆盖。
        --skip-headers
        - -如果为 true,日志消息中避免标题前缀。 -
        --skip-log-headers
        - -如果为 true,则在打开日志文件时避免标题。 -
        --stderrthreshold int     默认值:2--shutdown-send-retry-after
        -将达到或超过此阈值的日志写到标准错误输出 +值为 true 表示 HTTP 服务器将继续监听直到耗尽所有非长时间运行的请求, +在此期间,所有传入请求将被拒绝,状态码为 429,响应头为 "Retry-After", +此外,设置 "Connection: close" 响应头是为了在空闲时断开 TCP 连接。
        --tls-cert-file string
        --tls-sni-cert-key string     默认值: []--tls-sni-cert-key string     默认值:[]
        @@ -2494,14 +2354,15 @@ Print version information and quit
        --vmodule <用逗号分隔的多个 'pattern=N' 配置字符串>--vmodule pattern=N,...
        -以逗号分隔的 pattern=N 设置列表,用于文件过滤的日志记录。 +以逗号分隔的 pattern=N 设置列表,用于文件过滤的日志记录(仅适用于 text 日志格式)。
        - diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager.md similarity index 85% rename from content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md rename to content/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager.md index 19037bd19b89c..69d31090b9530 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager.md @@ -48,18 +48,6 @@ kube-controller-manager [flags]
        --add-dir-header
        - -若为 true,将文件目录添加到日志消息的头部。 -
        --allocate-node-cidrs
        --allow-metric-labels stringToString     默认值:""--allow-metric-labels stringToString     默认值:[]

        @@ -89,18 +77,6 @@ metric2,label='v1,v2,v3'。

        --alsologtostderr
        - -在向文件输出日志的同时,也将日志写到标准输出。 -
        --attach-detach-reconcile-sync-period duration     默认值:1m0s
        --bind-address ip     默认值:0.0.0.0--bind-address string     默认值:0.0.0.0
        @@ -511,6 +487,19 @@ The number of endpoint syncing operations that will be done concurrently. Larger
        --concurrent-ephemeralvolume-syncs int32     默认值:5
        + +可以并发执行的 EphemeralVolume 同步操作个数。数值越大意味着更快的 EphemeralVolume 更新操作, +同时也意味着更大的 CPU (和网络)压力。 +
        --concurrent-gc-syncs int32     默认值:20
        --controllers strings     默认值:[*]--controllers strings     默认值:*
        @@ -690,18 +679,6 @@ A list of controllers to enable. '*' enables all on-by-default controllers, 'foo 默认禁用的控制器有:bootstrapsigner 和 tokencleaner。
        --deployment-controller-sync-period duration     默认值:30s
        - -Deployment 资源的同步周期。 -
        --disable-attach-detach-reconcile-sync
        --experimental-logging-sanitization
        - -[试验性功能] 当启用此标志时,被标记为敏感的字段(密码、密钥、令牌)不会被日志输出。
        -运行时的日志清理操作可能会引入相当程度的计算开销,因此不应在生产环境中启用。 -
        --external-cloud-volume-plugin string
        --kube-api-qps float32     默认值:20--kube-api-qps float     默认值:20
        @@ -1267,10 +1237,10 @@ The interval between attempts by the acting master to renew a leadership slot be
        -在领导者选举期间用于锁定的资源对象的类型。 支持的选项为 "endpoints"、 -"configmaps"、"leases"、"endpointsleases" 和 "configmapsleases"。 +在领导者选举期间用于锁定的资源对象的类型。 支持的选项为 +"leases"、"endpointsleases" 和 "configmapsleases"。
        --log-backtrace-at traceLocation     默认值::0
        - -当执行到 file:N 所给的文件和代码行时,日志机制会生成一个调用栈快照。 -
        --log-dir string
        - -此标志为非空字符串时,日志文件会写入到所给的目录中。 -
        --log-file string
        - -此标志为非空字符串时,意味着日志会写入到所给的文件中。 -
        --log-file-max-size uint     默认值:1800
        - -定义日志文件大小的上限。单位是兆字节(MB)。 -若此值为 0,则不对日志文件尺寸进行约束。 -
        --log-flush-frequency duration     默认值:5s
        -设置日志格式。允许的格式:"text"。 +设置日志格式。允许的格式:"text"。
        非默认格式不支持以下标志:--add-dir-header、 ---alsologtostderr》、--log-backtrace-at、 +--alsologtostderr--log-backtrace-at--log-dir--log-file--log-file-max-size--logtostderr--one-output--skip-headers--skip-log-headers--stderrthreshold、 ---vmodule--log-flush-frequency。 +--vmodule
        当前非默认选项为 Alpha 阶段,如有更改,恕不另行通知。
        --logtostderr     默认值:true
        - -将日志写出到标准错误输出(stderr)而不是写入到日志文件。 -
        --master string
        EndpointSliceMirroring 控制器将添加到 EndpointSlice 的最大端点数。 -每个分片的端点越多,端点分片越少,但资源越大。 +每个分片的端点越多,端点分片越少,但资源越大。默认为 100。
        --node-eviction-rate float32     默认值:0.1--node-eviction-rate float     默认值:0.1
        -当某区域变得不健康,节点失效时,每秒钟可以从此标志所设定的节点 -个数上删除 Pods。请参阅 --unhealthy-zone-threshold +当某区域健康时,在节点故障的情况下每秒删除 Pods 的节点数。 +请参阅 --unhealthy-zone-threshold 以了解“健康”的判定标准。这里的区域(zone)在集群并不跨多个区域时 指的是整个集群。
        --one-output
        - -如果此标志为 true,则仅将日志写入其自身的严重性级别(而不是同时写入更低的严重性级别中)。 -
        --permit-address-sharing

        如果此标志为 true,则在绑定端口时使用 SO_REUSEADDR。 这就意味着可以同时绑定到 0.0.0.0 和特定的 IP 地址, -并且避免等待内核释放处于 TIME_WAITE 状态的套接字。 +并且避免等待内核释放处于 TIME_WAIT 状态的套接字。[默认值=false]。

        对 NFS 卷执行回收利用时,用作模板的 Pod 定义文件所在路径。
        +List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed. +--> 标志值是客户端证书中的 Common Names 列表。其中所列的名称可以通过 --requestheader-username-headers 所设置的 HTTP 头部来提供用户名。 如果此标志值为空表,则被 --requestheader-client-ca-file @@ -1921,42 +1818,6 @@ The previous version for which you want to show hidden metrics. Only the previou
        --skip-headers
        - -若此标志为 true,则在日志消息中避免写入头部前缀信息。 -
        --skip-log-headers
        - -若此标志为 true,则在写入日志文件时避免写入头部信息。 -
        --stderrthreshold severity     默认值:2
        - -等于或大于此阈值的日志信息会被写入到标准错误输出(stderr)。 -
        --terminated-pod-gc-threshold int32     默认值:12500
        供服务器使用的加密包的逗号分隔列表。若忽略此标志,则使用 Go 语言默认的加密包。
        -可选值包括:TLS_AES_128_GCM_SHA256、TLS_AES_256_GCM_SHA384、TLS_CHACHA20_POLY1305_SHA256、TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA、TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256、TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA、TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384、TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305、TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256、TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256、TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384、TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305、TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256、TLS_RSA_WITH_3DES_EDE_CBC_SHA、TLS_RSA_WITH_AES_128_CBC_SHA、TLS_RSA_WITH_AES_128_GCM_SHA256、TLS_RSA_WITH_AES_256_CBC_SHA、TLS_RSA_WITH_AES_256_GCM_SHA384. -
        不安全的值: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256、TLS_ECDHE_ECDSA_WITH_RC4_128_SHA、TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256、TLS_ECDHE_RSA_WITH_RC4_128_SHA、TLS_RSA_WITH_AES_128_CBC_SHA256、TLS_RSA_WITH_RC4_128_SHA +可选值包括:TLS_AES_128_GCM_SHA256、TLS_AES_256_GCM_SHA384、TLS_CHACHA20_POLY1305_SHA256、TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA、TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256、TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA、TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384、TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305、TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256、TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256、TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384、TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305、TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256、TLS_RSA_WITH_AES_128_CBC_SHA、TLS_RSA_WITH_AES_128_GCM_SHA256、TLS_RSA_WITH_AES_256_CBC_SHA、TLS_RSA_WITH_AES_256_GCM_SHA384。 +
        不安全的值: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256、TLS_ECDHE_ECDSA_WITH_RC4_128_SHA、TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA、TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256、TLS_ECDHE_RSA_WITH_RC4_128_SHA、TLS_RSA_WITH_3DES_EDE_CBC_SHA、TLS_RSA_WITH_AES_128_CBC_SHA256、TLS_RSA_WITH_RC4_128_SHA。
        --tls-sni-cert-key namedCertKey     默认值:[]--tls-sni-cert-key string
        X509 证书和私钥文件路径的耦对。作为可选项,可以添加域名模式的列表, 其中每个域名模式都是可以带通配片段前缀的全限定域名(FQDN)。 @@ -2092,14 +1953,14 @@ Print version information and quit
        --vmodule <逗号分隔的 'pattern=N' 配置值>--vmodule pattern=N,...
        -由逗号分隔的列表,每一项都是 pattern=N 格式,用来执行根据文件过滤的日志行为。 +由逗号分隔的列表,每一项都是 pattern=N 格式,用来执行根据文件过滤的日志行为(仅适用于 text 日志格式)。
        --add-dir-header--add_dir_header

        -若此标志为 true,则将文件目录添加到日志消息的头部。 -

        --azure-container-registry-config string

        - -包含 Azure 容器仓库配置信息的文件的路径。 +设置为 true 表示将日志输出到文件的同时输出到 stderr

        --bind-address 0.0.0.0     默认值:0.0.0.0--bind-address string     默认值:0.0.0.0

        -代理服务器要使用的 IP 地址(设置为 '0.0.0.0' 表示要使用所有 IPv4 接口; -设置为 '::' 表示使用所有 IPv6 接口)。 -

        -
        --boot-id-file string     默认值:"/proc/sys/kernel/random/boot_id"--boot_id_file string     默认值:"/proc/sys/kernel/random/boot_id"

        - -用来检查 Boot-ID 的文件名,用逗号隔开。 -第一个存在的文件会被使用。 + +逗号分隔的文件列表,用于检查 boot-id。使用第一个存在的文件。

        集群中 Pod 的 CIDR 范围。配置后,将从该范围之外发送到服务集群 IP -的流量被伪装,从 Pod 发送到外部 LoadBalancer IP 的流量将被重定向 -到相应的集群 IP。 +的流量被伪装,从 Pod 发送到外部 LoadBalancer IP +的流量将被重定向到相应的集群 IP。 +对于双协议栈集群,接受一个逗号分隔的列表, +每个 IP 协议族(IPv4 和 IPv6)至少包含一个 CIDR。 +如果配置文件由 --config 指定,则忽略此参数。

        - + 用于检测本地流量的模式。 +如果配置文件由 --config 指定,则忽略此参数。

        --feature-gates <逗号分隔的 'key=True|False' 对’>--feature-gates <逗号分隔的 'key=True|False' 对>

        -一组键=值(key=value)对,描述了 alpha/experimental 的特征。可选项有: +一组键=值(key=value)对,描述了 alpha/experimental 的特性。可选项有:<br/>
        APIListChunking=true|false (BETA - 默认值=true)
        APIPriorityAndFairness=true|false (BETA - 默认值=true)
        APIResponseCompression=true|false (BETA - 默认值=true)
        @@ -363,96 +357,99 @@ APIServerIdentity=true|false (ALPHA - 默认值=false)
        APIServerTracing=true|false (ALPHA - 默认值=false)
        AllAlpha=true|false (ALPHA - 默认值=false)
        AllBeta=true|false (BETA - 默认值=false)
        -AnyVolumeDataSource=true|false (ALPHA - 默认值=false)
        +AnyVolumeDataSource=true|false (BETA - 默认值=true)
        AppArmor=true|false (BETA - 默认值=true)
        CPUManager=true|false (BETA - 默认值=true)
        -CPUManagerPolicyOptions=true|false (ALPHA - 默认值=false)
        +CPUManagerPolicyAlphaOptions=true|false (ALPHA - 默认值=false)
        +CPUManagerPolicyBetaOptions=true|false (BETA - 默认值=true)
        +CPUManagerPolicyOptions=true|false (BETA - 默认值=true)
        CSIInlineVolume=true|false (BETA - 默认值=true)
        CSIMigration=true|false (BETA - 默认值=true)
        -CSIMigrationAWS=true|false (BETA - 默认值=false)
        -CSIMigrationAzureDisk=true|false (BETA - 默认值=false)
        -CSIMigrationAzureFile=true|false (BETA - 默认值=false)
        -CSIMigrationGCE=true|false (BETA - 默认值=false)
        -CSIMigrationOpenStack=true|false (BETA - 默认值=true)
        +CSIMigrationAWS=true|false (BETA - 默认值=true)
        +CSIMigrationAzureFile=true|false (BETA - 默认值=true)
        +CSIMigrationGCE=true|false (BETA - 默认值=true)
        +CSIMigrationPortworx=true|false (ALPHA - 默认值=false)
        +CSIMigrationRBD=true|false (ALPHA - 默认值=false)
        CSIMigrationvSphere=true|false (BETA - 默认值=false)
        -CSIStorageCapacity=true|false (BETA - 默认值=true)
        -CSIVolumeFSGroupPolicy=true|false (BETA - 默认值=true)
        CSIVolumeHealth=true|false (ALPHA - 默认值=false)
        -CSRDuration=true|false (BETA - 默认值=true)
        -ConfigurableFSGroupPolicy=true|false (BETA - 默认值=true)
        -ControllerManagerLeaderMigration=true|false (BETA - 默认值=true)
        +CronJobTimeZone=true|false (ALPHA - 默认值=false)
        CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值=false)
        +CustomResourceValidationExpressions=true|false (ALPHA - 默认值=false)
        DaemonSetUpdateSurge=true|false (BETA - 默认值=true)
        -DefaultPodTopologySpread=true|false (BETA - 默认值=true)
        -DelegateFSGroupToCSIDriver=true|false (ALPHA - 默认值=false)
        +DelegateFSGroupToCSIDriver=true|false (BETA - 默认值=true)
        DevicePlugins=true|false (BETA - 默认值=true)
        DisableAcceleratorUsageMetrics=true|false (BETA - 默认值=true)
        DisableCloudProviders=true|false (ALPHA - 默认值=false)
        -DownwardAPIHugePages=true|false (BETA - 默认值=false)
        -EfficientWatchResumption=true|false (BETA - 默认值=true)
        +DisableKubeletCloudCredentialProviders=true|false (ALPHA - 默认值=false)
        +DownwardAPIHugePages=true|false (BETA - 默认值=true)
        EndpointSliceTerminatingCondition=true|false (BETA - 默认值=true)
        -EphemeralContainers=true|false (ALPHA - 默认值=false)
        -ExpandCSIVolumes=true|false (BETA - 默认值=true)
        -ExpandInUsePersistentVolumes=true|false (BETA - 默认值=true)
        -ExpandPersistentVolumes=true|false (BETA - 默认值=true)
        +EphemeralContainers=true|false (BETA - 默认值=true)
        ExpandedDNSConfig=true|false (ALPHA - 默认值=false)
        ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值=false)
        -GenericEphemeralVolume=true|false (BETA - 默认值=true)
        +GRPCContainerProbe=true|false (BETA - 默认值=true)
        GracefulNodeShutdown=true|false (BETA - 默认值=true)
        +GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - 默认值=true)
        HPAContainerMetrics=true|false (ALPHA - 默认值=false)
        HPAScaleToZero=true|false (ALPHA - 默认值=false)
        -IPv6DualStack=true|false (BETA - 默认值=true)
        +HonorPVReclaimPolicy=true|false (ALPHA - 默认值=false)
        +IdentifyPodOS=true|false (BETA - 默认值=true)
        InTreePluginAWSUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginGCEUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值=false)
        +InTreePluginPortworxUnregister=true|false (ALPHA - 默认值=false)
        +InTreePluginRBDUnregister=true|false (ALPHA - 默认值=false)
        InTreePluginvSphereUnregister=true|false (ALPHA - 默认值=false)
        -IndexedJob=true|false (BETA - 默认值=true)
        -IngressClassNamespacedParams=true|false (BETA - 默认值=true)
        -JobTrackingWithFinalizers=true|false (ALPHA - 默认值=false)
        -KubeletCredentialProviders=true|false (ALPHA - 默认值=false)
        +JobMutableNodeSchedulingDirectives=true|false (BETA - 默认值=true)
        +JobReadyPods=true|false (BETA - 默认值=true)
        +JobTrackingWithFinalizers=true|false (BETA - 默认值=false)
        +KubeletCredentialProviders=true|false (BETA - 默认值=true)
        KubeletInUserNamespace=true|false (ALPHA - 默认值=false)
        KubeletPodResources=true|false (BETA - 默认值=true)
        -KubeletPodResourcesGetAllocatable=true|false (ALPHA - 默认值=false)
        +KubeletPodResourcesGetAllocatable=true|false (BETA - 默认值=true)
        +LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - 默认值=true)
        LocalStorageCapacityIsolation=true|false (BETA - 默认值=true)
        LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值=false)
        LogarithmicScaleDown=true|false (BETA - 默认值=true)
        +MaxUnavailableStatefulSet=true|false (ALPHA - 默认值=false)
        MemoryManager=true|false (BETA - 默认值=true)
        MemoryQoS=true|false (ALPHA - 默认值=false)
        -MixedProtocolLBService=true|false (ALPHA - 默认值=false)
        +MinDomainsInPodTopologySpread=true|false (ALPHA - 默认值=false)
        +MixedProtocolLBService=true|false (BETA - 默认值=true)
        NetworkPolicyEndPort=true|false (BETA - 默认值=true)
        +NetworkPolicyStatus=true|false (ALPHA - 默认值=false)
        +NodeOutOfServiceVolumeDetach=true|false (ALPHA - 默认值=false)
        NodeSwap=true|false (ALPHA - 默认值=false)
        -NonPreemptingPriority=true|false (BETA - 默认值=true)
        -PodAffinityNamespaceSelector=true|false (BETA - 默认值=true)
        +OpenAPIEnums=true|false (BETA - 默认值=true)
        +OpenAPIV3=true|false (BETA - 默认值=true)
        +PodAndContainerStatsFromCRI=true|false (ALPHA - 默认值=false)
        PodDeletionCost=true|false (BETA - 默认值=true)
        -PodOverhead=true|false (BETA - 默认值=true)
        -PodSecurity=true|false (ALPHA - 默认值=false)
        -PreferNominatedNode=true|false (BETA - 默认值=true)
        +PodSecurity=true|false (BETA - 默认值=true)
        ProbeTerminationGracePeriod=true|false (BETA - 默认值=false)
        ProcMountType=true|false (ALPHA - 默认值=false)
        ProxyTerminatingEndpoints=true|false (ALPHA - 默认值=false)
        QOSReserved=true|false (ALPHA - 默认值=false)
        ReadWriteOncePod=true|false (ALPHA - 默认值=false)
        +RecoverVolumeExpansionFailure=true|false (ALPHA - 默认值=false)
        RemainingItemCount=true|false (BETA - 默认值=true)
        -RemoveSelfLink=true|false (BETA - 默认值=true)
        RotateKubeletServerCertificate=true|false (BETA - 默认值=true)
        SeccompDefault=true|false (ALPHA - 默认值=false)
        +ServerSideFieldValidation=true|false (ALPHA - 默认值=false)
        +ServiceIPStaticSubrange=true|false (ALPHA - 默认值=false)
        ServiceInternalTrafficPolicy=true|false (BETA - 默认值=true)
        -ServiceLBNodePortControl=true|false (BETA - 默认值=true)
        -ServiceLoadBalancerClass=true|false (BETA - 默认值=true)
        SizeMemoryBackedVolumes=true|false (BETA - 默认值=true)
        -StatefulSetMinReadySeconds=true|false (ALPHA - 默认值=false)
        +StatefulSetAutoDeletePVC=true|false (ALPHA - 默认值=false)
        +StatefulSetMinReadySeconds=true|false (BETA - 默认值=true)
        StorageVersionAPI=true|false (ALPHA - 默认值=false)
        StorageVersionHash=true|false (BETA - 默认值=true)
        -SuspendJob=true|false (BETA - 默认值=true)
        -TTLAfterFinished=true|false (BETA - 默认值=true)
        -TopologyAwareHints=true|false (ALPHA - 默认值=false)
        +TopologyAwareHints=true|false (BETA - 默认值=true)
        TopologyManager=true|false (BETA - 默认值=true)
        VolumeCapacityPriority=true|false (ALPHA - 默认值=false)
        WinDSR=true|false (ALPHA - 默认值=false)
        WinOverlay=true|false (BETA - 默认值=true)
        -WindowsHostProcessContainers=true|false (ALPHA - 默认值=false) +WindowsHostProcessContainers=true|false (BETA - 默认值=true) +如果配置文件由 --config 指定,则忽略此参数。

        服务健康状态检查的 IP 地址和端口(设置为 '0.0.0.0:10256' 表示使用所有 IPv4 接口,设置为 '[::]:10256' 表示使用所有 IPv6 接口); 设置为空则禁用。 +如果配置文件由 --config 指定,则忽略此参数。

        --log-backtrace-at <形式为 'file:N' 的字符串>     Default: :0--log_backtrace_at <“file:N” 格式的字符串>     默认值:0

        -当日志逻辑执行到文件 file 的第 N 行时,输出调用堆栈跟踪。 +当日志命中 file:N 时,触发一次堆栈追踪。

        --log-dir string--log_dir string

        - -若此标志费控,则将日志文件写入到此标志所给的目录下。 -

        -
        --log-file string--log_file string

        - -若此标志非空,则该字符串作为日志文件名。 + +如果非空,使用此日志文件

        --log-file-max-size uint     默认值:1800--log_file_max_size uint     默认值:1800

        - -定义日志文件可增长到的最大尺寸。单位是兆字节(MB)。 -如果此值为 0,则最大文件大小无限制。 + +定义日志文件可以增长到的最大大小。单位是兆字节。 +如果值为 0,则最大文件大小不受限制。

        --log-flush-frequency duration     默认值:5s ---logtostderr     默认值:true
        - -两次日志刷新之间的最大秒数。 +

        + +日志输出到 stderr 而不是文件。 +

        --machine-id-file string     默认值:"/etc/machine-id,/var/lib/dbus/machine-id"--machine_id_file string     默认值:"/etc/machine-id,/var/lib/dbus/machine-id"

        @@ -809,13 +795,12 @@ Kubernetes API 服务器的地址(覆盖 kubeconfig 中的相关值)。

        metrics 服务器要使用的 IP 地址和端口 (设置为 '0.0.0.0:10249' 则使用所有 IPv4 接口,设置为 '[::]:10249' 则使用所有 IPv6 接口) 设置为空则禁用。 +如果配置文件由 --config 指定,则忽略此参数。

        一个字符串值,指定用于 NodePort 服务的地址。 值可以是有效的 IP 块(例如 1.2.3.0/24, 1.2.3.4/32)。 默认的空字符串切片([])表示使用所有本地地址。 +如果配置文件由 --config 指定,则忽略此参数。

        --one-output--one_output

        - -若此标志为 true,则仅将日志写入到其原本的严重性级别之下 -(而不是将其写入到所有更低严重性级别中)。 + +如果为 true,则仅将日志写入其原生的严重性级别(而不是同时写入每个较低的严重性级别)。

        kube-proxy 进程中的 oom-score-adj 值,必须在 [-1000,1000] 范围内。 +如果配置文件由 --config 指定,则忽略此参数。

        --pod-bridge-interface string
        + +集群中的一个桥接接口名称。 +Kube-proxy 将来自与该值匹配的桥接接口的流量视为本地流量。 +如果 DetectLocalMode 设置为 BridgeInterface,则应设置该参数。 +
        --pod-interface-name-prefix string
        + +集群中的一个接口前缀。 +Kube-proxy 将来自与给定前缀匹配的接口的流量视为本地流量。 +如果 DetectLocalMode 设置为 InterfaceNamePrefix,则应设置该参数。 +
        --profiling

        如果为 true,则通过 Web 接口 /debug/pprof 启用性能分析。 +如果配置文件由 --config 指定,则忽略此参数。

        --proxy-mode string--proxy-mode ProxyMode

        -使用哪种代理模式:'userspace'(较旧)或 'iptables'(较快)或 'ipvs'。 -如果为空,使用最佳可用代理(当前为 iptables)。 -如果选择了 iptables 代理(无论是否为显式设置),但系统的内核或 -iptables 版本较低,总是会回退到 userspace 代理。 +使用哪种代理模式:'iptables'(仅 Linux)、'ipvs'(仅 Linux)、'kernelspace'(仅 Windows) +或者 'userspace'(Linux/Windows, 已弃用)。 +Linux 系统上的默认值是 'iptables',Windows 系统上的默认值是 'userspace'。 +如果配置文件由 --config 指定,则忽略此参数。

        要显示隐藏指标的先前版本。 仅先前的次要版本有意义,不允许其他值。 格式为 <major>.<minor> ,例如:'1.16'。 这种格式的目的是确保你有机会注意到下一个发行版是否隐藏了其他指标, 而不是在之后将其永久删除时感到惊讶。 +如果配置文件由 --config 指定,则忽略此参数。

        --skip-headers--skip_headers

        - -若此标志为 true,则避免在日志消息中包含头部前缀。 + +如果为 true,则避免在日志消息中使用头部前缀

        --skip-log-headers--skip_log_headers

        - -如果此标志为 true,则避免在打开日志文件时使用头部。 + +如果为 true,则在打开日志文件时避免使用头部

        - -如果日志消息处于或者高于此阈值所设置的级别,则将其输出到标准错误输出(stderr)。 + +设置严重程度达到或超过此阈值的日志输出到标准错误输出。

        - -用来设置日志详细程度的数值。 + +设置日志级别详细程度的数值。

        --vmodule <逗号分隔的 'pattern=N' 设置’>--vmodule <逗号分隔的 “pattern=N” 设置>

        - -用逗号分隔的列表,其中每一项为 'pattern=N' 格式。 -用来支持基于文件过滤的日志机制。 + +以逗号分隔的 pattern=N 设置列表,用于文件过滤的日志记录。

        --add-dir-header
        - -如果为 true,则将文件目录添加到日志消息的头部 -
        --address string     默认值:"0.0.0.0"
        - -已弃用: 要监听 --port 端口的 IP 地址(将其设置为 0.0.0.0 或者 :: 用于监听所有接口和 IP族)。 -请参阅 --bind-address。 -如果在 --config 中指定了一个配置文件,这个参数将被忽略。 -
        --algorithm-provider string
        - -已弃用: 要使用的调度算法驱动,此标志设置组件配置框架的默认插件。 -可选值:ClusterAutoscalerProvider | DefaultProvider -
        --allow-metric-labels stringToString      -默认值: []
        -这个键值映射表设置 度量标签 所允许设置的值。 +这个键值映射表设置度量标签所允许设置的值。 其中键的格式是 <MetricName>,<LabelName>。 值的格式是 <allowed_value>,<allowed_value>。 例如:metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3' metric2,label1='v1,v2,v3'。
        --alsologtostderr
        - -日志记录到标准错误以及文件 -
        --authentication-kubeconfig string
        -配置文件的路径。以下标志会覆盖此文件中的值:
        ---algorithm-provider
        ---policy-config-file
        ---policy-configmap
        ---policy-configmap-namespace +配置文件的路径。
        --contention-profiling     默认值: true--contention-profiling     默认值:true
        @@ -309,19 +252,6 @@ This flag provides an escape hatch for misbehaving metrics. You must provide the
        --experimental-logging-sanitization
        - -[试验性功能] 当启用此标志时,标记为敏感的字段(密码、密钥、令牌)等不会被日志 -输出。
        -运行时的日志清理操作可能引入相当程度的计算开销,因此不应在生产环境中启用。 -
        --feature-gates <逗号分隔的 'key=True|False' 对>
        --hard-pod-affinity-symmetric-weight int32     默认值:1
        - -已弃用: RequiredDuringScheduling 亲和性是不对称的,但是存在与每个 -RequiredDuringScheduling 关联性规则相对应的隐式 PreferredDuringScheduling 关联性规则。 ---hard-pod-affinity-symmetric-weight 代表隐式 PreferredDuringScheduling -关联性规则的权重。权重必须在 0-100 范围内。 -如果 --config 指定了一个配置文件,那么这个参数将被忽略。 +APIListChunking=true|false (BETA - 默认值为 true)
        +APIPriorityAndFairness=true|false (BETA - 默认值为 true)
        +APIResponseCompression=true|false (BETA - 默认值为 true)
        +APIServerIdentity=true|false (ALPHA - 默认值为 false)
        +APIServerTracing=true|false (ALPHA - 默认值为 false)
        +AllAlpha=true|false (ALPHA - 默认值为 false)
        +AllBeta=true|false (BETA - 默认值为 false)
        +AnyVolumeDataSource=true|false (BETA - 默认值为 true)
        +AppArmor=true|false (BETA - 默认值为 true)
        +CPUManager=true|false (BETA - 默认值为 true)
        +CPUManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
        +CPUManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
        +CPUManagerPolicyOptions=true|false (BETA - 默认值为 true)
        +CSIInlineVolume=true|false (BETA - 默认值为 true)
        +CSIMigration=true|false (BETA - 默认值为 true)
        +CSIMigrationAWS=true|false (BETA - 默认值为 false)
        +CSIMigrationAzureFile=true|false (BETA - 默认值为 false)
        +CSIMigrationGCE=true|false (BETA - 默认值为 true)
        +CSIMigrationPortworx=true|false (ALPHA - 默认值为 false)
        +CSIMigrationRBD=true|false (ALPHA - 默认值为 false)
        +CSIMigrationvSphere=true|false (BETA - 默认值为 false)
        +CSIVolumeHealth=true|false (ALPHA - 默认值为 false)
        +ContextualLogging=true|false (ALPHA - 默认值为 false)
        +CronJobTimeZone=true|false (ALPHA - 默认值为 false)
        +CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值为 false)
        +CustomResourceValidationExpressions=true|false (ALPHA - 默认值为 false)
        +DaemonSetUpdateSurge=true|false (BETA - 默认值为 true)
        +DelegateFSGroupToCSIDriver=true|false (BETA - 默认值为 true)
        +DevicePlugins=true|false (BETA - 默认值为 true)
        +DisableAcceleratorUsageMetrics=true|false (BETA - 默认值为 true)
        +DisableCloudProviders=true|false (ALPHA - 默认值为 false)
        +DisableKubeletCloudCredentialProviders=true|false (ALPHA - 默认值为 false)
        +DownwardAPIHugePages=true|false (BETA - 默认值为 true)
        +EndpointSliceTerminatingCondition=true|false (BETA - 默认值为 true)
        +EphemeralContainers=true|false (BETA - 默认值为 true)
        +ExpandedDNSConfig=true|false (ALPHA - 默认值为 false)
        +ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值为 false)
        +GRPCContainerProbe=true|false (BETA - 默认值为 true)
        +GracefulNodeShutdown=true|false (BETA - 默认值为 true)
        +GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - 默认值为 true)
        +HPAContainerMetrics=true|false (ALPHA - 默认值为 false)
        +HPAScaleToZero=true|false (ALPHA - 默认值为 false)
        +HonorPVReclaimPolicy=true|false (ALPHA - 默认值为 false)
        +IdentifyPodOS=true|false (BETA - 默认值为 true)
        +InTreePluginAWSUnregister=true|false (ALPHA - 默认值为 false)
        +InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值为 false)
        +InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值为 false)
        +InTreePluginGCEUnregister=true|false (ALPHA - 默认值为 false)
        +InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值为 false)
        +InTreePluginPortworxUnregister=true|false (ALPHA - 默认值为 false)
        +InTreePluginRBDUnregister=true|false (ALPHA - 默认值为 false)
        +InTreePluginvSphereUnregister=true|false (ALPHA - 默认值为 false)
        +JobMutableNodeSchedulingDirectives=true|false (BETA - 默认值为 true)<br/>
        +JobReadyPods=true|false (BETA - 默认值为 true)
        +JobTrackingWithFinalizers=true|false (BETA - 默认值为 false)
        +KubeletCredentialProviders=true|false (BETA - 默认值为 true)
        +KubeletInUserNamespace=true|false (ALPHA - 默认值为 false)
        +KubeletPodResources=true|false (BETA - 默认值为 true)
        +KubeletPodResourcesGetAllocatable=true|false (BETA - 默认值为 true)
        +LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - 默认值为 true)
        +LocalStorageCapacityIsolation=true|false (BETA - 默认值为 true)
        +LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值为 false)
        +LogarithmicScaleDown=true|false (BETA - 默认值为 true)
        +MaxUnavailableStatefulSet=true|false (ALPHA - 默认值为 false)
        +MemoryManager=true|false (BETA - 默认值为 true)
        +MemoryQoS=true|false (ALPHA - 默认值为 false)
        +MinDomainsInPodTopologySpread=true|false (ALPHA - 默认值为 false)
        +MixedProtocolLBService=true|false (BETA - 默认值为 true)
        +NetworkPolicyEndPort=true|false (BETA - 默认值为 true)
        +NetworkPolicyStatus=true|false (ALPHA - 默认值为 false)
        +NodeOutOfServiceVolumeDetach=true|false (ALPHA - 默认值为 false)
        +NodeSwap=true|false (ALPHA - 默认值为 false)
        +OpenAPIEnums=true|false (BETA - 默认值为 true)
        +OpenAPIV3=true|false (BETA - 默认值为 true)
        +PodAndContainerStatsFromCRI=true|false (ALPHA - 默认值为 false)
        +PodDeletionCost=true|false (BETA - 默认值为 true)
        +PodSecurity=true|false (BETA - 默认值为 true)
        +ProbeTerminationGracePeriod=true|false (BETA - 默认值为 false)
        +ProcMountType=true|false (ALPHA - 默认值为 false)
        +ProxyTerminatingEndpoints=true|false (ALPHA - 默认值为 false)
        +QOSReserved=true|false (ALPHA - 默认值为 false)
        +ReadWriteOncePod=true|false (ALPHA - 默认值为 false)
        +RecoverVolumeExpansionFailure=true|false (ALPHA - 默认值为 false)
        +RemainingItemCount=true|false (BETA - 默认值为 true)
        +RotateKubeletServerCertificate=true|false (BETA - 默认值为 true)
        +SeccompDefault=true|false (ALPHA - 默认值为 false)
        +ServerSideFieldValidation=true|false (ALPHA - 默认值为 false)
        +ServiceIPStaticSubrange=true|false (ALPHA - 默认值为 false)
        +ServiceInternalTrafficPolicy=true|false (BETA - 默认值为 true)
        +SizeMemoryBackedVolumes=true|false (BETA - 默认值为 true)
        +StatefulSetAutoDeletePVC=true|false (ALPHA - 默认值为 false)
        +StatefulSetMinReadySeconds=true|false (BETA - 默认值为 true)
        +StorageVersionAPI=true|false (ALPHA - 默认值为 false)
        +StorageVersionHash=true|false (BETA - 默认值为 true)
        +TopologyAwareHints=true|false (BETA - 默认值为 true)
        +TopologyManager=true|false (BETA - 默认值为 true)
        +VolumeCapacityPriority=true|false (ALPHA - 默认值为 false)
        +WinDSR=true|false (ALPHA - 默认值为 false)
        +WinOverlay=true|false (BETA - 默认值为 true)
        +WindowsHostProcessContainers=true|false (BETA - 默认值为 true)
        --kube-api-qps float32     默认值:50--kube-api-qps float     默认值:50
        @@ -666,10 +587,9 @@ The interval between attempts by the acting master to renew a leadership slot be
        -在领导者选举期间用于锁定的资源对象的类型。支持的选项是 `endpoints`、 -`configmaps`、`leases`、`endpointleases` 和 `configmapsleases`。 +在领导者选举期间用于锁定的资源对象的类型。支持的选项有 `leases`、`endpointsleases` 和 `configmapsleases`。
        --log-backtrace-at <a string in the form 'file:N'>      -默认值: 0
        - -当记录命中行文件 file 的第 N 行时输出堆栈跟踪。 -
        --log-dir string
        - -如果为非空,则在此目录中写入日志文件。 -
        --log-file string
        - -如果为非空,则使用此文件作为日志文件。 -
        --log-file-max-size uint     默认值:1800
        - -定义日志文件可以增长到的最大值。单位为兆字节。 -如果值为 0,则最大文件大小为无限制。 -
        --log-flush-frequency duration     默认值:5s
        -设置日志格式。可选格式:“json”,“text”。
        +设置日志格式。可选格式:“text”。
        采用非默认格式时,以下标识不会生效: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log-dir, --log-file, --log-file-max-size, --logtostderr, --one-output, --skip-headers, --skip-log-headers, ---stderrthreshold, --vmodule, --log-flush-frequency.
        +--stderrthreshold, --vmodule.
        非默认选项目前处于 Alpha 阶段,有可能会出现变更且无事先警告。
        --one-output
        - -若此标志为 true,则日志仅写入其自身的严重性级别,而不会写入所有较低严重性级别。 -
        --permit-address-sharing
        --policy-config-file string
        - -已弃用:包含调度器策略配置的文件。 -当策略 ConfigMap 为提供时,或者 --use-legacy-policy-config=true 时使用此文件。 -注意:当此标志与插件配置一起使用时,调度器会失败。 -
        --policy-configmap string
        - -已弃用: 包含调度器策略配置的 ConfigMap 对象的名称。 -如果 --use-legacy-policy-config=false,则它必须在调度器初始化之前存在于 -系统命名空间中。配置数据必须对应 'data' 映射中键名为 'policy.cfg' 的元素的值。 -注意:如果与插件配置一起使用,调度器会失败。 -
        --policy-configmap-namespace string     默认值:"kube-system"--pod-max-in-unschedulable-pods-duration duration     默认值:5m0s
        -已弃用: 策略 ConfigMap 所在的名字空间。如果未提供或为空,则将使用 kube-system 名字空间。 -注意:如果与插件配置一起使用,调度器会失败。 -
        --port int     默认值:10251
        - -已弃用: 在没有身份验证和鉴权的情况下不安全地为 HTTP 服务的端口。 -如果为 0,则根本不提供 HTTP。请参见 --secure-port。 -如果 --config 指定了一个配置文件,这个参数将被忽略。
        --profiling     默认值: true--profiling     默认值:true
        @@ -986,7 +806,7 @@ Root certificate bundle to use to verify client certificates on incoming request
        --requestheader-extra-headers-prefix strings      -默认值: "x-remote-extra-"
        @@ -999,7 +819,7 @@ List of request header prefixes to inspect. X-Remote-Extra- is suggested.
        --requestheader-group-headers strings      -默认值: "x-remote-group"
        @@ -1012,7 +832,7 @@ List of request headers to inspect for groups. X-Remote-Group is suggested.
        --requestheader-username-headers strings      -默认值: "x-remote-user"
        @@ -1023,21 +843,6 @@ List of request headers to inspect for usernames. X-Remote-User is common.
        --scheduler-name string      -默认值:"default-scheduler"
        - -已弃用: 调度器名称,用于根据 Pod 的 “spec.schedulerName” 选择此 -调度器将处理的 Pod。 -如果 --config 指定了一个配置文件,那么这个参数将被忽略 -
        --secure-port int     默认值:10259
        --skip-headers
        - -如果为 true,日志消息中不再写入头部前缀。 -
        --skip-log-headers
        - -如果为 true,则在打开日志文件时忽略其头部。 -
        --stderrthreshold int     默认值:2
        - -达到或超过此阈值的日志会被写入到标准错误输出。 -
        --tls-cert-file string
        --use-legacy-policy-config
        - -已弃用:设置为 true 时,调度程序将忽略策略 ConfigMap 并使用策略配置文件。 -注意:当此标志与插件配置一起使用时,调度器会失败。 +例如: "example.crt,example.key" 或者 "foo.crt,foo.key:*.foo.com,foo.com"。
        --vmodule <逗号分隔的 ‘模式=N’ 配置列表>--vmodule pattern=N,...
        -以逗号分隔的 ‘模式=N’ 设置列表,用于文件过滤的日志记录。 +以逗号分隔的 “pattern=N” 设置列表,用于文件过滤的日志记录(仅适用于文本日志格式)。
        设置为 true 表示将文件目录添加到日志消息的头部 +(已弃用:将在未来的版本中删除,进一步了解。)
        kubelet 用来提供服务的 IP 地址(设置为0.0.0.0 表示使用所有 IPv4 接口, -设置为 :: 表示使用所有 IPv6 接口)。已弃用:应在 --config 所给的 -配置文件中进行设置。(进一步了解) +设置为 :: 表示使用所有 IPv6 接口)。 +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        -用逗号分隔的字符串序列设置允许使用的非安全的 sysctls 或 sysctl 模式(以 * 结尾) 。 -使用此参数时风险自担。已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +用逗号分隔的字符串序列设置允许使用的非安全的 sysctls 或 sysctl 模式(以 * 结尾)。 +使用此参数时风险自担。(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置为 true 表示将日志输出到文件的同时输出到 stderr +(已弃用:将在未来的版本中删除,进一步了解。)
        设置为 true 表示 kubelet 服务器可以接受匿名请求。未被任何认证组件拒绝的请求将被视为匿名请求。 匿名请求的用户名为 system:anonymous,用户组为 system:unauthenticated。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        使用 TokenReview API 对持有者令牌进行身份认证。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        对 Webhook 令牌认证组件所返回的响应的缓存时间。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        kubelet 服务器的鉴权模式。可选值包括:AlwaysAllowWebhookWebhook 模式使用 SubjectAccessReview API 鉴权。 当 --config 参数未被设置时,默认值为 AlwaysAllow,当使用了 --config 时,默认值为 Webhook。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        对 Webhook 认证组件所返回的 “Authorized(已授权)” 应答的缓存时间。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        对 Webhook 认证组件所返回的 “Unauthorized(未授权)” 应答的缓存时间。 --config 时,默认值为 Webhook。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        kubelet 用来操作本机 cgroup 时使用的驱动程序。支持的选项包括 cgroupfssystemd。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        可选的选项,为 Pod 设置根 cgroup。容器运行时会尽可能使用此配置。 默认值 "" 意味着将使用容器运行时的默认设置。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        启用创建 QoS cgroup 层次结构。此值为 true 时 kubelet 为 QoS 和 Pod 创建顶级的 cgroup。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) -
        --chaos-chance float
        - -如果此值大于 0.0,则引入随机客户端错误和延迟。用于测试。 -已启用:将在未来版本中移除。 +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        如果设置了此参数,则使用对应文件中机构之一检查请求中所携带的客户端证书。 若客户端证书通过身份认证,则其对应身份为其证书中所设置的 CommonName。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        DNS 服务器的 IP 地址,以逗号分隔。此标志值用于 Pod 中设置了 “dnsPolicy=ClusterFirst” 时为容器提供 DNS 服务。注意:列表中出现的所有 DNS 服务器必须包含相同的记录组, 否则集群中的名称解析可能无法正常工作。至于名称解析过程中会牵涉到哪些 DNS 服务器, 这一点无法保证。 --config 时,默认值为 Webhook。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        集群的域名。如果设置了此值,kubelet 除了将主机的搜索域配置到所有容器之外,还会为其 配置所搜这里指定的域名。 --config 时,默认值为 Webhook。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        -<警告:alpha 特性> 此值为以逗号分隔的完整路径列表。 +此值为以逗号分隔的完整路径列表。 kubelet 将在所指定路径中搜索 CNI 插件的可执行文件。 仅当容器运行环境设置为 docker 时,此特定于 docker 的参数才有效。 +(已弃用:将会随着 dockershim 一起删除。)
        -<警告:alpha 特性> 此值为一个目录的全路径名。CNI 将在其中缓存文件。 +此值为一个目录的全路径名。CNI 将在其中缓存文件。 仅当容器运行环境设置为 docker 时,此特定于 docker 的参数才有效。 +(已弃用:将会随着 dockershim 一起删除。)
        <警告:alpha 特性> 此值为某目录的全路径名。kubelet 将在其中搜索 CNI 配置文件。 仅当容器运行环境设置为 docker 时,此特定于 docker 的参数才有效。 +(已弃用:将会随着 dockershim 一起删除。)
        设置容器的日志文件个数上限。此值必须不小于 2。 此标志只能与 --container-runtime=remote 标志一起使用。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置容器日志文件在轮换生成新文件时之前的最大值(例如,10Mi)。 此标志只能与 --container-runtime=remote 标志一起使用。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        当启用了性能分析时,启用锁竞争分析。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        为设置了 CPU 限制的容器启用 CPU CFS 配额保障。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        -设置 CPU CFS 配额周期 cpu.cfs_period_us。默认使用 Linux 内核所设置的默认值 。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +设置 CPU CFS 配额周期 cpu.cfs_period_us。默认使用 Linux 内核所设置的默认值。 +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        要使用的 CPU 管理器策略。可选值包括:nonestatic。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        <警告:alpha 特性> 设置 CPU 管理器的调和时间。例如:10s 或者 1m。 如果未设置,默认使用节点状态更新频率。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        使用这里的端点与 docker 端点通信。 仅当容器运行环境设置为 docker 时,此特定于 docker 的参数才有效。 +(已弃用:将会随着 dockershim 一起删除。)
        启用 Attach/Detach 控制器来挂接和摘除调度到该节点的卷,同时禁用 kubelet 执行挂接和摘除操作。 +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        启用服务器上用于日志收集和在本地运行容器和命令的端点。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        启用 kubelet 服务器。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        用逗号分隔的列表,包含由 kubelet 强制执行的节点可分配资源级别。 可选配置为:nonepodssystem-reservedkube-reserved。 在设置 system-reservedkube-reserved 这两个值时,同时要求设置 --system-reserved-cgroup--kube-reserved-cgroup 这两个参数。 如果设置为 none,则不需要设置其他参数。 -参考相关文档。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +参考相关文档。 +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        事件记录的个数的突发峰值上限,在遵从 --event-qps 阈值约束的前提下 临时允许事件记录达到此数目。仅在 --event-qps 大于 0 时使用。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置大于 0 的值表示限制每秒可生成的事件数量。设置为 0 表示不限制。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        触发 Pod 驱逐操作的一组硬性门限(例如:memory.available<1Gi -(内存可用值小于 1 G))设置。在 Linux 节点上,默认值还包括 +(内存可用值小于 1G)设置。在 Linux 节点上,默认值还包括 nodefs.inodesFree<5%。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        响应满足软性驱逐阈值(Soft Eviction Threshold)而终止 Pod 时使用的最长宽限期(以秒为单位)。 如果设置为负数,则遵循 Pod 的指定值。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        当某资源压力过大时,kubelet 将执行 Pod 驱逐操作。 此参数设置软性驱逐操作需要回收的资源的最小数量(例如:imagefs.available=2Gi)。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        kubelet 在驱逐压力状况解除之前的最长等待时间。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置一组驱逐阈值(例如:memory.available<1.5Gi)。 如果在相应的宽限期内达到该阈值,则会触发 Pod 驱逐操作。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置一组驱逐宽限期(例如,memory.available=1m30s),对应于触发软性 Pod 驱逐操作之前软性驱逐阈值所需持续的时间长短。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        --experimental-bootstrap-kubeconfig string
        - -已弃用:应使用 --bootstrap-kubeconfig 标志 -
        --experimental-check-node-capabilities-before-mount
        设置为 true 表示 kubelet 将会集成内核的 memcg 通知机制而不是使用轮询机制来 判断是否达到了内存驱逐阈值。 此标志将在 1.24 或更高版本移除。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        [试验性功能] 启用此标志之后,kubelet 会避免将标记为敏感的字段(密码、密钥、令牌等) 写入日志中。运行时的日志清理可能会带来相当的计算开销,因此不应该在 产品环境中启用。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置为 true 表示如果主机启用了交换分区,kubelet 将直接失败。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        检查配置文件中新数据的时间间隔。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置 kubelet 执行发夹模式(hairpin)网络地址转译的方式。 该模式允许后端端点对其自身服务的访问能够再次经由负载均衡转发回自身。 可选项包括 promiscuous-bridgehairpin-vethnone。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
用于运行 healthz 服务器的 IP 地址(设置为 0.0.0.0 表示使用所有 IPv4 接口, 设置为 :: 表示使用所有 IPv6 接口)。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        本地 healthz 端点使用的端口(设置为 0 表示禁用)。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        --housekeeping-interval duration     默认值:10s
        - -清理容器操作的时间间隔。 -
        --http-check-frequency duration     默认值:20s
        HTTP 服务以获取新数据的时间间隔。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        镜像垃圾回收上限。磁盘使用空间达到该百分比时,镜像垃圾回收将持续工作。 值必须在 [0,100] 范围内。要禁用镜像垃圾回收,请设置为 100。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        镜像垃圾回收下限。磁盘使用空间在达到该百分比之前,镜像垃圾回收操作不会运行。 值必须在 [0,100] 范围内,并且不得大于 --image-gc-high-threshold的值。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        如果在该参数值所设置的期限之前没有拉取镜像的进展,镜像拉取操作将被取消。 仅当容器运行环境设置为 docker 时,此特定于 docker 的参数才有效。 +(已弃用:将会随着 dockershim 一起删除。)
        标记数据包将被丢弃的 fwmark 位设置。必须在 [0,31] 范围内。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        标记数据包将进行 SNAT 的 fwmark 空间位设置。必须在 [0,31] 范围内。 请将此参数与 kube-proxy 中的相应参数匹配。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        若启用,则 kubelet 将与内核中的 memcg 通知机制集成,不再使用轮询的方式来判定 是否 Pod 达到内存驱逐阈值。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        每秒发送到 apiserver 的突发请求数量上限。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        发送到 apiserver 的请求的内容类型。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        与 apiserver 通信的每秒查询个数(QPS)。 -此值必须 >= 0。如果为 0, 则使用默认 QPS(5)。 +此值必须 >= 0。如果为 0,则使用默认 QPS(5)。 不包含事件和节点心跳 api,它们的速率限制是由一组不同的标志所控制。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        -kubernetes 系统预留的资源配置,以一组 资源名称=资源数量 格式表示。 +kubernetes 系统预留的资源配置,以一组 <资源名称>=<资源数量> 格式表示。 (例如:cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100')。 当前支持 cpumemory 和用于根文件系统的 ephemeral-storage。 -请参阅相关文档获取更多信息。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +请参阅这里获取更多信息。 +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        给出某个顶层 cgroup 绝对名称,该 cgroup 用于管理通过标志 --kube-reserved 为 kubernetes 组件所预留的计算资源。例如:"/kube-reserved"。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        用于创建和运行 kubelet 的 cgroup 的绝对名称。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        形式为 <file>:<N>。 当日志逻辑执行到命中 <file> 的第 <N> 行时,转储调用堆栈。 -(已弃用:将在未来的版本中删除,进一步了解) +(已弃用:将在未来的版本中删除,进一步了解。)
        如果此值为非空,则在所指定的目录中写入日志文件。 -(已弃用:将在未来的版本中删除,进一步了解) +(已弃用:将在未来的版本中删除,进一步了解。)
        如果此值非空,使用所给字符串作为日志文件名。 +(已弃用:将在未来的版本中删除,进一步了解。)
        设置日志文件的最大值。单位为兆字节(M)。如果值为 0,则表示文件大小无限制。 -(已弃用:将在未来的版本中删除,进一步了解) +(已弃用:将在未来的版本中删除,进一步了解。)
        [实验性特性]在具有拆分输出流的 JSON 格式中,可以将信息消息缓冲一段时间以提高性能。 零字节的默认值禁用缓冲。大小可以指定为字节数(512)、1000 的倍数(1K)、1024 的倍数(2Ki) 或这些(3M、4G、5Mi、6Gi)的幂。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        -[实验性特性]以 JSON 格式,将错误消息写入 stderr,将 info 消息写入 stdout。 +[实验性特性]以 JSON 格式,将错误消息写入 stderr,将 info 消息写入 stdout。 默认是将单个流写入标准输出。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置日志文件格式。可以设置的格式有:"text""json"。 -非默认的格式不会使用以下标志的配置:--add-dir-header, --alsologtostderr, ---log-backtrace-at, --log-dir, --log-file, ---log-file-max-size, --logtostderr, --skip-headers, ---skip-log-headers, --stderrthreshold, --log-flush-frequency。 +非默认的格式不会使用以下标志的配置:--add-dir-header--alsologtostderr、 +--log-backtrace-at--log-dir--log-file, +--log-file-max-size--logtostderr--skip-headers、 +--skip-log-headers--stderrthreshold--log-flush-frequency。 非默认选项的其它值都应视为 Alpha 特性,将来出现更改时不会额外警告。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        日志输出到 stderr 而不是文件。 (已弃用:将会在未来的版本删除, -进一步了解) +进一步了解。)
        设置为 true 表示 kubelet 将确保 iptables 规则在主机上存在。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        用于访问要运行的其他 Pod 规范的 URL。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        取值为由 HTTP 头部组成的逗号分隔列表,在访问 --manifest-url 所给出的 URL 时使用。 名称相同的多个头部将按所列的顺序添加。该参数可以多次使用。例如: --manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful'。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        --master-service-namespace string     默认值:default
        kubelet 进程可以打开的最大文件数量。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        此 kubelet 能运行的 Pod 最大数量。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
每个已停止容器可以保留的最大实例数量。每个容器占用一些磁盘空间。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +已弃用:改用 --eviction-hard 或 --eviction-soft。 +此标志将在未来的版本中删除。
-内存管理器策略使用。可选值:'None', 'Static'。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +要使用的内存管理器策略。可选值:'None'、'Static'。 +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        已结束的容器在被垃圾回收清理之前的最少存活时间。 -例如:300ms10s 或者 2h45m。 +例如:'300ms''10s' 或者 '2h45m'。 已弃用:请改用 --eviction-hard 或者 --eviction-soft。 此标志将在未来的版本中删除。
-不再使用的镜像在被垃圾回收清理之前的最少存活时间。 -例如:300ms、10s 或者 2h45m。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +不再使用的镜像在被垃圾回收清理之前的最少存活时间。 +例如:'300ms'、'10s' 或者 '2h45m'。 +已弃用:这个参数应该通过 Kubelet 的 --config 标志指定的配置文件来设置。 +(进一步了解)
        -<警告:alpha 特性> 设置 kubelet/Pod 生命周期中各种事件调用的网络插件的名称。 +设置 kubelet/Pod 生命周期中各种事件调用的网络插件的名称。 仅当容器运行环境设置为 docker 时,此特定于 docker 的参数才有效。 +(已弃用:将会随着 dockershim 一起删除。)
        node.status.images 中可以报告的最大镜像数量。如果指定为 -1,则不设上限。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        指定 kubelet 向主控节点汇报节点状态的时间间隔。注意:更改此常量时请务必谨慎, 它必须与节点控制器中的 nodeMonitorGracePeriod 一起使用。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        如果设置此标志为 true,则仅将日志写入其原来的严重性级别中, 而不是同时将其写入更低严重性级别中。 -已弃用:将在未来的版本中删除, -(进一步了解) +已弃用:将在未来的版本中删除。 +(进一步了解。)
        kubelet 进程的 oom-score-adj 参数值。有效范围为 [-1000,1000]。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        用于给 Pod 分配 IP 地址的 CIDR 地址池,仅在独立运行模式下使用。 在集群模式下,CIDR 设置是从主服务器获取的。对于 IPv6,分配的 IP 的最大数量为 65536。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置包含要运行的静态 Pod 的文件的路径,或单个静态 Pod 文件的路径。以点(.) 开头的文件将被忽略。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置每个 Pod 中的最大进程数目。如果为 -1,则 kubelet 使用节点可分配的 PID 容量作为默认值。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        kubelet 在每个处理器核上可运行的 Pod 数量。此 kubelet 上的 Pod 总数不能超过 --max-pods 标志值。因此,如果此计算结果导致在 kubelet 上允许更多数量的 Pod,则使用 --max-pods 值。值为 0 表示不作限制。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        kubelet 服务监听的本机端口号。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置 kubelet 的默认内核调整行为。如果已设置该参数,当任何内核可调参数与 kubelet 默认值不同时,kubelet 都会出错。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置主机数据库(即,云驱动)中用来标识节点的唯一标识。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        <警告:alpha 特性> 设置在指定的 QoS 级别预留的 Pod 资源请求,以一组 "资源名称=百分比" 的形式进行设置,例如 memory=50%。 当前仅支持内存(memory)。要求启用 QOSReserved 特性门控。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        kubelet 可以在没有身份验证/鉴权的情况下提供只读服务的端口(设置为 0 表示禁用)。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        向 API 服务器注册节点,如果未提供 --kubeconfig,此标志无关紧要, 因为 Kubelet 没有 API 服务器可注册。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置本节点的污点标记,格式为 <key>=<value>:<effect>, 以逗号分隔。当 --register-node 为 false 时此标志无效。 -已弃用:将在未来版本中移除。 +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置突发性镜像拉取的个数上限,在不超过 --registration-qps 设置值的前提下 暂时允许此参数所给的镜像拉取个数。仅在 --registry-qps 大于 0 时使用。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        如此值大于 0,可用来限制镜像仓库的 QPS 上限。设置为 0,表示不受限制。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
用逗号分隔的一组 CPU 或 CPU 范围列表,给出为系统和 Kubernetes 保留使用的 CPU。 此列表所给出的设置优先于通过 --system-reserved 和 --kube-reserved 所保留的 CPU 个数配置。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
-以逗号分隔的 NUMA 节点内存预留列表。(例如 --reserved-memory 0:memory=1Gi,hugepages-1M=2Gi --reserved-memory 1:memory=2Gi)。 +以逗号分隔的 NUMA 节点内存预留列表。(例如 --reserved-memory 0:memory=1Gi,hugepages-1M=2Gi --reserved-memory 1:memory=2Gi)。 每种内存类型的总和应该等于 --kube-reserved、--system-reserved 与 --eviction-threshold 之和。参阅相关文档了解更多详细信息。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        名字解析服务的配置文件名,用作容器 DNS 解析配置的基础。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        <警告:Beta 特性> 设置当客户端证书即将过期时 kubelet 自动从 kube-apiserver 请求新的证书进行轮换。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        当 kubelet 的服务证书即将过期时自动从 kube-apiserver 请求新的证书进行轮换。 要求启用 RotateKubeletServerCertificate 特性门控,以及对提交的 CertificateSigningRequest 对象进行批复(Approve)操作。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        -设置为 true 表示从本地清单或远程 URL 创建完 Pod 后立即退出 kubelet 进程。 +设置为 true 表示从本地清单或远程 URL 创建完 Pod 后立即退出 kubelet 进程。 与 --enable-server 标志互斥。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置除了长时间运行的请求(包括 pulllogsexecattach 等操作)之外的其他运行时请求的超时时间。 到达超时时间时,请求会被取消,抛出一个错误并会等待重试。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        --seccomp-profile-root string     默认值:/var/lib/kubelet/seccomp--seccomp-default RuntimeDefault
- -<警告:alpha 特性> seccomp 配置文件目录。 -已弃用:将在 1.23 或更高版本中移除,以使用 <root-dir>/seccomp 目录。 +<警告:alpha 特性> 启用 RuntimeDefault 作为所有工作负载的默认 seccomp 配置文件。必须启用 SeccompDefault 特性门控才能使用此标志;该特性门控默认被禁用。
        逐一拉取镜像。建议 *不要* 在 docker 守护进程版本低于 1.9 或启用了 Aufs 存储后端的节点上 更改默认值。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        -设置为 true 时在日志消息中去掉标头前缀。 -(已弃用:将在未来的版本中删除,进一步了解) +设置为 true 时在日志消息中去掉标头前缀。 +(已弃用:将在未来的版本中删除,进一步了解。)
        -设置为 true,打开日志文件时去掉标头。 -(已弃用:将在未来的版本中删除,进一步了解) +设置为 true,打开日志文件时去掉标头。 +(已弃用:将在未来的版本中删除,进一步了解。)
        设置严重程度达到或超过此阈值的日志输出到标准错误输出。 -(已弃用:将在未来的版本中删除,进一步了解) +(已弃用:将在未来的版本中删除,进一步了解。)
        设置流连接在自动关闭之前可以空闲的最长时间。0 表示没有超时限制。 例如:5m。 注意:与 kubelet 服务器的所有连接最长持续时间为 4 小时。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        在运行中的容器与其配置之间执行同步操作的最长时间间隔。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        此标志值为一个 cgroup 的绝对名称,用于所有尚未放置在根目录下某 cgroup 内的非内核进程。 空值表示不指定 cgroup。回滚该参数需要重启机器。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        系统预留的资源配置,以一组 资源名称=资源数量 的格式表示, (例如:cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100')。 目前仅支持 cpumemory 的设置。 更多细节可参考 -相关文档。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +相关文档。 +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        此标志给出一个顶层 cgroup 绝对名称,该 cgroup 用于管理非 kubernetes 组件, 这些组件的计算资源通过 --system-reserved 标志进行预留。 例如 "/system-reserved"。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
包含 x509 证书的文件路径,用于 HTTPS 认证。 如果有中间证书,则中间证书要串接在服务器证书之后。 如果未提供 --tls-cert-file 和 --tls-private-key-file, kubelet 会为公开地址生成自签名证书和密钥,并将其保存到通过 --cert-dir 指定的目录中。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置支持的最小 TLS 版本号,可选的版本号包括:VersionTLS10VersionTLS11VersionTLS12VersionTLS13。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        包含与 --tls-cert-file 对应的 x509 私钥文件路径。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        设置拓扑管理策略(Topology Manager policy)。可选值包括:nonebest-effortrestrictedsingle-numa-node。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        拓扑提示信息使用范围。拓扑管理器从提示提供者(Hints Providers)处收集提示信息, 并将其应用到所定义的范围以确保 Pod 准入。 可选值包括:container(默认)、pod。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        用来搜索第三方存储卷插件的目录。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
        指定 kubelet 计算和缓存所有 Pod 和卷的磁盘用量总值的时间间隔。要禁用磁盘用量计算, 请设置为 0。 -已弃用:应在 --config 所给的配置文件中进行设置。 -(进一步了解) +(已弃用:应在 --config 所给的配置文件中进行设置。 +请参阅 kubelet-config-file 了解更多信息。)
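上述许多标志都标注了"已弃用:应在 --config 所给的配置文件中进行设置"。下面给出一个假设性的 KubeletConfiguration 片段,仅作示意(字段名取自 kubelet 配置 API,具体取值为示例),说明这类参数在配置文件中的大致写法:

```yaml
# 假设性示例:通过 kubelet --config=/etc/kubernetes/kubelet-config.yaml 加载
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 对应命令行标志 --eviction-hard
evictionHard:
  memory.available: "1Gi"
# 对应命令行标志 --max-pods
maxPods: 110
# 对应命令行标志 --serialize-image-pulls
serializeImagePulls: true
```

实际可用字段请以所安装 Kubernetes 版本的 KubeletConfiguration 参考文档为准。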
        @@ -48,16 +50,20 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 @@ -66,7 +72,9 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 @@ -75,7 +83,9 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 @@ -86,7 +96,9 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 @@ -95,7 +107,9 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 @@ -104,7 +118,9 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 @@ -112,8 +128,37 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 []string @@ -124,8 +169,10 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 @@ -135,7 +182,9 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 @@ -146,40 +195,46 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 +

        响应的状态,当 responseObject 不是 Status 类型时被赋值。 对于成功的请求,此字段仅包含 code 和 statusSuccess。 对于非 Status 类型的错误响应,此字段会被自动赋值为出错信息。 +

        @@ -188,7 +243,9 @@ at Response Level.--> @@ -197,7 +254,9 @@ at Response Level.--> @@ -211,6 +270,7 @@ at Response Level.--> to the metadata.annotations of the submitted object. Keys should uniquely identify the informing component to avoid name collisions (e.g. podsecuritypolicy.admission.k8s.io/policy). Values should be short. Annotations are included in the Metadata level.--> +

        annotations 是一个无结构的键-值映射,其中保存的是一个审计事件。 该事件可以由请求处理链路上的插件来设置,包括身份认证插件、鉴权插件以及 准入控制插件等。 @@ -220,6 +280,7 @@ at Response Level.--> (例如 podsecuritypolicy.admission.k8s.io/policy)。 映射中的键值应该比较简洁。 当审计级别为 Metadata 时会包含 annotations 字段。 +

        @@ -230,7 +291,9 @@ at Response Level.--> +

        EventList 是审计事件(Event)的列表。 +

        字段描述
        +

        生成事件所对应的审计级别。 +

        auditID [必需]
        -k8s.io/apimachinery/pkg/types.UID +k8s.io/apimachinery/pkg/types.UID
        +

        为每个请求所生成的唯一审计 ID。 +

        +

        生成此事件时请求的处理阶段。 +

        +

        requestURI 是客户端发送到服务器端的请求 URI。 +

        +

        verb 是与请求对应的 Kubernetes 动词。对于非资源请求,此字段为 HTTP 方法的小写形式。 +

        +

        关于认证用户的信息。 +

        +

        关于所伪装(impersonated)的用户的信息。 +

        - + +

        发起请求和中间代理的源 IP 地址。 + 源 IP 从以下(按顺序)列出: +

        +
          +
1. X-Forwarded-For 请求标头中的 IP
2. X-Real-Ip 标头(如果 X-Forwarded-For 列表中不存在)
3. 连接的远程地址(如果它无法与此处列表中的最后一个 IP(X-Forwarded-For 或 X-Real-Ip)匹配)。
   注意:除最后一个 IP 外的所有 IP 均可由客户端任意设置。
        +

        userAgent 中记录客户端所报告的用户代理(User Agent)字符串。 注意 userAgent 信息是由客户端提供的,一定不要信任。 +

        +

        此请求所指向的对象引用。对于 List 类型的请求或者非资源请求,此字段可忽略。 +

        requestObject
        -k8s.io/apimachinery/pkg/runtime.Unknown +k8s.io/apimachinery/pkg/runtime.Unknown
        +

        来自请求的 API 对象,以 JSON 格式呈现。requestObject 在请求中按原样记录 (可能会采用 JSON 重新编码),之后会进入版本转换、默认值填充、准入控制以及 配置信息合并等阶段。此对象为外部版本化的对象类型,甚至其自身可能并不是一个 合法的对象。对于非资源请求,此字段被忽略。 - 只有当审计级别为 Request 或更高的时候才会记录。 + 只有当审计级别为 Request 或更高的时候才会记录。 +

        responseObject
        -k8s.io/apimachinery/pkg/runtime.Unknown +k8s.io/apimachinery/pkg/runtime.Unknown
        +

        响应中包含的 API 对象,以 JSON 格式呈现。requestObject 是在被转换为外部类型 并序列化为 JSON 格式之后才被记录的。 对于非资源请求,此字段会被忽略。 只有审计级别为 Response 时才会记录。 +

        +

        请求到达 API 服务器时的时间。 +

        +

        请求到达当前审计阶段时的时间。 +

        @@ -270,7 +333,9 @@ EventList 是审计事件(Event)的列表。 Policy defines the configuration of audit logging, and the rules for how different request categories are logged. --> +

        Policy 定义的是审计日志的配置以及不同类型请求的日志记录规则。 +

        字段描述
        @@ -284,7 +349,9 @@ Policy 定义的是审计日志的配置以及不同类型请求的日志记录 @@ -297,11 +364,13 @@ Policy 定义的是审计日志的配置以及不同类型请求的日志记录 A request may match multiple rules, in which case the FIRST matching rule is used. The default audit level is None, but can be overridden by a catch-all rule at the end of the list. PolicyRules are strictly ordered.--> +

        字段 rules 设置请求要被记录的审计级别(level)。 每个请求可能会与多条规则相匹配;发生这种状况时遵从第一条匹配规则。 默认的审计级别是 None,不过可以在列表的末尾使用一条全抓(catch-all)规则 重载其设置。 列表中的规则(PolicyRule)是严格有序的。 +

        @@ -311,9 +380,34 @@ PolicyRules are strictly ordered.-->
        + + + + + + @@ -324,7 +418,9 @@ PolicyRules are strictly ordered.--> +

        PolicyList 是由审计策略(Policy)组成的列表。 +

        字段描述
        +

        包含 metadata 字段是为了便于与 API 基础设施之间实现互操作。 +

        参考 Kubernetes API 文档了解 metadata 字段的详细信息。
        +

        字段 omitStages 是一个阶段(Stage)列表,其中包含无须生成事件的阶段。 注意这一选项也可以通过每条规则来设置。 审计组件最终会忽略出现在 omitStages 中阶段,也会忽略规则中的阶段。 +

        +
        +omitManagedFields
        +bool +
        + +

        +omitManagedFields 标明将请求和响应主体写入 API 审计日志时,是否省略其托管字段。 +此字段值用作全局默认值 - 'true' 值将省略托管字段,否则托管字段将包含在 API 审计日志中。 +请注意,也可以按规则指定此值,在这种情况下,规则中指定的值将覆盖全局默认值。 +

        @@ -363,7 +459,9 @@ PolicyList 是由审计策略(Policy)组成的列表。 +

        GroupResources 代表的是某 API 组中的资源类别。 +

        字段描述
        @@ -384,28 +482,39 @@ GroupResources 代表的是某 API 组中的资源类别。 []string @@ -416,9 +525,11 @@ For example: +

        字段 resourceNames 是策略将匹配的资源实例名称列表。 使用此字段时,resources 必须指定。 空的 resourceNames 列表意味着资源的所有实例都会匹配到此策略。 +

        @@ -442,7 +553,9 @@ For example: +

        Level 定义的是审计过程中在日志内记录的信息量。 +

        ## `ObjectReference` {#audit-k8s-io-v1-ObjectReference} @@ -456,7 +569,9 @@ Level 定义的是审计过程中在日志内记录的信息量。 +

        ObjectReference 包含的是用来检查或修改所引用对象时将需要的全部信息。 +

        字段描述
        - - 字段 resources 是此规则所适用的资源的列表。
        +'∗/scale' matches all scale subresources. +--> +

        + 字段 resources 是此规则所适用的资源的列表。 +

        +
        +

        例如:
        'pods' 匹配 Pods;
        'pods/log' 匹配 Pods 的 log 子资源;
        '∗' 匹配所有资源及其子资源;
        'pods/∗' 匹配 Pods 的所有子资源;
        '∗/scale' 匹配所有的 scale 子资源。

        +

        - 如果存在通配符,则合法性检查逻辑会确保 resources 中的条目不会彼此重叠。
        +

        + 如果存在通配符,则合法性检查逻辑会确保 resources 中的条目不会彼此重叠。 +

        +
        +

        空的列表意味着规则适用于该 API 组中的所有资源及其子资源。 +

        @@ -487,7 +602,7 @@ ObjectReference 包含的是用来检查或修改所引用对象时将需要的 @@ -510,7 +627,9 @@ ObjectReference 包含的是用来检查或修改所引用对象时将需要的 @@ -545,8 +664,10 @@ ObjectReference 包含的是用来检查或修改所引用对象时将需要的 PolicyRule maps requests based off metadata to an audit Level. Requests must match the rules of every field (an intersection of rules). --> +

        PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别。 请求必须与每个字段所定义的规则都匹配(即 rules 的交集)才被视为匹配。 +

        字段描述
        uid
        -k8s.io/apimachinery/pkg/types.UID +k8s.io/apimachinery/pkg/types.UID
        资源对象的唯一标识(UID)。 @@ -500,8 +615,10 @@ ObjectReference 包含的是用来检查或修改所引用对象时将需要的 +

        字段 apiGroup 给出包含所引用对象的 API 组的名称。 空字符串代表 core API 组。 +

        +

        字段 apiVersion 是包含所引用对象的 API 组的版本。 +

        @@ -557,7 +678,9 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别 @@ -567,8 +690,10 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别 @@ -579,8 +704,10 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别 +

        此规则所适用的用户组的列表。如果用户是所列用户组中任一用户组的成员,则视为匹配。 空列表意味着适用于所有用户组。 +

        @@ -590,8 +717,10 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别
        @@ -600,8 +729,10 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别 @@ -610,12 +741,14 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别 +

        此规则所适用的名字空间列表。 - 空字符串("")意味着适用于非名字空间作用域的资源。 + 空字符串("")意味着适用于非名字空间作用域的资源。 空列表意味着适用于所有名字空间。 +

        @@ -642,12 +777,44 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别 +

        字段 omitStages 是一个阶段(Stage)列表,针对所列的阶段服务器不会生成审计事件。 注意这一选项也可以在策略(Policy)级别指定。服务器审计组件会忽略 omitStages 中给出的阶段,也会忽略策略中给出的阶段。 空列表意味着不对阶段作任何限制。 - - +

        + + + + +
        + + + +
        字段描述
        +

        与此规则匹配的请求所对应的日志记录级别(Level)。 +

        +

        根据身份认证所确定的用户名的列表,给出此规则所适用的用户。 空列表意味着适用于所有用户。 +

        +

        此规则所适用的动词(verb)列表。 空列表意味着适用于所有动词。 +

        +

        此规则所适用的资源类别列表。 空列表意味着适用于 API 组中的所有资源类别。 +

        nonResourceURLs
        @@ -627,11 +760,13 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别 Examples: "/metrics" - Log requests for apiserver metrics "/healthz∗" - Log all health checks--> +

        字段 nonResourceURLs 给出一组需要被审计的 URL 路径。 允许使用 ∗,但只能作为路径中最后一个完整分段。
        例如:
        "/metrics" - 记录对 API 服务器度量值(metrics)的所有请求;
        "/healthz∗" - 记录所有健康检查请求。 +

        omitManagedFields
        + bool +
        + +

        + omitManagedFields 决定将请求和响应主体写入 API 审计日志时,是否省略其托管字段。 +

        +
          +
• 值为 'true' 将从 API 审计日志中删除托管字段;
• 值为 'false' 表示托管字段应包含在 API 审计日志中。
请注意,如果指定,此规则中的值将覆盖全局默认值;
如果未指定,则使用 policy.omitManagedFields 中指定的全局默认值。
        +
        @@ -670,5 +837,7 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别 +

        Stage 定义在请求处理过程中可以生成审计事件的阶段。 +
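结合上文对 Policy、PolicyRule、Level 与 Stage 的描述,下面给出一个最小的审计策略文件示例(仅作示意,其中的规则、资源与阶段均为假设的取值):

```yaml
# 假设性示例:一个最小的审计策略
apiVersion: audit.k8s.io/v1
kind: Policy
# 在 RequestReceived 阶段不生成审计事件
omitStages:
  - "RequestReceived"
rules:
  # 对 core API 组中 Pod 的请求记录到 RequestResponse 级别
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
  # 其他请求一律记录到 Metadata 级别
  - level: Metadata
```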

        diff --git a/content/zh/docs/reference/config-api/apiserver-config.v1.md b/content/zh-cn/docs/reference/config-api/apiserver-config.v1.md similarity index 100% rename from content/zh/docs/reference/config-api/apiserver-config.v1.md rename to content/zh-cn/docs/reference/config-api/apiserver-config.v1.md diff --git a/content/zh/docs/reference/config-api/apiserver-config.v1alpha1.md b/content/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1.md similarity index 100% rename from content/zh/docs/reference/config-api/apiserver-config.v1alpha1.md rename to content/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1.md diff --git a/content/zh/docs/reference/config-api/apiserver-encryption.v1.md b/content/zh-cn/docs/reference/config-api/apiserver-encryption.v1.md similarity index 100% rename from content/zh/docs/reference/config-api/apiserver-encryption.v1.md rename to content/zh-cn/docs/reference/config-api/apiserver-encryption.v1.md diff --git a/content/zh-cn/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md b/content/zh-cn/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md new file mode 100644 index 0000000000000..32503d37fb6e1 --- /dev/null +++ b/content/zh-cn/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md @@ -0,0 +1,152 @@ +--- +title: Event Rate Limit Configuration (v1alpha1) +content_type: tool-reference +package: eventratelimit.admission.k8s.io/v1alpha1 +--- + + +## 资源类型 {#resource-types} + +- [Configuration](#eventratelimit-admission-k8s-io-v1alpha1-Configuration) + +## `Configuration` {#eventratelimit-admission-k8s-io-v1alpha1-Configuration} + + +

        Configuration 为 EventRateLimit 准入控制器提供配置数据。

        + + + + + + + + + + + + +
        字段描述
        apiVersion
        string
        eventratelimit.admission.k8s.io/v1alpha1
        kind
        string
        Configuration
        limits [Required]
        +[]Limit +
        + +

        limits 是为所接收到的事件查询设置的限制。可以针对服务器端接收到的事件设置限制, +按逐个名字空间、逐个用户、或逐个来源+对象组合的方式均可以。 +至少需要设置一种限制。

        +
        + +## `Limit` {#eventratelimit-admission-k8s-io-v1alpha1-Limit} + + +**出现在:** + +- [Configuration](#eventratelimit-admission-k8s-io-v1alpha1-Configuration) + + +

        Limit 是为特定限制类型提供的配置数据。

        + + + + + + + + + + + + + + + + + + +
        字段描述
        type [必需]
        +LimitType +
        + +

        type 是此配置所适用的限制的类型。

        +
        qps [必需]
        +int32 +
        + +

        qps 是针对此类型的限制每秒钟所允许的事件查询次数。qps 和 burst +字段一起用来确定是否特定的事件查询会被接受。qps 确定的是当超出查询数量的 +burst 值时可以接受的查询个数。

        +
        burst [必需]
        +int32 +
        + +

        burst 是针对此类型限制的突发事件查询数量。qps 和 burst 字段一起使用可用来确定特定的事件查询是否被接受。 +burst 字段确定针对特定的事件桶(bucket)可以接受的规模上限。 +例如,如果 burst 是 10,qps 是 3,那么准入控制器会在接收 10 个查询之后阻塞所有查询。 +每秒钟可以额外允许 3 个查询。如果这一限额未被用尽,则剩余的限额会被顺延到下一秒钟, +直到再次达到 10 个限额的上限。

        +
        cacheSize
        +int32 +
        + +

        cacheSize 是此类型限制的 LRU 缓存的规模。如果某个事件桶(bucket)被从缓存中剔除, +该事件桶所对应的限额也会被重置。如果后来再次收到针对某个已被剔除的事件桶的查询, +则该事件桶会重新以干净的状态进入缓存,因而获得全量的突发查询配额。

        +

        默认的缓存大小是 4096。

        +

        如果 limitType 是 “server”,则 cacheSize 设置会被忽略。

        +
        + +## `LimitType` {#eventratelimit-admission-k8s-io-v1alpha1-LimitType} + + +(`string` 类型的别名) + +**出现在:** + +- [Limit](#eventratelimit-admission-k8s-io-v1alpha1-Limit) + + +

        LimitType 是限制类型(例如:per-namespace)。
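下面是一个与上述 Configuration、Limit 和 LimitType 对应的假设性配置示例(qps、burst、cacheSize 等取值仅作示意):

```yaml
# 假设性示例:EventRateLimit 准入控制器配置
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  # 按名字空间限制事件查询
  - type: Namespace
    qps: 50
    burst: 100
    cacheSize: 2000
  # 按用户限制事件查询
  - type: User
    qps: 10
    burst: 50
```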

        + + diff --git a/content/zh/docs/reference/config-api/apiserver-webhookadmission.v1.md b/content/zh-cn/docs/reference/config-api/apiserver-webhookadmission.v1.md similarity index 100% rename from content/zh/docs/reference/config-api/apiserver-webhookadmission.v1.md rename to content/zh-cn/docs/reference/config-api/apiserver-webhookadmission.v1.md diff --git a/content/zh/docs/reference/config-api/client-authentication.v1.md b/content/zh-cn/docs/reference/config-api/client-authentication.v1.md similarity index 100% rename from content/zh/docs/reference/config-api/client-authentication.v1.md rename to content/zh-cn/docs/reference/config-api/client-authentication.v1.md diff --git a/content/zh/docs/reference/config-api/client-authentication.v1beta1.md b/content/zh-cn/docs/reference/config-api/client-authentication.v1beta1.md similarity index 100% rename from content/zh/docs/reference/config-api/client-authentication.v1beta1.md rename to content/zh-cn/docs/reference/config-api/client-authentication.v1beta1.md diff --git a/content/zh-cn/docs/reference/config-api/imagepolicy.v1alpha1.md b/content/zh-cn/docs/reference/config-api/imagepolicy.v1alpha1.md new file mode 100644 index 0000000000000..9c02384d349f8 --- /dev/null +++ b/content/zh-cn/docs/reference/config-api/imagepolicy.v1alpha1.md @@ -0,0 +1,210 @@ +--- +title: Image Policy API (v1alpha1) +content_type: tool-reference +package: imagepolicy.k8s.io/v1alpha1 +--- + + +## 资源类型 {#resource-types} + +- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview) + +## `ImageReview` {#imagepolicy-k8s-io-v1alpha1-ImageReview} + + +

        ImageReview 检查某个 Pod 中是否可以使用某些镜像。

        + + + + + + + + + + + + + + + + + + + +
        字段描述
        apiVersion
        string
        imagepolicy.k8s.io/v1alpha1
        kind
        string
        ImageReview
        metadata
        +meta/v1.ObjectMeta +
        + +

        标准的对象元数据。更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

        + 参阅 Kubernetes API 文档了解 metadata 字段的内容。 +
        spec [必需]
        +ImageReviewSpec +
        + +

        spec 中包含与被评估的 Pod 相关的信息。

        +
        status
        +ImageReviewStatus +
        + +

        status 由后台负责填充,用来标明 Pod 是否会被准入。

        +
        + +## `ImageReviewContainerSpec` {#imagepolicy-k8s-io-v1alpha1-ImageReviewContainerSpec} + + +**出现在:** + +- [ImageReviewSpec](#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec) + + +

        ImageReviewContainerSpec 是对 Pod 创建请求中的某容器的描述。

        + + + + + + + + + + +
        字段描述
        image
        +string +
        + +

        此字段的格式可以是 image:tag 或 image@SHA:012345679abcdef。

        +
        + +## `ImageReviewSpec` {#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec} + + +**出现在:** + +- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview) + + +

        ImageReviewSpec 是对 Pod 创建请求的描述。

        + + + + + + + + + + + + + + + +
        字段描述
        containers
        +[]ImageReviewContainerSpec +
        + +

        containers 是一个列表,其中包含正被创建的 Pod 中各容器的信息子集。

        +
        annotations
        +map[string]string +
        + +

        annotations 是一个键值对列表,内容抽取自 Pod 的注解(annotations)。 +其中仅包含与模式 *.image-policy.k8s.io/* 匹配的键。 +每个 Webhook 后端要负责决定如何解释这些注解(如果有的话)。

        + +
        namespace
        +string +
        + +

        namespace 是 Pod 创建所针对的名字空间。

        +
        + +## `ImageReviewStatus` {#imagepolicy-k8s-io-v1alpha1-ImageReviewStatus} + + +**出现在:** + +- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview) + + +

        ImageReviewStatus 是针对 Pod 创建请求所作的评估结果。

        + + + + + + + + + + + + + + + +
        字段描述
        allowed [必需]
        +bool +
        + +

        allowed 表明所有镜像都可以被运行。

        +
        reason
        +string +
        + +

当 allowed 不是 false 时,reason 应该为空;否则其中应包含出错信息的简短描述。Kubernetes 在向用户展示此信息时可能会截断过长的错误文字。

        +
        auditAnnotations
        +map[string]string +
        + +

        auditAnnotations 会被通过 AddAnnotation 添加到准入控制器的 attributes 对象上。 +注解键应该不含前缀,换言之,准入控制器会添加合适的前缀。

        +
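下面给出一个与上述类型对应的 ImageReview 对象示例(仅作示意,镜像名、注解键与名字空间均为假设值):

```yaml
# 假设性示例:发送给镜像策略后端的 ImageReview 对象
apiVersion: imagepolicy.k8s.io/v1alpha1
kind: ImageReview
spec:
  containers:
    - image: myrepo/myimage:v1
    - image: myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a
  annotations:
    mycluster.image-policy.k8s.io/ticket-1234: break-glass
  namespace: mynamespace
```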
        + diff --git a/content/zh/docs/reference/config-api/kube-proxy-config.v1alpha1.md b/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md similarity index 52% rename from content/zh/docs/reference/config-api/kube-proxy-config.v1alpha1.md rename to content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md index 1c89395b51001..d0c388f054b77 100644 --- a/content/zh/docs/reference/config-api/kube-proxy-config.v1alpha1.md +++ b/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md @@ -41,7 +41,8 @@ KubeProxyConfiguration 包含用来配置 Kubernetes 代理服务器的所有配 - featureGates 是一个功能特性名称到布尔值的映射表,用来启用或者禁用测试性质的功能特性。 +

        featureGates 字段是一个功能特性名称到布尔值的映射表, + 用来启用或者禁用测试性质的功能特性。

        bindAddress [必需]
        @@ -52,8 +53,8 @@ KubeProxyConfiguration 包含用来配置 Kubernetes 代理服务器的所有配 bindAddress is the IP address for the proxy server to serve on (set to 0.0.0.0 for all interfaces) --> - bindAddress 是代理服务器提供服务时所用 IP 地址(设置为 0.0.0.0 -时意味着在所有网络接口上提供服务)。 +

        bindAddress 字段是代理服务器提供服务时所用 IP 地址(设置为 0.0.0.0 +时意味着在所有网络接口上提供服务)。

        healthzBindAddress [必需]
        @@ -64,8 +65,8 @@ for all interfaces) healthzBindAddress is the IP address and port for the health check server to serve on, defaulting to 0.0.0.0:10256 --> - healthzBindAddress 是健康状态检查服务器提供服务时所使用的的 IP 地址和端口, - 默认设置为 '0.0.0.0:10256'。 +

healthzBindAddress 字段是健康状态检查服务器提供服务时所使用的 IP 地址和端口, + 默认设置为 '0.0.0.0:10256'。

        metricsBindAddress [必需]
        @@ -76,8 +77,8 @@ defaulting to 0.0.0.0:10256 metricsBindAddress is the IP address and port for the metrics server to serve on, defaulting to 127.0.0.1:10249 (set to 0.0.0.0 for all interfaces) --> - metricsBindAddress 是度量值服务器提供服务时所使用的的 IP 地址和端口, - 默认设置为 '127.0.0.1:10249'(设置为 0.0.0.0 意味着在所有接口上提供服务)。 +

metricsBindAddress 字段是度量值服务器提供服务时所使用的 IP 地址和端口, + 默认设置为 '127.0.0.1:10249'(设置为 0.0.0.0 意味着在所有接口上提供服务)。

        bindAddressHardFail [必需]
        @@ -87,7 +88,8 @@ defaulting to 127.0.0.1:10249 (set to 0.0.0.0 for all interfaces) - bindAddressHardFail 设置为 true 时,kube-proxy 将无法绑定到某端口这类问题视为致命错误并直接退出。 +

        bindAddressHardFail 字段设置为 true 时, + kube-proxy 将无法绑定到某端口这类问题视为致命错误并直接退出。

        enableProfiling [必需]
        @@ -98,8 +100,8 @@ defaulting to 127.0.0.1:10249 (set to 0.0.0.0 for all interfaces) enableProfiling enables profiling via web interface on /debug/pprof handler. Profiling handlers will be handled by metrics server. --> - enableProfiling 通过 '/debug/pprof' 处理程序在 Web 界面上启用性能分析。 - 性能分析处理程序将由度量值服务器执行。 +

        enableProfiling 字段通过 '/debug/pprof' 处理程序在 Web 界面上启用性能分析。 + 性能分析处理程序将由度量值服务器执行。

        clusterCIDR [必需]
        @@ -111,8 +113,9 @@ Profiling handlers will be handled by metrics server. bridge traffic coming from outside of the cluster. If not provided, no off-cluster bridging will be performed. --> - clusterCIDR 是集群中 Pods 所使用的 CIDR 范围。这一地址范围用于对来自集群外的请求 - 流量进行桥接。如果未设置,则 kube-proxy 不会对非集群内部的流量做桥接。 +

        clusterCIDR 字段是集群中 Pods 所使用的 CIDR 范围。 + 这一地址范围用于对来自集群外的请求流量进行桥接。 + 如果未设置,则 kube-proxy 不会对非集群内部的流量做桥接。

        hostnameOverride [必需]
        @@ -122,7 +125,8 @@ no off-cluster bridging will be performed. - hostnameOverride 非空时,所给的字符串(而不是实际的主机名)将被用作 kube-proxy 的标识。 +

        hostnameOverride 字段非空时, + 所给的字符串(而不是实际的主机名)将被用作 kube-proxy 的标识。

        clientConnection [必需]
        @@ -133,7 +137,8 @@ no off-cluster bridging will be performed. clientConnection specifies the kubeconfig file and client connection settings for the proxy server to use when communicating with the apiserver. --> - clientConnection 给出代理服务器与 API 服务器通信时要使用的 kubeconfig 文件和客户端链接设置。 +

        clientConnection 字段给出代理服务器与 API + 服务器通信时要使用的 kubeconfig 文件和客户端链接设置。

        iptables [必需]
        @@ -143,7 +148,7 @@ server to use when communicating with the apiserver. - iptables 字段包含与 iptables 相关的配置选项。 +

iptables 字段包含与 iptables 相关的配置选项。

        ipvs [必需]
        @@ -153,7 +158,7 @@ server to use when communicating with the apiserver. - ipvs 中包含与 ipvs 相关的配置选项。 +

        ipvs 字段中包含与 ipvs 相关的配置选项。

        oomScoreAdj [必需]
        @@ -164,8 +169,8 @@ server to use when communicating with the apiserver. oomScoreAdj is the oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000] --> - oomScoreAdj 是为 kube-proxy 进程所设置的 oom-score-adj 值。 - 此设置值必须介于 [-1000, 1000] 范围内。 +

        oomScoreAdj 字段是为 kube-proxy 进程所设置的 oom-score-adj 值。 + 此设置值必须介于 [-1000, 1000] 范围内。

        mode [必需]
        @@ -175,7 +180,7 @@ the range [-1000, 1000] - mode 用来设置将使用的代理模式。 +

        mode 字段用来设置将使用的代理模式。

        portRange [必需]
        @@ -186,20 +191,20 @@ the range [-1000, 1000] portRange is the range of host ports (beginPort-endPort, inclusive) that may be consumed in order to proxy service traffic. If unspecified (0-0) then ports will be randomly chosen. --> - portRange 是主机端口的范围,形式为 ‘beginPort-endPort’(包含边界), - 用来设置代理服务所使用的端口。如果未指定(即‘0-0’),则代理服务会随机选择端口号。 +

        portRange 字段是主机端口的范围,形式为 ‘beginPort-endPort’(包含边界), + 用来设置代理服务所使用的端口。如果未指定(即‘0-0’),则代理服务会随机选择端口号。

        udpIdleTimeout [必需]
        -meta/v1.Duration +meta/v1.Duration - udpIdleTimeout 用来设置 UDP 链接保持活跃的时长(例如,'250ms'、'2s')。 - 此值必须大于 0。此字段仅适用于 mode 值为 'userspace' 的场合。 +

        udpIdleTimeout 字段用来设置 UDP 链接保持活跃的时长(例如,'250ms'、'2s')。 + 此值必须大于 0。此字段仅适用于 mode 值为 'userspace' 的场合。

        conntrack [必需]
        @@ -209,18 +214,18 @@ Must be greater than 0. Only applicable for proxyMode=userspace. - conntrack 包含与 conntrack 相关的配置选项。 +

        conntrack 字段包含与 conntrack 相关的配置选项。

        configSyncPeriod [必需]
        -meta/v1.Duration +meta/v1.Duration - configSyncPeriod 是从 API 服务器刷新配置的频率。此值必须大于 0。 +

        configSyncPeriod 字段是从 API 服务器刷新配置的频率。此值必须大于 0。

        nodePortAddresses [必需]
        @@ -236,13 +241,15 @@ If set it to "127.0.0.0/8", kube-proxy will only select the loopback interface f If set it to a non-zero IP block, kube-proxy will filter that down to just the IPs that applied to the node. An empty string slice is meant to select all network interfaces. --> - nodePortAddresses 是 kube-proxy 进程的 --nodeport-addresses 命令行参数设置。 +

        nodePortAddresses 字段是 kube-proxy 进程的 + --nodeport-addresses 命令行参数设置。 此值必须是合法的 IP 段。所给的 IP 段会作为参数来选择 NodePort 类型服务所使用的接口。 如果有人希望将本地主机(Localhost)上的服务暴露给本地访问,同时暴露在某些其他网络接口上 以实现某种目标,可以使用 IP 段的列表。 - 如果此值被设置为 "127.0.0.0/8",则 kube-proxy 将仅为 NodePort 服务选择本地回路(loopback)接口。 + 如果此值被设置为 "127.0.0.0/8",则 kube-proxy 将仅为 NodePort + 服务选择本地回路(loopback)接口。 如果此值被设置为非零的 IP 段,则 kube-proxy 会对 IP 作过滤,仅使用适用于当前节点的 IP 地址。 - 空的字符串列表意味着选择所有网络接口。 + 空的字符串列表意味着选择所有网络接口。

        winkernel [必需]
        @@ -252,7 +259,7 @@ An empty string slice is meant to select all network interfaces. - winkernel 包含与 winkernel 相关的配置选项。 +

        winkernel 字段包含与 winkernel 相关的配置选项。

        showHiddenMetricsForVersion [必需]
        @@ -262,8 +269,8 @@ An empty string slice is meant to select all network interfaces. - showHiddenMetricsForVersion 给出的是一个 Kubernetes 版本号字符串,用来设置你希望 - 显示隐藏度量值的版本。 +

        showHiddenMetricsForVersion 字段给出的是一个 Kubernetes 版本号字符串, + 用来设置你希望显示隐藏度量值的版本。

        detectLocalMode [必需]
        @@ -273,7 +280,66 @@ An empty string slice is meant to select all network interfaces. - detectLocalMode 用来确定检测本地流量的方式,默认为 LocalModeClusterCIDR。 +

        detectLocalMode 字段用来确定检测本地流量的方式,默认为 LocalModeClusterCIDR。

        + + +detectLocal [必需]
        +DetectLocalConfiguration + + + +

        detectLocal 字段包含与 DetectLocalMode 相关的可选配置设置。
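下面是一个把上文描述的部分字段放在一起的假设性 KubeProxyConfiguration 片段(各取值仅作示意):

```yaml
# 假设性示例:kube-proxy 配置文件片段
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
metricsBindAddress: 127.0.0.1:10249
clusterCIDR: 10.244.0.0/16
mode: "ipvs"
detectLocalMode: "ClusterCIDR"
ipvs:
  scheduler: "rr"
  syncPeriod: 30s
```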

        + + + + + +## `DetectLocalConfiguration` {#kubeproxy-config-k8s-io-v1alpha1-DetectLocalConfiguration} + + +**出现在:** + +- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration) + + +DetectLocalConfiguration 包含与 DetectLocalMode 选项相关的可选设置。 + + + + + + + + + + @@ -306,8 +372,8 @@ KubeProxyConntrackConfiguration 包含为 Kubernetes 代理服务器提供的 co maxPerCore is the maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore min). --> - maxPerCore 是每个 CPU 核所跟踪的 NAT 链接个数上限 - (0 意味着保留当前上限限制并忽略 min 字段设置值)。 +

        maxPerCore 字段是每个 CPU 核所跟踪的 NAT 链接个数上限 + (0 意味着保留当前上限限制并忽略 min 字段设置值)。

        @@ -378,8 +444,8 @@ KubeProxyIPTablesConfiguration 包含用于 Kubernetes 代理服务器的、与 masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using the pure iptables proxy mode. Values must be within the range [0, 31]. --> - masqueradeBit 是 iptables fwmark 空间中的具体一位,用来在纯 iptables 代理模式下 - 设置 SNAT。此值必须介于 [0, 31](含边界值)。 +

        masqueradeBit 字段是 iptables fwmark 空间中的具体一位, + 用来在纯 iptables 代理模式下设置 SNAT。此值必须介于 [0, 31](含边界值)。

        @@ -438,25 +505,25 @@ KubeProxyIPVSConfiguration 包含用于 Kubernetes 代理服务器的、与 ipvs @@ -558,7 +625,7 @@ KubeProxyWinkernelConfiguration 包含 Kubernetes 代理服务器的 Windows/HNS networkName is the name of the network kube-proxy will use to create endpoints and policies --> - networkName 是 kube-proxy 用来创建端点和策略的网络名称。 +

        networkName 字段是 kube-proxy 用来创建端点和策略的网络名称。

        + + + + + + @@ -665,6 +756,12 @@ this always falls back to the userspace proxy. - [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + +- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) + @@ -681,7 +778,7 @@ ClientConnectionConfiguration 包含构造客户端所需要的细节信息。 - kubeconfig 是指向一个 KubeConfig 文件的路径。 +

        kubeconfig 字段是指向一个 KubeConfig 文件的路径。

        + + +
        字段描述
        bridgeInterface [必需]
        +string +
        + +

        bridgeInterface 字段是一个表示单个桥接接口名称的字符串参数。 + Kube-proxy 将来自这个给定桥接接口的流量视为本地流量。 + 如果 DetectLocalMode 设置为 LocalModeBridgeInterface,则应设置该参数。

        +
        interfaceNamePrefix [必需]
        +string +
        + +

        interfaceNamePrefix 字段是一个表示单个接口前缀名称的字符串参数。 + Kube-proxy 将来自一个或多个与给定前缀匹配的接口流量视为本地流量。 + 如果 DetectLocalMode 设置为 LocalModeInterfaceNamePrefix,则应设置该参数。

        min [必需]
        @@ -318,24 +384,24 @@ per CPU core (0 to leave the limit as-is and ignore min). min is the minimum value of connect-tracking records to allocate, regardless of conntrackMaxPerCore (set maxPerCore=0 to leave the limit as-is). --> - min 给出要分配的链接跟踪记录个数下限。 - 设置此值时会忽略 maxPerCore 的值(将 maxPerCore 设置为 0 时不会调整上限值)。 +

        min 字段给出要分配的链接跟踪记录个数下限。 + 设置此值时会忽略 maxPerCore 的值(将 maxPerCore 设置为 0 时不会调整上限值)。

        tcpEstablishedTimeout [必需]
        -meta/v1.Duration +meta/v1.Duration
        - tcpEstablishedTimeout 给出空闲 TCP 连接的保留时间(例如,'2s')。 - 此值必须大于 0。 +

        tcpEstablishedTimeout 字段给出空闲 TCP 连接的保留时间(例如,'2s')。 + 此值必须大于 0。

        tcpCloseWaitTimeout [必需]
        -meta/v1.Duration +meta/v1.Duration
        - tcpCloseWaitTimeout 用来设置空闲的、处于 CLOSE_WAIT 状态的 conntrack 条目 +

        tcpCloseWaitTimeout 字段用来设置空闲的、处于 CLOSE_WAIT 状态的 conntrack 条目 保留在 conntrack 表中的时间长度(例如,'60s')。 - 此设置值必须大于 0。 + 此设置值必须大于 0。

        masqueradeAll [必需]
        @@ -389,30 +455,31 @@ the pure iptables proxy mode. Values must be within the range [0, 31]. - masqueradeAll 用来通知 kube-proxy 在使用纯 iptables 代理模式时对所有流量执行 - SNAT 操作。 +

        masqueradeAll 字段用来通知 kube-proxy + 在使用纯 iptables 代理模式时对所有流量执行 SNAT 操作。

        syncPeriod [必需]
        -meta/v1.Duration +meta/v1.Duration
        - syncPeriod 给出 iptables 规则的刷新周期(例如,'5s'、'1m'、'2h22m')。 - 此值必须大于 0。 +

        syncPeriod 字段给出 iptables + 规则的刷新周期(例如,'5s'、'1m'、'2h22m')。此值必须大于 0。

        minSyncPeriod [必需]
        -meta/v1.Duration +meta/v1.Duration
        - minSyncPeriod 给出 iptables 规则被刷新的最小周期(例如,'5s'、'1m'、'2h22m')。 +

        minSyncPeriod 字段给出 iptables + 规则被刷新的最小周期(例如,'5s'、'1m'、'2h22m')。

        syncPeriod [必需]
        -meta/v1.Duration +meta/v1.Duration
        - syncPeriod 给出 ipvs 规则的刷新周期(例如,'5s'、'1m'、'2h22m')。 - 此值必须大于 0。 +

        syncPeriod 字段给出 ipvs 规则的刷新周期(例如,'5s'、'1m'、'2h22m')。 + 此值必须大于 0。

        minSyncPeriod [必需]
        -meta/v1.Duration +meta/v1.Duration
        - minSyncPeriod 给出 ipvs 规则被刷新的最小周期(例如,'5s'、'1m'、'2h22m')。 +

        minSyncPeriod 字段给出 ipvs 规则被刷新的最小周期(例如,'5s'、'1m'、'2h22m')。

        scheduler [必需]
        @@ -466,7 +533,7 @@ KubeProxyIPVSConfiguration 包含用于 Kubernetes 代理服务器的、与 ipvs - IPVS 调度器。 +

        IPVS 调度器。

        excludeCIDRs [必需]
        @@ -477,7 +544,7 @@ KubeProxyIPVSConfiguration 包含用于 Kubernetes 代理服务器的、与 ipvs excludeCIDRs is a list of CIDR's which the ipvs proxier should not touch when cleaning up ipvs services. --> - excludeCIDRs 取值为一个 CIDR 列表,ipvs 代理程序在清理 IPVS 服务时不应触碰这些 IP 地址。 +

        excludeCIDRs 字段取值为一个 CIDR 列表,ipvs 代理程序在清理 IPVS 服务时不应触碰这些 IP 地址。

        strictARP [必需]
        @@ -488,44 +555,44 @@ when cleaning up ipvs services. strict ARP configure arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interface --> - strictARP 用来配置 arp_ignore 和 arp_announce,以避免(错误地)响应来自 kube-ipvs0 接口的 - ARP 查询请求。 +

        strictARP 字段用来配置 arp_ignore 和 arp_announce,以避免(错误地)响应来自 kube-ipvs0 接口的 + ARP 查询请求。

        tcpTimeout [必需]
        -meta/v1.Duration +meta/v1.Duration
        - tcpTimeout 是用于设置空闲 IPVS TCP 会话的超时值。 - 默认值为 0,意味着使用系统上当前的超时值设置。 +

        tcpTimeout 字段是用于设置空闲 IPVS TCP 会话的超时值。 + 默认值为 0,意味着使用系统上当前的超时值设置。

        tcpFinTimeout [必需]
        -meta/v1.Duration +meta/v1.Duration
        - tcpFinTimeout 用来设置 IPVS TCP 会话在收到 FIN 之后的超时值。 - 默认值为 0,意味着使用系统上当前的超时值设置。 +

        tcpFinTimeout 字段用来设置 IPVS TCP 会话在收到 FIN 之后的超时值。 + 默认值为 0,意味着使用系统上当前的超时值设置。

        udpTimeout [必需]
        -meta/v1.Duration +meta/v1.Duration
        - udpTimeout 用来设置 IPVS UDP 包的超时值。 - 默认值为 0,意味着使用系统上当前的超时值设置。 +

        udpTimeout 字段用来设置 IPVS UDP 包的超时值。 + 默认值为 0,意味着使用系统上当前的超时值设置。

        sourceVip [必需]
        @@ -569,7 +636,7 @@ to create endpoints and policies sourceVip is the IP address of the source VIP endoint used for NAT when loadbalancing --> - sourceVip 是执行负载均衡时进行 NAT 转换所使用的源端 VIP 端点 IP 地址。 +

        sourceVip 字段是执行负载均衡时进行 NAT 转换所使用的源端 VIP 端点 IP 地址。

        enableDSR [必需]
        @@ -580,7 +647,31 @@ NAT when loadbalancing enableDSR tells kube-proxy whether HNS policies should be created with DSR --> - enableDSR 通知 kube-proxy 是否使用 DSR 来创建 HNS 策略。 +

        enableDSR 字段通知 kube-proxy 是否使用 DSR 来创建 HNS 策略。

        +
        rootHnsEndpointName [必需]
        +string +
        + +

        rootHnsEndpointName + 字段是附加到用于根网络命名空间二层桥接的 hnsendpoint 的名称。

        +
        forwardHealthCheckVip [必需]
        +bool +
        + +

forwardHealthCheckVip 字段设置是否在 Windows 上为健康检查端口转发服务 VIP。

        acceptContentTypes [必需]
        @@ -692,9 +789,9 @@ ClientConnectionConfiguration 包含构造客户端所需要的细节信息。 acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the default value of 'application/json'. This field will control all connections to the server used by a particular client. --> - acceptContentTypes 定义客户端在连接到服务器时所发送的 Accept 头部字段。 +

        acceptContentTypes 字段定义客户端在连接到服务器时所发送的 Accept 头部字段。 此设置值会覆盖默认配置 'application/json'。 - 此字段会控制某特定客户端与指定服务器的所有链接。 + 此字段会控制某特定客户端与指定服务器的所有链接。

        contentType [必需]
        @@ -704,7 +801,7 @@ default value of 'application/json'. This field will control all connections to - contentType 是从此客户端向服务器发送数据时使用的内容类型(Content Type)。 +

        contentType 字段是从此客户端向服务器发送数据时使用的内容类型(Content Type)。

        qps [必需]
        @@ -714,7 +811,7 @@ default value of 'application/json'. This field will control all connections to - qps 控制此连接上每秒钟可以发送的查询请求个数。 +

        qps 字段控制此连接上每秒钟可以发送的查询请求个数。

        burst [必需]
        @@ -724,7 +821,55 @@ default value of 'application/json'. This field will control all connections to - 允许客户端超出其速率限制时可以临时累积的额外查询个数。 +

        burst 字段允许客户端超出其速率限制时可以临时累积的额外查询个数。

        +
        + +## `DebuggingConfiguration` {#DebuggingConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + +- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) + + +DebuggingConfiguration 包含调试相关功能的配置。 + + + + + + + + + + @@ -754,7 +899,7 @@ FormatOptions 包含不同日志记录格式的配置选项。 - [实验特性] json 字段包含 “JSON” 日志格式的配置选项。 +

        [实验特性] json 字段包含 "JSON" 日志格式的配置选项。

        @@ -772,7 +917,7 @@ FormatOptions 包含不同日志记录格式的配置选项。 -JSONOptions 包含“json”日志格式的配置选项。 +JSONOptions 包含 "json" 日志格式的配置选项。
        字段描述
        enableProfiling [Required]
        +bool +
        + +

        enableProfiling 字段通过位于 host:port/debug/pprof/ + 的 Web 接口启用性能分析。

        +
        enableContentionProfiling [Required]
        +bool +
        + +

        enableContentionProfiling 字段在 enableProfiling + 为 true 时允许执行锁竞争分析。

        @@ -788,8 +933,8 @@ JSONOptions 包含“json”日志格式的配置选项。 info messages go to stdout, with buffering. The default is to write both to stdout, without buffering. --> - [实验特性] splitStream 将信息类型的信息输出到标准输出,错误信息重定向到标准 - 错误输出,并提供缓存。默认行为是将二者都输出到标准输出且不提供缓存。 +

        [实验特性] splitStream 字段将信息类型的信息输出到标准输出,错误信息重定向到标准 + 错误输出,并提供缓存。默认行为是将二者都输出到标准输出且不提供缓存。

        + + +
        字段描述
        infoBufferSize [必需]
        @@ -800,8 +945,220 @@ both to stdout, without buffering. [Experimental] InfoBufferSize sets the size of the info stream when using split streams. The default is zero, which disables buffering. --> - [实验特性] infoBufferSize 设置在使用分离数据流时 info 数据流的缓冲区大小。 - 默认值为 0,意味着不提供缓存。 +

        [实验特性] infoBufferSize 字段设置在使用分离数据流时 info 数据流的缓冲区大小。 + 默认值为 0,意味着不提供缓存。

        +
        + +## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + +- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) + + +LeaderElectionConfiguration 为能够支持领导者选举的组件定义其领导者选举客户端的配置。 + + + + + + + + + + + + + + + + + + + + + + + + + + + +
        字段描述
        leaderElect [必需]
        +bool +
        + +

        + leaderElect 字段允许领导者选举客户端在进入主循环执行之前先获得领导者角色。 + 运行多副本组件时启用此功能有助于提高可用性。 +

        +
        leaseDuration [必需]
        +meta/v1.Duration +
        + +

        + leaseDuration 字段是非领导角色候选者在观察到需要领导席位更新时要等待的时间; + 只有经过所设置时长才可以尝试去获得一个仍处于领导状态但需要被刷新的席位。 + 这里的设置值本质上意味着某个领导者在被另一个候选者替换掉之前可以停止运行的最长时长。 + 只有当启用了领导者选举时此字段有意义。 +

        +
        renewDeadline [必需]
        +meta/v1.Duration +
        + +

        + renewDeadline 字段设置的是当前领导者在停止扮演领导角色之前需要刷新领导状态的时间间隔。 + 此值必须小于或等于租约期限的长度。只有到启用了领导者选举时此字段才有意义。 +

        +
        retryPeriod [必需]
        +meta/v1.Duration +
        + +

        + retryPeriod 字段是客户端在连续两次尝试获得或者刷新领导状态之间需要等待的时长。 + 只有当启用了领导者选举时此字段才有意义。 +

        +
        resourceLock [必需]
        +string +
        + +

        resourceLock 字段给出在领导者选举期间要作为锁来使用的资源对象类型。

        +
        resourceName [必需]
        +string +
        + +

        resourceName 字段给出在领导者选举期间要作为锁来使用的资源对象名称。

        +
        resourceNamespace [必需]
        +string +
        + +

        resourceNamespace 字段给出在领导者选举期间要作为锁来使用的资源对象所在名字空间。

        +
        + +## `LoggingConfiguration` {#LoggingConfiguration} + + +**出现在:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + + +LoggingConfiguration 包含日志选项。 +参考 [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) 以了解更多信息。 + + + + + + + + + + + + + + + + + + + @@ -816,9 +1173,12 @@ using split streams. The default is zero, which disables buffering. --> (`[]k8s.io/component-base/config/v1alpha1.VModuleItem` 的别名) +**出现在:** + +- [LoggingConfiguration](#LoggingConfiguration) + VModuleConfiguration 是一组文件名或文件名模式,及其对应的日志详尽程度阈值配置。 - diff --git a/content/zh/docs/reference/config-api/kube-scheduler-config.v1beta2.md b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta2.md similarity index 68% rename from content/zh/docs/reference/config-api/kube-scheduler-config.v1beta2.md rename to content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta2.md index 6a2ab78655af2..91a427ae40225 100644 --- a/content/zh/docs/reference/config-api/kube-scheduler-config.v1beta2.md +++ b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta2.md @@ -26,13 +26,440 @@ auto_generated: true - [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta2-PodTopologySpreadArgs) - [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta2-VolumeBindingArgs) + +## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) + + +

        ClientConnectionConfiguration 中包含用来构造客户端所需的细节。

        + +
        字段描述
        format [必需]
        +string +
        + +

        format 字段设置日志消息的结构。默认的格式取值为 text

        +
        flushFrequency [必需]
        +time.Duration +
        + +

        对日志进行清洗的最大间隔纳秒数(例如,1s = 1000000000)。 + 如果所选的日志后端在写入日志消息时不提供缓存,则此配置会被忽略。

        +
        verbosity [必需]
        +uint32 +
        + +

        verbosity 字段用来确定日志消息记录的详细程度阈值。 + 默认值为 0,意味着仅记录最重要的消息。 + 数值越大,额外的消息越多。错误消息总是被记录下来。

        +
        vmodule [必需]
        +VModuleConfiguration +
        + +

        vmodule 字段会在单个文件层面重载 verbosity 阈值的设置。 + 这一选项仅支持 "text" 日志格式。

        +
        options [Required]
        +FormatOptions +
        + +

        [实验特性] options 字段中包含特定于不同日志格式的配置参数。 + 只有针对所选格式的选项会被使用,但是合法性检查时会查看所有选项配置。

        + + + + + + + + + + + + + + + + + + + +
        字段描述
        kubeconfig [必需]
        +string +
        + +

        此字段为指向某 KubeConfig 文件的路径。

        +
        acceptContentTypes [必需]
        +string +
        + +

        acceptContentTypes 定义的是客户端与服务器建立连接时要发送的Accept 头部, + 这里的设置值会覆盖默认值 "application/json"。 + 此字段会影响某特定客户端与服务器的所有连接。

        +
        contentType [必需]
        +string +
        + +

        + contentType 包含的是此客户端向服务器发送数据时使用的内容类型(Content Type)。 +

        +
        qps [必需]
        +float32 +
        + +

        qps 控制此连接允许的每秒查询次数。

        +
        burst [必需]
        +int32 +
        + +

        burst 允许在客户端超出其速率限制时可以累积的额外查询个数。

        +
        + +## `DebuggingConfiguration` {#DebuggingConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) + + +

        DebuggingConfiguration 保存与调试功能相关的配置。

        + + + + + + + + + + + + +
        字段描述
        enableProfiling [必需]
        +bool +
        + +

        此字段允许通过 Web 接口 host:port/debug/pprof/ 执行性能分析。

        +
        enableContentionProfiling [必需]
        +bool +
        + +

        此字段在 enableProfiling 为 true 时允许执行锁竞争分析。

        +
        + +## `FormatOptions` {#FormatOptions} + + + + +

        FormatOptions 中包含不同日志格式的配置选项。

        + + + + + + + + + +
        字段描述
        json [必需]
        +JSONOptions +
        + +

        [实验特性] json 字段包含为 "json" 日志格式提供的配置选项。

        +
        + +## `JSONOptions` {#JSONOptions} + + +**出现在:** + +- [FormatOptions](#FormatOptions) + + +

        JSONOptions 包含为 "json" 日志格式所设置的配置选项。

        + + + + + + + + + + + + +
        字段描述
        splitStream [必需]
        +bool +
        + +

        [实验特性] 此字段将错误信息重定向到标准错误输出(stderr), + 将提示消息重定向到标准输出(stdout),并且支持缓存。 + 默认配置为将二者都输出到标准输出(stdout),且不提供缓存。

        +
        infoBufferSize [必需]
        +k8s.io/apimachinery/pkg/api/resource.QuantityValue +
        + +

        + [实验特性] infoBufferSize 用来在分离数据流场景是设置提示信息数据流的大小。 + 默认值为 0,意味着禁止缓存。

        +
        + +## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) + + +

        +LeaderElectionConfiguration 为能够支持领导者选举的组件定义其领导者选举客户端的配置。 +

        + + + + + + + + + + + + + + + + + + + + + + + + + + + +
        字段描述
        leaderElect [必需]
        +bool +
        + +

        leaderElect 允许领导者选举客户端在进入主循环执行之前先获得领导者角色。 + 运行多副本组件时启用此功能有助于提高可用性。 +

        +
        leaseDuration [必需]
        +meta/v1.Duration +
        + +

        + leaseDuration 是非领导角色候选者在观察到需要领导席位更新时要等待的时间; + 只有经过所设置时长才可以尝试去获得一个仍处于领导状态但需要被刷新的席位。 + 这里的设置值本质上意味着某个领导者在被另一个候选者替换掉之前可以停止运行的最长时长。 + 只有当启用了领导者选举时此字段有意义。 +

        +
        renewDeadline [必需]
        +meta/v1.Duration +
        + +

        + renewDeadline 设置的是当前领导者在停止扮演领导角色之前需要刷新领导状态的时间间隔。 + 此值必须小于或等于租约期限的长度。只有到启用了领导者选举时此字段才有意义。 +

        +
        retryPeriod [必需]
        +meta/v1.Duration +
        + +

        + retryPeriod 是客户端在连续两次尝试获得或者刷新领导状态之间需要等待的时长。 + 只有当启用了领导者选举时此字段才有意义。 +

        +
        resourceLock [必需]
        +string +
        + +

        此字段给出在领导者选举期间要作为锁来使用的资源对象类型。

        +
        resourceName [必需]
        +string +
        + +

        此字段给出在领导者选举期间要作为锁来使用的资源对象名称。

        +
        resourceNamespace [必需]
        +string +
        + +

        此字段给出在领导者选举期间要作为锁来使用的资源对象所在名字空间。
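下面给出一个在 kube-scheduler 配置文件中使用上述领导者选举字段的假设性片段(各时长取值仅作示意):

```yaml
# 假设性示例:kube-scheduler 的领导者选举配置
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  renewDeadline: 10s
  retryPeriod: 2s
  resourceLock: leases
  resourceName: kube-scheduler
  resourceNamespace: kube-system
```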

        +
        + + +## `LoggingConfiguration` {#LoggingConfiguration} + + +**出现在:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + + +

        +LoggingConfiguration 包含日志选项。 +参考 [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) 以了解更多信息。 +

        + + + + + + + + + + + + + + + + + + + + + +
        字段描述
        format [必需]
        +string +
        + +

        format 设置日志消息的结构。默认的格式取值为 text

        +
        flushFrequency [必需]
        +time.Duration +
        + +

        对日志进行清洗的最大间隔纳秒数(例如,1s = 1000000000)。 + 如果所选的日志后端在写入日志消息时不提供缓存,则此配置会被忽略。

        +
        verbosity [必需]
        +uint32 +
        + +

        verbosity 用来确定日志消息记录的详细程度阈值。默认值为 0, + 意味着仅记录最重要的消息。数值越大,额外的消息越多。错误消息总是被记录下来。

        +
        vmodule [必需]
        +VModuleConfiguration +
        + +

        vmodule 会在单个文件层面重载 verbosity 阈值的设置。 + 这一选项仅支持 "text" 日志格式。

        +
        options [Required]
        +FormatOptions +
        + +

        [实验特性] options 中包含特定于不同日志格式的配置参数。 + 只有针对所选格式的选项会被使用,但是合法性检查时会查看所有选项配置。

        +
        + +## `VModuleConfiguration` {#VModuleConfiguration} + + +(`[]k8s.io/component-base/config/v1alpha1.VModuleItem` 的别名) + +**出现在:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

        VModuleConfiguration 是一组文件名(通配符)及其对应的日志详尽程度阈值。

        + + + ## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta2-DefaultPreemptionArgs} -DefaultPreemptionArgs 包含用来配置 DefaultPreemption 插件的参数。 +

        DefaultPreemptionArgs 包含用来配置 DefaultPreemption 插件的参数。

        @@ -52,8 +479,8 @@ shortlist when dry running preemption as a percentage of number of nodes. Must be in the range [0, 100]. Defaults to 10% of the cluster size if unspecified. --> - 此字段为试运行抢占时 shortlist 中候选节点数的下限,数值为节点数的百分比。 -字段值必须介于 [0, 100] 之间。未指定时默认值为整个集群规模的 10%。 +

        此字段为试运行抢占时 shortlist 中候选节点数的下限,数值为节点数的百分比。 + 字段值必须介于 [0, 100] 之间。未指定时默认值为整个集群规模的 10%。

        @@ -84,7 +510,7 @@ that play a role in the number of candidates shortlisted. Must be at least -InterPodAffinityArgs 包含用来配置 InterPodAffinity 插件的参数。 +

        InterPodAffinityArgs 包含用来配置 InterPodAffinity 插件的参数。

        字段描述
        minCandidateNodesAbsolute [必需]
        @@ -69,11 +496,10 @@ We say "likely" because there are other factors such as PDB violations that play a role in the number of candidates shortlisted. Must be at least 0 nodes. Defaults to 100 nodes if unspecified. --> -

        此字段设置 shortlist 中候选节点的绝对下限。用于试运行抢占而列举的 -候选节点个数近似于通过下面的公式计算的:

        -

        候选节点数 = max(节点数 * minCandidateNodesPercentage, minCandidateNodesAbsolute)

        -

        之所以说是“近似于”是因为存在一些类似于 PDB 违例这种因素,会影响到进入 shortlist -中候选节点的个数。取值至少为 0 节点。若未设置默认为 100 节点。

        +

        此字段设置 shortlist 中候选节点的绝对下限。用于试运行抢占而列举的候选节点个数近似于通过下面的公式计算的:

        +

        候选节点数 = max(节点数 * minCandidateNodesPercentage, minCandidateNodesAbsolute)

        +

之所以说是"近似于"是因为存在一些类似于 PDB 违例这种因素,会影响到进入 shortlist 中候选节点的个数。 + 取值至少为 0 节点。若未设置默认为 100 节点。

        @@ -101,8 +527,8 @@ InterPodAffinityArgs 包含用来配置 InterPodAffinity 插件的参数。 HardPodAffinityWeight is the scoring weight for existing pods with a matching hard affinity to the incoming pod. --> - 此字段是一个计分权重值。针对新增的 Pod,要对现存的、带有与新 Pod 匹配的 -硬性亲和性设置的 Pod 计算亲和性得分。 +

        此字段是一个计分权重值。针对新增的 Pod, + 要对现存的、带有与新 Pod 匹配的硬性亲和性设置的 Pod 计算亲和性得分。

        @@ -113,7 +539,7 @@ matching hard affinity to the incoming pod. -KubeSchedulerConfiguration 用来配置调度器。 +

        KubeSchedulerConfiguration 用来配置调度器。

        字段描述
        @@ -129,8 +555,8 @@ KubeSchedulerConfiguration 用来配置调度器。 - 此字段设置为调度 Pod 而执行算法时的并发度。此值必须大于 0。 -默认值为 16。 +

        此字段设置为调度 Pod 而执行算法时的并发度。此值必须大于 0。 + 默认值为 16。

        + @@ -273,7 +706,7 @@ with the extender. These extenders are shared by all scheduler profiles. -NodeAffinityArgs 中包含配置 NodeAffinity 插件的参数。 +

        NodeAffinityArgs 中包含配置 NodeAffinity 插件的参数。

        字段描述
        leaderElection [必需]
        @@ -140,7 +566,7 @@ KubeSchedulerConfiguration 用来配置调度器。 - 此字段用来定义领导者选举客户端的配置。 +

        此字段用来定义领导者选举客户端的配置。

        clientConnection [必需]
        @@ -151,8 +577,7 @@ KubeSchedulerConfiguration 用来配置调度器。 ClientConnection specifies the kubeconfig file and client connection settings for the proxy server to use when communicating with the apiserver. --> - 此字段为与 API 服务器通信时使用的代理服务器设置 kubeconfig 文件和客户端 -连接配置。 +

        此字段为与 API 服务器通信时使用的代理服务器设置 kubeconfig 文件和客户端连接配置。

        healthzBindAddress [必需]
        @@ -164,10 +589,9 @@ settings for the proxy server to use when communicating with the apiserver. Only empty address or port 0 is allowed. Anything else will fail validation. HealthzBindAddress is the IP address and port for the health check server to serve on. --> - healthzBindAddress 是健康检查服务器提供服务所用的 IP 地址和端口。 - 注意:healthzBindAddressmetricsBindAddress -这两个字段都已被弃用。 -只可以设置空地址或者端口 0。其他设置值都无法通过合法性检查。 +

        healthzBindAddress 是健康检查服务器提供服务所用的 IP 地址和端口。 + 注意:healthzBindAddressmetricsBindAddress这两个字段都已被弃用。 + 只可以设置空地址或者端口 0。其他设置值都无法通过合法性检查。

        metricsBindAddress [必需]
        @@ -177,17 +601,22 @@ HealthzBindAddress is the IP address and port for the health check server to ser - metricsBindAddress 是度量值服务器提供服务所用的 IP 地址和端口。 +

        metricsBindAddress 是度量值服务器提供服务所用的 IP 地址和端口。

        DebuggingConfiguration [必需]
        DebuggingConfiguration
        DebuggingConfiguration 的成员被内嵌到此类型中) - 此字段设置与调试相关功能特性的配置。 +

        此字段设置与调试相关功能特性的配置。 + TODO:我们可能想把它做成一个子结构,像调试 component-base/config/v1alpha1.DebuggingConfiguration 一样。

        percentageOfNodesToScore [必需]
        @@ -204,12 +633,13 @@ then scheduler stops finding further feasible nodes once it finds 150 feasible o When the value is 0, default percentage (5%--50% based on the size of the cluster) of the nodes will be scored. --> - 此字段为所有节点的百分比,一旦调度器找到所设置比例的、能够运行 Pod 的节点, -则停止在集群中继续寻找更合适的节点。这一配置有助于提高调度器的性能。调度器 -总会尝试寻找至少 "minFeasibleNodesToFind" 个可行节点,无论此字段的取值如何。 -例如:当集群规模为 500 个节点,而此字段的取值为 30,则调度器在找到 150 个合适 -的节点后会停止继续寻找合适的节点。当此值为 0 时,调度器会使用默认节点数百分比(基于集群规模 -确定的值,在 5% 到 50% 之间)来执行打分操作。 +

        此字段为所有节点的百分比,一旦调度器找到所设置比例的、能够运行 Pod 的节点, + 则停止在集群中继续寻找更合适的节点。这一配置有助于提高调度器的性能。 + 调度器总会尝试寻找至少 "minFeasibleNodesToFind" 个可行节点,无论此字段的取值如何。 + 例如:当集群规模为 500 个节点,而此字段的取值为 30, + 则调度器在找到 150 个合适的节点后会停止继续寻找合适的节点。 + 当此值为 0 时,调度器会使用默认节点数百分比(基于集群规模确定的值,在 5% 到 50% 之间)来执行打分操作。 +

        podInitialBackoffSeconds [必需]
        @@ -221,8 +651,8 @@ nodes will be scored. If specified, it must be greater than 0. If this value is null, the default value (1s) will be used. --> - 此字段设置不可调度 Pod 的初始回退秒数。如果设置了此字段,其取值必须大于零。 -若此值为 null,则使用默认值(1s)。 +

        此字段设置不可调度 Pod 的初始回退秒数。如果设置了此字段,其取值必须大于零。 + 若此值为 null,则使用默认值(1s)。

        podMaxBackoffSeconds [必需]
        @@ -234,8 +664,9 @@ will be used. If specified, it must be greater than podInitialBackoffSeconds. If this value is null, the default value (10s) will be used. --> - 此字段设置不可调度的 Pod 的最大回退秒数。如果设置了此字段,则其值必须大于 -podInitialBackoffSeconds 字段值。如果此值设置为 null,则使用默认值(10s)。 +

        此字段设置不可调度的 Pod 的最大回退秒数。如果设置了此字段, + 则其值必须大于 podInitialBackoffSeconds 字段值。如果此值设置为 null,则使用默认值(10s)。 +

        profiles [必需]
        @@ -248,9 +679,10 @@ choose to be scheduled under a particular profile by setting its associated scheduler name. Pods that don't specify any scheduler name are scheduled with the "default-scheduler" profile, if present here. --> - 此字段为 kube-scheduler 所支持的方案(profiles)。Pod 可以通过设置其对应 -的调度器名称来选择使用特定的方案。未指定调度器名称的 Pod 会使用 -“default-scheduler”方案来调度,如果存在的话。 +

        此字段为 kube-scheduler 所支持的方案(profiles)。 + Pod 可以通过设置其对应的调度器名称来选择使用特定的方案。 + 未指定调度器名称的 Pod 会使用 "default-scheduler" 方案来调度,如果存在的话。 +

        extenders [必需]
        @@ -261,8 +693,9 @@ with the "default-scheduler" profile, if present here. Extenders are the list of scheduler extenders, each holding the values of how to communicate with the extender. These extenders are shared by all scheduler profiles. --> - 此字段为调度器扩展模块(Extender)的列表,每个元素包含如何与某扩展模块 -通信的配置信息。所有调度器模仿会共享此扩展模块列表。 +

此字段为调度器扩展模块(Extender)的列表, + 每个元素包含如何与某扩展模块通信的配置信息。 + 所有调度器方案会共享此扩展模块列表。

        @@ -294,11 +727,12 @@ match). When AddedAffinity is used, some Pods with affinity requirements that match a specific Node (such as Daemonset Pods) might remain unschedulable. --> - addedAffinity 会作为附加的亲和性属性添加到所有 Pod 的 -规约中指定的 NodeAffinity 中。换言之,节点需要同时满足 addedAffinity -和 .spec.nodeAffinity。默认情况下,addedAffinity 为空(与所有节点匹配)。 -使用了 addedAffinity 时,某些带有已经能够与某特定节点匹配的亲和性需求 -的 Pod (例如 DaemonSet Pod)可能会继续呈现不可调度状态。 +

        + addedAffinity 会作为附加的亲和性属性添加到所有 Pod 的规约中指定的 NodeAffinity 中。 + 换言之,节点需要同时满足 addedAffinity 和 .spec.nodeAffinity。默认情况下,addedAffinity 为空(与所有节点匹配)。 + 使用了 addedAffinity 时,某些带有已经能够与某特定节点匹配的亲和性需求的 + Pod (例如 DaemonSet Pod)可能会继续呈现不可调度状态。 +

        @@ -309,7 +743,7 @@ a specific Node (such as Daemonset Pods) might remain unschedulable. -NodeResourcesBalancedAllocationArgs 包含用来配置 NodeResourcesBalancedAllocation 插件的参数。 +

        NodeResourcesBalancedAllocationArgs 包含用来配置 NodeResourcesBalancedAllocation 插件的参数。

        字段描述
        @@ -325,7 +759,7 @@ NodeResourcesBalancedAllocationArgs 包含用来配置 NodeResourcesBalancedAllo - 要管理的资源;如果未设置,则默认值为 "cpu" 和 "memory"。 +

        要管理的资源;如果未设置,则默认值为 "cpu" 和 "memory"。

        @@ -336,7 +770,7 @@ NodeResourcesBalancedAllocationArgs 包含用来配置 NodeResourcesBalancedAllo -NodeResourcesFitArgs 包含用来配置 NodeResourcesFit 插件的参数。 +

        NodeResourcesFitArgs 包含用来配置 NodeResourcesFit 插件的参数。

        字段描述
        @@ -353,7 +787,7 @@ NodeResourcesFitArgs 包含用来配置 NodeResourcesFit 插件的参数。 IgnoredResources is the list of resources that NodeResources fit filter should ignore. This doesn't apply to scoring. --> - 此字段为 NodeResources 匹配过滤器要忽略的资源列表。此列表不影响节点打分。 +

        此字段为 NodeResources 匹配过滤器要忽略的资源列表。此列表不影响节点打分。

        @@ -392,7 +826,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "memory" weight. -PodTopologySpreadArgs 包含用来配置 PodTopologySpread 插件的参数。 +

        PodTopologySpreadArgs 包含用来配置 PodTopologySpread 插件的参数。
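下面是一个示意性的配置片段(并非自动生成的参考内容,约束取值均为假设的示例),演示如何通过 pluginConfig 为 PodTopologySpread 插件设置 defaultingType 与 defaultConstraints:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          # defaultConstraints 不为空时,defaultingType 必须为 "List"
          defaultingType: List
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
              # 注意:labelSelector 必须留空,由 Pod 所属的工作负载推导
```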

        字段描述
        ignoredResourceGroups [必需]
        @@ -366,10 +800,10 @@ e.g. if group is ["example.com"], it will ignore all resource names that begin with "example.com", such as "example.com/aaa" and "example.com/bbb". A resource group name can't contain '/'. This doesn't apply to scoring. --> - 此字段定义 NodeResources 匹配过滤器要忽略的资源组列表。 -例如,如果配置值为 ["example.com"],则以 "example.com" 开头的资源名(如 -"example.com/aaa" 和 "example.com/bbb")都会被忽略。 -资源组名称中不可以包含 '/'。此设置不影响节点的打分。 +

此字段定义 NodeResources 匹配过滤器要忽略的资源组列表。 + 例如,如果配置值为 ["example.com"],则以 "example.com" + 开头的资源名(如 "example.com/aaa" 和 "example.com/bbb")都会被忽略。 + 资源组名称中不可以包含 '/'。此设置不影响节点的打分。

        scoringStrategy [必需]
        @@ -380,8 +814,8 @@ A resource group name can't contain '/'. This doesn't apply to scoring. ScoringStrategy selects the node resource scoring strategy. The default strategy is LeastAllocated with an equal "cpu" and "memory" weight. --> - 此字段用来选择节点资源打分策略。默认的策略为 LeastAllocated,且 "cpu" 和 -"memory" 的权重相同。 +

此字段用来选择节点资源打分策略。默认的策略为 LeastAllocated, + 且 "cpu" 和 "memory" 的权重相同。

        @@ -413,11 +847,11 @@ deduced from the Pod's membership to Services, ReplicationControllers, ReplicaSets or StatefulSets. When not empty, .defaultingType must be "List". --> - 此字段针对未定义 .spec.topologySpreadConstraints 的 Pod, -为其提供拓扑分布约束。.defaultConstraints[∗].labelSelectors -必须为空,因为这一信息要从 Pod 所属的 Service、ReplicationController、 -ReplicaSet 或 StatefulSet 来推导。 -此字段不为空时,.defaultingType 必须为 "List"。 +

此字段针对未定义 .spec.topologySpreadConstraints 的 Pod, + 为其提供拓扑分布约束。.defaultConstraints[∗].labelSelectors 必须为空, + 因为这一信息要从 Pod 所属的 Service、ReplicationController、 + ReplicaSet 或 StatefulSet 来推导。 + 此字段不为空时,.defaultingType 必须为 "List"。

        @@ -451,7 +885,7 @@ ReplicaSet 或 StatefulSet 来推导。 -VolumeBindingArgs 包含用来配置 VolumeBinding 插件的参数。 +

        VolumeBindingArgs 包含用来配置 VolumeBinding 插件的参数。

        字段描述
        defaultingType
        @@ -434,13 +868,13 @@ ReplicaSet 或 StatefulSet 来推导。 and to "System" if enabled.-->

        defaultingType 决定如何推导 .defaultConstraints。 -可选值为 "System" 或 "List"。 + 可选值为 "System" 或 "List"。

          -
        • "System":使用 Kubernetes 定义的约束,将 Pod 分布到不同节点和可用区;
        • -
        • "List":使用 .defaultConstraints 中定义的约束。
        • +
        • "System":使用 Kubernetes 定义的约束,将 Pod 分布到不同节点和可用区;
        • +
        • "List":使用 .defaultConstraints 中定义的约束。
        -

        当特性门控 DefaultPodTopologySpread 被禁用时,默认值为 "list";反之,默认值为 "System"。

        +

        当特性门控 DefaultPodTopologySpread 被禁用时,默认值为 "list";反之,默认值为 "System"。

        @@ -469,8 +903,8 @@ VolumeBindingArgs 包含用来配置 VolumeBinding 插件的参数。 Value must be non-negative integer. The value zero indicates no waiting. If this value is nil, the default value (600) will be used. --> - 此字段设置卷绑定操作的超时秒数。字段值必须是非负数。 -取值为 0 意味着不等待。如果此值为 null,则使用默认值(600)。 +

        此字段设置卷绑定操作的超时秒数。字段值必须是非负数。 + 取值为 0 意味着不等待。如果此值为 null,则使用默认值(600)。

        @@ -520,751 +953,407 @@ All points must be sorted in increasing order by utilization. Extender holds the parameters used to communicate with the extender. If a verb is unspecified/empty, it is assumed that the extender chose not to provide that extension. --> -Extender 包含与扩展模块(Extender)通信所用的参数。 -如果未指定 verb 或者 verb 为空,则假定对应的扩展模块选择不提供该扩展功能。 - -
        字段描述
        shape
        @@ -490,17 +924,16 @@ The default shape points are: 2) 10 for 100 utilization All points must be sorted in increasing order by utilization. --> -

        shape 用来设置打分函数曲线所使用的计分点,这些计分点 -用来基于静态制备的 PV 卷的利用率为节点打分。 -卷的利用率是计算得来的,将 Pod 所请求的总的存储空间大小除以每个节点 -上可用的总的卷容量。每个计分点包含利用率(范围从 0 到 100)和其对应 -的得分(范围从 0 到 10)。你可以通过为不同的使用率值设置不同的得分来 -反转优先级:

        +

        shape 用来设置打分函数曲线所使用的计分点, + 这些计分点用来基于静态制备的 PV 卷的利用率为节点打分。 + 卷的利用率是计算得来的,将 Pod 所请求的总的存储空间大小除以每个节点上可用的总的卷容量。 + 每个计分点包含利用率(范围从 0 到 100)和其对应的得分(范围从 0 到 10)。 + 你可以通过为不同的使用率值设置不同的得分来反转优先级:

        默认的曲线计分点为:

        -
          +
          1. 利用率为 0 时得分为 0;
          2. 利用率为 100 时得分为 10。
          3. -
        +

        所有计分点必须按利用率值的升序来排序。

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        字段描述
        urlPrefix [必需]
        -string -
        - - 用来访问扩展模块的 URL 前缀。 -
        filterVerb [必需]
        -string -
        - - filter 调用所使用的动词,如果不支持过滤操作则为空。 -此动词会在向扩展模块发送 filter 调用时追加到 urlPrefix 后面。 -
        preemptVerb [必需]
        -string -
        - - preempt 调用所使用的动词,如果不支持抢占操作则为空。 -此动词会在向扩展模块发送 preempt 调用时追加到 urlPrefix 后面。 -
        prioritizeVerb [必需]
        -string -
        - - prioritize 调用所使用的动词,如果不支持 prioritize 操作则为空。 -此动词会在向扩展模块发送 prioritize 调用时追加到 urlPrefix 后面。 -
        weight [必需]
        -int64 -
        - - 针对 prioritize 调用所生成的节点分数要使用的数值系数。 -weight 值必须是正整数。 -
        bindVerb [必需]
        -string -
        - - bind 调用所使用的动词,如果不支持 bind 操作则为空。 -此动词会在向扩展模块发送 bind 调用时追加到 urlPrefix 后面。 -如果扩展模块实现了此方法,扩展模块要负责将 Pod 绑定到 API 服务器。 -只有一个扩展模块可以实现此函数。 -
        enableHTTPS [必需]
        -bool -
        - - 此字段设置是否需要使用 HTTPS 来与扩展模块通信。 -
        tlsConfig [必需]
        -ExtenderTLSConfig -
        - - 此字段设置传输层安全性(TLS)配置。 -
        httpTimeout [必需]
        -meta/v1.Duration -
        - - 此字段给出扩展模块功能调用的超时值。filter 操作超时会导致 Pod 无法被调度。 -prioritize 操作超时会被忽略,Kubernetes 或者其他扩展模块所给出的优先级值 -会被用来选择节点。 -
        nodeCacheCapable [必需]
        -bool -
        - - 此字段指示扩展模块可以缓存节点信息,从而调度器应该发送关于可选节点的最少信息, -假定扩展模块已经缓存了集群中所有节点的全部详细信息。 -
        managedResources
        -[]ExtenderManagedResource -
        - -

        managedResources 是一个由此扩展模块所管理的扩展资源的列表。

        -
          -
        • 如果某 Pod 请求了此列表中的至少一个扩展资源,则 Pod 会在 filter、 -prioritize 和 bind (如果扩展模块可以执行绑定操作)阶段被发送到该扩展模块。 -若此字段为空或未设置,则所有 Pod 都会发送到此扩展模块。
        • -
        • 如果某资源上设置了 ignoredByScheduler 为 true,则 kube-scheduler -会在断言阶段略过对该资源的检查。
        • -
        -
        ignorable [必需]
        -bool -
        - - 此字段用来设置扩展模块是否是可忽略的。换言之,当扩展模块返回错误或者 -完全不可达时,调度操作不应失败。 -
        - -## `ExtenderManagedResource` {#kubescheduler-config-k8s-io-v1beta2-ExtenderManagedResource} - - -**出现在:** - -- [Extender](#kubescheduler-config-k8s-io-v1beta2-Extender) - - -ExtenderManagedResource 描述某扩展模块所管理的扩展资源的参数。 - - - - - - - - - - - - -
        字段描述
        name [必需]
        -string -
        - - 扩展资源的名称。 -
        ignoredByScheduler [必需]
        -bool -
        - - 此字段标明 kube-scheduler 是否应在应用断言时忽略此资源。 -
        - -## `ExtenderTLSConfig` {#kubescheduler-config-k8s-io-v1beta2-ExtenderTLSConfig} - - -**出现在:** - -- [Extender](#kubescheduler-config-k8s-io-v1beta2-Extender) - - -ExtenderTLSConfig 包含启用与扩展模块间 TLS 传输所需的配置参数。 +

        Extender 包含与扩展模块(Extender)通信所用的参数。 +如果未指定 verb 或者 verb 为空,则假定对应的扩展模块选择不提供该扩展功能。

        - - - - - - - + + + - - - - - - - -
        字段描述
        insecure [必需]
        -bool -
        - - 访问服务器时不需要检查 TLS 证书。此配置仅针对测试用途。 -
        serverName [必需]
        +
        字段描述
        urlPrefix [必需]
        string
        - serverName 会被发送到服务器端,作为 SNI 标志;客户端会使用 -此设置来检查服务器证书。如果 serverName 为空,则会使用联系 -服务器时所用的主机名。 +

        用来访问扩展模块的 URL 前缀。

        certFile [必需]
        +
        filterVerb [必需]
        string
        - 服务器端所要求的 TLS 客户端证书认证。 +

        filter 调用所使用的动词,如果不支持过滤操作则为空。 + 此动词会在向扩展模块发送 filter 调用时追加到 urlPrefix 后面。

        keyFile [必需]
        +
        preemptVerb [必需]
        string
        - 服务器端所要求的 TLS 客户端秘钥认证。 +

        preempt 调用所使用的动词,如果不支持抢占操作则为空。 + 此动词会在向扩展模块发送 preempt 调用时追加到 urlPrefix 后面。

        caFile [必需]
        +
        prioritizeVerb [必需]
        string
        - 服务器端可信任的根证书。 +

        prioritize 调用所使用的动词,如果不支持 prioritize 操作则为空。 + 此动词会在向扩展模块发送 prioritize 调用时追加到 urlPrefix 后面。

        certData [必需]
        -[]byte +
        weight [必需]
        +int64
        - certData 包含 PEM 编码的字节流(通常从某客户端证书文件读入)。 -此字段优先级高于 certFile 字段。 +

        针对 prioritize 调用所生成的节点分数要使用的数值系数。 + weight 值必须是正整数。

        keyData [必需]
        -[]byte +
        bindVerb [必需]
        +string
        - keyData 包含 PEM 编码的字节流(通常从某客户端证书秘钥文件读入)。 -此字段优先级高于 keyFile 字段。 +

        bind 调用所使用的动词,如果不支持 bind 操作则为空。 + 此动词会在向扩展模块发送 bind 调用时追加到 urlPrefix 后面。 + 如果扩展模块实现了此方法,扩展模块要负责将 Pod 绑定到 API 服务器。 + 只有一个扩展模块可以实现此函数。

        caData [必需]
        -[]byte +
        enableHTTPS [必需]
        +bool
        - caData 包含 PEM 编码的字节流(通常从某根证书包文件读入)。 -此字段优先级高于 caFile 字段。 +

        此字段设置是否需要使用 HTTPS 来与扩展模块通信。

        - -## `KubeSchedulerProfile` {#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerProfile} - - -**出现在:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) - - -KubeSchedulerProfile 是一个调度方案。 - - - - - - - - +

        此字段指示扩展模块可以缓存节点信息,从而调度器应该发送关于可选节点的最少信息, + 假定扩展模块已经缓存了集群中所有节点的全部详细信息。

        - -
        字段描述
        schedulerName [必需]
        -string +
        tlsConfig [必需]
        +ExtenderTLSConfig
        - schedulerName 是与此调度方案相关联的调度器的名称。 -如果 schedulerName 与 Pod 的 spec.schedulerName -匹配,则该 Pod 会使用此方案来调度。 +

        此字段设置传输层安全性(TLS)配置。

        plugins [必需]
        -Plugins +
        httpTimeout [必需]
        +meta/v1.Duration
        -

        plugins 设置一组应该被启用或禁止的插件。 -被启用的插件是指除了默认插件之外需要被启用的插件。被禁止的插件 -是指需要被禁用的默认插件。

        -

        如果针对某个扩展点没有设置被启用或被禁止的插件,则使用该扩展点 -的默认插件(如果有的话)。如果设置了 QueueSort 插件,则同一个 QueueSort -插件和 pluginConfig 要被设置到所有调度方案之上。

        +

        此字段给出扩展模块功能调用的超时值。filter 操作超时会导致 Pod 无法被调度。 + prioritize 操作超时会被忽略,Kubernetes 或者其他扩展模块所给出的优先级值会被用来选择节点。 +

        pluginConfig [必需]
        -[]PluginConfig +
        nodeCacheCapable [必需]
        +bool
        - pluginConfig 是为每个插件提供的一组可选的定制插件参数。 -如果忽略了插件的配置参数,则意味着使用该插件的默认配置。 -
        - -## `Plugin` {#kubescheduler-config-k8s-io-v1beta2-Plugin} - - -**出现在:** - -- [PluginSet](#kubescheduler-config-k8s-io-v1beta2-PluginSet) - - -Plugin 指定插件的名称及其权重(如果适用的话)。权重仅用于评分(Score)插件。 - - - - - - -
        字段描述
        name [必需]
        -string +
        managedResources
        +[]ExtenderManagedResource
        - 插件的名称。 +

        managedResources 是一个由此扩展模块所管理的扩展资源的列表。

        +
          +
        • 如果某 Pod 请求了此列表中的至少一个扩展资源,则 Pod 会在 filter、 + prioritize 和 bind (如果扩展模块可以执行绑定操作)阶段被发送到该扩展模块。 + 若此字段为空或未设置,则所有 Pod 都会发送到此扩展模块。
        • +
        • 如果某资源上设置了 ignoredByScheduler 为 true,则 kube-scheduler + 会在断言阶段略过对该资源的检查。
        • +
        weight [必需]
        -int32 +
        ignorable [必需]
        +bool
        - 插件的权重;仅适用于评分(Score)插件。 +

        此字段用来设置扩展模块是否是可忽略的。 + 换言之,当扩展模块返回错误或者完全不可达时,调度操作不应失败。

        -## `PluginConfig` {#kubescheduler-config-k8s-io-v1beta2-PluginConfig} +## `ExtenderManagedResource` {#kubescheduler-config-k8s-io-v1beta2-ExtenderManagedResource} **出现在:** -- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerProfile) +- [Extender](#kubescheduler-config-k8s-io-v1beta2-Extender) -PluginConfig 给出初始化阶段要传递给插件的参数。 -在多个扩展点被调用的插件仅会被初始化一次。 -参数可以是任意结构。插件负责处理这里所传的参数。 +

        ExtenderManagedResource 描述某扩展模块所管理的扩展资源的参数。

        - + - - - - - -
        字段描述
        name [必需]
        string
        - name 是所配置的插件的名称。 -
        args [必需]
        -k8s.io/apimachinery/pkg/runtime.RawExtension -
        - - args 定义在初始化阶段要传递给插件的参数。参数可以为任意结构。 -
        - -## `PluginSet` {#kubescheduler-config-k8s-io-v1beta2-PluginSet} - - -**出现在:** - -- [Plugins](#kubescheduler-config-k8s-io-v1beta2-Plugins) - - -PluginSet 为某扩展点设置要启用或禁用的插件。 -如果数组为空,或者取值为 null,则使用该扩展点的默认插件集合。 - - - - - - - -
        字段描述
        enabled [必需]
        -[]Plugin -
        - - enabled 设置在默认插件之外要启用的插件。如果在调度器的配置 -文件中也配置了默认插件,则对应插件的权重会被覆盖。 -此处所设置的插件会在默认插件之后被调用,调用顺序与数组中元素顺序相同。 +

        扩展资源的名称。

        disabled [必需]
        -[]Plugin +
        ignoredByScheduler [必需]
        +bool
        - disabled 设置要被禁用的默认插件。 -如果需要禁用所有的默认插件,应该提供仅包含一个元素 "∗" 的数组。 +

        此字段标明 kube-scheduler 是否应在应用断言时忽略此资源。

        -## `Plugins` {#kubescheduler-config-k8s-io-v1beta2-Plugins} +## `ExtenderTLSConfig` {#kubescheduler-config-k8s-io-v1beta2-ExtenderTLSConfig} **出现在:** -- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerProfile) +- [Extender](#kubescheduler-config-k8s-io-v1beta2-Extender) -Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定扩展点所启用 -的所有插件都在这一列表中。 -如果配置中不包含某个扩展点,则使用该扩展点的默认插件集合。 -被启用的插件的调用顺序与这里指定的顺序相同,都在默认插件之后调用。 -如果它们需要在默认插件之前调用,则需要先行禁止默认插件,之后在这里 -按期望的顺序重新启用。 +

        ExtenderTLSConfig 包含启用与扩展模块间 TLS 传输所需的配置参数。

        - - - - - - - - - - - - - - - - - - - -
        字段描述
        queueSort [必需]
        -PluginSet -
        - - queueSort 是一个在对调度队列中 Pod 排序时要调用的插件列表。 -
        preFilter [必需]
        -PluginSet -
        - - preFilter 是一个在调度框架中“PreFilter(预过滤)”扩展点上要 -调用的插件列表。 -
        filter [必需]
        -PluginSet -
        - - filter 是一个在需要过滤掉无法运行 Pod 的节点时被调用的插件列表。 -
        postFilter [必需]
        -PluginSet -
        - - postFilter 是一个在过滤阶段结束后会被调用的插件列表; -这里的插件只有在找不到合适的节点来运行 Pod 时才会被调用。 -
        preScore [必需]
        -PluginSet +
        insecure [必需]
        +bool
        - preScore 是一个在打分之前要调用的插件列表。 +

        访问服务器时不需要检查 TLS 证书。此配置仅针对测试用途。

        score [必需]
        -PluginSet +
        serverName [必需]
        +string
        - score 是一个在对已经通过过滤阶段的节点进行排序时调用的插件的列表。 +

        serverName 会被发送到服务器端,作为 SNI 标志; + 客户端会使用此设置来检查服务器证书。 + 如果 serverName 为空,则会使用联系服务器时所用的主机名。 +

        reserve [必需]
        -PluginSet +
        certFile [必需]
        +string
        - reserve 是一组在运行 Pod 的节点已被选定后,需要预留或者释放资源时调用的插件的列表。 +

        服务器端所要求的 TLS 客户端证书认证。

        permit [必需]
        -PluginSet +
        keyFile [必需]
        +string
        - permit 是一个用来控制 Pod 绑定关系的插件列表。这些插件可以 -禁止或者延迟 Pod 的绑定。 +

        服务器端所要求的 TLS 客户端秘钥认证。

        preBind [必需]
        -PluginSet +
        caFile [必需]
        +string
        - preBind 是一个在 Pod 被绑定到某节点之前要被调用的插件的列表。 +

        服务器端可信任的根证书。

        bind [必需]
        -PluginSet +
        certData [必需]
        +[]byte
        - bind 是一个在调度框架中“Bind(绑定)”扩展点上要调用的 -插件的列表。调度器按顺序调用这些插件。只要其中某个插件返回成功,则调度器 -就略过余下的插件。 +

        certData 包含 PEM 编码的字节流(通常从某客户端证书文件读入)。 + 此字段优先级高于 certFile 字段。

        postBind [必需]
        -PluginSet +
        keyData [必需]
        +[]byte
        - postBind 是一个在 Pod 已经被成功绑定之后要调用的插件的列表。 +

        keyData 包含 PEM 编码的字节流(通常从某客户端证书秘钥文件读入)。 + 此字段优先级高于 keyFile 字段。

        multiPoint [必需]
        -PluginSet +
        caData [必需]
        +[]byte
        -

        multiPoint 是一个简化的配置段落,用来为所有合法的扩展点启用插件。 +

        caData 包含 PEM 编码的字节流(通常从某根证书包文件读入)。 + 此字段优先级高于 caFile 字段。

        -## `PodTopologySpreadConstraintsDefaulting` {#kubescheduler-config-k8s-io-v1beta2-PodTopologySpreadConstraintsDefaulting} - - -(`string` 类型的别名) - -**出现在:** - -- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta2-PodTopologySpreadArgs) - - -PodTopologySpreadConstraintsDefaulting 定义如何为 PodTopologySpread 插件 -设置默认的约束。 - -## `RequestedToCapacityRatioParam` {#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioParam} +## `KubeSchedulerProfile` {#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerProfile} **出现在:** -- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) -RequestedToCapacityRatioParam 结构定义 RequestedToCapacityRatio 的参数。 +

        KubeSchedulerProfile 是一个调度方案。

        - + + + + + +
        字段描述
        shape [必需]
        -[]UtilizationShapePoint +
        schedulerName [必需]
        +string
        +

        schedulerName 是与此调度方案相关联的调度器的名称。 + 如果 schedulerName 与 Pod 的 spec.schedulerName + 匹配,则该 Pod 会使用此方案来调度。

        +
        plugins [必需]
        +Plugins +
        + +

        plugins 设置一组应该被启用或禁止的插件。 + 被启用的插件是指除了默认插件之外需要被启用的插件。 + 被禁止的插件是指需要被禁用的默认插件。

        +

        如果针对某个扩展点没有设置被启用或被禁止的插件, + 则使用该扩展点的默认插件(如果有的话)。如果设置了 QueueSort 插件,则同一个 QueueSort + 插件和 pluginConfig 要被设置到所有调度方案之上。

        +
        pluginConfig [必需]
        +[]PluginConfig +
        + - shape 是一个定义评分函数曲线的计分点的列表。 +

        pluginConfig 是为每个插件提供的一组可选的定制插件参数。 + 如果忽略了插件的配置参数,则意味着使用该插件的默认配置。

        +
        -## `ResourceSpec` {#kubescheduler-config-k8s-io-v1beta2-ResourceSpec} +## `Plugin` {#kubescheduler-config-k8s-io-v1beta2-Plugin} **出现在:** -- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesBalancedAllocationArgs) -- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy) +- [PluginSet](#kubescheduler-config-k8s-io-v1beta2-PluginSet) -ResourceSpec 用来代表某个资源。 +

        Plugin 指定插件的名称及其权重(如果适用的话)。权重仅用于评分(Score)插件。

        @@ -1275,460 +1364,478 @@ ResourceSpec 用来代表某个资源。
        字段描述
        - 资源名称。 +

        插件的名称。

        weight [必需]
        -int64 +int32
        - 资源权重。 +

        插件的权重;仅适用于评分(Score)插件。

        -## `ScoringStrategy` {#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy} +## `PluginConfig` {#kubescheduler-config-k8s-io-v1beta2-PluginConfig} **出现在:** -- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesFitArgs) +- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerProfile) -ScoringStrategy 为节点资源插件定义 ScoringStrategyType。 +

        PluginConfig 给出初始化阶段要传递给插件的参数。 +在多个扩展点被调用的插件仅会被初始化一次。 +参数可以是任意结构。插件负责处理这里所传的参数。
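作为补充说明,下面给出一个示意性片段(非自动生成内容,取值仅为示例),演示如何通过 pluginConfig 向某个插件(这里以 DefaultPreemption 为例)传递参数:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: DefaultPreemption          # name 是所配置插件的名称
        args:                            # args 的结构由插件自行解释
          minCandidateNodesPercentage: 10
          minCandidateNodesAbsolute: 100
```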

        - - - - -
        字段描述
        type [必需]
        -ScoringStrategyType -
        - - type 用来选择要运行的策略。 -
        resources [必需]
        -[]ResourceSpec +
        name [必需]
        +string
        -

        resources 设置在评分时要考虑的资源。

        -

        默认的资源集合包含 "cpu" 和 "memory",且二者权重相同。

        -

        权重的取值范围为 1 到 100。

        -

        当权重未设置或者显式设置为 0 时,意味着使用默认值 1。

        +

        name 是所配置的插件的名称。

        requestedToCapacityRatio [必需]
        -RequestedToCapacityRatioParam +
        args [必需]
        +k8s.io/apimachinery/pkg/runtime.RawExtension
        - 特定于 RequestedToCapacityRatio 策略的参数。 +

        args 定义在初始化阶段要传递给插件的参数。参数可以为任意结构。

        -## `ScoringStrategyType` {#kubescheduler-config-k8s-io-v1beta2-ScoringStrategyType} - - -(`string` 数据类型的别名) - -**出现在:** - -- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy) - - -ScoringStrategyType 是 NodeResourcesFit 插件所使用的的评分策略类型。 - -## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1beta2-UtilizationShapePoint} +## `PluginSet` {#kubescheduler-config-k8s-io-v1beta2-PluginSet} **出现在:** -- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta2-VolumeBindingArgs) -- [RequestedToCapacityRatioParam](#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioParam) +- [Plugins](#kubescheduler-config-k8s-io-v1beta2-Plugins) -UtilizationShapePoint 代表的是优先级函数曲线中的一个评分点。 +

        PluginSet 为某扩展点设置要启用或禁用的插件。 +如果数组为空,或者取值为 null,则使用该扩展点的默认插件集合。

        - -
        字段描述
        utilization [必需]
        -int32 +
        enabled [必需]
        +[]Plugin
        - 利用率(x 轴)。合法值为 0 到 100。完全被利用的节点映射到 100。 +

        enabled 设置在默认插件之外要启用的插件。 + 如果在调度器的配置文件中也配置了默认插件,则对应插件的权重会被覆盖。 + 此处所设置的插件会在默认插件之后被调用,调用顺序与数组中元素顺序相同。

        score [必需]
        -int32 +
        disabled [必需]
        +[]Plugin
        - 分配给指定利用率的分值(y 轴)。合法值为 0 到 10。 +

        disabled 设置要被禁用的默认插件。 + 如果需要禁用所有的默认插件,应该提供仅包含一个元素 "∗" 的数组。

        -## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} +## `Plugins` {#kubescheduler-config-k8s-io-v1beta2-Plugins} **出现在:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerProfile) -ClientConnectionConfiguration 中包含用来构造一个客户端所需的细节。 +

        Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定扩展点所启用的所有插件都在这一列表中。 +如果配置中不包含某个扩展点,则使用该扩展点的默认插件集合。 +被启用的插件的调用顺序与这里指定的顺序相同,都在默认插件之后调用。 +如果它们需要在默认插件之前调用,则需要先行禁止默认插件,之后在这里按期望的顺序重新启用。
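下面是一个示意性片段(插件名称与权重仅为示例),演示如何在某个扩展点上启用或禁用插件:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        enabled:
          - name: NodeResourcesBalancedAllocation
            weight: 2                    # 权重仅对 Score 插件有效
        disabled:
          - name: PodTopologySpread      # 禁用某个默认插件
```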

        - + + + + + + + + + + + + - - - - - -
        字段描述
        kubeconfig [必需]
        -string +
        queueSort [必需]
        +PluginSet +
        + +

        queueSort 是一个在对调度队列中 Pod 排序时要调用的插件列表。

        +
        preFilter [必需]
        +PluginSet +
        + +

        preFilter 是一个在调度框架中“PreFilter(预过滤)”扩展点上要调用的插件列表。

        +
        filter [必需]
        +PluginSet +
        + +

        filter 是一个在需要过滤掉无法运行 Pod 的节点时被调用的插件列表。

        +
        postFilter [必需]
        +PluginSet +
        + +

        postFilter 是一个在过滤阶段结束后会被调用的插件列表; + 这里的插件只有在找不到合适的节点来运行 Pod 时才会被调用。

        +
        preScore [必需]
        +PluginSet
        - 此字段为指向某 KubeConfig 文件的路径。 +

        preScore 是一个在打分之前要调用的插件列表。

        acceptContentTypes [必需]
        -string +
        score [必需]
        +PluginSet
        - acceptContentTypes 定义的是客户端与服务器建立连接时要发送的 -Accept 头部;这里的设置值会覆盖默认值 "application/json"。 -此字段会影响某特定客户端与服务器的所有连接。 +

        score 是一个在对已经通过过滤阶段的节点进行排序时调用的插件的列表。

        contentType [必需]
        -string +
        reserve [必需]
        +PluginSet
        - contentType 包含的是此客户端向服务器发送数据时使用的 -内容类型(Content Type)。 +

        reserve 是一组在运行 Pod 的节点已被选定后,需要预留或者释放资源时调用的插件的列表。

        qps [必需]
        -float32 +
        permit [必需]
        +PluginSet
        - qps 控制的是此连接上每秒可以发送的查询个数。 +

        permit 是一个用来控制 Pod 绑定关系的插件列表。 + 这些插件可以禁止或者延迟 Pod 的绑定。

        burst [必需]
        -int32 +
        preBind [必需]
        +PluginSet
        - burst 允许在客户端超出其速率限制时可以累积的额外查询个数。 +

        preBind 是一个在 Pod 被绑定到某节点之前要被调用的插件的列表。

        - -## `DebuggingConfiguration` {#DebuggingConfiguration} - - -**出现在:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) - - -DebuggingConfiguration 保存与调试功能相关的配置。 - - - - - - - + + +
        字段描述
        enableProfiling [必需]
        -bool +
        bind [必需]
        +PluginSet
        - 此字段允许通过 Web 接口 host:port/debug/pprof/ 执行性能分析。 +

        bind 是一个在调度框架中"Bind(绑定)"扩展点上要调用的插件的列表。 + 调度器按顺序调用这些插件。只要其中某个插件返回成功,则调度器就略过余下的插件。

        enableContentionProfiling [必需]
        -bool +
        postBind [必需]
        +PluginSet
        +

        postBind 是一个在 Pod 已经被成功绑定之后要调用的插件的列表。

        +
        multiPoint [必需]
        +PluginSet +
        + - 此字段在 enableProfiling 为 true 时允许执行锁竞争分析。 +

        multiPoint 是一个简化的配置段落,用来为所有合法的扩展点启用插件。

        -## `FormatOptions` {#FormatOptions} +## `PodTopologySpreadConstraintsDefaulting` {#kubescheduler-config-k8s-io-v1beta2-PodTopologySpreadConstraintsDefaulting} + + +(`string` 类型的别名) + +**出现在:** + +- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta2-PodTopologySpreadArgs) + + +

        PodTopologySpreadConstraintsDefaulting 定义如何为 +PodTopologySpread 插件设置默认的约束。

        + +## `RequestedToCapacityRatioParam` {#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioParam} +**出现在:** + +- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy) -FormatOptions 中包含不同日志格式的配置选项。 +

        RequestedToCapacityRatioParam 结构定义 RequestedToCapacityRatio 的参数。

        - -
        字段描述
        json [必需]
        -JSONOptions + +
        shape [必需]
        +[]UtilizationShapePoint
        - [实验特性] json 字段包含为 "json" 日志格式提供的配置选项。 +

        shape 是一个定义评分函数曲线的计分点的列表。

        -## `JSONOptions` {#JSONOptions} - +## `ResourceSpec` {#kubescheduler-config-k8s-io-v1beta2-ResourceSpec} + **出现在:** -- [FormatOptions](#FormatOptions) +- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesBalancedAllocationArgs) +- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy) -JSONOptions 包含为 "json" 日志格式所设置的配置选项。 +

        ResourceSpec 用来代表某个资源。

        - - -
        字段描述
        splitStream [必需]
        -bool + +
        name [必需]
        +string
        - [实验特性] 此字段将错误信息重定向到标准错误输出(stderr),将提示消息 -重定向到标准输出(stdout),并且支持缓存。默认配置为将二者都输出到 -标准输出(stdout),且不提供缓存。 +

        资源名称。

        infoBufferSize [必需]
        -k8s.io/apimachinery/pkg/api/resource.QuantityValue +
        weight [必需]
        +int64
        - [实验特性] infoBufferSize 用来在分离数据流场景是设置提示 -信息数据流的大小。默认值为 0,意味着禁止缓存。 +

        资源权重。

        -## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} +## `ScoringStrategy` {#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy} **出现在:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesFitArgs) -LeaderElectionConfiguration 为能够支持领导者选举的组件定义其领导者选举 -客户端的配置。 +

        ScoringStrategy 为节点资源插件定义 ScoringStrategyType。
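下面给出一个示意性的 NodeResourcesFit 参数片段(取值为假设的示例),演示 scoringStrategy 的几个字段如何配合使用:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: RequestedToCapacityRatio   # 也可以是 LeastAllocated 或 MostAllocated
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
            requestedToCapacityRatio:
              shape:                          # 计分点需按利用率升序排列
                - utilization: 0
                  score: 0
                - utilization: 100
                  score: 10
```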

        - - - - - - - - - - +
        字段描述
        leaderElect [必需]
        -bool -
        - - leaderElect 启用领导者选举客户端,从而在进入主循环执行之前 -先要获得领导者角色。当运行多副本组件时启用此功能有助于提高可用性。 -
        leaseDuration [必需]
        -meta/v1.Duration -
        - - leaseDuration 是非领导角色候选者在观察到需要领导席位更新时 -要等待的时间;只有经过所设置时长才可以尝试去获得一个仍处于领导状态但需要 -被刷新的席位。这里的设置值本质上意味着某个领导者在被另一个候选者替换掉 -之前可以停止运行的最长时长。只有当启用了领导者选举时此字段有意义。 -
        renewDeadline [必需]
        -meta/v1.Duration +
        type [必需]
        +ScoringStrategyType
        - renewDeadline 设置的是当前领导者在停止扮演领导角色之前 -需要刷新领导状态的时间间隔。此值必须小于或等于租约期限的长度。 -只有到启用了领导者选举时此字段才有意义。 +

        type 用来选择要运行的策略。

        retryPeriod [必需]
        -meta/v1.Duration +
        resources [必需]
        +[]ResourceSpec
        - retryPeriod 是客户端在连续两次尝试获得或者刷新领导状态 -之间需要等待的时长。只有当启用了领导者选举时此字段才有意义。 +

        resources 设置在评分时要考虑的资源。

        +

        默认的资源集合包含 "cpu" 和 "memory",且二者权重相同。

        +

        权重的取值范围为 1 到 100。

        +

        当权重未设置或者显式设置为 0 时,意味着使用默认值 1。

        resourceLock [必需]
        -string +
        requestedToCapacityRatio [必需]
        +RequestedToCapacityRatioParam
        - 此字段给出在领导者选举期间要作为锁来使用的资源对象类型。 +

        特定于 RequestedToCapacityRatio 策略的参数。

        resourceName [必需]
        -string +
        + +## `ScoringStrategyType` {#kubescheduler-config-k8s-io-v1beta2-ScoringStrategyType} + + +(`string` 数据类型的别名) + +**出现在:** + +- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy) + + +

ScoringStrategyType 是 NodeResourcesFit 插件所使用的评分策略类型。

        + +## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1beta2-UtilizationShapePoint} + + +**出现在:** + +- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta2-VolumeBindingArgs) +- [RequestedToCapacityRatioParam](#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioParam) + + +

        UtilizationShapePoint 代表的是优先级函数曲线中的一个评分点。

        + + + + + + -
        字段描述
        utilization [必需]
        +int32
        - 此字段给出在领导者选举期间要作为锁来使用的资源对象名称。 +

        利用率(x 轴)。合法值为 0 到 100。完全被利用的节点映射到 100。

        resourceNamespace [必需]
        -string +
        score [必需]
        +int32
        - 此字段给出在领导者选举期间要作为锁来使用的资源对象所在名字空间。 +

        分配给指定利用率的分值(y 轴)。合法值为 0 到 10。

        - -## `VModuleConfiguration` {#VModuleConfiguration} - - -(`[]k8s.io/component-base/config/v1alpha1.VModuleItem` 的别名) - - -VModuleConfiguration 是一组文件名(通配符)及其对应的日志详尽程度阈值。 - diff --git a/content/zh/docs/reference/config-api/kube-scheduler-config.v1beta3.md b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md similarity index 68% rename from content/zh/docs/reference/config-api/kube-scheduler-config.v1beta3.md rename to content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md index d20017253ff4a..a886979a82fe0 100644 --- a/content/zh/docs/reference/config-api/kube-scheduler-config.v1beta3.md +++ b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md @@ -25,13 +25,444 @@ auto_generated: true - [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta3-PodTopologySpreadArgs) - [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta3-VolumeBindingArgs) +## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

        ClientConnectionConfiguration 中包含用来构造客户端所需的细节。
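下面是一个示意性的 clientConnection 片段(路径与数值均为示例),供理解各字段的用途:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf   # 示例路径
  acceptContentTypes: ""                       # 留空时使用默认值 "application/json"
  contentType: application/vnd.kubernetes.protobuf
  qps: 50
  burst: 100
```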

        + + + + + + + + + + + + + + + + + + + + + +
        字段描述
        kubeconfig [必需]
        +string +
        + +

        此字段为指向 KubeConfig 文件的路径。

        +
        acceptContentTypes [必需]
        +string +
        + +

        + acceptContentTypes 定义的是客户端与服务器建立连接时要发送的 Accept 头部, + 这里的设置值会覆盖默认值 "application/json"。此字段会影响某特定客户端与服务器的所有连接。 +

        +
        contentType [必需]
        +string +
        + +

        + contentType 包含的是此客户端向服务器发送数据时使用的内容类型(Content Type)。 +

        +
        qps [必需]
        +float32 +
        + +

        qps 控制此连接允许的每秒查询次数。

        +
        burst [必需]
        +int32 +
        + +

        burst 允许在客户端超出其速率限制时可以累积的额外查询个数。

        +
        + +## `DebuggingConfiguration` {#DebuggingConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) + + +

        DebuggingConfiguration 保存与调试功能相关的配置。

        + + + + + + + + + + + + +
        字段描述
        enableProfiling [必需]
        +bool +
        + +

        此字段允许通过 Web 接口 host:port/debug/pprof/ 执行性能分析。

        +
        enableContentionProfiling [必需]
        +bool +
        + +

        此字段在 enableProfiling 为 true 时允许执行锁竞争分析。

        +
        + +## `FormatOptions` {#FormatOptions} + + + + +

        FormatOptions 中包含不同日志格式的配置选项。

        + + + + + + + + + +
        字段描述
        json [必需]
        +JSONOptions +
        + +

        [实验特性] json 字段包含为 "json" 日志格式提供的配置选项。

        +
        + +## `JSONOptions` {#JSONOptions} + + +**出现在:** + +- [FormatOptions](#FormatOptions) + + +

        JSONOptions 包含为 "json" 日志格式所设置的配置选项。

        + + + + + + + + + + + + +
        字段描述
        splitStream [必需]
        +bool +
        + +

        [实验特性] 此字段将错误信息重定向到标准错误输出(stderr), + 将提示消息重定向到标准输出(stdout),并且支持缓存。 + 默认配置为将二者都输出到标准输出(stdout),且不提供缓存。

        +
        infoBufferSize [必需]
        +k8s.io/apimachinery/pkg/api/resource.QuantityValue +
        + +

+ [实验特性] infoBufferSize 用来在分离数据流场景中设置提示信息数据流的大小。 + 默认值为 0,意味着禁止缓存。 +

        +
        + +## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

        +LeaderElectionConfiguration 为能够支持领导者选举的组件定义其领导者选举客户端的配置。 +
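下面是一个示意性的 leaderElection 片段(取值为常见的默认值,仅作参考):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  renewDeadline: 10s            # 必须小于或等于 leaseDuration
  retryPeriod: 2s
  resourceLock: leases
  resourceName: kube-scheduler
  resourceNamespace: kube-system
```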

        + + + + + + + + + + + + + + + + + + + + + + + + + + + +
        字段描述
        leaderElect [必需]
        +bool +
        + +

        + leaderElect 允许领导者选举客户端在进入主循环执行之前先获得领导者角色。 + 运行多副本组件时启用此功能有助于提高可用性。 +

        +
        leaseDuration [必需]
        +meta/v1.Duration +
        + +

        + leaseDuration 是非领导角色候选者在观察到需要领导席位更新时要等待的时间; + 只有经过所设置时长才可以尝试去获得一个仍处于领导状态但需要被刷新的席位。 + 这里的设置值本质上意味着某个领导者在被另一个候选者替换掉之前可以停止运行的最长时长。 + 只有当启用了领导者选举时此字段有意义。 +

        +
        renewDeadline [必需]
        +meta/v1.Duration +
        + +

+ renewDeadline 设置的是当前领导者在停止扮演领导角色之前需要刷新领导状态的时间间隔。 + 此值必须小于或等于租约期限的长度。只有当启用了领导者选举时此字段才有意义。 +

        +
        retryPeriod [必需]
        +meta/v1.Duration +
        + +

        + retryPeriod 是客户端在连续两次尝试获得或者刷新领导状态之间需要等待的时长。 + 只有当启用了领导者选举时此字段才有意义。 +

        +
        resourceLock [必需]
        +string +
        + +

        此字段给出在领导者选举期间要作为锁来使用的资源对象类型。

        +
        resourceName [必需]
        +string +
        + +

        此字段给出在领导者选举期间要作为锁来使用的资源对象名称。

        +
        resourceNamespace [必需]
        +string +
        + +

        此字段给出在领导者选举期间要作为锁来使用的资源对象所在名字空间。

        +
        + +## `LoggingConfiguration` {#LoggingConfiguration} + + +**出现在:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + + +

        +LoggingConfiguration 包含日志选项。 +参考 [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) 以了解更多信息。 +
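作为参考,下面是一个示意性的 kubelet 配置片段(取值仅为示例),展示这些日志选项在 KubeletConfiguration 中的写法:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: json       # 默认为 text
  verbosity: 3       # 数值越大,记录的消息越多
```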

        + + + + + + + + + + + + + + + + + + + + + +
        字段描述
        format [必需]
        +string +
        + +

        format 设置日志消息的结构。默认的格式取值为 text

        +
        flushFrequency [必需]
        +time.Duration +
        + +

        对日志进行清洗的最大间隔纳秒数(例如,1s = 1000000000)。 + 如果所选的日志后端在写入日志消息时不提供缓存,则此配置会被忽略。

        +
        verbosity [必需]
        +uint32 +
        + +

        verbosity 用来确定日志消息记录的详细程度阈值。 + 默认值为 0,意味着仅记录最重要的消息。 + 数值越大,额外的消息越多。错误消息总是被记录下来。

        +
        vmodule [必需]
        +VModuleConfiguration +
        + +

        vmodule 会在单个文件层面重载 verbosity 阈值的设置。 + 这一选项仅支持 "text" 日志格式。

        +
        options [必需]
        +FormatOptions +
        + +

        [实验特性] options 中包含特定于不同日志格式的配置参数。 + 只有针对所选格式的选项会被使用,但是合法性检查时会查看所有选项配置。

        +
        + +## `VModuleConfiguration` {#VModuleConfiguration} + + + +(`[]k8s.io/component-base/config/v1alpha1.VModuleItem` 的别名) + +**出现在:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

        VModuleConfiguration 是一组文件名(通配符)及其对应的日志详尽程度阈值。

        + ## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta3-DefaultPreemptionArgs} -DefaultPreemptionArgs 包含用来配置 DefaultPreemption 插件的参数。 +

        DefaultPreemptionArgs 包含用来配置 DefaultPreemption 插件的参数。

        @@ -50,8 +481,8 @@ shortlist when dry running preemption as a percentage of number of nodes. Must be in the range [0, 100]. Defaults to 10% of the cluster size if unspecified. --> - 此字段为试运行抢占时 shortlist 中候选节点数的下限,数值为节点数的百分比。 -字段值必须介于 [0, 100] 之间。未指定时默认值为整个集群规模的 10%。 +

        此字段为试运行抢占时 shortlist 中候选节点数的下限,数值为节点数的百分比。 + 字段值必须介于 [0, 100] 之间。未指定时默认值为整个集群规模的 10%。

        @@ -82,7 +514,7 @@ that play a role in the number of candidates shortlisted. Must be at least -InterPodAffinityArgs 包含用来配置 InterPodAffinity 插件的参数。 +

        InterPodAffinityArgs 包含用来配置 InterPodAffinity 插件的参数。

        字段描述
        minCandidateNodesAbsolute [必需]
        @@ -67,11 +498,12 @@ We say "likely" because there are other factors such as PDB violations that play a role in the number of candidates shortlisted. Must be at least 0 nodes. Defaults to 100 nodes if unspecified. --> - 此字段设置 shortlist 中候选节点的绝对下限。用于试运行抢占而列举的 -候选节点个数近似于通过下面的公式计算的:
        -候选节点数 = max(节点数 * minCandidateNodesPercentage, minCandidateNodesAbsolute) -之所以说是“近似于”是因为存在一些类似于 PDB 违例这种因素,会影响到进入 shortlist -中候选节点的个数。取值至少为 0 节点。若未设置默认为 100 节点。 +

        此字段设置 shortlist 中候选节点的绝对下限。 + 用于试运行抢占而列举的候选节点个数近似于通过下面的公式计算的:
+ 候选节点数 = max(节点数 * minCandidateNodesPercentage, minCandidateNodesAbsolute) + 之所以说是"近似于"是因为存在一些类似于 PDB 违例这种因素, + 会影响到进入 shortlist 中候选节点的个数。 + 取值至少为 0 节点。若未设置默认为 100 节点。

        @@ -99,8 +531,9 @@ InterPodAffinityArgs 包含用来配置 InterPodAffinity 插件的参数。 HardPodAffinityWeight is the scoring weight for existing pods with a matching hard affinity to the incoming pod. --> - 此字段是一个计分权重值。针对新增的 Pod,要对现存的、带有与新 Pod 匹配的 -硬性亲和性设置的 Pods 计算亲和性得分。 +

此字段是一个计分权重值。针对新增的 Pod,要对现存的、 + 带有与新 Pod 匹配的硬性亲和性设置的 Pod 计算亲和性得分。 +

        @@ -111,7 +544,7 @@ matching hard affinity to the incoming pod. -KubeSchedulerConfiguration 用来配置调度器。 +

        KubeSchedulerConfiguration 用来配置调度器。
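下面给出一个最小化的示意配置(取值仅为示例),可通过 kube-scheduler 的 --config 标志加载;各字段的含义见下文表格:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
parallelism: 16                  # 必须大于 0,默认值为 16
percentageOfNodesToScore: 0      # 0 表示使用默认百分比
podInitialBackoffSeconds: 1
podMaxBackoffSeconds: 10
profiles:
  - schedulerName: default-scheduler
```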

        字段描述
        @@ -127,8 +560,9 @@ KubeSchedulerConfiguration 用来配置调度器。 - 此字段设置为调度 Pod 而执行算法时的并发度。此值必须大于 0。 -默认值为 16。 +

        + 此字段设置为调度 Pod 而执行算法时的并发度。此值必须大于 0。默认值为 16。 +

        + @@ -246,7 +689,7 @@ with the extender. These extenders are shared by all scheduler profiles. -NodeAffinityArgs 中包含配置 NodeAffinity 插件的参数。 +

        NodeAffinityArgs 中包含配置 NodeAffinity 插件的参数。
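下面是一个示意性的 addedAffinity 片段(标签键值为假设的示例),通过 pluginConfig 传递给 NodeAffinity 插件:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: NodeAffinity
        args:
          addedAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: topology.kubernetes.io/zone   # 示例标签键
                      operator: In
                      values:
                        - zone-a                          # 示例取值
```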

        字段描述
        leaderElection [必需]
        @@ -138,7 +572,7 @@ KubeSchedulerConfiguration 用来配置调度器。 - 此字段用来定义领导者选举客户端的配置。 +

        此字段用来定义领导者选举客户端的配置。

        clientConnection [必需]
        @@ -149,18 +583,22 @@ KubeSchedulerConfiguration 用来配置调度器。 ClientConnection specifies the kubeconfig file and client connection settings for the proxy server to use when communicating with the apiserver. --> - 此字段为与 API 服务器通信时使用的代理服务器设置 kubeconfig 文件和客户端 -连接配置。 +

        此字段为与 API 服务器通信时使用的代理服务器设置 kubeconfig 文件和客户端连接配置。

        DebuggingConfiguration [必需]
        DebuggingConfiguration
        DebuggingConfiguration 的成员被内嵌到此类型中) - 此字段设置与调试相关功能特性的配置。 +

        此字段设置与调试相关功能特性的配置。 + TODO:我们可能想把它做成一个子结构,像调试 component-base/config/v1alpha1.DebuggingConfiguration 一样。

        percentageOfNodesToScore [必需]
        @@ -177,12 +615,14 @@ then scheduler stops finding further feasible nodes once it finds 150 feasible o When the value is 0, default percentage (5%--50% based on the size of the cluster) of the nodes will be scored. --> +

        此字段为所有节点的百分比,一旦调度器找到所设置比例的、能够运行 Pod 的节点, -则停止在集群中继续寻找更合适的节点。这一配置有助于提高调度器的性能。调度器 -总会尝试寻找至少 "minFeasibleNodesToFind" 个可行节点,无论此字段的取值如何。 -例如:当集群规模为 500 个节点,而此字段的取值为 30,则调度器在找到 150 个合适 -的节点后会停止继续寻找合适的节点。当此值为 0 时,调度器会使用默认节点数百分比(基于集群规模 -确定的值,在 5% 到 50% 之间)来执行打分操作。 + 则停止在集群中继续寻找更合适的节点。这一配置有助于提高调度器的性能。 + 调度器总会尝试寻找至少 "minFeasibleNodesToFind" 个可行节点,无论此字段的取值如何。 + 例如:当集群规模为 500 个节点,而此字段的取值为 30, + 则调度器在找到 150 个合适的节点后会停止继续寻找合适的节点。当此值为 0 时, + 调度器会使用默认节点数百分比(基于集群规模确定的值,在 5% 到 50% 之间)来执行打分操作。 +

        podInitialBackoffSeconds [必需]
        @@ -194,8 +634,8 @@ nodes will be scored. If specified, it must be greater than 0. If this value is null, the default value (1s) will be used. --> - 此字段设置不可调度 Pod 的初始回退秒数。如果设置了此字段,其取值必须大于零。 -若此值为 null,则使用默认值(1s)。 +

        此字段设置不可调度 Pod 的初始回退秒数。如果设置了此字段,其取值必须大于零。 + 若此值为 null,则使用默认值(1s)。

        podMaxBackoffSeconds [必需]
        @@ -207,8 +647,9 @@ will be used. If specified, it must be greater than podInitialBackoffSeconds. If this value is null, the default value (10s) will be used. --> - 此字段设置不可调度的 Pod 的最大回退秒数。如果设置了此字段,则其值必须大于 -podInitialBackoffSeconds 字段值。如果此值设置为 null,则使用默认值(10s)。 +

        此字段设置不可调度的 Pod 的最大回退秒数。 + 如果设置了此字段,则其值必须大于 podInitialBackoffSeconds 字段值。 + 如果此值设置为 null,则使用默认值(10s)。

        profiles [必需]
        @@ -221,9 +662,10 @@ choose to be scheduled under a particular profile by setting its associated scheduler name. Pods that don't specify any scheduler name are scheduled with the "default-scheduler" profile, if present here. --> - 此字段为 kube-scheduler 所支持的方案(profiles)。Pod 可以通过设置其对应 -的调度器名称来选择使用特定的方案。未指定调度器名称的 Pod 会使用 -“default-scheduler”方案来调度,如果存在的话。 +

此字段为 kube-scheduler 所支持的方案(profiles)。 + Pod 可以通过设置其对应的调度器名称来选择使用特定的方案。 + 未指定调度器名称的 Pod 会使用 "default-scheduler" 方案来调度,如果存在的话。 +

        extenders [必需]
        @@ -234,8 +676,9 @@ with the "default-scheduler" profile, if present here. Extenders are the list of scheduler extenders, each holding the values of how to communicate with the extender. These extenders are shared by all scheduler profiles. --> - 此字段为调度器扩展模块(Extender)的列表,每个元素包含如何与某扩展模块 -通信的配置信息。所有调度器模仿会共享此扩展模块列表。 +

此字段为调度器扩展模块(Extender)的列表, + 每个元素包含如何与某扩展模块通信的配置信息。 + 所有调度器方案会共享此扩展模块列表。

        @@ -267,11 +710,12 @@ match). When AddedAffinity is used, some Pods with affinity requirements that match a specific Node (such as Daemonset Pods) might remain unschedulable. --> - addedAffinity 会作为附加的亲和性属性添加到所有 Pod 的 -规约中指定的 NodeAffinity 中。换言之,节点需要同时满足 addedAffinity -和 .spec.nodeAffinity。默认情况下,addedAffinity 为空(与所有节点匹配)。 -使用了 addedAffinity 时,某些带有已经能够与某特定节点匹配的亲和性需求 -的 Pod (例如 DaemonSet Pod)可能会继续呈现不可调度状态。 +

        + addedAffinity 会作为附加的亲和性属性添加到所有 Pod 的规约中指定的 NodeAffinity 中。 + 换言之,节点需要同时满足 addedAffinity 和 .spec.nodeAffinity。 + 默认情况下,addedAffinity 为空(与所有节点匹配)。使用了 addedAffinity 时, + 某些带有已经能够与某特定节点匹配的亲和性需求的 Pod (例如 DaemonSet Pod)可能会继续呈现不可调度状态。 +

        @@ -282,7 +726,7 @@ a specific Node (such as Daemonset Pods) might remain unschedulable. -NodeResourcesBalancedAllocationArgs 包含用来配置 NodeResourcesBalancedAllocation 插件的参数。 +

        NodeResourcesBalancedAllocationArgs 包含用来配置 NodeResourcesBalancedAllocation 插件的参数。

        字段描述
        @@ -298,7 +742,7 @@ NodeResourcesBalancedAllocationArgs 包含用来配置 NodeResourcesBalancedAllo - 要管理的资源;如果未设置,则默认值为 "cpu" 和 "memory"。 +

        要管理的资源;如果未设置,则默认值为 "cpu" 和 "memory"。

        @@ -309,7 +753,7 @@ NodeResourcesBalancedAllocationArgs 包含用来配置 NodeResourcesBalancedAllo -NodeResourcesFitArgs 包含用来配置 NodeResourcesFit 插件的参数。 +

        NodeResourcesFitArgs 包含用来配置 NodeResourcesFit 插件的参数。

        字段描述
        @@ -326,7 +770,7 @@ NodeResourcesFitArgs 包含用来配置 NodeResourcesFit 插件的参数。 IgnoredResources is the list of resources that NodeResources fit filter should ignore. This doesn't apply to scoring. --> - 此字段为 NodeResources 匹配过滤器要忽略的资源列表。此列表不影响节点打分。 +

        此字段为 NodeResources 匹配过滤器要忽略的资源列表。此列表不影响节点打分。

        @@ -365,7 +810,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "memory" weight. -PodTopologySpreadArgs 包含用来配置 PodTopologySpread 插件的参数。 +

        PodTopologySpreadArgs 包含用来配置 PodTopologySpread 插件的参数。

        字段描述
        ignoredResourceGroups [必需]
        @@ -339,10 +783,11 @@ e.g. if group is ["example.com"], it will ignore all resource names that begin with "example.com", such as "example.com/aaa" and "example.com/bbb". A resource group name can't contain '/'. This doesn't apply to scoring. --> - 此字段定义 NodeResources 匹配过滤器要忽略的资源组列表。 -例如,如果配置值为 ["example.com"],则以 "example.com" 开头的资源名(如 -"example.com/aaa" 和 "example.com/bbb")都会被忽略。 -资源组名称中不可以包含 '/'。此设置不影响节点的打分。 +

此字段定义 NodeResources 匹配过滤器要忽略的资源组列表。 + 例如,如果配置值为 ["example.com"], + 则以 "example.com" 开头的资源名 + (如 "example.com/aaa" 和 "example.com/bbb")都会被忽略。 + 资源组名称中不可以包含 '/'。此设置不影响节点的打分。

        scoringStrategy [必需]
        @@ -353,8 +798,8 @@ A resource group name can't contain '/'. This doesn't apply to scoring. ScoringStrategy selects the node resource scoring strategy. The default strategy is LeastAllocated with an equal "cpu" and "memory" weight. --> - 此字段用来选择节点资源打分策略。默认的策略为 LeastAllocated,且 "cpu" 和 -"memory" 的权重相同。 +

        此字段用来选择节点资源打分策略。默认的策略为 LeastAllocated, + 且 "cpu" 和 "memory" 的权重相同。

        @@ -386,11 +831,10 @@ deduced from the Pod's membership to Services, ReplicationControllers, ReplicaSets or StatefulSets. When not empty, .defaultingType must be "List". --> - 此字段针对未定义 .spec.topologySpreadConstraints 的 Pod, -为其提供拓扑分布约束。.defaultConstraints[∗].labelSelectors -必须为空,因为这一信息要从 Pod 所属的 Service、ReplicationController、 -ReplicaSet 或 StatefulSet 来推导。 -此字段不为空时,.defaultingType 必须为 "List"。 +

此字段针对未定义 .spec.topologySpreadConstraints 的 Pod, + 为其提供拓扑分布约束。.defaultConstraints[∗].labelSelectors 必须为空, + 因为这一信息要从 Pod 所属的 Service、ReplicationController、ReplicaSet 或 StatefulSet 来推导。 + 此字段不为空时,.defaultingType 必须为 "List"。

        @@ -423,7 +866,7 @@ and to "System" if enabled. -VolumeBindingArgs 包含用来配置 VolumeBinding 插件的参数。 +

        VolumeBindingArgs 包含用来配置 VolumeBinding 插件的参数。

        字段描述
        defaultingType
        @@ -403,16 +847,15 @@ of "System" or "List". - "System": Use kubernetes defined constraints that spread Pods among Nodes and Zones. - "List": Use constraints defined in .defaultConstraints. -Defaults to "List" if feature gate DefaultPodTopologySpread is disabled -and to "System" if enabled. +Defaults to "System". -->

        defaultingType 决定如何推导 .defaultConstraints。 -可选值为 "System" 或 "List"。

        + 可选值为 "System" 或 "List"。

          -
        • "System":使用 Kubernetes 定义的约束,将 Pod 分布到不同节点和可用区;
        • -
        • "List":使用 .defaultConstraints 中定义的约束。
        • +
        • "System":使用 Kubernetes 定义的约束,将 Pod 分布到不同节点和可用区;
        • +
        • "List":使用 .defaultConstraints 中定义的约束。
        -

        当特性门控 DefaultPodTopologySpread 被禁用时,默认值为 "list";反之,默认值为 "System"。

        +

        默认值为 "System"。

        @@ -441,8 +884,8 @@ VolumeBindingArgs 包含用来配置 VolumeBinding 插件的参数。 Value must be non-negative integer. The value zero indicates no waiting. If this value is nil, the default value (600) will be used. --> - 此字段设置卷绑定操作的超时秒数。字段值必须是非负数。 -取值为 0 意味着不等待。如果此值为 null,则使用默认值(600)。 +

        此字段设置卷绑定操作的超时秒数。字段值必须是非负数。 + 取值为 0 意味着不等待。如果此值为 null,则使用默认值(600)。

        @@ -492,8 +935,8 @@ All points must be sorted in increasing order by utilization. Extender holds the parameters used to communicate with the extender. If a verb is unspecified/empty, it is assumed that the extender chose not to provide that extension. --> -Extender 包含与扩展模块(Extender)通信所用的参数。 -如果未指定 verb 或者 verb 为空,则假定对应的扩展模块选择不提供该扩展功能。 +

        Extender 包含与扩展模块(Extender)通信所用的参数。 +如果未指定 verb 或者 verb 为空,则假定对应的扩展模块选择不提供该扩展功能。
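下面是一个示意性的 extenders 片段(URL 与资源名均为假设值),展示部分字段的组合方式:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
extenders:
  - urlPrefix: https://extender.example.com/scheduler   # 示例地址
    filterVerb: filter
    prioritizeVerb: prioritize
    weight: 1
    enableHTTPS: true
    nodeCacheCapable: false
    ignorable: true              # 扩展模块不可达时不让整个调度操作失败
    managedResources:
      - name: example.com/foo    # 示例扩展资源
        ignoredByScheduler: true
```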

        字段描述
        shape
        @@ -462,17 +905,17 @@ The default shape points are: 2) 10 for 100 utilization All points must be sorted in increasing order by utilization. --> -

        shape 用来设置打分函数曲线所使用的计分点,这些计分点 -用来基于静态制备的 PV 卷的利用率为节点打分。 -卷的利用率是计算得来的,将 Pod 所请求的总的存储空间大小除以每个节点 -上可用的总的卷容量。每个计分点包含利用率(范围从 0 到 100)和其对应 -的得分(范围从 0 到 10)。你可以通过为不同的使用率值设置不同的得分来 -反转优先级:

        +

        shape 用来设置打分函数曲线所使用的计分点, + 这些计分点用来基于静态制备的 PV 卷的利用率为节点打分。 + 卷的利用率是计算得来的, + 将 Pod 所请求的总的存储空间大小除以每个节点上可用的总的卷容量。 + 每个计分点包含利用率(范围从 0 到 100)和其对应的得分(范围从 0 到 10)。 + 你可以通过为不同的使用率值设置不同的得分来反转优先级:

        默认的曲线计分点为:

        -
          +
          1. 利用率为 0 时得分为 0;
          2. 利用率为 100 时得分为 10。
          3. -
        +

        所有计分点必须按利用率值的升序来排序。

        @@ -506,7 +949,7 @@ Extender 包含与扩展模块(Extender)通信所用的参数。 - 用来访问扩展模块的 URL 前缀。 +

        用来访问扩展模块的 URL 前缀。

        @@ -646,8 +1089,8 @@ prioritize 和 bind (如果扩展模块可以执行绑定操作)阶段被发 Ignorable specifies if the extender is ignorable, i.e. scheduling should not fail when the extender returns an error or is not reachable. --> - 此字段用来设置扩展模块是否是可忽略的。换言之,当扩展模块返回错误或者 -完全不可达时,调度操作不应失败。 +

        此字段用来设置扩展模块是否是可忽略的。 + 换言之,当扩展模块返回错误或者完全不可达时,调度操作不应失败。

        @@ -666,7 +1109,7 @@ fail when the extender returns an error or is not reachable. ExtenderManagedResource describes the arguments of extended resources managed by an extender. --> -ExtenderManagedResource 描述某扩展模块所管理的扩展资源的参数。 +

        ExtenderManagedResource 描述某扩展模块所管理的扩展资源的参数。

        字段描述
        filterVerb [必需]
        @@ -516,8 +959,8 @@ Extender 包含与扩展模块(Extender)通信所用的参数。 - filter 调用所使用的动词,如果不支持过滤操作则为空。 -此动词会在向扩展模块发送 filter 调用时追加到 urlPrefix 后面。 +

        filter 调用所使用的动词,如果不支持过滤操作则为空。 + 此动词会在向扩展模块发送 filter 调用时追加到 urlPrefix 后面。

        preemptVerb [必需]
        @@ -527,8 +970,8 @@ Extender 包含与扩展模块(Extender)通信所用的参数。 - preempt 调用所使用的动词,如果不支持过滤操作则为空。 -此动词会在向扩展模块发送 preempt 调用时追加到 urlPrefix 后面。 +

preempt 调用所使用的动词,如果不支持抢占操作则为空。 + 此动词会在向扩展模块发送 preempt 调用时追加到 urlPrefix 后面。

        prioritizeVerb [必需]
        @@ -538,8 +981,8 @@ Extender 包含与扩展模块(Extender)通信所用的参数。 - prioritize 调用所使用的动词,如果不支持过滤操作则为空。 -此动词会在向扩展模块发送 prioritize 调用时追加到 urlPrefix 后面。 +

prioritize 调用所使用的动词,如果不支持 prioritize 操作则为空。 + 此动词会在向扩展模块发送 prioritize 调用时追加到 urlPrefix 后面。

        weight [必需]
        @@ -550,8 +993,8 @@ Extender 包含与扩展模块(Extender)通信所用的参数。 The numeric multiplier for the node scores that the prioritize call generates. The weight should be a positive integer --> - 针对 prioritize 调用所生成的节点分数要使用的数值系数。 -weight 值必须是正整数。 +

        针对 prioritize 调用所生成的节点分数要使用的数值系数。 + weight 值必须是正整数。

        bindVerb [必需]
        @@ -563,10 +1006,10 @@ weight 值必须是正整数。 If this method is implemented by the extender, it is the extender's responsibility to bind the pod to apiserver. Only one extender can implement this function. --> - bind 调用所使用的动词,如果不支持过滤操作则为空。 -此动词会在向扩展模块发送 bind 调用时追加到 urlPrefix 后面。 -如果扩展模块实现了此方法,扩展模块要负责将 Pod 绑定到 API 服务器。 -只有一个扩展模块可以实现此函数。 +

bind 调用所使用的动词,如果不支持 bind 操作则为空。 + 此动词会在向扩展模块发送 bind 调用时追加到 urlPrefix 后面。 + 如果扩展模块实现了此方法,扩展模块要负责将 Pod 绑定到 API 服务器。 + 只有一个扩展模块可以实现此函数。

        enableHTTPS [必需]
        @@ -576,7 +1019,7 @@ can implement this function. - 此字段设置是否需要使用 HTTPS 来与扩展模块通信。 +

        此字段设置是否需要使用 HTTPS 来与扩展模块通信。

        tlsConfig [必需]
        @@ -586,20 +1029,20 @@ can implement this function. - 此字段设置传输层安全性(TLS)配置。 +

        此字段设置传输层安全性(TLS)配置。

        httpTimeout [必需]
        -meta/v1.Duration +meta/v1.Duration
        - 此字段给出扩展模块功能调用的超时值。filter 操作超时会导致 Pod 无法被调度。 -prioritize 操作超时会被忽略,Kubernetes 或者其他扩展模块所给出的优先级值 -会被用来选择节点。 +

        此字段给出扩展模块功能调用的超时值。filter 操作超时会导致 Pod 无法被调度。 + prioritize 操作超时会被忽略, + Kubernetes 或者其他扩展模块所给出的优先级值会被用来选择节点。

        nodeCacheCapable [必需]
        @@ -611,8 +1054,8 @@ prioritize 操作超时会被忽略,Kubernetes 或者其他扩展模块所给 so the scheduler should only send minimal information about the eligible nodes assuming that the extender already cached full details of all nodes in the cluster --> - 此字段指示扩展模块可以缓存节点信息,从而调度器应该发送关于可选节点的最少信息, -假定扩展模块已经缓存了集群中所有节点的全部详细信息。 +

        此字段指示扩展模块可以缓存节点信息,从而调度器应该发送关于可选节点的最少信息, + 假定扩展模块已经缓存了集群中所有节点的全部详细信息。

        managedResources
        @@ -632,9 +1075,9 @@ this extender.

        managedResources 是一个由此扩展模块所管理的扩展资源的列表。

        • 如果某 Pod 请求了此列表中的至少一个扩展资源,则 Pod 会在 filter、 -prioritize 和 bind (如果扩展模块可以执行绑定操作)阶段被发送到该扩展模块。
        • + prioritize 和 bind (如果扩展模块可以执行绑定操作)阶段被发送到该扩展模块。
        • 如果某资源上设置了 ignoredByScheduler 为 true,则 kube-scheduler -会在断言阶段略过对该资源的检查。
        • + 会在断言阶段略过对该资源的检查。
        @@ -679,7 +1122,7 @@ ExtenderManagedResource 描述某扩展模块所管理的扩展资源的参数 - 扩展资源的名称。 +

        扩展资源的名称。

        @@ -708,7 +1151,7 @@ resource when applying predicates. -ExtenderTLSConfig 包含启用与扩展模块间 TLS 传输所需的配置参数。 +

        ExtenderTLSConfig 包含启用与扩展模块间 TLS 传输所需的配置参数。

        字段描述
        ignoredByScheduler [必需]
        @@ -690,7 +1133,7 @@ ExtenderManagedResource 描述某扩展模块所管理的扩展资源的参数 IgnoredByScheduler indicates whether kube-scheduler should ignore this resource when applying predicates. --> - 此字段标明 kube-scheduler 是否应在应用断言时忽略此资源。 +

        此字段标明 kube-scheduler 是否应在应用断言时忽略此资源。

        @@ -721,7 +1164,7 @@ ExtenderTLSConfig 包含启用与扩展模块间 TLS 传输所需的配置参数 - 访问服务器时不需要检查 TLS 证书。此配置仅针对测试用途。 +

        访问服务器时不需要检查 TLS 证书。此配置仅针对测试用途。

        @@ -819,7 +1263,7 @@ CAData takes precedence over CAFile -KubeSchedulerProfile 是一个调度方案。 +

        KubeSchedulerProfile 是一个调度方案。

        字段描述
        serverName [必需]
        @@ -733,9 +1176,10 @@ ExtenderTLSConfig 包含启用与扩展模块间 TLS 传输所需的配置参数 certificates against. If ServerName is empty, the hostname used to contact the server is used. --> - serverName 会被发送到服务器端,作为 SNI 标志;客户端会使用 -此设置来检查服务器证书。如果 serverName 为空,则会使用联系 -服务器时所用的主机名。 +

        serverName 会被发送到服务器端,作为 SNI 标志; + 客户端会使用此设置来检查服务器证书。 + 如果 serverName 为空,则会使用联系服务器时所用的主机名。 +

        certFile [必需]
        @@ -745,7 +1189,7 @@ server is used. - 服务器端所要求的 TLS 客户端证书认证。 +

        服务器端所要求的 TLS 客户端证书认证。

        keyFile [必需]
        @@ -755,7 +1199,7 @@ server is used. - 服务器端所要求的 TLS 客户端秘钥认证。 +

        服务器端所要求的 TLS 客户端秘钥认证。

        caFile [必需]
        @@ -765,7 +1209,7 @@ server is used. - 服务器端被信任的根证书。 +

        服务器端被信任的根证书。

        certData [必需]
        @@ -776,8 +1220,8 @@ server is used. CertData holds PEM-encoded bytes (typically read from a client certificate file). CertData takes precedence over CertFile --> - certData 包含 PEM 编码的字节流(通常从某客户端证书文件读入)。 -此字段优先级高于 certFile 字段。 +

        certData 包含 PEM 编码的字节流(通常从某客户端证书文件读入)。 + 此字段优先级高于 certFile 字段。

        keyData [必需]
        @@ -788,8 +1232,8 @@ CertData takes precedence over CertFile KeyData holds PEM-encoded bytes (typically read from a client certificate key file). KeyData takes precedence over KeyFile --> - keyData 包含 PEM 编码的字节流(通常从某客户端证书秘钥文件读入)。 -此字段优先级高于 keyFile 字段。 +

        keyData 包含 PEM 编码的字节流(通常从某客户端证书秘钥文件读入)。 + 此字段优先级高于 keyFile 字段。

        caData [必需]
        @@ -800,8 +1244,8 @@ KeyData takes precedence over KeyFile CAData holds PEM-encoded bytes (typically read from a root certificates bundle). CAData takes precedence over CAFile --> - caData 包含 PEM 编码的字节流(通常从某根证书包文件读入)。 -此字段优先级高于 caFile 字段。 +

        caData 包含 PEM 编码的字节流(通常从某根证书包文件读入)。 + 此字段优先级高于 caFile 字段。

        @@ -834,9 +1278,9 @@ KubeSchedulerProfile 是一个调度方案。 if schedulername matches with the pod's "spec.schedulername", then the pod is scheduled with this profile. --> - schedulerName 是与此调度方案相关联的调度器的名称。 -如果 schedulerName 与 Pod 的 spec.schedulerName -匹配,则该 Pod 会使用此方案来调度。 +

        schedulerName 是与此调度方案相关联的调度器的名称。 + 如果 schedulerName 与 Pod 的 spec.schedulerName匹配, + 则该 Pod 会使用此方案来调度。
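作为示意(并非权威配置),下面的草稿展示了 schedulerName 与 Pod 的 spec.schedulerName 之间的匹配关系;其中 my-batch-scheduler 为假设的方案名称:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler   # 未显式设置 spec.schedulerName 的 Pod 默认使用此名称
- schedulerName: my-batch-scheduler  # 假设的第二个调度方案
# 若某 Pod 在 spec.schedulerName 中填入 my-batch-scheduler,
# 则该 Pod 会使用第二个调度方案来调度。
```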

        @@ -889,7 +1334,7 @@ for that plugin. -Plugin 指定插件的名称及其权重(如果适用的话)。权重仅用于评分(Score)插件。 +

        Plugin 指定插件的名称及其权重(如果适用的话)。权重仅用于评分(Score)插件。

        字段描述
        plugins [必需]
        @@ -854,11 +1298,12 @@ If a QueueSort plugin is specified, the same QueueSort Plugin and PluginConfig must be specified for all profiles. -->

        plugins 设置一组应该被启用或禁止的插件。 -被启用的插件是指除了默认插件之外需要被启用的插件。被禁止的插件 -是指需要被禁用的默认插件。

        -

        如果针对某个扩展点没有设置被启用或被禁止的插件,则使用该扩展点 -的默认插件(如果有的话)。如果设置了 QueueSort 插件,则同一个 QueueSort -插件和 pluginConfig 要被设置到所有调度方案之上。

        + 被启用的插件是指除了默认插件之外需要被启用的插件。 + 被禁止的插件是指需要被禁用的默认插件。

        +

        如果针对某个扩展点没有设置被启用或被禁止的插件, + 则使用该扩展点的默认插件(如果有的话)。如果设置了 QueueSort 插件, + 则同一个 QueueSort 插件和 pluginConfig 要被设置到所有调度方案之上。 +

        pluginConfig [必需]
        @@ -870,8 +1315,8 @@ PluginConfig must be specified for all profiles. Omitting config args for a plugin is equivalent to using the default config for that plugin. --> - pluginConfig 是为每个插件提供的一组可选的定制插件参数。 -如果忽略了插件的配置参数,则意味着使用该插件的默认配置。 +

        pluginConfig 是为每个插件提供的一组可选的定制插件参数。 + 如果忽略了插件的配置参数,则意味着使用该插件的默认配置。

        @@ -902,7 +1347,7 @@ Plugin 指定插件的名称及其权重(如果适用的话)。权重仅用 - 插件的名称。 +

        插件的名称。

        @@ -932,9 +1377,9 @@ PluginConfig specifies arguments that should be passed to a plugin at the time o A plugin that is invoked at multiple extension points is initialized once. Args can have arbitrary structure. It is up to the plugin to process these Args. --> -PluginConfig 给出初始化阶段要传递给插件的参数。 +

        PluginConfig 给出初始化阶段要传递给插件的参数。 在多个扩展点被调用的插件仅会被初始化一次。 -参数可以是任意结构。插件负责处理这里所传的参数。 +参数可以是任意结构。插件负责处理这里所传的参数。

        字段描述
        weight [必需]
        @@ -912,7 +1357,7 @@ Plugin 指定插件的名称及其权重(如果适用的话)。权重仅用 - 插件的权重;仅适用于评分(Score)插件。 +

        插件的权重;仅适用于评分(Score)插件。

        @@ -947,17 +1392,17 @@ PluginConfig 给出初始化阶段要传递给插件的参数。 - name 是所配置的插件的名称。 +

        name 是所配置的插件的名称。

        @@ -976,8 +1421,8 @@ PluginConfig 给出初始化阶段要传递给插件的参数。 PluginSet specifies enabled and disabled plugins for an extension point. If an array is empty, missing, or nil, default plugins at that extension point will be used. --> -PluginSet 为某扩展点设置要启用或禁用的插件。 -如果数组为空,或者取值为 null,则使用该扩展点的默认插件集合。 +

        PluginSet 为某扩展点设置要启用或禁用的插件。 +如果数组为空,或者取值为 null,则使用该扩展点的默认插件集合。

        字段描述
        args [必需]
        -k8s.io/apimachinery/pkg/runtime.RawExtension +k8s.io/apimachinery/pkg/runtime.RawExtension
        - args 定义在初始化阶段要传递给插件的参数。参数可以为任意结构。 +

        args 定义在初始化阶段要传递给插件的参数。参数可以为任意结构。

        @@ -993,9 +1438,9 @@ If the default plugin is also configured in the scheduler config file, the weigh be overridden accordingly. These are called after default plugins and in the same order specified here. --> - enabled 设置在默认插件之外要启用的插件。如果在调度器的配置 -文件中也配置了默认插件,则对应插件的权重会被覆盖。 -此处所设置的插件会在默认插件之后被调用,调用顺序与数组中元素顺序相同。 +

        enabled 设置在默认插件之外要启用的插件。 + 如果在调度器的配置文件中也配置了默认插件,则对应插件的权重会被覆盖。 + 此处所设置的插件会在默认插件之后被调用,调用顺序与数组中元素顺序相同。

        @@ -1029,12 +1474,12 @@ omitted from the config, then the default set of plugins is used for that extens Enabled plugins are called in the order specified here, after default plugins. If they need to be invoked before default plugins, default plugins must be disabled and re-enabled here in desired order. --> -Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定扩展点所启用 -的所有插件都在这一列表中。 +

        Plugins 结构中包含多个扩展点。当此结构被设置时, +针对特定扩展点所启用的所有插件都在这一列表中。 如果配置中不包含某个扩展点,则使用该扩展点的默认插件集合。 被启用的插件的调用顺序与这里指定的顺序相同,都在默认插件之后调用。 -如果它们需要在默认插件之前调用,则需要先行禁止默认插件,之后在这里 -按期望的顺序重新启用。 +如果它们需要在默认插件之前调用,则需要先行禁止默认插件, +之后在这里按期望的顺序重新启用。

        字段描述
        disabled [必需]
        @@ -1006,8 +1451,8 @@ These are called after default plugins and in the same order specified here. Disabled specifies default plugins that should be disabled. When all default plugins need to be disabled, an array containing only one "∗" should be provided. --> - disabled 设置要被禁用的默认插件。 -如果需要禁用所有的默认插件,应该提供仅包含一个元素 "∗" 的数组。 +

        disabled 设置要被禁用的默认插件。 + 如果需要禁用所有的默认插件,应该提供仅包含一个元素 "∗" 的数组。

        @@ -1047,7 +1492,7 @@ Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定 - queueSort 是一个在对调度队列中 Pod 排序时要调用的插件列表。 +

        queueSort 是一个在对调度队列中 Pod 排序时要调用的插件列表。

        @@ -633,7 +633,7 @@ ConfigMap 中,之后在新的控制面实例添加到集群或者现有控制 @@ -848,7 +848,7 @@ Defaults to 6443.

        APIServer 包含集群中 API 服务器部署所必需的设置。

@@ -860,8 +869,17 @@ - @@ -1006,10 +1015,10 @@ BootstrapTokenDiscovery 用来设置基于引导令牌的服务发现选项。 @@ -1022,14 +1031,13 @@ information will be fetched. caCertHashes specifies a set of public key pins to verify when token-based discovery is used. The root CA found during discovery must match one of these values. Specifying an empty set disables root CA pinning, which can be unsafe. -Each hash is specified as "<type>:<value>", where the only currently supported type is -"sha256". This is a hex-encoded SHA-256 hash of the Subject Public Key Info (SPKI) +Each hash is specified as "<type>:<value>", where the only currently supported type is +"sha256". This is a hex-encoded SHA-256 hash of the Subject Public Key Info (SPKI) --> caCertHashes 设置一组在基于令牌来发现服务时要验证的公钥指纹。 发现过程中获得的根 CA 必须与这里的数值之一匹配。 设置为空集合意味着禁用根 CA 指纹,因而可能是不安全的。 -每个哈希值的形式为 "<type>:<value>",当前唯一支持的 type 为 +每个哈希值的形式为 "<type>:<value>",当前唯一支持的 type 为 "sha256"。 哈希值为主体公钥信息(Subject Public Key Info,SPKI)对象的 SHA-256 哈希值(十六进制编码),形式为 DER 编码的 ASN.1。 @@ -1046,9 +1054,9 @@ object in DER-encoded ASN.1. These hashes can be calculated using, for example, caCertHashes. This can weaken the security of kubeadm since other nodes can impersonate the control-plane.

        --> - unsafeSkipCAVerification 允许在使用基于令牌的服务发现时不使用 +

        unsafeSkipCAVerification 允许在使用基于令牌的服务发现时不使用 caCertHashes 来执行 CA 验证。这会弱化 kubeadm 的安全性, -因为其他节点可以伪装成控制面。 +因为其他节点可以伪装成控制面。
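作为参考,下面是一个计算 caCertHashes 所需 SPKI 指纹的示意命令,假定集群 CA 证书位于 /etc/kubernetes/pki/ca.crt 且其公钥为 RSA 类型:

```bash
# 计算 CA 证书公钥(SPKI)的 SHA-256 十六进制哈希值
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
# 在输出前加上 "sha256:" 前缀,即可作为 caCertHashes 中的一项
```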

        @@ -1063,11 +1071,11 @@ impersonate the control-plane.

        - [BootstrapToken](#kubeadm-k8s-io-v1beta2-BootstrapToken) - -

        BootstrapTokenString 形式为 abcdef.abcdef0123456789 的一个令牌, +

BootstrapTokenString 是形式为 'abcdef.abcdef0123456789' 的一个令牌, 用来从加入集群的节点角度验证 API 服务器的身份,或者在 "kubeadm join" 节点启动引导时作为一种身份认证方法。 此令牌的生命期是短暂的,并且应该如此。

        @@ -1120,7 +1128,7 @@ ControlPlaneComponent 中包含对集群中所有控制面组件都适用的设 @@ -1178,7 +1187,9 @@ DNS 结构定义要在集群中使用的 DNS 插件。 - @@ -1469,8 +1481,8 @@ file from which to load cluster information.

        string @@ -1533,7 +1545,7 @@ originated from the Kubernetes/Kubernetes release process @@ -1542,9 +1554,9 @@ If not set, the imageRepository defined in ClusterConfiguration wil string @@ -1617,7 +1629,11 @@ Secret 中的证书的秘钥。对应的加密秘钥在 InitConfiguration 结构 - @@ -1717,8 +1733,8 @@ signing certificate.

        string @@ -1751,13 +1767,13 @@ node to the cluster, either via "kubeadm init" or "kubeadm join&q - @@ -2827,19 +2578,6 @@ Only supported for "text" log format.--> - - - - diff --git a/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md b/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md new file mode 100644 index 0000000000000..cf3287b09abb3 --- /dev/null +++ b/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md @@ -0,0 +1,253 @@ +--- +title: Kubelet CredentialProvider (v1alpha1) +content_type: tool-reference +package: credentialprovider.kubelet.k8s.io/v1alpha1 +--- + + + +## 资源类型 {#resource-types} + +- [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderRequest) +- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderResponse) + +## `CredentialProviderRequest` {#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderRequest} + + +

+CredentialProviderRequest 包含 kubelet 需要为之进行身份验证的镜像。 +Kubelet 会通过标准输入将此请求对象传递给插件。一般来说,插件应优先使用与其收到的请求相同的 apiVersion 来响应。 +
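下面是一个示意性的 CredentialProviderRequest 对象(kubelet 实际通过标准输入传递序列化后的请求;镜像名称为假设的示例值):

```yaml
apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
kind: CredentialProviderRequest
image: registry.example.com/team/app:v1.2.3   # 需要插件为之返回凭据的镜像(示例)
```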

        + +
        字段描述
        preFilter [必需]
        @@ -1057,8 +1502,8 @@ Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定 - preFilter 是一个在调度框架中“PreFilter(预过滤)”扩展点上要 -调用的插件列表。 +

        preFilter 是一个在调度框架中"PreFilter(预过滤)"扩展点上要 + 调用的插件列表。

        filter [必需]
        @@ -1068,7 +1513,7 @@ Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定 - filter 是一个在需要过滤掉无法运行 Pod 的节点时被调用的插件列表。 +

        filter 是一个在需要过滤掉无法运行 Pod 的节点时被调用的插件列表。

        postFilter [必需]
        @@ -1078,8 +1523,8 @@ Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定 - postFilter 是一个在过滤阶段结束后会被调用的插件列表; -这里的插件只有在找不到合适的节点来运行 Pod 时才会被调用。 +

        postFilter 是一个在过滤阶段结束后会被调用的插件列表; + 这里的插件只有在找不到合适的节点来运行 Pod 时才会被调用。

        preScore [必需]
        @@ -1089,7 +1534,7 @@ Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定 - preScore 是一个在打分之前要调用的插件列表。 +

        preScore 是一个在打分之前要调用的插件列表。

        score [必需]
        @@ -1099,7 +1544,7 @@ Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定 - score 是一个在对已经通过过滤阶段的节点进行排序时调用的插件的列表。 +

        score 是一个在对已经通过过滤阶段的节点进行排序时调用的插件的列表。

        reserve [必需]
        @@ -1110,7 +1555,7 @@ Plugins 结构中包含多个扩展点。当此结构被设置时,针对特定 Reserve is a list of plugins invoked when reserving/unreserving resources after a node is assigned to run the pod. --> - reserve 是一组在运行 Pod 的节点已被选定后,需要预留或者释放资源时调用的插件的列表。 +

        reserve 是一组在运行 Pod 的节点已被选定后,需要预留或者释放资源时调用的插件的列表。

        permit [必需]
        @@ -1120,8 +1565,8 @@ after a node is assigned to run the pod. - permit 是一个用来控制 Pod 绑定关系的插件列表。这些插件可以 -禁止或者延迟 Pod 的绑定。 +

        permit 是一个用来控制 Pod 绑定关系的插件列表。 + 这些插件可以禁止或者延迟 Pod 的绑定。

        preBind [必需]
        @@ -1131,7 +1576,7 @@ after a node is assigned to run the pod. - preBind 是一个在 Pod 被绑定到某节点之前要被调用的插件的列表。 +

        preBind 是一个在 Pod 被绑定到某节点之前要被调用的插件的列表。

        bind [必需]
        @@ -1142,9 +1587,10 @@ after a node is assigned to run the pod. Bind is a list of plugins that should be invoked at "Bind" extension point of the scheduling framework. The scheduler call these plugins in order. Scheduler skips the rest of these plugins as soon as one returns success. --> - bind 是一个在调度框架中“Bind(绑定)”扩展点上要调用的 -插件的列表。调度器按顺序调用这些插件。只要其中某个插件返回成功,则调度器 -就略过余下的插件。 +

        + bind 是一个在调度框架中"Bind(绑定)"扩展点上要调用的插件的列表。 + 调度器按顺序调用这些插件。只要其中某个插件返回成功,则调度器就略过余下的插件。 +

        postBind [必需]
        @@ -1154,7 +1600,7 @@ The scheduler call these plugins in order. Scheduler skips the rest of these plu - postBind 是一个在 Pod 已经被成功绑定之后要调用的插件的列表。 +

        postBind 是一个在 Pod 已经被成功绑定之后要调用的插件的列表。

        multiPoint [必需]
        @@ -1169,11 +1615,11 @@ The same is true for disabling "∗" through MultiPoint (no default plugins Plugins can still be disabled through their individual extension points. -->

        multiPoint 是一个简化的配置段落,用来为所有合法的扩展点启用插件。 -通过 multiPoint 启用的插件会自动注册到插件所实现的每个独立的扩展点上。 -通过 multiPoint 禁用的插件会禁用对应的操作行为。 -通过 multiPoint 所禁止的 "∗" 也是如此,意味着所有默认 -插件都不会被自动注册。 -插件也可以通过各个独立的扩展点来禁用。

        + 通过 multiPoint 启用的插件会自动注册到插件所实现的每个独立的扩展点上。 + 通过 multiPoint 禁用的插件会禁用对应的操作行为。 + 通过 multiPoint 所禁止的 "∗" + 也是如此,意味着所有默认插件都不会被自动注册。 + 插件也可以通过各个独立的扩展点来禁用。
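下面是一个把 plugins、multiPoint 与 pluginConfig 组合在一起的示意性配置草稿;其中 MyCustomPlugin 及其参数均为假设值,被禁用的默认插件也仅作演示之用:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    multiPoint:
      enabled:
      - name: MyCustomPlugin            # 假设的自定义插件,自动注册到其实现的所有扩展点
    score:
      disabled:
      - name: NodeResourcesBalancedAllocation   # 仅在 score 扩展点禁用此默认插件(示例)
  pluginConfig:
  - name: MyCustomPlugin
    args:                               # 参数结构由插件自行定义(示例)
      someOption: someValue
```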

        -(`string` 类型的别名) - -**出现在:** - -- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta3-PodTopologySpreadArgs) - - -PodTopologySpreadConstraintsDefaulting 定义如何为 PodTopologySpread 插件 -设置默认的约束。 - -## `RequestedToCapacityRatioParam` {#kubescheduler-config-k8s-io-v1beta3-RequestedToCapacityRatioParam} - - -**出现在:** - -- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) - - -RequestedToCapacityRatioParam 结构定义 RequestedToCapacityRatio 的参数。 - - - - - - - - - -
        字段描述
        shape [必需]
        -[]UtilizationShapePoint -
        - - shape 是一个定义评分函数曲线的计分点的列表。 -
        - -## `ResourceSpec` {#kubescheduler-config-k8s-io-v1beta3-ResourceSpec} - - -**出现在:** - -- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1beta3-NodeResourcesBalancedAllocationArgs) -- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) - - -ResourceSpec 用来代表某个资源。 - - - - - - - - - - - - -
        字段描述
        name [必需]
        -string -
        - - 资源名称。 -
        weight [必需]
        -int64 -
        - - 资源权重。 -
        - -## `ScoringStrategy` {#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy} - - -**出现在:** - -- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs) - - -ScoringStrategy 为节点资源插件定义 ScoringStrategyType。 - - - - - - - - - - - - -
        字段描述
        type [必需]
        -ScoringStrategyType -
        - - type 用来选择要运行的策略。 -
        resources [必需]
        -[]ResourceSpec -
        - -

        resources 设置在评分时要考虑的资源。

        -

        默认的资源集合包含 "cpu" 和 "memory",且二者权重相同。

        -

        权重的取值范围为 1 到 100。

        -

        当权重未设置或者显式设置为 0 时,意味着使用默认值 1。

        -
        requestedToCapacityRatio [必需]
        -RequestedToCapacityRatioParam -
- - 特定于 RequestedToCapacityRatio 策略的参数。 + 例如,某插件同时出现在 multiPoint.enabled 和 multiPoint.disabled 时, + 该插件会被启用。类似地, + 同时设置 multiPoint.disabled = '∗' 和 multiPoint.enabled = pluginA 时, + 插件 pluginA 仍然会被注册。这一设计与所有其他扩展点的配置行为是相符的。

        -## `ScoringStrategyType` {#kubescheduler-config-k8s-io-v1beta3-ScoringStrategyType} +## `PodTopologySpreadConstraintsDefaulting` {#kubescheduler-config-k8s-io-v1beta3-PodTopologySpreadConstraintsDefaulting} -(`string` 数据类型的别名) +(`string` 类型的别名) **出现在:** -- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) +- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta3-PodTopologySpreadArgs) -ScoringStrategyType 是 NodeResourcesFit 插件所使用的的评分策略类型。 +

        PodTopologySpreadConstraintsDefaulting +定义如何为 PodTopologySpread 插件设置默认的约束。

        -## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1beta3-UtilizationShapePoint} +## `RequestedToCapacityRatioParam` {#kubescheduler-config-k8s-io-v1beta3-RequestedToCapacityRatioParam} **出现在:** -- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta3-VolumeBindingArgs) -- [RequestedToCapacityRatioParam](#kubescheduler-config-k8s-io-v1beta3-RequestedToCapacityRatioParam) +- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) -UtilizationShapePoint 代表的是优先级函数曲线中的一个评分点。 +

        RequestedToCapacityRatioParam 结构定义 RequestedToCapacityRatio 的参数。

        - - - -
        字段描述
        utilization [必需]
        -int32 -
        - - 利用率(x 轴)。合法值为 0 到 100。完全被利用的节点映射到 100。 -
        score [必需]
        -int32 +
        shape [必需]
        +[]UtilizationShapePoint
        - 分配给指定利用率的分值(y 轴)。合法值为 0 到 10。 +

        shape 是一个定义评分函数曲线的计分点的列表。

        -## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} +## `ResourceSpec` {#kubescheduler-config-k8s-io-v1beta3-ResourceSpec} **出现在:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1beta3-NodeResourcesBalancedAllocationArgs) +- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) -ClientConnectionConfiguration 中包含用来构造一个客户端所需的细节。 +

        ResourceSpec 用来代表某个资源。

        - - - - - - - - - - -
        字段描述
        kubeconfig [必需]
        -string -
        - - 此字段为指向某 KubeConfig 文件的路径。 -
        acceptContentTypes [必需]
        -string -
        - - acceptContentTypes 定义的是客户端与服务器建立连接时要发送的 -Accept 头部;这里的设置值会覆盖默认值 "application/json"。 -此字段会影响某特定客户端与服务器的所有连接。 -
        contentType [必需]
        +
        name [必需]
        string
        - contentType 包含的是此客户端向服务器发送数据时使用的 -内容类型(Content Type)。 -
        qps [必需]
        -float32 -
        - - qps 控制的是此连接上每秒可以发送的查询个数。 +

        资源名称。

        burst [必需]
        -int32 +
        weight [必需]
        +int64
        - burst 允许在客户端超出其速率限制时可以累积的额外查询个数。 +

        资源权重。

        -## `DebuggingConfiguration` {#DebuggingConfiguration} +## `ScoringStrategy` {#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy} **出现在:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs) -DebuggingConfiguration 保存与调试功能相关的配置。 +

        ScoringStrategy 为节点资源插件定义 ScoringStrategyType。

        - - - -
        字段描述
        enableProfiling [必需]
        -bool +
        type [必需]
        +ScoringStrategyType
        - 此字段允许通过 Web 接口 host:port/debug/pprof/ 执行性能分析。 +

        type 用来选择要运行的策略。

        enableContentionProfiling [必需]
        -bool +
        resources [必需]
        +[]ResourceSpec
        - 此字段在 enableProfiling 为 true 时允许执行锁竞争分析。 +

        resources 设置在评分时要考虑的资源。

        +

        默认的资源集合包含 "cpu" 和 "memory",且二者权重相同。

        +

        权重的取值范围为 1 到 100。

        +

        当权重未设置或者显式设置为 0 时,意味着使用默认值 1。

        - -## `FormatOptions` {#FormatOptions} - - - - -FormatOptions 中包含不同日志格式的配置选项。 - - - - - -
        字段描述
        json [必需]
        -JSONOptions +
        requestedToCapacityRatio [必需]
        +RequestedToCapacityRatioParam
        - [实验特性] json 字段包含为 "json" 日志格式提供的配置选项。 +

        特定于 RequestedToCapacityRatio 策略的参数。

        -## `JSONOptions` {#JSONOptions} +## `ScoringStrategyType` {#kubescheduler-config-k8s-io-v1beta3-ScoringStrategyType} +(`string` 数据类型的别名) + **出现在:** -- [FormatOptions](#FormatOptions) +- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) -JSONOptions 包含为 "json" 日志格式所设置的配置选项。 - - - - - - - - - - - - -
        字段描述
        splitStream [必需]
        -bool -
        - - [实验特性] 此字段将错误信息重定向到标准错误输出(stderr),将提示消息 -重定向到标准输出(stdout),并且支持缓存。默认配置为将二者都输出到 -标准输出(stdout),且不提供缓存。 -
        infoBufferSize [必需]
        -k8s.io/apimachinery/pkg/api/resource.QuantityValue -
        - - [实验特性] infoBufferSize 用来在分离数据流场景是设置提示 -信息数据流的大小。默认值为 0,意味着禁止缓存。 -
        +

ScoringStrategyType 是 NodeResourcesFit 插件所使用的评分策略类型。

        -## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} +## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1beta3-UtilizationShapePoint} **出现在:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta3-VolumeBindingArgs) +- [RequestedToCapacityRatioParam](#kubescheduler-config-k8s-io-v1beta3-RequestedToCapacityRatioParam) -LeaderElectionConfiguration 为能够支持领导者选举的组件定义其领导者选举 -客户端的配置。 +

        UtilizationShapePoint 代表的是优先级函数曲线中的一个评分点。

        - - - - - - - - - - - - - - - - -
        字段描述
        leaderElect [必需]
        -bool -
        - - leaderElect 启用领导者选举客户端,从而在进入主循环执行之前 -先要获得领导者角色。当运行多副本组件时启用此功能有助于提高可用性。 -
        leaseDuration [必需]
        -meta/v1.Duration -
        - - leaseDuration 是非领导角色候选者在观察到需要领导席位更新时 -要等待的时间;只有经过所设置时长才可以尝试去获得一个仍处于领导状态但需要 -被刷新的席位。这里的设置值本质上意味着某个领导者在被另一个候选者替换掉 -之前可以停止运行的最长时长。只有当启用了领导者选举时此字段有意义。 -
        renewDeadline [必需]
        -meta/v1.Duration -
        - - renewDeadline 设置的是当前领导者在停止扮演领导角色之前 -需要刷新领导状态的时间间隔。此值必须小于或等于租约期限的长度。 -只有到启用了领导者选举时此字段才有意义。 -
        retryPeriod [必需]
        -meta/v1.Duration -
        - - retryPeriod 是客户端在连续两次尝试获得或者刷新领导状态 -之间需要等待的时长。只有当启用了领导者选举时此字段才有意义。 -
        resourceLock [必需]
        -string -
        - - 此字段给出在领导者选举期间要作为锁来使用的资源对象类型。 -
        resourceName [必需]
        -string +
        utilization [必需]
        +int32
        - 此字段给出在领导者选举期间要作为锁来使用的资源对象名称。 +

        利用率(x 轴)。合法值为 0 到 100。完全被利用的节点映射到 100。

        resourceNamespace [必需]
        -string +
        score [必需]
        +int32
        - 此字段给出在领导者选举期间要作为锁来使用的资源对象所在名字空间。 +

        分配给指定利用率的分值(y 轴)。合法值为 0 到 10。

        - -## `VModuleConfiguration` {#VModuleConfiguration} - - - -(`[]k8s.io/component-base/config/v1alpha1.VModuleItem` 的别名) - - -VModuleConfiguration 是一组文件名(通配符)及其对应的日志详尽程度阈值。 - diff --git a/content/zh/docs/reference/config-api/kubeadm-config.v1beta2.md b/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta2.md similarity index 94% rename from content/zh/docs/reference/config-api/kubeadm-config.v1beta2.md rename to content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta2.md index a1a8bee01c16f..bc37e362a7e16 100644 --- a/content/zh/docs/reference/config-api/kubeadm-config.v1beta2.md +++ b/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta2.md @@ -292,7 +292,7 @@ https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration。

        criSocket: "/var/run/dockershim.sock" taints: - key: "kubeadmNode" - value: "master" + value: "someValue" effect: "NoSchedule" kubeletExtraArgs: v: 4 @@ -445,7 +445,7 @@ node only (e.g. the node IP).

        etcd 中包含 etcd 服务的配置。

        kind
        string
        ClusterStatus
        apiEndpoints [必需]
        -map[string]APIEndpoint +map[string]github.com/tengqm/kubeconfig/config/kubeadm/v1beta2.APIEndpoint
        - bindPort 设置 API 服务器要绑定到的安全端口。默认值为 6443。 +

        bindPort 设置 API 服务器要绑定到的安全端口。默认值为 6443。

        ControlPlaneComponent [必需]
        ControlPlaneComponent
        ControlPlaneComponent 结构的字段被嵌入到此类型中) - 无描述 + + +(ControlPlaneComponent 结构的字段被嵌入到此类型中) + + + 无描述 +
        certSANs [必需]
        []string @@ -875,7 +884,7 @@ signing certificate.

        timeoutForControlPlane [必需]
        -meta/v1.Duration +meta/v1.Duration
        ttl [必需]
        -meta/v1.Duration +meta/v1.Duration
        - -

        ttl 定义此令牌的声明周期。默认为 24h。 +

ttl 定义此令牌的生命周期。默认为 '24h'。 expires 和 ttl 是互斥的。

        - apiServerEndpoint

        为 API 服务器的 IP 地址或者域名,从该端点可以获得集群信息。 + apiServerEndpoint 为 API 服务器的 IP 地址或者域名,从该端点可以获得集群信息。

        @@ -1135,9 +1143,10 @@ without leading dash(es).

        - extraVolumes 是一组额外的主机卷,需要挂载到控制面组件中。 + extraVolumes 是一组额外被挂载到控制面组件中的主机卷。

        ImageMeta [必需]
        ImageMeta
        ImageMeta 的成员被内嵌到此类型中)。 + + +(ImageMeta 的成员被内嵌到此类型中)。

        tlsBootstrapToken 是 TLS 启动引导过程中使用的令牌。 如果设置了 bootstrapToken,则此字段默认值为 .bootstrapToken.token, @@ -1276,7 +1288,7 @@ does not contain any other authentication information

        timeout [必需]
        -meta/v1.Duration +meta/v1.Duration

        @@ -1368,7 +1380,7 @@ kubeadm 不清楚证书文件的存放位置,因此必须单独提供证书信

        endpoints 包含一组 etcd 成员的列表。

        - -

        name 为卷在 Pod 模板中的名称。

        + +

        name 字段为卷在 Pod 模板中的名称。

        hostPath [必需]
        @@ -1485,8 +1497,8 @@ file from which to load cluster information.

        string
        - -

        mountPathhostPath 在 Pod 内挂载的路径。

        + +

        mountPath 是 hostPath 在 Pod 内挂载的路径。

        readOnly [必需]
        @@ -1501,8 +1513,8 @@ file from which to load cluster information.

        core/v1.HostPathType
        - -

        pathTypehostPath 的类型。

        + +

        pathType 是 hostPath 的类型。

        +If not set, the imageRepository defined in ClusterConfiguration will be used.

        imageRepository 设置镜像拉取所用的容器仓库。 若未设置,则使用 ClusterConfiguration 中的 imageRepository

        - +

        imageTag 允许用户设置镜像的标签。 如果设置了此字段,则 kubeadm 不再在集群升级时自动更改组件的版本。

        ImageMeta [必需]
        ImageMeta
        ImageMeta 结构的字段被嵌入到此类型中。) + + +(ImageMeta 结构的字段被嵌入到此类型中。) @@ -1642,11 +1658,11 @@ Defaults to "/var/lib/etcd".

        extraArgs 是为 etcd 可执行文件提供的额外参数,用于在静态 -Pod 中运行 etcd。映射中的每一个键对应命令行上的一个标志参数,只是去掉了前置的连字符。

        +pod 中运行 etcd。映射中的每一个键对应命令行上的一个标志参数,只是去掉了前置的连字符。

        serverCertSANs [必需]
        @@ -1654,7 +1670,7 @@ Pod 中运行 etcd。映射中的每一个键对应命令行上的一个标志

        serverCertSANs 为 etcd 服务器的签名证书设置额外的主体替代名 @@ -1699,9 +1715,9 @@ signing certificate.

        -

        serviceSubnet 是 Kubernetes 服务所使用的的子网。 +

serviceSubnet 是 Kubernetes 服务所使用的子网。 默认值为 "10.96.0.0/12"。

        - -

        dnsDomain 是 Kubernetes 服务所使用的的 DNS 域名。 + +

dnsDomain 是 Kubernetes 服务所使用的 DNS 域名。 默认值为 "cluster.local"。

        -

        name 是 Node API 对象的 .metadata.name 字段值; +

        name 是 Node API 对象的 .Metadata.Name 字段值; 该 API 对象会在此 kubeadm initkubeadm join 操作期间创建。 在提交给 API 服务器的 kubelet 客户端证书中,此字段也用作其 CommonName。 如果未指定则默认为节点的主机名。

        @@ -1768,29 +1784,28 @@ Defaults to the hostname of the node if not provided.

        criSocket 用来读取容器运行时的信息。 -此信息会被以注解的方式添加到 Node API 对象至上,用于后续用途。

        +此信息会被以注解的方式添加到 Node API 对象之上,用于后续用途。

        taints [必需]
        []core/v1.Taint
        - +

        tains 设定 Node API 对象被注册时要附带的污点。 -若未设置此字段(即字段值为 null), 在 kubeadm init 期间,节点与控制面之间的通信。 -默认值为污点默认设置为 taints: ["node-role.kubernetes.io/master:""]。 -如果你不希望为控制面节点设置污点,可以在 YAML 中将此字段设置为空的列表,即 -taints: []。 此字段仅用在 Node 注册期间。

+若未设置此字段(即字段值为 null),则在 kubeadm init 期间,默认为控制平面节点添加控制平面污点。 +如果你不希望为控制平面节点设置污点,可以将此字段设置为空列表(即 YAML 文件中的 taints: []), +此字段仅用于节点注册。

        kubeletExtraArgs [必需]
        diff --git a/content/zh/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md similarity index 95% rename from content/zh/docs/reference/config-api/kubeadm-config.v1beta3.md rename to content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md index c51bf4a3ef03b..ef491a424eeea 100644 --- a/content/zh/docs/reference/config-api/kubeadm-config.v1beta3.md +++ b/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md @@ -78,7 +78,7 @@ BootstrapToken∗ 结构。
        • kubeadm v1.15.x 及更新的版本可以用来从 v1beta1 迁移到 v1beta2 版本;
        • kubeadm v1.22.x 及更新的版本不再支持 v1beta1 和更老的 API,但可以用来 -从 v1beta2 迁移到 v1beta3。

        类型 `ClusterConfiguration` 用来定制集群范围的设置,具体包括以下设置:

          -
        • networking:其中包含集群的网络拓扑配置。使用这一部分可以定制 Pod 的 -子网或者 Service 的子网。
        • +
        • networking:其中包含集群的网络拓扑配置。使用这一部分可以定制 Pod 的 +子网或者 Service 的子网。

          +
        • -
        • etcd:etcd 数据库的配置。例如使用这个部分可以定制本地 etcd 或者配置 API 服务器 -使用一个外部的 etcd 集群。
        • -
        • kube-apiserverkube-schedulerkube-controller-manager -配置:这些部分可以通过添加定制的设置或者重载 kubeadm 的默认设置来定制控制面组件。
        • +
        • +

          etcd:etcd 数据库的配置。例如使用这个部分可以定制本地 etcd 或者配置 API 服务器 +使用一个外部的 etcd 集群。

          +
        • +
        • +

          kube-apiserverkube-schedulerkube-controller-manager +配置:这些部分可以通过添加定制的设置或者重载 kubeadm 的默认设置来定制控制面组件。

          +
        apiVersion: kubeproxy.config.k8s.io/v1alpha1
        @@ -308,7 +313,7 @@ https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration。

        criSocket: "/var/run/dockershim.sock" taints: - key: "kubeadmNode" - value: "master" + value: "someValue" effect: "NoSchedule" kubeletExtraArgs: v: 4 @@ -471,9 +476,9 @@ node only (e.g. the node ip).

        - networking 字段包含集群的网络拓扑配置。 +

        networking 字段包含集群的网络拓扑配置。

        kubernetesVersion
        @@ -495,8 +500,8 @@ node only (e.g. the node ip).

        It can be a valid IP address or a RFC-1123 DNS subdomain, both with optional TCP port. In case the controlPlaneEndpoint is not specified, the advertiseAddress + bindPort are used; in case the controlPlaneEndpoint is specified but without a TCP port, -the `bindPort` is used. -Possible usages are:

        +the bindPort is used. +Possible usages are:

        -->

        controlPlaneEndpoint 为控制面设置一个稳定的 IP 地址或 DNS 名称。 取值可以是一个合法的 IP 地址或者 RFC-1123 形式的 DNS 子域名,二者均可以带一个 @@ -768,7 +773,7 @@ Defaults to "/etc/kubernetes/pki/ca.crt".

        discovery [必需]-->[必需]
        +
        discovery [必需]
        Discovery
        @@ -1032,7 +1037,7 @@ impersonate the control-plane. -ControlPlaneComponent 中包含对集群中所有控制面组件都适用的设置。 +

        ControlPlaneComponent 中包含对集群中所有控制面组件都适用的设置。

        @@ -1080,16 +1085,20 @@ without leading dash(es). -DNS 结构定义要在集群中使用的 DNS 插件。 +

        DNS 结构定义要在集群中使用的 DNS 插件。

        字段描述
        - - @@ -57,77 +92,47 @@ Default: true--> @@ -136,17 +141,10 @@ Default: "20s" @@ -156,16 +154,10 @@ Default: "" @@ -177,15 +169,10 @@ Default: nil

        address 是 kubelet 提供服务所用的 IP 地址(设置为 0.0.0.0 使用所有网络接口提供服务)。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰与 kubelet 服务器交互的组件。

        默认值:"0.0.0.0"

        @@ -197,15 +184,10 @@ Default: "0.0.0.0"

        port 是 kubelet 用来提供服务所使用的端口号。 这一端口号必须介于 1 到 65535 之间,包含 1 和 65535。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰与 kubelet 服务器交互的组件。

        默认值:10250

        @@ -218,16 +200,11 @@ Default: 10250 no authentication/authorization. The port number must be between 1 and 65535, inclusive. Setting this field to 0 disables the read-only service. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it may disrupt components that interact with the Kubelet server. Default: 0 (disabled) -->

        readOnlyPort 是 kubelet 用来提供服务所使用的只读端口号。 此端口上的服务不支持身份认证或鉴权。这一端口号必须介于 1 到 65535 之间, 包含 1 和 65535。将此字段设置为 0 会禁用只读服务。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰与 kubelet 服务器交互的组件。

        默认值:0(禁用)

        @@ -241,17 +218,12 @@ if any, concatenated after server cert). If tlsCertFile and tlsPrivateKeyFile are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to the Kubelet's --cert-dir flag. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it may disrupt components that interact with the Kubelet server. Default:"quot; -->

        tlsCertFile是包含 HTTPS 所需要的 x509 证书的文件 (如果有 CA 证书,会串接到服务器证书之后)。如果tlsCertFiletlsPrivateKeyFile都没有设置,则系统会为节点的公开地址生成自签名的证书和私钥, 并将其保存到 kubelet --cert-dir参数所指定的目录下。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰与 kubelet 服务器交互的组件。

        默认值:""

        @@ -261,15 +233,10 @@ Default:"quot; @@ -280,15 +247,10 @@ Default: "" @@ -299,15 +261,10 @@ Default: nil @@ -319,17 +276,10 @@ Default: ""

        rotateCertificates用来启用客户端证书轮换。kubelet 会调用 certificates.k8s.io API 来请求新的证书。需要有一个批复人批准证书签名请求。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑禁用此行为时可能导致 kubelet 无法在当前证书过期时向 -API 服务器执行身份认证。

        默认值:false @@ -343,20 +293,12 @@ signing a serving certificate, the Kubelet will request a certificate from the 'certificates.k8s.io' API. This requires an approver to approve the certificate signing requests (CSR). The RotateKubeletServerCertificate feature must be enabled when setting this field. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -disabling it will stop the renewal of Kubelet server certificates, which can -disrupt components that interact with the Kubelet server in the long term, -due to certificate expiration. Default: false -->

        serverTLSBootstrap用来启用服务器证书引导。系统不再使用自签名的服务证书, kubelet 会调用certificates.k8s.io API 来请求证书。 需要有一个批复人来批准证书签名请求(CSR)。 设置此字段时,RotateKubeletServerCertificate特性必须被启用。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑禁用此特性会导致 kubelet 的服务器证书无法被续约, -长期上这会干扰到与 kubelet 服务器交互的组件,因为证书会过期。

        默认值:false

        @@ -366,26 +308,21 @@ kubelet 会调用certificates.k8s.io API 来请求证书。 @@ -395,24 +332,19 @@ Defaults: @@ -424,16 +356,10 @@ Defaults:

        registryPullQPS是每秒钟可以执行的镜像仓库拉取操作限值。 此值必须不能为负数。将其设置为 0 表示没有限值。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这类更新可能会因为镜像拉取所产生的流量变化而导致集群可扩缩能力问题。

        默认值:5 @@ -446,17 +372,11 @@ Default: 5 pulls to burst to this number, while still not exceeding registryPullQPS. The value must not be a negative number. Only used if registryPullQPS is greater than 0. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it may impact scalability by changing the amount of traffic produced -by image pulls. Default: 10 -->

        registryBurst是突发性镜像拉取的上限值,允许镜像拉取临时上升到所指定数量, 不过仍然不超过registryPullQPS所设置的约束。此值必须是非负值。 只有registryPullQPS参数值大于 0 时才会使用此设置。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能因为镜像拉取所造成的流量变化,导致集群可扩缩能力受影响。

        默认值:10

        @@ -467,16 +387,10 @@ Default: 10 @@ -485,21 +399,16 @@ Default: 5 int32 @@ -511,16 +420,11 @@ Default: 10

        enableDebuggingHandlers启用服务器上用来访问日志、 在本地运行容器和命令的端点,包括execattachlogsportforward等功能。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑禁用此能力可能干扰到与 kubelet 服务器交互的组件。

        默认值:true

        @@ -529,16 +433,12 @@ Default: true bool @@ -547,17 +447,13 @@ Default: false int32 @@ -566,15 +462,11 @@ Default: 10248 string @@ -585,15 +477,10 @@ Default: "127.0.0.1" @@ -605,14 +492,10 @@ Default: -999

        clusterDomain是集群的 DNS 域名。如果设置了此字段,kubelet 会配置所有容器,使之在搜索主机的搜索域的同时也搜索这里指定的 DNS 域。

        -

        DynamicKubeletConfig (已弃用,默认为关闭): -不建议动态更新此字段,因为这一设置值要与整个集群中的其他组件保持一致。

        默认值:""

        @@ -624,43 +507,30 @@ Default: ""

        clusterDNS是集群 DNS 服务器的 IP 地址的列表。 如果设置了,kubelet 将会配置所有容器使用这里的 IP 地址而不是宿主系统上的 DNS 服务器来完成 DNS 解析。 -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑变更仅会对更新后创建的 Pod 起作用。建议在更改此字段之前腾空节点。

        默认值:nil

        @@ -769,19 +617,12 @@ image garbage collection is always run. The percent is calculated by dividing this field value by 100, so this field must be between 0 and 100, inclusive. When specified, the value must be greater than imageGCLowThresholdPercent. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it may trigger or delay garbage collection, and may change the image overhead -on the node. Default: 85 -->

        imageGCHighThresholdPercent所给的是镜像的磁盘用量百分数, 一旦镜像用量超过此阈值,则镜像垃圾收集会一直运行。百分比是用这里的值除以 100 得到的,所以此字段取值必须介于 0 和 100 之间,包括 0 和 100。如果设置了此字段, 则取值必须大于imageGCLowThresholdPercent取值。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这种变更可能触发垃圾收集或者延迟垃圾收集, -并且可能影响节点上镜像的额外开销。

        默认值:85

        @@ -795,37 +636,25 @@ image garbage collection is never run. Lowest disk usage to garbage collect to. The percent is calculated by dividing this field value by 100, so the field value must be between 0 and 100, inclusive. When specified, the value must be less than imageGCHighThresholdPercent. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it may trigger or delay garbage collection, and may change the image overhead -on the node. Default: 80 -->

        imageGCLowThresholdPercent所给的是镜像的磁盘用量百分数, 镜像用量低于此阈值时不会执行镜像垃圾收集操作。垃圾收集操作也将此作为最低磁盘用量边界。 百分比是用这里的值除以 100 得到的,所以此字段取值必须介于 0 和 100 之间,包括 0 和 100。 如果设置了此字段,则取值必须小于imageGCHighThresholdPercent取值。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这种变更可能触发垃圾收集或者延迟垃圾收集, -并且可能影响节点上镜像的额外开销。

        默认值:80

        @@ -835,13 +664,9 @@ Default: "1m" @@ -854,15 +679,11 @@ Default: "" all non-kernel processes that are not already in a container. Empty for no container. Rolling back the flag requires a reboot. The cgroupRoot must be specified if this field is not empty. -Dynamic Kubelet Config (deprecated): This field should not be updated without a full node -reboot. It is safest to keep this value the same as the local config. Default: "&qout; -->

        systemCgroups是用来放置那些未被容器化的、非内核的进程的控制组 -(CGroup)的绝对名称。设置为空字符串表示没有这类容器。回滚此字段设置需要重启节点。 +(CGroup)的绝对名称。设置为空字符串表示没有这类容器。回滚此字段设置需要重启节点。 当此字段非空时,必须设置cgroupRoot字段。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        默认值:""

        @@ -873,15 +694,9 @@ Default: "&qout; @@ -892,15 +707,11 @@ Default: ""

        cgroupsPerQOS用来启用基于 QoS 的控制组(CGroup)层次结构: 顶层的控制组用于不同 QoS 类,所有BurstableBestEffort Pod 都会被放置到对应的顶级 QoS 控制组下。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        默认值:true

        @@ -911,14 +722,10 @@ Default: true @@ -929,14 +736,10 @@ Default: "cgroupfs" @@ -947,36 +750,26 @@ Default: "None" @@ -987,14 +780,10 @@ Default: "10s" @@ -1016,8 +805,6 @@ resources; of CPU and device resources.

        Policies other than "none" require the TopologyManager feature gate to be enabled. -Dynamic Kubelet Config (deprecated): This field should not be updated without a full node -reboot. It is safest to keep this value the same as the local config. Default: "none"

        -->

        topologyManagerPolicy是要使用的拓扑管理器策略名称。合法值包括:

        @@ -1028,8 +815,6 @@ Default: "none"

      • single-numa-node:kubelet 仅允许在 CPU 和设备资源上对齐到同一 NUMA 节点的 Pod。
      • 如果策略不是 "none",则要求启用TopologyManager特性门控。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        默认值:"none"

        @@ -1068,34 +853,25 @@ the minimum percentage of a resource reserved for exclusive use by the guaranteed QoS tier. Currently supported resources: "memory" Requires the QOSReserved feature gate to be enabled. -Dynamic Kubelet Config (deprecated): This field should not be updated without a full node -reboot. It is safest to keep this value the same as the local config. Default: nil -->

        qosReserved是一组从资源名称到百分比值的映射,用来为Guaranteed QoS 类型的负载预留供其独占使用的资源百分比。目前支持的资源为:"memory"。 需要启用QOSReserved特性门控。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        默认值:nil

        @@ -1123,15 +899,10 @@ themselves if they should try to access their own Service. Values:

        一般而言,用户必须设置--hairpin-mode=hairpin-veth才能实现发夹模式的网络地址转译 (NAT),因为混杂模式的网桥要求存在一个名为cbr0的容器网桥。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑取决于网络插件,可能需要重启节点。

        默认值:"promiscuous-bridge"

        @@ -1142,20 +913,9 @@ Default: "promiscuous-bridge" @@ -1166,15 +926,10 @@ Default: 110 @@ -1184,14 +939,9 @@ Default: "" @@ -1202,17 +952,11 @@ Default: -1 @@ -1238,37 +982,25 @@ Default: false @@ -1280,16 +1012,11 @@ Default: "100ms"

        nodeStatusMaxImages限制Node.status.images中报告的镜像数量。 此值必须大于 -2。

        注意:如果设置为 -1,则不会对镜像数量做限制;如果设置为 0,则不会返回任何镜像。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑节点状态中可能报告不同的数值。

        默认值:50

        @@ -1300,14 +1027,9 @@ Default: 50 @@ -1317,18 +1039,9 @@ Default: 1000000 @@ -1338,15 +1051,9 @@ Default: "application/vnd.kubernetes.protobuf" @@ -1357,16 +1064,10 @@ Default: 5 @@ -1379,16 +1080,11 @@ Default: 10 at a time. We recommend ∗not∗ changing the default value on nodes that run docker daemon with version < 1.9 or an Aufs storage backend. Issue #10959 has more details. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it may impact the performance of image pulls. Default: true -->

        serializeImagePulls被启用时会通知 kubelet 每次仅拉取一个镜像。 我们建议不要在所运行的 docker 守护进程版本低于 1.9、使用 aufs 存储后端的节点上更改默认值。详细信息可参见 Issue #10959。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这可能会影响镜像拉取的性能。

        默认值:true

        @@ -1400,26 +1096,21 @@ Default: true

        evictionHard是一个映射,是从信号名称到定义硬性驱逐阈值的映射。 例如:{"memory.available": "300Mi"}。 如果希望显式地禁用,可以在任意资源上将其阈值设置为 0% 或 100%。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这可能会触发或延迟 Pod 驱逐操作。

        默认值:

        -  memory.available:  "100Mi"
        -  nodefs.available:  "10%"
        -  nodefs.inodesFree: "5%"
        -  imagefs.available: "15%"
        +   memory.available:  "100Mi"
        +   nodefs.available:  "10%"
        +   nodefs.inodesFree: "5%"
        +   imagefs.available: "15%"
           
        @@ -1430,17 +1121,10 @@ Default: @@ -1451,34 +1135,24 @@ Default: nil @@ -1493,10 +1167,6 @@ effectively caps the Pod's terminationGracePeriodSeconds value during soft evict Note: Due to issue #64530, the behavior has a bug where this value currently just overrides the grace period during soft eviction, which can increase the grace period from what is set on the Pod. This bug will be fixed in a future release. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -lowering it decreases the amount of time Pods will have to gracefully clean -up before being killed during a soft eviction. Default: 0 -->

        evictionMaxPodGracePeriod是指达到软性逐出阈值而引起 Pod 终止时, @@ -1505,9 +1175,6 @@ Pod 可以获得的terminationGracePeriodSeconds

        注意:由于 Issue #64530 的原因,系统中存在一个缺陷,即此处所设置的值会在软性逐出时覆盖 Pod 的宽限期设置,从而有可能增加 Pod 上原本设置的宽限期限时长。 这个缺陷会在未来版本中修复。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑缩短此宽限期限值会导致软性逐出期间 Pod -在被杀死之前用来体面地完成清理工作可用的时间。

        默认值:0

        @@ -1520,16 +1187,11 @@ Pod 的宽限期设置,从而有可能增加 Pod 上原本设置的宽限期 which describe the minimum amount of a given resource the kubelet will reclaim when performing a pod eviction while that resource is under pressure. For example: {"imagefs.available": "2Gi"}. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it may change how well eviction can manage resource pressure. Default: nil -->

        evictionMinimumReclaim是一个映射,定义信号名称与最小回收量数值之间的关系。 最小回收量指的是资源压力较大而执行 Pod 驱逐操作时,kubelet 对给定资源的最小回收量。 例如:{"imagefs.available": "2Gi"}

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这可能会改变驱逐操作应对资源压力的效果。

        默认值:nil

        @@ -1541,20 +1203,10 @@ Default: nil

        podsPerCore设置的是每个核上 Pod 个数上限。此值不能超过maxPods。 所设值必须是非负整数。如果设置为 0,则意味着对 Pod 个数没有限制。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑变更可能导致 kubelet 重启时 Pod 无法被准入, -还可能导致Node.status.capacity.pods所报告的数值发生变化, -进而影响到将来的调度决策。增大此值也会降低性能,因为在同一个处理器核上需要运行更多的 Pod。

        默认值:0

        @@ -1566,24 +1218,15 @@ Default: 0

enableControllerAttachDetach用来允许 Attach/Detach 控制器管理调度到本节点的卷的挂接(attachment)和解除挂接(detachment), 并且禁止 kubelet 执行任何 attach/detach 操作。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑在运行中的节点上更改由哪个组件来负责卷管理时, -这一变更可能导致节点在被更新前尚未腾空时卷无法被解除挂接。 -如果 kubelet 尚未更新volumes.kubernetes.io/controller-managed-attach-detach -注解时 Pod 已经被调度到了该节点,节点上的卷也会无法解除挂接。 -一般而言,最安全的做法是将此字段设置为与本地配置相同的值。

        +

注意:kubelet 不支持对 CSI 卷执行挂接和解除挂接操作, +因此对于该用例,此选项必须为 true。

        默认值:true

        @@ -1595,18 +1238,11 @@ Default: true

        protectKernelDefaults设置为true时,会令 kubelet 在发现内核参数与预期不符时出错退出。若此字段设置为false,则 kubelet 会尝试更改内核参数以满足其预期。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑启用此设置会在内核参数与 kubelet 预期不匹配时导致 -kubelet 进入崩溃循环(Crash-Loop)状态。

        默认值:false

        @@ -1619,18 +1255,12 @@ kubelet 进入崩溃循环(Crash-Loop)状态。

        are present on host. These rules will serve as utility rules for various components, e.g. kube-proxy. The rules will be created based on iptablesMasqueradeBit and iptablesDropBit. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -disabling it will prevent the Kubelet from healing locally misconfigured iptables rules. Default: true -->

        makeIPTablesUtilChains设置为true时,相当于允许 kubelet 确保一组 iptables 规则存在于宿主机上。这些规则会为不同的组件(例如 kube-proxy) 提供工具性质的规则。它们是基于iptablesMasqueradeBitiptablesDropBit 来创建的。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑禁用此行为会导致 kubelet 无法在本地 iptables -规则出错时实现自愈。

        默认值:true

        @@ -1643,18 +1273,11 @@ Default: true Values must be within the range [0, 31]. Must be different from other mark bits. Warning: Please match the value of the corresponding parameter in kube-proxy. TODO: clean up IPTablesMasqueradeBit in kube-proxy. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it needs to be coordinated with other components, like kube-proxy, and the update -will only be effective if MakeIPTablesUtilChains is enabled. Default: 14 -->

        iptablesMasqueradeBit是 iptables fwmark 空间中用来为 SNAT 作标记的位。此值必须介于[0, 31]区间,必须与其他标记位不同。

        警告:请确保此值设置与 kube-proxy 中对应的参数设置取值相同。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑此处的变更要与其他组件(如 kube-proxy)相应的变更协调一致。 -只有当makeIPTablesUtilChains能力被启用时,这里的更新才会起作用。

        默认值:14

        @@ -1665,17 +1288,10 @@ Default: 14 @@ -1686,22 +1302,12 @@ Default: 15 @@ -1711,14 +1317,9 @@ Default: nil @@ -1739,15 +1340,10 @@ Default: true @@ -1758,14 +1354,9 @@ Default: "10Mi" @@ -1803,20 +1394,12 @@ managers are running. Valid values include:

        pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it may not be possible to increase the reserved resources, because this -requires resizing cgroups. Always look for a NodeAllocatableEnforced event -after updating this field to ensure that the update was successful. Default: nil -->

        systemReserved是一组资源名称=资源数量对, 用来描述为非 Kubernetes 组件预留的资源(例如:'cpu=200m,memory=150G')。

        -

        目前仅支持 CPU 和内存。更多细节可参见 http://kubernetes.io/zh/docs/user-guide/compute-resources。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑增加预留资源也许是不可能的,因为需要改变控制组大小。 -在更改了此字段之后,应该总是关注NodeAllocatableEnforced事件, -以确保更新是成功的。

        +

        目前仅支持 CPU 和内存。更多细节可参见 + https://kubernetes.io/zh-cn/docs/concepts/configuration/manage-resources-containers/ 。

        默认值:Nil

        @@ -1830,21 +1413,12 @@ that describe resources reserved for kubernetes system components. Currently cpu, memory and local storage for root file system are supported. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more details. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -it may not be possible to increase the reserved resources, because this -requires resizing cgroups. Always look for a NodeAllocatableEnforced event -after updating this field to ensure that the update was successful. Default: nil -->

        kubeReserved是一组资源名称=资源数量对, 用来描述为 Kubernetes 系统组件预留的资源(例如:'cpu=200m,memory=150G')。 目前支持 CPU、内存和根文件系统的本地存储。 更多细节可参见 https://kubernetes.io/zh/docs/concepts/configuration/manage-resources-containers/。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑增加预留资源也许是不可能的,因为需要改变控制组大小。 -在更改了此字段之后,应该总是关注NodeAllocatableEnforced事件, -以确保更新是成功的。

        默认值:Nil

        @@ -1893,18 +1467,14 @@ Default: "" @@ -1916,18 +1486,14 @@ Default: "" @@ -1945,13 +1511,6 @@ When kube-reserved is in the list, kubeReservedCgroup must be speci This field is supported only when cgroupsPerQOS is set to true. Refer to Node Allocatable for more information. -If DynamicKubeletConfig (deprecated; default off) is on, when -dynamically updating this field, consider that -removing enforcements may reduce the stability of the node. Alternatively, adding -enforcements may reduce the stability of components which were using more than -the reserved amount of resources; for example, enforcing kube-reserved may cause -Kubelets to OOM if it uses more than the reserved resources, and enforcing system-reserved -may cause system daemons to OOM if they use more than the reserved resources. Default: ["pods"] -->

        此标志设置 kubelet 需要执行的各类节点可分配资源策略。此字段接受一组选项列表。 @@ -1963,11 +1522,6 @@ Default: ["pods"]

        这个字段只有在cgroupsPerQOS被设置为true才被支持。

        参阅Node Allocatable 了解进一步的信息。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑去掉此机制可能会降低节点稳定性。 -反之,添加此机制可能会降低原来使用资源超出预留量的组件的稳定性。 -例如,实施 kube-reserved 在 kubelet 使用资源超出预留量时可能导致 kubelet 发生 OOM, -而实施 system-reserved 机制可能导致使用资源超出预留量的系统守护进程发生 OOM。

        默认值:["pods"]

        @@ -1996,14 +1550,9 @@ Default: [] @@ -2014,15 +1563,10 @@ Default: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/" @@ -2034,15 +1578,10 @@ Default: "quot;

        kernelMemcgNotification字段如果被设置了,会告知 kubelet 集成内核的 memcg 通知机制来确定是否超出内存逐出阈值,而不是使用轮询机制来判定。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这样做可能影响到 kubelet 与内核的交互方式。

        默认值:false

        @@ -2078,7 +1617,7 @@ Default: true
        字段描述
        ImageMeta [必需]-->[必需]
        +
        ImageMeta [必需]
        ImageMeta
        ImageMeta 的成员被内嵌到此类型中)。 + + +(ImageMeta 的成员被内嵌到此类型中)。

        -Discovery 设置 TLS 启动引导过程中 kubelet 要使用的配置选项。 +

        Discovery 设置 TLS 启动引导过程中 kubelet 要使用的配置选项。

        @@ -1194,7 +1203,7 @@ does not contain any other authentication information -Etcd 包含用来描述 etcd 配置的元素。 +

        Etcd 包含用来描述 etcd 配置的元素。

        字段描述
        @@ -1244,14 +1253,15 @@ Etcd 包含用来描述 etcd 配置的元素。 ExternalEtcd describes an external etcd cluster. Kubeadm has no knowledge of where certificate files live and they must be supplied. --> -ExternalEtcd 描述外部 etcd 集群。 +

        ExternalEtcd 描述外部 etcd 集群。 kubeadm 不清楚证书文件的存放位置,因此必须单独提供证书信息。 +

        字段描述
        - - - @@ -1641,15 +1655,17 @@ This information will be annotated to the Node API object, for later re-use[]core/v1.Taint @@ -58,9 +56,9 @@ FormatOptions 包含为不同类型日志格式提供的选项。 - [FormatOptions](#FormatOptions) -JSONOptions 包含用于 "json" 日志格式的选项。 +JSONOptions 包含用于 "json" 日志格式的选项。
        字段描述
        endpoints [必需]-->[必需]
        +
        endpoints [必需]
        []string
        @@ -1261,7 +1271,7 @@ kubeadm 不清楚证书文件的存放位置,因此必须单独提供证书信

        endpoints 包含一组 etcd 成员的列表。

        caFile [必需]-->[必需]
        +
        caFile [必需]
        string
        @@ -1498,7 +1508,11 @@ Secret 中的证书的秘钥。对应的加密秘钥在 InitConfiguration 结构
        ImageMeta [必需]
        ImageMeta
        ImageMeta 结构的字段被嵌入到此类型中。) + + +(ImageMeta 结构的字段被嵌入到此类型中。)

        ImageMeta 允许用户为 etcd 定制要使用的容器。

        - +

        tains 设定 Node API 对象被注册时要附带的污点。 -若未设置此字段(即字段值为 null), 在 kubeadm init 期间,节点与控制面之间的通信。默认值为污点默认设置为 taints: ["node-role.kubernetes.io/master:""]。 -如果你不希望为控制面节点设置污点,可以在 YAML 中将此字段设置为空的列表,即 -taints: []。 此字段仅用在 Node 注册期间。

+若未设置此字段(即字段值为 null),则在 kubeadm init 期间,默认为控制平面节点添加控制平面污点。 +如果你不希望为控制平面节点设置污点,可以将此字段设置为空列表(即 YAML 文件中的 taints: []), +此字段仅用于节点注册。
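作为示意,下面的 kubeadm 配置草稿展示了如何将 taints 显式设置为空列表,从而避免为控制平面节点添加默认污点(API 版本以 v1beta3 为例):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  taints: []   # 显式设置为空列表,不为本节点添加任何污点
```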

        kubeletExtraArgs
        @@ -1782,7 +1798,7 @@ for, so other administrators can know its purpose.
        ttl
        -meta/v1.Duration +meta/v1.Duration
        +dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

        -->

        expires 设置此令牌过期的时间戳。默认为在运行时基于 ttl 来决定。 expiresttl 是互斥的。

        diff --git a/content/zh/docs/reference/config-api/kubelet-config.v1alpha1.md b/content/zh-cn/docs/reference/config-api/kubelet-config.v1alpha1.md similarity index 87% rename from content/zh/docs/reference/config-api/kubelet-config.v1alpha1.md rename to content/zh-cn/docs/reference/config-api/kubelet-config.v1alpha1.md index ad7e4908f40e0..9a62ac5ec62a7 100644 --- a/content/zh/docs/reference/config-api/kubelet-config.v1alpha1.md +++ b/content/zh-cn/docs/reference/config-api/kubelet-config.v1alpha1.md @@ -26,8 +26,6 @@ auto_generated: true --> **出现在:** -- [LoggingConfiguration](#LoggingConfiguration) - @@ -41,8 +39,8 @@ FormatOptions 包含为不同类型日志格式提供的选项。 JSONOptions
        - - [试验特性] json 中包含 "json" 日志格式的选项。 + + [试验特性] json 中包含 "json" 日志格式的选项。
        @@ -104,8 +102,6 @@ using split streams. The default is zero, which disables buffering.--> --> **出现在:** -- [LoggingConfiguration](#LoggingConfiguration) - ## 资源类型 +- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig) - [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - [SerializedNodeConfigSource](#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource) +## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig} + + +CredentialProviderConfig 包含有关每个 exec 凭据提供者的配置信息。 +Kubelet 从磁盘上读取这些配置信息,并根据 CredentialProvider 类型启用各个提供者。 + +
        字段描述
        + + + + + + + + + + + +
        字段描述
        apiVersion
        string
        kubelet.config.k8s.io/v1beta1
        kind
        string
        CredentialProviderConfig
        providers [必需]
        +[]CredentialProvider +
        + +

        + providers 是一组凭据提供者插件,这些插件会被 kubelet 启用。 + 多个提供者可以匹配到同一镜像上,这时,来自所有提供者的凭据信息都会返回给 kubelet。 + 如果针对同一镜像调用了多个提供者,则结果会被组合起来。如果提供者返回的认证主键有重复, + 列表中先出现的提供者所返回的值将被使用。 +

        +
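下面是一个示意性的 CredentialProviderConfig 草稿;其中插件名称、matchImages 匹配模式与缓存时长均为假设的示例值:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
- name: example-credential-provider        # 假设的插件可执行文件名
  apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
  matchImages:                             # 此插件负责的镜像匹配模式(示例)
  - "*.registry.example.com"
  defaultCacheDuration: "12h"              # 凭据缓存时长(示例)
```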
        + ## `KubeletConfiguration` {#kubelet-config-k8s-io-v1beta1-KubeletConfiguration}

        enableServer 会启用 kubelet 的安全服务器。

        注意:kubelet 的不安全端口由 readOnlyPort 选项控制。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能会影响到与 kubelet 服务器交互的组件。

        默认值:true

        +Default: ""-->

        staticPodPath 是指向要运行的本地(静态)Pod 的目录, 或者指向某个静态 Pod 文件的路径。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑新路径下所给的静态 Pod 集合可能与 kubelet -启动时所看到的集合不同,而这一差别可能会扰乱节点状态。

        -

        默认值:""

        +

        默认值:""

        syncFrequency
        -meta/v1.Duration +meta/v1.Duration
        +Default: "1m"-->

        syncFrequency 是对运行中的容器和配置进行同步的最长周期。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑缩短这一同步周期可能会带来负面的性能影响, -尤其当节点上 Pod 个数增加时。相反,增加此周期长度时可能会导致 ConfigMap、 -Secret 这类资源未被及时更新。

        -

        默认值:"1m"

        +

        默认值:"1m"

        fileCheckFrequency
        -meta/v1.Duration +meta/v1.Duration
        +Default: "20s"-->

        fileCheckFrequency 是对配置文件中新数据进行检查的时间间隔值。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑缩短此时长会导致 kubelet 更为频繁地重新加载其静态 Pod 配置, -而这会带来负面的性能影响。

        -

        默认值:"20s"

        +

        默认值:"20s"

        httpCheckFrequency
        -meta/v1.Duration +meta/v1.Duration

        httpCheckFrequency 是对 HTTP 服务器上新数据进行检查的时间间隔值。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑缩短此时长会导致 kubelet 更为频繁地轮询 -staticPodURL,而这会带来负面的性能影响。

        -

        默认值:"20s"

        +

        默认值:"20s"

        staticPodURL 是访问要运行的静态 Pod 的 URL 地址。 -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,新的 URL 上包含的静态 Pod 集合可能与 kubelet -初始启动时看到的不同,而这种差异可能会扰乱节点状态。

        -

        默认值:""

        +

        默认值:""

        staticPodURLHeader是一个由字符串组成的映射表,其中包含的 HTTP 头部信息用于访问podURL

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,要考虑可能导致无法从staticPodURL -读取最新的静态 Pod 集合。

        默认值:nil

        tlsPrivateKeyFile是一个包含与tlsCertFile 证书匹配的 X509 私钥的文件。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰与 kubelet 服务器交互的组件。

        默认值:""

        tlsCipherSuites是一个字符串列表,其中包含服务器所接受的加密包名称。 列表中的每个值来自于tls包中定义的常数(https://golang.org/pkg/crypto/tls/#pkg-constants)。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰到与 kubelet 服务器交互的组件。

        默认值:nil

        tlsMinVersion给出所支持的最小 TLS 版本。 字段取值来自于tls包中的常数定义(https://golang.org/pkg/crypto/tls/#pkg-constants)。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰到与 kubelet 服务器交互的组件。

        默认值:""

        authorization设置发送给 kubelet 服务器的请求是如何进行身份认证的。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰与 kubelet 服务器交互的组件。

        默认值:

        
           anonymous:
             enabled: false
           webhook:
             enabled: true
        -    cacheTTL: "2m"
        +    cacheTTL: "2m"
           

        authorization设置发送给 kubelet 服务器的请求是如何进行鉴权的。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能会干扰到与 kubelet 服务器交互的组件。

        默认值:

        
           mode: Webhook
           webhook:
        -    cacheAuthorizedTTL: "5m"
        -    cacheUnauthorizedTTL: "30s"
        +    cacheAuthorizedTTL: "5m"
        +    cacheUnauthorizedTTL: "30s"
           

        eventRecordQPS设置每秒钟可创建的事件个数上限。如果此值为 0, 则表示没有限制。此值不能设置为负数。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能因为生成事件所造成的流量变化,导致集群可扩缩能力受影响。

        默认值:5

        -

        eventBurst是突发性事件创建的上限值,允许事件创建临时上升到所指定数量, 不过仍然不超过eventRecordQPS所设置的约束。此值必须是非负值, -且只有eventRecordQPS大于 0 时才会使用此设置。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能因为事件创建所造成的流量变化,导致集群可扩缩能力受影响。

        +且只有eventRecordQPS > 0 时才会使用此设置。

        默认值:10

        -

        enableContentionProfiling用于启用锁竞争性能分析, 仅用于enableDebuggingHandlerstrue的场合。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑启用此分析可能隐含着一定的性能影响。

        默认值:false

        -

        healthzPort是本地主机上提供healthz端点的端口 (设置值为 0 时表示禁止)。合法值介于 1 和 65535 之间。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰到监控 kubelet 健康状况的组件。

        默认值:10248

        -

        healthzBindAddresshealthz服务器用来提供服务的 IP 地址。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能影响到监测 kubelet 健康状况的组件。

        默认值:"127.0.0.1"

        oomScoreAdj 是为 kubelet 进程设置的oom-score-adj值。 所设置的取值要在 [-1000, 1000] 范围之内。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能影响到内存压力较大时节点的稳定性。

        默认值:-999

        streamingConnectionIdleTimeout
        -meta/v1.Duration +meta/v1.Duration

        streamingConnectionIdleTimeout设置流式连接在被自动关闭之前可以空闲的最长时间。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能影响到依赖于通过与 kubelet -服务器间流式连接来接受非频繁更新事件的组件。

        默认值:"4h"

        nodeStatusUpdateFrequency
        -meta/v1.Duration +meta/v1.Duration

        nodeStatusUpdateFrequency是 kubelet 计算节点状态的频率。 如果未启用节点租约特性,这一字段设置的也是 kubelet 向控制面投递节点状态的频率。

        注意:如果节点租约特性未被启用,更改此参数设置时要非常小心, 所设置的参数值必须与节点控制器的nodeMonitorGracePeriod协同。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑变更可能影响节点的可扩缩性。还要注意节点控制器的 -nodeMonitorGracePeriod必须设置为N∗nodeStatusUpdateFrequency, -其中N是节点控制器标记节点不健康之前执行重试的次数。

        默认值:"10s"

        nodeStatusReportFrequency
        -meta/v1.Duration +meta/v1.Duration

        nodeLeaseDurationSeconds是 kubelet 会在其对应的 Lease 对象上设置的时长值。
        @@ -735,27 +591,19 @@ Default: 40

        如果租约过期,则节点可被视作不健康。根据 KEP-0009 约定,目前的租约每 10 秒钟续约一次。 在将来,租约的续约时间间隔可能会根据租约的时长来设置。

        此字段的取值必须大于零。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑缩短租约期限可能降低节点对那些暂时导致 kubelet -无法续约的问题的容忍度(例如,时延很短的网络问题)。

        默认值:40

        imageMinimumGCAge
        -meta/v1.Duration +meta/v1.Duration

        imageMinimumGCAge是对未使用镜像进行垃圾搜集之前允许其存在的时长。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这种变更可能触发垃圾收集或者延迟垃圾收集, -并且可能影响节点上镜像的额外开销。

        默认值:"2m"

        volumeStatsAggPeriod
        -meta/v1.Duration +meta/v1.Duration

        volumeStatsAggPeriod是计算和缓存所有 Pod 磁盘用量的频率。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑缩短此周期长度可能产生性能影响。

        默认值:"1m"

        kubeletCgroups是用来隔离 kubelet 的控制组(CGroup)的绝对名称。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        默认值:""

        -

        cgroupRoot是用来运行 Pod 的控制组 (CGroup)。 +

        cgroupRoot是用来运行 Pod 的控制组(CGroup)。 容器运行时会尽可能处理此字段的设置值。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        -

        默认值:""

        -

        cgroupDriver是 kubelet 用来操控宿主系统上控制组 (CGroup) +

        cgroupDriver是 kubelet 用来操控宿主系统上控制组(CGroup) 的驱动程序(cgroupfs 或 systemd)。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        默认值:"cgroupfs"

        cpuManagerPolicy是要使用的策略名称。需要启用CPUManager 特性门控。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        默认值:"None"

        cpuManagerPolicyOptions是一组key=value键值映射, 容许通过额外的选项来精细调整 CPU 管理器策略的行为。需要 CPUManager 和 CPUManagerPolicyOptions 两个特性门控都被启用。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        默认值:nil

        cpuManagerReconcilePeriod
        -meta/v1.Duration +meta/v1.Duration

        cpuManagerReconcilePeriod是 CPU 管理器的协调周期时长。 需要启用CPUManager特性门控。

        -

        DynamicKubeletConfig (已弃用): -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑缩短周期时长可能带来的性能影响。

        默认值:"10s"
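        下面是一个示意性的片段,展示上述 CPU 管理器相关字段(策略、策略选项与协调周期)如何组合;取值仅作演示,须按节点实际情况调整,并确保相应特性门控已启用:

        ```yaml
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        # 示例:启用 static CPU 管理器策略
        cpuManagerPolicy: "static"
        cpuManagerPolicyOptions:
          full-pcpus-only: "true"       # 需要 CPUManagerPolicyOptions 特性门控
        cpuManagerReconcilePeriod: "10s"
        reservedSystemCPUs: "0"         # static 策略要求预留部分 CPU(示例)
        ```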

        memoryManagerPolicy是内存管理器要使用的策略的名称。 要求启用MemoryManager特性门控。

        -

        DynamicKubeletConfig (已弃用): -更新此字段时需要对整个节点执行重启。最安全的做法是确保此值与本地配置相同。

        默认值:"none"

        runtimeRequestTimeout
        -meta/v1.Duration +meta/v1.Duration

        runtimeRequestTimeout用来设置除长期运行的请求(pull、logs、exec、attach)之外所有运行时请求的超时时长。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能干扰与 kubelet 服务器交互的组件。

        默认值:"2m"

        maxPods是此 kubelet 上可运行的 Pod 个数上限。此值必须为非负整数。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑变更可能导致 kubelet 重启时 Pod 无法被准入, -而且可能改变Node.status.capacity[v1.ResourcePods]中报告的数值, -从而影响将来的调度决策。增大此个数值也可能会降低性能,因为会有更多的 Pod -塞到同一节点运行。

        默认值:110

        podCIDR是用来设置 Pod IP 地址的 CIDR 值,仅用于独立部署模式。 运行于集群模式时,这一数值会从控制面获得。

        -

        DynamicKubeletConfig (已弃用): -此字段应该总是设置为默认的空字符串值。并且仅用来设置独立运行的 kubelet, -因为这种 kubelet 模式下无法利用动态 kubelet 配置能力。

        默认值:""

        podPidsLimit是每个 Pod 中可使用的 PID 个数上限。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑减小此值可能会导致变更后无法创建容器进程。

        默认值:-1
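        下面的示意性片段展示如何同时设置上述与 Pod 数量和进程数限制相关的字段;数值仅为示例:

        ```yaml
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        maxPods: 110          # 本节点允许运行的 Pod 个数上限
        podPidsLimit: 4096    # 每个 Pod 可使用的 PID 个数上限
        ```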

        resolvConf是一个域名解析配置文件,用作容器 DNS 解析配置的基础。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑变更仅会对更新完成后所创建的 Pod 起作用。 -建议在变更此字段之前先腾空节点。如果此值设置为空字符串,则会覆盖 DNS 解析的默认配置, +

        如果此值设置为空字符串,则会覆盖 DNS 解析的默认配置, 本质上相当于禁用了 DNS 查询。

        默认值:"/etc/resolv.conf"

        cpuCFSQuota允许为设置了 CPU 限制的容器实施 CPU CFS 配额约束。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑禁止此功能可能会降低节点稳定性。

        默认值:true

        cpuCFSQuotaPeriod
        -meta/v1.Duration +meta/v1.Duration

        cpuCFSQuotaPeriod设置 CPU CFS 配额周期值,cpu.cfs_period_us。 此值需要介于 1 微秒和 1 秒之间,包含 1 微秒和 1 秒。 此功能要求 CustomCPUCFSQuotaPeriod 特性门控被启用。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑为容器所设置的限制值可能导致cpu.cfs_period_us -设置发生变化。这一变化会在节点被重新配置时触发容器重启。

        默认值:"100ms"

        maxOpenFiles是 kubelet 进程可以打开的文件个数。此值必须不能为负数。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能影响到 kubelet 与节点文件系统间交互的能力。

        默认值:1000000

        contentType是向 API 服务器发送请求时使用的内容类型。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这样做可能影响 kubelet 与 API 服务器通信的能力。 -如果 kubelet 因为此字段的变更而失去与 API 服务器间的连接, -则之前所作的变更无法通过动态 kubelet 配置来实现回退。

        默认值:"application/vnd.kubernetes.protobuf"

        kubeAPIQPS设置与 Kubernetes API 服务器通信时要使用的 QPS(每秒查询数)。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这可能因为 kubelet 与 API 服务器之间流量的变化而影响集群扩缩能力。

        默认值:5

        kubeAPIBurst设置与 Kubernetes API 服务器通信时突发的流量级别。 此字段取值不可以是负数。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这可能因为 kubelet 与 API 服务器之间流量的变化而影响集群扩缩能力。

        默认值:10
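        下面给出一个示意性的片段,将上述与 API 服务器通信相关的限流字段放在一起;数值仅为示例,应结合集群规模调整:

        ```yaml
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        contentType: "application/vnd.kubernetes.protobuf"
        kubeAPIQPS: 50      # 与 API 服务器通信的每秒查询数(示例)
        kubeAPIBurst: 100   # 突发流量上限(示例)
        ```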

        evictionSoft是一个映射,是从信号名称到定义软性驱逐阈值的映射。 例如:{"memory.available": "300Mi"}

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这可能会触发或延迟 Pod 驱逐操作, -并且可能造成节点所报告的可分配资源数量发生变化。

        默认值:nil

        evictionSoftGracePeriod是一个映射,是从信号名称到每个软性驱逐信号的宽限期限。 例如:{"memory.available": "30s"}

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑这可能会触发或延迟 Pod 驱逐操作。

        默认值:nil
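        下面是一个示意性的片段,演示如何按照上述说明为软性驱逐信号设置阈值及其宽限期;信号名称取自文中示例,具体数值仅作演示:

        ```yaml
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        evictionSoft:
          memory.available: "300Mi"     # 文中示例中的软性驱逐阈值
        evictionSoftGracePeriod:
          memory.available: "30s"       # 对应信号的宽限期
        evictionPressureTransitionPeriod: "5m"
        ```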

        evictionPressureTransitionPeriod
        -meta/v1.Duration +meta/v1.Duration

        evictionPressureTransitionPeriod设置 kubelet 离开驱逐压力状况之前必须要等待的时长。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑减少此字段值可能会在节点过量分配时降低节点稳定性。

        默认值:"5m"

        iptablesDropBit是 iptables fwmark 空间中用来标记丢弃包的数据位。 此值必须介于[0, 31]区间,必须与其他标记位不同。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑此处的变更要与其他组件(如 kube-proxy)相应的变更协调一致。 -只有当makeIPTablesUtilChains能力被启用时,这里的更新才会起作用。

        默认值:15

        featureGates是一个从功能特性名称到布尔值的映射,用来启用或禁用实验性的功能。 此字段可逐条更改文件 "k8s.io/kubernetes/pkg/features/kube_features.go" 中所给的内置默认值。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑你所启用或禁止的功能特性的文档。 -尽管我们鼓励功能特性的开发人员使动态启用或禁用功能特性成为可能, -某些变更可能要求重新启动节点,某些特性可能要求在从启用到禁用切换时作出精细的协调。

        默认值:nil
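        featureGates 的一个示意性用法如下;此处列出的特性门控名称仅作演示,实际可用的门控及其默认值请以对应 Kubernetes 版本的特性门控列表为准:

        ```yaml
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        featureGates:
          GracefulNodeShutdown: true    # 示例:显式启用某个特性
          # SomeAlphaFeature: false     # 示例:显式关闭某个特性(假设的名称)
        ```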

        failSwapOn通知 kubelet 在节点上启用交换分区时拒绝启动。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑缩短此周期长度可能产生性能影响。

        默认值:true

        containerLogMaxSize是定义容器日志文件被轮转之前可以到达的最大尺寸。 例如:"5Mi" 或 "256Ki"。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能会触发日志轮转。

        默认值:"10Mi"

        containerLogMaxFiles设置每个容器可以存在的日志文件个数上限。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑降低此值可能导致日志文件被删除。

        默认值:"5"
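        与上述容器日志轮转字段对应的示意性配置如下,数值仅为示例:

        ```yaml
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        containerLogMaxSize: "10Mi"   # 单个日志文件在轮转前可达到的最大尺寸
        containerLogMaxFiles: 5       # 每个容器保留的日志文件个数上限
        ```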

        systemReservedCgroup帮助 kubelet 识别用来为 OS 系统级守护进程实施 systemReserved计算资源预留时使用的顶级控制组(CGroup)。 -参考[Node Allocatable](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md) +参考 Node Allocatable 以了解详细信息。

        -

        DynamicKubeletConfig(已弃用): -此字段更新时需要整个节点重启。最安全的做法是保持此值与本地配置相同。

        默认值:""

        kubeReservedCgroup 帮助 kubelet 识别用来为 Kubernetes 节点系统级守护进程实施 kubeReserved计算资源预留时使用的顶级控制组(CGroup)。 -参阅Node Allocatable +参阅 Node Allocatable 了解进一步的信息。

        -

        DynamicKubeletConfig(已弃用): -此字段更新时需要整个节点重启。最安全的做法是保持此值与本地配置相同。

        默认值:""

        volumePluginDir是用来搜索其他第三方卷插件的目录的路径。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑更改volumePluginDir可能干扰使用第三方卷插件的负载。

        默认值:"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"

        providerID字段被设置时,指定的是一个外部提供者(即云驱动)实例的唯一 ID, 该提供者可用来唯一性地标识特定节点。

        -

        DynamicKubeletConfig (已弃用,默认为关闭)被启用时, -如果动态更新了此字段,请考虑可能影响到 kubelet 与云驱动之间进行交互的能力。

        默认值:""

        shutdownGracePeriod
        -meta/v1.Duration +meta/v1.Duration
        +list when the node is shutting down. +For example, to allow critical pods 10s to shutdown, priority>=10000 pods 20s to +shutdown, and all remaining pods 30s to shutdown. +-->

        shutdownGracePeriodByPodPriority设置基于 Pod 相关的优先级类值而确定的体面关闭时间。当 kubelet 收到关闭请求的时候,kubelet 会针对节点上运行的所有 Pod 发起关闭操作,这些关闭操作会根据 Pod 的优先级确定其宽限期限,
        @@ -2140,6 +1683,15 @@ list when the node is shutting down.-->

      • priority: 0 shutdownGracePeriodSeconds: 30
      • +

        在退出之前,kubelet 要等待的时间上限为节点上所有优先级类的 shutdownGracePeriodSeconds 的最大值。 当所有 Pod 都退出或者到达其宽限期限时,kubelet 会释放关闭防护锁。
        @@ -2314,6 +1866,202 @@ SerializedNodeConfigSource 允许对 `v1.NodeConfigSource` 执行序列化操作
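        针对上文描述的示例(关键 Pod 有 10 秒、priority >= 10000 的 Pod 有 20 秒、其余 Pod 有 30 秒的关闭宽限期),一个示意性的 shutdownGracePeriodByPodPriority 配置片段如下;其中关键 Pod 的优先级数值取系统内置的 critical 优先级,仅作演示:

        ```yaml
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        shutdownGracePeriodByPodPriority:
          - priority: 2000000000            # 系统关键 Pod(示例)
            shutdownGracePeriodSeconds: 10
          - priority: 10000
            shutdownGracePeriodSeconds: 20
          - priority: 0
            shutdownGracePeriodSeconds: 30
        ```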

        +## `CredentialProvider` {#kubelet-config-k8s-io-v1beta1-CredentialProvider} + + +**出现在:** + +- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig) + + +CredentialProvider 代表的是要被 kubelet 调用的一个 exec 插件。 +这一插件只会在所拉取的镜像与该插件所处理的镜像匹配时才会被调用(参见 matchImages)。 + + + + + + + + + + + + + + + + + + + + + + + + +
        字段描述
        name [必需]
        +string +
        + +

        + name 是凭据提供者的名称(必需)。此名称必须与 kubelet + 所看到的提供者可执行文件的名称匹配。可执行文件必须位于 kubelet 的 + bin 目录(通过 --image-credential-provider-bin-dir 设置)下。 +

        +
        matchImages [必需]
        +[]string +
        + +

        matchImages 是一个必须设置的字符串列表,用来匹配镜像以便确定是否要调用此提供者。 +如果字符串之一与 kubelet 所请求的镜像匹配,则此插件会被调用并给予提供凭证的机会。 +镜像应该包含镜像库域名和 URL 路径。

        + +

        matchImages 中的每个条目都是一个模式字符串,其中可以包含端口号和路径。 +域名部分可以包含通配符,但端口或路径部分不可以。通配符可以用作子域名,例如 +'*.k8s.io' 或 'k8s.*.io',以及顶级域名,如 'k8s.*'。

        +

        对类似 'app*.k8s.io' 这类部分子域名的匹配也是支持的。 +每个通配符只能用来匹配一个子域名段,所以 '*.io' 不会匹配 '*.k8s.io'。

        + +

        镜像与 matchImages 之间存在匹配时,以下条件都要满足:

        +
          + +
        • 二者均包含相同个数的域名部分,并且每个域名部分都对应匹配;
        • +
        • matchImages 条目中的 URL 路径部分必须是目标镜像的 URL 路径的前缀;
        • +
        • 如果 matchImages 条目中包含端口号,则端口号也必须与镜像端口号匹配。
        • +
        + +

        matchImages 的一些示例如下:

        +
          +
        • 123456789.dkr.ecr.us-east-1.amazonaws.com
        • +
        • *.azurecr.io
        • +
        • gcr.io
        • +
        • *.*.registry.io
        • +
        • registry.io:8080/path
        • +
        +
        defaultCacheDuration [必需]
        +meta/v1.Duration +
        + +

        + defaultCacheDuration 是插件在内存中缓存凭据的默认时长, + 在插件响应中没有给出缓存时长时,使用这里设置的值。此字段是必需的。 +

        +
        apiVersion [必需]
        +string +
        + +

        + 要求 exec 插件 CredentialProviderRequest 请求的输入版本。 + 所返回的 CredentialProviderResponse 必须使用与输入相同的编码版本。当前支持的值有: +

        +
          +
        • credentialprovider.kubelet.k8s.io/v1beta1
        • +
        +
        args
        +[]string +
        + +

        在执行插件可执行文件时要传递给命令的参数。

        +
        env
        +[]ExecEnvVar +
        + +

        + env 定义要提供给插件进程的额外的环境变量。 + 这些环境变量会与主机上的其他环境变量以及 client-go 所使用的环境变量组合起来, + 一起传递给插件。 +

        +
        + +## `ExecEnvVar` {#kubelet-config-k8s-io-v1beta1-ExecEnvVar} + + +**出现在:** + +- [CredentialProvider](#kubelet-config-k8s-io-v1beta1-CredentialProvider) + + +ExecEnvVar 用来在执行基于 exec 的凭据插件时设置环境变量。 + + + + + + + + + + + + +
        字段描述
        name [必需]
        +string +
        + + + 无描述 + +
        value [必需]
        +string +
        + + + 无描述 + +
        + + ## `KubeletAnonymousAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAnonymousAuthentication} @@ -2491,7 +2239,7 @@ API 来提供持有者令牌身份认证。

        cacheAuthorizedTTL
        -meta/v1.Duration +meta/v1.Duration
        cacheUnauthorizedTTL
        -meta/v1.Duration +meta/v1.Duration
        flushFrequency [必需]
        -time.Duration +time.Duration

        - - 对日志进行清洗的最大间隔秒数。如果所选的日志后端在写入日志消息时不提供缓存, -则此配置会被忽略。 + + 对日志进行清洗的最大间隔纳秒数(例如,1s = 1000000000)。 + 如果所选的日志后端在写入日志消息时不提供缓存,则此配置会被忽略。

        sanitization [必需]
        -bool -
        -

        - - [试验功能] 当启用此选项时,被标记为敏感的字段(密码、秘钥、令牌)不会被日志记录。 -运行时日志过滤功能可能会引入非常大的计算开销,因此在生产环境中不应启用。 -

        -
        options [必需]
        FormatOptions
        + + + + + + + + + + + +
        字段描述
        apiVersion
        string
        credentialprovider.kubelet.k8s.io/v1alpha1
        kind
        string
        CredentialProviderRequest
        image [必需]
        +string +
        + +

        + image 是容器镜像,作为凭据提供程序插件请求的一部分。 + 插件可以有选择地解析镜像以提取获取凭据所需的任何信息。 +

        +
        + +## `CredentialProviderResponse` {#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderResponse} + + +

        +CredentialProviderResponse 持有 kubelet 应用于原始请求中提供的指定镜像的凭据。 +kubelet 将通过标准输出读取插件的响应。此响应的 apiVersion 值应设置为与 CredentialProviderRequest 中 apiVersion 值相同。 +

        + + + + + + + + + + + + + + + + + + + + +
        字段描述
        apiVersion
        string
        credentialprovider.kubelet.k8s.io/v1alpha1
        kind
        string
        CredentialProviderResponse
        cacheKeyType [必需]
        +PluginCacheKeyType +
        + +

        + cacheKeyType 表明基于请求中所给镜像而要使用的缓存键类型。缓存键类型有三个有效值: + Image、Registry 和 Global。如果指定了无效值,则 kubelet 不会使用该响应。 +

        +
        cacheDuration
        +meta/v1.Duration +
        + +

        + cacheDuration 表示所提供的凭据应该被缓存的时间。kubelet 使用这个字段为 + auth 中的凭据设置内存中数据的缓存时间。如果为空,kubelet 将使用 CredentialProviderConfig + 中提供的 defaultCacheDuration。如果设置为 0,kubelet 将不会缓存所提供的 auth 数据。 +

        +
        auth
        +map[string]k8s.io/kubelet/pkg/apis/credentialprovider/v1alpha1.AuthConfig +
        + +

        + auth 是一个映射,其中包含传递到 kubelet 的身份验证信息。 + 每个键都是一个匹配镜像字符串(下面将对此进行详细介绍)。相应的 authConfig 值应该对所有与此键匹配的镜像有效。 + 如果不能为请求的镜像返回有效的凭据,插件应将此字段设置为 null。 +

        + +

        + 映射中每个键值都是一个正则表达式,可以选择包含端口和路径。 + 域名部分可以包含通配符,但在端口或路径中不能使用通配符。 + 支持通配符作为子域,如 *.k8s.iok8s.*.io,以及顶级域,如 k8s.*。 + 还支持匹配部分子域,如 app*.k8s.io。每个通配符只能匹配一个子域段, + 因此 *.io 不匹配 *.k8s.io。 +

        + +

        + 当满足以下所有条件时,kubelet 会将镜像与键值匹配: +

        +
          +
        • 两者都包含相同数量的域部分,并且每个部分都匹配。
        • +
        • imageMatch 的 URL 路径必须是目标镜像的 URL 路径的前缀。
        • +
        • 如果 imageMatch 包含端口,则该端口也必须在镜像中匹配。
        • +
        + +

        + 当返回多个键(key)时,kubelet 会倒序遍历所有键,这样: +

        +
          +
        • 具有相同前缀的较长键位于较短键之前
        • +
        • 具有相同前缀的非通配符键位于通配符键之前。
        • +
        + +

        + 对于任何给定的匹配,kubelet 将尝试使用提供的凭据进行镜像拉取,并在第一次成功验证后停止拉取。 +

        +

        键值示例:

        +
          +
        • 123456789.dkr.ecr.us-east-1.amazonaws.com
        • +
        • *.azurecr.io
        • +
        • gcr.io
        • +
        • *.*.registry.io
        • +
        • registry.io:8080/path
        • +
        +
        + +## `AuthConfig` {#credentialprovider-kubelet-k8s-io-v1alpha1-AuthConfig} + + +**出现在:** + +- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderResponse) + + +AuthConfig 包含容器仓库的身份验证信息。目前仅支持基于用户名/密码的身份验证,但未来可能会添加更多身份验证机制。 + + + + + + + + + + + + + +
        字段描述
        username [必需]
        +string +
        + +

        + username 是用于向容器仓库进行身份验证的用户名。空的用户名是合法的。 +

        +
        password [必需]
        +string +
        + +

        + password 是用于向容器仓库进行身份验证的密码。空密码是合法的。 +

        +
        + +## `PluginCacheKeyType` {#credentialprovider-kubelet-k8s-io-v1alpha1-PluginCacheKeyType} + + +(string 数据类型的别名) + +**出现在:** + +- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderResponse) \ No newline at end of file diff --git a/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md b/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md new file mode 100644 index 0000000000000..3bf2430a76bac --- /dev/null +++ b/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md @@ -0,0 +1,253 @@ +--- +title: Kubelet CredentialProvider (v1beta1) +content_type: tool-reference +package: credentialprovider.kubelet.k8s.io/v1beta1 +--- + + + +## 资源类型 {#resource-types} + +- [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderRequest) +- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderResponse) + +## `CredentialProviderRequest` {#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderRequest} + + +

        +CredentialProviderRequest 包含 kubelet 需要进行身份验证的镜像。 +Kubelet 会通过标准输入将此请求对象传递给插件。一般来说,插件倾向于用它们所收到的相同的 apiVersion 来响应。 +

        + + + + + + + + + + + + + +
        字段描述
        apiVersion
        string
        credentialprovider.kubelet.k8s.io/v1beta1
        kind
        string
        CredentialProviderRequest
        image [必需]
        +string +
        + +

        + image 是容器镜像,作为凭据提供程序插件请求的一部分。 + 插件可以有选择地解析镜像以提取获取凭据所需的任何信息。 +

        +
        + +## `CredentialProviderResponse` {#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderResponse} + + +

        +CredentialProviderResponse 持有 kubelet 应用于原始请求中提供的指定镜像的凭据。 +kubelet 将通过标准输出读取插件的响应。此响应的 apiVersion 值应设置为与 CredentialProviderRequest 中 apiVersion 值相同。 +

        + + + + + + + + + + + + + + + + + + + + +
        字段描述
        apiVersion
        string
        credentialprovider.kubelet.k8s.io/v1beta1
        kind
        string
        CredentialProviderResponse
        cacheKeyType [必需]
        +PluginCacheKeyType +
        + +

        + cacheKeyType 表明基于请求中所给镜像而要使用的缓存键类型。缓存键类型有三个有效值: + Image、Registry 和 Global。如果指定了无效值,则 kubelet 不会使用该响应。 +

        +
        cacheDuration
        +meta/v1.Duration +
        + +

        + cacheDuration 表示所提供的凭据应该被缓存的时间。kubelet 使用这个字段为 + auth 中的凭据设置内存中数据的缓存时间。如果为空,kubelet 将使用 CredentialProviderConfig + 中提供的 defaultCacheDuration。如果设置为 0,kubelet 将不会缓存所提供的 auth 数据。 +

        +
        auth
        +map[string]k8s.io/kubelet/pkg/apis/credentialprovider/v1beta1.AuthConfig +
        + +

        + auth 是一个映射,其中包含传递到 kubelet 的身份验证信息。 + 每个键都是一个匹配镜像字符串(下面将对此进行详细介绍)。相应的 authConfig 值应该对所有与此键匹配的镜像有效。 + 如果不能为请求的镜像返回有效的凭据,插件应将此字段设置为 null。 +

        + +

        + 映射中每个键值都是一个正则表达式,可以选择包含端口和路径。 + 域名部分可以包含通配符,但在端口或路径中不能使用通配符。 + 支持通配符作为子域,如 *.k8s.iok8s.*.io,以及顶级域,如 k8s.*。 + 还支持匹配部分子域,如 app*.k8s.io。每个通配符只能匹配一个子域段, + 因此 *.io 不匹配 *.k8s.io。 +

        + +

        + 当满足以下所有条件时,kubelet 会将镜像与键值匹配: +

        +
          +
        • 两者都包含相同数量的域部分,并且每个部分都匹配。
        • +
        • imageMatch 的 URL 路径必须是目标镜像的 URL 路径的前缀。
        • +
        • 如果 imageMatch 包含端口,则该端口也必须在镜像中匹配。
        • +
        + +

        + 当返回多个键(key)时,kubelet 会倒序遍历所有键,这样: +

        +
          +
        • 具有相同前缀的较长键位于较短键之前
        • +
        • 具有相同前缀的非通配符键位于通配符键之前。
        • +
        + +

        + 对于任何给定的匹配,kubelet 将尝试使用提供的凭据进行镜像拉取,并在第一次成功验证后停止拉取。 +

        +

        键值示例:

        +
          +
        • 123456789.dkr.ecr.us-east-1.amazonaws.com
        • +
        • *.azurecr.io
        • +
        • gcr.io
        • +
        • *.*.registry.io
        • +
        • registry.io:8080/path
        • +
        +
        + +## `AuthConfig` {#credentialprovider-kubelet-k8s-io-v1beta1-AuthConfig} + + +**出现在:** + +- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderResponse) + + +AuthConfig 包含容器仓库的身份验证信息。目前仅支持基于用户名/密码的身份验证,但未来可能会添加更多身份验证机制。 + + + + + + + + + + + + + +
        字段描述
        username [必需]
        +string +
        + +

        + username 是用于向容器仓库进行身份验证的用户名。空的用户名是合法的。 +

        +
        password [必需]
        +string +
        + +

        + password 是用于向容器仓库进行身份验证的密码。空密码是合法的。 +

        +
        + +## `PluginCacheKeyType` {#credentialprovider-kubelet-k8s-io-v1beta1-PluginCacheKeyType} + + +(string 数据类型的别名) + +**出现在:** + +- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderResponse) \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/addons.md b/content/zh-cn/docs/reference/glossary/addons.md similarity index 72% rename from content/zh/docs/reference/glossary/addons.md rename to content/zh-cn/docs/reference/glossary/addons.md index 31fc1f0098243..3fd0546714e34 100644 --- a/content/zh/docs/reference/glossary/addons.md +++ b/content/zh-cn/docs/reference/glossary/addons.md @@ -2,7 +2,7 @@ title: 附加组件(Add-ons) id: addons date: 2019-12-15 -full_link: /zh/docs/concepts/cluster-administration/addons/ +full_link: /zh-cn/docs/concepts/cluster-administration/addons/ short_description: > 扩展 Kubernetes 功能的资源。 @@ -37,4 +37,4 @@ tags: -[安装附加组件](/zh/docs/concepts/cluster-administration/addons/) 阐释了更多关于如何在集群内使用附加组件,并列出了一些流行的附加组件。 +[安装附加组件](/zh-cn/docs/concepts/cluster-administration/addons/) 阐释了更多关于如何在集群内使用附加组件,并列出了一些流行的附加组件。 diff --git a/content/zh/docs/reference/glossary/admission-controller.md b/content/zh-cn/docs/reference/glossary/admission-controller.md similarity index 88% rename from content/zh/docs/reference/glossary/admission-controller.md rename to content/zh-cn/docs/reference/glossary/admission-controller.md index 22567c4861c49..90e63e52b152f 100644 --- a/content/zh/docs/reference/glossary/admission-controller.md +++ b/content/zh-cn/docs/reference/glossary/admission-controller.md @@ -2,7 +2,7 @@ title: 准入控制器(Admission Controller) id: admission-controller date: 2019-06-28 -full_link: /zh/docs/reference/access-authn-authz/admission-controllers/ +full_link: /zh-cn/docs/reference/access-authn-authz/admission-controllers/ short_description: > 在对象持久化之前拦截 Kubernetes Api 服务器请求的一段代码 aka: @@ -47,4 +47,4 @@ validating controllers may not. 
准入控制器可针对 Kubernetes Api 服务器进行配置,可以执行验证,变更或两者都执行。任何准入控制器都可以拒绝访问请求。 变更(mutating)控制器可以修改其允许的对象,验证(validating)控制器则不可以。 -* [Kubernetes 文档中的准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) \ No newline at end of file +* [Kubernetes 文档中的准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/) \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/affinity.md b/content/zh-cn/docs/reference/glossary/affinity.md similarity index 89% rename from content/zh/docs/reference/glossary/affinity.md rename to content/zh-cn/docs/reference/glossary/affinity.md index b3450ed02fc50..44aa0ab6cac43 100644 --- a/content/zh/docs/reference/glossary/affinity.md +++ b/content/zh-cn/docs/reference/glossary/affinity.md @@ -36,8 +36,8 @@ There are two kinds of affinity: * [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) * [pod-to-pod affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) --> -* [节点亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) -* [Pod 间亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) +* [节点亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) +* [Pod 间亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) -聚合层允许您在自己的集群上安装额外的 Kubernetes 风格的 API。 +聚合层允许你在自己的集群上安装额外的 Kubernetes 风格的 API。 @@ -45,4 +45,5 @@ tags: When you've configured the {{< glossary_tooltip text="Kubernetes API Server" term_id="kube-apiserver" >}} to [support additional APIs](/docs/tasks/extend-kubernetes/configure-aggregation-layer/), you can add `APIService` objects to "claim" a URL path in the Kubernetes API. --> -当您配置了 {{< glossary_tooltip text="Kubernetes API Server" term_id="kube-apiserver" >}} 来 [支持额外的 API](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/),您就可以在 Kubernetes API 中增加 `APIService` 对象来 "申领(Claim)" 一个 URL 路径。 +当你配置了 {{< glossary_tooltip text="Kubernetes API Server" term_id="kube-apiserver" >}} 来 [支持额外的 API](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/), +你就可以在 Kubernetes API 中增加 `APIService` 对象来 "申领(Claim)" 一个 URL 路径。 diff --git a/content/zh/docs/reference/glossary/annotation.md b/content/zh-cn/docs/reference/glossary/annotation.md similarity index 94% rename from content/zh/docs/reference/glossary/annotation.md rename to content/zh-cn/docs/reference/glossary/annotation.md index a305352e32a48..5554aafbbca0b 100644 --- a/content/zh/docs/reference/glossary/annotation.md +++ b/content/zh-cn/docs/reference/glossary/annotation.md @@ -2,7 +2,7 @@ title: 注解(Annotation) id: annotation date: 2018-04-12 -full_link: /zh/docs/concepts/overview/working-with-objects/annotations/ +full_link: /zh-cn/docs/concepts/overview/working-with-objects/annotations/ short_description: > 注解是以键值对的形式给资源对象附加随机的无法标识的元数据。 diff --git a/content/zh/docs/reference/glossary/api-eviction.md b/content/zh-cn/docs/reference/glossary/api-eviction.md similarity index 82% rename from content/zh/docs/reference/glossary/api-eviction.md rename to content/zh-cn/docs/reference/glossary/api-eviction.md index 0f1aabd435cf4..4ed41b8648f00 100644 --- a/content/zh/docs/reference/glossary/api-eviction.md +++ b/content/zh-cn/docs/reference/glossary/api-eviction.md @@ -2,7 +2,7 @@ title: API 发起的驱逐 id: api-eviction date: 2021-04-27 -full_link: /zh/docs/concepts/scheduling-eviction/pod-eviction/#api-eviction +full_link: 
/zh-cn/docs/concepts/scheduling-eviction/pod-eviction/#api-eviction short_description: > API 发起的驱逐是一个先调用 Eviction API 创建驱逐对象,再由该对象体面地中止 Pod 的过程。 aka: @@ -46,13 +46,13 @@ API-initiated eviction is not the same as [node-pressure eviction](/docs/concept 你可以通过 kube-apiserver 的客户端,比如 `kubectl drain` 这样的命令,直接调用 Eviction API 发起驱逐。 当 `Eviction` 对象创建出来之后,该对象将驱动 API 服务器终止选定的Pod。 -API 发起的驱逐取决于你配置的 [`PodDisruptionBudgets`](/zh/docs/tasks/run-application/configure-pdb/) -和 [`terminationGracePeriodSeconds`](/zh/docs/concepts/workloads/pods/pod-lifecycle#pod-termination)。 +API 发起的驱逐取决于你配置的 [`PodDisruptionBudgets`](/zh-cn/docs/tasks/run-application/configure-pdb/) +和 [`terminationGracePeriodSeconds`](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#pod-termination)。 API 发起的驱逐不同于 -[节点压力引发的驱逐](/zh/docs/concepts/scheduling-eviction/eviction/#kubelet-eviction)。 +[节点压力引发的驱逐](/zh-cn/docs/concepts/scheduling-eviction/eviction/#kubelet-eviction)。 -* 有关详细信息,请参阅 [API 发起的驱逐](/zh/docs/concepts/scheduling-eviction/api-eviction/)。 \ No newline at end of file +* 有关详细信息,请参阅 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)。 \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/api-group.md b/content/zh-cn/docs/reference/glossary/api-group.md similarity index 86% rename from content/zh/docs/reference/glossary/api-group.md rename to content/zh-cn/docs/reference/glossary/api-group.md index a219943be7fd2..42006b5e90c3e 100644 --- a/content/zh/docs/reference/glossary/api-group.md +++ b/content/zh-cn/docs/reference/glossary/api-group.md @@ -2,7 +2,7 @@ title: API Group id: api-group date: 2019-09-02 -full_link: /zh/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning +full_link: /zh-cn/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning short_description: > Kubernetes API 中的一组相关路径 @@ -47,4 +47,4 @@ API group 在 REST 路径和序列化对象的 `apiVersion` 字段中指定。 -* 阅读 [API Group](/zh/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning) 了解更多信息。 +* 阅读 [API Group](/zh-cn/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning) 了解更多信息。 diff --git a/content/zh/docs/reference/glossary/app-container.md b/content/zh-cn/docs/reference/glossary/app-container.md similarity index 96% rename from content/zh/docs/reference/glossary/app-container.md rename to content/zh-cn/docs/reference/glossary/app-container.md index 3773e1c1b83cc..3980937a9d810 100644 --- a/content/zh/docs/reference/glossary/app-container.md +++ b/content/zh-cn/docs/reference/glossary/app-container.md @@ -42,6 +42,6 @@ once the application container has started. If a pod doesn't have any init containers configured, all the containers in that pod are app containers. 
--> -初始化容器使您可以分离对于{{< glossary_tooltip text="工作负载" term_id="workload" >}} +初始化容器使你可以分离对于{{< glossary_tooltip text="工作负载" term_id="workload" >}} 整体而言很重要的初始化细节,并且一旦应用容器启动,它不需要继续运行。 如果 pod 没有配置任何初始化容器,则该 pod 中的所有容器都是应用程序容器。 \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/application-architect.md b/content/zh-cn/docs/reference/glossary/application-architect.md similarity index 100% rename from content/zh/docs/reference/glossary/application-architect.md rename to content/zh-cn/docs/reference/glossary/application-architect.md diff --git a/content/zh/docs/reference/glossary/application-developer.md b/content/zh-cn/docs/reference/glossary/application-developer.md similarity index 100% rename from content/zh/docs/reference/glossary/application-developer.md rename to content/zh-cn/docs/reference/glossary/application-developer.md diff --git a/content/zh/docs/reference/glossary/applications.md b/content/zh-cn/docs/reference/glossary/applications.md similarity index 100% rename from content/zh/docs/reference/glossary/applications.md rename to content/zh-cn/docs/reference/glossary/applications.md diff --git a/content/zh/docs/reference/glossary/approver.md b/content/zh-cn/docs/reference/glossary/approver.md similarity index 100% rename from content/zh/docs/reference/glossary/approver.md rename to content/zh-cn/docs/reference/glossary/approver.md diff --git a/content/zh/docs/reference/glossary/cadvisor.md b/content/zh-cn/docs/reference/glossary/cadvisor.md similarity index 100% rename from content/zh/docs/reference/glossary/cadvisor.md rename to content/zh-cn/docs/reference/glossary/cadvisor.md diff --git a/content/zh/docs/reference/glossary/certificate.md b/content/zh-cn/docs/reference/glossary/certificate.md similarity index 94% rename from content/zh/docs/reference/glossary/certificate.md rename to content/zh-cn/docs/reference/glossary/certificate.md index 05b50e37dc615..7de6715c1662c 100644 --- a/content/zh/docs/reference/glossary/certificate.md +++ b/content/zh-cn/docs/reference/glossary/certificate.md @@ -2,7 +2,7 @@ title: 证书(Certificate) id: certificate date: 2018-04-12 -full_link: /zh/docs/tasks/tls/managing-tls-in-a-cluster/ +full_link: /zh-cn/docs/tasks/tls/managing-tls-in-a-cluster/ short_description: > 证书是个安全加密文件,用来确认对 Kubernetes 集群访问的合法性。 diff --git a/content/zh/docs/reference/glossary/cgroup.md b/content/zh-cn/docs/reference/glossary/cgroup.md similarity index 100% rename from content/zh/docs/reference/glossary/cgroup.md rename to content/zh-cn/docs/reference/glossary/cgroup.md diff --git a/content/zh/docs/reference/glossary/cidr.md b/content/zh-cn/docs/reference/glossary/cidr.md similarity index 100% rename from content/zh/docs/reference/glossary/cidr.md rename to content/zh-cn/docs/reference/glossary/cidr.md diff --git a/content/zh/docs/reference/glossary/cla.md b/content/zh-cn/docs/reference/glossary/cla.md similarity index 100% rename from content/zh/docs/reference/glossary/cla.md rename to content/zh-cn/docs/reference/glossary/cla.md diff --git a/content/zh/docs/reference/glossary/cloud-controller-manager.md b/content/zh-cn/docs/reference/glossary/cloud-controller-manager.md similarity index 83% rename from content/zh/docs/reference/glossary/cloud-controller-manager.md rename to content/zh-cn/docs/reference/glossary/cloud-controller-manager.md index 3d33b8c7fe97b..72c581453c496 100644 --- a/content/zh/docs/reference/glossary/cloud-controller-manager.md +++ b/content/zh-cn/docs/reference/glossary/cloud-controller-manager.md @@ -2,7 +2,7 @@ title: 
云控制器管理器(Cloud Controller Manager) id: cloud-controller-manager date: 2018-04-12 -full_link: /zh/docs/concepts/architecture/cloud-controller/ +full_link: /zh-cn/docs/concepts/architecture/cloud-controller/ short_description: > 将 Kubernetes 与第三方云提供商进行集成的控制面组件。 @@ -33,18 +33,17 @@ that embeds cloud-specific control logic. The cloud controller manager lets you cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster. --> -云控制器管理器是指嵌入特定云的控制逻辑的 +`cloud-controller-manager` 是指嵌入特定云的控制逻辑之 {{< glossary_tooltip text="控制平面" term_id="control-plane" >}}组件。 -云控制器管理器使得你可以将你的集群连接到云提供商的 API 之上, +`cloud-controller-manager` 允许你将你的集群连接到云提供商的 API 之上, 并将与该云平台交互的组件同与你的集群交互的组件分离开来。 - 通过分离 Kubernetes 和底层云基础设置之间的互操作性逻辑, -云控制器管理器组件使云提供商能够以不同于 Kubernetes 主项目的 +`cloud-controller-manager` 组件使云提供商能够以不同于 Kubernetes 主项目的 步调发布新特征。 diff --git a/content/zh/docs/reference/glossary/cloud-provider.md b/content/zh-cn/docs/reference/glossary/cloud-provider.md similarity index 100% rename from content/zh/docs/reference/glossary/cloud-provider.md rename to content/zh-cn/docs/reference/glossary/cloud-provider.md diff --git a/content/zh/docs/reference/glossary/cluster-architect.md b/content/zh-cn/docs/reference/glossary/cluster-architect.md similarity index 100% rename from content/zh/docs/reference/glossary/cluster-architect.md rename to content/zh-cn/docs/reference/glossary/cluster-architect.md diff --git a/content/zh/docs/reference/glossary/cluster-infrastructure.md b/content/zh-cn/docs/reference/glossary/cluster-infrastructure.md similarity index 100% rename from content/zh/docs/reference/glossary/cluster-infrastructure.md rename to content/zh-cn/docs/reference/glossary/cluster-infrastructure.md diff --git a/content/zh/docs/reference/glossary/cluster-operations.md b/content/zh-cn/docs/reference/glossary/cluster-operations.md similarity index 94% rename from content/zh/docs/reference/glossary/cluster-operations.md rename to content/zh-cn/docs/reference/glossary/cluster-operations.md index 43a9bba4b0310..4067984ac57d5 100644 --- a/content/zh/docs/reference/glossary/cluster-operations.md +++ b/content/zh-cn/docs/reference/glossary/cluster-operations.md @@ -38,5 +38,5 @@ scale the cluster; performing software upgrades; implementing security controls; adding or removing storage; configuring cluster networking; managing cluster-wide observability; and responding to events. 
--> -群集操作工作的示例包括:部署新节点来扩容集群;执行软件升级;实施安全控制; +集群操作工作的示例包括:部署新节点来扩容集群;执行软件升级;实施安全控制; 添加或删除存储;配置集群网络;管理集群范围的可观测性;响应集群事件。 \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/cluster-operator.md b/content/zh-cn/docs/reference/glossary/cluster-operator.md similarity index 100% rename from content/zh/docs/reference/glossary/cluster-operator.md rename to content/zh-cn/docs/reference/glossary/cluster-operator.md diff --git a/content/zh/docs/reference/glossary/cluster.md b/content/zh-cn/docs/reference/glossary/cluster.md similarity index 69% rename from content/zh/docs/reference/glossary/cluster.md rename to content/zh-cn/docs/reference/glossary/cluster.md index 74d48ff2c2641..07b79c5f96072 100644 --- a/content/zh/docs/reference/glossary/cluster.md +++ b/content/zh-cn/docs/reference/glossary/cluster.md @@ -4,7 +4,7 @@ id: cluster date: 2019-06-15 full_link: short_description: > - 集群由一组被称作节点的机器组成。这些节点上运行 Kubernetes 所管理的容器化应用。集群具有至少一个工作节点。 + 集群由一组被称作节点的机器组成。这些节点上运行 Kubernetes 所管理的容器化应用。集群具有至少一个工作节点。 aka: tags: @@ -32,8 +32,9 @@ tags: A set of worker machines, called {{< glossary_tooltip text="nodes" term_id="node" >}}, that run containerized applications. Every cluster has at least one worker node. --> -集群由一组被称作节点的机器组成。这些节点上运行 Kubernetes 所管理的容器化应用。集群具有至少一个工作节点。 - +集群是由一组被称作{{< glossary_tooltip text="节点(node)" term_id="node" >}}的机器组成, +这些节点上会运行由 Kubernetes 所管理的容器化应用。 +且每个集群至少有一个工作节点。 -工作节点托管作为应用负载的组件的 Pod 。控制平面管理集群中的工作节点和 Pod 。 -为集群提供故障转移和高可用性,这些控制平面一般跨多主机运行,集群跨多个节点运行。 +工作节点会托管所谓的 Pods,而 Pod 就是作为应用负载的组件。 +控制平面管理集群中的工作节点和 Pods。 +为集群提供故障转移和高可用性, +这些控制平面一般跨多主机运行,而集群也会跨多个节点运行。 diff --git a/content/zh/docs/reference/glossary/cncf.md b/content/zh-cn/docs/reference/glossary/cncf.md similarity index 100% rename from content/zh/docs/reference/glossary/cncf.md rename to content/zh-cn/docs/reference/glossary/cncf.md diff --git a/content/zh/docs/reference/glossary/cni.md b/content/zh-cn/docs/reference/glossary/cni.md similarity index 81% rename from content/zh/docs/reference/glossary/cni.md rename to content/zh-cn/docs/reference/glossary/cni.md index 64febe02151e4..5f87815cb5df2 100644 --- a/content/zh/docs/reference/glossary/cni.md +++ b/content/zh-cn/docs/reference/glossary/cni.md @@ -2,7 +2,7 @@ title: 容器网络接口(CNI) id: cni date: 2018-05-25 -full_link: /zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni +full_link: /zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni short_description: > 容器网络接口 (CNI) 插件是遵循 appc/CNI 协议的一类网络插件。 @@ -41,4 +41,4 @@ tags: * For information on Kubernetes and CNI, see ["Network plugins"](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni). 
--> -* 想了解 Kubernetes 和 CNI 请参考 ["网络插件"](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni)。 +* 想了解 Kubernetes 和 CNI 请参考 ["网络插件"](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni)。 diff --git a/content/zh/docs/reference/glossary/code-contributor.md b/content/zh-cn/docs/reference/glossary/code-contributor.md similarity index 100% rename from content/zh/docs/reference/glossary/code-contributor.md rename to content/zh-cn/docs/reference/glossary/code-contributor.md diff --git a/content/zh/docs/reference/glossary/configmap.md b/content/zh-cn/docs/reference/glossary/configmap.md similarity index 91% rename from content/zh/docs/reference/glossary/configmap.md rename to content/zh-cn/docs/reference/glossary/configmap.md index be80c84f614cc..b90382e5182ea 100644 --- a/content/zh/docs/reference/glossary/configmap.md +++ b/content/zh-cn/docs/reference/glossary/configmap.md @@ -2,7 +2,7 @@ title: ConfigMap id: configmap date: 2018-04-12 -full_link: /zh/docs/tasks/configure-pod-container/configure-pod-configmap/ +full_link: /zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/ short_description: > ConfigMap 是一种 API 对象,用来将非机密性的数据保存到键值对中。使用时可以用作环境变量、命令行参数或者存储卷中的配置文件。 @@ -41,4 +41,4 @@ environment variables, command-line arguments, or as configuration files in a A ConfigMap allows you to decouple environment-specific configuration from your {{< glossary_tooltip text="container images" term_id="image" >}}, so that your applications are easily portable. --> -ConfigMap 将您的环境配置信息和 {{< glossary_tooltip text="容器镜像" term_id="image" >}} 解耦,便于应用配置的修改。 +ConfigMap 将你的环境配置信息和 {{< glossary_tooltip text="容器镜像" term_id="image" >}} 解耦,便于应用配置的修改。 diff --git a/content/zh/docs/reference/glossary/container-env-variables.md b/content/zh-cn/docs/reference/glossary/container-env-variables.md similarity index 96% rename from content/zh/docs/reference/glossary/container-env-variables.md rename to content/zh-cn/docs/reference/glossary/container-env-variables.md index e8796a12bdd5f..134b0c0c7e95b 100644 --- a/content/zh/docs/reference/glossary/container-env-variables.md +++ b/content/zh-cn/docs/reference/glossary/container-env-variables.md @@ -2,7 +2,7 @@ title: 容器环境变量(Container Environment Variables) id: container-env-variables date: 2018-04-12 -full_link: /zh/docs/concepts/containers/container-environment/ +full_link: /zh-cn/docs/concepts/containers/container-environment/ short_description: > 容器环境变量提供了 name=value 形式的、运行容器化应用所必须的一些重要信息。 diff --git a/content/zh/docs/reference/glossary/container-lifecycle-hooks.md b/content/zh-cn/docs/reference/glossary/container-lifecycle-hooks.md similarity index 94% rename from content/zh/docs/reference/glossary/container-lifecycle-hooks.md rename to content/zh-cn/docs/reference/glossary/container-lifecycle-hooks.md index 6a43d283c96d6..1823b66b4c93c 100644 --- a/content/zh/docs/reference/glossary/container-lifecycle-hooks.md +++ b/content/zh-cn/docs/reference/glossary/container-lifecycle-hooks.md @@ -2,7 +2,7 @@ title: 容器生命周期钩子(Container Lifecycle Hooks) id: container-lifecycle-hooks date: 2018-10-08 -full_link: /zh/docs/concepts/containers/container-lifecycle-hooks/ +full_link: /zh-cn/docs/concepts/containers/container-lifecycle-hooks/ short_description: > 生命周期钩子暴露容器管理生命周期中的事件,允许用户在事件发生时运行代码。 diff --git a/content/zh/docs/reference/glossary/container-runtime-interface.md b/content/zh-cn/docs/reference/glossary/container-runtime-interface.md similarity index 90% rename from 
content/zh/docs/reference/glossary/container-runtime-interface.md rename to content/zh-cn/docs/reference/glossary/container-runtime-interface.md index 39bc8d0017f1b..b902dc5da00cd 100644 --- a/content/zh/docs/reference/glossary/container-runtime-interface.md +++ b/content/zh-cn/docs/reference/glossary/container-runtime-interface.md @@ -2,7 +2,7 @@ title: 容器运行时接口 id: container-runtime-interface date: 2021-11-24 -full_link: /zh/docs/concepts/architecture/cri +full_link: /zh-cn/docs/concepts/architecture/cri short_description: > kubelet 和容器运行时之间通信的主要协议。 @@ -37,6 +37,6 @@ The Kubernetes Container Runtime Interface (CRI) defines the main {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}. --> Kubernetes 容器运行时接口(CRI)定义了主要 [gRPC](https://grpc.io) 协议, -用于[集群组件](/zh/docs/concepts/overview/components/#node-components) +用于[集群组件](/zh-cn/docs/concepts/overview/components/#node-components) {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 和 {{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}。 \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/container-runtime.md b/content/zh-cn/docs/reference/glossary/container-runtime.md similarity index 85% rename from content/zh/docs/reference/glossary/container-runtime.md rename to content/zh-cn/docs/reference/glossary/container-runtime.md index 36a0caf4164aa..4eaf10627ef38 100644 --- a/content/zh/docs/reference/glossary/container-runtime.md +++ b/content/zh-cn/docs/reference/glossary/container-runtime.md @@ -2,7 +2,7 @@ title: 容器运行时(Container Runtime) id: container-runtime date: 2019-06-05 -full_link: /zh/docs/setup/production-environment/container-runtimes +full_link: /zh-cn/docs/setup/production-environment/container-runtimes short_description: > 容器运行时是负责运行容器的软件。 @@ -41,8 +41,9 @@ Kubernetes supports container runtimes such sa and any other implementation of the [Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md). 
--> -Kubernetes 支持容器运行时,例如 +Kubernetes 支持许多容器运行环境,例如 {{< glossary_tooltip term_id="docker">}}、 -{{< glossary_tooltip term_id="containerd" >}}、{{< glossary_tooltip term_id="cri-o" >}} +{{< glossary_tooltip term_id="containerd" >}}、 +{{< glossary_tooltip term_id="cri-o" >}} 以及 [Kubernetes CRI (容器运行环境接口)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md) 的其他任何实现。 \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/container.md b/content/zh-cn/docs/reference/glossary/container.md similarity index 93% rename from content/zh/docs/reference/glossary/container.md rename to content/zh-cn/docs/reference/glossary/container.md index 450a8af998f1c..1ca18e3ca49dc 100644 --- a/content/zh/docs/reference/glossary/container.md +++ b/content/zh-cn/docs/reference/glossary/container.md @@ -2,7 +2,7 @@ title: 容器(Container) id: container date: 2018-04-12 -full_link: /zh/docs/concepts/overview/what-is-kubernetes/#why-containers +full_link: /zh-cn/docs/concepts/overview/what-is-kubernetes/#why-containers short_description: > 容器是可移植、可执行的轻量级的镜像,镜像中包含软件及其相关依赖。 diff --git a/content/zh/docs/reference/glossary/containerd.md b/content/zh-cn/docs/reference/glossary/containerd.md similarity index 100% rename from content/zh/docs/reference/glossary/containerd.md rename to content/zh-cn/docs/reference/glossary/containerd.md diff --git a/content/zh/docs/reference/glossary/contributor.md b/content/zh-cn/docs/reference/glossary/contributor.md similarity index 100% rename from content/zh/docs/reference/glossary/contributor.md rename to content/zh-cn/docs/reference/glossary/contributor.md diff --git a/content/zh/docs/reference/glossary/control-plane.md b/content/zh-cn/docs/reference/glossary/control-plane.md similarity index 100% rename from content/zh/docs/reference/glossary/control-plane.md rename to content/zh-cn/docs/reference/glossary/control-plane.md diff --git a/content/zh/docs/reference/glossary/controller.md b/content/zh-cn/docs/reference/glossary/controller.md similarity index 97% rename from content/zh/docs/reference/glossary/controller.md rename to content/zh-cn/docs/reference/glossary/controller.md index e60b710603b75..a0c0c02bcef78 100644 --- a/content/zh/docs/reference/glossary/controller.md +++ b/content/zh-cn/docs/reference/glossary/controller.md @@ -2,7 +2,7 @@ title: 控制器(Controller) id: controller date: 2018-04-12 -full_link: /zh/docs/concepts/architecture/controller/ +full_link: /zh-cn/docs/concepts/architecture/controller/ short_description: > 控制器通过 apiserver 监控集群的公共状态,并致力于将当前状态转变为期望的状态。 diff --git a/content/zh/docs/reference/glossary/cri-o.md b/content/zh-cn/docs/reference/glossary/cri-o.md similarity index 100% rename from content/zh/docs/reference/glossary/cri-o.md rename to content/zh-cn/docs/reference/glossary/cri-o.md diff --git a/content/zh/docs/reference/glossary/cri.md b/content/zh-cn/docs/reference/glossary/cri.md similarity index 93% rename from content/zh/docs/reference/glossary/cri.md rename to content/zh-cn/docs/reference/glossary/cri.md index 8c8da979e989c..24450960d07e9 100644 --- a/content/zh/docs/reference/glossary/cri.md +++ b/content/zh-cn/docs/reference/glossary/cri.md @@ -2,7 +2,7 @@ title: 容器运行时接口(CRI) id: cri date: 2019-03-07 -full_link: /zh/docs/concepts/overview/components/#container-runtime +full_link: /zh-cn/docs/concepts/overview/components/#container-runtime short_description: > 一组与 kubelet 集成的容器运行时 API diff --git a/content/zh/docs/reference/glossary/cronjob.md 
b/content/zh-cn/docs/reference/glossary/cronjob.md similarity index 84% rename from content/zh/docs/reference/glossary/cronjob.md rename to content/zh-cn/docs/reference/glossary/cronjob.md index fe16d58762b45..487731fe31f3b 100644 --- a/content/zh/docs/reference/glossary/cronjob.md +++ b/content/zh-cn/docs/reference/glossary/cronjob.md @@ -2,7 +2,7 @@ title: 周期调度任务(CronJob) id: cronjob date: 2018-04-12 -full_link: /zh/docs/concepts/workloads/controllers/cron-jobs/ +full_link: /zh-cn/docs/concepts/workloads/controllers/cron-jobs/ short_description: > 周期调度的任务(作业)。 @@ -32,7 +32,7 @@ tags: Manages a [Job](/docs/concepts/workloads/controllers/job/) that runs on a periodic schedule. --> - 管理定期运行的 [任务](/zh/docs/concepts/workloads/controllers/job/)。 + 管理定期运行的 [任务](/zh-cn/docs/concepts/workloads/controllers/job/)。 diff --git a/content/zh/docs/reference/glossary/csi.md b/content/zh-cn/docs/reference/glossary/csi.md similarity index 92% rename from content/zh/docs/reference/glossary/csi.md rename to content/zh-cn/docs/reference/glossary/csi.md index 89bb66cafdd29..d430ef3ecacc9 100644 --- a/content/zh/docs/reference/glossary/csi.md +++ b/content/zh-cn/docs/reference/glossary/csi.md @@ -2,7 +2,7 @@ title: 容器存储接口(Container Storage Interface,CSI) id: csi date: 2018-06-25 -full_link: /zh/docs/concepts/storage/volumes/#csi +full_link: /zh-cn/docs/concepts/storage/volumes/#csi short_description: > 容器存储接口 (CSI)定义了存储系统暴露给容器的标准接口。 @@ -48,5 +48,5 @@ CSI 允许存储驱动提供商为 Kubernetes 创建定制化的存储插件, [将它部署到你的集群上](https://kubernetes-csi.github.io/docs/deploying.html)。 然后你才能创建使用该 CSI 驱动的 {{< glossary_tooltip text="Storage Class" term_id="storage-class" >}} 。 -* [Kubernetes 文档中关于 CSI 的描述](/zh/docs/concepts/storage/volumes/#csi) +* [Kubernetes 文档中关于 CSI 的描述](/zh-cn/docs/concepts/storage/volumes/#csi) * [可用的 CSI 驱动列表](https://kubernetes-csi.github.io/docs/drivers.html) diff --git a/content/zh/docs/reference/glossary/customresourcedefinition.md b/content/zh-cn/docs/reference/glossary/customresourcedefinition.md similarity index 70% rename from content/zh/docs/reference/glossary/customresourcedefinition.md rename to content/zh-cn/docs/reference/glossary/customresourcedefinition.md index ee0548e78d4a0..2d63482c3e692 100644 --- a/content/zh/docs/reference/glossary/customresourcedefinition.md +++ b/content/zh-cn/docs/reference/glossary/customresourcedefinition.md @@ -2,9 +2,9 @@ title: CustomResourceDefinition id: CustomResourceDefinition date: 2018-04-12 -full_link: /zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/ +full_link: /zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/ short_description: > - 通过定制化的代码给您的 Kubernetes API 服务器增加资源对象,而无需编译完整的定制 API 服务器。 + 通过定制化的代码给你的 Kubernetes API 服务器增加资源对象,而无需编译完整的定制 API 服务器。 aka: tags: @@ -33,7 +33,7 @@ tags: Custom code that defines a resource to add to your Kubernetes API server without building a complete custom server. --> - 通过定制化的代码给您的 Kubernetes API 服务器增加资源对象,而无需编译完整的定制 API 服务器。 + 通过定制化的代码给你的 Kubernetes API 服务器增加资源对象,而无需编译完整的定制 API 服务器。 @@ -41,5 +41,6 @@ tags: Custom Resource Definitions let you extend the Kubernetes API for your environment if the publicly supported API resources can't meet your needs. 
--> -当 Kubernetes 公开支持的 API 资源不能满足您的需要时,定制资源对象(Custom Resource Definitions)让您可以在您的环境上扩展 Kubernetes API。 +当 Kubernetes 公开支持的 API 资源不能满足你的需要时, +定制资源对象(Custom Resource Definitions)让你可以在你的环境上扩展 Kubernetes API。 diff --git a/content/zh/docs/reference/glossary/daemonset.md b/content/zh-cn/docs/reference/glossary/daemonset.md similarity index 94% rename from content/zh/docs/reference/glossary/daemonset.md rename to content/zh-cn/docs/reference/glossary/daemonset.md index 5f35ae5012458..70960117433c6 100644 --- a/content/zh/docs/reference/glossary/daemonset.md +++ b/content/zh-cn/docs/reference/glossary/daemonset.md @@ -2,7 +2,7 @@ title: DaemonSet id: daemonset date: 2018-04-12 -full_link: /zh/docs/concepts/workloads/controllers/daemonset/ +full_link: /zh-cn/docs/concepts/workloads/controllers/daemonset/ short_description: > 确保 Pod 的副本在集群中的一组节点上运行。 diff --git a/content/zh/docs/reference/glossary/data-plane.md b/content/zh-cn/docs/reference/glossary/data-plane.md similarity index 100% rename from content/zh/docs/reference/glossary/data-plane.md rename to content/zh-cn/docs/reference/glossary/data-plane.md diff --git a/content/zh/docs/reference/glossary/deployment.md b/content/zh-cn/docs/reference/glossary/deployment.md similarity index 94% rename from content/zh/docs/reference/glossary/deployment.md rename to content/zh-cn/docs/reference/glossary/deployment.md index ca52523e46348..e09f6805813f6 100644 --- a/content/zh/docs/reference/glossary/deployment.md +++ b/content/zh-cn/docs/reference/glossary/deployment.md @@ -2,7 +2,7 @@ title: Deployment id: deployment date: 2018-04-12 -full_link: /zh/docs/concepts/workloads/controllers/deployment/ +full_link: /zh-cn/docs/concepts/workloads/controllers/deployment/ short_description: > Deployment 是管理应用副本的 API 对象。 diff --git a/content/zh/docs/reference/glossary/developer.md b/content/zh-cn/docs/reference/glossary/developer.md similarity index 100% rename from content/zh/docs/reference/glossary/developer.md rename to content/zh-cn/docs/reference/glossary/developer.md diff --git a/content/zh/docs/reference/glossary/device-plugin.md b/content/zh-cn/docs/reference/glossary/device-plugin.md similarity index 89% rename from content/zh/docs/reference/glossary/device-plugin.md rename to content/zh-cn/docs/reference/glossary/device-plugin.md index 30664b8c31a6e..9bbecf8d1dd87 100644 --- a/content/zh/docs/reference/glossary/device-plugin.md +++ b/content/zh-cn/docs/reference/glossary/device-plugin.md @@ -2,7 +2,7 @@ title: 设备插件(Device Plugin) id: device-plugin date: 2019-02-02 -full_link: /zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/ +full_link: /zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/ short_description: > 一种软件扩展,可以使 Pod 访问由特定厂商初始化或者安装的设备。 aka: @@ -49,4 +49,4 @@ See [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) for more information. 
--> -更多信息请查阅[设备插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) \ No newline at end of file +更多信息请查阅[设备插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/disruption.md b/content/zh-cn/docs/reference/glossary/disruption.md similarity index 73% rename from content/zh/docs/reference/glossary/disruption.md rename to content/zh-cn/docs/reference/glossary/disruption.md index 2b59797a3966f..aa4fca945c0e3 100644 --- a/content/zh/docs/reference/glossary/disruption.md +++ b/content/zh-cn/docs/reference/glossary/disruption.md @@ -2,7 +2,7 @@ title: 干扰(Disruption) id: disruption date: 2019-09-10 -full_link: /zh/docs/concepts/workloads/pods/disruptions/ +full_link: /zh-cn/docs/concepts/workloads/pods/disruptions/ short_description: > 导致 Pod 服务停止的事件。 aka: @@ -41,7 +41,7 @@ Kubernetes terms that an _involuntary disruption_. See [Disruptions](/docs/concepts/workloads/pods/disruptions/) for more information. --> -如果您作为一个集群操作人员,销毁了一个从属于某个应用的 Pod, Kubernetes 视之为 _自愿干扰(Voluntary Disruption)_。如果由于节点故障 -或者影响更大区域故障的断电导致 Pod 离线,kubernetes 视之为 _非愿干扰(Involuntary Disruption)_。 +如果你作为一个集群操作人员,销毁了一个从属于某个应用的 Pod, Kubernetes 视之为**自愿干扰(Voluntary Disruption)**。 +如果由于节点故障 或者影响更大区域故障的断电导致 Pod 离线,kubernetes 视之为**非愿干扰(Involuntary Disruption)**。 -更多信息请查阅[Disruptions](/zh/docs/concepts/workloads/pods/disruptions/) \ No newline at end of file +更多信息请查阅[Disruptions](/zh-cn/docs/concepts/workloads/pods/disruptions/) \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/docker.md b/content/zh-cn/docs/reference/glossary/docker.md similarity index 95% rename from content/zh/docs/reference/glossary/docker.md rename to content/zh-cn/docs/reference/glossary/docker.md index bec6c834f2f7e..78ac4c9f5051e 100644 --- a/content/zh/docs/reference/glossary/docker.md +++ b/content/zh-cn/docs/reference/glossary/docker.md @@ -2,7 +2,7 @@ title: Docker id: docker date: 2018-04-12 -full_link: /zh/docs/reference/kubectl/docker-cli-to-kubectl/ +full_link: /zh-cn/docs/reference/kubectl/docker-cli-to-kubectl/ short_description: > Docker 是一种可以提供操作系统级别虚拟化(也称作容器)的软件技术。 diff --git a/content/zh-cn/docs/reference/glossary/dockershim.md b/content/zh-cn/docs/reference/glossary/dockershim.md new file mode 100644 index 0000000000000..b12b08d54a4dd --- /dev/null +++ b/content/zh-cn/docs/reference/glossary/dockershim.md @@ -0,0 +1,40 @@ +--- +title: Dockershim +id: dockershim +date: 2022-04-15 +full_link: /zh-cn/dockershim +short_description: > + dockershim 是 Kubernetes v1.23 及之前版本中的一个组件,Kubernetes 系统组件通过它与 Docker Engine 通信。 + +aka: +tags: +- fundamental +--- + + + + + +dockershim 是 Kubernetes v1.23 及之前版本中的一个组件。 +Kubernetes 系统组件通过它与 {{< glossary_tooltip text="Docker Engine" term_id="docker" >}} 通信。 + + + +从 Kubernetes v1.24 开始,dockershim 已从 Kubernetes 中移除. 
+想了解更多信息,可参考[移除 Dockershim 的常见问题](/zh-cn/dockershim)。 \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/downstream.md b/content/zh-cn/docs/reference/glossary/downstream.md similarity index 86% rename from content/zh/docs/reference/glossary/downstream.md rename to content/zh-cn/docs/reference/glossary/downstream.md index cba0229cb95b0..154e1f04790f3 100644 --- a/content/zh/docs/reference/glossary/downstream.md +++ b/content/zh-cn/docs/reference/glossary/downstream.md @@ -27,7 +27,7 @@ tags: --> 可以指:Kubernetes 生态系统中依赖于核心 Kubernetes 代码库或分支代码库的代码。 @@ -39,6 +39,6 @@ May refer to: code in the Kubernetes ecosystem that depends upon the core Kubern * In **GitHub** or **git**: The convention is to refer to a forked repo as *downstream*, whereas the source repo is considered *upstream*. --> -* 在 **Kubernetes 社区**中:*下游(downstream)* 在人们交流中常用来表示那些依赖核心 Kubernetes 代码库的生态系统、代码或者第三方工具。例如,Kubernete 的一个新特性可以被*下游(downstream)* 应用采用,以提升它们的功能性。 +* 在 **Kubernetes 社区**中:*下游(downstream)* 在人们交流中常用来表示那些依赖核心 Kubernetes 代码库的生态系统、代码或者第三方工具。例如,Kubernetes 的一个新特性可以被*下游(downstream)* 应用采用,以提升它们的功能性。 * 在 **GitHub** 或 **git** 中:约定用*下游(downstream)* 表示分支代码库,源代码库被认为是*上游(upstream)*。 diff --git a/content/zh/docs/reference/glossary/dynamic-volume-provisioning.md b/content/zh-cn/docs/reference/glossary/dynamic-volume-provisioning.md similarity index 96% rename from content/zh/docs/reference/glossary/dynamic-volume-provisioning.md rename to content/zh-cn/docs/reference/glossary/dynamic-volume-provisioning.md index 5606af15a4c4b..a4f8212ba87bf 100644 --- a/content/zh/docs/reference/glossary/dynamic-volume-provisioning.md +++ b/content/zh-cn/docs/reference/glossary/dynamic-volume-provisioning.md @@ -2,7 +2,7 @@ title: 动态卷供应(Dynamic Volume Provisioning) id: dynamicvolumeprovisioning date: 2018-04-12 -full_link: /zh/docs/concepts/storage/dynamic-provisioning/ +full_link: /zh-cn/docs/concepts/storage/dynamic-provisioning/ short_description: > 允许用户请求自动创建存储卷。 diff --git a/content/zh/docs/reference/glossary/endpoint-slice.md b/content/zh-cn/docs/reference/glossary/endpoint-slice.md similarity index 94% rename from content/zh/docs/reference/glossary/endpoint-slice.md rename to content/zh-cn/docs/reference/glossary/endpoint-slice.md index dcfc35e308fca..f08f9adf4bdef 100644 --- a/content/zh/docs/reference/glossary/endpoint-slice.md +++ b/content/zh-cn/docs/reference/glossary/endpoint-slice.md @@ -2,7 +2,7 @@ title: EndpointSlice id: endpoint-slice date: 2018-04-12 -full_link: /zh/docs/concepts/services-networking/endpoint-slices/ +full_link: /zh-cn/docs/concepts/services-networking/endpoint-slices/ short_description: > 一种将网络端点与 Kubernetes 资源组合在一起的方法。 diff --git a/content/zh/docs/reference/glossary/endpoint.md b/content/zh-cn/docs/reference/glossary/endpoint.md similarity index 100% rename from content/zh/docs/reference/glossary/endpoint.md rename to content/zh-cn/docs/reference/glossary/endpoint.md diff --git a/content/zh/docs/reference/glossary/ephemeral-container.md b/content/zh-cn/docs/reference/glossary/ephemeral-container.md similarity index 85% rename from content/zh/docs/reference/glossary/ephemeral-container.md rename to content/zh-cn/docs/reference/glossary/ephemeral-container.md index d937404797ccb..08161758998c4 100644 --- a/content/zh/docs/reference/glossary/ephemeral-container.md +++ b/content/zh-cn/docs/reference/glossary/ephemeral-container.md @@ -2,14 +2,14 @@ title: 临时容器(Ephemeral Container) id: ephemeral-container date: 2019-08-26 -full_link: /zh/docs/concepts/workloads/pods/ephemeral-containers/ 
+full_link: /zh-cn/docs/concepts/workloads/pods/ephemeral-containers/ short_description: > - 您可以在 Pod 中临时运行的一种容器类型 + 你可以在 Pod 中临时运行的一种容器类型 aka: tags: - fundamental --- - 您可以在 {{< glossary_tooltip term_id="pod" >}} 中临时运行的一种 {{< glossary_tooltip term_id="container" >}} 类型。 + 你可以在 {{< glossary_tooltip term_id="pod" >}} 中临时运行的一种 {{< glossary_tooltip term_id="container" >}} 类型。 -etcd 是兼具一致性和高可用性的键值数据库,可以作为保存 Kubernetes 所有集群数据的后台数据库。 +`etcd` 是兼顾一致性与高可用性的键值数据库,可以作为保存 Kubernetes 所有集群数据的后台数据库。 -您的 Kubernetes 集群的 etcd 数据库通常需要有个备份计划。 +你的 Kubernetes 集群的 `etcd` 数据库通常需要有个[备份](/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster)计划。 -要了解 etcd 更深层次的信息,请参考 [etcd 文档](https://etcd.io/docs/)。 +如果想要更深入的了解 `etcd`,请参考 [etcd 文档](https://etcd.io/docs/)。 diff --git a/content/zh/docs/reference/glossary/event.md b/content/zh-cn/docs/reference/glossary/event.md similarity index 96% rename from content/zh/docs/reference/glossary/event.md rename to content/zh-cn/docs/reference/glossary/event.md index 37d143e60b461..6c74215300c41 100644 --- a/content/zh/docs/reference/glossary/event.md +++ b/content/zh-cn/docs/reference/glossary/event.md @@ -51,6 +51,6 @@ Events should be treated as informative, best-effort, supplemental data. In Kubernetes, [auditing](/docs/tasks/debug/debug-cluster/audit/) generates a different kind of Event record (API group `audit.k8s.io`). --> -在 Kubernetes 中,[审计](/zh/docs/tasks/debug/debug-cluster/audit/) +在 Kubernetes 中,[审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/) 机制会生成一种不同种类的 Event 记录(API 组为 `audit.k8s.io`)。 diff --git a/content/zh/docs/reference/glossary/eviction.md b/content/zh-cn/docs/reference/glossary/eviction.md similarity index 81% rename from content/zh/docs/reference/glossary/eviction.md rename to content/zh-cn/docs/reference/glossary/eviction.md index 30666ca2b1f5d..142b9fa6028cd 100644 --- a/content/zh/docs/reference/glossary/eviction.md +++ b/content/zh-cn/docs/reference/glossary/eviction.md @@ -2,7 +2,7 @@ title: 驱逐 id: eviction date: 2021-05-08 -full_link: /zh/docs/concepts/scheduling-eviction/ +full_link: /zh-cn/docs/concepts/scheduling-eviction/ short_description: > 终止节点上一个或多个 Pod 的过程。 aka: @@ -33,6 +33,6 @@ There are two kinds of eviction: * [API-initiated eviction](/docs/reference/generated/kubernetes-api/v1.23/) --> 驱逐的两种类型 -* [节点压力驱逐](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/) +* [节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) * [API 发起的驱逐](/docs/reference/generated/kubernetes-api/v1.23/) diff --git a/content/zh/docs/reference/glossary/extensions.md b/content/zh-cn/docs/reference/glossary/extensions.md similarity index 88% rename from content/zh/docs/reference/glossary/extensions.md rename to content/zh-cn/docs/reference/glossary/extensions.md index bec430f31ddb2..ae33b45a2ef00 100644 --- a/content/zh/docs/reference/glossary/extensions.md +++ b/content/zh-cn/docs/reference/glossary/extensions.md @@ -2,7 +2,7 @@ title: 扩展组件(Extensions) id: Extensions date: 2019-02-01 -full_link: /zh/docs/concepts/extend-kubernetes/extend-cluster/#extensions +full_link: /zh-cn/docs/concepts/extend-kubernetes/extend-cluster/#extensions short_description: > 扩展组件是扩展并与 Kubernetes 深度集成以支持新型硬件的软件组件。 aka: @@ -37,5 +37,5 @@ Many cluster administrators use a hosted or distribution instance of Kubernetes. 
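就前文 `etcd` 词条中提到的备份而言,下面是一个最小的快照备份示意(仅供参考;假设 etcd 监听 `https://127.0.0.1:2379`,证书位于 kubeadm 的默认路径 `/etc/kubernetes/pki/etcd/`,请按实际环境调整):

```bash
# 为 etcd 创建快照备份(端点、证书与输出路径均为假设值)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db
```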
许多集群管理员会使用托管的 Kubernetes 或其某种发行包,这些集群预装了扩展。 因此,大多数 Kubernetes 用户将不需要 -安装[扩展组件](/zh/docs/concepts/extend-kubernetes/extend-cluster/#extensions), +安装[扩展组件](/zh-cn/docs/concepts/extend-kubernetes/extend-cluster/#extensions), 需要编写新的扩展组件的用户就更少了。 diff --git a/content/zh/docs/reference/glossary/finalizer.md b/content/zh-cn/docs/reference/glossary/finalizer.md similarity index 94% rename from content/zh/docs/reference/glossary/finalizer.md rename to content/zh-cn/docs/reference/glossary/finalizer.md index 82b462784cc9b..2de7c100737f6 100644 --- a/content/zh/docs/reference/glossary/finalizer.md +++ b/content/zh-cn/docs/reference/glossary/finalizer.md @@ -2,7 +2,7 @@ title: Finalizer id: finalizer date: 2021-07-07 -full_link: /zh/docs/concepts/overview/working-with-objects/finalizers/ +full_link: /zh-cn/docs/concepts/overview/working-with-objects/finalizers/ short_description: > 一个带有命名空间的键,告诉 Kubernetes 等到特定的条件被满足后, 再完全删除被标记为删除的资源。 @@ -17,7 +17,7 @@ tags: title: Finalizer id: finalizer date: 2021-07-07 -full_link: /zh/docs/concepts/overview/working-with-objects/finalizers/ +full_link: /zh-cn/docs/concepts/overview/working-with-objects/finalizers/ short_description: > A namespaced key that tells Kubernetes to wait until specific conditions are met before it fully deletes an object marked for deletion. diff --git a/content/zh/docs/reference/glossary/flexvolume.md b/content/zh-cn/docs/reference/glossary/flexvolume.md similarity index 94% rename from content/zh/docs/reference/glossary/flexvolume.md rename to content/zh-cn/docs/reference/glossary/flexvolume.md index e0f15afb4c77d..c00fd059223d4 100644 --- a/content/zh/docs/reference/glossary/flexvolume.md +++ b/content/zh-cn/docs/reference/glossary/flexvolume.md @@ -2,7 +2,7 @@ title: FlexVolume id: flexvolume date: 2018-06-25 -full_link: /zh/docs/concepts/storage/volumes/#flexvolume +full_link: /zh-cn/docs/concepts/storage/volumes/#flexvolume short_description: > FlexVolume 是一个已弃用的接口,用于创建树外卷插件。 {{< glossary_tooltip text="容器存储接口(CSI)" term_id="csi" >}} @@ -47,6 +47,6 @@ FlexVolume 驱动程序的二进制文件和依赖项必须安装在主机上。 * [More information on FlexVolumes](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) * [Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md) --> -* [Kubernetes 文档中的 Flexvolume](/zh/docs/concepts/storage/volumes/#flexvolume) +* [Kubernetes 文档中的 Flexvolume](/zh-cn/docs/concepts/storage/volumes/#flexvolume) * [更多关于 Flexvolumes 的信息](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) * [存储供应商的卷插件 FAQ](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md) diff --git a/content/zh/docs/reference/glossary/garbage-collection.md b/content/zh-cn/docs/reference/glossary/garbage-collection.md similarity index 73% rename from content/zh/docs/reference/glossary/garbage-collection.md rename to content/zh-cn/docs/reference/glossary/garbage-collection.md index effc51483ccf6..f6ca64f58d061 100644 --- a/content/zh/docs/reference/glossary/garbage-collection.md +++ b/content/zh-cn/docs/reference/glossary/garbage-collection.md @@ -2,7 +2,7 @@ title: 垃圾收集 id: garbage-collection date: 2021-07-07 -full_link: /zh/docs/concepts/workloads/controllers/garbage-collection/ +full_link: /zh-cn/docs/concepts/workloads/controllers/garbage-collection/ short_description: > Kubernetes 用于清理集群资源的各种机制的统称。 @@ -42,8 +42,8 @@ Kubernetes uses garbage collection to clean up 
resources like [unused containers that have expired or failed. --> Kubernetes 使用垃圾收集机制来清理资源,例如: -[未使用的容器和镜像](/zh/docs/concepts/workloads/controllers/garbage-collection/#containers-images)、 -[失败的 Pod](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)、 -[目标资源拥有的对象](/zh/docs/concepts/overview/working-with-objects/owners-dependents/)、 -[已完成的 Job](/zh/docs/concepts/workloads/controllers/ttlafterfinished/)、 +[未使用的容器和镜像](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/#containers-images)、 +[失败的 Pod](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)、 +[目标资源拥有的对象](/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents/)、 +[已完成的 Job](/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/)、 过期或出错的资源。 \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/helm-chart.md b/content/zh-cn/docs/reference/glossary/helm-chart.md similarity index 100% rename from content/zh/docs/reference/glossary/helm-chart.md rename to content/zh-cn/docs/reference/glossary/helm-chart.md diff --git a/content/zh/docs/reference/glossary/horizontal-pod-autoscaler.md b/content/zh-cn/docs/reference/glossary/horizontal-pod-autoscaler.md similarity index 95% rename from content/zh/docs/reference/glossary/horizontal-pod-autoscaler.md rename to content/zh-cn/docs/reference/glossary/horizontal-pod-autoscaler.md index 45e5033a5a3fd..49b18de3eecce 100644 --- a/content/zh/docs/reference/glossary/horizontal-pod-autoscaler.md +++ b/content/zh-cn/docs/reference/glossary/horizontal-pod-autoscaler.md @@ -2,7 +2,7 @@ title: Pod 水平自动扩缩器(Horizontal Pod Autoscaler) id: horizontal-pod-autoscaler date: 2018-04-12 -full_link: /zh/docs/tasks/run-application/horizontal-pod-autoscale/ +full_link: /zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/ short_description: > Pod 水平自动扩缩器(Horizontal Pod Autoscaler)是一种 API 资源,它根据目标 CPU 利用率或自定义度量目标扩缩 Pod 副本的数量。 diff --git a/content/zh/docs/reference/glossary/host-aliases.md b/content/zh-cn/docs/reference/glossary/host-aliases.md similarity index 92% rename from content/zh/docs/reference/glossary/host-aliases.md rename to content/zh-cn/docs/reference/glossary/host-aliases.md index c2be10bee2b41..df01a76b4ae1f 100644 --- a/content/zh/docs/reference/glossary/host-aliases.md +++ b/content/zh-cn/docs/reference/glossary/host-aliases.md @@ -2,7 +2,7 @@ title: HostAliases id: HostAliases date: 2019-01-31 -full_link: /docs/reference/generated/kubernetes-api/{{< param "version" >}}/#hostalias-v1-core +full_link: /zh-cn/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#hostalias-v1-core short_description: > 主机别名 (HostAliases) 是一组 IP 地址和主机名的映射,用于注入到 Pod 内的 hosts 文件。 @@ -10,10 +10,7 @@ aka: tags: - operation --- - 主机别名 (HostAliases) 是一组 IP 地址和主机名的映射,用于注入到 {{< glossary_tooltip text="Pod" term_id="pod" >}} 内的 hosts 文件。 - + + + 主机别名 (HostAliases) 是一组 IP 地址和主机名的映射,用于注入到 {{< glossary_tooltip text="Pod" term_id="pod" >}} 内的 hosts 文件。 + diff --git a/content/zh/docs/reference/glossary/image.md b/content/zh-cn/docs/reference/glossary/image.md similarity index 100% rename from content/zh/docs/reference/glossary/image.md rename to content/zh-cn/docs/reference/glossary/image.md diff --git a/content/zh/docs/reference/glossary/index.md b/content/zh-cn/docs/reference/glossary/index.md similarity index 100% rename from content/zh/docs/reference/glossary/index.md rename to content/zh-cn/docs/reference/glossary/index.md diff --git a/content/zh/docs/reference/glossary/ingress.md 
b/content/zh-cn/docs/reference/glossary/ingress.md similarity index 93% rename from content/zh/docs/reference/glossary/ingress.md rename to content/zh-cn/docs/reference/glossary/ingress.md index 2b0874157412e..e48b9948a1990 100644 --- a/content/zh/docs/reference/glossary/ingress.md +++ b/content/zh-cn/docs/reference/glossary/ingress.md @@ -2,7 +2,7 @@ title: Ingress id: ingress date: 2018-04-12 -full_link: /zh/docs/concepts/services-networking/ingress/ +full_link: /zh-cn/docs/concepts/services-networking/ingress/ short_description: > Ingress 是对集群中服务的外部访问进行管理的 API 对象,典型的访问方式是 HTTP。 diff --git a/content/zh/docs/reference/glossary/init-container.md b/content/zh-cn/docs/reference/glossary/init-container.md similarity index 100% rename from content/zh/docs/reference/glossary/init-container.md rename to content/zh-cn/docs/reference/glossary/init-container.md diff --git a/content/zh/docs/reference/glossary/istio.md b/content/zh-cn/docs/reference/glossary/istio.md similarity index 100% rename from content/zh/docs/reference/glossary/istio.md rename to content/zh-cn/docs/reference/glossary/istio.md diff --git a/content/zh/docs/reference/glossary/job.md b/content/zh-cn/docs/reference/glossary/job.md similarity index 94% rename from content/zh/docs/reference/glossary/job.md rename to content/zh-cn/docs/reference/glossary/job.md index aa557297be310..220f8111449ef 100644 --- a/content/zh/docs/reference/glossary/job.md +++ b/content/zh-cn/docs/reference/glossary/job.md @@ -2,7 +2,7 @@ title: Job id: job date: 2018-04-12 -full_link: /zh/docs/concepts/workloads/controllers/job/ +full_link: /zh-cn/docs/concepts/workloads/controllers/job/ short_description: > Job 是需要运行完成的确定性的或批量的任务。 diff --git a/content/zh/docs/reference/glossary/kops.md b/content/zh-cn/docs/reference/glossary/kops.md similarity index 91% rename from content/zh/docs/reference/glossary/kops.md rename to content/zh-cn/docs/reference/glossary/kops.md index d3241b763edc0..1719e2a8a67ce 100644 --- a/content/zh/docs/reference/glossary/kops.md +++ b/content/zh-cn/docs/reference/glossary/kops.md @@ -56,7 +56,7 @@ Support for using kops with GCE and VMware vSphere are in alpha. * The ability to directly provision, or to generate Terraform manifests --> -`kops` 为您的集群提供了: +`kops` 为你的集群提供了: * 全自动化安装 * 基于 DNS 的集群标识 @@ -69,4 +69,5 @@ Support for using kops with GCE and VMware vSphere are in alpha. You can also build your own cluster using {{< glossary_tooltip term_id="kubeadm" >}} as a building block. `kops` builds on the kubeadm work. 
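作为补充,下面是一个基于 AWS 的 `kops` 使用示意(集群名、可用区与 S3 存储桶均为假设值;实际使用前需先准备好 AWS 凭证以及用于保存集群状态的存储桶):

```bash
# 假设:用于保存 kops 集群状态的 S3 存储桶已创建
export KOPS_STATE_STORE=s3://example-kops-state-store

# 生成集群配置(使用 .k8s.local 结尾的 gossip 域名可免去公共 DNS 配置)
kops create cluster --name=example.k8s.local --zones=us-east-1a

# 应用配置并真正创建集群
kops update cluster --name=example.k8s.local --yes
```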
--> -您也可以将自己的集群作为一个构造块,使用 {{< glossary_tooltip term_id="kubeadm" >}} 构造集群。`kops` 是建立在 kubeadm 之上的。 +你也可以将自己的集群作为一个构造块,使用 {{< glossary_tooltip term_id="kubeadm" >}} 构造集群。 +`kops` 是建立在 kubeadm 之上的。 diff --git a/content/zh-cn/docs/reference/glossary/kube-apiserver.md b/content/zh-cn/docs/reference/glossary/kube-apiserver.md new file mode 100644 index 0000000000000..7e93998ecd1f3 --- /dev/null +++ b/content/zh-cn/docs/reference/glossary/kube-apiserver.md @@ -0,0 +1,48 @@ +--- +title: API 服务器 +id: kube-apiserver +date: 2018-04-12 +full_link: /zh-cn/docs/concepts/overview/components/#kube-apiserver +short_description: > + 提供 Kubernetes API 服务的控制面组件。 + +aka: +- kube-apiserver +tags: +- architecture +- fundamental +--- + + + +API 服务器是 Kubernetes {{< glossary_tooltip text="控制平面" term_id="control-plane" >}}的组件, +该组件负责公开了 Kubernetes API,负责处理接受请求的工作。 +API 服务器是 Kubernetes 控制平面的前端。 + + + + +Kubernetes API 服务器的主要实现是 [kube-apiserver](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)。 +`kube-apiserver` 设计上考虑了水平扩缩,也就是说,它可通过部署多个实例来进行扩缩。 +你可以运行 `kube-apiserver` 的多个实例,并在这些实例之间平衡流量。 diff --git a/content/zh-cn/docs/reference/glossary/kube-controller-manager.md b/content/zh-cn/docs/reference/glossary/kube-controller-manager.md new file mode 100644 index 0000000000000..132a05936b2cc --- /dev/null +++ b/content/zh-cn/docs/reference/glossary/kube-controller-manager.md @@ -0,0 +1,46 @@ +--- +title: kube-controller-manager +id: kube-controller-manager +date: 2018-04-12 +full_link: /zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/ +short_description: > + 主节点上运行控制器的组件。 + +aka: +tags: +- architecture +- fundamental +--- + + + +[kube-controller-manager](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) +是{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}的组件, +负责运行{{< glossary_tooltip text="控制器" term_id="controller" >}}进程。 + + + + +从逻辑上讲, +每个{{< glossary_tooltip text="控制器" term_id="controller" >}}都是一个单独的进程, +但是为了降低复杂性,它们都被编译到同一个可执行文件,并在同一个进程中运行。 + diff --git a/content/zh-cn/docs/reference/glossary/kube-proxy.md b/content/zh-cn/docs/reference/glossary/kube-proxy.md new file mode 100644 index 0000000000000..ef7b661715896 --- /dev/null +++ b/content/zh-cn/docs/reference/glossary/kube-proxy.md @@ -0,0 +1,53 @@ +--- +title: kube-proxy +id: kube-proxy +date: 2018-04-12 +full_link: /zh-cn/docs/reference/command-line-tools-reference/kube-proxy/ +short_description: > + `kube-proxy` 是集群中每个节点上运行的网络代理。 + +aka: +tags: +- fundamental +- networking +--- + + +[kube-proxy](/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/) +是集群中每个{{< glossary_tooltip text="节点(node)" term_id="node" >}}所上运行的网络代理, +实现 Kubernetes {{< glossary_tooltip term_id="service">}} 概念的一部分。 + + + + +kube-proxy 维护节点上的一些网络规则, +这些网络规则会允许从集群内部或外部的网络会话与 Pod 进行网络通信。 + + +如果操作系统提供了可用的数据包过滤层,则 kube-proxy 会通过它来实现网络规则。 +否则,kube-proxy 仅做流量转发。 diff --git a/content/zh/docs/reference/glossary/kube-scheduler.md b/content/zh-cn/docs/reference/glossary/kube-scheduler.md similarity index 64% rename from content/zh/docs/reference/glossary/kube-scheduler.md rename to content/zh-cn/docs/reference/glossary/kube-scheduler.md index 9d6a1842fb60c..f9c6ecdaf816b 100644 --- a/content/zh/docs/reference/glossary/kube-scheduler.md +++ b/content/zh-cn/docs/reference/glossary/kube-scheduler.md @@ -2,7 +2,7 @@ title: kube-scheduler id: kube-scheduler date: 2018-04-12 -full_link: /zh/docs/reference/command-line-tools-reference/kube-scheduler/ +full_link: 
/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/ short_description: > 控制平面组件,负责监视新创建的、未指定运行节点的 Pod,选择节点让 Pod 在上面运行。 @@ -33,7 +33,9 @@ Control plane component that watches for newly created {{< glossary_tooltip term_id="node" text="node">}}, and selects a node for them to run on.--> - 控制平面组件,负责监视新创建的、未指定运行{{< glossary_tooltip term_id="node" text="节点(node)">}}的 {{< glossary_tooltip term_id="pod" text="Pods" >}},选择节点让 Pod 在上面运行。 + `kube-scheduler` 是{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}的组件, + 负责监视新创建的、未指定运行{{< glossary_tooltip term_id="node" text="节点(node)">}}的 {{< glossary_tooltip term_id="pod" text="Pods" >}}, + 并选择节点来让 Pod 在上面运行。 @@ -41,4 +43,5 @@ to run on.--> Factors taken into account for scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference and deadlines. --> -调度决策考虑的因素包括单个 Pod 和 Pod 集合的资源需求、硬件/软件/策略约束、亲和性和反亲和性规范、数据位置、工作负载间的干扰和最后时限。 +调度决策考虑的因素包括单个 Pod 及 Pods 集合的资源需求、软硬件及策略约束、 +亲和性及反亲和性规范、数据位置、工作负载间的干扰及最后时限。 diff --git a/content/zh/docs/reference/glossary/kubeadm.md b/content/zh-cn/docs/reference/glossary/kubeadm.md similarity index 92% rename from content/zh/docs/reference/glossary/kubeadm.md rename to content/zh-cn/docs/reference/glossary/kubeadm.md index f565376100a89..106edda93c66b 100644 --- a/content/zh/docs/reference/glossary/kubeadm.md +++ b/content/zh-cn/docs/reference/glossary/kubeadm.md @@ -2,7 +2,7 @@ title: Kubeadm id: kubeadm date: 2018-04-12 -full_link: /zh/docs/setup/production-environment/tools/kubeadm/ +full_link: /zh-cn/docs/setup/production-environment/tools/kubeadm/ short_description: > 用来快速安装 Kubernetes 并搭建安全稳定的集群的工具。 diff --git a/content/zh/docs/reference/glossary/kubectl.md b/content/zh-cn/docs/reference/glossary/kubectl.md similarity index 100% rename from content/zh/docs/reference/glossary/kubectl.md rename to content/zh-cn/docs/reference/glossary/kubectl.md diff --git a/content/zh/docs/reference/glossary/kubelet.md b/content/zh-cn/docs/reference/glossary/kubelet.md similarity index 75% rename from content/zh/docs/reference/glossary/kubelet.md rename to content/zh-cn/docs/reference/glossary/kubelet.md index a00971e224619..41b4dea14bbbd 100644 --- a/content/zh/docs/reference/glossary/kubelet.md +++ b/content/zh-cn/docs/reference/glossary/kubelet.md @@ -2,14 +2,13 @@ title: Kubelet id: kubelet date: 2018-04-12 -full_link: /docs/reference/generated/kubelet +full_link: /zh-cn/docs/reference/generated/kubelet short_description: > 一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。 -aka: +aka: tags: - fundamental -- core-object --- -一个在集群中每个{{< glossary_tooltip text="节点(node)" term_id="node" >}}上运行的代理。 -它保证{{< glossary_tooltip text="容器(containers)" term_id="container" >}}都 -运行在 {{< glossary_tooltip text="Pod" term_id="pod" >}} 中。 +`kubelet` 会在集群中每个{{< glossary_tooltip text="节点(node)" term_id="node" >}}上运行。 +它保证{{< glossary_tooltip text="容器(containers)" term_id="container" >}}都运行在 +{{< glossary_tooltip text="Pod" term_id="pod" >}} 中。 - + -kubelet 接收一组通过各类机制提供给它的 PodSpecs,确保这些 PodSpecs -中描述的容器处于运行状态且健康。 +kubelet 接收一组通过各类机制提供给它的 PodSpecs, +确保这些 PodSpecs 中描述的容器处于运行状态且健康。 kubelet 不会管理不是由 Kubernetes 创建的容器。 diff --git a/content/zh/docs/reference/glossary/kubernetes-api.md b/content/zh-cn/docs/reference/glossary/kubernetes-api.md similarity index 96% rename from content/zh/docs/reference/glossary/kubernetes-api.md rename to content/zh-cn/docs/reference/glossary/kubernetes-api.md index 
060ddf93cd48e..0a6c096a420dd 100644 --- a/content/zh/docs/reference/glossary/kubernetes-api.md +++ b/content/zh-cn/docs/reference/glossary/kubernetes-api.md @@ -2,7 +2,7 @@ title: Kubernetes API id: kubernetes-api date: 2018-04-12 -full_link: /zh/docs/concepts/overview/kubernetes-api/ +full_link: /zh-cn/docs/concepts/overview/kubernetes-api/ short_description: > Kubernetes API 是通过 RESTful 接口提供 Kubernetes 功能服务并负责集群状态存储的应用程序。 diff --git a/content/zh/docs/reference/glossary/label.md b/content/zh-cn/docs/reference/glossary/label.md similarity index 93% rename from content/zh/docs/reference/glossary/label.md rename to content/zh-cn/docs/reference/glossary/label.md index f2c661cc44bbb..e0654faa26a1f 100644 --- a/content/zh/docs/reference/glossary/label.md +++ b/content/zh-cn/docs/reference/glossary/label.md @@ -2,7 +2,7 @@ title: 标签(Label) id: label date: 2018-04-12 -full_link: /zh/docs/concepts/overview/working-with-objects/labels/ +full_link: /zh-cn/docs/concepts/overview/working-with-objects/labels/ short_description: > 用来为对象设置可标识的属性标记;这些标记对用户而言是有意义且重要的。 diff --git a/content/zh/docs/reference/glossary/limitrange.md b/content/zh-cn/docs/reference/glossary/limitrange.md similarity index 100% rename from content/zh/docs/reference/glossary/limitrange.md rename to content/zh-cn/docs/reference/glossary/limitrange.md diff --git a/content/zh/docs/reference/glossary/logging.md b/content/zh-cn/docs/reference/glossary/logging.md similarity index 82% rename from content/zh/docs/reference/glossary/logging.md rename to content/zh-cn/docs/reference/glossary/logging.md index 8a88fe527e288..17e490de2768b 100644 --- a/content/zh/docs/reference/glossary/logging.md +++ b/content/zh-cn/docs/reference/glossary/logging.md @@ -2,7 +2,7 @@ title: 日志(Logging) id: logging date: 2019-04-04 -full_link: /zh/docs/concepts/cluster-administration/logging/ +full_link: /zh-cn/docs/concepts/cluster-administration/logging/ short_description: > 日志是集群或应用程序记录的事件列表。 @@ -18,7 +18,7 @@ tags: title: Logging id: logging date: 2019-04-04 -full_link: /zh/docs/concepts/cluster-administration/logging/ +full_link: /zh-cn/docs/concepts/cluster-administration/logging/ short_description: > Logs are the list of events that are logged by cluster or application. @@ -36,4 +36,4 @@ tags: Application and systems logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. 
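结合此处"日志(Logging)"词条,下面是几条查看容器日志的常用命令示例(其中的 Pod 与容器名均为假设值):

```bash
# 查看 Pod 中(默认)容器的日志
kubectl logs example-pod

# 指定容器并持续跟踪日志输出
kubectl logs example-pod -c example-container --follow

# 仅查看最近 1 小时内的日志
kubectl logs example-pod --since=1h
```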
--> -应用程序和系统日志可以帮助您了解集群内部发生的情况。日志对于调试问题和监视集群活动非常有用。 \ No newline at end of file +应用程序和系统日志可以帮助你了解集群内部发生的情况。日志对于调试问题和监视集群活动非常有用。 \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/managed-service.md b/content/zh-cn/docs/reference/glossary/managed-service.md similarity index 94% rename from content/zh/docs/reference/glossary/managed-service.md rename to content/zh-cn/docs/reference/glossary/managed-service.md index 567736f091da8..d115469431fde 100644 --- a/content/zh/docs/reference/glossary/managed-service.md +++ b/content/zh-cn/docs/reference/glossary/managed-service.md @@ -40,7 +40,7 @@ list, provision, and bind with Managed Services offered by --> 托管服务的一些例子有 AWS EC2、Azure SQL 数据库和 GCP Pub/Sub 等, 不过它们也可以是可以被某应用使用的任何软件交付件。 -[服务目录](/zh/docs/concepts/extend-kubernetes/service-catalog/) +[服务目录](/zh-cn/docs/concepts/extend-kubernetes/service-catalog/) 提供了一种方法用来列举、制备和绑定到 {{< glossary_tooltip text="服务代理商(Service Brokers)" term_id="service-broker" >}} 所提供的托管服务。 diff --git a/content/zh/docs/reference/glossary/manifest.md b/content/zh-cn/docs/reference/glossary/manifest.md similarity index 100% rename from content/zh/docs/reference/glossary/manifest.md rename to content/zh-cn/docs/reference/glossary/manifest.md diff --git a/content/zh/docs/reference/glossary/master.md b/content/zh-cn/docs/reference/glossary/master.md similarity index 100% rename from content/zh/docs/reference/glossary/master.md rename to content/zh-cn/docs/reference/glossary/master.md diff --git a/content/zh/docs/reference/glossary/member.md b/content/zh-cn/docs/reference/glossary/member.md similarity index 100% rename from content/zh/docs/reference/glossary/member.md rename to content/zh-cn/docs/reference/glossary/member.md diff --git a/content/zh/docs/reference/glossary/minikube.md b/content/zh-cn/docs/reference/glossary/minikube.md similarity index 91% rename from content/zh/docs/reference/glossary/minikube.md rename to content/zh-cn/docs/reference/glossary/minikube.md index 4d834cd8075e3..b688f968220c9 100644 --- a/content/zh/docs/reference/glossary/minikube.md +++ b/content/zh-cn/docs/reference/glossary/minikube.md @@ -44,4 +44,4 @@ You can use Minikube to Minikube 在用户计算机上的一个虚拟机内运行单节点 Kubernetes 集群。 你可以使用 Minikube -[在学习环境中尝试 Kubernetes](/zh/docs/setup/learning-environment/). +[在学习环境中尝试 Kubernetes](/zh-cn/docs/setup/learning-environment/). 
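下面是使用 Minikube 搭建本地学习集群的简要步骤(假设本机已安装 minikube,并具备可用的容器或虚拟机驱动):

```bash
# 启动一个单节点的本地 Kubernetes 集群
minikube start

# 确认节点已就绪
kubectl get nodes

# 练习结束后清理集群
minikube delete
```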
diff --git a/content/zh/docs/reference/glossary/mirror-pod.md b/content/zh-cn/docs/reference/glossary/mirror-pod.md similarity index 100% rename from content/zh/docs/reference/glossary/mirror-pod.md rename to content/zh-cn/docs/reference/glossary/mirror-pod.md diff --git a/content/zh/docs/reference/glossary/name.md b/content/zh-cn/docs/reference/glossary/name.md similarity index 93% rename from content/zh/docs/reference/glossary/name.md rename to content/zh-cn/docs/reference/glossary/name.md index 6c6dd2b15c9bc..984b85f1f73a2 100644 --- a/content/zh/docs/reference/glossary/name.md +++ b/content/zh-cn/docs/reference/glossary/name.md @@ -2,7 +2,7 @@ title: 名称(Name) id: name date: 2018-04-12 -full_link: /zh/docs/concepts/overview/working-with-objects/names/ +full_link: /zh-cn/docs/concepts/overview/working-with-objects/names/ short_description: > 客户端提供的字符串,用来指代资源 URL 中的对象,如 `/api/v1/pods/some-name`。 diff --git a/content/zh/docs/reference/glossary/namespace.md b/content/zh-cn/docs/reference/glossary/namespace.md similarity index 95% rename from content/zh/docs/reference/glossary/namespace.md rename to content/zh-cn/docs/reference/glossary/namespace.md index 5934d748fe79b..b2033d2a03792 100644 --- a/content/zh/docs/reference/glossary/namespace.md +++ b/content/zh-cn/docs/reference/glossary/namespace.md @@ -2,7 +2,7 @@ title: 名字空间(Namespace) id: namespace date: 2018-04-12 -full_link: /zh/docs/concepts/overview/working-with-objects/namespaces/ +full_link: /zh-cn/docs/concepts/overview/working-with-objects/namespaces/ short_description: > 名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。 diff --git a/content/zh/docs/reference/glossary/network-policy.md b/content/zh-cn/docs/reference/glossary/network-policy.md similarity index 72% rename from content/zh/docs/reference/glossary/network-policy.md rename to content/zh-cn/docs/reference/glossary/network-policy.md index 10cc5f6b64199..69c8f50319355 100644 --- a/content/zh/docs/reference/glossary/network-policy.md +++ b/content/zh-cn/docs/reference/glossary/network-policy.md @@ -2,7 +2,7 @@ title: 网络策略 id: network-policy date: 2018-04-12 -full_link: /zh/docs/concepts/services-networking/network-policies/ +full_link: /zh-cn/docs/concepts/services-networking/network-policies/ short_description: > 网络策略是一种规范,规定了允许 Pod 组之间、Pod 与其他网络端点之间以怎样的方式进行通信。 @@ -41,4 +41,7 @@ tags: Network Policies help you declaratively configure which Pods are allowed to connect to each other, which namespaces are allowed to communicate, and more specifically which port numbers to enforce each policy on. `NetworkPolicy` resources use labels to select Pods and define rules which specify what traffic is allowed to the selected Pods. Network Policies are implemented by a supported network plugin provided by a network provider. Be aware that creating a network resource without a controller to implement it will have no effect. 
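为便于理解上述 NetworkPolicy 的用法,这里给出一个最小示意(命名空间与 `app=web`、`app=frontend` 标签均为假设值;只有在集群使用支持 NetworkPolicy 的网络插件时,该策略才会真正生效):

```bash
# 仅允许带 app=frontend 标签的 Pod 访问 app=web Pod 的 TCP 80 端口
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80
EOF
```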
--> -网络策略帮助您声明式地配置允许哪些 Pod 之间接、哪些命名空间之间允许进行通信,并具体配置了哪些端口号来执行各个策略。`NetworkPolicy` 资源使用标签来选择 Pod,并定义了所选 Pod 可以接受什么样的流量。网络策略由网络提供商提供的并被 Kubernetes 支持的网络插件实现。请注意,当没有控制器实现网络资源时,创建网络资源将不会生效。 +网络策略帮助你声明式地配置允许哪些 Pod 之间、哪些命名空间之间允许进行通信, +并具体配置了哪些端口号来执行各个策略。`NetworkPolicy` 资源使用标签来选择 Pod, +并定义了所选 Pod 可以接受什么样的流量。网络策略由网络提供商提供的并被 Kubernetes 支持的网络插件实现。 +请注意,当没有控制器实现网络资源时,创建网络资源将不会生效。 diff --git a/content/zh/docs/reference/glossary/node-pressure-eviction.md b/content/zh-cn/docs/reference/glossary/node-pressure-eviction.md similarity index 95% rename from content/zh/docs/reference/glossary/node-pressure-eviction.md rename to content/zh-cn/docs/reference/glossary/node-pressure-eviction.md index c5faa5f135177..ad2108e9d8df7 100644 --- a/content/zh/docs/reference/glossary/node-pressure-eviction.md +++ b/content/zh-cn/docs/reference/glossary/node-pressure-eviction.md @@ -2,7 +2,7 @@ title: 节点压力驱逐 id: node-pressure-eviction date: 2021-05-13 -full_link: /zh/docs/concepts/scheduling-eviction/node-pressure-eviction/ +full_link: /zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/ short_description: > 节点压力驱逐是 kubelet 主动使 Pod 失败以回收节点上的资源的过程。 aka: diff --git a/content/zh/docs/reference/glossary/node.md b/content/zh-cn/docs/reference/glossary/node.md similarity index 93% rename from content/zh/docs/reference/glossary/node.md rename to content/zh-cn/docs/reference/glossary/node.md index d9ac8a9deb4a0..be28602caccd2 100644 --- a/content/zh/docs/reference/glossary/node.md +++ b/content/zh-cn/docs/reference/glossary/node.md @@ -2,7 +2,7 @@ title: 节点(Node) id: node date: 2018-04-12 -full_link: /zh/docs/concepts/architecture/nodes/ +full_link: /zh-cn/docs/concepts/architecture/nodes/ short_description: > Kubernetes 中的工作机器称作节点。 @@ -15,7 +15,7 @@ tags: title: Node id: node date: 2018-04-12 -full_link: /zh/docs/concepts/architecture/nodes/ +full_link: /zh-cn/docs/concepts/architecture/nodes/ short_description: > A node is a worker machine in Kubernetes. diff --git a/content/zh/docs/reference/glossary/object.md b/content/zh-cn/docs/reference/glossary/object.md similarity index 93% rename from content/zh/docs/reference/glossary/object.md rename to content/zh-cn/docs/reference/glossary/object.md index 99576b5665887..0a5a07d10a4fe 100644 --- a/content/zh/docs/reference/glossary/object.md +++ b/content/zh-cn/docs/reference/glossary/object.md @@ -2,7 +2,7 @@ title: 对象(Object) id: object date: 2020-10-12 -full_link: /zh/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects +full_link: /zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects short_description: > Kubernetes 系统中的实体, 代表了集群的部分状态。 aka: diff --git a/content/zh/docs/reference/glossary/operator-pattern.md b/content/zh-cn/docs/reference/glossary/operator-pattern.md similarity index 90% rename from content/zh/docs/reference/glossary/operator-pattern.md rename to content/zh-cn/docs/reference/glossary/operator-pattern.md index c6da67025704a..21751ab4ab3a9 100644 --- a/content/zh/docs/reference/glossary/operator-pattern.md +++ b/content/zh-cn/docs/reference/glossary/operator-pattern.md @@ -2,7 +2,7 @@ title: Operator 模式 id: operator-pattern date: 2019-05-21 -full_link: /zh/docs/concepts/extend-kubernetes/operator/ +full_link: /zh-cn/docs/concepts/extend-kubernetes/operator/ short_description: > 一种用于管理自定义资源的专用控制器 @@ -28,7 +28,7 @@ The [operator pattern](/docs/concepts/extend-kubernetes/operator/) is a system design that links a {{< glossary_tooltip term_id="controller" >}} to one or more custom resources. 
--> -[operator 模式](/zh/docs/concepts/extend-kubernetes/operator/) 是一种系统设计, +[operator 模式](/zh-cn/docs/concepts/extend-kubernetes/operator/) 是一种系统设计, 将 {{< glossary_tooltip term_id="controller" >}} 关联到一个或多个自定义资源。 diff --git a/content/zh/docs/reference/glossary/persistent-volume-claim.md b/content/zh-cn/docs/reference/glossary/persistent-volume-claim.md similarity index 96% rename from content/zh/docs/reference/glossary/persistent-volume-claim.md rename to content/zh-cn/docs/reference/glossary/persistent-volume-claim.md index f5ab57a3e8eb4..4eb2e49e2bc6c 100644 --- a/content/zh/docs/reference/glossary/persistent-volume-claim.md +++ b/content/zh-cn/docs/reference/glossary/persistent-volume-claim.md @@ -2,7 +2,7 @@ title: 持久卷申领(Persistent Volume Claim) id: persistent-volume-claim date: 2018-04-12 -full_link: /zh/docs/concepts/storage/persistent-volumes/ +full_link: /zh-cn/docs/concepts/storage/persistent-volumes/ short_description: > 声明在持久卷中定义的存储资源,以便可以将其挂载为容器中的卷。 diff --git a/content/zh/docs/reference/glossary/persistent-volume.md b/content/zh-cn/docs/reference/glossary/persistent-volume.md similarity index 96% rename from content/zh/docs/reference/glossary/persistent-volume.md rename to content/zh-cn/docs/reference/glossary/persistent-volume.md index 671e3a8bf3eee..8bcd24d6a82b2 100644 --- a/content/zh/docs/reference/glossary/persistent-volume.md +++ b/content/zh-cn/docs/reference/glossary/persistent-volume.md @@ -2,7 +2,7 @@ title: 持久卷(Persistent Volume) id: persistent-volume date: 2018-04-12 -full_link: /zh/docs/concepts/storage/persistent-volumes/ +full_link: /zh-cn/docs/concepts/storage/persistent-volumes/ short_description: > 持久卷是代表集群中一块存储空间的 API 对象。 它是通用的、可插拔的、并且不受单个 Pod 生命周期约束的持久化资源。 diff --git a/content/zh/docs/reference/glossary/platform-developer.md b/content/zh-cn/docs/reference/glossary/platform-developer.md similarity index 86% rename from content/zh/docs/reference/glossary/platform-developer.md rename to content/zh-cn/docs/reference/glossary/platform-developer.md index 41e8b99995d06..a03cf67d81c91 100644 --- a/content/zh/docs/reference/glossary/platform-developer.md +++ b/content/zh-cn/docs/reference/glossary/platform-developer.md @@ -41,8 +41,8 @@ develop extensions which are contributed to the Kubernetes community. Others develop closed-source commercial or site-specific extensions. 
--> -平台开发人员可以使用[定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) -或[使用汇聚层扩展 Kubernetes API](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) +平台开发人员可以使用[定制资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +或[使用汇聚层扩展 Kubernetes API](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) 来为其 Kubernetes 实例增加功能,特别是为其应用程序添加功能。 一些平台开发人员也是 kubernetes {{< glossary_tooltip text="贡献者" term_id="contributor" >}}, 他们会开发贡献给 Kubernetes 社区的扩展。 diff --git a/content/zh/docs/reference/glossary/pod-disruption-budget.md b/content/zh-cn/docs/reference/glossary/pod-disruption-budget.md similarity index 89% rename from content/zh/docs/reference/glossary/pod-disruption-budget.md rename to content/zh-cn/docs/reference/glossary/pod-disruption-budget.md index 444bec87e35c5..32077d48cd72e 100644 --- a/content/zh/docs/reference/glossary/pod-disruption-budget.md +++ b/content/zh-cn/docs/reference/glossary/pod-disruption-budget.md @@ -1,7 +1,7 @@ --- id: pod-disruption-budget title: Pod Disruption Budget -full-link: /zh/docs/concepts/workloads/pods/disruptions/ +full-link: /zh-cn/docs/concepts/workloads/pods/disruptions/ date: 2019-02-12 short_description: > Pod Disruption Budget 是这样一种对象:它保证在主动中断( voluntary disruptions)时,多实例应用的 {{< glossary_tooltip text="Pod" term_id="pod" >}} 不会少于一定的数量。 @@ -43,7 +43,7 @@ tags: Involuntary disruptions cannot be prevented by PDBs; however they do count against the budget. --> - [Pod 干扰预算(Pod Disruption Budget,PDB)](/zh/docs/concepts/workloads/pods/disruptions/) + [Pod 干扰预算(Pod Disruption Budget,PDB)](/zh-cn/docs/concepts/workloads/pods/disruptions/) 使应用所有者能够为多实例应用创建一个对象,来确保一定数量的具有指定标签的 Pod 在任何时候都不会被主动驱逐。 PDB 无法防止非主动的中断,但是会计入预算(budget)。 diff --git a/content/zh/docs/reference/glossary/pod-disruption.md b/content/zh-cn/docs/reference/glossary/pod-disruption.md similarity index 90% rename from content/zh/docs/reference/glossary/pod-disruption.md rename to content/zh-cn/docs/reference/glossary/pod-disruption.md index 5ea9d70b10137..39b8ce8d83994 100644 --- a/content/zh/docs/reference/glossary/pod-disruption.md +++ b/content/zh-cn/docs/reference/glossary/pod-disruption.md @@ -37,7 +37,7 @@ tags: Pods on Nodes are terminated either voluntarily or involuntarily. --> -[pod 干扰](/zh/docs/concepts/workloads/pods/disruptions/) 是指节点上的 pod 被自愿或非自愿终止的过程。 +[pod 干扰](/zh-cn/docs/concepts/workloads/pods/disruptions/) 是指节点上的 pod 被自愿或非自愿终止的过程。 diff --git a/content/zh/docs/reference/glossary/pod-lifecycle.md b/content/zh-cn/docs/reference/glossary/pod-lifecycle.md similarity index 89% rename from content/zh/docs/reference/glossary/pod-lifecycle.md rename to content/zh-cn/docs/reference/glossary/pod-lifecycle.md index d83ca39ad0bd2..8eff328c4d730 100644 --- a/content/zh/docs/reference/glossary/pod-lifecycle.md +++ b/content/zh-cn/docs/reference/glossary/pod-lifecycle.md @@ -2,7 +2,7 @@ title: Pod 生命周期 id: pod-lifecycle date: 2019-02-17 -full-link: /zh/docs/concepts/workloads/pods/pod-lifecycle/ +full-link: /zh-cn/docs/concepts/workloads/pods/pod-lifecycle/ related: - pod - container @@ -36,7 +36,7 @@ A high-level summary of what phase the Pod is in within its lifecyle. 
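针对 Pod 生命周期中"阶段(phase)"这一概念,下面是两个简单的查询示例(Pod 名称为假设值):

```bash
# 查看单个 Pod 当前所处的阶段(Pending、Running、Succeeded、Failed、Unknown 之一)
kubectl get pod example-pod -o jsonpath='{.status.phase}'

# 以自定义列的形式查看所有 Pod 的阶段
kubectl get pods -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
```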
-[Pod 生命周期](/zh/docs/concepts/workloads/pods/pod-lifecycle/) 是关于 Pod +[Pod 生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/) 是关于 Pod 处于哪个阶段的概述。包含了下面5种可能的的阶段: Running、Pending、Succeeded、 Failed、Unknown。关于 Pod 的阶段的更高级描述请查阅 [PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core) `phase` 字段。 diff --git a/content/zh/docs/reference/glossary/pod-priority.md b/content/zh-cn/docs/reference/glossary/pod-priority.md similarity index 85% rename from content/zh/docs/reference/glossary/pod-priority.md rename to content/zh-cn/docs/reference/glossary/pod-priority.md index d029d0593cfdc..1ddc48e1a84c4 100644 --- a/content/zh/docs/reference/glossary/pod-priority.md +++ b/content/zh-cn/docs/reference/glossary/pod-priority.md @@ -2,7 +2,7 @@ title: Pod 优先级(Pod Priority) id: pod-priority date: 2019-01-31 -full_link: /zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority +full_link: /zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority short_description: > Pod 优先级表示一个 Pod 相对于其他 Pod 的重要性。 @@ -34,7 +34,7 @@ tags: -[Pod 优先级](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority) +[Pod 优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority) 允许用户为 Pod 设置高于或低于其他 Pod 的优先级 -- 这对于生产集群 工作负载而言是一个重要的特性。 diff --git a/content/zh/docs/reference/glossary/pod-security-policy.md b/content/zh-cn/docs/reference/glossary/pod-security-policy.md similarity index 89% rename from content/zh/docs/reference/glossary/pod-security-policy.md rename to content/zh-cn/docs/reference/glossary/pod-security-policy.md index f80416fb44991..db491739304b4 100644 --- a/content/zh/docs/reference/glossary/pod-security-policy.md +++ b/content/zh-cn/docs/reference/glossary/pod-security-policy.md @@ -2,7 +2,7 @@ title: Pod 安全策略 id: pod-security-policy date: 2018-04-12 -full_link: /zh/docs/concepts/security/pod-security-policy/ +full_link: /zh-cn/docs/concepts/security/pod-security-policy/ short_description: > 为 Pod 的创建和更新操作启用细粒度的授权。 @@ -47,5 +47,5 @@ Pod 安全策略是集群级别的资源,它控制着 Pod 规约中的安全 PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. We recommend migrating to [Pod Security Admission](/docs/concepts/security/pod-security-admission/), or a 3rd party admission plugin. 
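就上文"Pod 优先级"词条而言,下面是一个创建 PriorityClass 的简单示意(名称与数值均为假设值),Pod 可在其规约中通过 `priorityClassName` 引用它:

```bash
# 创建一个示例 PriorityClass
kubectl apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-example
value: 100000
globalDefault: false
description: "用于重要工作负载的较高优先级(示例)"
EOF
```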
--> PodSecurityPolicy 自 Kubernetes v1.21 起已弃用,并将在 v1.25 中删除。 -我们建议迁移到 [Pod 安全准入](/zh/docs/concepts/security/pod-security-admission/)或第三方准入插件。 +我们建议迁移到 [Pod 安全准入](/zh-cn/docs/concepts/security/pod-security-admission/)或第三方准入插件。 diff --git a/content/zh/docs/reference/glossary/pod.md b/content/zh-cn/docs/reference/glossary/pod.md similarity index 95% rename from content/zh/docs/reference/glossary/pod.md rename to content/zh-cn/docs/reference/glossary/pod.md index 873ec90e62723..eee257a43d56a 100644 --- a/content/zh/docs/reference/glossary/pod.md +++ b/content/zh-cn/docs/reference/glossary/pod.md @@ -4,7 +4,7 @@ id: pod date: 2018-04-12 full_link: /docs/concepts/workloads/pods/pod-overview/ short_description: > - Pod 表示您的集群上一组正在运行的容器。 + Pod 表示你的集群上一组正在运行的容器。 aka: tags: diff --git a/content/zh/docs/reference/glossary/preemption.md b/content/zh-cn/docs/reference/glossary/preemption.md similarity index 88% rename from content/zh/docs/reference/glossary/preemption.md rename to content/zh-cn/docs/reference/glossary/preemption.md index 328d0abfaca2b..7875e60d0bb6d 100644 --- a/content/zh/docs/reference/glossary/preemption.md +++ b/content/zh-cn/docs/reference/glossary/preemption.md @@ -2,7 +2,7 @@ title: 抢占(Preemption) id: preemption date: 2019-01-31 -full_link: /zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption +full_link: /zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption short_description: > Kubernetes 中的抢占逻辑通过驱逐节点上的低优先级 Pod 来帮助悬决的 Pod 找到合适的节点。 @@ -37,6 +37,6 @@ Kubernetes 中的抢占逻辑通过驱逐{{< glossary_tooltip term_id="node" >}} If a Pod cannot be scheduled, the scheduler tries to [preempt](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) lower priority Pods to make scheduling of the pending Pod possible. --> 如果一个 Pod 无法调度,调度器会尝试 -[抢占](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) +[抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) 较低优先级的 Pod,以使得悬决的 Pod 有可能被调度。 diff --git a/content/zh/docs/reference/glossary/proxy.md b/content/zh-cn/docs/reference/glossary/proxy.md similarity index 88% rename from content/zh/docs/reference/glossary/proxy.md rename to content/zh-cn/docs/reference/glossary/proxy.md index c9d4ff0bea79a..b186dd606531a 100644 --- a/content/zh/docs/reference/glossary/proxy.md +++ b/content/zh-cn/docs/reference/glossary/proxy.md @@ -43,7 +43,7 @@ actual server's reply to the client. network proxy that runs on each node in your cluster, implementing part of the Kubernetes {{< glossary_tooltip term_id="service">}} concept. 
--> -[kube-proxy](/zh/docs/reference/command-line-tools-reference/kube-proxy/) 是集群中每个节点上运行的网络代理,实现了部分 Kubernetes {{< glossary_tooltip term_id="service">}} 概念。 +[kube-proxy](/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/) 是集群中每个节点上运行的网络代理,实现了部分 Kubernetes {{< glossary_tooltip term_id="service">}} 概念。 -卷插件让您能给 {{< glossary_tooltip text="Pod" term_id="pod" >}} 附加和挂载存储卷。 +卷插件让你能给 {{< glossary_tooltip text="Pod" term_id="pod" >}} 附加和挂载存储卷。 卷插件既可以是 _in tree_ 也可以是 _out of tree_ 。_in tree_ 插件是 Kubernetes 代码库的一部分, 并遵循其发布周期。而 _Out of tree_ 插件则是独立开发的。 diff --git a/content/zh/docs/reference/glossary/volume.md b/content/zh-cn/docs/reference/glossary/volume.md similarity index 92% rename from content/zh/docs/reference/glossary/volume.md rename to content/zh-cn/docs/reference/glossary/volume.md index 7ef39eee2fd33..9159e275caa57 100644 --- a/content/zh/docs/reference/glossary/volume.md +++ b/content/zh-cn/docs/reference/glossary/volume.md @@ -2,7 +2,7 @@ title: 卷(Volume) id: volume date: 2018-04-12 -full_link: /zh/docs/concepts/storage/volumes/ +full_link: /zh-cn/docs/concepts/storage/volumes/ short_description: > 包含可被 Pod 中容器访问的数据的目录。 @@ -47,4 +47,4 @@ A Kubernetes volume lives as long as the Pod that encloses it. Consequently, a v -更多信息可参考[storage](/zh/docs/concepts/storage/) \ No newline at end of file +更多信息可参考[storage](/zh-cn/docs/concepts/storage/) \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/wg.md b/content/zh-cn/docs/reference/glossary/wg.md similarity index 100% rename from content/zh/docs/reference/glossary/wg.md rename to content/zh-cn/docs/reference/glossary/wg.md diff --git a/content/zh/docs/reference/glossary/workload.md b/content/zh-cn/docs/reference/glossary/workload.md similarity index 96% rename from content/zh/docs/reference/glossary/workload.md rename to content/zh-cn/docs/reference/glossary/workload.md index 394b42bd1c993..68cd860a831f1 100644 --- a/content/zh/docs/reference/glossary/workload.md +++ b/content/zh-cn/docs/reference/glossary/workload.md @@ -2,7 +2,7 @@ title: 工作负载(Workload) id: workloads date: 2019-02-13 -full_link: /zh/docs/concepts/workloads/ +full_link: /zh-cn/docs/concepts/workloads/ short_description: > 工作负载是在 Kubernetes 上运行的应用程序。 diff --git a/content/zh/docs/reference/issues-security/_index.md b/content/zh-cn/docs/reference/issues-security/_index.md similarity index 100% rename from content/zh/docs/reference/issues-security/_index.md rename to content/zh-cn/docs/reference/issues-security/_index.md diff --git a/content/zh/docs/reference/issues-security/issues.md b/content/zh-cn/docs/reference/issues-security/issues.md similarity index 86% rename from content/zh/docs/reference/issues-security/issues.md rename to content/zh-cn/docs/reference/issues-security/issues.md index 23a015a519d03..45a248990ca48 100644 --- a/content/zh/docs/reference/issues-security/issues.md +++ b/content/zh-cn/docs/reference/issues-security/issues.md @@ -1,6 +1,7 @@ --- title: Kubernetes 问题追踪 weight: 10 +aliases: [/zh-cn/cve/, /zh-cn/cves/] --- 要报告安全问题,请遵循 -[Kubernetes 安全问题公开流程](/zh/docs/reference/issues-security/security/#report-a-vulnerability)。 +[Kubernetes 安全问题公开流程](/zh-cn/docs/reference/issues-security/security/#report-a-vulnerability)。 -与安全性相关的公告请发送到 +与安全性相关的公告将发送到 [kubernetes-security-announce@googlegroups.com](https://groups.google.com/forum/#!forum/kubernetes-security-announce) 邮件列表。 diff --git a/content/zh/docs/reference/issues-security/security.md b/content/zh-cn/docs/reference/issues-security/security.md similarity index 68% rename 
from content/zh/docs/reference/issues-security/security.md rename to content/zh-cn/docs/reference/issues-security/security.md index 3124098682040..4cdc9d07e77bb 100644 --- a/content/zh/docs/reference/issues-security/security.md +++ b/content/zh-cn/docs/reference/issues-security/security.md @@ -1,5 +1,6 @@ --- title: Kubernetes 安全和信息披露 +aliases: [/zh-cn/security/] content_type: concept weight: 20 --- @@ -27,7 +28,7 @@ This page describes Kubernetes security and disclosure information. -## 安全公告 +## 安全公告 {#security-announcements} -## 报告一个漏洞 +## 报告一个漏洞 {#report-a-vulnerability} 我们非常感谢向 Kubernetes 开源社区报告漏洞的安全研究人员和用户。 所有的报告都由社区志愿者进行彻底调查。 -如需报告,请连同安全细节以及预期的[所有 Kubernetes bug 报告](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md) -详细信息电子邮件到[security@kubernetes.io](mailto:security@kubernetes.io)列表。 +如需报告,请将你的漏洞提交给 [Kubernetes 漏洞赏金计划](https://hackerone.com/kubernetes)。 +这样做可以使得社区能够在标准化的响应时间内对漏洞进行分类和处理。 你还可以通过电子邮件向私有 [security@kubernetes.io](mailto:security@kubernetes.io) 列表发送电子邮件,邮件中应该包含 [所有 Kubernetes 错误报告](https://github.com/kubernetes/kubernetes/blob/master/.github/ISSUE_TEMPLATE/bug-report.yaml) 所需的详细信息。 + @@ -68,45 +69,45 @@ GPG 密钥加密你的发往邮件列表的邮件。揭示问题时不需要使 -### 我应该在什么时候报告漏洞? +### 我应该在什么时候报告漏洞? {#when-should-i-report-a-vulnerability} - 你认为在 Kubernetes 中发现了一个潜在的安全漏洞 - 你不确定漏洞如何影响 Kubernetes - 你认为你在 Kubernetes 依赖的另一个项目中发现了一个漏洞 -- 对于具有漏洞报告和披露流程的项目,请直接在该项目处报告 + - 对于具有漏洞报告和披露流程的项目,请直接在该项目处报告 -### 我什么时候不应该报告漏洞? +### 我什么时候不应该报告漏洞? {#when-should-i-not-report-a-vulnerability} -- 你需要帮助调整 Kubernetes 组件的安全性 -- 你需要帮助应用与安全相关的更新 +- 你需要调整 Kubernetes 组件安全性的帮助 +- 你需要应用与安全相关更新的帮助 - 你的问题与安全无关 -## 安全漏洞响应 +## 安全漏洞响应 {#security-vulnerability-response} -每个报告在 3 个工作日内由安全响应委员会成员确认和分析。这将启动[安全发布过程](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#disclosures)。 +每个报告在 3 个工作日内由安全响应委员会成员确认和分析,这将启动[安全发布过程](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#disclosures)。 与安全响应委员会共享的任何漏洞信息都保留在 Kubernetes 项目中,除非有必要修复该问题,否则不会传播到其他项目。 @@ -118,7 +119,7 @@ As the security issue moves from triage, to identified fix, to release planning -## 公开披露时间 +## 公开披露时间 {#public-disclosure-timing} -信息披露的时间范围从即时(尤其是已经公开的)到几周。作为一个基本的约定,我们希望报告日期到披露日期的间隔是 7 天。在设置披露日期时,Kubernetes 产品安全团队拥有最终决定权。 - +信息披露的时间范围从即时(尤其是已经公开的)到几周不等。 +对于具有直接缓解措施的漏洞,我们希望报告日期到披露日期的间隔是 7 天。 +在设置披露日期方面,Kubernetes 安全响应委员会拥有最终决定权。 diff --git a/content/zh/docs/reference/kubectl/overview.md b/content/zh-cn/docs/reference/kubectl/_index.md similarity index 70% rename from content/zh/docs/reference/kubectl/overview.md rename to content/zh-cn/docs/reference/kubectl/_index.md index ddc83e853695d..787a20d7377e8 100644 --- a/content/zh/docs/reference/kubectl/overview.md +++ b/content/zh-cn/docs/reference/kubectl/_index.md @@ -1,62 +1,73 @@ --- -reviewers: -- hw-qiaolei -title: kubectl 概述 -content_type: concept -weight: 20 +title: 命令行工具 (kubectl) +content_type: reference +weight: 60 +no_list: true card: name: reference weight: 20 --- - + +{{< glossary_definition prepend="Kubernetes 提供" term_id="kubectl" length="short" >}} + + +这个工具叫做 `kubectl`。 -你可以使用 Kubectl 命令行工具管理 Kubernetes 集群。 -`kubectl` 在 `$HOME/.kube` 目录中查找一个名为 `config` 的配置文件。 -你可以通过设置 KUBECONFIG 环境变量或设置 -[`--kubeconfig`](/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig/) -参数来指定其它 [kubeconfig](/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig/) 文件。 +`针对配置信息,`kubectl` 在 `$HOME/.kube` 目录中查找一个名为 `config` 的配置文件。 +你可以通过设置 `KUBECONFIG` 环境变量或设置 
+[`--kubeconfig`](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/) +参数来指定其它 [kubeconfig](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/) 文件。 本文概述了 `kubectl` 语法和命令操作描述,并提供了常见的示例。 有关每个命令的详细信息,包括所有受支持的参数和子命令, 请参阅 [kubectl](/docs/reference/generated/kubectl/kubectl-commands/) 参考文档。 -有关安装说明,请参见[安装 kubectl](/zh/docs/tasks/tools/install-kubectl/) 。 - - -## 语法 +有关安装说明,请参见[安装 kubectl](/zh-cn/docs/tasks/tools/#kubectl); +如需快速指南,请参见[备忘单](/zh-cn/docs/reference/kubectl/cheatsheet/)。 +如果你更习惯使用 `docker` 命令行工具, +[Docker 用户的 `kubectl`](/zh-cn/docs/reference/kubectl/docker-cli-to-kubectl/) +介绍了一些 Kubernetes 的等价命令。 + -使用以下语法 `kubectl` 从终端窗口运行命令: +## 语法 + +使用以下语法从终端窗口运行 `kubectl` 命令: ```shell kubectl [command] [TYPE] [NAME] [flags] @@ -68,34 +79,36 @@ where `command`, `TYPE`, `NAME`, and `flags` are: 其中 `command`、`TYPE`、`NAME` 和 `flags` 分别是: * `command`:指定要对一个或多个资源执行的操作,例如 `create`、`get`、`describe`、`delete`。 -* `TYPE`:指定[资源类型](#资源类型)。资源类型不区分大小写, - 可以指定单数、复数或缩写形式。例如,以下命令输出相同的结果: +* `TYPE`:指定[资源类型](#resource-types)。资源类型不区分大小写, + 可以指定单数、复数或缩写形式。例如,以下命令输出相同的结果: - ```shell - kubectl get pod pod1 - kubectl get pods pod1 - kubectl get po pod1 - ``` + ```shell + kubectl get pod pod1 + kubectl get pods pod1 + kubectl get po pod1 + ``` - * `NAME`:指定资源的名称。名称区分大小写。 - 如果省略名称,则显示所有资源的详细信息 `kubectl get pods`。 + 如果省略名称,则显示所有资源的详细信息。例如:`kubectl get pods`。 在对多个资源执行操作时,你可以按类型和名称指定每个资源,或指定一个或多个文件: - - * 要按类型和名称指定资源: - - * 要对所有类型相同的资源进行分组,请执行以下操作:`TYPE1 name1 name2 name<#>`。 - - 例子:`kubectl get pod example-pod1 example-pod2` +--> + * 要按类型和名称指定资源: - * 分别指定多个资源类型:`TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`。 + * 要对所有类型相同的资源进行分组,请执行以下操作:`TYPE1 name1 name2 name<#>`。
        + 例子:`kubectl get pod example-pod1 example-pod2` - 例子:`kubectl get pod/example-pod1 replicationcontroller/example-rc1` + * 分别指定多个资源类型:`TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`。
        + 例子:`kubectl get pod/example-pod1 replicationcontroller/example-rc1` - * 用一个或多个文件指定资源:`-f file1 -f file2 -f file<#>` + * 用一个或多个文件指定资源:`-f file1 -f file2 -f file<#>` - * [使用 YAML 而不是 JSON](/zh/docs/concepts/configuration/overview/#general-configuration-tips) - 因为 YAML 更容易使用,特别是用于配置文件时。 - 例子:`kubectl get -f ./pod.yaml` + * [使用 YAML 而不是 JSON](/zh-cn/docs/concepts/configuration/overview/#general-configuration-tips), + 因为 YAML 对用户更友好,特别是对于配置文件。
        + 例子:`kubectl get -f ./pod.yaml` -* `flags`: 指定可选的参数。例如,可以使用 `-s` 或 `-server` 参数指定 +* `flags`: 指定可选的参数。例如,可以使用 `-s` 或 `--server` 参数指定 Kubernetes API 服务器的地址和端口。 -{{< caution >}} +{{< caution >}} 从命令行指定的参数会覆盖默认值和任何相应的环境变量。 {{< /caution >}} -如果你需要帮助,从终端窗口运行 `kubectl help` 。 +如果你需要帮助,在终端窗口中运行 `kubectl help`。 + + +## 集群内身份验证和命名空间覆盖 + + +默认情况下,`kubectl` 命令首先确定它是否在 Pod 中运行,从而被视为在集群中运行。 +它首先检查 `KUBERNETES_SERVICE_HOST` 和 `KUBERNETES_SERVICE_PORT` 环境变量以及 +`/var/run/secrets/kubernetes.io/serviceaccount/token` 中是否存在服务帐户令牌文件。 +如果三个条件都被满足,则假定在集群内进行身份验证。 + + +为保持向后兼容性,如果在集群内身份验证期间设置了 `POD_NAMESPACE` +环境变量,它将覆盖服务帐户令牌中的默认命名空间。 +任何依赖默认命名空间的清单或工具都会受到影响。 + + +**`POD_NAMESPACE` 环境变量** + + +如果设置了 `POD_NAMESPACE` 环境变量,对命名空间资源的 CLI 操作对象将使用该变量值作为默认值。 +例如,如果该变量设置为 `seattle`,`kubectl get pods` 将返回 `seattle` 命名空间中的 Pod。 +这是因为 Pod 是一个命名空间资源,且命令中没有提供命名空间。 + + +直接使用 `--namespace ` 会覆盖此行为。 + + +**kubectl 如何处理 ServiceAccount 令牌** + + +假设: +* 有 Kubernetes 服务帐户令牌文件挂载在 + `/var/run/secrets/kubernetes.io/serviceaccount/token` 上,并且 +* 设置了 `KUBERNETES_SERVICE_HOST` 环境变量,并且 +* 设置了 `KUBERNETES_SERVICE_PORT` 环境变量,并且 +* 你没有在 kubectl 命令行上明确指定命名空间。 + + +然后 kubectl 假定它正在你的集群中运行。 +kubectl 工具查找该 ServiceAccount 的命名空间 +(该命名空间与 Pod 的命名空间相同)并针对该命名空间进行操作。 +这与集群外运行的情况不同; +当 kubectl 在集群外运行并且你没有指定命名空间时, +kubectl 命令会针对 `default` 命名空间进行操作。 +操作 | 语法 | 描述 +-------------------- | -------------------- | -------------------- +`alpha` | `kubectl alpha SUBCOMMAND [flags]` | 列出与 alpha 特性对应的可用命令,这些特性在 Kubernetes 集群中默认情况下是不启用的。 +`annotate` | kubectl annotate (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | 添加或更新一个或多个资源的注解。 +`api-resources` | `kubectl api-resources [flags]` | 列出可用的 API 资源。 +`api-versions` | `kubectl api-versions [flags]` | 列出可用的 API 版本。 + +`apply` | `kubectl apply -f FILENAME [flags]`| 从文件或 stdin 对资源应用配置更改。 +`attach` | `kubectl attach POD -c CONTAINER [-i] [-t] [flags]` | 挂接到正在运行的容器,查看输出流或与容器(stdin)交互。 +`auth` | `kubectl auth [flags] [options]` | 检查授权。 +`autoscale` | kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags] | 自动扩缩由副本控制器管理的一组 pod。 +`certificate` | `kubectl certificate SUBCOMMAND [options]` | 修改证书资源。 +`cluster-info` | `kubectl cluster-info [flags]` | 显示有关集群中主服务器和服务的端口信息。 +`completion` | `kubectl completion SHELL [options]` | 为指定的 Shell(Bash 或 Zsh)输出 Shell 补齐代码。 +`config` | `kubectl config SUBCOMMAND [flags]` | 修改 kubeconfig 文件。有关详细信息,请参阅各个子命令。 + +`convert` | `kubectl convert -f FILENAME [options]` | 在不同的 API 版本之间转换配置文件。配置文件可以是 YAML 或 JSON 格式。注意 - 需要安装 `kubectl-convert` 插件。 +`cordon` | `kubectl cordon NODE [options]` | 将节点标记为不可调度。 +`cp` | `kubectl cp [options]` | 从容器复制文件、目录或将文件、目录复制到容器。 +`create` | `kubectl create -f FILENAME [flags]` | 从文件或 stdin 创建一个或多个资源。 +`delete` | kubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags] | 基于文件、标准输入或通过指定标签选择器、名称、资源选择器或资源本身,删除资源。 +`describe` | kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags] | 显示一个或多个资源的详细状态。 +`diff` | `kubectl diff -f FILENAME [flags]`| 在当前起作用的配置和文件或标准输之间作对比 (**BETA**) + +`drain` | `kubectl drain NODE [options]` | 腾空节点以准备维护。 +`edit` | kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags] | 使用默认编辑器编辑和更新服务器上一个或多个资源的定义。 +`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | 对 Pod 中的容器执行命令。 +`explain` | `kubectl explain [--recursive=false] [flags]` | 获取多种资源的文档。例如 Pod、Node、Service 等。 +`expose` | kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) 
[--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags] | 将副本控制器、服务或 Pod 作为新的 Kubernetes 服务暴露。 +`get` | kubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags] | 列出一个或多个资源。 +`kustomize` | kubectl kustomize [flags] [options]` | 列出从 kustomization.yaml 文件中的指令生成的一组 API 资源。参数必须是包含文件的目录的路径,或者是 git 存储库 URL,其路径后缀相对于存储库根目录指定了相同的路径。 + +`label` | kubectl label (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | 添加或更新一个或多个资源的标签。 +`logs` | `kubectl logs POD [-c CONTAINER] [--follow] [flags]` | 打印 Pod 中容器的日志。 +`options` | `kubectl options` | 全局命令行选项列表,这些选项适用于所有命令。 +`patch` | kubectl patch (-f FILENAME | TYPE NAME | TYPE/NAME) --patch PATCH [flags] | 使用策略合并流程更新资源的一个或多个字段。 +`plugin` | `kubectl plugin [flags] [options]` | 提供用于与插件交互的实用程序。 +`port-forward` | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | 将一个或多个本地端口转发到一个 Pod。 +`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | 运行访问 Kubernetes API 服务器的代理。 +`replace` | `kubectl replace -f FILENAME` | 基于文件或标准输入替换资源。 +`rollout` | `kubectl rollout SUBCOMMAND [options]` | 管理资源的上线。有效的资源类型包括:Deployment、 DaemonSet 和 StatefulSet。 +`run` | kubectl run NAME --image=image [--env="key=value"] [--port=port] [--dry-run=server | client | none] [--overrides=inline-json] [flags] | 在集群上运行指定的镜像。 + - -操作 | 语法 | 描述 --------------------- | -------------------- | -------------------- -`alpha` | `kubectl alpha SUBCOMMAND [flags]` | 列出与 alpha 特性对应的可用命令,这些特性在 Kubernetes 集群中默认情况下是不启用的。 -`annotate` | kubectl annotate (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | 添加或更新一个或多个资源的注解。 -`api-resources` | `kubectl api-resources [flags]` | 列出可用的 API 资源。 -`api-versions` | `kubectl api-versions [flags]` | 列出可用的 API 版本。 -`apply` | `kubectl apply -f FILENAME [flags]`| 从文件或 stdin 对资源应用配置更改。 -`attach` | `kubectl attach POD -c CONTAINER [-i] [-t] [flags]` | 附加到正在运行的容器,查看输出流或与容器(stdin)交互。 -`auth` | `kubectl auth [flags] [options]` | 检查授权。 -`autoscale` | kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags] | 自动伸缩由副本控制器管理的一组 pod。 -`certificate` | `kubectl certificate SUBCOMMAND [options]` | 修改证书资源。 -`cluster-info` | `kubectl cluster-info [flags]` | 显示有关集群中主服务器和服务的端口信息。 -`completion` | `kubectl completion SHELL [options]` | 为指定的 shell (bash 或 zsh)输出 shell 补齐代码。 -`config` | `kubectl config SUBCOMMAND [flags]` | 修改 kubeconfig 文件。有关详细信息,请参阅各个子命令。 -`convert` | `kubectl convert -f FILENAME [options]` | 在不同的 API 版本之间转换配置文件。配置文件可以是 YAML 或 JSON 格式。 -`cordon` | `kubectl cordon NODE [options]` | 将节点标记为不可调度。 -`cp` | `kubectl cp [options]` | 在容器之间复制文件和目录。 -`create` | `kubectl create -f FILENAME [flags]` | 从文件或 stdin 创建一个或多个资源。 -`delete` | kubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags] | 从文件、标准输入或指定标签选择器、名称、资源选择器或资源中删除资源。 -`describe` | kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags] | 显示一个或多个资源的详细状态。 -`diff` | `kubectl diff -f FILENAME [flags]`| 将 live 配置和文件或标准输入做对比 (**BETA**) -`drain` | `kubectl drain NODE [options]` | 腾空节点以准备维护。 -`edit` | kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags] | 使用默认编辑器编辑和更新服务器上一个或多个资源的定义。 -`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | 对 pod 中的容器执行命令。 -`explain` | `kubectl explain [--recursive=false] [flags]` | 获取多种资源的文档。例如 pod, node, service 等。 -`expose` | kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags] | 将副本控制器、服务或 pod 作为新的 Kubernetes 服务暴露。 -`get` | kubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags] | 列出一个或多个资源。 -`kustomize` | `kubectl kustomize [flags] [options]` | 列出从 kustomization.yaml 文件中的指令生成的一组 API 资源。参数必须是包含文件的目录的路径,或者是 git 存储库 URL,其路径后缀相对于存储库根目录指定了相同的路径。 -`label` | kubectl label (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | 添加或更新一个或多个资源的标签。 -`logs` | `kubectl logs POD [-c CONTAINER] [--follow] [flags]` | 在 pod 中打印容器的日志。 -`options` | `kubectl options` | 全局命令行选项列表,适用于所有命令。 -`patch` | kubectl patch (-f FILENAME | TYPE NAME | TYPE/NAME) --patch PATCH [flags] | 使用策略合并 patch 程序更新资源的一个或多个字段。 -`plugin` | `kubectl plugin [flags] [options]` | 提供用于与插件交互的实用程序。 -`port-forward` | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | 将一个或多个本地端口转发到一个 pod。 -`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | 运行 Kubernetes API 服务器的代理。 -`replace` | `kubectl replace -f FILENAME` | 从文件或标准输入中替换资源。 -`rollout` | `kubectl rollout SUBCOMMAND [options]` | 管理资源的部署。有效的资源类型包括:Deployments, DaemonSets 和 StatefulSets。 -`run` | kubectl run NAME --image=image [--env="key=value"] [--port=port] [--dry-run=server | client | none] [--overrides=inline-json] [flags] | 在集群上运行指定的镜像。 `scale` | kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags] | 更新指定副本控制器的大小。 -`set` | `kubectl set SUBCOMMAND [options]` | 配置应用程序资源。 +`set` | `kubectl set SUBCOMMAND [options]` | 配置应用资源。 `taint` | `kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]` | 更新一个或多个节点上的污点。 -`top` | `kubectl top [flags] [options]` | 显示资源(CPU/内存/存储)的使用情况。 +`top` | `kubectl top [flags] [options]` | 显示资源(CPU、内存、存储)的使用情况。 `uncordon` | `kubectl uncordon NODE [options]` | 将节点标记为可调度。 `version` | `kubectl version [--client] [flags]` | 显示运行在客户端和服务器上的 Kubernetes 版本。 -`wait` | kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available] [options] | 实验性:等待一种或多种资源的特定条件。 - +`wait` | kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available] [options] | 实验特性:等待一种或多种资源的特定状况。 -了解更多有关命令操作的信息,请参阅 [kubectl](/zh/docs/reference/kubectl/kubectl/) 参考文档。 +了解更多有关命令操作的信息, +请参阅 [kubectl](/zh-cn/docs/reference/kubectl/kubectl/) 参考文档。 - ## 资源类型 -下表列出所有受支持的资源类型及其缩写别名: +下表列出所有受支持的资源类型及其缩写别名。 | 资源名 | 缩写名 | API 分组 | 按命名空间 | 资源类型 | |---|---|---|---|---| @@ -388,7 +421,6 @@ The following table includes a list of all the supported resource types and thei | `storageclasses` | `sc` | storage.k8s.io | false | StorageClass | | `volumeattachments` | | storage.k8s.io | false | VolumeAttachment | - @@ -398,7 +430,8 @@ The following table includes a list of all the supported resource types and thei -有关如何格式化或排序某些命令的输出的信息,请使用以下部分。有关哪些命令支持各种输出选项的详细信息,请参阅[kubectl](/zh/docs/reference/kubectl/kubectl/) 参考文档。 +有关如何格式化或排序某些命令的输出的信息,请参阅以下章节。有关哪些命令支持不同输出选项的详细信息, +请参阅 [kubectl](/zh-cn/docs/reference/kubectl/kubectl/) 参考文档。 -所有 `kubectl` 命令的默认输出格式都是人类可读的纯文本格式。要以特定格式向终端窗口输出详细信息,可以将 `-o` 或 `--output` 参数添加到受支持的 `kubectl` 命令中。 +所有 `kubectl` 命令的默认输出格式都是人类可读的纯文本格式。要以特定格式在终端窗口输出详细信息, +可以将 `-o` 或 `--output` 参数添加到受支持的 `kubectl` 命令中。 -根据 `kubectl` 操作,支持以下输出格式: +取决于具体的 `kubectl` 操作,支持的输出格式如下: -Output format | Description +输出格式 | 描述 --------------| ----------- `-o custom-columns=` | 使用逗号分隔的[自定义列](#custom-columns)列表打印表。 `-o custom-columns-file=` | 使用 `` 文件中的[自定义列](#custom-columns)模板打印表。 `-o json` | 输出 JSON 格式的 API 对象 -`-o jsonpath=