[{"data":1,"prerenderedAt":710},["ShallowReactive",2],{"/en-us/blog/using-child-pipelines-to-continuously-deploy-to-five-environments/":3,"navigation-en-us":38,"banner-en-us":456,"footer-en-us":472,"Olivier Dupré":681,"next-steps-en-us":695},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"seo":8,"content":16,"config":28,"_id":31,"_type":32,"title":33,"_source":34,"_file":35,"_stem":36,"_extension":37},"/en-us/blog/using-child-pipelines-to-continuously-deploy-to-five-environments","blog",false,"",{"title":9,"description":10,"ogTitle":9,"ogDescription":10,"noIndex":6,"ogImage":11,"ogUrl":12,"ogSiteName":13,"ogType":14,"canonicalUrls":12,"schema":15},"Using child pipelines to continuously deploy to five environments","Learn how to manage continuous deployment to multiple environments, including temporary, on-the-fly sandboxes, with a minimalist GitLab workflow.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097012/Blog/Hero%20Images/Blog/Hero%20Images/AdobeStock_397632156_3Ldy1urjMStQCl4qnOBvE0_1750097011626.jpg","https://about.gitlab.com/blog/using-child-pipelines-to-continuously-deploy-to-five-environments","https://about.gitlab.com","article","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Using child pipelines to continuously deploy to five environments\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Olivier Dupré\"}],\n        \"datePublished\": \"2024-09-26\",\n      }",{"title":9,"description":10,"authors":17,"heroImage":11,"date":19,"body":20,"category":21,"tags":22},[18],"Olivier Dupré","2024-09-26","DevSecOps teams sometimes require the ability to manage continuous\ndeployment across multiple environments — and they need to do so without\nchanging their workflows. The [GitLab DevSecOps\nplatform](https://about.gitlab.com/) supports this need, including\ntemporary, on-the-fly sandboxes, with a minimalist approach. 
In this\narticle, you'll learn how to run continuous deployment of infrastructure\nusing Terraform, over multiple environments.\n\n\nThis strategy can easily be applied to any project, whether it is\ninfrastructure as code (IaC) relying on another technology, such as\n[Pulumi](https://www.pulumi.com/) or [Ansible](https://www.ansible.com/),\nsource code in any language, or a monorepo that mixes many languages.\n\n\nThe final pipeline that you will have at the end of this tutorial will\ndeploy:\n\n\n* A temporary **review** environment for each feature branch.\n\n* An **integration** environment, easy to wipe out and deployed from the\nmain branch.\n\n* A **QA** environment, also deployed from the main branch, to run quality\nassurance steps.\n\n* A **staging** environment, deployed for every tag. This is the last round\nbefore production.\n\n* A **production** environment, just after the staging environment. This one\nis triggered manually for demonstration, but can also be continuously\ndeployed.\n\n\n>Here is the legend for the flow charts in this article:\n\n> * Round boxes are the GitLab branches.\n\n> * Square boxes are the environments.\n\n> * Text on the arrows are the actions to flow from one box to the next.\n\n> * Angled squares are decision steps.\n\n\n\u003Cpre class=\"mermaid\">\n\nflowchart LR\n    A(main) -->|new feature| B(feature_X)\n\n    B -->|auto deploy| C[review/feature_X]\n    B -->|merge| D(main)\n    C -->|destroy| D\n\n    D -->|auto deploy| E[integration]\n    E -->|manual| F[qa]\n\n    D -->|tag| G(X.Y.Z)\n    F -->|validate| G\n\n    G -->|auto deploy| H[staging]\n    H -->|manual| I{plan}\n    I -->|manual| J[production]\n\u003C/pre>\n\n\nOn each step, you'll learn the [why](#why) and the [what](#what) before\nmoving to the [how](#how). 
This will help you fully understand and replicate\nthis tutorial.\n\n\n## Why\n\n\n* [Continuous\nintegration](https://about.gitlab.com/topics/ci-cd/#what-is-continuous-integration-ci)\nis almost a de facto standard. Most companies have implemented CI pipelines\nor are willing to standardize their practice.\n\n\n* [Continuous\ndelivery](https://about.gitlab.com/topics/ci-cd/#what-is-continuous-delivery-cd),\nwhich pushes artifacts to a repository or registry at the end of the CI\npipeline, is also popular.\n\n\n* Continuous deployment, which goes further and deploys these artifacts\nautomatically, is less widespread. Where it has been implemented, it is\nmostly for applications. For infrastructure, the picture is less clear: the\ndiscussion tends to revolve around managing several environments, while\ntesting, securing, and verifying the infrastructure's code remains the\nharder part. This is one of the fields where DevOps has not yet reached\nmaturity. Another is shifting security left: integrating security teams\nand, more importantly, security concerns, earlier in the delivery\nlifecycle, to upgrade from DevOps to ***DevSecOps***.\n\n\nGiven this high-level picture, in this tutorial, you will work toward a\nsimple, yet efficient way to implement DevSecOps for your infrastructure\nthrough the example of deploying resources to five environments, gradually\nprogressing from development to production.\n\n\n__Note:__ Even if I advocate embracing a FinOps approach and reducing the\nnumber of environments, sometimes there are excellent reasons to maintain\nmore than just dev, staging, and production. So, please, adapt the examples\nbelow to match your needs.\n\n\n## What\n\n\nThe rise of cloud technology has driven the usage of IaC. Ansible and\nTerraform were among the first to pave the road here. 
OpenTofu, Pulumi, AWS\nCDK, Google Cloud Deployment Manager, and many others joined the party.\n\n\nDefining IaC is a great way to feel safe when deploying infrastructure.\nYou can test it, deploy it, and replay it again and again until you reach\nyour goal.\n\n\nUnfortunately, we often see companies maintain several branches, or even\nrepositories, for each of their target environments. And this is where the\nproblems start. They are no longer enforcing a process. They are no longer\nensuring that any change in the production code base has been accurately\ntested in previous environments. And they start seeing drifts from one\nenvironment to the other.\n\n\nI realized this tutorial was necessary when, at a conference I attended,\nevery participant said they do not have a workflow that enforces thorough\ntesting of the infrastructure before it is deployed to production.\nAnd they all agreed that sometimes they patch the code directly in\nproduction. Sure, this is fast, but is it safe? How do you report back to\nprevious environments? How do you ensure there are no side effects? How do\nyou control whether you are putting your company at risk with new\nvulnerabilities being pushed too quickly into production?\n\n\nThe question of *why* DevOps teams deploy directly to production is critical\nhere. Is it because the pipeline is not efficient or fast enough? Is there\nno automation? Or, even worse, because there is *no way to test accurately\noutside of production*?\n\n\nIn the next section, you will learn how to implement automation for your\ninfrastructure and ensure that your DevOps team can effectively test what\nyou are doing before pushing to any environment that impacts others. You will\nsee how your code is secured and its deployment is controlled, end-to-end.\n\n\n## How\n\n\nAs mentioned earlier, there are many IaC languages out there nowadays, and we\nobjectively cannot cover *all* of them in a single article. 
So, I will rely\non basic Terraform code running on version 1.4. Please do not focus on the\nIaC language itself but instead on the process that you could apply to your\nown ecosystem.\n\n\n### The Terraform code\n\n\nLet's start with a minimal piece of Terraform code.\n\n\nWe are going to deploy a virtual private cloud (VPC), which is a virtual\nnetwork, to AWS. In that VPC, we will deploy a public and a private subnet.\nAs their name implies, they are subnets of the main VPC. Finally, we will\nadd an Elastic Compute Cloud (EC2) instance (a virtual machine) in the\npublic subnet.\n\n\nThis demonstrates the deployment of four resources without adding too much\ncomplexity. The idea is to focus on the pipeline, not the code.\n\n\nHere is the target we want to reach for your repository.\n\n\n![target for\nrepository](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097033/Blog/Content%20Images/Blog/Content%20Images/image5_aHR0cHM6_1750097033415.png)\n\n\nLet’s do it step by step.\n\n\nFirst, we declare all resources in a `terraform/main.tf` file:\n\n\n```terraform\n\nprovider \"aws\" {\n  region = var.aws_default_region\n}\n\n\nresource \"aws_vpc\" \"main\" {\n  cidr_block = var.aws_vpc_cidr\n\n  tags = {\n    Name = var.aws_resources_name\n  }\n}\n\n\nresource \"aws_subnet\" \"public_subnet\" {\n  vpc_id     = aws_vpc.main.id\n  cidr_block = var.aws_public_subnet_cidr\n\n  tags = {\n    Name = \"Public Subnet\"\n  }\n}\n\nresource \"aws_subnet\" \"private_subnet\" {\n  vpc_id     = aws_vpc.main.id\n  cidr_block = var.aws_private_subnet_cidr\n\n  tags = {\n    Name = \"Private Subnet\"\n  }\n}\n\n\nresource \"aws_instance\" \"sandbox\" {\n  ami           = var.aws_ami_id\n  instance_type = var.aws_instance_type\n\n  subnet_id = aws_subnet.public_subnet.id\n\n  tags = {\n    Name = var.aws_resources_name\n  }\n}\n\n```\n\n\nAs you can see, a couple of variables are needed for this\ncode, so let's declare them in a 
`terraform/variables.tf` file:\n\n\n```terraform\n\nvariable \"aws_ami_id\" {\n  description = \"The AMI ID of the image being deployed.\"\n  type        = string\n}\n\n\nvariable \"aws_instance_type\" {\n  description = \"The instance type of the VM being deployed.\"\n  type        = string\n  default     = \"t2.micro\"\n}\n\n\nvariable \"aws_vpc_cidr\" {\n  description = \"The CIDR of the VPC.\"\n  type        = string\n  default     = \"10.0.0.0/16\"\n}\n\n\nvariable \"aws_public_subnet_cidr\" {\n  description = \"The CIDR of the public subnet.\"\n  type        = string\n  default     = \"10.0.1.0/24\"\n}\n\n\nvariable \"aws_private_subnet_cidr\" {\n  description = \"The CIDR of the private subnet.\"\n  type        = string\n  default     = \"10.0.2.0/24\"\n}\n\n\nvariable \"aws_default_region\" {\n  description = \"Default region where resources are deployed.\"\n  type        = string\n  default     = \"eu-west-3\"\n}\n\n\nvariable \"aws_resources_name\" {\n  description = \"Default name for the resources.\"\n  type        = string\n  default     = \"demo\"\n}\n\n```\n\n\nWith that, we are almost good to go on the IaC side. What's missing is a way\nto share the Terraform states. For those who don't know, Terraform\nschematically works as follows:\n\n\n* `plan` checks the differences between the current state of the\ninfrastructure and what is defined in the code. Then, it outputs the\ndifferences.\n\n* `apply` applies the differences in the `plan` and updates the state.\n\n\nOn the first run, the state is empty; it is then filled with the details\n(IDs, etc.) of the resources applied by Terraform.\n\n\nThe problem is: Where is that state stored? 
How do we share it so several\ndevelopers can collaborate on code?\n\n\nThe solution is fairly simple: Leverage GitLab to store and share the state\nfor you through a [Terraform HTTP\nbackend](https://docs.gitlab.com/ee/user/infrastructure/iac/terraform_state.html).\n\n\nThe first step in using this backend is to create the simplest possible\n`terraform/backend.tf` file. The second step will be handled in the\npipeline.\n\n\n```terraform\n\nterraform {\n  backend \"http\" {\n  }\n}\n\n```\n\n\nEt voilà! We have the bare minimum of Terraform code to deploy these four\nresources. We will provide the variable values at runtime, so let's handle\nthat later.\n\n\n### The workflow\n\n\nThe workflow that we are going to implement now is the following:\n\n\n\u003Cpre class=\"mermaid\">\n\nflowchart LR\n    A(main) -->|new feature| B(feature_X)\n\n    B -->|auto deploy| C[review/feature_X]\n    B -->|merge| D(main)\n    C -->|destroy| D\n\n    D -->|auto deploy| E[integration]\n    E -->|manual| F[qa]\n\n    D -->|tag| G(X.Y.Z)\n    F -->|validate| G\n\n    G -->|auto deploy| H[staging]\n    H -->|manual| I{plan}\n    I -->|manual| J[production]\n\u003C/pre>\n\n\n1. Create a **feature** branch. This will continuously run all scanners on\nthe code to ensure that it is still compliant and secure. This code will be\ncontinuously deployed to a temporary environment `review/feature_branch`\nwith the name of the current branch. This is a safe environment where the\ndevelopers and operations teams can test their code without impacting\nanybody. This is also where we will enforce the process, such as requiring\ncode reviews and running scanners, to ensure that the quality and security\nof the code are acceptable and do not put your assets at risk. The\ninfrastructure deployed by this branch is automatically destroyed when the\nbranch is closed. 
This helps you keep your budget under control.\n\n\n\u003Cpre class=\"mermaid\">\n\nflowchart LR\n    A(main) -->|new feature| B(feature_X)\n\n    B -->|auto deploy| C[review/feature_X]\n    B -->|merge| D(main)\n    C -->|destroy| D\n\u003C/pre>\n\n\n2. Once approved, the feature branch will be **merged** into the main\nbranch. This is a [protected\nbranch](https://docs.gitlab.com/ee/user/project/protected_branches.html)\nwhere no one can push directly. This is mandatory to ensure that every\nchange request to production is thoroughly tested. That branch is also\ncontinuously deployed. The target here is the `integration` environment. To\nkeep this environment slightly more stable, its deletion is not automated\nbut can be triggered manually.\n\n\n\u003Cpre class=\"mermaid\">\n\nflowchart LR\n    D(main) -->|auto deploy| E[integration]\n\u003C/pre>\n\n\n3. From there, manual approval is required to trigger the next deployment.\nThis will deploy the main branch to the `qa` environment. Here, I have set a\nrule to prevent deletion from the pipeline. The idea is that this\nenvironment should be quite stable (after all, it's already the third\nenvironment), and I would like to prevent deletion by mistake. Feel free to\nadapt the rules to match your processes.\n\n\n\u003Cpre class=\"mermaid\">\n\nflowchart LR\n    D(main) -->|auto deploy| E[integration]\n    E -->|manual| F[qa]\n\u003C/pre>\n\n\n4. To proceed, we will need to **tag** the code. We are relying on\n[protected\ntags](https://docs.gitlab.com/ee/user/project/protected_tags.html) here to\nensure that only a specific set of users is allowed to deploy to these last\ntwo environments. This will immediately trigger a deployment to the\n`staging` environment.\n\n\n\u003Cpre class=\"mermaid\">\n\nflowchart LR\n    D(main) -->|tag| G(X.Y.Z)\n    F[qa] -->|validate| G\n\n    G -->|auto deploy| H[staging]\n\u003C/pre>\n\n\n5. Finally, we land in `production`. When discussing infrastructure,\nit is often challenging to deploy progressively (10%, 25%, etc.), so we will\ndeploy the whole infrastructure. Still, we control that deployment with a\nmanual trigger of this last step. And to enforce maximum control over this\nhighly critical environment, we will manage it as a [protected\nenvironment](https://docs.gitlab.com/ee/ci/environments/protected_environments.html).\n\n\n\u003Cpre class=\"mermaid\">\n\nflowchart LR\n    H[staging] -->|manual| I{plan}\n    I -->|manual| J[production]\n\u003C/pre>\n\n\n### The pipeline\n\n\nTo implement the above [workflow](#the-workflow), we are now going to\nbuild a pipeline with two [downstream\npipelines](https://docs.gitlab.com/ee/ci/pipelines/downstream_pipelines.html).\n\n\n#### The main pipeline\n\n\nLet's start with the main pipeline. This is the one that will be triggered\nautomatically on any **push to a feature branch**, any **merge to the\ndefault branch**, or any **tag**. *The one* that will do true **continuous\ndeployment** to the following environments: `review`, `integration`, and\n`staging`. 
And it is declared in the `.gitlab-ci.yml` file at the root of\nyour project.\n\n\n![the repository\ntarget](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097033/Blog/Content%20Images/Blog/Content%20Images/image1_aHR0cHM6_1750097033417.png)\n\n\n```yml\n\nstages:\n  - test\n  - environments\n\n.environment:\n  stage: environments\n  variables:\n    TF_ROOT: terraform\n    TF_CLI_ARGS_plan: \"-var-file=../vars/$variables_file.tfvars\"\n  trigger:\n    include: .gitlab-ci/.first-layer.gitlab-ci.yml\n    strategy: depend            # Wait for the triggered pipeline to successfully complete\n    forward:\n      yaml_variables: true      # Forward variables defined in the trigger job\n      pipeline_variables: true  # Forward manual pipeline variables and scheduled pipeline variables\n\nreview:\n  extends: .environment\n  variables:\n    environment: review/$CI_COMMIT_REF_SLUG\n    TF_STATE_NAME: $CI_COMMIT_REF_SLUG\n    variables_file: review\n    TF_VAR_aws_resources_name: $CI_COMMIT_REF_SLUG  # Used in the Name tag of the deployed resources, to easily differentiate them\n  rules:\n    - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH\n\nintegration:\n  extends: .environment\n  variables:\n    environment: integration\n    TF_STATE_NAME: $environment\n    variables_file: $environment\n  rules:\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n\nstaging:\n  extends: .environment\n  variables:\n    environment: staging\n    TF_STATE_NAME: $environment\n    variables_file: $environment\n  rules:\n    - if: $CI_COMMIT_TAG\n\n#### TWEAK\n\n# This tweak is needed to display vulnerability results in the merge request widget.\n\n# As soon as this issue https://gitlab.com/gitlab-org/gitlab/-/issues/439700 is resolved, the `include` instruction below can be removed.\n\n# Until then, the SAST IaC scanners will run in the downstream pipelines, but their results will not be available directly in the merge request widget, making it harder to track them.\n\n# Note: This workaround is perfectly safe and will not slow down your pipeline.\n\ninclude:\n  - template: Security/SAST-IaC.gitlab-ci.yml\n#### END TWEAK\n\n\n```\n\n\nThis pipeline runs only two stages: `test` and `environments`. The former\nis needed by the *TWEAK* to run the scanners. The latter triggers a child\npipeline with a different set of variables for each case defined above (push\nto a feature branch, merge to the default branch, or tag).\n\n\nWe are adding a dependency on our child pipeline with the keyword\n[strategy:depend](https://docs.gitlab.com/ee/ci/yaml/index.html#triggerstrategy),\nso the pipeline view in GitLab is updated only once the deployment is\nfinished.\n\n\nAs you can see here, we are defining a base job,\n[hidden](https://docs.gitlab.com/ee/ci/jobs/#hide-jobs), and we are\nextending it with specific variables and rules to trigger only one\ndeployment for each target environment.\n\n\nBesides the [predefined\nvariables](https://docs.gitlab.com/ee/ci/variables/predefined_variables.html),\nwe are using two new entries that we need to define:\n\n1. [The variables specific](#the-variable-definitions) to each environment:\n`../vars/$variables_file.tfvars`\n\n2. 
[The child pipeline](#the-child-pipeline), defined in\n`.gitlab-ci/.first-layer.gitlab-ci.yml`\n\n\nLet's start with the smallest part, the variable definitions.\n\n\n#### The variable definitions\n\n\nWe are going to mix two solutions here to provide variables to Terraform:\n\n\n* The first one uses [.tfvars\nfiles](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files)\nfor all non-sensitive input, which should be stored within GitLab.\n\n\n![solution one to provide variables to\nTerraform](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097034/Blog/Content%20Images/Blog/Content%20Images/image2_aHR0cHM6_1750097033419.png)\n\n\n* The second uses [environment\nvariables](https://developer.hashicorp.com/terraform/language/values/variables#environment-variables)\nwith the prefix `TF_VAR`. This second way to inject variables, combined\nwith the GitLab ability to [mask\nvariables](https://docs.gitlab.com/ee/ci/variables/#mask-a-cicd-variable),\n[protect\nthem](https://docs.gitlab.com/ee/ci/variables/#protect-a-cicd-variable), and\n[scope them to\nenvironments](https://docs.gitlab.com/ee/ci/environments/index.html#limit-the-environment-scope-of-a-cicd-variable),\nis a powerful solution to **prevent sensitive information leakages**. 
(If\nyou consider your production’s private CIDR very sensitive, you could\nprotect it like this, ensuring it is only available for the `production`\nenvironment, for pipelines running against protected branches and tags, and\nthat its value is masked in the job’s logs.)\n\n\n![solution two to provide variables to\nTerraform](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097034/Blog/Content%20Images/Blog/Content%20Images/image4_aHR0cHM6_1750097033422.png)\n\n\nAdditionally, each variable file should be controlled through a\n[`CODEOWNERS` file](https://docs.gitlab.com/ee/user/project/codeowners/) to\nset who can modify each of them.\n\n\n```\n\n[Production owners]\n\nvars/production.tfvars @operations-group\n\n\n[Staging owners]\n\nvars/staging.tfvars @odupre @operations-group\n\n\n[CodeOwners owners]\n\nCODEOWNERS @odupre\n\n```\n\n\nThis article is not a Terraform training, so we will move fast and simply\nshow the `vars/review.tfvars` file here. Subsequent environment files are,\nof course, very similar. Just set the non-sensitive variables and their\nvalues here.\n\n\n```terraform\n\naws_vpc_cidr = \"10.1.0.0/16\"\n\naws_public_subnet_cidr = \"10.1.1.0/24\"\n\naws_private_subnet_cidr = \"10.1.2.0/24\"\n\n```\n\n\n#### The child pipeline\n\n\nThis one is where the actual work is done. So, it is slightly more complex\nthan the first one. But there is no difficulty here that we cannot overcome\ntogether!\n\n\nAs we have seen in the definition of the [main\npipeline](#the-main-pipeline), that downstream pipeline is declared in the\nfile `.gitlab-ci/.first-layer.gitlab-ci.yml`.\n\n\n![Downstream pipeline declared in\nfile](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097033/Blog/Content%20Images/Blog/Content%20Images/image3_aHR0cHM6_1750097033424.png)\n\n\nLet's break it down into small chunks. 
We'll see the big picture at the end.\n\n\n##### Run Terraform commands and secure the code\n\n\nFirst, we want to run a pipeline for Terraform. We, at GitLab, are open\nsource, so our Terraform template is open source, and you simply need to\ninclude it. This can be achieved with the following snippet:\n\n\n```yml\n\ninclude:\n  - template: Terraform.gitlab-ci.yml\n```\n\n\nThis template runs the Terraform formatting checks and validates your code\nfor you, before planning and applying it. It also allows you to destroy\nwhat you have deployed.\n\n\nAnd, because GitLab is a single, unified DevSecOps platform, we are also\nautomatically including two security scanners within that template to find\npotential threats in your code and warn you before you deploy it to the next\nenvironments.\n\n\nNow that we have checked, secured, built, and deployed our code, let's apply\na few tricks.\n\n\n##### Share cache between jobs\n\n\nWe will cache the job results to reuse them in subsequent pipeline jobs.\nThis is as simple as adding the following piece of code:\n\n\n```yml\n\ndefault:\n  cache:  # Use a shared cache or tagged runners to ensure terraform can run on apply and destroy\n    - key: cache-$CI_COMMIT_REF_SLUG\n      fallback_keys:\n        - cache-$CI_DEFAULT_BRANCH\n      paths:\n        - .\n```\n\n\nHere, we are defining a different cache for each commit, falling back to the\nmain branch name if needed.\n\n\nIf we look carefully at the template that we are using, we can see that it\nhas some rules to control when jobs are run. We want to run all controls\n(both QA and security) on all branches. So, we are going to override these\nsettings.\n\n\n##### Run controls on all branches\n\n\nGitLab templates are a powerful feature where one can override only a piece\nof the template. Here, we are interested only in overwriting the rules of\nsome jobs to always run quality and security checks. 
Everything else defined\nfor these jobs will stay as defined in the template.\n\n\n```yml\n\nfmt:\n  rules:\n    - when: always\n\nvalidate:\n  rules:\n    - when: always\n\nkics-iac-sast:\n  rules:\n    - when: always\n\niac-sast:\n  rules:\n    - when: always\n```\n\n\nNow that we have enforced the quality and security controls, we want to\ndifferentiate how the main environments (integration and staging) in the\n[workflow](#the-workflow) and the review environments behave. Let's start by\ndefining the main environments' behavior, and we will then tweak this\nconfiguration for the review environments.\n\n\n##### CD to integration and staging\n\n\nAs defined earlier, we want to deploy the main branch and the tags to these\ntwo environments. We are adding rules to control that on both the `build`\nand `deploy` jobs. Then, we want to enable `destroy` only for the\n`integration` environment, as we have defined `staging` to be too critical\nto be deleted with a single click. This is error-prone and we don't want\nthat.\n\n\nFinally, we are linking the `deploy` job to the `destroy` one, so we can\n`stop` the environment directly from the GitLab GUI.\n\n\nThe `GIT_STRATEGY` is here to prevent retrieving the code from the source\nbranch in the runner when destroying. This would fail if the branch has been\ndeleted manually, so we are relying on the cache to get everything we need\nto run the Terraform instructions.\n\n\n```yml\n\nbuild:  # terraform plan\n  environment:\n    name: $TF_STATE_NAME\n    action: prepare\n  rules:\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n    - if: $CI_COMMIT_TAG\n\ndeploy: # terraform apply --> automatically deploy on the corresponding env (integration or staging) when merging to the default branch or tagging. Second layer environments (qa and production) will be controlled manually\n  environment:\n    name: $TF_STATE_NAME\n    action: start\n    on_stop: destroy\n  rules:\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n    - if: $CI_COMMIT_TAG\n\ndestroy:\n  extends: .terraform:destroy\n  variables:\n    GIT_STRATEGY: none\n  dependencies:\n    - build\n  environment:\n    name: $TF_STATE_NAME\n    action: stop\n  rules:\n    - if: $CI_COMMIT_TAG  # Do not destroy production\n      when: never\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $TF_DESTROY == \"true\" # Manually destroy integration env.\n      when: manual\n```\n\n\nAs said, this matches the need to deploy to `integration` and `staging`. But\nwe are still missing a temporary environment where the developers can\nexperiment and validate their code without impacting others. This is where\nthe deployment to the `review` environment takes place.\n\n\n##### CD to review environments\n\n\nDeploying to review environments is not too different from deploying to\n`integration` and `staging`. So we will once again leverage GitLab's\ncapacity to overwrite only pieces of a job definition.\n\n\nFirst, we set rules to run these jobs only on feature branches.\n\n\nThen, we link the `deploy_review` job to `destroy_review`. This will allow\nus to stop the environment **manually** from the GitLab user interface, but\nmore importantly, it will **automatically trigger the environment\ndestruction** when the feature branch is closed. 
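\n\nAs a side note, GitLab can also stop idle environments on a timer with\n[`environment:auto_stop_in`](https://docs.gitlab.com/ee/ci/yaml/#environmentauto_stop_in).\nThis keyword is not part of this tutorial's pipeline, and the one-week delay\nbelow is an arbitrary example, but it is a cheap extra safety net for review\nenvironments:\n\n\n```yml\n\ndeploy_review:\n  environment:\n    name: $environment\n    action: start\n    on_stop: destroy_review\n    auto_stop_in: 1 week  # Example value: run the on_stop job after one week without a new deployment\n```\n\n\nThat way, even a forgotten branch cannot keep its sandbox running forever.\n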
This is a good FinOps\npractice to help you control your operational expenditures.\n\n\nSince Terraform needs a plan file to destroy an infrastructure, exactly like\nit needs one to build an infrastructure, we are adding a dependency\nfrom `destroy_review` to `build_review`, to retrieve its artifacts.\n\n\nFinally, we see here that the environment's name is set to `$environment`.\nIt has been set in the [main pipeline](#the-main-pipeline) to\n`review/$CI_COMMIT_REF_SLUG`, and forwarded to this child pipeline with the\ninstruction `trigger:forward:yaml_variables:true`.\n\n\n```yml\n\nbuild_review:\n  extends: build\n  rules:\n    - if: $CI_COMMIT_TAG\n      when: never\n    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH\n      when: on_success\n\ndeploy_review:\n  extends: deploy\n  dependencies:\n    - build_review\n  environment:\n    name: $environment\n    action: start\n    on_stop: destroy_review\n    # url: https://$CI_ENVIRONMENT_SLUG.example.com\n  rules:\n    - if: $CI_COMMIT_TAG\n      when: never\n    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH\n      when: on_success\n\ndestroy_review:\n  extends: destroy\n  dependencies:\n    - build_review\n  environment:\n    name: $environment\n    action: stop\n  rules:\n    - if: $CI_COMMIT_TAG  # Do not destroy production\n      when: never\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH   # Do not destroy staging\n      when: never\n    - when: manual\n```\n\n\nSo, to recap, we now have a pipeline that can:\n\n\n* Deploy temporary review environments, which are automatically cleaned up\nwhen the feature branch is closed\n\n* Continuously deploy the **default branch** to `integration`\n\n* Continuously deploy the **tags** to `staging`\n\n\nLet's now add an extra layer, where we will deploy, based on a manual\ntrigger this time, to the `qa` and `production` environments.\n\n\n##### Continuously deploy to QA and production\n\n\nBecause not everybody is willing to deploy continuously to production, we\nwill add a manual validation to the next two deployments. From a purely\n**CD** perspective, we would not add this trigger, but take this as an\nopportunity to learn how to run jobs from other triggers.\n\n\nSo far, we have started a [child pipeline](#the-child-pipeline) from the\n[main pipeline](#the-main-pipeline) to run all deployments.\n\n\nSince we want to run other deployments from the default branch and the tags,\nwe will add another layer dedicated to these additional steps. Nothing new\nhere. We will simply repeat the same process as the one we used for the\n[main pipeline](#the-main-pipeline). Going this way allows you to manage as\nmany layers as you need. I have already seen up to nine environments in\nsome places.\n\n\nWithout arguing once again about the benefits of having fewer environments,\nthe process that we are using here makes it very easy to implement the same\npipeline all the way from early stages to final delivery, while keeping your\npipeline definition simple and split into small chunks that you can maintain\nat no cost.\n\n\nTo prevent variable conflicts here, we are just using new variable names to\nidentify the Terraform state and input file.\n\n\n```yml\n\n.2nd_layer:\n  stage: 2nd_layer\n  variables:\n    TF_ROOT: terraform\n  trigger:\n    include: .gitlab-ci/.second-layer.gitlab-ci.yml\n    # strategy: depend            # Do NOT wait for the downstream pipeline to finish to mark the upstream pipeline as successful. Otherwise, all pipelines would fail by reaching the pipeline timeout before the deployment to the 2nd layer.\n    forward:\n      yaml_variables: true      # Forward variables defined in the trigger job\n      pipeline_variables: true  # Forward manual pipeline variables and scheduled pipeline variables\n\nqa:\n  extends: .2nd_layer\n  variables:\n    TF_STATE_NAME_2: qa\n    environment: $TF_STATE_NAME_2\n    TF_CLI_ARGS_plan_2: \"-var-file=../vars/$TF_STATE_NAME_2.tfvars\"\n  rules:\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n\nproduction:\n  extends: .2nd_layer\n  variables:\n    TF_STATE_NAME_2: production\n    environment: $TF_STATE_NAME_2\n    TF_CLI_ARGS_plan_2: \"-var-file=../vars/$TF_STATE_NAME_2.tfvars\"\n  rules:\n    - if: $CI_COMMIT_TAG\n```\n\n\n**One important trick here is the strategy used for the new downstream\npipeline.** We leave `trigger:strategy` at its default value;\notherwise, the [main pipeline](#the-main-pipeline) would wait for your\n[grand-child pipeline](#the-grand-child-pipeline) to finish. With a manual\ntrigger, this could last for a very long time and make your pipeline\ndashboard harder to read and understand.\n\n\nYou are probably wondering what that\n`.gitlab-ci/.second-layer.gitlab-ci.yml` file we are including here\ncontains. 
We\nwill cover that in the next section.\n\n\n##### The complete first layer pipeline definition\n\n\nIf you are looking for a complete view of this first layer (stored in\n`.gitlab-ci/.first-layer.gitlab-ci.yml`), just expand the section below.\n\n\n```yml\n\nvariables:\n  TF_VAR_aws_ami_id: $AWS_AMI_ID\n  TF_VAR_aws_instance_type: $AWS_INSTANCE_TYPE\n  TF_VAR_aws_default_region: $AWS_DEFAULT_REGION\n\ninclude:\n  - template: Terraform.gitlab-ci.yml\n\ndefault:\n  cache:  # Use a shared cache or tagged runners to ensure terraform can run on apply and destroy\n    - key: cache-$CI_COMMIT_REF_SLUG\n      fallback_keys:\n        - cache-$CI_DEFAULT_BRANCH\n      paths:\n        - .\n\nstages:\n  - validate\n  - test\n  - build\n  - deploy\n  - cleanup\n  - 2nd_layer       # Used to deploy a 2nd environment from both the main branch and the tags\n\nfmt:\n  rules:\n    - when: always\n\nvalidate:\n  rules:\n    - when: always\n\nkics-iac-sast:\n  rules:\n    - if: $SAST_DISABLED == 'true' || $SAST_DISABLED == '1'\n      when: never\n    - if: $SAST_EXCLUDED_ANALYZERS =~ /kics/\n      when: never\n    - when: on_success\n\niac-sast:\n  rules:\n    - if: $SAST_DISABLED == 'true' || $SAST_DISABLED == '1'\n      when: never\n    - if: $SAST_EXCLUDED_ANALYZERS =~ /kics/\n      when: never\n    - when: on_success\n\n###########################################################################################################\n\n## Integration env. and Staging env.\n\n##  * Auto-deploy to Integration on merge to main.\n\n##  * Auto-deploy to Staging on tag.\n\n##  * Integration can be manually destroyed if TF_DESTROY is set to true.\n\n##  * Destroy of the next env. is not automated to prevent errors.\n\n###########################################################################################################\n\nbuild:  # terraform plan\n  environment:\n    name: $TF_STATE_NAME\n    action: prepare\n  rules:\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n    - if: $CI_COMMIT_TAG\n\ndeploy: # terraform apply --> automatically deploys to the corresponding env (integration or staging) on merge to the default branch or on tag. Second layer environments (qa and production) are controlled manually\n  environment: \n    name: $TF_STATE_NAME\n    action: start\n    on_stop: destroy\n  rules:\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n    - if: $CI_COMMIT_TAG\n\ndestroy:\n  extends: .terraform:destroy\n  variables:\n    GIT_STRATEGY: none\n  dependencies:\n    - build\n  environment:\n    name: $TF_STATE_NAME\n    action: stop\n  rules:\n    - if: $CI_COMMIT_TAG  # Do not destroy production\n      when: never\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $TF_DESTROY == \"true\" # Manually destroy integration env.\n      when: manual\n###########################################################################################################\n\n\n###########################################################################################################\n\n## Dev env.\n\n##  * Temporary environment. 
##    Lives and dies with the Merge Request.\n\n##  * Auto-deploy on push to a feature branch.\n\n##  * Auto-destroy when the Merge Request is closed.\n\n###########################################################################################################\n\nbuild_review:\n  extends: build\n  rules:\n    - if: $CI_COMMIT_TAG\n      when: never\n    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH\n      when: on_success\n\ndeploy_review:\n  extends: deploy\n  dependencies:\n    - build_review\n  environment:\n    name: $environment\n    action: start\n    on_stop: destroy_review\n    # url: https://$CI_ENVIRONMENT_SLUG.example.com\n  rules:\n    - if: $CI_COMMIT_TAG\n      when: never\n    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH\n      when: on_success\n\ndestroy_review:\n  extends: destroy\n  dependencies:\n    - build_review\n  environment:\n    name: $environment\n    action: stop\n  rules:\n    - if: $CI_COMMIT_TAG  # Do not destroy production\n      when: never\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH   # Do not destroy staging\n      when: never\n    - when: manual\n###########################################################################################################\n\n\n###########################################################################################################\n\n## Second layer\n\n##  * Deploys from the main branch to the qa env.\n\n##  * Deploys from a tag to production.\n\n###########################################################################################################\n\n.2nd_layer:\n  stage: 2nd_layer\n  variables:\n    TF_ROOT: terraform\n  trigger:\n    include: .gitlab-ci/.second-layer.gitlab-ci.yml\n    # strategy: depend            # Do NOT wait for the downstream pipeline to finish before marking the upstream pipeline as successful. Otherwise, every pipeline would fail by reaching the pipeline timeout before the 2nd layer deployment runs.\n    forward:\n      yaml_variables: true      # Forward variables defined in the trigger job\n      pipeline_variables: true  # Forward manual pipeline variables and scheduled pipeline variables\n\nqa:\n  extends: .2nd_layer\n  variables:\n    TF_STATE_NAME_2: qa\n    environment: $TF_STATE_NAME_2\n    TF_CLI_ARGS_plan_2: \"-var-file=../vars/$TF_STATE_NAME_2.tfvars\"\n  rules:\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n\nproduction:\n  extends: .2nd_layer\n  variables:\n    TF_STATE_NAME_2: production\n    environment: $TF_STATE_NAME_2\n    TF_CLI_ARGS_plan_2: \"-var-file=../vars/$TF_STATE_NAME_2.tfvars\"\n  rules:\n    - if: $CI_COMMIT_TAG\n###########################################################################################################\n\n```\n\n\nAt this stage, we are already deploying safely to three environments. That\nis my personal recommendation. However, if you need more environments,\nadd them to your CD pipeline.\n\n\nYou have certainly already noted that we include a downstream pipeline with\nthe keyword `trigger:include`. This includes the file\n`.gitlab-ci/.second-layer.gitlab-ci.yml`. Since we want to run almost the same\npipeline, its content is very similar to the one we have\ndetailed above. The main advantage of defining this [grand-child\npipeline](#the-grand-child-pipeline) separately is that it lives on its own, making\nboth variables and rules much easier to define.\n\n\n### The grand-child pipeline\n\n\nThis second layer pipeline is a brand new pipeline. Hence, it needs to mimic\nthe first layer definition with:\n\n\n* [Inclusion of the Terraform\ntemplate](#run-terraform-commands-and-secure-the-code).\n\n* [Enforcement of security checks](#run-controls-on-all-branches). 
Terraform\nvalidation would duplicate the first layer, but security scanners may\nfind threats that did not yet exist when the scanners previously ran (for\nexample, if you deploy to production a couple of days after your deployment\nto staging).\n\n* [Overwrite build and deploy jobs to set specific\nrules](#cd-to-review-environments). Note that the `destroy` stage is no\nlonger automated, to prevent hasty deletions.\n\n\nAs explained above, the `TF_STATE_NAME` and `TF_CLI_ARGS_plan` variables were\npassed from the [main pipeline](#the-main-pipeline) to the [child\npipeline](#the-child-pipeline). We needed different variable names to pass\nthese values from the [child pipeline](#the-child-pipeline) to here, the\n[grand-child pipeline](#the-grand-child-pipeline). This is why they are\nsuffixed with `_2` in the child pipeline, and the value is copied back to\nthe appropriate variable during the `before_script` here.\n\n\nSince we have already broken down each step above, we can zoom out here\ndirectly to the broad view of the complete second layer definition (stored in\n`.gitlab-ci/.second-layer.gitlab-ci.yml`).\n\n\n```yml\n\n# Used to deploy a second environment from both the default branch and the tags.\n\n\ninclude:\n  template: Terraform.gitlab-ci.yml\n\nstages:\n  - validate\n  - test\n  - build\n  - deploy\n\nfmt:\n  rules:\n    - when: never\n\nvalidate:\n  rules:\n    - when: never\n\nkics-iac-sast:\n  rules:\n    - if: $SAST_DISABLED == 'true' || $SAST_DISABLED == '1'\n      when: never\n    - if: $SAST_EXCLUDED_ANALYZERS =~ /kics/\n      when: never\n    - when: always\n\n###########################################################################################################\n\n## QA env. and Prod. env.\n\n##  * Manually trigger build, then auto-deploy in QA\n\n##  * Manually trigger both build and deploy in Production\n\n##  * Destroy of these env. is not automated to prevent errors.\n\n###########################################################################################################\n\nbuild:  # terraform plan\n  cache:  # Use a shared cache or tagged runners to ensure terraform can run on apply and destroy\n    - key: $TF_STATE_NAME_2\n      fallback_keys:\n        - cache-$CI_DEFAULT_BRANCH\n      paths:\n        - .\n  environment:\n    name: $TF_STATE_NAME_2\n    action: prepare\n  before_script:  # Hack to set new variable values on the second layer while still using the same variable names. Otherwise, due to variable precedence, setting a new value in the trigger job does not cascade it to the downstream pipeline\n    - TF_STATE_NAME=$TF_STATE_NAME_2\n    - TF_CLI_ARGS_plan=$TF_CLI_ARGS_plan_2\n  rules:\n    - when: manual\n\ndeploy: # terraform apply\n  cache:  # Use a shared cache or tagged runners to ensure terraform can run on apply and destroy\n    - key: $TF_STATE_NAME_2\n      fallback_keys:\n        - cache-$CI_DEFAULT_BRANCH\n      paths:\n        - .\n  environment: \n    name: $TF_STATE_NAME_2\n    action: start\n  before_script:  # Hack to set new variable values on the second layer while still using the same variable names. Otherwise, due to variable precedence, setting a new value in the trigger job does not cascade it to the downstream pipeline\n    - TF_STATE_NAME=$TF_STATE_NAME_2\n    - TF_CLI_ARGS_plan=$TF_CLI_ARGS_plan_2\n  rules:\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n    - if: $CI_COMMIT_TAG && $TF_AUTO_DEPLOY == \"true\"\n    - if: $CI_COMMIT_TAG\n      when: manual\n###########################################################################################################\n\n```\n\n\nEt voilà. 
**We are ready to go.** Feel free to change the way you control\nyour job executions, for example by leveraging GitLab's ability to [delay a\njob](https://docs.gitlab.com/ee/ci/jobs/job_control.html#run-a-job-after-a-delay)\nbefore deploying to production.\n\n\n## Try it yourself\n\n\nWe have finally reached our destination. We are now able to control **deployments\nto five different environments**, using only the **feature branches**, the\n**main branch**, and **tags**.\n\n* We are reusing GitLab's open source templates extensively to ensure\nefficiency and security in our pipelines.\n\n* We are leveraging GitLab template capabilities to override only the blocks\nthat need custom control.\n\n* We have split the pipeline into small chunks, controlling the downstream\npipelines to match exactly what we need.\n\n\nFrom there, the floor is yours. You could, for example, easily update the\nmain pipeline to trigger downstream pipelines for your software source code,\nwith the\n[trigger:rules:changes](https://docs.gitlab.com/ee/ci/yaml/#ruleschanges)\nkeyword. And use another\n[template](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/)\ndepending on the changes that happened. 
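\n\nAs a hypothetical sketch (the job names and the application pipeline file are illustrative), such a change-based dispatch could look like this:\n\n```yml\n\ninfra:\n  trigger:\n    include: .gitlab-ci/.first-layer.gitlab-ci.yml\n  rules:\n    - changes:\n        - terraform/**/*\n\napp:\n  trigger:\n    include: .gitlab-ci/.app.gitlab-ci.yml   # Hypothetical pipeline for your application code\n  rules:\n    - changes:\n        - src/**/*\n```\n\nEach downstream pipeline then runs only when the files it covers have changed.\n\n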
But that is another story.\n","engineering",[23,24,25,26,27],"CI/CD","CI","CD","DevSecOps platform","tutorial",{"slug":29,"featured":6,"template":30},"using-child-pipelines-to-continuously-deploy-to-five-environments","BlogPost","content:en-us:blog:using-child-pipelines-to-continuously-deploy-to-five-environments.yml","yaml","Using Child Pipelines To Continuously Deploy To Five Environments","content","en-us/blog/using-child-pipelines-to-continuously-deploy-to-five-environments.yml","en-us/blog/using-child-pipelines-to-continuously-deploy-to-five-environments","yml",{"_path":39,"_dir":40,"_draft":6,"_partial":6,"_locale":7,"data":41,"_id":452,"_type":32,"title":453,"_source":34,"_file":454,"_stem":455,"_extension":37},"/shared/en-us/main-navigation","en-us",{"logo":42,"freeTrial":47,"sales":52,"login":57,"items":62,"search":393,"minimal":424,"duo":443},{"config":43},{"href":44,"dataGaName":45,"dataGaLocation":46},"/","gitlab logo","header",{"text":48,"config":49},"Get free trial",{"href":50,"dataGaName":51,"dataGaLocation":46},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com&glm_content=default-saas-trial/","free trial",{"text":53,"config":54},"Talk to sales",{"href":55,"dataGaName":56,"dataGaLocation":46},"/sales/","sales",{"text":58,"config":59},"Sign in",{"href":60,"dataGaName":61,"dataGaLocation":46},"https://gitlab.com/users/sign_in/","sign in",[63,107,204,209,314,374],{"text":64,"config":65,"cards":67,"footer":90},"Platform",{"dataNavLevelOne":66},"platform",[68,74,82],{"title":64,"description":69,"link":70},"The most comprehensive AI-powered DevSecOps Platform",{"text":71,"config":72},"Explore our Platform",{"href":73,"dataGaName":66,"dataGaLocation":46},"/platform/",{"title":75,"description":76,"link":77},"GitLab Duo (AI)","Build software faster with AI at every stage of development",{"text":78,"config":79},"Meet GitLab Duo",{"href":80,"dataGaName":81,"dataGaLocation":46},"/gitlab-duo/","gitlab duo 
ai",{"title":83,"description":84,"link":85},"Why GitLab","10 reasons why Enterprises choose GitLab",{"text":86,"config":87},"Learn more",{"href":88,"dataGaName":89,"dataGaLocation":46},"/why-gitlab/","why gitlab",{"title":91,"items":92},"Get started with",[93,98,103],{"text":94,"config":95},"Platform Engineering",{"href":96,"dataGaName":97,"dataGaLocation":46},"/solutions/platform-engineering/","platform engineering",{"text":99,"config":100},"Developer Experience",{"href":101,"dataGaName":102,"dataGaLocation":46},"/developer-experience/","Developer experience",{"text":104,"config":105},"MLOps",{"href":106,"dataGaName":104,"dataGaLocation":46},"/topics/devops/the-role-of-ai-in-devops/",{"text":108,"left":109,"config":110,"link":112,"lists":116,"footer":186},"Product",true,{"dataNavLevelOne":111},"solutions",{"text":113,"config":114},"View all Solutions",{"href":115,"dataGaName":111,"dataGaLocation":46},"/solutions/",[117,141,165],{"title":118,"description":119,"link":120,"items":125},"Automation","CI/CD and automation to accelerate deployment",{"config":121},{"icon":122,"href":123,"dataGaName":124,"dataGaLocation":46},"AutomatedCodeAlt","/solutions/delivery-automation/","automated software delivery",[126,129,133,137],{"text":23,"config":127},{"href":128,"dataGaLocation":46,"dataGaName":23},"/solutions/continuous-integration/",{"text":130,"config":131},"AI-Assisted Development",{"href":80,"dataGaLocation":46,"dataGaName":132},"AI assisted development",{"text":134,"config":135},"Source Code Management",{"href":136,"dataGaLocation":46,"dataGaName":134},"/solutions/source-code-management/",{"text":138,"config":139},"Automated Software Delivery",{"href":123,"dataGaLocation":46,"dataGaName":140},"Automated software delivery",{"title":142,"description":143,"link":144,"items":149},"Security","Deliver code faster without compromising security",{"config":145},{"href":146,"dataGaName":147,"dataGaLocation":46,"icon":148},"/solutions/security-compliance/","security and 
compliance","ShieldCheckLight",[150,155,160],{"text":151,"config":152},"Application Security Testing",{"href":153,"dataGaName":154,"dataGaLocation":46},"/solutions/application-security-testing/","Application security testing",{"text":156,"config":157},"Software Supply Chain Security",{"href":158,"dataGaLocation":46,"dataGaName":159},"/solutions/supply-chain/","Software supply chain security",{"text":161,"config":162},"Software Compliance",{"href":163,"dataGaName":164,"dataGaLocation":46},"/solutions/software-compliance/","software compliance",{"title":166,"link":167,"items":172},"Measurement",{"config":168},{"icon":169,"href":170,"dataGaName":171,"dataGaLocation":46},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[173,177,181],{"text":174,"config":175},"Visibility & Measurement",{"href":170,"dataGaLocation":46,"dataGaName":176},"Visibility and Measurement",{"text":178,"config":179},"Value Stream Management",{"href":180,"dataGaLocation":46,"dataGaName":178},"/solutions/value-stream-management/",{"text":182,"config":183},"Analytics & Insights",{"href":184,"dataGaLocation":46,"dataGaName":185},"/solutions/analytics-and-insights/","Analytics and insights",{"title":187,"items":188},"GitLab for",[189,194,199],{"text":190,"config":191},"Enterprise",{"href":192,"dataGaLocation":46,"dataGaName":193},"/enterprise/","enterprise",{"text":195,"config":196},"Small Business",{"href":197,"dataGaLocation":46,"dataGaName":198},"/small-business/","small business",{"text":200,"config":201},"Public Sector",{"href":202,"dataGaLocation":46,"dataGaName":203},"/solutions/public-sector/","public sector",{"text":205,"config":206},"Pricing",{"href":207,"dataGaName":208,"dataGaLocation":46,"dataNavLevelOne":208},"/pricing/","pricing",{"text":210,"config":211,"link":213,"lists":217,"feature":301},"Resources",{"dataNavLevelOne":212},"resources",{"text":214,"config":215},"View all 
resources",{"href":216,"dataGaName":212,"dataGaLocation":46},"/resources/",[218,251,273],{"title":219,"items":220},"Getting started",[221,226,231,236,241,246],{"text":222,"config":223},"Install",{"href":224,"dataGaName":225,"dataGaLocation":46},"/install/","install",{"text":227,"config":228},"Quick start guides",{"href":229,"dataGaName":230,"dataGaLocation":46},"/get-started/","quick setup checklists",{"text":232,"config":233},"Learn",{"href":234,"dataGaLocation":46,"dataGaName":235},"https://university.gitlab.com/","learn",{"text":237,"config":238},"Product documentation",{"href":239,"dataGaName":240,"dataGaLocation":46},"https://docs.gitlab.com/","product documentation",{"text":242,"config":243},"Best practice videos",{"href":244,"dataGaName":245,"dataGaLocation":46},"/getting-started-videos/","best practice videos",{"text":247,"config":248},"Integrations",{"href":249,"dataGaName":250,"dataGaLocation":46},"/integrations/","integrations",{"title":252,"items":253},"Discover",[254,259,263,268],{"text":255,"config":256},"Customer success stories",{"href":257,"dataGaName":258,"dataGaLocation":46},"/customers/","customer success stories",{"text":260,"config":261},"Blog",{"href":262,"dataGaName":5,"dataGaLocation":46},"/blog/",{"text":264,"config":265},"Remote",{"href":266,"dataGaName":267,"dataGaLocation":46},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"text":269,"config":270},"TeamOps",{"href":271,"dataGaName":272,"dataGaLocation":46},"/teamops/","teamops",{"title":274,"items":275},"Connect",[276,281,286,291,296],{"text":277,"config":278},"GitLab 
Services",{"href":279,"dataGaName":280,"dataGaLocation":46},"/services/","services",{"text":282,"config":283},"Community",{"href":284,"dataGaName":285,"dataGaLocation":46},"/community/","community",{"text":287,"config":288},"Forum",{"href":289,"dataGaName":290,"dataGaLocation":46},"https://forum.gitlab.com/","forum",{"text":292,"config":293},"Events",{"href":294,"dataGaName":295,"dataGaLocation":46},"/events/","events",{"text":297,"config":298},"Partners",{"href":299,"dataGaName":300,"dataGaLocation":46},"/partners/","partners",{"backgroundColor":302,"textColor":303,"text":304,"image":305,"link":309},"#2f2a6b","#fff","Insights for the future of software development",{"altText":306,"config":307},"the source promo card",{"src":308},"/images/navigation/the-source-promo-card.svg",{"text":310,"config":311},"Read the latest",{"href":312,"dataGaName":313,"dataGaLocation":46},"/the-source/","the source",{"text":315,"config":316,"lists":318},"Company",{"dataNavLevelOne":317},"company",[319],{"items":320},[321,326,332,334,339,344,349,354,359,364,369],{"text":322,"config":323},"About",{"href":324,"dataGaName":325,"dataGaLocation":46},"/company/","about",{"text":327,"config":328,"footerGa":331},"Jobs",{"href":329,"dataGaName":330,"dataGaLocation":46},"/jobs/","jobs",{"dataGaName":330},{"text":292,"config":333},{"href":294,"dataGaName":295,"dataGaLocation":46},{"text":335,"config":336},"Leadership",{"href":337,"dataGaName":338,"dataGaLocation":46},"/company/team/e-group/","leadership",{"text":340,"config":341},"Team",{"href":342,"dataGaName":343,"dataGaLocation":46},"/company/team/","team",{"text":345,"config":346},"Handbook",{"href":347,"dataGaName":348,"dataGaLocation":46},"https://handbook.gitlab.com/","handbook",{"text":350,"config":351},"Investor relations",{"href":352,"dataGaName":353,"dataGaLocation":46},"https://ir.gitlab.com/","investor relations",{"text":355,"config":356},"Trust Center",{"href":357,"dataGaName":358,"dataGaLocation":46},"/security/","trust 
center",{"text":360,"config":361},"AI Transparency Center",{"href":362,"dataGaName":363,"dataGaLocation":46},"/ai-transparency-center/","ai transparency center",{"text":365,"config":366},"Newsletter",{"href":367,"dataGaName":368,"dataGaLocation":46},"/company/contact/","newsletter",{"text":370,"config":371},"Press",{"href":372,"dataGaName":373,"dataGaLocation":46},"/press/","press",{"text":375,"config":376,"lists":377},"Contact us",{"dataNavLevelOne":317},[378],{"items":379},[380,383,388],{"text":53,"config":381},{"href":55,"dataGaName":382,"dataGaLocation":46},"talk to sales",{"text":384,"config":385},"Get help",{"href":386,"dataGaName":387,"dataGaLocation":46},"/support/","get help",{"text":389,"config":390},"Customer portal",{"href":391,"dataGaName":392,"dataGaLocation":46},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":394,"login":395,"suggestions":402},"Close",{"text":396,"link":397},"To search repositories and projects, login to",{"text":398,"config":399},"gitlab.com",{"href":60,"dataGaName":400,"dataGaLocation":401},"search login","search",{"text":403,"default":404},"Suggestions",[405,407,411,413,417,421],{"text":75,"config":406},{"href":80,"dataGaName":75,"dataGaLocation":401},{"text":408,"config":409},"Code Suggestions (AI)",{"href":410,"dataGaName":408,"dataGaLocation":401},"/solutions/code-suggestions/",{"text":23,"config":412},{"href":128,"dataGaName":23,"dataGaLocation":401},{"text":414,"config":415},"GitLab on AWS",{"href":416,"dataGaName":414,"dataGaLocation":401},"/partners/technology-partners/aws/",{"text":418,"config":419},"GitLab on Google Cloud",{"href":420,"dataGaName":418,"dataGaLocation":401},"/partners/technology-partners/google-cloud-platform/",{"text":422,"config":423},"Why GitLab?",{"href":88,"dataGaName":422,"dataGaLocation":401},{"freeTrial":425,"mobileIcon":430,"desktopIcon":435,"secondaryButton":438},{"text":426,"config":427},"Start free 
trial",{"href":428,"dataGaName":51,"dataGaLocation":429},"https://gitlab.com/-/trials/new/","nav",{"altText":431,"config":432},"Gitlab Icon",{"src":433,"dataGaName":434,"dataGaLocation":429},"/images/brand/gitlab-logo-tanuki.svg","gitlab icon",{"altText":431,"config":436},{"src":437,"dataGaName":434,"dataGaLocation":429},"/images/brand/gitlab-logo-type.svg",{"text":439,"config":440},"Get Started",{"href":441,"dataGaName":442,"dataGaLocation":429},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/compare/gitlab-vs-github/","get started",{"freeTrial":444,"mobileIcon":448,"desktopIcon":450},{"text":445,"config":446},"Learn more about GitLab Duo",{"href":80,"dataGaName":447,"dataGaLocation":429},"gitlab duo",{"altText":431,"config":449},{"src":433,"dataGaName":434,"dataGaLocation":429},{"altText":431,"config":451},{"src":437,"dataGaName":434,"dataGaLocation":429},"content:shared:en-us:main-navigation.yml","Main Navigation","shared/en-us/main-navigation.yml","shared/en-us/main-navigation",{"_path":457,"_dir":40,"_draft":6,"_partial":6,"_locale":7,"title":458,"button":459,"image":463,"config":467,"_id":469,"_type":32,"_source":34,"_file":470,"_stem":471,"_extension":37},"/shared/en-us/banner","is now in public beta!",{"text":86,"config":460},{"href":461,"dataGaName":462,"dataGaLocation":46},"/gitlab-duo/agent-platform/","duo banner",{"altText":464,"config":465},"GitLab Duo Agent Platform",{"src":466},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1753720689/somrf9zaunk0xlt7ne4x.svg",{"layout":468},"release","content:shared:en-us:banner.yml","shared/en-us/banner.yml","shared/en-us/banner",{"_path":473,"_dir":40,"_draft":6,"_partial":6,"_locale":7,"data":474,"_id":677,"_type":32,"title":678,"_source":34,"_file":679,"_stem":680,"_extension":37},"/shared/en-us/main-footer",{"text":475,"source":476,"edit":482,"contribute":487,"config":492,"items":497,"minimal":669},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' 
is under license",{"text":477,"config":478},"View page source",{"href":479,"dataGaName":480,"dataGaLocation":481},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":483,"config":484},"Edit this page",{"href":485,"dataGaName":486,"dataGaLocation":481},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":488,"config":489},"Please contribute",{"href":490,"dataGaName":491,"dataGaLocation":481},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":493,"facebook":494,"youtube":495,"linkedin":496},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[498,520,576,605,639],{"title":64,"links":499,"subMenu":503},[500],{"text":26,"config":501},{"href":73,"dataGaName":502,"dataGaLocation":481},"devsecops platform",[504],{"title":205,"links":505},[506,510,515],{"text":507,"config":508},"View plans",{"href":207,"dataGaName":509,"dataGaLocation":481},"view plans",{"text":511,"config":512},"Why Premium?",{"href":513,"dataGaName":514,"dataGaLocation":481},"/pricing/premium/","why premium",{"text":516,"config":517},"Why Ultimate?",{"href":518,"dataGaName":519,"dataGaLocation":481},"/pricing/ultimate/","why ultimate",{"title":521,"links":522},"Solutions",[523,528,530,532,537,542,546,549,553,558,560,563,566,571],{"text":524,"config":525},"Digital transformation",{"href":526,"dataGaName":527,"dataGaLocation":481},"/topics/digital-transformation/","digital transformation",{"text":151,"config":529},{"href":153,"dataGaName":151,"dataGaLocation":481},{"text":140,"config":531},{"href":123,"dataGaName":124,"dataGaLocation":481},{"text":533,"config":534},"Agile development",{"href":535,"dataGaName":536,"dataGaLocation":481},"/solutions/agile-delivery/","agile 
delivery",{"text":538,"config":539},"Cloud transformation",{"href":540,"dataGaName":541,"dataGaLocation":481},"/topics/cloud-native/","cloud transformation",{"text":543,"config":544},"SCM",{"href":136,"dataGaName":545,"dataGaLocation":481},"source code management",{"text":23,"config":547},{"href":128,"dataGaName":548,"dataGaLocation":481},"continuous integration & delivery",{"text":550,"config":551},"Value stream management",{"href":180,"dataGaName":552,"dataGaLocation":481},"value stream management",{"text":554,"config":555},"GitOps",{"href":556,"dataGaName":557,"dataGaLocation":481},"/solutions/gitops/","gitops",{"text":190,"config":559},{"href":192,"dataGaName":193,"dataGaLocation":481},{"text":561,"config":562},"Small business",{"href":197,"dataGaName":198,"dataGaLocation":481},{"text":564,"config":565},"Public sector",{"href":202,"dataGaName":203,"dataGaLocation":481},{"text":567,"config":568},"Education",{"href":569,"dataGaName":570,"dataGaLocation":481},"/solutions/education/","education",{"text":572,"config":573},"Financial services",{"href":574,"dataGaName":575,"dataGaLocation":481},"/solutions/finance/","financial 
services",{"title":210,"links":577},[578,580,582,584,587,589,591,593,595,597,599,601,603],{"text":222,"config":579},{"href":224,"dataGaName":225,"dataGaLocation":481},{"text":227,"config":581},{"href":229,"dataGaName":230,"dataGaLocation":481},{"text":232,"config":583},{"href":234,"dataGaName":235,"dataGaLocation":481},{"text":237,"config":585},{"href":239,"dataGaName":586,"dataGaLocation":481},"docs",{"text":260,"config":588},{"href":262,"dataGaName":5,"dataGaLocation":481},{"text":255,"config":590},{"href":257,"dataGaName":258,"dataGaLocation":481},{"text":264,"config":592},{"href":266,"dataGaName":267,"dataGaLocation":481},{"text":277,"config":594},{"href":279,"dataGaName":280,"dataGaLocation":481},{"text":269,"config":596},{"href":271,"dataGaName":272,"dataGaLocation":481},{"text":282,"config":598},{"href":284,"dataGaName":285,"dataGaLocation":481},{"text":287,"config":600},{"href":289,"dataGaName":290,"dataGaLocation":481},{"text":292,"config":602},{"href":294,"dataGaName":295,"dataGaLocation":481},{"text":297,"config":604},{"href":299,"dataGaName":300,"dataGaLocation":481},{"title":315,"links":606},[607,609,611,613,615,617,619,623,628,630,632,634],{"text":322,"config":608},{"href":324,"dataGaName":317,"dataGaLocation":481},{"text":327,"config":610},{"href":329,"dataGaName":330,"dataGaLocation":481},{"text":335,"config":612},{"href":337,"dataGaName":338,"dataGaLocation":481},{"text":340,"config":614},{"href":342,"dataGaName":343,"dataGaLocation":481},{"text":345,"config":616},{"href":347,"dataGaName":348,"dataGaLocation":481},{"text":350,"config":618},{"href":352,"dataGaName":353,"dataGaLocation":481},{"text":620,"config":621},"Sustainability",{"href":622,"dataGaName":620,"dataGaLocation":481},"/sustainability/",{"text":624,"config":625},"Diversity, inclusion and belonging (DIB)",{"href":626,"dataGaName":627,"dataGaLocation":481},"/diversity-inclusion-belonging/","Diversity, inclusion and 
belonging",{"text":355,"config":629},{"href":357,"dataGaName":358,"dataGaLocation":481},{"text":365,"config":631},{"href":367,"dataGaName":368,"dataGaLocation":481},{"text":370,"config":633},{"href":372,"dataGaName":373,"dataGaLocation":481},{"text":635,"config":636},"Modern Slavery Transparency Statement",{"href":637,"dataGaName":638,"dataGaLocation":481},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency statement",{"title":640,"links":641},"Contact Us",[642,645,647,649,654,659,664],{"text":643,"config":644},"Contact an expert",{"href":55,"dataGaName":56,"dataGaLocation":481},{"text":384,"config":646},{"href":386,"dataGaName":387,"dataGaLocation":481},{"text":389,"config":648},{"href":391,"dataGaName":392,"dataGaLocation":481},{"text":650,"config":651},"Status",{"href":652,"dataGaName":653,"dataGaLocation":481},"https://status.gitlab.com/","status",{"text":655,"config":656},"Terms of use",{"href":657,"dataGaName":658,"dataGaLocation":481},"/terms/","terms of use",{"text":660,"config":661},"Privacy statement",{"href":662,"dataGaName":663,"dataGaLocation":481},"/privacy/","privacy statement",{"text":665,"config":666},"Cookie preferences",{"dataGaName":667,"dataGaLocation":481,"id":668,"isOneTrustButton":109},"cookie preferences","ot-sdk-btn",{"items":670},[671,673,675],{"text":655,"config":672},{"href":657,"dataGaName":658,"dataGaLocation":481},{"text":660,"config":674},{"href":662,"dataGaName":663,"dataGaLocation":481},{"text":665,"config":676},{"dataGaName":667,"dataGaLocation":481,"id":668,"isOneTrustButton":109},"content:shared:en-us:main-footer.yml","Main 
Footer","shared/en-us/main-footer.yml","shared/en-us/main-footer",[682],{"_path":683,"_dir":684,"_draft":6,"_partial":6,"_locale":7,"content":685,"config":689,"_id":691,"_type":32,"title":692,"_source":34,"_file":693,"_stem":694,"_extension":37},"/en-us/blog/authors/olivier-dupr","authors",{"name":18,"config":686},{"headshot":687,"ctfId":688},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1750713474/cj6odchlpoqxbibenvye.png","4VIckvQsyfNxEtz4pM42aP",{"template":690},"BlogAuthor","content:en-us:blog:authors:olivier-dupr.yml","Olivier Dupr","en-us/blog/authors/olivier-dupr.yml","en-us/blog/authors/olivier-dupr",{"_path":696,"_dir":40,"_draft":6,"_partial":6,"_locale":7,"header":697,"eyebrow":698,"blurb":699,"button":700,"secondaryButton":704,"_id":706,"_type":32,"title":707,"_source":34,"_file":708,"_stem":709,"_extension":37},"/shared/en-us/next-steps","Start shipping better software faster","50%+ of the Fortune 100 trust GitLab","See what your team can do with the intelligent\n\n\nDevSecOps platform.\n",{"text":48,"config":701},{"href":702,"dataGaName":51,"dataGaLocation":703},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":53,"config":705},{"href":55,"dataGaName":56,"dataGaLocation":703},"content:shared:en-us:next-steps.yml","Next Steps","shared/en-us/next-steps.yml","shared/en-us/next-steps",1755803040768]