{
  "id": 39,
  "benchmarkId": "RGS_RKE2_STIG",
  "slug": "rancher_government_solutions_rke2",
  "status": "accepted",
  "statusDate": "2024-12-20T00:00:00.000Z",
  "title": "Rancher Government Solutions RKE2 Security Technical Implementation Guide",
  "description": "This Security Technical Implementation Guide is published as a tool to improve the security of Department of Defense (DOD) information systems. The requirements are derived from the National Institute of Standards and Technology (NIST) 800-53 and related documents. Comments or proposed revisions to this document should be sent via email to the following address: disa.stig_spt@mail.mil.",
  "version": "2",
  "createdAt": "2025-10-21T11:13:21.173Z",
  "updatedAt": "2025-10-23T20:54:28.119Z",
  "groups": [
    {
      "id": 2714,
      "benchmarkId": 39,
      "groupId": "V-254553",
      "title": "SRG-APP-000014-CTR-000035",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254553r1016525_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "high",
      "ruleVersion": "CNTR-R2-000010",
      "ruleTitle": "Rancher RKE2 must protect authenticity of communications sessions with the use of FIPS-validated 140-2 or 140-3 security requirements for cryptographic modules.",
      "ruleVulnDiscussion": "Use strong TLS settings.\n\nRKE2 uses FIPS-validated BoringCrypto modules. The RKE2 Server can prohibit the use of SSL and unauthorized versions of TLS protocols to properly secure communication. There is a lot of traffic between RKE2 nodes to deploy, update, and delete resources, so it is important to set strong TLS settings on top of this default feature. It is also important to use approved cipher suites. This ensures the confidentiality and integrity of the transmitted information so that an attacker cannot read or alter this communication.\n\nThe use of unsupported protocols exposes Kubernetes to rogue traffic interception, man-in-the-middle attacks, and impersonation of users or services from the container platform runtime, registry, and key store.\n\nTo enforce a minimum TLS version and the cipher suites to be used by the various components of RKE2, the settings \"tls-min-version\" and \"tls-cipher-suites\" must be set.\n\nFurther documentation of the FIPS modules can be found here: https://docs.rke2.io/security/fips_support.\n\nSatisfies: SRG-APP-000014-CTR-000035, SRG-APP-000014-CTR-000040, SRG-APP-000219-CTR-000550, SRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095, SRG-APP-000514-CTR-001315, SRG-APP-000560-CTR-001340, SRG-APP-000605-CTR-001380, SRG-APP-000610-CTR-001385, SRG-APP-000635-CTR-001405, SRG-APP-000645-CTR-001410",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000068",
      "ruleFixText": "Configure the use of strong TLS settings.\n\nEdit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:\n\nkube-controller-manager-arg: \n- \"tls-min-version=VersionTLS12\" [or higher]\n- \"tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"\nkube-scheduler-arg: \n- \"tls-min-version=VersionTLS12\"\n- \"tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"\nkube-apiserver-arg: \n- \"tls-min-version=VersionTLS12\"\n- \"tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"\n\nOnce configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-57986r1016524_fix",
      "ruleCheckSystem": "C-58037r1016523_chk",
      "ruleCheckContent": "Use strong TLS settings.\n\nOn an RKE2 server, run each command:\n\n/bin/ps -ef | grep kube-apiserver | grep -v grep\n\n/bin/ps -ef | grep kube-controller-manager | grep -v grep\n\n/bin/ps -ef | grep kube-scheduler | grep -v grep\n\nFor each, look for the existence of tls-min-version (append \"| grep tls-min-version\" to each command as an aid):\nIf the setting \"tls-min-version\" is not configured or it is set to \"VersionTLS10\" or \"VersionTLS11\", this is a finding.\n\nFor each, look for the existence of tls-cipher-suites.\nIf \"tls-cipher-suites\" is not set for all servers, or does not contain the following, this is a finding:\n\n--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2715,
      "benchmarkId": 39,
      "groupId": "V-254554",
      "title": "SRG-APP-000023-CTR-000055",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254554r1043176_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000030",
      "ruleTitle": "RKE2 must use a centralized user management solution to support account management functions.",
      "ruleVulnDiscussion": "The Kubernetes Controller Manager is a background process that embeds core control loops regulating cluster system state through the API Server. Every process executed in a pod has an associated service account. By default, service accounts use the same credentials for authentication. Implementing the default settings poses a high risk to the Kubernetes Controller Manager. Setting the use-service-account-credential value lowers the attack surface by generating unique service accounts settings for each controller instance.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000015",
      "ruleFixText": "Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following \"kube-controller-manager-arg\" argument:\n- use-service-account-credentials=true\n\nOnce the configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-57987r918225_fix",
      "ruleCheckSystem": "C-58038r859230_chk",
      "ruleCheckContent": "Ensure use-service-account-credentials argument is set correctly.\n\nRun this command on the RKE2 Control Plane:\n/bin/ps -ef | grep kube-controller-manager | grep -v grep\n\nIf --use-service-account-credentials argument is not set to \"true\" or is not configured, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2716,
      "benchmarkId": 39,
      "groupId": "V-254555",
      "title": "SRG-APP-000026-CTR-000070",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254555r1056186_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000060",
      "ruleTitle": "Rancher RKE2 components must be configured in accordance with the security configuration settings based on DOD security configuration or implementation guidance, including SRGs, STIGs, NSA configuration guides, CTOs, and DTMs.",
      "ruleVulnDiscussion": "Once an attacker establishes access to a system, the attacker often attempts to create a persistent method of re-establishing access. One way to accomplish this is for the attacker to modify an existing account. Auditing of account creation is one method for mitigating this risk. A comprehensive account management process will ensure an audit trail documents the creation of application user accounts and, as required, notifies administrators and/or the application when accounts are created. Such a process greatly reduces the risk that accounts will be surreptitiously created and provides logging that can be used for forensic purposes.\n\nWithin Rancher RKE2, audit data can be generated from any of the deployed container platform components. This audit data is important when there are issues, such as security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to know where within the container platform the event occurred.\n\nTo address access requirements, many application developers choose to integrate their applications with enterprise-level authentication/access/auditing mechanisms that meet or exceed access control policy requirements. Such integration allows the application developer to offload those access control functions and focus on core application features and functionality.\n\nSatisfies: SRG-APP-000026-CTR-000070, SRG-APP-000027-CTR-000075, SRG-APP-000028-CTR-000080, SRG-APP-000092-CTR-000165, SRG-APP-000095-CTR-000170, SRG-APP-000096-CTR-000175, SRG-APP-000097-CTR-000180, SRG-APP-000098-CTR-000185, SRG-APP-000099-CTR-000190, SRG-APP-000100-CTR-000195, SRG-APP-000101-CTR-000205, SRG-APP-000319-CTR-000745, SRG-APP-000320-CTR-000750, SRG-APP-000343-CTR-000780, SRG-APP-000358-CTR-000805, SRG-APP-000374-CTR-000865, SRG-APP-000375-CTR-000870, SRG-APP-000381-CTR-000905, SRG-APP-000409-CTR-000990, SRG-APP-000492-CTR-001220, SRG-APP-000493-CTR-001225, SRG-APP-000494-CTR-001230, SRG-APP-000495-CTR-001235, SRG-APP-000496-CTR-001240, SRG-APP-000497-CTR-001245, SRG-APP-000498-CTR-001250, SRG-APP-000499-CTR-001255, SRG-APP-000500-CTR-001260, SRG-APP-000501-CTR-001265, SRG-APP-000502-CTR-001270, SRG-APP-000503-CTR-001275, SRG-APP-000504-CTR-001280, SRG-APP-000505-CTR-001285, SRG-APP-000506-CTR-001290, SRG-APP-000507-CTR-001295, SRG-APP-000508-CTR-001300, SRG-APP-000509-CTR-001305, SRG-APP-000510-CTR-001310, SRG-APP-000516-CTR-000790, SRG-APP-000516-CTR-001325",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000018",
      "ruleFixText": "Audit logging and policies:\n\nEdit the /etc/rancher/rke2/config.yaml file, and enable the audit policy:\naudit-policy-file: /etc/rancher/rke2/audit-policy.yaml\n\n1. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains required configuration.\n\n--audit-policy-file= Path to the file that defines the audit policy configuration. (Example: /etc/rancher/rke2/audit-policy.yaml)\n--audit-log-mode=blocking-strict\n\nIf configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server\n\n2. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains required configuration.\n\nIf using RKE2 v1.24 or older, set:\nprofile: cis-1.6\n\nIf using RKE2 v1.25 or newer, set:\nprofile: cis-1.23\n\nAvailable with October 2023 releases (v1.25.15+rke2r1, v1.26.10+rke2r1, v1.27.7+rke2r1, v1.28.3+rke2r1), use the generic profile \"cis\".\n\nIf configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server\n\n3. Edit the audit policy file, by default located at /etc/rancher/rke2/audit-policy.yaml to look like below:\n\napiVersion: audit.k8s.io/v1\nkind: Policy\nmetadata:\n  name: rke2-audit-policy\nrules:\n  - level: Metadata\n    resources:\n    - group: \"\"\n      resources: [\"secrets\"]\n  - level: RequestResponse\n    resources:\n    - group: \"\"\n      resources: [\"*\"]\n\nIf configuration files are updated on a host, restart the RKE2 Service. Run the command \"systemctl restart rke2-server\" for server hosts and \"systemctl restart rke2-agent\" for agent hosts.",
      "ruleFixId": "F-57988r1028344_fix",
      "ruleCheckSystem": "C-58039r1056186_chk",
      "ruleCheckContent": "Audit logging and policies:\n\n1. On all hosts running RKE2 Server, run the command:\n/bin/ps -ef | grep kube-apiserver | grep -v grep\n\nIf --audit-policy-file is not set, this is a finding.\nIf --audit-log-mode is not = \"blocking-strict\", this is a finding.\n\n2. Ensure the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, contains CIS profile setting. Run the following command:\ncat /etc/rancher/rke2/config.yaml \n\nRKE2 can be started with the profile flag set to cis, cis-1.23, or cis-1.6 depending on the RKE2 version. Available with October 2023 releases (v1.25.15+rke2r1, v1.26.10+rke2r1, v1.27.7+rke2r1, v1.28.3+rke2r1), use the generic profile: \"cis\".\n\nIf a value for profile is not found or is not set correctly, this is a finding. (Example: \"profile: cis\")\n\n3. Check the contents of the audit-policy file.\nBy default, RKE2 expects the audit-policy file to be located at /etc/rancher/rke2/audit-policy.yaml; however, this location can be overridden in the /etc/rancher/rke2/config.yaml file with argument 'kube-apiserver-arg: \"audit-policy-file=/etc/rancher/rke2/audit-policy.yaml\"'.\n\nIf the audit policy file does not exist or does not look like the following, this is a finding.\n\napiVersion: audit.k8s.io/v1\nkind: Policy\nmetadata:\n  name: rke2-audit-policy\nrules:\n  - level: Metadata\n    resources:\n    - group: \"\"\n      resources: [\"secrets\"]\n  - level: RequestResponse\n    resources:\n    - group: \"\"\n      resources: [\"*\"]",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2717,
      "benchmarkId": 39,
      "groupId": "V-254556",
      "title": "SRG-APP-000033-CTR-000090",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254556r960792_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000100",
      "ruleTitle": "The Kubernetes Controller Manager must have secure binding.",
      "ruleVulnDiscussion": "Limiting the number of attack vectors and implementing authentication and encryption on the endpoints available to external sources is paramount when securing the overall Kubernetes cluster. The Controller Manager API service exposes port 10252/TCP by default for health and metrics information use. This port does not encrypt or authenticate connections. If this port is exposed externally, an attacker can use this port to attack the entire Kubernetes cluster. By setting the bind address to only localhost (i.e., 127.0.0.1), only those internal services that require health and metrics information can access the Controller Manager API.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000213",
      "ruleFixText": "Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following \"kube-controller-manager-arg\" argument:\n- bind-address=127.0.0.1\n\nOnce the configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-57989r918227_fix",
      "ruleCheckSystem": "C-58040r859236_chk",
      "ruleCheckContent": "Ensure bind-address is set correctly. \n\nRun this command on the RKE2 Control Plane:\n/bin/ps -ef | grep kube-controller-manager | grep -v grep\n\nIf --bind-address is not set to \"127.0.0.1\" or is not configured, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2718,
      "benchmarkId": 39,
      "groupId": "V-254557",
      "title": "SRG-APP-000033-CTR-000090",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254557r960792_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000110",
      "ruleTitle": "The Kubernetes Kubelet must have anonymous authentication disabled.",
      "ruleVulnDiscussion": "The RKE2 registry is used to store images and is the keeper of truth for trusted images within the platform. To guarantee the images' integrity, access to the registry must be limited to those individuals who need to perform tasks on the images, such as update, creation, or deletion. Without this access control, images that are in use by RKE2 can be deleted, causing a denial of service (DoS), and images can be modified or introduced without going through the testing and validation process, allowing for the intentional or unintentional introduction of containers with flaws and vulnerabilities.\n\nBy allowing anonymous connections, the controls put in place to secure the Kubelet can be bypassed. Setting anonymous authentication to \"false\" also disables unauthenticated requests from kubelets.\n\nWhile there are instances where anonymous connections may be needed (e.g., health checks) and Role-Based Access Controls (RBAC) are in place to limit the anonymous access, this access must be disabled and only enabled when necessary.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000213",
      "ruleFixText": "Edit the Kubernetes Kubelet file /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following:\n--anonymous-auth=false\n\nOnce the configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-57990r940053_fix",
      "ruleCheckSystem": "C-58041r859239_chk",
      "ruleCheckContent": "Ensure anonymous-auth is set correctly so anonymous requests will be rejected.\n\nRun this command on each node:\n/bin/ps -ef | grep kubelet | grep -v grep\n\nIf --anonymous-auth is set to \"true\" or is not configured, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2719,
      "benchmarkId": 39,
      "groupId": "V-254558",
      "title": "SRG-APP-000033-CTR-000095",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254558r960792_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "high",
      "ruleVersion": "CNTR-R2-000120",
      "ruleTitle": "The Kubernetes API server must have the insecure port flag disabled.",
      "ruleVulnDiscussion": "By default, the API server will listen on two ports. One port is the secure port and the other port is called the \"localhost port\". This port is also called the \"insecure port\", port 8080. Any requests to this port bypass authentication and authorization checks. If this port is left open, anyone who gains access to the host on which the master is running can bypass all authorization and authentication mechanisms put in place, and have full control over the entire cluster.\n\nClose the insecure port by setting the API server's --insecure-port flag to \"0\", ensuring that the --insecure-bind-address is not set.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000213",
      "ruleFixText": "Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:\n\nkube-apiserver-arg:\n- insecure-port=0\n\nOnce configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-57991r894456_fix",
      "ruleCheckSystem": "C-58042r894455_chk",
      "ruleCheckContent": "Ensure insecure-port is set correctly.\n\nIf running v1.20 through v1.23, this is the default configuration, so no change is necessary if it is not configured.\nIf running v1.24 or newer, this check is Not Applicable.\n\nRun this command on the RKE2 Control Plane:\n/bin/ps -ef | grep kube-apiserver | grep -v grep\n\nIf --insecure-port is not set to \"0\" or is not configured, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2720,
      "benchmarkId": 39,
      "groupId": "V-254559",
      "title": "SRG-APP-000033-CTR-000095",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254559r960792_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "high",
      "ruleVersion": "CNTR-R2-000130",
      "ruleTitle": "The Kubernetes Kubelet must have the read-only port flag disabled.",
      "ruleVulnDiscussion": "Kubelet serves a small REST API with read access on port 10255. The read-only port for Kubernetes provides no authentication or authorization security control. Providing unrestricted access on port 10255 exposes Kubernetes pods and containers to malicious attacks or compromise. Port 10255 is deprecated and should be disabled.\n\nClose the read-only port by setting the Kubelet's \"--read-only-port\" flag to \"0\".",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000213",
      "ruleFixText": "Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:\nkubelet-arg:\n--read-only-port=0\n\nIf configuration files are updated on a host, restart the RKE2 Service. Run the command \"systemctl restart rke2-server\" for server hosts and \"systemctl restart rke2-agent\" for agent hosts.",
      "ruleFixId": "F-57992r940056_fix",
      "ruleCheckSystem": "C-58043r940055_chk",
      "ruleCheckContent": "Ensure read-only-port is set correctly so anonymous requests will be rejected.\n\nRun this command on each node:\n/bin/ps -ef | grep kubelet | grep -v grep\n\nIf --read-only-port is not set to \"0\" or is not configured, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2721,
      "benchmarkId": 39,
      "groupId": "V-254560",
      "title": "SRG-APP-000033-CTR-000095",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254560r960792_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "high",
      "ruleVersion": "CNTR-R2-000140",
      "ruleTitle": "The Kubernetes API server must have the insecure bind address not set.",
      "ruleVulnDiscussion": "By default, the API server will listen on two ports and addresses. One address is the secure address and the other address is called the \"insecure bind\" address and is set by default to localhost. Any requests to this address bypass authentication and authorization checks. If this insecure bind address is set to localhost, anyone who gains access to the host on which the master is running can bypass all authorization and authentication mechanisms put in place and have full control over the entire cluster.\n\nClose or set the insecure bind address by setting the API server's --insecure-bind-address flag to an IP or leave it unset and ensure that the --insecure-bind-port is not set.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000213",
      "ruleFixText": "If running rke2 Kubernetes version > 1.20, this requirement is NA.\n\nUpgrade to a supported version of RKE2 Kubernetes.",
      "ruleFixId": "F-57993r918230_fix",
      "ruleCheckSystem": "C-58044r918229_chk",
      "ruleCheckContent": "If running rke2 Kubernetes version > 1.20, this requirement is not applicable (NA).\n\nEnsure insecure-bind-address is set correctly. \n\nRun the command:\nps -ef | grep kube-apiserver\n\nIf the setting insecure-bind-address is found and set to \"localhost\", this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2722,
      "benchmarkId": 39,
      "groupId": "V-254561",
      "title": "SRG-APP-000033-CTR-000095",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254561r960792_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "high",
      "ruleVersion": "CNTR-R2-000150",
      "ruleTitle": "The Kubernetes kubelet must enable explicit authorization.",
      "ruleVulnDiscussion": "Kubelet is the primary agent on each node. The API server communicates with each kubelet to perform tasks such as starting/stopping pods. By default, kubelets allow all authenticated requests, even anonymous ones, without requiring any authorization checks from the API server. This default behavior bypasses any authorization controls put in place to limit what users may perform within the Kubernetes cluster. To change this behavior, the default setting of AlwaysAllow for the authorization mode must be set to \"Webhook\".",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000213",
      "ruleFixText": "Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on every RKE2 node and set the following \"kubelet-arg\" argument:\n\n- authorization-mode=Webhook\n\nOnce the configuration file is updated, restart the RKE2 Server or Agent. Run the command:\nsystemctl restart rke2-server or systemctl restart rke2-agent",
      "ruleFixId": "F-57994r918233_fix",
      "ruleCheckSystem": "C-58045r918232_chk",
      "ruleCheckContent": "Ensure authorization-mode is set correctly in the kubelet on each rke2 node.\n\nRun this command on each node:\n/bin/ps -ef | grep kubelet | grep -v grep\n\nIf --authorization-mode is not set to \"Webhook\" or is not configured, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2723,
      "benchmarkId": 39,
      "groupId": "V-254562",
      "title": "SRG-APP-000033-CTR-000100",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254562r960792_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "high",
      "ruleVersion": "CNTR-R2-000160",
      "ruleTitle": "The Kubernetes API server must have anonymous authentication disabled.",
      "ruleVulnDiscussion": "The Kubernetes API Server controls Kubernetes via an API interface. A user who has access to the API essentially has root access to the entire Kubernetes cluster. To control access, users must be authenticated and authorized. By allowing anonymous connections, the controls put in place to secure the API can be bypassed.\n\nSetting anonymous authentication to \"false\" also disables unauthenticated requests from kubelets.\n\nWhile there are instances where anonymous connections may be needed (e.g., health checks) and Role-Based Access Controls (RBAC) are in place to limit the anonymous access, this access should be disabled, and only enabled when necessary.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000213",
      "ruleFixText": "Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following \"kube-apiserver-arg\" argument:\n\n- anonymous-auth=false\n\nOnce the configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-57995r918235_fix",
      "ruleCheckSystem": "C-58046r859254_chk",
      "ruleCheckContent": "Ensure anonymous-auth argument is set correctly.\n\nRun this command on the RKE2 Control Plane:\n/bin/ps -ef | grep kube-apiserver | grep -v grep\n\nIf --anonymous-auth is set to \"true\" or is not configured, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2724,
      "benchmarkId": 39,
      "groupId": "V-254563",
      "title": "SRG-APP-000100-CTR-000200",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254563r960906_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000320",
      "ruleTitle": "All audit records must identify any containers associated with the event within Rancher RKE2.",
      "ruleVulnDiscussion": "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate.\n\nRetaining logs for at least 30 days ensures that events can be investigated or correlated after the fact. Set the audit log retention period to 30 days or as per business requirements.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-001487",
      "ruleFixText": "Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following \"kube-apiserver-arg\" argument:\n\n- audit-log-maxage=30\n\nOnce the configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-57996r918237_fix",
      "ruleCheckSystem": "C-58047r859257_chk",
      "ruleCheckContent": "Ensure audit-log-maxage is set correctly.\n\nRun the below command on the RKE2 Control Plane:\n/bin/ps -ef | grep kube-apiserver | grep -v grep\n\nIf --audit-log-maxage argument is not set to at least 30 or is not configured, this is a finding. \n(By default, RKE2 sets the --audit-log-maxage argument parameter to 30.)",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2730,
      "benchmarkId": 39,
      "groupId": "V-254569",
      "title": "SRG-APP-000233-CTR-000585",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254569r1016537_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000940",
      "ruleTitle": "Rancher RKE2 runtime must isolate security functions from nonsecurity functions.",
      "ruleVulnDiscussion": "RKE2 runs as isolated as possible.\n\nRKE2 is a container-based Kubernetes distribution. A container image is essentially a complete and executable version of an application, which relies only on the host's OS kernel. Running containers use resource isolation features in the OS kernel, such as cgroups in Linux, to run multiple independent containers on the same OS. Unless part of the core RKE2 system or configured explicitly, containers managed by RKE2 should not have access to host resources.\n\nProper hardening of the surrounding environment is independent of RKE2 but ensures the overall security posture.\n\nWhen Kubernetes launches a container, there are several mechanisms available to ensure complete deployments:\n- When a primary container process fails, it is destroyed and rebooted.\n- When liveness checks fail for the container deployment, it is destroyed and rebooted.\n- If a readiness check fails at any point after the deployment, the container is destroyed and rebooted.\n- Kubernetes has the ability to roll back a deployment configuration to a previous state if a deployment fails.\n- Traffic fails over to a working replica if any of the previous problems are encountered.\n\nThe system kernel is responsible for memory, disk, and task management. The kernel provides a gateway between the system hardware and software. Kubernetes requires kernel access to allocate resources to the Control Plane. Threat actors that penetrate the system kernel can inject malicious code or hijack the Kubernetes architecture. It is vital to implement protections through Kubernetes components to reduce the attack surface.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-001084",
      "ruleFixText": "Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:\n\nkubelet-arg:\n --protect-kernel-defaults=true\n\nIf configuration files are updated on a host, restart the RKE2 Service. \nRun the command \"systemctl restart rke2-server\" for server hosts and \"systemctl restart rke2-agent\" for agent hosts.",
      "ruleFixId": "F-58002r1016536_fix",
      "ruleCheckSystem": "C-58053r1016535_chk",
      "ruleCheckContent": "Ensure protect-kernel-defaults argument is set correctly.\n\nRun this command on each node:\n/bin/ps -ef | grep kubelet | grep -v grep\n\nIf --protect-kernel-defaults is not set to \"true\", missing or is not configured, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2725,
      "benchmarkId": 39,
      "groupId": "V-254564",
      "title": "SRG-APP-000133-CTR-000300",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254564r1016531_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000520",
      "ruleTitle": "Configuration and authentication files for Rancher RKE2 must be protected.",
      "ruleVulnDiscussion": "There are various configuration files, logs, access credentials, and other files stored on the host filesystem that contain sensitive information. \n\nThese files could potentially put at risk, along with other specific workloads and components:\n- API server.\n- proxy.\n- scheduler.\n- controller.\n- etcd.\n- Kubernetes administrator account information.\n- audit log access, modification, and deletion.\n- application access, modification, and deletion.\n- container runtime files.\n\nIf an attacker can gain access to these files, changes can be made to open vulnerabilities and bypass user authorizations inherent within Kubernetes with RBAC implemented. It is crucial to ensure user permissions are enforced down through to the operating system. Protecting file permissions will ensure that if a nonprivileged user gains access to the system they will still not be able to access protected information from the cluster API, cluster configuration, and sensitive cluster information. This control relies on the underlying operating system also having been properly configured to allow only least privileged access to perform required operations.\n\nSatisfies: SRG-APP-000133-CTR-000300, SRG-APP-000133-CTR-000295, SRG-APP-000133-CTR-000305, SRG-APP-000133-CTR-000310",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-001499",
      "ruleFixText": "File system permissions:\n1. Fix permissions of the files in /etc/rancher/rke2:\ncd /etc/rancher/rke2\nchmod 0600 ./*\nchown root:root ./*\nls -l\n\n2. Fix permissions of the files in /var/lib/rancher/rke2:\ncd /var/lib/rancher/rke2\nchown root:root ./*\nls -l\n\n3. Fix permissions of the files and directories in /var/lib/rancher/rke2/agent:\ncd /var/lib/rancher/rke2/agent\nchown root:root ./*\nchmod 0700 pod-manifests\nchmod 0700 etc\nfind . -maxdepth 1 -type f -name \"*.kubeconfig\" -exec chmod 0640 {} \\;\nfind . -maxdepth 1 -type f -name \"*.crt\" -exec chmod 0600 {} \\;\nfind . -maxdepth 1 -type f -name \"*.key\" -exec chmod 0600 {} \\;\nls -l\n\n4. Fix permissions of the files in /var/lib/rancher/rke2/bin:\ncd /var/lib/rancher/rke2/agent/bin\nchown root:root ./*\nchmod 0750 ./*\nls -l\n\n5. Fix permissions directory of /var/lib/rancher/rke2/data:\ncd /var/lib/rancher/rke2/agent\nchown root:root data\nchmod 0750 data\nls -l\n\n6. Fix permissions of files in /var/lib/rancher/rke2/data:\ncd /var/lib/rancher/rke2/data\nchown root:root ./*\nchmod 0640 ./*\nls -l\n\n7. Fix permissions in /var/lib/rancher/rke2/server:\ncd /var/lib/rancher/rke2/server\nchown root:root ./*\nchmod 0700 cred\nchmod 0700 db\nchmod 0700 tls\nchmod 0750 manifests\nchmod 0750 logs\nchmod 0600 token\nls -l\n\nEdit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:\n\nwrite-kubeconfig-mode: \"0600\"\n\nOnce the configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-57997r1016530_fix",
      "ruleCheckSystem": "C-58048r1016529_chk",
      "ruleCheckContent": "File system permissions:\n1. Ensure correct permissions of the files in /etc/rancher/rke2\ncd /etc/rancher/rke2\nls -l\n\nall owners are root:root\nall permissions are 0600\n\n2. Ensure correct permissions of the files in /var/lib/rancher/rke2\ncd /var/lib/rancher/rke2\nls -l \n\nall owners are root:root\n\n3. Ensure correct permissions of the files and directories in /var/lib/rancher/rke2/agent\ncd /var/lib/rancher/rke2/agent\nls -l\n\nowners and group are root:root\n\nFile permissions set to 0640 for the following:\nrke2controller.kubeconfig\nkubelet.kubeconfig\nkubeproxy.kubeconfig\n\nCertificate file permissions set to 0600\nclient-ca.crt\nclient-kubelet.crt\nclient-kube-proxy.crt\nclient-rke2-controller.crt\nserver-ca.crt\nserving-kubelet.crt\n\nKey file permissions set to 0600\nclient-kubelet.key\nserving-kubelet.key\nclient-rke2-controller.key\nclient-kube-proxy.key\n\nThe directory permissions to 0700 \npod-manifests\netc \n\n4. Ensure correct permissions of the files in /var/lib/rancher/rke2/bin\ncd /var/lib/rancher/rke2/bin\nls -l\n\nall owners are root:root\nall files are 0750\n\n5. Ensure correct permissions of the directory /var/lib/rancher/rke2/data\ncd /var/lib/rancher/rke2\nls -l\n\nall owners are root:root\npermissions are 0750\n\n6. Ensure correct permissions of each file in /var/lib/rancher/rke2/data \ncd /var/lib/rancher/rke2/data\nls -l\n\nall owners are root:root\nall files are 0640\n\n7. Ensure correct permissions of /var/lib/rancher/rke2/server\ncd /var/lib/rancher/rke2/server\nls -l \n\nall owners are root:root\n\nThe following directories are set to 0700\ncred\ndb\ntls \n\nThe following directories are set to 0750\nmanifests \nlogs \n\nThe following file is set to 0600\ntoken \n\n8. 
Ensure the RKE2 Server configuration file on all RKE2 Server hosts contain the following:\n(cat /etc/rancher/rke2/config.yaml)\nwrite-kubeconfig-mode: \"0600\"\n\nIf any of the permissions specified above do not match the required level, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2726,
      "benchmarkId": 39,
      "groupId": "V-254565",
      "title": "SRG-APP-000141-CTR-000315",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254565r960963_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000550",
      "ruleTitle": "Rancher RKE2 must be configured with only essential configurations.",
      "ruleVulnDiscussion": "It is important to disable any unnecessary components to reduce any potential attack surfaces. \n\nRKE2 allows disabling the following components:\n- rke2-canal\n- rke2-coredns\n- rke2-ingress-nginx\n- rke2-kube-proxy\n- rke2-metrics-server\n\nIf utilizing any of these components presents a security risk, or if any of the components are not required then they can be disabled by using the \"disable\" flag.\n\nIf any of the components are not required, they can be disabled by using the \"disable\" flag.\n\nSatisfies: SRG-APP-000141-CTR-000315, SRG-APP-000384-CTR-000915",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000381",
      "ruleFixText": "Disable unnecessary RKE2 components.\n\nEdit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains a \"disable\" flag if any default RKE2 components are unnecessary. \n\nExample:\ndisable: rke2-canal\ndisable: rke2-coredns\ndisable: rke2-ingress-nginx\ndisable: rke2-kube-proxy\ndisable: rke2-metrics-server\n\nOnce the configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-57998r918243_fix",
      "ruleCheckSystem": "C-58049r918242_chk",
      "ruleCheckContent": "Ensure the RKE2 Server configuration file on all RKE2 Server hosts contains a \"disable\" flag only if there are default RKE2 components that need to be disabled. \n\nIf there are no default components that need to be disabled, this is not a finding.\n\nRun this command on the RKE2 Control Plane:\ncat /etc/rancher/rke2/config.yaml\n\nRKE2 allows disabling the following components. If any of the components are not required, they can be disabled:\n- rke2-canal\n- rke2-coredns\n- rke2-ingress-nginx\n- rke2-kube-proxy\n- rke2-metrics-server\n\nIf services not in use are enabled, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2727,
      "benchmarkId": 39,
      "groupId": "V-254566",
      "title": "SRG-APP-000142-CTR-000325",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254566r1050657_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000580",
      "ruleTitle": "Rancher RKE2 runtime must enforce ports, protocols, and services that adhere to the PPSM CAL.",
      "ruleVulnDiscussion": "Ports, protocols, and services within the RKE2 runtime must be controlled and conform to the PPSM CAL. Those ports, protocols, and services that fall outside the PPSM CAL must be blocked by the runtime. Instructions on the PPSM can be found in DOD Instruction 8551.01 Policy.\n\nRKE2 sets most ports and services configuration upon initiation; however, these ports can be changed after the fact to noncompliant configurations. It is important to verify core component configurations for compliance.\n\nAPI Server, Scheduler, Controller, ETCD, and User Pods should all be checked to ensure proper PPS configuration.\n\nSatisfies: SRG-APP-000142-CTR-000325, SRG-APP-000142-CTR-000330, SRG-APP-000383-CTR-000910",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-000382",
      "ruleFixText": "Review system documentation and ensure all ports, protocols, and services are properly documented and approved by the ISSO.",
      "ruleFixId": "F-57999r1050560_fix",
      "ruleCheckSystem": "C-58050r1050559_chk",
      "ruleCheckContent": "Check Ports, Protocols, and Services (PPS).\nChange to the /var/lib/rancher/rke2/agent/pod-manifests directory on the Kubernetes RKE2 Control Plane. \nRun the command:\ngrep kube-apiserver.yaml -I -insecure-port\ngrep kube-apiserver.yaml -I -secure-port\ngrep kube-apiserver.yaml -I -etcd-servers *\n\nReview findings against the most recent PPSM CAL:\nhttps://cyber.mil/ppsm/cal/\n\nAny manifest and namespace PPS or services configuration not in compliance with PPSM CAL or otherwise approved by the information system security officer (ISSO) is a finding.\n\nIf there are any ports, protocols, and services in the system documentation not in compliance with the CAL PPSM or otherwise approved by the ISSO, this is a finding. Any PPS not set in the system documentation is a finding.\n\nVerify API Server network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements or otherwise approved by the ISSO is a finding.\n\nReview findings against the most recent PPSM CAL:\nhttps://cyber.mil/ppsm/cal/\n\nRunning these commands individually will show what ports are currently configured to be used by each of the core components. Inspect this output and ensure only proper ports are being used. If any ports not defined as the proper ports are being used, this is a finding.\n\n/var/lib/rancher/rke2/bin/kubectl get po -n kube-system -l component=kube-controller-manager -o=jsonpath=\"{.items[*].spec.containers[*].args}\"\n\n/var/lib/rancher/rke2/bin/kubectl get po -n kube-system -l component=kube-scheduler -o=jsonpath=\"{.items[*].spec.containers[*].args}\"\n\n/var/lib/rancher/rke2/bin/kubectl get po -n kube-system -l component=kube-apiserver -o=jsonpath=\"{.items[*].spec.containers[*].args}\" | grep tls-min-version\n\nVerify user pods:\nUser pods will also need to be inspected to ensure compliance. 
This will need to be on a case-by-case basis.\ncat /var/lib/rancher/rke2/server/db/etcd/config\nIf any ports not defined as the proper ports are being used or otherwise approved by the ISSO, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2728,
      "benchmarkId": 39,
      "groupId": "V-254567",
      "title": "SRG-APP-000171-CTR-000435",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254567r1016559_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000800",
      "ruleTitle": "Rancher RKE2 must store only cryptographic representations of passwords.",
      "ruleVulnDiscussion": "Secrets, such as passwords, keys, tokens, and certificates should not be stored as environment variables. These environment variables are accessible inside RKE2 by the \"Get Pod\" API call, and by any system, such as CI/CD pipeline, which has access to the definition file of the container. Secrets must be mounted from files or stored within password vaults.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-004062",
      "ruleFixText": "Any secrets stored as environment variables must be moved to the secret files with the proper protections and enforcements or placed within a password vault.",
      "ruleFixId": "F-58000r859270_fix",
      "ruleCheckSystem": "C-58051r894460_chk",
      "ruleCheckContent": "On the RKE2 Control Plane, run the following commands:\n\nkubectl get pods -A\nkubectl get jobs -A\nkubectl get cronjobs -A\n\nThis will output all running pods, jobs, and cronjobs. \n\nEvaluate each of the above commands using the respective commands below:\n\nkubectl get pod -n <namespace> <pod> -o yaml\nkubectl get job -n <namespace> <job> -o yaml\nkubectl get cronjob -n <namespace> <cronjob> -o yaml\n\nIf any contain sensitive values as environment variables, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2729,
      "benchmarkId": 39,
      "groupId": "V-254568",
      "title": "SRG-APP-000190-CTR-000500",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254568r1016534_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000890",
      "ruleTitle": "Rancher RKE2 must terminate all network connections associated with a communications session at the end of the session, or as follows: for in-band management sessions (privileged sessions), the session must be terminated after five minutes of inactivity.",
      "ruleVulnDiscussion": "Terminating an idle session within a short time period reduces the window of opportunity for unauthorized personnel to take control of a management session enabled on the console or console port that has been left unattended. In addition, quickly terminating an idle session will also free up resources committed by the managed network element. \n\nTerminating network connections associated with communications sessions includes, for example, de-allocating associated TCP/IP address/port pairs at the operating system level, or de-allocating networking assignments at the application level if multiple application sessions are using a single, operating-system-level network connection. This does not mean that the application terminates all sessions or network access; it only ends the inactive session and releases the resources associated with that session.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-001133",
      "ruleFixText": "Edit the RKE2 Server configuration file on all RKE2 Agent hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:\n\nkubelet-arg:\n- streaming-connection-idle-timeout=5m\n\nIf configuration files are updated on a host, restart the RKE2 Service. \nRun the command \"systemctl restart rke2-server\" for server hosts and \"systemctl restart rke2-agent\" for agent hosts.",
      "ruleFixId": "F-58001r1016533_fix",
      "ruleCheckSystem": "C-58052r1016532_chk",
      "ruleCheckContent": "Ensure streaming-connection-idle-timeout argument is set correctly.\n\nRun this command on each node:\n/bin/ps -ef | grep kubelet | grep -v grep\n\nIf --streaming-connection-idle-timeout is set to < \"5m\", missing or the parameter is not configured, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2731,
      "benchmarkId": 39,
      "groupId": "V-254570",
      "title": "SRG-APP-000243-CTR-000600",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254570r1016539_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000970",
      "ruleTitle": "Rancher RKE2 runtime must maintain separate execution domains for each container by assigning each container a separate address space to prevent unauthorized and unintended information transfer via shared system resources.",
      "ruleVulnDiscussion": "Separating user functionality from management functionality is a requirement for all the components within the Kubernetes Control Plane. Without the separation, users may have access to management functions that can degrade the Kubernetes architecture and the services being offered, and can offer a method to bypass testing and validation of functions before introduced into a production environment.\n\nSatisfies: SRG-APP-000243-CTR-000600, SRG-APP-000431-CTR-001065, SRG-APP-000211-CTR-000530, SRG-APP-000243-CTR-000595",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-001082",
      "ruleFixText": "System namespaces are reserved and isolated.\n\nA resource cannot move to a new namespace; the resource must be deleted and recreated in the new namespace.\n\nkubectl delete <resource_type> <resource_name>\nkubectl create -f <resource.yaml> --namespace=<user_created_namespace>",
      "ruleFixId": "F-58003r940065_fix",
      "ruleCheckSystem": "C-58054r1016538_chk",
      "ruleCheckContent": "System namespaces are reserved and isolated.\n\nTo view the available namespaces, run the command:\nkubectl get namespaces\n\nThe namespaces to be validated include:\ndefault\nkube-public\nkube-system\nkube-node-lease\n\nFor the default namespace, execute the commands:\nkubectl config set-context --current --namespace=default\nkubectl get all\n\nFor the kube-public namespace, execute the commands:\nkubectl config set-context --current --namespace=kube-public\nkubectl get all\n\nFor the kube-node-lease namespace, execute the commands:\nkubectl config set-context --current --namespace=kube-node-lease\nkubectl get all\n\nThe only return values are the Kubernetes service objects (e.g., service/kubernetes).\n\nFor the kube-system namespace, execute the commands:\nkubectl config set-context --current --namespace=kube-system\nkubectl get all\n\nThe values returned include the following resources:\n- ETCD\n- Helm\n- Kubernetes API Server\n- Kubernetes Controller Manager\n- Kubernetes Proxy\n- Kubernetes Scheduler\n- Kubernetes Networking Components\n- Ingress Controller Components\n- Metrics Server\n\nIf a return value from the \"kubectl get all\" command is not the Kubernetes service, one from the above lists, or a service otherwise approved by your Information Systems Security Officer (ISSO), this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2732,
      "benchmarkId": 39,
      "groupId": "V-254571",
      "title": "SRG-APP-000340-CTR-000770",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254571r961353_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-001130",
      "ruleTitle": "Rancher RKE2 must prevent nonprivileged users from executing privileged functions to include disabling, circumventing, or altering implemented security safeguards/countermeasures.",
      "ruleVulnDiscussion": "Admission controllers intercept requests to the Kubernetes API before an object is instantiated. Enabling the admissions webhook allows for Kubernetes to apply policies against objects that are to be created, read, updated or deleted.\n\nAdmissions controllers can be used for:\n- Prevent pod’s ability to run privileged containers\n- Prevent pod’s ability to use privileged escalation\n- Controlling pod’s access to volume types\n- Controlling pod’s access to host file system\n- Controlling pod’s usage of host networking objects and configuration\n\nSatisfies: SRG-APP-000340-CTR-000770, SRG-APP-000342-CTR-000775",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-002233",
      "ruleFixText": "If using RKE2 v1.24 or older:\n\nOn each Control Plane node, create the following policy to a file called restricted.yml.\n\napiVersion: policy/v1beta1\nkind: PodSecurityPolicy\nmetadata:\nname: restricted\nannotations:\nseccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'\napparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'\nseccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'\napparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'\nspec:\nprivileged: false\n#Required to prevent escalations to root.\nallowPrivilegeEscalation: false\n#This is redundant with non-root + disallow privilege escalation,\n# but we can provide it for defense in depth.\nrequiredDropCapabilities:\n- ALL\n# Allow core volume types.\nvolumes:\n- 'configMap'\n- 'emptyDir'\n- 'projected'\n- 'secret'\n- 'downwardAPI'\n# Assume that persistentVolumes set up by the cluster admin are safe to use.\n- 'persistentVolumeClaim'\nhostNetwork: false\nhostIPC: false\nhostPID: false\nrunAsUser:\n# Require the container to run without root privileges.\nrule: 'MustRunAsNonRoot'\nseLinux:\n# This policy assumes the nodes are using AppArmor rather than SELinux.\nrule: 'RunAsAny'\nsupplementalGroups:\nrule: 'MustRunAs'\nranges:\n# Forbid adding the root group.\n- min: 1\nmax: 65535\nfsGroup:\nrule: 'MustRunAs'\nranges:\n# Forbid adding the root group.\n- min: 1\nmax: 65535\nreadOnlyRootFilesystem: false\n\nTo implement the policy, run the command:\n\nkubectl create -f restricted.yml\"\n\nIf using RKE v1.25 or newer:\n\nOn each Control Plane node, create the file \"/etc/rancher/rke2/rke2-pss.yaml\" and add the following content:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n  configuration:\n    apiVersion: pod-security.admission.config.k8s.io/v1beta1\n    kind: PodSecurityConfiguration\n    defaults:\n      enforce: 
\"restricted\"\n      enforce-version: \"latest\"\n      audit: \"restricted\"\n      audit-version: \"latest\"\n      warn: \"restricted\"\n      warn-version: \"latest\"\n    exemptions:\n      usernames: []\n      runtimeClasses: []\n      namespaces: [kube-system, cis-operator-system, tigera-operator]\n\nEnsure the namespace exemptions contain only namespaces requiring access to capabilities outside of the restricted settings above.\n\nOnce the file is created, restart the Control Plane nodes with:\n\nsystemctl restart rke2-server",
      "ruleFixId": "F-58004r918246_fix",
      "ruleCheckSystem": "C-58055r918245_chk",
      "ruleCheckContent": "If using RKE2 v1.24 or older:\n\nOn the Server Node, run the command:\n\nkubectl get podsecuritypolicy\n\nFor any pod security policies listed, with the exception of system-unrestricted-psp (which is required for core Kubernetes functionality), edit the policy with the command:\n\nkubectl edit podsecuritypolicy policyname\nWhere policyname is the name of the policy\n\nReview the runAsUser, supplementalGroups, and fsGroup sections of the policy.\n\nIf any of these sections are missing, this is a finding.\n\nIf the rule within the runAsUser section is not set to \"MustRunAsNonRoot\", this is a finding.\n\nIf the ranges within the supplementalGroups section has min set to \"0\" or min is missing, this is a finding.\n\nIf the ranges within the fsGroup section have a min set to \"0\" or the min is missing, this is a finding.\n\nIf using RKE2 v1.25 or newer:\n\nOn each controlplane node, validate that the file \"/etc/rancher/rke2/rke2-pss.yaml\" exists and the default configuration settings match the following:\n\n    defaults:\n      audit: restricted\n      audit-version: latest\n      enforce: restricted\n      enforce-version: latest\n      warn: restricted\n      warn-version: latest\n\nIf the configuration file differs from the above, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2733,
      "benchmarkId": 39,
      "groupId": "V-254572",
      "title": "SRG-APP-000378-CTR-000880",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254572r1016560_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-001270",
      "ruleTitle": "Rancher RKE2 must prohibit the installation of patches, updates, and instantiation of container images without explicit privileged status.",
      "ruleVulnDiscussion": "Controlling access to those users and roles responsible for patching and updating RKE2 reduces the risk of untested or potentially malicious software from being installed within the platform. This access may be separate from the access required to install container images into the registry and those access requirements required to instantiate an image into a service. Explicit privileges (escalated or administrative privileges) provide the regular user with explicit capabilities and control that exceeds the rights of a regular user.\n\nKubernetes uses the API Server to control communication to the other services that makeup Kubernetes. The use of authorizations and not the default of \"AlwaysAllow\" enables the Kubernetes functions control to only the groups that need them.\n\nTo control access, the API server must have one of the following options set for the authorization mode:\n    --authorization-mode=ABAC Attribute-Based Access Control (ABAC) mode allows a user to configure policies using local files.\n    --authorization-mode=RBAC Role-based access control (RBAC) mode allows a user to create and store policies using the Kubernetes API.\n    --authorization-mode=Webhook\nWebHook is an HTTP callback mode that allows a user to manage authorization using a remote REST endpoint.\n    --authorization-mode=Node \nNode authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.\n    --authorization-mode=AlwaysDeny \nThis flag blocks all requests. Use this flag only for testing.\n\nSatisfies: SRG-APP-000378-CTR-000880, SRG-APP-000378-CTR-000885",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-003980",
      "ruleFixText": "Edit the RKE2 Server configuration file on all RKE2 Control Plane hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:\n\n kube-apiserver-arg:\n--authorization-mode=RBAC,Node\n\nOnce configuration file is updated, restart the RKE2 Server. Run the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-58005r1016541_fix",
      "ruleCheckSystem": "C-58056r1016540_chk",
      "ruleCheckContent": "Ensure authorization-mode is set correctly in the apiserver.\n\nRun this command on all RKE2 Control Plane hosts:\n/bin/ps -ef | grep kube-apiserver | grep -v grep\n\nIf  --authorization-mode is not set to \"RBAC,Node\" or is not configured, this is a finding.\n(By default, RKE2 sets Node,RBAC as the parameter to the --authorization-mode argument.)",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2734,
      "benchmarkId": 39,
      "groupId": "V-254573",
      "title": "SRG-APP-000429-CTR-001060",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254573r1050650_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "high",
      "ruleVersion": "CNTR-R2-001500",
      "ruleTitle": "Rancher RKE2 keystore must implement encryption to prevent unauthorized disclosure of information at rest within Rancher RKE2.",
      "ruleVulnDiscussion": "Encrypting secrets at rest in etcd.\n\nBy default, RKE2 will create an encryption key and configuration file and pass these to the Kubernetes API server. The result is that RKE2 automatically encrypts Kubernetes Secret objects when writing them to etcd.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-002476",
      "ruleFixText": "This is Not Applicable for RKE2 versions 1.20 and greater.\n\nEnable secrets encryption.\n\nEdit the RKE2 configuration file on all RKE2 servers, located at /etc/rancher/rke2/config.yaml, so that it contains:\n\nsecrets-encryption: true",
      "ruleFixId": "F-58006r1016544_fix",
      "ruleCheckSystem": "C-58057r1016543_chk",
      "ruleCheckContent": "This is Not Applicable for RKE2 versions 1.20 and greater.\n\nReview the encryption configuration file.\n\nAs root or with root permissions, run the following command:\nview /var/lib/rancher/rke2/server/cred/encryption-config.json\n\nEnsure the RKE2 configuration file on all RKE2 servers, located at /etc/rancher/rke2/config.yaml, does NOT contain:\n\nsecrets-encryption: false\n\nIf secrets encryption is turned off, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2735,
      "benchmarkId": 39,
      "groupId": "V-254574",
      "title": "SRG-APP-000454-CTR-001110",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254574r961677_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-001580",
      "ruleTitle": "Rancher RKE2 must remove old components after updated versions have been installed.",
      "ruleVulnDiscussion": "Previous versions of Rancher RKE2 components that are not removed after updates have been installed may be exploited by adversaries by causing older components to execute which contain vulnerabilities. When these components are deleted, the likelihood of this happening is removed.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-002617",
      "ruleFixText": "Remove any old pods that are using older images. On the RKE2 Control Plane, run the command:\n\nkubectl delete pod podname\n(Note: \"podname\" is the name of the pod to delete.)\n\nRun the command:\nsystemctl restart rke2-server",
      "ruleFixId": "F-58007r859291_fix",
      "ruleCheckSystem": "C-58058r859290_chk",
      "ruleCheckContent": "To view all pods and the images used to create the pods, from the RKE2 Control Plane, run the following command:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{..image}\" | \\\ntr -s '[[:space:]]' '\\n' | \\\nsort | \\\nuniq -c\n\nReview the images used for pods running within Kubernetes.\nIf there are multiple versions of the same image, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2736,
      "benchmarkId": 39,
      "groupId": "V-254575",
      "title": "SRG-APP-000456-CTR-001125",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-254575r961683_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-001620",
      "ruleTitle": "Rancher RKE2 registry must contain the latest images with most recent updates and execute within Rancher RKE2 runtime as authorized by IAVM, CTOs, DTMs, and STIGs.",
      "ruleVulnDiscussion": "Software supporting RKE2, images in the registry must stay up to date with the latest patches, service packs, and hot fixes. Not updating RKE2 and container images will expose the organization to vulnerabilities.\n\nFlaws discovered during security assessments, continuous monitoring, incident response activities, or information system error handling must also be addressed expeditiously.\n\nOrganization-defined time periods for updating security-relevant container platform components may vary based on a variety of factors including, for example, the security category of the information system or the criticality of the update (i.e., severity of the vulnerability related to the discovered flaw).\n\nThis requirement will apply to software patch management solutions used to install patches across the enclave and to applications themselves that are not part of that patch management solution. For example, many browsers today provide the capability to install their own patch software. Patch criticality, as well as system criticality will vary. Therefore, the tactical situations regarding the patch management process will also vary. This means that the time period utilized must be a configurable parameter. Time frames for application of security-relevant software updates may be dependent upon the Information Assurance Vulnerability Management (IAVM) process.\n\nRKE2 components will be configured to check for and install security-relevant software updates within an identified time period from the availability of the update. RKE2 registry will ensure the images are current. The specific time period will be defined by an authoritative source (e.g., IAVM, CTOs, DTMs, and STIGs).",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-002605",
      "ruleFixText": "Upgrade RKE2 to the supported version. Institute and adhere to the policies and procedures to ensure that patches are consistently applied within the time allowed.",
      "ruleFixId": "F-58008r859294_fix",
      "ruleCheckSystem": "C-58059r918250_chk",
      "ruleCheckContent": "Authenticate on the RKE2 Control Plane.\n\nVerify all nodes in the cluster are running a supported version of RKE2 Kubernetes. \nRun command:\n\nkubectl get nodes\n\nIf any nodes are running an unsupported version of RKE2 Kubernetes, this is a finding.\n\nVerify all images running in the cluster are patched to the latest version.\nRun command:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec.containers[*].image}\" | tr -s '[[:space:]]' '\\n' | sort | uniq -c\n\nIf any images running in the cluster are not the latest version, this is a finding.\n\nNote: Kubernetes release support levels can be found at: https://kubernetes.io/releases/",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    },
    {
      "id": 2737,
      "benchmarkId": 39,
      "groupId": "V-268321",
      "title": "SRG-APP-000131-CTR-000285",
      "description": "<GroupDescription></GroupDescription>",
      "ruleId": "SV-268321r1017019_rule",
      "ruleWeight": "10.0",
      "ruleSeverity": "medium",
      "ruleVersion": "CNTR-R2-000460",
      "ruleTitle": "Rancher RKE2 must be built from verified packages.",
      "ruleVulnDiscussion": "Only RKE2 images that have been properly signed by Rancher Government's authorized key will be deployed to ensure the cluster's security and compliance with organizational policies.",
      "ruleFalsePositives": "",
      "ruleFalseNegatives": "",
      "ruleDocumentable": "false",
      "ruleMitigations": "",
      "ruleIdent": "CCI-001749",
      "ruleFixText": "Immediate action must be taken to remove non-verifiable images from the cluster and replace them with verifiable images. \n\nUtilize Hauler (https://hauler.dev) to pull and verify RKE2 images from Rancher Government Solutions Carbide Repository.\n\nFor more information about pulling Carbide images and their signatures, including RKE2, see: \nhttps://rancherfederal.github.io/carbide-docs/docs/registry-docs/downloading-images",
      "ruleFixId": "F-72245r1017018_fix",
      "ruleCheckSystem": "C-72342r1017017_chk",
      "ruleCheckContent": "Utilizing Hauler (https://hauler.dev), ensure all RKE2 Kubernetes Container images running in the RKE2 cluster have been obtained and their signatures have been validated and signed by Rancher Government Solutions Private Key. \nFor reference, the public key is available at: \nhttps://raw.githubusercontent.com/rancherfederal/carbide-releases/main/carbide-key.pub\n\nFor more information about verifying the signatures of Carbide images, including RKE2, see: \nhttps://rancherfederal.github.io/carbide-docs/docs/registry-docs/validating-images\n\nIf any RKE2 images are identified as not being signed by the Rancher Government Solutions' private key, this is a finding.",
      "createdAt": "2025-10-21T11:13:22.126Z",
      "updatedAt": "2025-10-21T11:13:22.126Z"
    }
  ],
  "profiles": [
    {
      "id": 284,
      "benchmarkId": 39,
      "profileId": "MAC-1_Classified",
      "title": "I - Mission Critical Classified",
      "description": "<ProfileDescription></ProfileDescription>",
      "createdAt": "2025-10-21T11:13:22.279Z",
      "updatedAt": "2025-10-21T11:13:22.279Z"
    },
    {
      "id": 285,
      "benchmarkId": 39,
      "profileId": "MAC-1_Public",
      "title": "I - Mission Critical Public",
      "description": "<ProfileDescription></ProfileDescription>",
      "createdAt": "2025-10-21T11:13:22.279Z",
      "updatedAt": "2025-10-21T11:13:22.279Z"
    },
    {
      "id": 286,
      "benchmarkId": 39,
      "profileId": "MAC-1_Sensitive",
      "title": "I - Mission Critical Sensitive",
      "description": "<ProfileDescription></ProfileDescription>",
      "createdAt": "2025-10-21T11:13:22.279Z",
      "updatedAt": "2025-10-21T11:13:22.279Z"
    },
    {
      "id": 287,
      "benchmarkId": 39,
      "profileId": "MAC-2_Classified",
      "title": "II - Mission Support Classified",
      "description": "<ProfileDescription></ProfileDescription>",
      "createdAt": "2025-10-21T11:13:22.279Z",
      "updatedAt": "2025-10-21T11:13:22.279Z"
    },
    {
      "id": 288,
      "benchmarkId": 39,
      "profileId": "MAC-2_Public",
      "title": "II - Mission Support Public",
      "description": "<ProfileDescription></ProfileDescription>",
      "createdAt": "2025-10-21T11:13:22.279Z",
      "updatedAt": "2025-10-21T11:13:22.279Z"
    },
    {
      "id": 289,
      "benchmarkId": 39,
      "profileId": "MAC-2_Sensitive",
      "title": "II - Mission Support Sensitive",
      "description": "<ProfileDescription></ProfileDescription>",
      "createdAt": "2025-10-21T11:13:22.279Z",
      "updatedAt": "2025-10-21T11:13:22.279Z"
    },
    {
      "id": 290,
      "benchmarkId": 39,
      "profileId": "MAC-3_Classified",
      "title": "III - Administrative Classified",
      "description": "<ProfileDescription></ProfileDescription>",
      "createdAt": "2025-10-21T11:13:22.279Z",
      "updatedAt": "2025-10-21T11:13:22.279Z"
    },
    {
      "id": 291,
      "benchmarkId": 39,
      "profileId": "MAC-3_Public",
      "title": "III - Administrative Public",
      "description": "<ProfileDescription></ProfileDescription>",
      "createdAt": "2025-10-21T11:13:22.279Z",
      "updatedAt": "2025-10-21T11:13:22.279Z"
    },
    {
      "id": 292,
      "benchmarkId": 39,
      "profileId": "MAC-3_Sensitive",
      "title": "III - Administrative Sensitive",
      "description": "<ProfileDescription></ProfileDescription>",
      "createdAt": "2025-10-21T11:13:22.279Z",
      "updatedAt": "2025-10-21T11:13:22.279Z"
    }
  ]
}