vulnhuntr is a security vulnerability scanning and analysis tool built on large language models (LLMs) and static code analysis; it is arguably the world's first vulnerability scanner with autonomous AI capabilities.
It currently detects the following vulnerability classes:
1. Local File Inclusion (LFI)
2. Arbitrary File Overwrite (AFO)
3. Remote Code Execution (RCE)
4. Cross-Site Scripting (XSS)
5. SQL Injection (SQLI)
6. Server-Side Request Forgery (SSRF)
7. Insecure Direct Object Reference (IDOR)
Requires Python v3.10.
Install with Docker
docker build -t vulnhuntr https://github.com/protectai/vulnhuntr.git#main
Install with pipx
pipx install git+https://github.com/protectai/vulnhuntr.git --python python3.10
Install from source
git clone https://github.com/protectai/vulnhuntr
cd vulnhuntr && poetry install
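After poetry install completes, the CLI can be run from the Poetry-managed environment. A minimal sketch, assuming the project exposes the vulnhuntr entry point shown in the usage output below:
poetry run vulnhuntr -h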
usage: vulnhuntr [-h] -r ROOT [-a ANALYZE] [-l {claude,gpt,ollama}] [-v]
Analyze a GitHub project for vulnerabilities. Export your ANTHROPIC_API_KEY/OPENAI_API_KEY before running.
options:
-h, --help            show the help message and exit
-r ROOT, --root ROOT  path to the project root directory
-a ANALYZE, --analyze ANALYZE
                      specific path or file within the project to analyze
-l {claude,gpt,ollama}, --llm {claude,gpt,ollama}
                      LLM client to use (default: claude)
-v, --verbosity       increase output verbosity (-v for INFO, -vv for DEBUG)
Analyze an entire repository with Claude:
export ANTHROPIC_API_KEY="sk-1234"
vulnhuntr -r /path/to/target/repo/
Analyze a specific file with GPT:
export OPENAI_API_KEY="sk-1234"
vulnhuntr -r /path/to/target/repo/ -a server.py -l gpt
Run the analysis from the Docker image, mounting the target repository into the container:
docker run --rm -e ANTHROPIC_API_KEY=sk-1234 -e ANTHROPIC_BASE_URL=https://localhost:1234/api -v /local/path/to/target/repo:/repo vulnhuntr:latest -r /repo -a repo-subfolder/target-file.py
Sample output
scratchpad:
1. Analyzing the add_llm function in llm_app.py.
2. Identified user input used as keys to access dictionaries: EmbeddingModel, ChatModel, RerankModel, CvModel, and TTSModel.
3. These dictionaries contain class references, which are instantiated with user-supplied parameters.
4. The factory variable is directly used from user input (req['llm_factory']) without proper validation.
5. The llm_name is also derived from user input in most cases.
6. The instantiation of these classes occurs in try-except blocks, but exceptions are only used for error messages, not preventing execution.
7. The api_key and other sensitive information are passed directly to these instantiated classes.
8. There's no comprehensive input validation or sanitization for the factory or llm_name variables.
9. The vulnerability is in a route decorated with @login_required, indicating it's accessible to authenticated users.
10. Reviewing the EmbeddingModel, ChatModel, RerankModel, CvModel, and TTSModel dictionaries to understand their contents and potential for exploitation.
----------------------------------------
analysis:
The add_llm function in llm_app.py contains a critical Remote Code Execution (RCE) vulnerability. The function uses user-supplied input
(req['llm_factory'] and req['llm_name']) to dynamically instantiate classes from the EmbeddingModel, ChatModel, RerankModel, CvModel, and TTSModel
dictionaries. This pattern of using user input as a key to access and instantiate classes is inherently dangerous, as it allows an attacker to
potentially execute arbitrary code. The vulnerability is exacerbated by the lack of comprehensive input validation or sanitization on these
user-supplied values. While there are some checks for specific factory types, they are not exhaustive and can be bypassed. An attacker could potentially
provide a malicious value for 'llm_factory' that, when used as an index to these model dictionaries, results in the execution of arbitrary code. The
vulnerability is particularly severe because it occurs in a route decorated with @login_required, suggesting it's accessible to authenticated users,
which might give a false sense of security.
----------------------------------------
poc:
POST /add_llm HTTP/1.1
Host: target.com
Content-Type: application/json
Authorization: Bearer <valid_token>
{
"llm_factory": "__import__('os').system",
"llm_name": "id",
"model_type": "EMBEDDING",
"api_key": "dummy_key"
}
This payload attempts to exploit the vulnerability by setting 'llm_factory' to a string that, when evaluated, imports the os module and calls system.
The 'llm_name' is set to 'id', which would be executed as a system command if the exploit is successful.
----------------------------------------
confidence_score:
8
----------------------------------------
vulnerability_types:
- RCE
----------------------------------------
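To make the pattern described in the analysis concrete, here is a minimal, self-contained Python sketch of the flagged idiom: user input selects a class out of a dictionary of model classes, and user-supplied values become that class's constructor arguments. The class and field names are illustrative assumptions, not the actual llm_app.py source.

class OpenAIChat:
    def __init__(self, llm_name, api_key):
        self.llm_name, self.api_key = llm_name, api_key

class OllamaChat:
    def __init__(self, llm_name, api_key):
        self.llm_name, self.api_key = llm_name, api_key

# Dictionary mapping a "factory" name to a class reference, analogous to the
# EmbeddingModel/ChatModel/RerankModel/CvModel/TTSModel dictionaries in the report.
ChatModel = {"OpenAI": OpenAIChat, "Ollama": OllamaChat}

def add_llm(req):
    factory = req["llm_factory"]   # attacker-controlled
    llm_name = req["llm_name"]     # attacker-controlled
    # The user-supplied key decides which class is instantiated, and the
    # user-supplied values become its constructor arguments. Without a strict
    # allowlist on `factory`, any callable reachable through this lookup can
    # be invoked with attacker-chosen arguments.
    return ChatModel[factory](llm_name, api_key=req["api_key"])

if __name__ == "__main__":
    print(add_llm({"llm_factory": "OpenAI", "llm_name": "demo-model", "api_key": "dummy"}).llm_name)

The PoC above abuses exactly this lookup-then-instantiate step; how far it can be pushed depends on what the real dictionaries and the constructed objects can reach.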
Originally published by the WeChat public account FreeBuf: "vulnhuntr: a vulnerability scanning and analysis tool based on large language models and static code analysis".