Disclaimer
Any direct or indirect consequences or losses arising from the dissemination or use of the information provided by this public account, 琴音安全, are the sole responsibility of the user; the account and its authors assume no liability. If anything here infringes your rights, please let us know and we will remove it immediately, with our apologies.
0x01 Preface
Following up on the previous article on chaining xray with rad, this one chains several scanners together. The pairings are essentially: AWVS + Burp, which lets you inspect every request in Burp's HTTP history; AWVS + xray, which pairs AWVS's excellent crawler (it handles form analysis and single-page applications) with xray's powerful scanning engine; and xray + Burp, which forwards Burp's traffic to xray, so that simply visiting a page gets it scanned. Each pairing has its strengths; chaining all three lets them cover each other's weaknesses.
0x02 Setup Steps
Burp Setup
Configure Burp's proxy listener; AWVS scan traffic will be forwarded to Burp.
Then add an upstream proxy layer that forwards the traffic on to xray.
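With both layers in place, one way to confirm the chain works end to end is to push a single request through Burp's listener: it should appear in Burp's HTTP history and trigger a passive scan in xray. A minimal smoke-test sketch, not from the original article (it assumes Burp listens on 127.0.0.1:8080, as in the batch script below, and uses testphp.vulnweb.com as a throwaway target):

```python
# Hypothetical smoke test: send one request through Burp, which relays it upstream to xray.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

proxies = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}
r = requests.get("http://testphp.vulnweb.com/", proxies=proxies, verify=False, timeout=10)
print(r.status_code)  # a response here means Burp accepted and relayed the request
```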
xray Setup
Start xray with its listener on the same IP and port you configured for Burp's upstream proxy in the previous step:
xray_windows_amd64.exe webscan --listen 127.0.0.1:1664 --html-output shy.html
AWVS Setup
Start AWVS, add the scan target, set its proxy (pointing at Burp's listener), and customize the request headers as needed.
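Via the API, these GUI settings map onto the same target-configuration fields the batch script below PATCHes to /api/v1/targets/&lt;target_id&gt;/configuration. A sketch of just the relevant fragment (the header value here is a made-up illustration):

```python
# Fragment of an AWVS target configuration matching the GUI steps above:
# "proxy" points AWVS at Burp's listener; "custom_headers" holds any headers set in the UI.
target_config_fragment = {
    "proxy": {"enabled": True, "protocol": "http", "address": "127.0.0.1", "port": 8080},
    "custom_headers": ["X-From: awvs"],  # hypothetical example header
}
```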
Results
Burp (screenshot)
xray (screenshot)
AWVS (screenshot)
Extension
Batch-add targets to AWVS to automate the whole bug-hunting workflow.
A few points to note:
- Replace apikey with your own; one can be generated in the AWVS web interface.
- Point the open() call (line 58 of the original script) at your url.txt containing the targets to batch-add.
- Change awvs_url to your own AWVS address and port.
```python
# coding=utf-8
import requests
import json
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

apikey = '1986ad8c0a5b3df4d7028d5f3c06e936c9ac1de7079174939808751e8af186837'  # AWVS API key
headers = {'Content-Type': 'application/json', "X-Auth": apikey}


def addTask(url, target):
    """Add a single target to AWVS and return its target_id."""
    try:
        url = ''.join((url, '/api/v1/targets/add'))
        data = {"targets": [{"address": target, "description": ""}], "groups": []}
        r = requests.post(url, headers=headers, data=json.dumps(data), timeout=30, verify=False)
        result = json.loads(r.content.decode())
        return result['targets'][0]['target_id']
    except Exception as e:
        return e


def scan(url, target, Crawl, user_agent, profile_id, proxy_address, proxy_port):
    """Create the target, push its configuration, then start a scan."""
    scanUrl = ''.join((url, '/api/v1/scans'))
    target_id = addTask(url, target)
    if target_id:
        data = {"target_id": target_id, "profile_id": profile_id, "incremental": False,
                "schedule": {"disable": False, "start_date": None, "time_sensitive": False}}
        try:
            configuration(url, target_id, proxy_address, proxy_port, Crawl, user_agent)
            response = requests.post(scanUrl, data=json.dumps(data), headers=headers, timeout=30, verify=False)
            result = json.loads(response.content)
            return result['target_id']
        except Exception as e:
            print(e)


def configuration(url, target_id, proxy_address, proxy_port, Crawl, user_agent):
    """PATCH the target configuration: UA header plus (when Crawl is True) the Burp proxy."""
    configuration_url = ''.join((url, '/api/v1/targets/{0}/configuration'.format(target_id)))
    data = {"scan_speed": "fast", "login": {"kind": "none"}, "ssh_credentials": {"kind": "none"},
            "sensor": False, "user_agent": user_agent, "case_sensitive": "auto",
            "limit_crawler_scope": True, "excluded_paths": [], "authentication": {"enabled": False},
            "proxy": {"enabled": Crawl, "protocol": "http", "address": proxy_address, "port": proxy_port},
            "technologies": [], "custom_headers": [], "custom_cookies": [], "debug": False,
            "client_certificate_password": "", "issue_tracker_id": "", "excluded_hours_id": ""}
    requests.patch(url=configuration_url, data=json.dumps(data), headers=headers, timeout=30, verify=False)


def main():
    Crawl = True
    proxy_address = '127.0.0.1'  # Burp listener address
    proxy_port = '8080'          # Burp listener port
    awvs_url = 'https://127.0.0.1:3443'  # AWVS address and port
    with open(r'C:\x\x\url.txt', 'r', encoding='utf-8') as f:  # path to the URL list
        targets = f.readlines()
    profile_id = "11111111-1111-1111-1111-111111111111"  # Full Scan profile
    user_agent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.21 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.21"  # default UA for scans
    if Crawl:
        profile_id = "11111111-1111-1111-1111-111111111117"  # Crawl Only: AWVS just crawls and proxies traffic out
    for target in targets:
        target = target.strip()
        if scan(awvs_url, target, Crawl, user_agent, profile_id, proxy_address, int(proxy_port)):
            print("{0} added successfully".format(target))


if __name__ == '__main__':
    main()
```
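url.txt is plain text with one target URL per line. Before pointing the script at a long list, it can help to verify the API key and address work. A minimal pre-flight sketch (assuming the same awvs_url and apikey as above) that simply lists existing targets via GET /api/v1/targets:

```python
# Optional pre-flight check, not part of the original script:
# list existing targets to confirm the API key and AWVS address are valid.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

awvs_url = 'https://127.0.0.1:3443'
headers = {'Content-Type': 'application/json', 'X-Auth': 'REPLACE_WITH_YOUR_APIKEY'}

r = requests.get(awvs_url + '/api/v1/targets', headers=headers, timeout=30, verify=False)
print(r.status_code, len(r.json().get('targets', [])))  # 200 plus a target count means the key works
```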
Tool Download
Link: https://pan.baidu.com/s/1lZHsZJnCjKKpzkZ7mVhiUA
Extraction code: 0000
Originally published by the WeChat public account 琴音安全: 三联套娃AWVS+Burp+XRAY,躺平刷洞(附批量脚本)