The Taiwan Banker

New cybersecurity thinking acknowledges the limitations of human monitoring

2022.12 The Taiwan Banker NO.156 / By David Stinson

Banker's Digest
Since the pandemic, it has become common knowledge that supply chains for physical goods are an important consideration for strategic planning. That realization is only now setting in for software, where the problem of exponentially growing attack surfaces is even more complex and intractable. A third party can be invoked with the simple keyword "import," potentially with no need to negotiate with a business partner. And unlike goods sitting in a warehouse, the contents of a library may have changed since you last looked at it, and can fail in an immense number of ways. A single successful IT supplier can hope to have customers all over the world, with applications across a range of sensitivity levels.

The US government is just starting to get its head around the vulnerabilities created by such arrangements. In December 2020, it announced that the Treasury and the Department of Commerce had been compromised by a malicious update to Orion, the network performance monitoring software of SolarWinds. The list of victims eventually expanded to at least 200 organizations worldwide, including other hardened government and critical IT targets. The original purpose of the attack remains unknown, but the troves of data exfiltrated over many months of "dwell time" before the intrusion was detected could be used for almost anything.

The SolarWinds attack remains firmly in mind as the US prepares an update of its Cybersecurity Strategy, the first since 2018. That strategy document will likely further operationalize the "zero trust" principle, which has become an important part of cybersecurity practice in recent years. The name is somewhat unfortunate in that it implies eliminating cost-benefit calculations. In reality, zero trust does not mean never using embedded third-party services; rather, it implies a variety of methods to ensure that a single compromise does not spread within a network.
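One common mitigation for the "contents may have changed since you last looked" problem is to pin each dependency to a cryptographic hash, so that any upstream tampering fails the build. A minimal sketch in Python (the library name and contents here are made up for illustration):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Check a downloaded dependency against a pinned hash.

    If the library's contents changed since the hash was recorded,
    verification fails and the build can fail closed.
    """
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical pinned hash, recorded when the library was last reviewed
pinned = hashlib.sha256(b"library-v1.2.3-contents").hexdigest()

assert verify_artifact(b"library-v1.2.3-contents", pinned)  # unchanged upstream
assert not verify_artifact(b"tampered-contents", pinned)    # modified upstream
```

Notably, hash pinning would not have helped against SolarWinds itself, since the malicious code was injected before signing; it guards the distribution channel, not the build system.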
Supply chain vulnerability

The SolarWinds attack – also associated with the names Sunburst, Solorigate, and Nobelium – began when the attackers entered SolarWinds' build system, a key stage for software integrity. That intrusion may have leveraged a previous attack on Microsoft 365, SolarWinds' email provider. Once in the system, the attackers shipped several malicious updates to SolarWinds' customers. In many cases the hackers left the backdoor in place but did not activate it, hoping to avoid detection. In high-value targets of interest, however, they proceeded to attack federated identity systems.

Federated identity management (FIM) was conceived as a solution to the "password apocalypse." The proliferation of passwords that must be remembered on a daily basis is not only inconvenient for users, but also encourages weak security practices, such as reusing passwords across different accounts. With FIM, a user needs only a single password, and the IT department needs to create only a single account. The user account stores the credentials for all outside accounts the user might require access to – similar to the way Google or Facebook allow users to sign into a variety of other platforms, except contained within an organization.

Within the FIM system, the outside party – which could be a cloud service like Amazon Web Services (AWS) – must ensure that a request comes from a legitimate user. To do so, it requests a token from the user's machine, which in turn automatically requests the token from an identity provider that manages credentials for the entire organization. The original machine then passes the token on to the third party. In the SolarWinds attack, the hackers obtained a "golden SAML," allowing accounts to feign approval from the identity provider without ever having contacted it.
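The trust relationship that a golden SAML abuses can be sketched with a toy signed token. Real SAML uses XML assertions signed with the identity provider's token-signing certificate; the HMAC key below stands in for that certificate, and all names are hypothetical:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"idp-token-signing-key"  # stands in for the IdP's signing certificate

def issue_token(key: bytes, claims: dict) -> tuple[bytes, str]:
    """The identity provider signs a set of claims after authenticating the user."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, signature

def service_accepts(key: bytes, payload: bytes, signature: str) -> bool:
    """The cloud service checks only the signature -- it never
    re-contacts the identity provider."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Normal flow: the IdP authenticates the user, then signs the claims.
payload, sig = issue_token(SIGNING_KEY, {"user": "alice", "role": "staff"})
assert service_accepts(SIGNING_KEY, payload, sig)

# Golden SAML: an attacker who stole the signing key can forge any identity
# without ever contacting the IdP -- and the forgery verifies cleanly.
forged, forged_sig = issue_token(SIGNING_KEY, {"user": "ceo", "role": "admin"})
assert service_accepts(SIGNING_KEY, forged, forged_sig)
```

Because verification depends only on the signature, the identity provider never sees the forged requests, which is why it had "no opportunity to notice" them, as described below.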
Thus, the Orion software was allowed to connect to cloud services and could feign any credentials within the target organizations, including those of the CEO and IT administrators. The identity provider, having been bypassed, had no opportunity to notice the unusual nature of such requests.

Zero trust, multiple definitions

The use of IT management software exemplifies a recent trend of hackers "living off the land": making use of existing administrative software in order to blend into the environment for an extended period. The older style of attack tended to add malicious files to a system, but dead disk scans – the older style of anti-virus software, which monitored the hard disk at rest rather than runtime actions – have gotten good enough that once inside a system, hackers must make further efforts to conceal their presence. Advanced attackers must now often use lateral movement to escalate their privileges step by step. This also means they must investigate the network environment carefully in order to discover any monitoring tools and avoid activity that might be flagged to administrators.

The zero trust concept has become mainstream in this technological environment. "The foundational tenet of the Zero Trust Model is that no actor, system, network, or service operating outside or within the security perimeter is trusted. Instead, we must verify anything and everything attempting to establish access," according to the US Department of Defense Zero Trust Reference Architecture. "It is a dramatic paradigm shift in philosophy of how we secure our infrastructure, networks, and data, from verify once at the perimeter to continual verification of each user, device, application, and transaction."

"Could Zero Trust in and of itself have prevented the attack from succeeding?" asked George Finney, CSO of Southern Methodist University, referring to the SolarWinds hack. "Probably not.
However, I am firmly convinced that broader deployment of Zero Trust could have mitigated the impact of the attack by potentially calling attention to it sooner and by limiting its spread." In particular, the Orion software may not have needed permission for internet access in the first place. Once such permission is given, it is unrealistic to expect human administrators, or even coded rulesets, to uncover abnormal behavior, particularly given that the hackers took further steps to obfuscate their communications.

The human factor

In May 2021, the US issued an Executive Order, Improving the Nation's Cybersecurity, calling for the government to move to a zero-trust architecture by 2024. "Incremental improvements will not give us the security we need," said President Joe Biden in the order. "Instead, the Federal Government needs to make bold changes and significant investments in order to defend the vital institutions that underpin the American way of life." This language reflects the current sense of insecurity in the wake of constant attacks on the US, up to and including its most sensitive systems. For hundreds of years, the US has benefitted from a geopolitical isolation that suddenly no longer exists in cyberspace.

Banks do not have quite the same security requirements as either the US or the Taiwanese military, but many of the challenges are common to both types of systems. Banking is now entering an era of open banking APIs and embedded services, which will open up a variety of new vulnerabilities. To protect their systems, banks will need to ensure that exactly the right level of functionality is enabled: an overfeatured architecture is just as harmful as an underfeatured one.

In the end, cybersecurity (as well as, from the offensive perspective, cyber intrusion) always comes down to humans. IT staff must find ways to increase the signal-to-noise ratio during monitoring.
Defensive software is moving from a static focus on the hard drive to more granular system processes. Even so, only by cutting down on extraneous connections and processes, reducing the inherent complexity of modern IT operations, can suspicious activity reliably be discovered in time for intervention. Post-event analysis can reveal the cause of an attack, but it won't highlight all of the dead ends and false starts that would have taken place in real time.
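The "right level of functionality" idea above – deny by default, grant each identity only the access it demonstrably needs, and re-check every request – can be sketched as a per-request policy check. All user, service, and resource names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user: str
    device_compliant: bool  # e.g. patched, managed endpoint
    resource: str

# Explicit allow-list: each (user, resource) pair must be granted.
# Anything not listed -- including internet egress for a monitoring
# tool that does not need it -- is denied by default.
POLICY = {
    ("svc-orion", "internal-metrics"),
    ("alice", "reporting-db"),
}

def authorize(req: Request) -> bool:
    """Evaluate every request on its own merits; being 'inside the
    perimeter' confers no standing trust."""
    if not req.device_compliant:
        return False
    return (req.user, req.resource) in POLICY

assert authorize(Request("svc-orion", True, "internal-metrics"))
assert not authorize(Request("svc-orion", True, "internet-egress"))  # overfeatured access denied
assert not authorize(Request("alice", False, "reporting-db"))        # noncompliant device denied
```

A real zero-trust deployment evaluates far richer signals (device posture, location, behavior), but the structural point is the same: the default answer is no, so a compromised component like Orion cannot quietly acquire capabilities nobody granted it.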