In an ideal world HashiCorp Vault is neither the first nor last line of defense against an adversary. Most organizations have multiple layers of security, starting at the perimeter and continuing throughout their infrastructure to establish defense in depth. But we’ve built and architected Vault to stand against adversaries that have successfully infiltrated that perimeter.
This characteristic of Vault has become incredibly important with the rise of insider threats and supply chain attacks. Recent campaigns like UNC2452 and ShadowHammer have led to some of the largest data breaches in history, resulting in advanced adversaries having prolonged and undetected access to extremely sensitive systems across the U.S. public and private sector.
This is not the first time that Vault and Vault Enterprise have operated in environments subject to a data breach. In my HashiConf 2020 talk on Adversarial Modeling, I highlighted that we are aware of situations where Vault storage backends were exfiltrated and successfully resisted cryptanalysis.
Data encrypted by Vault has been able to stand up to independent attack because of the design of its cryptographic barrier and its encryption at rest — one that uses industry-vetted cryptography and provides options for users to secure their infrastructure pursuant to their threat model.
Vault and Vault Enterprise protect their data independent of their host storage system within a structure known as the Cryptographic Barrier (AKA the “crypto barrier”). This is done for two reasons: to allow Vault operators to not be “locked in” to a particular storage or compute infrastructure, and to ultimately ensure that an adversary compromising the storage infrastructure of Vault does not yield access to Vault’s secrets or its sensitive configuration data.
Put another way: if you steal all of Vault’s data, an independent layer of cryptography that is neither controlled nor operated by the storage system Vault “parks” its data on protects this data even after successful breach and exfiltration of Vault’s storage backend.
In order to start a Vault server and access the data encrypted at rest, a process known as unsealing must occur. Unsealing can take one of two forms:
Manual Unsealing: Using Shamir's Secret Sharing algorithm, an operator assembles a quorum of key shards that are used to generate the key to decrypt data stored behind the cryptographic barrier.
Auto Unsealing: Utilizing a trusted external system, such as an HSM (Hardware Security Module) or a cloud KMS (Key Management System), to store material used to generate the key to decrypt data stored behind the cryptographic barrier.
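The quorum idea behind manual unsealing can be sketched in a few lines. This is a minimal, illustrative implementation of Shamir's Secret Sharing over a prime field — not Vault's actual implementation, which is written in Go and operates byte-wise over GF(2^8) — but it shows the core property: any quorum of shards reconstructs the key, while fewer reveal nothing.

```python
import random

# Illustrative Shamir's Secret Sharing over a prime field (NOT Vault's
# real implementation). A secret is split into n shares such that any
# k of them reconstruct it via polynomial interpolation.

PRIME = 2**127 - 1  # a Mersenne prime large enough for a demo secret

def split(secret, k, n):
    """Split `secret` into n shares, any k of which recover it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def combine(shares):
    """Recover the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

master_key = random.randrange(PRIME)
shares = split(master_key, k=3, n=5)       # 5 operators, quorum of 3
assert combine(shares[:3]) == master_key   # any 3 shards suffice
assert combine(shares[1:4]) == master_key  # ...regardless of which 3
```

Any two shards alone are consistent with every possible secret, which is why a single compromised operator cannot unseal Vault.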
Regardless of the unsealing mechanism, Vault is designed specifically to complicate an adversary's efforts to steal key material and circumvent its cryptography.
Neither approach directly stores the actual encryption key for the data, instead storing material used to generate the encryption keys that protect data at rest. This ensures that even if a well-resourced adversary breaches an auto unseal system, they cannot directly steal the keys to decrypt the crypto barrier outside of the unseal process.
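The "store material, not keys" distinction can be made concrete with a small sketch. This is a conceptual illustration only — the names and the single HMAC-based derivation step are assumptions for the example, not Vault's real key hierarchy — but it shows why possession of the storage backend alone is not enough to recompute the data key.

```python
import hashlib
import hmac
import os

# Conceptual sketch (not Vault's actual scheme): storage holds only
# key *material*; the data-encryption key is derived at unseal time by
# combining that material with an unseal key that is never persisted.

def derive_data_key(unseal_key, stored_material):
    # One HKDF-extract-style step: HMAC-SHA256(unseal_key, material).
    return hmac.new(unseal_key, stored_material, hashlib.sha256).digest()

unseal_key = os.urandom(32)       # held by operators / a KMS, never on disk
stored_material = os.urandom(32)  # the only thing that lives in storage

data_key = derive_data_key(unseal_key, stored_material)

# An adversary who exfiltrates storage has `stored_material` but not
# `unseal_key`, so they cannot recompute `data_key` -- any guessed
# unseal key yields a different, useless result.
assert derive_data_key(os.urandom(32), stored_material) != data_key
```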
Vault goes one step further in ensuring that Vault users and operators never interact with the keys used to protect their secrets. The Vault server completely manages the process of retrieving keys from encrypted storage, instead gating access to those secrets purely through the access control list (ACL) permissions in Vault (plus Sentinel policies for Vault Enterprise users) granted to the verified entity attempting to access said secrets.
While this simplifies the process of using Vault and Vault Enterprise for users, it also minimizes the possibility of a privilege escalation or key exfiltration attack. An adversary who has breached a user’s infrastructure cannot steal a user’s key material and use that to attack encrypted Vault data at rest.
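The gatekeeping described above is a default-deny, path-based check. The sketch below is a deliberately simplified model of that idea — it is not Vault's actual policy engine, which supports globs, policy templating, and a richer capability set — but it captures the essential behavior: no matching rule means no access.

```python
# Simplified model of default-deny, path-based access control in the
# spirit of Vault's ACLs (illustrative; not Vault's real engine).

def is_allowed(policies, path, capability):
    """Default-deny: access is granted only by an explicit policy rule."""
    for prefix, capabilities in policies.items():
        if path == prefix or (prefix.endswith("/") and path.startswith(prefix)):
            return capability in capabilities
    return False  # no matching rule -> denied

# A policy granting read/list under one application's secret prefix.
app_policy = {"secret/data/app/": {"read", "list"}}

assert is_allowed(app_policy, "secret/data/app/db-creds", "read")
assert not is_allowed(app_policy, "secret/data/app/db-creds", "delete")
assert not is_allowed(app_policy, "secret/data/other/key", "read")
```

Because the default is deny, an adversary who compromises a low-privileged identity gains nothing beyond the paths that identity was explicitly granted.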
Beyond secure management of key material, Vault’s infrastructure is designed to protect and encourage best practices for managing secure access to secrets. This includes:
Multilevel Security (MLS), Role-Based Access Control (RBAC), and Attribute-Based Access Control (ABAC): Vault's ACL system and Vault Enterprise features like Sentinel allow operators to implement ABAC, RBAC, and Multilevel Security. Neither system has access to the key material protecting secrets at rest, and both are default-deny for privileged access to secrets.
Audit Logging: Extensive audit logging exists within Vault to track all requests and responses. Vault supports a number of audit log formats and integrations, allowing security teams to surface and correlate security events to better understand how anomalous behavior could indicate potential insider threat activity.
Dynamic Credentials: Secrets engines like the cloud secrets engines (AWS Secrets Engine, Azure Secrets Engine, GCP Secrets Engine) and the database secrets engine allow Vault to create short-lived, ephemeral credentials for users or applications. This ensures that even if these credentials are accidentally leaked, eavesdropped, or stolen, they cannot be used to gain pervasive access to the victim's infrastructure.
Trust Independent of the Network: Vault does not rely on its network to establish trust or privacy. It requires independent TLS for communication with a client, and AuthN/AuthZ of the client via a configured auth method and the corresponding client's ACL privileges via their Vault entity.
Zero Trust Security: Vault is implemented with the assumption of untrusted networks, also known as the zero trust principle. Explicit requirements for client authentication/authorization and short-lived, ephemeral credentials work together to make lateral movement and persistent access significantly more challenging while providing a rich forensic audit trail for detection and response.
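The dynamic-credentials property in the list above reduces to a simple invariant: every lease is unique and dies on its own schedule. The sketch below is illustrative only — Vault's secrets engines create real credentials in the backing system (a database user, a cloud IAM principal) and revoke them server-side when the lease expires — but it shows why a leaked credential has a bounded blast radius.

```python
import secrets
import time
from dataclasses import dataclass

# Sketch of the dynamic-credentials idea: each lease is unique,
# short-lived, and self-expiring, so a stolen credential stops
# working on its own. (Illustrative; Vault additionally revokes the
# underlying backend credential when the lease ends.)

@dataclass
class Lease:
    credential: str
    expires_at: float

    def is_valid(self, now=None):
        return (now if now is not None else time.time()) < self.expires_at

def issue_lease(ttl_seconds):
    """Mint a fresh random credential valid for ttl_seconds."""
    return Lease(credential=secrets.token_hex(16),
                 expires_at=time.time() + ttl_seconds)

lease = issue_lease(ttl_seconds=300)              # 5-minute credential
assert lease.is_valid()                           # usable now
assert not lease.is_valid(now=time.time() + 600)  # dead after expiry
```

Contrast this with a long-lived static password: once exfiltrated, it works until someone notices and rotates it.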
Vault and Vault Enterprise simplify key management for protecting data at rest against advanced adversaries who have penetrated perimeter security. This ensures that an adversary who wants to steal Vault data cannot "go around" the encryption. They need to go headfirst into Vault's crypto barrier and breach it through mathematical codebreaking/cryptanalysis. This is no simple task.
The ciphers in Vault are chosen specifically because they have been shown to resist cryptanalysis against very well-resourced, skilled adversaries, including those armed with supercomputers and near-term quantum computers.
Data encrypted within the barrier is protected with the Go implementation of AES-256 GCM (Galois/Counter Mode). AES-256 GCM was chosen because it is a performant cipher suitable for protecting extremely sensitive information, including data classified TOP SECRET by the U.S. Government and many NATO nations.
AES-256 GCM remains computationally intractable to key-guessing attacks and is a standard across the private sector and Western military/national-intelligence environments for its resistance to cryptanalysis by supercomputers. It also remains secure against known quantum cryptanalysis attacks utilizing techniques powered by Grover's Algorithm.
Selecting ciphers that are secure against known quantum cryptanalysis has been an effort from the Vault team since the very beginning of Vault and Vault Enterprise. We’ve detailed some of these efforts in previous blog posts.
Vault Enterprise goes two steps further, allowing operators to further supplement Vault’s crypto barrier with additional crypto from an external crypto module via Seal Wrap or supplement Vault’s system random number generator with entropy from an external hardware True Random Number Generator via Entropy Augmentation.
These features allow Vault to further protect its core cryptography and are often used to satisfy compliance requirements for deploying Vault in certain high-security environments. But all versions of Vault utilize cryptography selected to stand up to adversaries wielding nation-state-level expertise and resources.
Whether we’re protecting against classical cryptanalysis or quantum cryptanalysis the goal is the same: we design all versions of Vault to withstand codebreaking attacks even if an adversary has supercomputers and long-term access to a copy of Vault’s encrypted storage backend.
Beyond implementing secure architecture, the Vault development team and HashiCorp’s security organization have developed a number of additional controls to protect against an adversary’s attempt to weaken or circumvent the protections built into Vault.
These include the following:
Safe Coding Practices: Vault takes care to isolate code that handles key material and cryptography as much as possible, and all code changes to Vault are subject to stringent code review and testing.
Internal Security: The Vault team and HashiCorp as a whole are subject to continuous internal security checks and analysis. This includes collaboration with our application security team, proactive penetration testing from our internal Red Team, auditing of the systems used for Vault’s development, and much more.
External Audit and Compliance: Vault and the HashiCorp engineering infrastructure as a whole are continuously subject to external evaluation. Vault undergoes regular external audits that cover code review as well as cryptanalytic analysis, and HashiCorp has completed a SOC 2 Type II audit and achieved ISO 27001 compliance.
One of the most important aspects of secure development in Vault is the community. Vault’s cryptographic libraries and core security are all open source, ensuring that there is a community in the tens of thousands who regularly review Vault’s security foundations. To ensure that the community’s findings translate into continuously improving Vault’s security, HashiCorp has a responsible disclosure program that has been historically critical to Vault’s security in development.
Vault is designed to serve as an independent, last line of defense for secrets. Situations like the ongoing activity from APT29 highlight the grave importance of this role, and the importance of designing the cryptography and key management protecting Vault secrets to stand up to concerted attacks from well-resourced, skilled adversaries.
Designing secure cryptosystems is a never-ending process. In the future, we may enable users to migrate to new ciphers proven secure against future forms of cryptanalysis. But today and tomorrow, our goal remains the same: the data in Vault is protected with encryption to maintain perfect forward secrecy as much as technologically possible.
People are increasingly hanging out in small, private communities. Global timelines and newsfeeds won’t come back.
The shift looks something like this:
As covered by this excellent edition of Garbage Day, Twitter shows all the characteristics of a rotting online community. How to recognise a rotting community, quoting the full list:
Power users aggressively dominate discussion on the site.
Public harassment and inter-community elitism has created a culture of indirect communication, where users no longer directly say what they’re actually trying to say.
There is no longer any internal cultural memory.
Users have become so obsessed with the minutiae of the community that the site now functions as a meta discussion of itself instead of whatever its intended purpose was.
Poor or lax moderation has created a sense that nothing on the site is genuine - fake users, fake trending topics, fake threads, fake engagement.
Users, reacting to the inauthentic behavior, public harassment, and elitism that occurs due to bad moderation, create their own self-policed communities within the larger community, which typically only exacerbates these problems and creates warring factions within the site.
Meanwhile, only the olds use Facebook (including me), but everyone younger has mostly vanished, being increasingly uncomfortable with
So this is the end of the era of global timelines. Who would take on the responsibility of content moderation to build another one?
Where is everybody going instead?
Well there are the peer-to-peer and small group spaces of texting and WhatsApp.
But the problem with peer-to-peer is that you don’t get those joyous, serendipitous moments of running into like-minded friends-of-friends.
There are private Discords, private Slack channels, and a flurry of spatial interfaces in development. They’re immune to data harvesting, invisible to search engines, and there’s no context collapse – good fences make good neighbours.
As the global timelines get abandoned, this is where people are homesteading.
And doing all the usual things of chatting, sharing links, giving support, falling out, making jokes, and all the rest.
…Or so I’m told.
I’m no zillennial hanging out in a handful of private Discords. Instead I have a blog, which is like being a big noise in ham radio, or an unironic aficionado of VHS.
My own, limited experience of this, from back in 2015:
If the global timeline feels like a city, a private Slack group feels like a neighbourhood.
I do not include in this “virtual neighbourhood” space media like newsletters and podcasts, both growing fast rn.
Perhaps what we’re seeing is the disentangling of social media back into social and media: newsletters and podcasts are best understood as being part of the media spectrum, even if many of them are smaller and have community spaces attached. And Discord space, Slack spaces, etc, these virtual neighbourhoods are pure social.
I’d love to understand these virtual neighbourhoods better.
My hunch is their optimum size will hover around the Dunbar number of ~150, fewer if you’re just looking at active members (you need a mix of active and less active in any community).
But has anyone published any research on this?
What is the distribution of populations of private Discord groups and other similar spaces? How many groups do people belong to? How does this time take away from other activities? Is there a typology of groups and how they start? How well do people know each other? Is there a typical lifecycle? Are there temporary groups and persistent groups? Is there a difference in the culture created vs the global timelines? Etc.
And let’s assume that this grows into the dominant mode of socialising online in the 2020s: