Applied Crypto Highlights: Restricted Oblivious RAMs and Hidden Volume Encryption

This is the first in a series of guest posts highlighting new research in applied cryptography. This post is written by Travis Mayberry from Northeastern University. Note that Travis is graduating this year and will be on the job market.

ORAM Background

Oblivious RAM is a very hot research topic right now. As Seny has written about here, it can be used to perform searches over outsourced encrypted data while maintaining the highest possible levels of security against a malicious storage provider. As noted in that post, however, in exchange for this security it imposes a very significant overhead on the client. In contrast, searchable encryption gives us almost as much security at a much lower cost. So, why should we care about ORAM then, and why is it so interesting to researchers right now? In this post I'm going to attempt to answer that question as well as highlight some advances in ORAM efficiency that I recently presented at CCS and a few interesting applications for it that you may not have seen.

Broadly, the answer to my question above is that the security you give up for improved efficiency might not be acceptable. The motivating example I usually give is one of a hospital that wants to outsource its patient records to the cloud. Of course, they are concerned about patient privacy and so they encrypt those records to prevent the cloud provider from learning sensitive information from those records. Unfortunately, beyond the data itself, a careful adversary can learn a lot of sensitive information from where, when and how often a client accesses their data. In this case, if the provider sees that a cancer doctor has accessed my records, they will learn that I have (or at least suspect I have) cancer, regardless of whether they can decrypt my actual records or not. The most dangerous aspect of these types of attacks is that they are cumulative. An adversary may learn only a small amount from any one access, but over time they can aggregate everything that they have seen with any side knowledge of the client they might have to reveal a surprising amount of sensitive information. With more data being outsourced to the cloud every day, this becomes a bigger and bigger problem.

Here is, of course, where Oblivious RAM comes in. Remember that ORAM can be used for secure searching, but it is actually a very general tool that can hide any access pattern a user wishes to perform from the server it is performing it on. An ORAM scheme provides an interface \((\mathsf{Read}, \mathsf{Write})\), which guarantees that the addresses a client reads and writes to are hidden from the server. Specifically, given any two sequences of accesses \(S_0\) and \(S_1\), and a random bit \(b \leftarrow_{\$} \{0,1\}\), there should not exist any probabilistic polynomial time adversary \(\mathcal{A}\) such that \(\mathcal{A}(\mathsf{ORAM}(S_b)) \rightarrow b\) with probability non-negligibly greater than \(1/2\). Here we use \(\mathsf{ORAM}(S_b)\) to signify the series of accesses performed on the server by the ORAM algorithm when running \(S_b\).
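To make that definition concrete, here is a minimal Python sketch of one round of the game (the function and adversary names are illustrative, not from any library):

```python
import random

def access_pattern_game(oram_transcript, adversary, s0, s1, rng=random):
    """One round of the ORAM access-pattern indistinguishability game.

    oram_transcript(seq) returns the list of raw server accesses made
    while the ORAM client executes the logical sequence `seq`.  The
    adversary wins the round if it guesses which sequence was run.
    """
    b = rng.randrange(2)                    # random challenge bit
    transcript = oram_transcript((s0, s1)[b])
    guess = adversary(transcript)
    return guess == b                       # secure ORAM: Pr[win] ~ 1/2
```

A scheme is secure exactly when every efficient adversary wins this game with probability only negligibly better than \(1/2\).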

This is accomplished by continually shuffling and refreshing the data on the server so that each individual access is indistinguishable from random. Again, refer to the previous post for a good example of how an ORAM works, but there are a few things worth restating:

  1. ORAMs are highly stateful. Every read and write operation will change the data structure on the server, and every subsequent operation will depend on the current state of the storage device. This often means that the client has to keep additional auxiliary information beyond long-term secrets in order to correctly access the data on the server. We refer to this as the \emph{client memory}.

  2. Each access the client performs will require more than one (often many more) "raw" accesses on the storage device. This is a consequence of the fact that the client must hide which block of data they are actually interested in.

Efficiency. The main property which we evaluate ORAM algorithms on is their communication efficiency. For every operation the ORAM performs, how much data must be transferred to and from the server? This is very important for cloud scenarios because communication overhead translates directly to cost. Efficiency is expressed in terms of the number of blocks in the data store, \(n\), and the size of each block \(B\). Usually, however, \(B\) is left out and the cost is expressed as a multiple of \(B\). This gives an intuitive notion of "overhead" compared to a normal access, which simply costs \(B\) communication. Additionally, we must consider the amount of client memory that a scheme requires. If the client needs a huge amount of memory then it would be counterproductive in a scenario meant specifically to alleviate the client's storage burden.

Existing work. As I alluded to before, there has been a flurry of research lately on ORAM. The two most notable papers have been by Shi et al. [SCSL11] and Stefanov et al. [SDS+13]. The first paper introduced a new paradigm for ORAMs, constructing a data structure with a binary tree where each node is itself a smaller Oblivious RAM. This has inspired much of the subsequent research, and it would require its own post to do it justice, but suffice it to say that they achieve an overhead of \(O(\log^3{n})\) with only \(O(1)\) client memory. Stefanov et al. introduced Path ORAM, an improved tree construction with \(O(\log^2{n})\) efficiency and \(O(\log{n})\) client memory. Additionally, in a later revision of the paper, they were able to introduce a tweak which reduces the overhead of both schemes by an \(O(\log{n})\) factor. This finally gives us a scheme with constant memory and \(O(\log^2{n})\) overhead, and one with higher, logarithmic memory but only \(O(\log{n})\) overhead. In terms of efficiency, Path ORAM represents the state of the art for Oblivious RAM.

Write-only ORAM construction

Although tree-based ORAMs provide drastically improved efficiency over more traditional hierarchical or square-root schemes, they still impose a non-trivial overhead that makes many applications cost-prohibitive. As a simple reference, setting \(n=2^{20}\) (a modestly sized database) in Path ORAM induces an overhead of at least 80x, and potentially much more depending on the size of \(B\) and the security parameter. There is little known about ORAM lower bounds, but it has been shown that under certain conditions the best you can do is an overhead of \(\Omega(\log{n})\) [Goldreich87]. While there are some rather large loopholes in that proof, such as the requirement that the client have \(O(1)\) memory and the memory blocks be all of equal size, this level of overhead seems unavoidable when using a tree-based scheme simply because the height of the tree will be \(O(\log{n})\).
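For intuition on where a figure like 80x comes from, here is a rough back-of-the-envelope estimate (illustrative constants only, not the paper's exact accounting): Path ORAM moves one root-to-leaf path of roughly \(\log_2{n}\) buckets in each direction per access, with a few blocks per bucket.

```python
import math

def path_oram_overhead(n, blocks_per_bucket=4):
    """Rough Path ORAM cost: read and then write back one root-to-leaf
    path of ~log2(n) buckets, each holding a few blocks.  The bucket
    size varies by variant; 4 is just an illustrative choice."""
    path_len = math.ceil(math.log2(n))
    return 2 * path_len * blocks_per_bucket  # blocks moved per access

print(path_oram_overhead(2**20))  # 2 * 20 * 4 = 160 blocks per one-block access
```

Even the one-way cost of a single path (80 blocks here) already matches the "at least 80x" figure above.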

It is interesting, then, to consider whether a more restricted Oblivious RAM might achieve better efficiency. Consider an ORAM which attempts to hide not read and write accesses, but writes alone. Suppose for the time being that there is an alternative, secure way for the client to read from the storage device, and it only wants to hide updates. It turns out that this goal is achievable in a rather simple way that induces only \(O(1)\) overhead!

\({\sf Setup}\): To start with, we initialize an array of size \(2n\) on the storage device to hold \(n\) logical blocks of data. Every location is initially empty, and the client has a local data structure which maps a logical block ID in the range \([0,n)\) to a storage location in the range \([0, 2n)\). This way, the client always knows which location a block is in if they want to retrieve it.

\({\sf Write}(id, data)\): Every write operation starts by choosing \(k\) unique positions in the array uniformly at random, \(P = \{p_1, \ldots, p_k\}\). Since there are \(2n\) "slots" for only \(n\) real blocks, if we choose \(k\) to be moderately large we will be guaranteed that for at least one \(i \leq k\), \(p_i\) will be empty. The client then picks one of these empty blocks and writes \(data\) into it, re-encrypting all blocks in \(P\) that \emph{are} already full, and writing random strings into the free blocks of \(P\) that were not used. Finally, the client updates their map data structure so that the record for \(id\) points to the new location. The old location of that block will still hang around, with "stale" data in it, but if it is ever chosen again in a set \(P\) it will be considered free for the purposes of storing new data. In this way, we avoid having to touch the existing location of a block when updating its value, leading to more efficient hiding of the access pattern.
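A toy Python sketch of these two procedures (encryption omitted; in the real scheme every touched slot is rewritten with a fresh IND-CPA ciphertext so that full, stale, and empty slots all look identical to the server):

```python
import random

class WriteOnlyORAM:
    """Toy write-only ORAM: an array of 2n slots holding n logical blocks.

    Plaintext stands in for ciphertext here.  In the real scheme the
    client re-encrypts every slot in P, so the server cannot tell which
    of the k touched slots received the new data.
    """
    def __init__(self, n, k):
        self.n, self.k = n, k
        self.store = [None] * (2 * n)  # server-side array, initially empty
        self.pos = {}                  # client-side map: block id -> slot

    def write(self, block_id, data):
        # Choose k distinct slots uniformly at random.
        P = random.sample(range(2 * self.n), self.k)
        # Slots holding other live blocks; the old slot of block_id is
        # about to become stale, so it counts as free.
        live = set(self.pos.values()) - {self.pos.get(block_id)}
        free = [p for p in P if p not in live]
        if not free:
            raise RuntimeError("no empty slot found; a real scheme stashes")
        self.store[free[0]] = data     # real scheme: write fresh ciphertext
        # Real scheme: also re-encrypt the full slots in P and write random
        # strings into the unused free slots of P.
        self.pos[block_id] = free[0]

    def read(self, block_id):
        # Reads are assumed to happen out of band (e.g. a local mirror).
        return self.store[self.pos[block_id]]
```

With the array at most half full, each sampled slot is free with probability at least \(1/2\), which is what makes a moderate \(k\) suffice.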

Security for this scheme follows from the security of the encryption. If ciphertexts are indistinguishable from random strings (a property slightly stronger than plain IND-CPA, sometimes written IND\$-CPA), then the adversary sees \(k\) random blocks being filled with random strings. Since everything is independent of the IDs which the client is actually writing to, the access pattern is completely hidden from the server.

Efficiency: While we can achieve communication overhead of \(O(1)\) with the above scheme, there are still two problems: (1) the map structure on the client is very large (\(O(n \log{n})\) bits), and (2) \(k\) needs to be rather large to guarantee that an empty block is found. The first issue can be neatly solved by storing the map itself in an ORAM, recursively. This is a relatively common technique, and with a trick from [SDS+13], we can guarantee that this will induce only an \(O(1)\) overhead.

On the other hand, as described above, \(k\) needs to be \(\Omega(\lambda)\) where \(\lambda\) is the security parameter: since any randomly chosen block is empty with probability at least \(1/2\), making the failure probability \(O(2^{-\lambda})\) requires \(k=\Omega(\lambda)\). Fortunately, we can take advantage of the fact that, although the failure rate is only low enough when \(k\) is large, the expected number of empty blocks that the client finds is \(k/2\). Instead of giving up when we don't find an empty block, we can keep a stash on the client of blocks which did not make it into the array, and when we find more than one empty block we can write out extras from the stash. This allows us to set \(k=3\) and, with some queueing-theory analysis, maintain a stash of size only \(O(\log{n})\).
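A rough Monte Carlo sketch of this queueing behaviour (a toy model in which each sampled slot is independently empty with probability \(1/2\), which only approximates the real occupancy process):

```python
import random

def simulate_stash(writes, k=3, p_empty=0.5, seed=1):
    """Peak stash size when each write samples k slots and each slot is
    empty with probability p_empty (toy independence assumption).

    Each step one new block arrives; every empty slot found absorbs one
    pending block (the new block first, then stash entries)."""
    rng = random.Random(seed)
    stash, peak = 0, 0
    for _ in range(writes):
        empties = sum(rng.random() < p_empty for _ in range(k))
        stash += 1                       # newly written block is pending
        stash = max(0, stash - empties)  # flush as many as we can
        peak = max(peak, stash)
    return peak
```

With \(k=3\) the expected number of empties per write is \(1.5 > 1\), so the stash behaves like a stable queue with negative drift and its peak stays small even over very long runs.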

In conclusion, we can achieve write-only secure Oblivious RAM with only \(O(1)\) overhead and, in practice, very small constants. This makes ORAM fully practical for the first time, albeit for a restricted set of use cases.

Uses for write-only ORAM

Okay, so we have this write-only ORAM that is pretty efficient, but what does that really get us? In the example I gave above, you clearly need both reading and writing. Well, this idea is very new, but I do have a few ideas. If you want to write a lot more than you read, but still occasionally do some reads, then you can do something like what is suggested in [LD13] and use PIR to read from the database independently. Of course, PIR is very inefficient, but if your writes outnumber your reads by orders of magnitude, then the savings from more efficient ORAM may outweigh the inefficient PIR. This could be useful in data-warehousing situations where you need to store files in case they are needed in the future, but the vast majority of them never will be.

A more practical situation would be online backup or mirroring services. Consider Dropbox, for example. Each client has a local copy of the storage, so when they read from it Dropbox does not get to see these accesses. When they write, however, the client pushes the changes to the server, which then distributes them to the other clients. The adversary in this scenario is effectively "write-only". Using this ORAM, you could have access-pattern protection on Dropbox at very little cost.

I leave the most interesting case for the end, of course. Think about encrypted hard disks. A common scenario is that a user encrypts their disk so that if their machine is ever lost or stolen, sensitive information on it cannot be retrieved without the encryption key. Just as in the case above, you might leak more information than you think through the access pattern you induce on your hard drive, beyond the encrypted data itself, particularly if an adversary is able to compromise your disk on more than one occasion (every night when you leave your computer at your desk, for instance).

Hidden volume encryption

Now, I know this sounds really paranoid, but stay with me because it is pretty interesting. There is also a notion of "hidden volume" encryption, introduced by TrueCrypt. With this type of encryption, a user can have not one encrypted volume on their disk, but two. The second volume lives on the portions of the disk which are marked "free" on the first volume. This allows a user to actually \emph{deny} that this second volume even exists. Why would you want to do that? Imagine someone compromises your machine. They know that you have \emph{something} encrypted on there, so they coerce you (through legal or maybe not so legal means) into revealing your password. If you have all of your really secret information stored on your hidden volume, you can safely give up the key to the outer volume and they will have no way of knowing whether you have any further information to give up or not. If the coercion you are facing is of a legal sort, they probably can't continue to pressure you with absolutely no proof that you have any more information to give up at all. If it is of the less legal variety, then it \emph{might} help you, depending on the incentives that they have to keep torturing you. There has been some game-theoretic analysis of this situation, but it does not cover many scenarios.

Getting back to Oblivious RAM, the hidden volume approach that is incorporated into TrueCrypt fails spectacularly when someone has access to your machine on more than one occasion. It becomes obvious that the client is writing into a hidden volume when the adversary sees a "free" area spontaneously and repeatedly changing its value. The key weakness here is that the main volume and hidden volume are neatly separated from each other, and writing into a certain location reliably is a dead giveaway of the existence of another volume.

Fortunately, hiding access patterns is exactly what ORAM was designed for. And, as I said before, hard drives require only "write-only" security, meaning that we can use our new, optimally efficient construction. See our CCS paper for the full details, or my talk at MSR, but the idea is fairly straightforward. The user initializes a number of different ORAMs on the disk, one for every potential volume that they might be using. They choose this number sufficiently larger than the number of volumes they actually want to use, so that there is some uncertainty as to exactly how many are really in use. For instance, I could choose a maximum of 10 but only use 4. The goal is that the user can give up \(i < max\) passwords, and an adversary should not be able to guess whether some volume beyond \(i\) is in use, or whether \(i\) is the last one.

Every access, the user writes to the volume that they want to actually change, and they do a "dummy" operation on the other volumes, which looks identical to a real operation but does not change anything. So, upon compromising the machine, an adversary sees the number of operations that have been done, but not which volumes they were on. There are some subtleties that you will have to read the full paper for, but any access pattern that may have given away the existence of a hidden volume is effectively protected by the ORAM.
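The access discipline above can be sketched as follows (hypothetical helper names; each element of `orams` would be an independent write-only ORAM instance, and `dummy_write` would re-encrypt randomly chosen slots so it is indistinguishable from a real write):

```python
def hidden_volume_write(orams, volume_index, block_id, data):
    """Write to one volume while touching every volume identically.

    `orams` holds max_volumes ORAM instances; unused volumes are still
    allocated and touched, so the server cannot count the real ones or
    tell which volume each operation targets.
    """
    for i, oram in enumerate(orams):
        if i == volume_index:
            oram.write(block_id, data)   # the real update
        else:
            oram.dummy_write()           # looks identical on the disk
```

Because every logical operation touches all volumes in the same way, the sequence of raw disk writes reveals only how many operations occurred, not which volumes they belonged to.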

Conclusion

Hopefully I have convinced you at this point that Oblivious RAM is an interesting cryptographic primitive, and that it is on the verge of being practical in some key situations. When it comes down to it, if you don't think that access pattern security is an issue, then the extra cost associated with ORAM will never be worth it to you. But if you are worried about adversaries that could aggregate accesses and potentially learn critical private information, then I highly encourage you to keep an eye on future research in the area.