Link to the paper: https://www.henrycg.com/files/academic/papers/sosp17atom.pdf
Summary
In this paper, Kwon et al. study an anonymous messaging system called Atom, which protects against traffic-analysis attacks. The novel development in this messaging system is that its capacity scales near-linearly with the number of servers on the network, whereas prior methods scaled much more slowly. Atom brings together a lot of theory published in prior papers and puts it into practice with a number of workarounds, specifically for multiparty computation protocols, which have been too inefficient to deploy at large scale. The result is that, at the time of publication, Atom was 23x faster than prior systems with similar privacy guarantees.
What I liked:
I like that the authors not only showed the setup behind how the system works rather than just the theory, but also extrapolated some applications of the technology in a real-world setting.
I like that they were able to figure out a way around multiparty computation (MPC) protocols, which are generally too inefficient to use.
They used 1,024 Amazon servers to run their implementation and actually measured its performance before publishing this paper. Though this probably isn't a realistic deployment scenario, it's an extra plus.
The system's fail-safe kicks in with only two honest users in a pool full of adversaries, which is very good because it prevents a majority of dishonest users from taking over while honest users remain.
Two forms of tamper resistance: both the NIZK proofs and the novel trap-message-based encryption. The trap messages mean that if a malicious server edits a message, there is a 50% chance it has hit a trap message and will be caught.
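To see why this 50% trap rate is powerful, here is a minimal sketch (my own, not from the paper) of the chance that a server tampering with several ciphertexts escapes detection, assuming each tampered ciphertext is independently a trap with probability p:

```python
# Probability that a malicious server tampering with m ciphertexts
# avoids hitting any trap, assuming each ciphertext is a trap with
# probability p (0.5 in Atom's construction).
def evasion_probability(m: int, p: float = 0.5) -> float:
    return (1 - p) ** m

# Tampering with even a modest number of messages is almost
# certainly detected.
for m in [1, 5, 10, 20]:
    print(m, evasion_probability(m))
```

Tampering with a single message goes undetected half the time, but tampering with 20 messages evades detection with probability under one in a million.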
What I didn’t like:
The very start of the Atom protocol dictates that volunteer servers are organized into small groups. How are these small groups created? Can this be done in a decentralized manner?
There are very strict settings, spelled out in the paper, under which Atom is effective for anonymity: the authors are essentially betting that there is an honest server in each server group. That might be reasonable when there are thousands of servers, but what if there are only a handful early on?
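A rough back-of-the-envelope sketch (my own, not the paper's analysis) of this bet, assuming each group of size k is sampled independently from a pool in which a fraction f of servers is malicious:

```python
# If a fraction f of servers is malicious and each of g groups draws
# k servers independently, a single group is all-malicious with
# probability f**k, so at least one group lacks an honest server with
# probability 1 - (1 - f**k)**g.
def compromise_probability(f: float, k: int, g: int) -> float:
    return 1 - (1 - f ** k) ** g

# Small early network: 10 groups of 4 servers, 20% malicious.
print(compromise_probability(0.2, 4, 10))
# Mature network: 1000 groups of 32 servers, 20% malicious.
print(compromise_probability(0.2, 32, 1000))
```

With small groups and few servers the compromise probability is non-negligible, while large group sizes drive it to essentially zero, which supports the worry about early deployments.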
The study used identical servers in the same location, which doesn't replicate how a real network would behave with servers scattered across the world, with different bandwidths, different server types, etc.
The system is extremely vulnerable when there is a small number of users, which raises the question: why be an early adopter of the technology if it puts you more at risk?
A small problem the paper notes is intersection attacks. On a strong, healthy network this should not be an issue, but it does build the case against being an early adopter of this protocol.
Points for further discussion:
Why is latency a major problem here? That is, is the constraint the number of servers or the latency between the links? Atom can transmit 1 million Tweet-length messages in about 30 minutes.
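For context, a quick arithmetic sketch of what that figure works out to per second:

```python
# 1 million Tweet-length messages delivered in roughly 30 minutes.
messages = 1_000_000
seconds = 30 * 60

# Average end-to-end throughput, in messages per second.
throughput = messages / seconds
print(throughput)
```

That is on the order of 550 messages per second, which illustrates that Atom trades latency for scalable throughput.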
The study points to an internal example where the authors rented Amazon servers, but how will the system deploy in the real world, where each participant in the network has different motives?
Why should someone be an early adopter of this technology?
If we were to stagger servers with different capabilities, would this significantly cut the idle time the paper refers to? Could we even predict this, given that the server groups are constantly changing?
Is there a way to make an Atom network private, so that it is only available to a certain subset of users who all want to stay anonymous?
New Ideas:
In Bitcoin, early adopters are incentivized to join the network by the opportunity to mine new currency cheaply while it is still easy to do so. How can Atom convince early adopters to come on board?
Could an attacker bombard the network with a large number of cheap servers in the hope that some groups end up with fewer than two honest participants?
Can this architecture be used in other decentralized networks to preserve anonymity?
Explore configuring the trap messages so that if a message is altered, there is a 90% or even 99% chance that it is a trap. Would this lower the number of real messages the network could carry?
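A minimal sketch of that tradeoff (my own framing, not from the paper), assuming a fraction p of all submitted ciphertexts are traps so that a tampered ciphertext is caught with probability p:

```python
# If a fraction p of submitted ciphertexts are traps, only (1 - p)
# of network capacity carries real messages.
def throughput_fraction(p: float) -> float:
    return 1 - p

# Equivalently, each real message must be accompanied by this many traps.
def traps_per_real_message(p: float) -> float:
    return p / (1 - p)

for p in [0.5, 0.9, 0.99]:
    print(p, throughput_fraction(p), traps_per_real_message(p))
```

So yes, raising the trap rate to 99% would reduce usable capacity to 1% of the network, with 99 traps sent per real message, suggesting 50% is a deliberate balance between detection odds and throughput.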
Explore making the process computationally expensive to scale, in order to prevent one person from deploying hundreds of cheap servers to attack the network.