2018-12-15 23:28:21 +01:00
parent 55a805e5ab
commit d4f5d8578c
2 changed files with 189 additions and 3 deletions

notes.org

@@ -1209,8 +1209,7 @@ access the resources. Its a web server offering an OAuth API to authenticate
**** Upload Only
- Once the download is finished, BitTorrent switches to preferring peers to which it has better upload rates, and also peers to which no one else happens to be uploading.
** Attacking a Sawm with a Band of Liars: evaluating the impact of attacks on BitTorrent
** Attacking a Swarm with a Band of Liars: evaluating the impact of attacks on BitTorrent
*** Introduction
- Peer-to-Peer (P2P) file sharing has become one of the most relevant network applications, allowing the fast dissemination of content in the Internet. In this category, BitTorrent is one of the most popular protocols.
- BitTorrent is now being used as the core technology of content delivery schemes with proper rights management that are being put in operation (e.g., Azureus Vuze)
@@ -1253,3 +1252,190 @@ access the resources. Its a web server offering an OAuth API to authenticate
- Sybil attacks are in general effective, and become more and more effective as the number of sybils increases.
- Results indicate that BitTorrent is susceptible to attacks in which malicious peers in collusion lie about the possession of pieces and make them artificially rarer.
** Do Incentives Build Robustness in BitTorrent?
*** Abstract
- A fundamental problem with many peer-to-peer systems is the tendency for users to “free ride”—to consume resources without contributing to the system.
- The popular file distribution tool BitTorrent was explicitly designed to address this problem, using a tit-for-tat reciprocity strategy to provide positive incentives for nodes to contribute resources to the swarm.
*** Introduction
- In early peer-to-peer systems such as Napster, the novelty factor sufficed to draw plentiful participation from peers.
- The tremendous success of BitTorrent suggests that TFT is successful at inducing contributions from rational peers. Moreover, the bilateral nature of TFT allows for enforcement without a centralized trusted infrastructure.
- discover the presence of significant altruism in BitTorrent, i.e., all peers regularly make contributions to the system that do not directly improve their performance.
- BitTyrant, a modified BitTorrent client designed to benefit strategic peers. The key idea is to carefully select peers and contribution rates so as to maximize download per unit of upload bandwidth. The strategic behavior of BitTyrant is executed simply through policy modifications to existing clients without any change to the BitTorrent protocol.
- We find that peers individually benefit from BitTyrant's strategic behavior, irrespective of whether or not other peers are using BitTyrant.
- Peers not using BitTyrant can experience degraded performance due to the absence of altruistic contributions. Taken together, these results suggest that “incentives do not build robustness in BitTorrent”.
- Robustness requires that performance does not degrade if peers attempt to strategically manipulate the system, a condition BitTorrent does not meet today.
- Average download times currently depend on significant altruism from high capacity peers that, when withheld, reduces performance for all users.
*** BitTorrent Overview
**** Protocol
- BitTorrent focuses on bulk data transfer. All users in a particular swarm are interested in obtaining the same file or set of files.
- Torrent files contain name, metadata, size of files and fingerprints of the data blocks.
- These fingerprints are used to verify data integrity. The metadata file also specifies the address of a tracker server for the torrent, which coordinates interactions between peers participating in the swarm.
- Peers exchange blocks and control information with a set of directly connected peers we call the local neighborhood.
- This set of peers, obtained from the tracker, is unstructured and random, requiring no special join or recovery operations when new peers arrive or existing peers depart.
- We refer to the set of peers to which a BitTorrent client is currently sending data as its active set.
- The choking strategy is intended to provide positive incentives for contributing to the system and inhibit free-riding.
- Modulo TCP effects and assuming last-hop bottleneck links, each peer provides an equal share of its available upload capacity to peers to which it is actively sending data. We refer to this rate throughout the paper as a peer's equal split rate. This rate is determined by the upload capacity of a particular peer and the size of its active set.
- There is no single definition of the active set size: sometimes it's static, sometimes it's (roughly) the square root of your upload capacity.
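A minimal sketch (mine, not from the paper) of how the equal split rate falls out of these two quantities, assuming the square-root active set sizing mentioned above; the exact constants differ per client and are an assumption here:

#+BEGIN_SRC python
import math

def active_set_size(upload_capacity_kbps, static_size=None):
    """Active set size: either a static value or roughly the square root
    of the upload capacity (reference-client style sizing)."""
    if static_size is not None:
        return static_size
    return max(2, round(math.sqrt(upload_capacity_kbps)))

def equal_split_rate(upload_capacity_kbps, static_size=None):
    """Each actively unchoked peer gets an equal share of the upload capacity."""
    return upload_capacity_kbps / active_set_size(upload_capacity_kbps, static_size)

# Example: a 300 KB/s uploader splits its capacity over ~17 peers.
print(equal_split_rate(300))  # roughly 17.6 KB/s per unchoked peer
#+END_SRC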
**** Measurement
- BitTorrent's behavior depends on a large number of parameters: topology, bandwidth, block size, churn, data availability, number of directly connected peers, active TFT transfers, and number of optimistic unchokes.
*** Modelling altruism in BitTorrent
- Peers, other than the modified client, use the active set sizing recommended by the reference BitTorrent implementation. In practice, other BitTorrent implementations are more popular (see Table 1) and have different active set sizes. As we will show, aggressive active set sizes tend to decrease altruism, and the reference implementation uses the most aggressive strategy among the popular implementations we inspected.
- Active sets are composed of peers drawn at random from the overall upload capacity distribution. If churn is low, over time TFT may match peers with similar equal split rates, biasing active set draws. We argue in the next section that BitTorrent is slow to reach steady-state, particularly for high capacity peers.
- A bunch of other assumptions allow them to model the altruism.
**** Tit-for-tat (TFT) matching time
- By default, the reference BitTorrent client optimistically unchokes two peers every 30 seconds in an attempt to explore the local neighborhood for better reciprocation pairings
- These results suggest that TFT as implemented does not quickly find good matches for high capacity peers, even in the absence of churn.
- We consider a peer as being “content” with a matching once its equal split is matched or exceeded by a peer. However, one of the two peers in any matching that is not exact will be searching for alternates and switching when they are discovered, causing the other to renew its search.
- The long convergence time suggests a potential source of altruism: high capacity clients are forced to peer with those of low capacity while searching for better peers via optimistic unchokes.
**** Probability of reciprocation
- Reciprocation is defined as such: If a peer P sends enough data to a peer Q, causing Q to insert P into its active set for the next round, then Q reciprocates P.
- Reciprocation from Q to P is determined by two factors: the rate at which P sends data to Q and the rates at which other peers send data to Q.
- This can be computed via the raw upload capacity and the equal split rate
- Beyond a certain equal split rate (14 KB/s in Figure 3), reciprocation is essentially assured, suggesting that further contribution may be altruistic.
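As a rough sketch of this reciprocation decision (my own simplification: Q is assumed to simply unchoke its fastest uploaders, ignoring optimistic unchokes):

#+BEGIN_SRC python
def reciprocates(rate_from_p, rates_from_others, active_set_size):
    """Q reciprocates P iff P's upload rate to Q is among the active_set_size
    fastest rates Q currently receives (optimistic unchokes ignored)."""
    all_rates = sorted(rates_from_others + [rate_from_p], reverse=True)
    if len(all_rates) <= active_set_size:
        return True                      # room for everyone
    cutoff = all_rates[active_set_size - 1]
    return rate_from_p >= cutoff

# P sends 14 KB/s; Q receives 10, 12, 20 and 25 KB/s from others and unchokes 4 peers.
print(reciprocates(14, [10, 12, 20, 25], 4))   # True: 14 KB/s is among the top 4
#+END_SRC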
**** Expected download rate
- The sub-linear growth suggests significant unfairness in BitTorrent, particularly for high capacity peers. This unfairness improves performance for the majority of low capacity peers, suggesting that high capacity peers may be able to better allocate their upload capacity to improve their own performance
***** Expected upload rate
- Two factors can control the upload rate of a peer: data availability and capacity limit.
1) When a peer is constrained by data availability, it does not have enough data of interest to its local neighborhood to saturate its capacity. In this case, the peer's upload capacity is wasted and utilization suffers. Because of the dependence of upload utilization on data availability, it is crucial that a client downloads new data fast enough that it can redistribute the downloaded data and saturate its upload capacity. We have found that this is indeed the case in the reference BitTorrent client because of the square root growth rate of its active set size.
2) capacity limit is obvious
**** Modeling Altruism
- We first consider altruism to be simply the difference between expected upload rate and download rate.
+ This reflects the asymmetry of upload contribution and download rate (The graph essentially shows very high altruism for peers with upload rate above 100 KB/s)
- The second definition is any upload contribution that can be withdrawn without loss in download performance.
+ This suggests that all peers make altruistic contributions that could be eliminated. Sufficiently low bandwidth peers almost never earn reciprocation, while high capacity peers send much faster than the minimal rate required for reciprocation.
- Both of the effects from the second definition can be exploited. Note that low bandwidth peers, despite not being reciprocated, still receive data in aggregate faster than they send data. This is because they receive indiscriminate optimistic unchokes from other users
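A tiny sketch (mine, not the paper's model) of the two altruism definitions above; the reciprocation threshold is an assumption standing in for the roughly 14 KB/s seen in Figure 3:

#+BEGIN_SRC python
def altruism_v1(upload_rate, download_rate):
    """Definition 1: altruism as the raw upload/download asymmetry."""
    return max(0.0, upload_rate - download_rate)

def altruism_v2(sent_per_peer, reciprocation_threshold=14.0):
    """Definition 2: upload that could be withdrawn without losing download
    performance: everything sent to peers that will never reciprocate, plus
    everything above the minimal rate for peers that will."""
    wasted = 0.0
    for rate in sent_per_peer:
        if rate < reciprocation_threshold:
            wasted += rate                                 # never earns reciprocation
        else:
            wasted += rate - reciprocation_threshold       # overshoot past the minimum
    return wasted

print(altruism_v1(200.0, 80.0))         # 120.0 KB/s of "asymmetry" altruism
print(altruism_v2([5.0, 10.0, 40.0]))   # 5 + 10 + 26 = 41.0 KB/s withdrawable
#+END_SRC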
**** Validation
- Our modeling results suggest that at least part of the altruism in BitTorrent arises from the sub-linear growth of download throughput as a function of upload rate
- Note that equal split rate, the parameter of Figure 7, is a conservative lower bound on total upload capacity
- Essentially: not entirely wrong.
*** Building BitTyrant: A strategic client
- The modeling results of Section 3 suggest that altruism in BitTorrent serves as a kind of progressive tax. As contribution increases, performance improves, but not in direct proportion.
- If performance for low capacity peers is disproportionately high, a strategic user can simply exploit this unfairness by masquerading as many low capacity clients to improve performance
- Also, by flooding the local neighborhood of high capacity peers, low capacity peers can inflate their chances of TFT reciprocation by dominating the active transfer set of a high capacity peer
- Both of the above-mentioned attacks can be stopped by simply refusing multiple connections from the same IP.
- Rather than focus on a redesign at the protocol level, we focus on BitTorrent's robustness to strategic behavior and find that strategizing can improve performance in isolation while promoting fairness at scale.
**** Maximizing reciprocation
- The modeling results of Section 3 and the operational behavior of BitTorrent clients suggest the following three strategies to improve performance.
1) Maximize reciprocation bandwidth per connection: All things being equal, a node can improve its performance by finding peers that reciprocate with high bandwidth for a low offered rate, dependent only on the other peers of the high capacity node. The reciprocation bandwidth of a peer is dependent on its upload capacity and its active set size. By discovering which peers have large reciprocation bandwidth, a client can optimize for a higher reciprocation bandwidth per connection.
2) Maximize number of reciprocating peers: A client can expand its active set to maximize the number of peers that reciprocate until the marginal benefit of an additional peer is outweighed by the cost of reduced reciprocation probability from other peers.
3) Deviate from equal split: On a per-connection basis, a client can lower its upload contribution to a particular peer as long as that peer continues to reciprocate.
- The largest source of altruism in our model is unnecessary contribution to peers in a node's active set. As such, the third option of being a dick could work well.
- The reciprocation behavior points to a performance trade-off. If the active set size is large, equal split capacity is reduced, reducing reciprocation probability. However, an additional active set connection is an additional opportunity for reciprocation. To maximize performance, a peer should increase its active set size until an additional connection would cause a reduction in reciprocation across all connections sufficient to reduce overall download performance.
- Strategic high capacity peers can benefit a lot by manipulating their active set size; however, increasing reciprocation probability via active set sizing is very sensitive, and throughput drops quickly once the maximum has been reached.
- These challenges suggest that any a priori active set sizing function may not suffice to maximize download rate for strategic clients.
- Instead, they motivate the dynamic algorithm used in BitTyrant that adaptively modifies the size and membership of the active set and the upload bandwidth allocated to each peer
- BitTyrant differs from BitTorrent as it dynamically sizes its active set and varies the sending rate per connection. For each peer p, BitTyrant maintains estimates of the upload rate required for reciprocation, u_p, as well as the download throughput, d_p, received when p reciprocates. Peers are ranked by the ratio d_p/u_p and unchoked in order until the sum of u_p terms for unchoked peers exceeds the upload capacity of the BitTyrant peer.
- the best peers are those that reciprocate most for the least number of bytes contributed to them
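A minimal sketch of the d_p/u_p ranking and unchoke step described above (the per-round update rules for u_p and d_p are omitted, the data layout is an assumption, and the exact handling of the peer that crosses the capacity budget is mine):

#+BEGIN_SRC python
def choose_unchoke_set(peers, upload_capacity):
    """peers: list of (peer_id, d_p, u_p) where
       d_p = estimated download rate when the peer reciprocates,
       u_p = estimated upload rate needed to earn that reciprocation (> 0).
    Unchoke peers in decreasing d_p/u_p order until the upload budget is spent."""
    ranked = sorted(peers, key=lambda p: p[1] / p[2], reverse=True)
    unchoked, budget = [], upload_capacity
    for peer_id, d_p, u_p in ranked:
        if u_p > budget:
            break
        unchoked.append((peer_id, u_p))   # send exactly u_p to this peer
        budget -= u_p
    return unchoked

peers = [("a", 80, 10), ("b", 50, 5), ("c", 30, 15), ("d", 20, 20)]
print(choose_unchoke_set(peers, 30))      # [('b', 5), ('a', 10), ('c', 15)]
#+END_SRC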
**** Sizing local neighbourhood
- A bigger neighbourhood allows for a bigger active set size. We want this, as graphs show that several hundred peers might be ideal, but BitTorrent usually caps the neighbourhood between 50 and 100.
- Bigger neighbourhood also allows for more optimistic unchokes
- A concern is increased protocol overhead
**** Additional cheating
- The reference BitTorrent client optimistically unchokes peers randomly. Azureus, on the other hand, makes a weighted random choice that takes into account the number of bytes exchanged with a peer. If a peer has built up a deficit in the number of traded bytes, it is less likely to be picked for optimistic unchokes (see the sketch after this list).
- This can be abused by simply disconnecting, thus wiping your history.
+ Can be stopped by logging IPs
- Early versions of BitTorrent clients used a seeding algorithm wherein seeds upload to peers that are the fastest downloaders, an algorithm that is prone to exploitation by fast peers or clients that falsify download rate by emitting have messages.
- A client would prefer to unchoke those peers that have blocks that it needs. Thus, peers can appear to be more attractive by falsifying block announcements to increase the chances of being unchoked.
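A rough sketch of such a deficit-weighted optimistic unchoke choice; the actual weighting Azureus uses is not given in these notes, so the weight function below is an assumption:

#+BEGIN_SRC python
import random

def weighted_optimistic_unchoke(peers):
    """peers: dict peer_id -> (bytes_received_from, bytes_sent_to).
    Peers that have taken much more than they gave back (a trade deficit)
    get a lower weight and are less likely to be optimistically unchoked."""
    weights = {}
    for peer_id, (received, sent) in peers.items():
        deficit = max(0, sent - received)                 # bytes the peer "owes" us
        weights[peer_id] = 1.0 / (1.0 + deficit / 2**20)  # soften per MiB owed
    ids = list(weights)
    return random.choices(ids, weights=[weights[i] for i in ids], k=1)[0]

peers = {"fresh": (0, 0), "leech": (0, 50 * 2**20), "fair": (10 * 2**20, 9 * 2**20)}
print(weighted_optimistic_unchoke(peers))   # "leech" is ~50x less likely than "fresh"
#+END_SRC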
*** Evaluation
**** Single peer using BitTyrant
- These results demonstrate the significant, real world performance boost that users can realize by behaving strategically. The median performance gain for BitTyrant is a factor of 1.72 with 25% of downloads finishing at least twice as fast with BitTyrant.
- Because of the random set of peers that BitTorrent trackers return and the high skew of real world equal split capacities, BitTyrant cannot always improve performance.
- Another circumstance for which BitTyrant cannot significantly improve performance is a swarm whose aggregate performance is controlled by data availability rather than the upload capacity distribution.
- BitTyrant does not simply improve performance, it also provides more consistent performance across multiple trials. By dynamically sizing the active set and preferentially selecting peers to optimistically unchoke, BitTyrant avoids the randomization present in existing TFT implementations, which causes slow convergence for high capacity peers
- There is a point of diminishing returns for high capacity peers, and BitTyrant can discover it. For clients with high capacity, the number of peers and their available bandwidth distribution are significant factors in determining performance. Our modeling results from Section 4.1 suggest that the highest capacity peers may require several hundred available peers to fully maximize throughput due to reciprocation.
**** Multiple peers using BitTyrant
- In contrast, BitTyrant's unchoking algorithm transitions naturally from single to multiple swarms. Rather than allocate bandwidth among swarms, as existing clients do, BitTyrant allocates bandwidth among connections, optimizing aggregate download throughput over all connections for all swarms. This allows high capacity BitTyrant clients to effectively participate in more swarms simultaneously, lowering per-swarm performance for low capacity peers that cannot.
- It can also suck to use it:
1) If high capacity peers participate in many swarms or otherwise limit altruism, total capacity per swarm decreases. This reduction in capacity lengthens download times for all users of a single swarm regardless of contribution. Although high capacity peers will see an increase in aggregate download rate across many swarms, low capacity peers that cannot successfully compete in multiple swarms simultaneously will see a large reduction in download rates.
2) New users experience a lengthy bootstrapping period. To maximize throughput, BitTyrant unchokes peers that send fast. New users without data are bootstrapped by the excess capacity of the system only.
3) Peering relationships are not stable. BitTyrant was designed to exploit the significant altruism that exists in BitTorrent swarms today. As such, it continually reduces send rates for peers that reciprocate, attempting to find the minimum rate required.
*** Conclusion
- although TFT discourages free riding, the bulk of BitTorrent's performance has little to do with TFT. The dominant performance effect in practice is altruistic contribution on the part of a small minority of high capacity peers.
- More importantly, this altruism is not a consequence of TFT; selfish peers—even those with modest resources—can significantly reduce their contribution and yet improve their download performance.
* Security and Privacy
** S/Kademlia: A Practicable Approach Towards Secure Key-Based Routing
*** Abstract
- Security is a common problem in completely decentralized peer-to-peer systems. Although several suggestions exist on how to create a secure key-based routing protocol, a practicable approach is still unattended.
- In this paper we introduce a secure key-based routing protocol based on Kademlia
*** Introduction
- A major problem of completely decentralized peer-to-peer systems is security.
- All widely deployed structured overlay networks used in the Internet today (i.e. BitTorrent, OverNet and eMule) are based on the Kademlia protocol.
*** Background
- A common service provided by all structured peer-to-peer networks is the key-based routing layer (KBR).
- Every participating node in the overlay chooses a unique nodeId from the same id space and maintains a routing table with nodeIds and IP addresses of neighbors in the overlay topology.
- Every node is responsible for a particular range of the identifier space, usually for all keys close to its nodeId in the id space.
**** Kademlia
- Kademlia is a structured peer-to-peer system which has several advantages compared to protocols like Chord as a result of using a novel XOR metric for distance between points in the identifier space. Because XOR is a symmetric operation, Kademlia nodes receive lookup queries from the same nodes which are also in their local routing tables.
- In Kademlia every node chooses a random 160-bit nodeId and maintains a routing table consisting of up to 160 k-buckets.
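A small sketch (mine) of the XOR metric and the k-bucket index it induces, using the 160-bit IDs mentioned above:

#+BEGIN_SRC python
import os

def new_node_id():
    """Random 160-bit nodeId (plain Kademlia puts no constraints on the choice)."""
    return int.from_bytes(os.urandom(20), "big")

def xor_distance(a, b):
    """Kademlia's symmetric distance metric between two IDs."""
    return a ^ b

def bucket_index(own_id, other_id):
    """Which of the up to 160 k-buckets another node falls into:
    the index of the most significant bit in which the two IDs differ."""
    return xor_distance(own_id, other_id).bit_length() - 1

me, peer = new_node_id(), new_node_id()
print(bucket_index(me, peer))   # usually 159 or 158: random IDs differ early
#+END_SRC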
*** Attacks on Kademlia
**** Attacks on the underlying network
- We assume that the underlying network layer doesn't provide any security properties to the overlay layer. Therefore an attacker could be able to overhear or modify arbitrary data packets. Furthermore we presume nodes can spoof IP addresses and there is no authentication of data packets in the underlay. Consequently, attacks on the underlay can lead to denial of service attacks on the overlay layer.
**** Attacks on overlay routing
***** Eclipse attack
- Tries to place adversarial nodes in the network in a way that one or more nodes are cut off from it.
- Can be prevented, first, if a node cannot choose its nodeId freely and, second, if it is hard to influence the other nodes' routing tables.
- Kademlia already does the latter, as nodes are only thrown out of buckets when they stop responding.
***** Sybil attack
- In completely decentralized systems there is no instance that controls the quantity of nodeIds an attacker can obtain. Thus an attacker can join the network with lots of nodeIds until he controls a fraction m of all nodes in the network.
- Cannot be prevented, only impeded. Force nodes to pay for authorization. In decentralised systems, this can only be done through system resources.
***** Churn attack
- If the attacker owns some nodes he may induce high churn in the network until the network stabilization fails. Since a Kademlia node is advised to keep long-living contacts in its routing table, this attack does not have a great impact on the Kademlia overlay topology.
***** Adversarial Routing
- Since a node is simply removed from a routing table when it neither responds with routing information nor routes any packet, the only way of influencing the network's routing is to return adversarial routing information. For example an adversarial node might just return other collaborating nodes which are closer to the queried key. This way an adversarial node routes a packet into its subnet of collaborators.
- Can be prevented by using a lookup algorithm which considers multiple disjoint paths.
**** Other Attacks
***** Denial of service
- An adversary may try to overwhelm a victim so that it consumes all its resources, i.e. memory, bandwidth, computational power.
***** Attacks on data storage
- Key-based routing protocols are commonly used as building blocks to realize a distributed hash table (DHT) for data storage. To make it more difficult for adversarial nodes to modify stored data items, the same data item is replicated on a number of neighboring nodes.
*** Design
**** Secure nodeId assignment
- It should be hard to generate a large number of nodeIds (to prevent the Sybil attack) and you shouldn't be able to choose the nodeId freely (to prevent the eclipse attack).
- The nodeid should authenticate a node
+ Can be achieved by hashing IP + port or a public key
+ The first solution has a significant drawback because with dynamically allocated IP addresses the nodeId will change subsequently.
+ It is also not suitable to limit the number of generated nodeIds if you want to support networks with NAT in which several nodes appear to have the same public IP address.
+ Finally there is no way of ensuring integrity of exchanged messages with those kind of nodeIds.
+ This is why we advocate to use the hash over a public key to generate the nodeId. With this public key it is possible to sign messages exchanged by nodes.
- Due to computational overhead we differentiate between two signature types:
1) Weak signature: The weak signature does not sign the whole message. It is limited to IP address, port and a timestamp. The timestamp specifies how long the signature is valid. This prevents replay attacks if dynamic IP addresses are used. Used in FIND_NODE and PING messages.
2) Strong signature: The strong signature signs the full content of a message. This ensures integrity of the message and resilience against Man-in-the-Middle attacks. Replay attacks can be prevented with nonces inside the RPC messages.
- Impeding Sybil and eclipse attacks can be done by either using a crypto puzzle or a signature from a central certificate authority, so we need to combine the signature types above with one of the following:
1) Supervised signature: If a signature's public key is additionally signed by a trustworthy certificate authority, this signature is called a supervised signature. This signature is needed to impede a Sybil attack in the network's bootstrapping phase where only a few nodes exist in the network. Centralized as fuck and a single point of failure.
2) Crypto puzzle signature: In the absence of a trustworthy authority we need to impede the eclipse and Sybil attacks with a crypto puzzle. Might not completely stop either attack, but we might as well make them as hard as possible for an adversary.
- Two puzzles are created:
1) A static puzzle that prevents the nodeId from being chosen freely: generate a key so that the first c_1 bits of H(H(key)) are 0; the nodeId is then H(key), so it cannot be chosen freely.
2) A dynamic puzzle that makes it expensive to generate a huge number of nodeIds: find an X so that the first c_2 bits of H(key ⊕ X) are 0; increase c_2 over time to keep nodeId generation expensive.
- verification is O(1) — creation is O(2^c_1 + 2^c_2)
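A sketch of both puzzles following the formulas above (my own; SHA-1 stands in for H, the key is treated as raw random bytes instead of a real key pair, and X is XORed byte-wise with the key):

#+BEGIN_SRC python
import hashlib, os

def H(data):
    return hashlib.sha1(data).digest()          # stand-in for the paper's hash H

def leading_zero_bits(digest):
    value = int.from_bytes(digest, "big")
    return len(digest) * 8 - value.bit_length()

def static_puzzle(c1):
    """Generate keys until the first c1 bits of H(H(key)) are zero.
    The nodeId is H(key), so it cannot be chosen freely."""
    while True:
        key = os.urandom(32)                    # stand-in for a freshly generated key pair
        node_id = H(key)
        if leading_zero_bits(H(node_id)) >= c1:
            return key, node_id

def dynamic_puzzle(key, c2):
    """Find X so that the first c2 bits of H(key XOR X) are zero; raising
    c2 over time keeps mass nodeId generation expensive."""
    while True:
        x = os.urandom(len(key))
        mixed = bytes(a ^ b for a, b in zip(key, x))
        if leading_zero_bits(H(mixed)) >= c2:
            return x

key, node_id = static_puzzle(c1=8)   # expected ~2^8 hash attempts
x = dynamic_puzzle(key, c2=8)
# Verification is O(1): recompute the two hashes and check the leading zero bits.
#+END_SRC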
**** Sibling Broadcast
- Siblings are nodes which are responsible for a certain key-value pair that needs to be stored in a DHT.
- In the case of Kademlia those key-value pairs are replicated over the k closest nodes (we remember: k is the bucket size).
- we want to consider this number of nodes independently from the bucket size k and introduce the number of siblings as a parameter s.
- A common security problem is the reliability of sibling information which arises when replicated information needs to be stored in the DHT which uses a majority decision to compensate for adversarial nodes.
- Since Kademlia's original protocol converges to a list of siblings, it is complicated to analyze and prove the coherency of sibling information.
- For this reason we introduce a sibling list of size η · s per node, which ensures that each node knows at least s siblings to an ID within the node's sibling range with high probability.
- thus, routing tables in S/Kademlia consist of the usual k-buckets and a sorted list of siblings of size η · s.
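A minimal sketch (mine) of such a sibling list of size η · s kept sorted by XOR distance to the own nodeId; the insertion policy is an assumption:

#+BEGIN_SRC python
class SiblingList:
    """Sorted list of the eta * s nodes closest (by XOR) to our own nodeId."""

    def __init__(self, own_id, s, eta=5):
        self.own_id = own_id
        self.capacity = eta * s
        self.entries = []                       # list of (xor_distance, node_id)

    def insert(self, node_id):
        if any(nid == node_id for _, nid in self.entries):
            return                              # already known
        self.entries.append((self.own_id ^ node_id, node_id))
        self.entries.sort()                     # closest first
        del self.entries[self.capacity:]        # keep only the eta * s closest

    def siblings(self, s):
        """The s closest known nodes, i.e. the actual siblings."""
        return [nid for _, nid in self.entries[:s]]
#+END_SRC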
**** Routing table maintenance
- To secure routing table maintenance in S/Kademlia we categorize signaling messages into the following classes: incoming signed RPC requests, responses, or unsigned messages. Each of those messages contains the sender address. If the message is weakly or strongly signed, this address cannot be forged or associated with another nodeId.
- We call the sender address valid if the message is signed and actively valid, if the sender address is valid and comes from a RPC response. Kademlia uses those sender addresses to maintain their routing tables.
- Actively valid sender addresses are immediately added to their corresponding bucket when it is not full. Valid sender addresses are only added to a bucket if the nodeId prefix differs by an appropriate number of bits.
+ This is needed, since otherwise an attacker could easily generate nodeIds that share a prefix with the victim's nodeId and flood its buckets, since buckets close to the own nodeId are only sparsely filled.
- Sender addresses from unsigned messages will simply be ignored.
- If a message contains more information about other nodes, then each of them can be added by invoking a ping RPC on them. If a node already exists in the routing table it is moved to the tail of the bucket.
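A sketch of this classification (mine; the message fields, the bucket helper and the prefix threshold are all assumptions):

#+BEGIN_SRC python
def shared_prefix_bits(a, b, id_bits=160):
    """Number of leading bits two nodeIds have in common."""
    return id_bits - (a ^ b).bit_length()

def handle_sender(own_id, buckets, msg, max_shared_prefix=32):
    """Classify a message's sender address before touching the routing table:
    unsigned messages are ignored; signed RPC responses ("actively valid")
    are added straight away if the bucket has room; other signed messages
    ("valid") are only added when the sender's nodeId shares at most a short
    prefix with our own, so an attacker cannot cheaply flood the sparse
    buckets close to our nodeId."""
    if not msg.signed:
        return
    if msg.is_rpc_response:
        buckets.add_if_space(msg.sender_id, msg.sender_addr)
    elif shared_prefix_bits(own_id, msg.sender_id) <= max_shared_prefix:
        buckets.add_if_space(msg.sender_id, msg.sender_addr)
#+END_SRC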
**** Lookup over disjoint paths
- The original Kademlia lookup iteratively queries α nodes with a FIND_NODE RPC for the closest k nodes to the destination key. α is a system-wide redundancy parameter.
- In each step the returned nodes from previous RPCs are merged into a sorted list from which the next α nodes are picked. A major drawback of this approach is that the lookup fails as soon as a single adversarial node is queried.
- We extended this algorithm to use d disjoint paths and thus increase the lookup success ratio in a network with adversarial nodes. The initiator starts a lookup by taking the k closest nodes to the destination key from his local routing table and distributes them into d independent lookup buckets. From there on the node continues with d parallel lookups similar to the traditional Kademlia lookup.
+ Each peer is queried only once, to keep the paths disjoint.
- By using the sibling lists, the lookup doesn't converge at a single node, but terminates on d close-by neighbours, which all know the complete s siblings for the destination key. So this should still succeed even if k-1 of the neighbors are evil.
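A sketch (mine) of the d-disjoint-path lookup loop: the k closest known nodes are split over d lookup buckets, each bucket runs an iterative lookup, and a shared "already queried" set keeps the paths disjoint. The routing table and the FIND_NODE RPC are abstracted behind hypothetical closest() and find_node() functions, and nodes are assumed hashable with an integer node_id attribute.

#+BEGIN_SRC python
def disjoint_lookup(target, routing_table, find_node, k=16, d=4, alpha=3):
    """Simplified d-disjoint-path lookup; find_node(node, target) is the
    FIND_NODE RPC and returns nodes closer to the target."""
    start = routing_table.closest(target, k)
    seed_buckets = [start[i::d] for i in range(d)]   # distribute seeds over d paths
    queried = set()                                  # shared: keeps the paths disjoint
    learned = set(start)
    for candidates in seed_buckets:
        candidates = sorted(candidates, key=lambda n: n.node_id ^ target)
        while candidates:
            batch = [n for n in candidates[:alpha] if n not in queried]
            candidates = candidates[alpha:]
            for node in batch:
                queried.add(node)                    # each peer is queried at most once
                for found in find_node(node, target):
                    if found not in learned:
                        learned.add(found)
                        candidates.append(found)
            candidates.sort(key=lambda n: n.node_id ^ target)
    return sorted(learned, key=lambda n: n.node_id ^ target)[:k]
#+END_SRC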
*** Evaluations and results
- The figure clearly shows that by increasing the number of parallel disjoint paths d the fraction of successful lookups can be considerably improved. In this case the communication overhead increases linearly with d. We also see that with k = 16 there is enough redundancy in the k-buckets to actually create d disjoint paths.
- In the second setup we adapted k = 2 · d to the number of disjoint paths to keep a minimum of redundancy in the routing tables and consequently reduce communication overhead. The results in figure 5 show that a smaller k leads to a smaller fraction of successful lookups compared to figure 4. The reason for this is the increased average path length due to the smaller routing table, as shown in the path length distribution diagram.
- Larger values for k than 8..16 would also increase the probability that a large fraction of buckets are not full for a long time. This unnecessarily makes the routing table more vulnerable to Eclipse attacks.
*** Related work
- They state that an important step to defend these attacks is detection by defining verifiable system invariants. For example nodes can detect incorrect lookup routing by verifying that the lookup gets “closer” to the destination key.
+ This could be done by Pastry, as it has GPS or location information
+ Kademlia as well, as distance can be calculated.
- To prevent Sybil: In [13] Rowaihy et al. present an admission control system for structured peer-to-peer systems. The system constructs a tree-like hierarchy of cooperative admission control nodes, from which a joining node has to gain admission. Another approach [7] to limit Sybil attacks is to store the IP addresses of participating nodes in a secure DHT. In this way the number of nodeIds per IP address can be limited by querying the DHT if a new node wants to join.
*** Conclusion
- We propose several practicable solutions to make Kademlia more resilient. First we suggest limiting free nodeId generation by using crypto puzzles in combination with public key cryptography. Furthermore we extend the Kademlia routing table by a sibling list. This reduces the complexity of the bucket splitting algorithm and allows a DHT to store data in a safe replicated way. Finally we propose a lookup algorithm which uses multiple disjoint paths to increase the lookup success ratio. The evaluation of S/Kademlia in the simulation framework OverSim has shown that even with 20% adversarial nodes, 99% of all lookups are still successful if disjoint paths are used. We believe that the proposed extensions to the Kademlia protocol are practical and could be used to easily secure existing Kademlia networks.

Binary file not shown.