
Fix connecting to peers #454

Merged
9 commits merged on Jan 28, 2021

Conversation

@koivunej (Collaborator) commented Jan 26, 2021

As I understand it, the correct way to connect to a peer is to DialPeer and then provide that peer's address from NetworkBehaviour::addresses_of_peer. While doing this, it was also necessary to fix the previous workaround for the missing events when banning a peer (the way we force a disconnect).
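
For illustration only (not the PR's actual code), here is a minimal sketch of that pattern, assuming the libp2p API of that era where a behaviour queues a dial request for a PeerId and the swarm then calls back into NetworkBehaviour::addresses_of_peer; SwarmApiSketch, connect and pending_dials are hypothetical names:

use std::collections::{HashMap, VecDeque};
use libp2p::{Multiaddr, PeerId};

// Hypothetical, simplified state for the dial-by-peer-id pattern.
#[derive(Default)]
struct SwarmApiSketch {
    // Addresses to hand out once the swarm asks for them while dialing.
    pending_addresses: HashMap<PeerId, Vec<Multiaddr>>,
    // Peer ids queued so that poll() can emit a DialPeer-style action for them.
    pending_dials: VecDeque<PeerId>,
}

impl SwarmApiSketch {
    // Record the address and queue a dial request for the peer.
    fn connect(&mut self, peer_id: PeerId, addr: Multiaddr) {
        self.pending_addresses.entry(peer_id.clone()).or_default().push(addr);
        self.pending_dials.push_back(peer_id);
    }

    // Mirrors NetworkBehaviour::addresses_of_peer: the swarm calls this while
    // dialing the peer, and the entry is consumed in one shot.
    fn addresses_of_peer(&mut self, peer_id: &PeerId) -> Vec<Multiaddr> {
        self.pending_addresses.remove(peer_id).unwrap_or_default()
    }
}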

@koivunej changed the title from "Fix connecting with the wrong way" to "Fix connecting to peers" on Jan 26, 2021
@koivunej requested a review from ljedrz on January 26, 2021 17:03
@koivunej marked this pull request as ready for review on January 26, 2021 17:03

/// The connections which have been requested, but the swarm/network is yet to ask for
/// addresses; currently filled in the order of adding, with the default size of one.
pending_addresses: HashMap<PeerId, Vec<Multiaddr>>,
Member

I was wondering if a HashSet wouldn't be better than having to do linear lookups, but I guess a Vec should be faster for only a few entries, and I wouldn't expect more 👍

Collaborator Author

There is also the use in NetworkBehaviour::addresses_of_peer, which is just pending_addresses.remove(peer_id).unwrap_or_default().

This is never expected to accumulate a lot of addresses. There might be a way for dead entries to accumulate here, for example libp2p_kad dialing the same peer in the sibling poll, but even then the hashmap entry would get removed; perhaps not in all cases. Overall it would be easier to have access to the networking core of the Swarm for all of these, but at the same time I kind of agree that hiding it is a good idea. Perhaps having access to a HashSet<PeerId> via PollParameters would work. I need to keep thinking about this :)

This comment was marked as outdated.

Collaborator Author

Again, it seems that this is not so simple. I guess I'll have to follow up with another PR to see if this could just be "connect to the peer with this given multiaddr, or any other way". That should at least simplify a lot of the wrestling and allow using the non-provided methods of NetworkBehaviour.

I noted an additional "deadlock" with a peer racing to connect to us before our dialing attempt has completed. This should be solvable by redefining connect as "end up being connected to the peer".
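
To illustrate that "end up being connected" redefinition (a hypothetical sketch, not code from this PR; PendingConnects and its methods are made-up names), the pending connect could be completed whenever any connection to the peer is established, whether we dialed out or the peer dialed us first:

use std::collections::HashMap;
use futures::channel::oneshot;
use libp2p::PeerId;

#[derive(Default)]
struct PendingConnects {
    // Callers waiting to "end up connected" to a given peer.
    waiters: HashMap<PeerId, Vec<oneshot::Sender<()>>>,
}

impl PendingConnects {
    // Register interest in being connected to `peer`; the caller awaits the receiver.
    fn subscribe(&mut self, peer: PeerId) -> oneshot::Receiver<()> {
        let (tx, rx) = oneshot::channel();
        self.waiters.entry(peer).or_default().push(tx);
        rx
    }

    // Invoked for every established connection, inbound or outbound, so a peer
    // racing to connect to us no longer leaves our own connect call hanging.
    fn connection_established(&mut self, peer: &PeerId) {
        for tx in self.waiters.remove(peer).unwrap_or_default() {
            let _ = tx.send(());
        }
    }
}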

Collaborator Author

This is quite outdated. Might be best to look again with fresh eyes :)

@ljedrz (Member) left a comment

LGTM

@koivunej

This comment has been minimized.

@koivunej (Collaborator Author)

Perhaps the peerid focus is the future.

Decided not to do this after all; it would remove a lot of the sense/use case of this CLI functionality.

As the PR included some back and forth, I decided to squash most of it and moved the unrelated changes above the big commit. I couldn't really find a way to do it piecewise, as a number of test cases failed. I'll probably do a round of scanning the Ipfs:: method documentation and then we are hopefully done with this.

The PR now includes the adaptation to libp2p/rust-libp2p#1619 which was skipped in #446; that change also affects how NetworkBehaviour::inject_disconnected and the per-connection closed methods are called (they were not called previously), so a hack was needed. We might need a quick fix to remove the debug_asserts, but they should be correct now.
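
As an aside, here is a made-up sketch (not the PR's code) of the kind of bookkeeping those callbacks imply: each closed connection decrements a counter, and the peer-level disconnect should only fire once no connections remain, which is what the debug_asserts guard:

use std::collections::HashMap;
use libp2p::PeerId;

#[derive(Default)]
struct ConnectionBook {
    // Number of live connections per peer.
    connections: HashMap<PeerId, usize>,
}

impl ConnectionBook {
    // Would be driven from the connection-established callback.
    fn on_connection_established(&mut self, peer: PeerId) {
        *self.connections.entry(peer).or_insert(0) += 1;
    }

    // Would be driven from the connection-closed callback: one connection gone.
    fn on_connection_closed(&mut self, peer: &PeerId) {
        if let Some(n) = self.connections.get_mut(peer) {
            *n = n.saturating_sub(1);
        }
    }

    // Would be driven from the peer-level disconnect callback: by then every
    // individual connection should already have been accounted for.
    fn on_disconnected(&mut self, peer: &PeerId) {
        let remaining = self.connections.remove(peer).unwrap_or(0);
        debug_assert_eq!(remaining, 0, "disconnected while connections remain");
    }
}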

@koivunej requested a review from ljedrz on January 27, 2021 17:39
while libp2p might soon gain the ability to dial peer addresses
directly, this will be a stopgap until then. updated a bunch of comments
and adapted to the since-fixed missing events on banning a peer, which
is still used to disconnect a peer at will.
bors bot added a commit that referenced this pull request Jan 28, 2021
455: test: split off common_tests from common r=koivunej a=koivunej

Tiny refactoring to stop testing `common::tests` as part of every top level `test/*.rs`, noticed during #454.

Co-authored-by: Joonas Koivunen <[email protected]>
@koivunej (Collaborator Author)

The amount of added untested code here isn't great. I have some ideas for more test cases, but doing them the way we've done them so far in tests/connectivity.rs would put even more complexity on the hidden helper ipfs::Node, which I cannot prove would always work. The best bet seems to be to add them similarly to

rust-ipfs/src/p2p/swarm.rs

Lines 268 to 287 in 23fc80b

#[tokio::test]
async fn swarm_api() {
    let (peer1_id, trans) = mk_transport();
    let mut swarm1 = Swarm::new(trans, SwarmApi::default(), peer1_id);

    let (peer2_id, trans) = mk_transport();
    let mut swarm2 = Swarm::new(trans, SwarmApi::default(), peer2_id);

    Swarm::listen_on(&mut swarm1, "/ip4/127.0.0.1/tcp/0".parse().unwrap()).unwrap();

    for l in Swarm::listeners(&swarm1) {
        let mut addr = l.to_owned();
        addr.push(Protocol::P2p(
            Multihash::from_bytes(&peer1_id.to_bytes()).unwrap(),
        ));
        if let Some(fut) = swarm2.connect(addr.try_into().unwrap()) {
            fut.await.unwrap();
        }
    }
}
This particular test case is broken and only seems to work, so I should start by fixing that in a follow-up PR.
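
As a possible shape for that follow-up (purely hypothetical, not part of this PR), the test could drive both swarms while waiting for the connect future, instead of awaiting only swarm2's side:

use futures::{Stream, StreamExt};

// Poll both swarm event streams until `connected` resolves, so neither side
// of the dial is left undriven. All names here are illustrative.
async fn drive_until_connected<S1, S2, F>(mut swarm1: S1, mut swarm2: S2, mut connected: F)
where
    S1: Stream + Unpin,
    S2: Stream + Unpin,
    F: std::future::Future + Unpin,
{
    loop {
        tokio::select! {
            _ = &mut connected => break,
            _ = swarm1.next() => {}
            _ = swarm2.next() => {}
        }
    }
}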

bors r+

bors bot (Contributor) commented Jan 28, 2021

Build succeeded:

bors bot merged commit 2c54092 into rs-ipfs:master on Jan 28, 2021