
Localize WithCancel contexts and remove DAGService timeout #1608

Closed
rht wants to merge 2 commits from the cleanup-context branch

Conversation

@rht (Contributor) commented Aug 25, 2015

No description provided.

@jbenet jbenet added the status/in-progress In progress label Aug 25, 2015
@rht rht changed the title Localize WithTimeout contexts Localize WithTimeout and WithCancel contexts Aug 25, 2015
@rht rht force-pushed the cleanup-context branch from 17f6cc2 to 5458bdc on August 25, 2015 01:33
@@ -174,6 +175,8 @@ func (h *BasicHost) NewStream(pid protocol.ID, p peer.ID) (inet.Stream, error) {
 // h.Network.Dial, and block until a connection is open, or an error is
 // returned. // TODO: Relay + NAT.
 func (h *BasicHost) Connect(ctx context.Context, pi peer.PeerInfo) error {
+	ctx, cancel := context.WithTimeout(ctx, host.DiscoveryConnTimeout)
+	defer cancel()
Member:

👍

Member:

Are you sure we don't want to be able to specify our own timeout for this? This feels like the opposite of what we want to do...

Contributor Author:

So far in the code the timeouts either have the same value or none at all.
Rather than having to reason about the timeout each time, why not localize it to the place where the ctx and timeout are needed and specify the timeout here?

Member:

Because what if I know that I'm going to connect to a peer that has insane latency? With this change, I will have no way of specifying that we should wait longer for the connection.

Member:

Hm, I think @whyrusleeping's right.

The timeout was originally for "discovered" peers, right? (Not peers coming from an API consumer/client.) Maybe it makes sense for it to live just there.

Contributor Author:

In that case, shouldn't all the timeouts in the code that use (rh *RoutedHost) Connect be adjusted for the latency anyway? Not just the one for (n *IpfsNode) HandlePeerFound.

Member:

The basic UX is (see the sketch below):

  • when calling Connect, the client specifies the timeout
  • when connecting automatically from a discovered peer, use the discoveryConnTimeout
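
A minimal sketch of those two paths. The connector interface, the peer-ID type, and the 30-second value are illustrative assumptions introduced here; only the WithTimeout placement follows the discussion above.

```go
package example

import (
	"context"
	"log"
	"time"
)

// connector stands in for the host's Connect method; it is a placeholder
// so the sketch is self-contained.
type connector interface {
	Connect(ctx context.Context, peerID string) error
}

// discoveryConnTimeout mirrors the constant discussed above; the value is illustrative.
const discoveryConnTimeout = 30 * time.Second

// connectExplicit is the client-driven path: the caller chooses its own timeout,
// so a peer with high latency can be given a longer deadline.
func connectExplicit(c connector, peerID string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	return c.Connect(ctx, peerID)
}

// handlePeerFound is the discovery-driven path: no client is around to pick a
// deadline, so the default discoveryConnTimeout is applied here.
func handlePeerFound(c connector, peerID string) {
	ctx, cancel := context.WithTimeout(context.Background(), discoveryConnTimeout)
	defer cancel()
	if err := c.Connect(ctx, peerID); err != nil {
		log.Printf("failed to connect to discovered peer %s: %v", peerID, err)
	}
}
```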

@rht rht force-pushed the cleanup-context branch 2 times, most recently from b2ee5b5 to 065e97c on August 26, 2015 00:09

-	ctxT, _ = context.WithTimeout(ctx, time.Second*2)
+	ctxT, _ := context.WithTimeout(ctx, time.Second*2)
Member:

defer cancel()
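
A small sketch of the point behind that suggestion (the work callback is a stand-in introduced here): discarding the CancelFunc keeps the context's timer alive until the two-second deadline fires, while keeping and deferring it releases those resources as soon as the call returns.

```go
package example

import (
	"context"
	"time"
)

// withShortTimeout wraps work in a two-second deadline and, per the review
// comment, keeps the CancelFunc and defers it so the context is torn down
// as soon as work returns rather than only when the deadline expires.
func withShortTimeout(ctx context.Context, work func(context.Context) error) error {
	ctxT, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()
	return work(ctxT)
}
```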

@rht rht force-pushed the cleanup-context branch 3 times, most recently from f594ac2 to c83e82f on August 30, 2015 02:22
@rht rht added the codereview label Aug 30, 2015
@rht (Contributor Author) commented Aug 30, 2015

Localized Connect() timeout reverted.

@@ -136,10 +136,10 @@ func TestValueGetSet(t *testing.T) {

connect(t, ctx, dhtA, dhtB)

ctxT, _ := context.WithTimeout(ctx, time.Second)
dhtA.PutValue(ctxT, "/v/hello", []byte("world"))
Member:

Why remove this timeout? We expect this to happen immediately; if it hangs, that's wrong and we want to know about it.
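
A sketch of that point: with the one-second bound kept, a hang surfaces as context.DeadlineExceeded and fails the test quickly. The put callback is a stand-in for dhtA.PutValue; only the key, value, and one-second duration follow the diff.

```go
package example

import (
	"context"
	"errors"
	"testing"
	"time"
)

// putWithBound keeps the one-second timeout so a hanging put fails the
// test fast instead of stalling it; put stands in for dhtA.PutValue.
func putWithBound(t *testing.T, ctx context.Context, put func(context.Context, string, []byte) error) {
	t.Helper()
	ctxT, cancel := context.WithTimeout(ctx, time.Second)
	defer cancel()
	err := put(ctxT, "/v/hello", []byte("world"))
	if errors.Is(err, context.DeadlineExceeded) {
		t.Fatal("put hung past the one-second bound")
	}
	if err != nil {
		t.Fatal(err)
	}
}
```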

@rht rht changed the title Localize WithTimeout and WithCancel contexts Localize WithCancel contexts and remove DAGService timeout Sep 4, 2015
rht added 2 commits October 3, 2015 18:22

License: MIT
Signed-off-by: rht <rhtbot@gmail.com>

License: MIT
Signed-off-by: rht <rhtbot@gmail.com>
@rht rht force-pushed the cleanup-context branch from 9572f61 to 1e4725e on October 3, 2015 11:23
@rht (Contributor Author) commented Oct 3, 2015

Rebased.

@@ -338,10 +338,8 @@ func (i *gatewayHandler) putHandler(w http.ResponseWriter, r *http.Request) {
 		return
 	}

-	tctx, cancel := context.WithTimeout(ctx, time.Minute)
-	defer cancel()
Member:

I'm not sure this is a bad context to have -- what is the timeout on ctx as it stands? Is there one?

Member:

Yeah, we should keep this one. Anything on the gateway should have an explicit timeout (although maybe it should be made configurable).
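
A sketch of that configurable variant. The putTimeout field is hypothetical; only the one-minute default and the WithTimeout/defer cancel shape come from the diff, and r.Context() is used so the sketch stands alone.

```go
package example

import (
	"context"
	"net/http"
	"time"
)

type gatewayHandler struct {
	// putTimeout is a hypothetical knob; zero falls back to the one-minute
	// default that the diff hard-codes.
	putTimeout time.Duration
}

func (i *gatewayHandler) putHandler(w http.ResponseWriter, r *http.Request) {
	timeout := i.putTimeout
	if timeout == 0 {
		timeout = time.Minute
	}
	// Keep an explicit per-request deadline on the gateway, as argued above.
	tctx, cancel := context.WithTimeout(r.Context(), timeout)
	defer cancel()

	// ... the rest of the handler would thread tctx through its DAG calls ...
	_ = tctx
}
```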

@jbenet (Member) commented Oct 3, 2015

The cancels on tests are going to help our tests go faster 👍 👍

@jbenet (Member) commented Oct 3, 2015

Many of these removed contexts (particularly the WithCancels) should not be removed, IMO; they're well placed to cut down resource expenditure early. (I've commented on the ones I think are wrong to remove.)

@whyrusleeping PTAL too?

@ghost ghost added RFCR and removed codereview labels Dec 22, 2015
@whyrusleeping (Member) commented:

Closing; we can pull relevant changes out if needed. The rest will be difficult to rebase.

@jbenet jbenet removed the status/in-progress In progress label Dec 30, 2015