Refactor the Downloads page #7378
Conversation
```python
current_download = self.window().download_details_widget.current_download
if current_download and current_download.get(infohash) == infohash:
```
From my POV, the code `if current_download and current_download.get(infohash)` was incorrect and should always have returned `False`.
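A tiny illustration of why the old check always fails. This assumes `current_download` is a dict keyed by field names such as `"infohash"`; the exact shape is an assumption for illustration, not taken from the PR:

```python
# Hypothetical download dict, keyed by field names rather than by infohash:
current_download = {"infohash": "abc123", "name": "example.torrent"}
infohash = "abc123"

# The old condition uses the infohash *value* as a dict key, which is never a
# key here, so .get() returns None and the comparison is always False:
print(current_download.get(infohash) == infohash)    # False

# The presumably intended comparison reads the stored field instead:
print(current_download.get("infohash") == infohash)  # True
```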
Force-pushed from de1f8b7 to 8085cef.
Force-pushed from 6c15034 to 3bfe20e.
```python
if self.rest_request:
    self.rest_request.cancel()
```
I have a concern regarding this part of the code. Before this PR, request canceling was managed by a separate timer. Although the previous logic with two separate timers wasn't very elegant, it did ensure that a new request would not cancel an ongoing request that the Core was still processing. With the new logic, it's possible for a UI change to trigger a new request, canceling a previous request that Core is still processing, even if it was sent just a short time ago.
In cases where processing a request to the `/downloads` endpoint takes a significant amount of time, canceling and resending the request could negatively impact Tribler's responsiveness.
I suggest implementing the following logic: if a previous request with the same parameters (get_pieces/get_peers/get_files) was sent not too long ago (within a timeout window), instead of canceling that request, skip the new request and continue waiting for the results of the earlier one. This approach should help maintain responsiveness while ensuring requests are properly processed.
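The suggested logic could be sketched roughly as follows. This is a minimal standalone sketch, not Tribler code; the class name, the 10-second timeout, and the method names are all illustrative assumptions:

```python
import time

REQUEST_TIMEOUT = 10.0  # seconds; an illustrative value, not taken from the PR


class DownloadsRequestGuard:
    """Sketch of the suggested logic: if a request with the same parameters
    (get_pieces/get_peers/get_files) was sent within the timeout window and is
    still pending, skip the new request instead of canceling the old one."""

    def __init__(self, timeout: float = REQUEST_TIMEOUT):
        self.timeout = timeout
        self.pending_params = None  # params of the in-flight request, if any
        self.sent_at = 0.0

    def should_send(self, url_params: dict) -> bool:
        now = time.time()
        same_request_pending = (
            self.pending_params == url_params and now - self.sent_at < self.timeout
        )
        if same_request_pending:
            return False  # keep waiting for the earlier identical request
        self.pending_params = dict(url_params)
        self.sent_at = now
        return True

    def on_response(self) -> None:
        self.pending_params = None  # the in-flight request has completed
```

With such a guard, `should_send({'get_pieces': 1})` returns `True` for the first call, `False` for an immediate identical retry, and `True` again once the parameters differ or the earlier response has arrived.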
> I have a concern regarding this part of the code. Before this PR, request canceling was managed by a separate timer. Although the previous logic with two separate timers wasn't very elegant, it did ensure that a new request would not cancel an ongoing request that the Core was still processing.
Unfortunately, I disagree with this sentence. The previous cancellation logic works roughly the same way that it works in the PR.
A request could be canceled in two places in the previous version of the code:
- By calling `on_downloads_request_timeout` (removed in this PR):
```python
def on_downloads_request_timeout(self):
    self._logger.warning("Downloads request timed out, retrying...")
    if self.rest_request:
        self.rest_request.cancel()
    self.schedule_downloads_timer()
```
- On each `load_downloads` call (renamed to `refresh_downloads` in this PR):
```python
def load_downloads(self):
    url_params = {'get_pieces': 1}
    if self.window().download_details_widget.currentIndex() == 3:
        url_params['get_peers'] = 1
    elif self.window().download_details_widget.currentIndex() == 1:
        url_params['get_files'] = 1

    isactive = not self.isHidden()
    if isactive or (time.time() - self.downloads_last_update > 30):
        # Update if the downloads page is visible or if we haven't updated
        # for longer than 30 seconds
        self.downloads_last_update = time.time()
        priority = QNetworkRequest.LowPriority if not isactive else QNetworkRequest.HighPriority
        if self.rest_request:
            self.rest_request.cancel()
        request_manager.get(
            endpoint="downloads",
            url_params=url_params,
            on_success=self.on_received_downloads,
            priority=priority,
        )
```
Therefore, the additional timer has nothing in common with "ensuring that a new request would not cancel an ongoing request".
But the funny fact is... it seems that `self.rest_request` is always equal to `None`, so the cancellation logic could be safely removed without changing the application logic.
The logic with the removed `self.rest_request` value assignment was introduced here: f5d6aeb#diff-d512cc9b333c64acd7568dfce806cc67707f68e9ff9016368a096a66921a353f
> The logic with the removed `self.rest_request` value assignment was introduced here
Ah, this is the reason why we stopped seeing "Request was canceled" spam for the download endpoint :)
I restored the assignment, and now the log is again full of:
```
Request was canceled: GET https://localhost:52194/downloads?get_pieces=1
```
Under normal circumstances, frequent cancellation of download requests shouldn't be required. So, there's still a need for a solution to address this issue.
Turns out, the problem with request canceling has a simple answer: if we assign `self.rest_request`, we also need to clear it in `on_received_downloads` :) With this change, the `Request was canceled` spam is gone, so it turns out to be a false positive.
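The fix can be sketched in a self-contained way. `FakeRequest` and `FakeRequestManager` below are stand-ins invented for illustration; the real `request_manager` API in Tribler may differ:

```python
class FakeRequest:
    """Stand-in for an in-flight REST request."""
    def __init__(self):
        self.canceled = False

    def cancel(self):
        self.canceled = True


class FakeRequestManager:
    """Stand-in for request_manager: issues a request and can simulate a response."""
    def get(self, endpoint, on_success):
        self.on_success = on_success
        self.last_request = FakeRequest()
        return self.last_request

    def deliver(self, result):
        self.on_success(result)  # simulate the Core's response arriving


class DownloadsPage:
    def __init__(self, request_manager):
        self.request_manager = request_manager
        self.rest_request = None

    def refresh_downloads(self):
        if self.rest_request:  # only cancel a request that is really in flight
            self.rest_request.cancel()
        self.rest_request = self.request_manager.get(
            "downloads", self.on_received_downloads
        )

    def on_received_downloads(self, result):
        self.rest_request = None  # the fix: forget the completed request
        # ... process the downloads list here ...
```

With the assignment cleared on response, a later refresh no longer "cancels" a request that has already finished, which matches the disappearance of the log spam.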
After this change, I measured the response time from the `/downloads` endpoint on my machine; it is usually 0.3-0.5 seconds but periodically 3.7-4.0 seconds. Only these two intervals can be observed, nothing in between, like "1.0 second". It is possible that the endpoint logic sometimes gets blocked by other asynchronous processes, and this causes a bigger delay.
With this in mind, I think it may be too early to cancel a request if we issue a new one just tens of milliseconds after the previous one (in the case of a request provoked by UI changes). We need some timeout, not necessarily a long one, say, 10 seconds, and we should not cancel the previous request before the timeout expires.
Together with @kozlovsky, we decided on the following:
- In this PR, I will remove the cancellation logic.
- In the subsequent PR, I will write code to ensure the order of responses.
I appreciate the work put into this pull request, and I suggest improving the review process for large future PRs. When a PR includes both simple refactorings and subtle logic changes, it can be challenging to identify and thoroughly review the important changes.

To make the review process more efficient, consider breaking down large pull requests into a series of smaller, atomic commits. This way, it's easier to ensure that each commit either keeps the current logic (for refactorings) or introduces a clear, understandable change.

It may not always be apparent how to split commits during development. However, once completed, the PR author can use interactive rebase or manually reapply changes in a temporary branch to separate them. While this adds some work for the PR author, it greatly simplifies the reviewer's job and helps prevent important changes from being overlooked. For example, this PR could be organized into the following commits:
Ideally, each commit should be one of the following:
With this structure, the reviewer can quickly verify most commits and focus on closely examining the last commit containing the significant logic change. This approach will help streamline the review process and ensure important updates are not missed.
And then the reviewer adds a single comment (like "let's rename the enum"), and this change could touch all commits, so the PR author will have to redo this time-consuming splitting work one more time. And then another comment (like "let's change the folder of the enum"), and again all the splitting work has to be redone.
It can add some work, but usually only a little. It is often possible to update the corresponding commit and rebase the later commits on top of the modified one. The PyCharm tool for resolving simple merge conflicts is usually able to resolve all conflicts automatically in these cases.
I tried this method for some PRs, and for me, it involved quite a lot of work. But let's approach this differently. It's good that you believe the PR process could be improved. However, this discussion isn't part of this particular PR and should be held separately with the entire team. If all members agree with the change, then we should establish an agreement and adhere to the new rules.
This PR is related to #7360.
It optimizes the Downloads page UI logic by removing unnecessary calls and by not sending requests while the page is inactive.