
Slow startup issue with v0.110.2 CKB node #4153

Closed

guomaoqiu opened this issue Sep 12, 2023 · 9 comments
Labels
p:should-have Priority: important but not necessary for delivery in the current delivery timebox
s:information-needed Status: Information needed to locate the issue
stale To be closed due to a lack of activity
t:bug Type: This doesn't seem right.

Comments

@guomaoqiu

Bug Report

  • cd /var/lib/ckb
  • ckb init --chain mainnet
    Config files already exist, use --force to overwrite.
  • true
  • ckb migrate --force
  • ckb run
    2023-09-12 09:53:47.000 +00:00 main INFO sentry sentry is disabled
    2023-09-12 09:53:47.000 +00:00 main INFO ckb_bin::helper raise_fd_limit newly-increased limit: 1048576
    2023-09-12 09:53:47.105 +00:00 main INFO ckb_bin::subcommand::run ckb version: 0.110.2 (0fa89e7 2023-09-11)
    2023-09-12 09:53:47.105 +00:00 main INFO ckb_launcher Miner is disabled, edit ckb.toml to enable it
    2023-09-12 09:53:47.112 +00:00 main DEBUG ckb_db_migration current database version [20230206163640]
    2023-09-12 09:53:47.112 +00:00 main DEBUG ckb_db_migration latest database version [20230206163640]
    2023-09-12 09:53:47.437 +00:00 main INFO ckb_memory_tracker::process track current process: disable
    2023-09-12 09:53:47.437 +00:00 main INFO ckb_launcher chain genesis hash: 0x92b197aa1fba0f63633922c61c92375c9c074a93e85963554f5499fe1450d0e5
    2023-09-12 09:53:47.438 +00:00 main INFO ckb_sync::types header_map.memory_limit 600MB
    2023-09-12 09:53:47.438 +00:00 GlobalRt-3 DEBUG ckb_notify Watch new_block "BlockFilter"
    2023-09-12 09:53:47.438 +00:00 GlobalRt-8 DEBUG ckb_block_filter::filter Latest built block hash 0x055a2acc2803e7945ff76f9dc9fe8d766c0b8ed0b48c87acc7f1fd6cc30c6d6d
    2023-09-12 09:53:47.438 +00:00 GlobalRt-8 DEBUG ckb_block_filter::filter Latest built block is main chain, start from 10919431
    2023-09-12 09:53:47.450 +00:00 main DEBUG sled::pagecache::snapshot no previous snapshot found
    2023-09-12 09:53:47.450 +00:00 main DEBUG sled::pagecache::iterator ordering before clearing tears: {}, max_header_stable_lsn: 0
    2023-09-12 09:53:47.450 +00:00 main DEBUG sled::pagecache::iterator in clean_tail_tears, found missing item in tail: None and we'll scan segments {} above lowest lsn 0
    2023-09-12 09:53:47.450 +00:00 main DEBUG sled::pagecache::iterator unable to load new segment: Io(Custom { kind: Other, error: "no segments remaining to iterate over" })
    2023-09-12 09:53:47.450 +00:00 main DEBUG sled::pagecache::iterator filtering out segments after detected tear at (lsn, lid) -1
    2023-09-12 09:53:47.450 +00:00 main DEBUG sled::pagecache::iterator unable to load new segment: Io(Custom { kind: Other, error: "no segments remaining to iterate over" })
    2023-09-12 09:53:47.450 +00:00 main DEBUG sled::pagecache::segment SA starting with tip 0 stable -1 free {}
    2023-09-12 09:53:47.450 +00:00 main DEBUG sled::pagecache::iobuf starting log for a totally fresh system
    2023-09-12 09:53:47.451 +00:00 main DEBUG sled::pagecache::segment segment accountant returning offset: 0 for lsn 0 on deck: {}
    2023-09-12 09:53:47.451 +00:00 main DEBUG sled::pagecache::iobuf starting IoBufs with next_lsn: 0 next_lid: 0
    2023-09-12 09:53:47.451 +00:00 main DEBUG sled::pagecache::iobuf storing lsn 0 in beginning of buffer
    2023-09-12 09:53:47.451 +00:00 main DEBUG sled::pagecache load_snapshot loading pages from 0..0
    2023-09-12 09:53:47.451 +00:00 main DEBUG sled::meta allocated pid 3 for root of new_tree [95, 95, 115, 108, 101, 100, 95, 95, 100, 101, 102, 97, 117, 108, 116]
    2023-09-12 09:53:47.451 +00:00 log flusher DEBUG sled::pagecache::iobuf advancing offset within the current segment from 0 to 96
    2023-09-12 09:53:47.453 +00:00 sled-io-2 DEBUG sled::pagecache::iobuf wrote lsns 0-95 to disk at offsets 0-95, maxed false complete_len 96
    2023-09-12 09:53:47.453 +00:00 sled-io-2 DEBUG sled::pagecache::iobuf mark_interval(0, 96)
    2023-09-12 09:53:47.453 +00:00 sled-io-2 DEBUG sled::pagecache::iobuf new highest interval: 0 - 95

Current Behavior

After `ckb migrate --force` and `ckb run`, the node prints the startup log above and then produces no further output for several minutes, as if it had failed to start.

Expected Behavior

The node starts listening on its P2P and RPC ports shortly after launch.

Environment

  • CKB version: 0.110.2 (0fa89e7 2023-09-11)
  • Chain: mainnet
  • Operating system: Linux (Docker container)
  • Arch: x64
  • Installation: Docker Hub image

Additional context/Screenshots
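
One way to see when the node actually becomes responsive during the silent window is to poll the RPC endpoint in a loop. A minimal sketch, assuming the default RPC port 8114 shown in the logs below (`local_node_info` is a standard CKB RPC method):

    # Poll the node's RPC until it answers; while the node is still starting up,
    # the port refuses connections and curl exits non-zero.
    while ! curl -s -X POST http://127.0.0.1:8114 \
        -H 'Content-Type: application/json' \
        -d '{"id": 1, "jsonrpc": "2.0", "method": "local_node_info", "params": []}' >/dev/null; do
        sleep 5
    done
    echo "RPC is up"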

@guomaoqiu guomaoqiu added the t:bug Type: This doesn't seem right. label Sep 12, 2023
@eval-exec
Collaborator

Why did it fail to start? I didn't see any ERROR level logs.

@guomaoqiu
Author

> Why did it fail to start? I didn't see any ERROR level logs.

There are no errors. I switched the log output to debug mode, and the above is all of it.

It doesn't seem to be a problem with the new version of the service either: I went back to the previous version and got the same output.
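
For reference, the debug output above corresponds to raising the logger filter in ckb.toml. A minimal sketch, assuming the standard `[logger]` section of the default config:

    # ckb.toml -- raise log verbosity (the default filter is "info")
    [logger]
    filter = "debug"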

@guomaoqiu
Author

For comparison, here is the complete service log from another server:

root@testnet-ckb-node-01:/opt/ckb-node-deploy# docker logs -f testnet-ckb-node-0 --tail=100
2023-09-12 10:07:22.802 +00:00 main INFO sentry  **Notice**: The ckb process will send stack trace to sentry on Rust panics. This is enabled by default before mainnet, which can be opted out by setting the option `dsn` to empty in the config file. The DSN is now https://[email protected]/4
2023-09-12 10:07:22.810 +00:00 main INFO ckb_bin::helper  raise_fd_limit newly-increased limit: 1048576
2023-09-12 10:07:22.893 +00:00 main INFO ckb_bin::subcommand::run  ckb version: 0.110.2 (0fa89e7 2023-09-11)
2023-09-12 10:07:22.893 +00:00 main INFO ckb_launcher  Miner is disabled, edit ckb.toml to enable it
2023-09-12 10:07:23.867 +00:00 main INFO ckb_memory_tracker::process  track current process: disable
2023-09-12 10:07:23.867 +00:00 main INFO ckb_launcher  chain genesis hash: 0x10639e0895502b5688a6be8cf69460d76541bfa4821629d86d62ba0aae3f9606
2023-09-12 10:07:23.867 +00:00 main INFO ckb_sync::types  header_map.memory_limit 600MB
2023-09-12 10:11:11.048 +00:00 GlobalRt-0 INFO ckb_network::network  Listen on address: /ip4/0.0.0.0/tcp/8115
2023-09-12 10:11:11.048 +00:00 GlobalRt-0 INFO ckb_network::network  p2p service event: ListenStarted { address: "/ip4/0.0.0.0/tcp/8115" }
2023-09-12 10:11:11.581 +00:00 GlobalRt-5 INFO ckb_light_client_protocol_server  LightClient(2).connected peer=SessionId(1)
2023-09-12 10:11:11.581 +00:00 GlobalRt-4 INFO ckb_sync::synchronizer  SyncProtocol.connected peer=SessionId(1)
2023-09-12 10:11:11.581 +00:00 GlobalRt-9 INFO ckb_filter  FilterProtocol.connected peer=SessionId(1)
2023-09-12 10:11:11.712 +00:00 GlobalRt-2 INFO ckb_sync::synchronizer  SyncProtocol.connected peer=SessionId(2)
2023-09-12 10:11:11.712 +00:00 GlobalRt-2 INFO ckb_light_client_protocol_server  LightClient(2).connected peer=SessionId(2)
2023-09-12 10:11:11.712 +00:00 GlobalRt-2 INFO ckb_filter  FilterProtocol.connected peer=SessionId(2)
2023-09-12 10:11:11.840 +00:00 main INFO ckb_rpc::server  Listen HTTP RPCServer on address 0.0.0.0:8114
2023-09-12 10:11:11.840 +00:00 main INFO parity_ws  Listening for new connections on 0.0.0.0:28114.
2023-09-12 10:11:11.840 +00:00 main INFO ckb_rpc::server  Listen WS RPCServer on address 0.0.0.0:28114
2023-09-12 10:11:11.967 +00:00 GlobalRt-0 INFO ckb_sync::synchronizer  SyncProtocol.connected peer=SessionId(3)
2023-09-12 10:11:11.967 +00:00 GlobalRt-10 INFO ckb_filter  FilterProtocol.connected peer=SessionId(3)
2023-09-12 10:11:11.967 +00:00 GlobalRt-5 INFO ckb_light_client_protocol_server  LightClient(2).connected peer=SessionId(3)
2023-09-12 10:11:12.308 +00:00 GlobalRt-1 INFO ckb_relay  RelayProtocol(2).connected peer=SessionId(1)
2023-09-12 10:11:12.308 +00:00 GlobalRt-1 INFO ckb_relay  RelayProtocol(2).connected peer=SessionId(2)
2023-09-12 10:11:12.309 +00:00 GlobalRt-1 INFO ckb_relay  RelayProtocol(2).connected peer=SessionId(3)
2023-09-12 10:11:12.316 +00:00 GlobalRt-0 INFO ckb_sync::synchronizer  SyncProtocol.connected peer=SessionId(4)
2023-09-12 10:11:12.316 +00:00 GlobalRt-10 INFO ckb_light_client_protocol_server  LightClient(2).connected peer=SessionId(4)
2023-09-12 10:11:12.316 +00:00 GlobalRt-7 INFO ckb_relay  RelayProtocol(2).connected peer=SessionId(4)
2023-09-12 10:11:12.316 +00:00 GlobalRt-2 INFO ckb_filter  FilterProtocol.connected peer=SessionId(4)
2023-09-12 10:11:12.437 +00:00 GlobalRt-9 INFO ckb_sync::relayer::block_transactions_process  relayer receive BLOCKTXN of Byte32(0xeabb6f12941be028f014e69ae365026556686fdf34523df8f300fb2a8ccabdfe), peer: SessionId(1)
2023-09-12 10:11:12.471 +00:00 GlobalRt-0 INFO ckb_relay  RelayProtocol(2).connected peer=SessionId(5)
2023-09-12 10:11:12.471 +00:00 GlobalRt-3 INFO ckb_filter  FilterProtocol.connected peer=SessionId(5)
2023-09-12 10:11:12.471 +00:00 GlobalRt-11 INFO ckb_light_client_protocol_server  LightClient(2).connected peer=SessionId(5)
2023-09-12 10:11:13.964 +00:00 GlobalRt-11 INFO ckb_sync::relayer::block_transactions_process  relayer receive BLOCKTXN of Byte32(0xe1fdfb9894a7921b88fde6a2980444ff2be8d168a3ce79d8559975f88b19840b), peer: SessionId(1)
2023-09-12 10:11:16.150 +00:00 main INFO sentry  **Notice**: The ckb process will send stack trace to sentry on Rust panics. This is enabled by default before mainnet, which can be opted out by setting the option `dsn` to empty in the config file. The DSN is now https://[email protected]/4
2023-09-12 10:11:16.151 +00:00 main INFO ckb_bin::helper  raise_fd_limit newly-increased limit: 1048576
2023-09-12 10:11:16.234 +00:00 main INFO ckb_bin::subcommand::run  ckb version: 0.110.2 (0fa89e7 2023-09-11)
2023-09-12 10:11:16.234 +00:00 main INFO ckb_launcher  Miner is disabled, edit ckb.toml to enable it
2023-09-12 10:11:16.238 +00:00 main DEBUG ckb_db_migration  current database version [20230206163640]
2023-09-12 10:11:16.238 +00:00 main DEBUG ckb_db_migration  latest  database version [20230206163640]
2023-09-12 10:11:16.417 +00:00 main INFO ckb_memory_tracker::process  track current process: disable
2023-09-12 10:11:16.417 +00:00 main INFO ckb_launcher  chain genesis hash: 0x10639e0895502b5688a6be8cf69460d76541bfa4821629d86d62ba0aae3f9606
2023-09-12 10:11:16.417 +00:00 main INFO ckb_sync::types  header_map.memory_limit 600MB
2023-09-12 10:11:16.418 +00:00 GlobalRt-7 DEBUG ckb_notify  Watch new_block "BlockFilter"
2023-09-12 10:11:16.418 +00:00 GlobalRt-8 DEBUG ckb_block_filter::filter  Latest built block hash 0xc4578603e9c255edd8938bea879d652da1589d121fd724ef00e29e953de98997
2023-09-12 10:11:16.418 +00:00 GlobalRt-8 DEBUG ckb_block_filter::filter  Latest built block is main chain, start from 10603351
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::snapshot  no previous snapshot found
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::iterator  ordering before clearing tears: {}, max_header_stable_lsn: 0
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::iterator  in clean_tail_tears, found missing item in tail: None and we'll scan segments {} above lowest lsn 0
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::iterator  unable to load new segment: Io(Custom { kind: Other, error: "no segments remaining to iterate over" })
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::iterator  filtering out segments after detected tear at (lsn, lid) -1
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::iterator  unable to load new segment: Io(Custom { kind: Other, error: "no segments remaining to iterate over" })
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::segment  SA starting with tip 0 stable -1 free {}
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::iobuf  starting log for a totally fresh system
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::segment  segment accountant returning offset: 0 for lsn 0 on deck: {}
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::iobuf  starting IoBufs with next_lsn: 0 next_lid: 0
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache::iobuf  storing lsn 0 in beginning of buffer
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::pagecache  load_snapshot loading pages from 0..0
2023-09-12 10:11:16.418 +00:00 main DEBUG sled::meta  allocated pid 3 for root of new_tree [95, 95, 115, 108, 101, 100, 95, 95, 100, 101, 102, 97, 117, 108, 116]
2023-09-12 10:11:16.418 +00:00 log flusher DEBUG sled::pagecache::iobuf  advancing offset within the current segment from 0 to 96
2023-09-12 10:11:16.420 +00:00 sled-io-2 DEBUG sled::pagecache::iobuf  wrote lsns 0-95 to disk at offsets 0-95, maxed false complete_len 96
2023-09-12 10:11:16.420 +00:00 sled-io-2 DEBUG sled::pagecache::iobuf  mark_interval(0, 96)
2023-09-12 10:11:16.420 +00:00 sled-io-2 DEBUG sled::pagecache::iobuf  new highest interval: 0 - 95

@guomaoqiu
Author

The log above turns out to be normal output. After a long wait the node came back to normal on its own; startup just takes a very long time (in this log, about 3 minutes 47 seconds pass between the 10:07:23 launcher lines and the 10:11:11 "Listen on address" line).

@eval-exec
Collaborator

eval-exec commented Sep 12, 2023

What is the timestamp at which ckb_network::network starts listening? You can find it in the log.
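
For example, both the startup and listen timestamps can be pulled out of the container log with something like this (the container name is taken from the log above; the grep pattern is just an illustration):

    docker logs testnet-ckb-node-0 2>&1 | grep -E 'ckb version:|Listen on address'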

@doitian doitian added s:information-needed Status: Information needed to locate the issue p:should-have Priority: important but not necessary for delivery in the current delivery timebox labels Sep 12, 2023
@eval-exec
Collaborator

eval-exec commented Sep 13, 2023

@guomaoqiu Hello, I want to reproduce this. Is the slow node running inside a Docker container?
Could you please share the network mode of your Docker container? Was it started with --network host, or does it use -p port forwarding? Additionally, the output of docker inspect $CONTAINER_ID may be helpful too.
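
A minimal way to check the network mode, using the container name from the log above:

    # Prints "host" when started with --network host;
    # "default" or "bridge" when -p port mapping is in use.
    docker inspect --format '{{ .HostConfig.NetworkMode }}' testnet-ckb-node-0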

@doitian
Member

doitian commented Sep 13, 2023

Related PR: #4000, available in v0.111.0 (in preview now)

@eval-exec eval-exec changed the title Failed to start upgrade node service Slow startup issue with v0.110.2 CKB node Sep 13, 2023

This issue is stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.

@github-actions github-actions bot added the stale To be closed due to a lack of activity label Jul 28, 2024

github-actions bot commented Aug 3, 2024

This issue was closed because it has been stalled for 5 days with no activity.

@github-actions github-actions bot closed this as not planned Aug 3, 2024