Commit: remove log output
s-m-e committed Feb 6, 2022
1 parent aa818bc commit 8dbb47f
Showing 1 changed file with 3 additions and 106 deletions: docs/source/gettingstarted.rst

Cluster Management via CLI
--------------------------

.. code:: bash
(env) user@computer:~> scherbelberg create
cluster INFO 2022-01-28 14:24:33,141: Creating cloud client ...
cluster INFO 2022-01-28 14:24:33,142: Creating ssl certificates ...
cluster INFO 2022-01-28 14:24:35,778: Creating ssh key ...
cluster INFO 2022-01-28 14:24:37,786: Uploading ssh key ...
cluster INFO 2022-01-28 14:24:38,098: Getting handle on ssh key ...
cluster INFO 2022-01-28 14:24:38,153: Creating network ...
cluster INFO 2022-01-28 14:24:38,328: Getting handle on network ...
cluster INFO 2022-01-28 14:24:38,408: Creating firewall ...
cluster INFO 2022-01-28 14:24:38,508: Getting handle on firewall ...
cluster INFO 2022-01-28 14:24:38,608: Creating nodes ...
cluster INFO 2022-01-28 14:24:38,608: Creating node cluster-node-scheduler ...
cluster INFO 2022-01-28 14:24:40,560: Waiting for node cluster-node-scheduler to become available ...
cluster INFO 2022-01-28 14:24:40,739: Creating node cluster-node-worker000 ...
cluster INFO 2022-01-28 14:24:41,709: Waiting for node cluster-node-worker000 to become available ...
cluster INFO 2022-01-28 14:24:48,465: Attaching network to node cluster-node-scheduler ...
cluster INFO 2022-01-28 14:24:49,034: Bootstrapping node cluster-node-scheduler ...
cluster INFO 2022-01-28 14:24:49,034: [scheduler] [root] Waiting for SSH ...
cluster INFO 2022-01-28 14:24:49,184: Attaching network to node cluster-node-worker000 ...
cluster INFO 2022-01-28 14:24:49,864: Bootstrapping node cluster-node-worker000 ...
cluster INFO 2022-01-28 14:24:49,865: [worker000] [root] Waiting for SSH ...
cluster INFO 2022-01-28 14:24:54,046: [scheduler] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:24:54,882: [worker000] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:24:59,056: [scheduler] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:24:59,895: [worker000] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:25:01,064: [scheduler] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:25:01,905: [worker000] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:25:03,074: [scheduler] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:25:05,082: [scheduler] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:25:05,920: [worker000] [root] SSH up.
cluster INFO 2022-01-28 14:25:05,920: [worker000] Copying root files to node ...
cluster INFO 2022-01-28 14:25:06,927: [worker000] Running first bootstrap script ...
cluster INFO 2022-01-28 14:25:08,091: [scheduler] [root] SSH up.
cluster INFO 2022-01-28 14:25:08,091: [scheduler] Copying root files to node ...
cluster INFO 2022-01-28 14:25:10,098: [scheduler] Running first bootstrap script ...
cluster INFO 2022-01-28 14:25:49,004: [worker000] Rebooting ...
cluster INFO 2022-01-28 14:25:49,317: [worker000] [root] Waiting for SSH ...
cluster INFO 2022-01-28 14:25:53,328: [scheduler] Rebooting ...
cluster INFO 2022-01-28 14:25:53,670: [scheduler] [root] Waiting for SSH ...
cluster INFO 2022-01-28 14:25:55,431: [worker000] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:25:59,784: [scheduler] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:26:01,447: [worker000] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:26:03,456: [worker000] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:26:05,465: [worker000] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:26:05,801: [scheduler] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:26:06,473: [worker000] [root] SSH up.
cluster INFO 2022-01-28 14:26:06,473: [worker000] Running second bootstrap script ...
cluster INFO 2022-01-28 14:26:07,808: [scheduler] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:26:09,815: [scheduler] [root] Continuing to wait for SSH ...
cluster INFO 2022-01-28 14:26:11,824: [scheduler] [root] SSH up.
cluster INFO 2022-01-28 14:26:11,824: [scheduler] Running second bootstrap script ...
cluster INFO 2022-01-28 14:27:00,573: [worker000] [clusteruser] Waiting for SSH ...
cluster INFO 2022-01-28 14:27:01,581: [worker000] [clusteruser] SSH up.
cluster INFO 2022-01-28 14:27:01,581: [worker000] Copying user files to node ...
cluster INFO 2022-01-28 14:27:03,590: [worker000] Running third (user) bootstrap script ...
cluster INFO 2022-01-28 14:27:06,883: [scheduler] [clusteruser] Waiting for SSH ...
cluster INFO 2022-01-28 14:27:07,891: [scheduler] [clusteruser] SSH up.
cluster INFO 2022-01-28 14:27:07,891: [scheduler] Copying user files to node ...
cluster INFO 2022-01-28 14:27:09,900: [scheduler] Running third (user) bootstrap script ...
cluster INFO 2022-01-28 14:29:11,100: [scheduler] Bootstrapping done.
cluster INFO 2022-01-28 14:29:11,101: [scheduler] [clusteruser] Waiting for SSH ...
cluster INFO 2022-01-28 14:29:11,812: [worker000] Bootstrapping done.
cluster INFO 2022-01-28 14:29:12,107: [scheduler] [clusteruser] SSH up.
cluster INFO 2022-01-28 14:29:12,108: [scheduler] Starting dask scheduler ...
cluster INFO 2022-01-28 14:29:13,114: [scheduler] Dask scheduler started.
cluster INFO 2022-01-28 14:29:13,115: [worker000] [clusteruser] Waiting for SSH ...
cluster INFO 2022-01-28 14:29:14,122: [worker000] [clusteruser] SSH up.
cluster INFO 2022-01-28 14:29:14,123: [worker000] Starting dask worker ...
cluster INFO 2022-01-28 14:29:15,130: [worker000] Dask worker started.
cluster INFO 2022-01-28 14:29:15,130: Successfully created new cluster.

.. note::

Creating a cluster requires around 3 to 10 minutes. If you want to get a better idea of what is going on, you can adjust the `log level`_ via the ``-l`` flag, for instance setting it to the ``INFO`` level: ``scherbelberg create -l 20``.

.. _log level: https://docs.python.org/3/library/logging.html#levels
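
For reference, the numeric value passed to ``-l`` follows Python's standard logging levels, as the link above describes. The snippet below simply prints those constants; it uses only the standard library and assumes nothing specific about *scherbelberg*:

.. code:: python

from logging import CRITICAL, DEBUG, ERROR, INFO, WARNING

# Standard numeric log levels used by logging-based tools:
# DEBUG=10, INFO=20, WARNING=30, ERROR=40, CRITICAL=50
for name, level in [
    ("DEBUG", DEBUG),
    ("INFO", INFO),
    ("WARNING", WARNING),
    ("ERROR", ERROR),
    ("CRITICAL", CRITICAL),
]:
    print(name, level)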

Once the cluster has been created, it can be inspected at any time using the ``scherbelberg ls`` command:

.. code:: bash
(env) user@computer:~> scherbelberg ls
cluster INFO 2022-01-28 14:34:53,789: Creating cloud client ...
cluster INFO 2022-01-28 14:34:53,790: Getting handle on scheduler ...
cluster INFO 2022-01-28 14:34:54,099: Getting handles on workers ...
cluster INFO 2022-01-28 14:34:54,273: Getting handle on firewall ...
cluster INFO 2022-01-28 14:34:54,346: Getting handle on network ...
cluster INFO 2022-01-28 14:34:54,418: Successfully attached to existing cluster.
<Cluster prefix="cluster" alive=True workers=1 ipc=9753 dash=9756 nanny=9759>
<node name=cluster-node-worker000 public=188.34.155.13 private=10.0.1.100>
<node name=cluster-node-scheduler public=78.47.76.87 private=10.0.1.200>

Sometimes, it is necessary to log into worker nodes or the scheduler. *scherbelberg* offers an ``ssh`` subcommand for this purpose:
.. code:: bash
(env) user@computer:~> scherbelberg ssh worker000
cluster INFO 2022-01-28 14:35:49,774: Creating cloud client ...
cluster INFO 2022-01-28 14:35:49,775: Getting handle on scheduler ...
cluster INFO 2022-01-28 14:35:49,979: Getting handles on workers ...
cluster INFO 2022-01-28 14:35:50,157: Getting handle on firewall ...
cluster INFO 2022-01-28 14:35:50,235: Getting handle on network ...
cluster INFO 2022-01-28 14:35:50,319: Successfully attached to existing cluster.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
(clusterenv) clusteruser@cluster-node-worker000:~$ exit
logout
(env) user@computer:~>

The scheduler node is accessible as follows:
.. code:: bash
(env) user@computer:~> scherbelberg ssh scheduler
cluster INFO 2022-01-28 14:36:23,019: Creating cloud client ...
cluster INFO 2022-01-28 14:36:23,019: Getting handle on scheduler ...
cluster INFO 2022-01-28 14:36:23,243: Getting handles on workers ...
cluster INFO 2022-01-28 14:36:23,477: Getting handle on firewall ...
cluster INFO 2022-01-28 14:36:23,543: Getting handle on network ...
cluster INFO 2022-01-28 14:36:23,618: Successfully attached to existing cluster.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
(clusterenv) clusteruser@cluster-node-scheduler:~$ exit
logout
(env) user@computer:~>

Once a cluster is no longer required, it can be destroyed using the ``scherbelberg destroy`` command:

.. code:: bash
(env) user@computer:~> scherbelberg destroy
cluster INFO 2022-01-28 14:37:17,612: Creating cloud client ...
cluster INFO 2022-01-28 14:37:17,612: Getting handle on scheduler ...
cluster INFO 2022-01-28 14:37:18,377: Getting handles on workers ...
cluster INFO 2022-01-28 14:37:18,564: Getting handle on firewall ...
cluster INFO 2022-01-28 14:37:18,638: Getting handle on network ...
cluster INFO 2022-01-28 14:37:18,706: Successfully attached to existing cluster.
cluster INFO 2022-01-28 14:37:18,868: Deleting cluster-node-scheduler ...
cluster INFO 2022-01-28 14:37:19,221: Deleting cluster-node-worker000 ...
cluster INFO 2022-01-28 14:37:20,334: Deleting cluster-network ...
cluster INFO 2022-01-28 14:37:20,647: Deleting cluster-key ...
cluster INFO 2022-01-28 14:37:20,792: Deleting cluster-firewall ...
cluster INFO 2022-01-28 14:37:20,913: Cluster cluster destroyed.
(env) user@computer:~>

Under certain circumstances, the creation or destruction of a cluster may fail or leave behind an unclean state, for instance due to connectivity issues. In such cases, it may be necessary to "nuke" the remains of the cluster before it can be recreated:

.. code:: bash
(env) user@computer:~> scherbelberg nuke
cluster INFO 2022-01-28 15:43:19,549: Creating cloud client ...
cluster INFO 2022-01-28 15:43:20,285: Cluster cluster nuked.
(env) user@computer:~>

Cluster Management via API
--------------------------
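
Clusters can also be managed programmatically from Python. The following is a minimal, hypothetical sketch of attaching to the cluster created above: the ``Cluster`` class, its ``from_existing`` and ``get_client`` coroutines, and the ``prefix`` parameter are assumptions about the API, not confirmed by this document:

.. code:: python

from asyncio import run

from scherbelberg import Cluster  # assumed import path


async def main():
    # Attach to the cluster created earlier via `scherbelberg create`;
    # "cluster" is the prefix used throughout this guide (assumed parameter).
    cluster = await Cluster.from_existing(prefix="cluster")

    # Assumed to return a regular dask.distributed.Client object.
    client = await cluster.get_client()

    # Run a trivial computation on the cluster's workers.
    future = client.submit(lambda x: x ** 2, 4)
    print(future.result())  # -> 16


run(main())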
