CN113138717A - Node deployment method, device and storage medium - Google Patents
Node deployment method, device and storage medium
- Publication number
- Publication number: CN113138717A; Application number: CN202110384901.5A
- Authority
- CN
- China
- Prior art keywords
- node
- storage
- deployment
- deleted
- nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/062—Securing storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0631—Configuration or reconfiguration of storage systems by allocating resources to storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Storage Device Security (AREA)
Abstract
Embodiments of the present application provide a node deployment method, a device, and a storage medium. In some embodiments, any storage node in a distributed storage system to be deployed receives a deployment instruction sent by a management terminal and verifies whether its own environmental resources meet the deployment parameter requirements carried in the instruction. If the requirements are met, the node generates a resource view from its own environmental resources and sends the resource view to the management terminal. Using the resource view, the management terminal can isolate the actual physical resources, shield hardware with defective functions, and thereby improve the data security of the distributed storage system.
Description
Technical Field
The present application relates to the field of distributed storage systems, and in particular, to a node deployment method, device, and storage medium.
Background
With the development of the Internet, storage systems have grown ever larger, and a centralized storage architecture composed of one or more large hosts can no longer meet the performance requirements of mass data storage. In addition, as PC performance has improved in recent years, using PCs as storage hosts in a distributed storage architecture provides better concurrency, reduces machine-room deployment costs, and eliminates the single point of failure of centralized storage, making it the choice of more and more enterprises.
At present, however, the data security of distributed storage systems is low.
Disclosure of Invention
Aspects of the present application provide a node deployment method, device, and storage medium, which isolate actual physical resources using a resource view, prevent a hardware defect from affecting a storage service, and improve data security.
The embodiment of the application provides a node deployment method, which is applicable to a first storage node and comprises the following steps:
receiving a deployment instruction sent by a user through a management terminal, wherein the deployment instruction carries a deployment parameter requirement;
verifying whether the environmental resources of the first storage node meet the deployment parameter requirements, wherein the environmental resources comprise state information of actual physical resources in the first storage node;
if so, generating a resource view according to the environment resources of the first storage node, and setting the storage service on the first storage node to an externally available state;
and sending the resource view to a management terminal so that the management terminal can perform state management on the actual physical resources in the first storage node according to the resource view.
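To make these steps concrete, the following is a minimal Python sketch of how a first storage node might handle a deployment instruction; the class names, fields, and helper methods (DeployInstruction, EnvironmentResources, _verify, _generate_resource_view) are illustrative assumptions and not part of the claimed method.

```python
from dataclasses import dataclass, field

@dataclass
class DeployInstruction:
    # Deployment parameter requirements carried by the instruction
    # (illustrative fields; the patent does not fix a concrete schema).
    required_min_memory_gb: int
    required_min_disks: int
    existing_node_ips: list = field(default_factory=list)

@dataclass
class EnvironmentResources:
    # State information of the node's actual physical resources.
    memory_gb: int
    disks: list   # e.g. ["/dev/sdb", "/dev/sdc"]
    nics: list    # e.g. ["eth0", "eth1"]

class FirstStorageNode:
    def __init__(self, env: EnvironmentResources):
        self.env = env
        self.storage_service_available = False

    def handle_deploy_instruction(self, instr: DeployInstruction):
        """Receive a deployment instruction, verify the environment,
        generate a resource view, enable the storage service, and
        return the view so it can be reported to the management terminal."""
        if not self._verify(instr):
            # Environment does not meet the deployment parameter requirements.
            self.storage_service_available = False
            return None
        resource_view = self._generate_resource_view()
        self.storage_service_available = True   # externally available state
        return resource_view                    # reported to the management terminal

    def _verify(self, instr: DeployInstruction) -> bool:
        return (self.env.memory_gb >= instr.required_min_memory_gb
                and len(self.env.disks) >= instr.required_min_disks)

    def _generate_resource_view(self) -> dict:
        # Abstract image of the physical resources assigned to the storage service.
        return {"nics": self.env.nics, "disks": self.env.disks,
                "memory_gb": self.env.memory_gb}
```

In such a sketch, the management terminal would invoke handle_deploy_instruction over the management network and keep the returned view for state management of the node's actual physical resources.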
Preferably, the deployment parameter requirement includes identification information of a second storage node that has been deployed and whose deployment order is before the first storage node, and the method further includes:
under the condition that the environmental resources of the first storage node meet the requirement of the deployment parameters, sending a communication establishment request to a second storage node to request to establish communication connection with the second storage node;
and establishing the communication connection between the first storage node and the second storage node according to the result of allowing the communication connection to be established returned by the second storage node.
Preferably, after the creation of the distributed storage system is completed, the distributed storage system includes a plurality of storage nodes that have already been deployed, and the method further includes:
obtaining sequence tags of a plurality of storage nodes, wherein the plurality of storage nodes comprise a first storage node;
determining whether the sequence label of the first storage node meets a set condition or not according to the sequence labels of the plurality of storage nodes;
and if so, taking the first storage node as a main node.
Preferably, before obtaining the sequence tags of the plurality of storage nodes, the method further comprises:
counting the number of storage nodes with storage service in an externally available state;
and if the number is larger than a set number threshold, acquiring sequence tags of a plurality of storage nodes.
Preferably, after the first storage node is taken as the master node, the method further comprises:
receiving identification information of a node to be deleted issued by a management terminal;
judging whether the node to be deleted is in an online state or not according to the identification information of the node to be deleted;
and if not, the main node performs the operation of deleting the node to be deleted.
Preferably, the method further comprises:
when the node to be deleted is in an online state, judging whether the node to be deleted provides storage service for the outside;
and if so, sending a data transfer instruction to the node to be deleted so that the node to be deleted stores the data which is not stored into other storage nodes, wherein the other storage nodes are storage nodes which are in an externally available state except the node to be deleted.
Preferably, the master node performing the operation of deleting the node to be deleted includes:
and sending a storage service data migration instruction to the node to be deleted, so that the node to be deleted migrates the storage service data in the node to be deleted to other storage nodes according to the storage service data migration instruction, wherein the other storage nodes are storage nodes which are in an externally available state except the node to be deleted.
Preferably, after establishing the communication connection between the first storage node and the second storage node, the method further comprises:
receiving the resource view of the node to be deleted sent by the node to be deleted based on the communication connection, and storing the resource view of the node to be deleted locally;
after the master node performs the operation of deleting the node to be deleted, the method further comprises the following steps:
and deleting the resource view of the node to be deleted.
An embodiment of the present application further provides a node deployment apparatus, including: a receiving module, a verification module, and a sending module. The receiving module is configured to receive a deployment instruction sent by a user through a management terminal, where the deployment instruction carries a deployment parameter requirement. The verification module is configured to verify whether the environmental resources of the first storage node meet the deployment parameter requirements, where the environmental resources include state information of the actual physical resources in the first storage node, and, if so, to generate a resource view according to the environmental resources of the first storage node and set the storage service on the first storage node to an externally available state. The sending module is configured to send the resource view to the management terminal, so that the management terminal performs state management on the actual physical resources in the first storage node according to the resource view.
An embodiment of the present application further provides a node deployment system, including: the management terminal and a plurality of storage nodes which are in communication connection with the management terminal;
the management terminal is used for responding to the input operation of a user in the node information input item and acquiring the information of a plurality of storage nodes needing to be serially deployed for creating the distributed storage system; responding to a storage node deployment operation sent by a user, and sequentially sending deployment instructions to the plurality of storage nodes, wherein the deployment instructions carry deployment parameter requirements; receiving resource views of a plurality of storage nodes, and performing state management on actual physical resources of the plurality of storage nodes according to the resource views;
each storage node is configured to receive the deployment instruction sent by the management terminal; verify whether its own environmental resources meet the deployment parameter requirements, where the environmental resources include state information of the actual physical resources in each storage node; and, if so, generate a resource view according to its own environmental resources, set its own storage service to an externally available state, and report the resource view to the management terminal.
An embodiment of the present application further provides a node deployment device, including: a memory and a processor;
the memory for storing a computer program;
the processor is configured to execute the computer program, so as to implement each step in the above node deployment method.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which, when executed by one or more processors, causes the one or more processors to perform the steps of the node deployment method described above.
In some embodiments of the present application, any storage node in the distributed storage system to be deployed receives a deployment instruction sent by the management terminal and verifies whether its own environmental resources meet the deployment parameter requirements; once they do, it generates a resource view according to those environmental resources and sends the view to the management terminal. The management terminal can then isolate the actual physical resources according to the resource view, shield hardware with defective functions, and improve the data security of the distributed storage system.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic structural diagram of a distributed storage system provided in an exemplary embodiment of the present application;
fig. 2a is a schematic flowchart of a node deployment method provided from a management terminal perspective in an exemplary embodiment of the present application;
FIG. 2b is a flowchart illustrating a node deployment method from the perspective of a first storage node according to an exemplary embodiment of the present application;
fig. 3 is a schematic flowchart of a node deletion method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a management terminal according to an exemplary embodiment of the present application;
fig. 5 is a schematic structural diagram of a node deployment device according to an exemplary embodiment of the present application;
fig. 6 is a schematic structural diagram of a node deployment device according to an exemplary embodiment of the present application;
fig. 7 is a schematic structural diagram of a node deployment apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
A distributed storage system stores data across a plurality of independent devices. A traditional network storage system uses a centralized storage server to hold all data; that storage server becomes the bottleneck of system performance as well as the focus of reliability and security concerns, and it cannot meet the needs of large-scale storage applications. A distributed network storage system adopts a scalable system architecture, uses multiple storage servers to share the storage load, and uses a location server to locate stored information, which not only improves the reliability, availability, and access efficiency of the system but also makes it easy to expand. "Distributed" here refers to a system architecture composed of a set of computer nodes that communicate over a network and work in concert to accomplish a common task.
Unlike centralized storage, which relies on a fixed large host, the hardware environment of distributed storage is more complex: it contains heterogeneous hosts, and the storage resources on each host differ. If specific hardware resources are statically allocated to services in such a scenario, resources are easily wasted; if capacity must be expanded or reduced as services change, services may be interrupted; and because some services write data frequently while others perform almost no writes, a particular hard disk can be damaged by excessive write operations. To solve these problems, the storage resources need to be managed in a unified way and abstracted into a storage pool, from which resources are allocated according to the running tasks and services. This greatly improves resource utilization, since the storage resources provided to users are actually dispersed across the storage devices; it not only prevents a disk from being damaged by the frequent writes of a single service but also greatly improves the IOPS of data access.
Distributed storage has both advantages and disadvantages for operation and maintenance. Its advantage is that storage resources can be added or removed conveniently. In traditional centralized storage, the dedicated storage host is expensive and capacity is usually expanded by scaling up, that is, by adding hard disks and memory, so expansion is often limited by the number of hard disk slots in the server. Distributed storage provides storage services with inexpensive PCs or small servers and usually expands by scaling out, so expansion is not limited in this way. On the other hand, centralized storage is usually based on a fixed hardware platform with few devices, so deployment is simple, whereas distributed storage is usually based on a heterogeneous platform with many hosts, so deployment takes a long time and is difficult.
Providing an efficient and easy-to-use way to deploy distributed storage is therefore an urgent technical problem to be solved by those skilled in the art.
In some embodiments of the present application, any storage node in the distributed storage system to be deployed receives a deployment instruction sent by the management terminal and verifies whether its own environmental resources meet the deployment parameter requirements; once they do, it generates a resource view according to its own environmental resources and sends the view to the management terminal, and the management terminal can isolate the actual physical resources according to the resource view, shield hardware with defective functions, and improve the data security of the distributed storage system.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a distributed storage system 100 according to an exemplary embodiment of the present application. As shown in fig. 1, the distributed storage system 100 includes: a storage management centralized control platform 10a and at least one storage node 10 b. In addition, the distributed storage system 100 may also provide other services according to actual needs, and may further include a computing node 10c, an arbitration node 10d, and the like.
As shown in fig. 1, the current storage cluster is composed of three storage nodes 10b that jointly provide storage services to the outside. Because the storage nodes 10b need to handle data balancing and reconstruction, there is a large data flow between them. To prevent excessive data traffic from blocking external control signals, the communication network of the storage cluster is separated: external control signals (including deployment instructions) travel over the management network, i.e., the control flow in the figure, while data balancing and internal data access within the cluster travel over the storage network, i.e., the data flow in the figure. Obviously, the embodiment of the present application is not limited to three storage nodes 10b; there may be a single storage node 10b or any other number of storage nodes.
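As an illustration of this network separation, the hypothetical configuration below gives each storage node one address on the management network for control flow and one on the storage network for data flow; the addresses and the helper function are assumptions and are not taken from the patent.

```python
# Hypothetical per-node network configuration separating control flow from data flow.
CLUSTER_NETWORK_CONFIG = {
    "node-1": {"management_ip": "10.0.0.11", "storage_ip": "192.168.10.11"},
    "node-2": {"management_ip": "10.0.0.12", "storage_ip": "192.168.10.12"},
    "node-3": {"management_ip": "10.0.0.13", "storage_ip": "192.168.10.13"},
}

def endpoint_for(node: str, traffic: str) -> str:
    """Route deployment instructions over the management network and
    data balancing / internal data access over the storage network."""
    key = "management_ip" if traffic == "control" else "storage_ip"
    return CLUSTER_NETWORK_CONFIG[node][key]

print(endpoint_for("node-1", "control"))  # -> 10.0.0.11 (management network)
print(endpoint_for("node-1", "data"))     # -> 192.168.10.11 (storage network)
```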
As shown in fig. 1, the storage management centralized control platform 10a in the embodiment of the present application is a collection of software, such as a database, message middleware, and cloud resource management. This software runs in containers, and the containers may run on any node in the cluster, so the storage management centralized control platform 10a may be deployed on a storage node in the system or on a non-storage node, such as the compute node 10c or the arbitration node 10d. The storage cluster in the embodiment of the application does not set up an independent monitoring node; instead, the storage management centralized control platform 10a sends the deployment instruction to the storage nodes 10b directly, which prevents damage to a monitoring node from affecting the normal operation of the system and improves system stability.
As shown in fig. 1, the storage node 10b, which is an active electronic device connected to a network, is capable of transmitting, receiving or forwarding information through a communication channel, and has one or more physical hard disks thereon. The storage node 10b may be a server, a workstation, or the like, but is not limited thereto. The storage node 10b in the embodiment of the present application may include a plurality of physical hard disks. Physical hard disks include, but are not limited to: solid state drives, mechanical drives, hybrid drives, and the like. A storage cluster refers to a system consisting of various storage devices that store programs and data, control components, and devices (hardware) and algorithms (software) that manage information scheduling. The storage system may centrally manage the storage resources (e.g., physical hard disks) of the storage node 10 b.
The storage cluster in the embodiment of the application is divided into four components according to software functions, wherein the four components comprise a deployment component, a view management component, a service management component and a cluster management component.
In the foregoing embodiment, the deployment component aims to provide a distributed storage scheme supporting dynamic lateral expansion from a single node to multiple nodes, and checks, deploys, expands and contracts the node environment according to an input deployment instruction.
In the above embodiment, the view management component divides and stores the available resources according to the hardware information and the deployment instruction. The resource view may be specified by the user for each node when deploying or expanding capacity, or a rule template may be provided to which each deployed node automatically adapts. The resource view is used to manage the actual physical resources assigned to the storage service, including the network card resources, CPU resources, cache resources, and storage resources used by the storage service. When several services need to use system resources, management of the resource view can shield hardware resources that are not needed by the current storage service, or resources that would affect storage performance. It can also shield hardware with defective functions, preventing defective hardware resources from affecting data security. The resource view provides an abstract image of the resources: a resource accessed by the storage service does not necessarily correspond to a single physical resource, because multiple physical resources can be aggregated into one abstract resource, and a single physical resource can be split into multiple abstract resources.
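One possible shape of such a resource view is sketched below in Python; the field names mirror the resource types listed above (network card, CPU, cache, storage), and the aggregate and split helpers only illustrate how abstract resources may be built from physical ones. All names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AbstractResource:
    # An abstract resource exposed to the storage service; it may be backed by
    # one physical resource, a part of one, or an aggregation of several.
    name: str
    backing_physical: List[str]
    capacity_gb: float = 0.0

@dataclass
class ResourceView:
    node_ip: str
    position_in_cluster: int
    nic: List[AbstractResource] = field(default_factory=list)
    cpu: List[AbstractResource] = field(default_factory=list)
    cache: List[AbstractResource] = field(default_factory=list)
    storage: List[AbstractResource] = field(default_factory=list)

def aggregate(name: str, disks: List[str], size_each_gb: float) -> AbstractResource:
    """Aggregate several physical disks into one abstract storage resource."""
    return AbstractResource(name, backing_physical=list(disks),
                            capacity_gb=size_each_gb * len(disks))

def split(disk: str, size_gb: float, parts: int) -> List[AbstractResource]:
    """Split one physical disk into several abstract resources."""
    return [AbstractResource(f"{disk}-part{i}", [disk], size_gb / parts)
            for i in range(parts)]

view = ResourceView(node_ip="10.0.0.11", position_in_cluster=1,
                    storage=[aggregate("pool-0", ["/dev/sdb", "/dev/sdc"], 4000.0)])
print(view.storage[0].capacity_gb)  # -> 8000.0
```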
In the above embodiment, the service management component supports starting, stopping, and anomaly detection of the storage service and related services. The start and exit strategy for storage-related functions covers the environment configuration on which the storage service depends, starting the storage service, stopping the storage service, and clearing the environment configuration on which the storage service depends. The service management component supports normal service exit; service exit has two modes, SIGKILL and SIGTERM: SIGKILL kills the service directly, whereas SIGTERM first finishes writing cached data to disk and then exits the service. The service management component supports configuration of a restart policy and can decide whether to restart when a service exits abnormally. It also supports monitoring the running state of a service, including whether the service is online or offline and the reason it went offline, and records the reason for every exit of the storage service. The resource view is further used to provide configuration information for the storage nodes, including the position of each node in the cluster and the node IP. The present application supports dynamic revision of node configuration, such as revising a cluster node IP.
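The difference between the two exit modes can be illustrated with the minimal sketch below, which assumes a local storage_service object with flush and stop methods; it is not the component's actual implementation. Note that SIGKILL cannot be caught by a process, which is exactly why it skips the data flush.

```python
import signal
import sys

class StorageService:
    """Stand-in for the storage service process (illustrative only)."""
    def flush_cached_data(self):
        print("writing cached data to disk before exit")
    def stop(self):
        print("storage service stopped")
        sys.exit(0)

service = StorageService()

def handle_sigterm(signum, frame):
    # SIGTERM: finish writing cached data to disk, then exit cleanly.
    service.flush_cached_data()
    service.stop()

signal.signal(signal.SIGTERM, handle_sigterm)
# SIGKILL (`kill -9 <pid>`) cannot be handled: the process is terminated
# immediately, so the cached data is never flushed.
```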
In the above embodiment, the cluster management component supports selection of a management node (i.e., a master node) and management of the nodes in the cluster. Its functions are to ensure the consistency of cluster information, elect the management node, and implement dynamic capacity expansion and reduction. Cluster management uses a distributed application coordination service that provides a multi-level namespace; within the whole cluster the coordination service has one master node and several slave nodes, where the master node can perform write operations on the namespace and the slave nodes can only read. If a service accessing any node needs to write data, the write operation is received by the master node, which sends a request to each slave node, and the final commit of the data is decided according to the feedback from the slave nodes. Cluster management supports election of the management node: when the nodes start, each node generates a sequence tag in a designated namespace of the cluster, and when more than half of the nodes have started, the cluster judges the node with the smallest tag value to be the management node; the management node can be any node in the cluster. If the management node goes offline, the cluster again judges the node with the smallest tag value in the designated namespace to be the management node. Dynamic capacity expansion and reduction of nodes are realized by dynamically adding and deleting nodes in the cluster; the cluster only contains information about the nodes that have currently joined it and does not contain information about nodes yet to be deployed. To add a new node to the cluster, the new node needs to be informed of the existing node information in the cluster, and the new node establishes connections with those nodes and joins the cluster. If the cluster needs to be scaled in, the node to be deleted also needs to notify the cluster service.
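The election rule described here, electing only once more than half of the nodes have started and picking the node with the smallest sequence tag, can be sketched as follows; the plain dictionaries are an assumption, and a real deployment would usually obtain the sequence tags from the distributed coordination service rather than local data structures.

```python
from typing import Dict, Optional

def elect_management_node(sequence_tags: Dict[str, int],
                          online_nodes: set,
                          total_nodes: int) -> Optional[str]:
    """Return the id of the management (master) node, or None if fewer than
    half of the cluster's nodes have started."""
    if len(online_nodes) * 2 <= total_nodes:
        return None  # not yet more than half of the nodes started
    # Among the online nodes, the one with the smallest sequence tag wins.
    candidates = {n: tag for n, tag in sequence_tags.items() if n in online_nodes}
    return min(candidates, key=candidates.get)

# Example: node-2 holds the smallest sequence tag, so it becomes the management node.
tags = {"node-1": 3, "node-2": 1, "node-3": 2}
print(elect_management_node(tags, online_nodes={"node-1", "node-2", "node-3"},
                            total_nodes=3))  # -> node-2
```

If the elected management node later goes offline, rerunning the same function over the remaining online nodes reproduces the re-election behaviour described above.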
In addition, the distributed storage system supports not only storage nodes but also other roles, such as compute nodes that provide computing resources and arbitration nodes that provide the cluster arbitration function. Node types can be distinguished through the deployment instruction, or adapted automatically according to the amount of resources on a node; for example, a server with only a single disk can be automatically configured as a compute node or an arbitration node. Arbitration nodes and compute nodes do not need to run the storage service and do not need to provide the resource view required for storage, so their deployment, expansion, and reduction are simpler than those of storage nodes.
The following embodiments illustrate the deployment process of a distributed storage system.
In this embodiment, the storage management centralized control platform 10a includes an interface for interacting with the user; the user sends the deployment instruction to the storage nodes 10b through this interface, and the carrier of the interface may be a management terminal. An interface is displayed on the electronic display screen of the management terminal and includes a node information entry. In response to the user's input operation in the node information entry, the management terminal acquires information about the plurality of storage nodes that need to be serially deployed to create the distributed storage system. In response to a storage node deployment operation issued by the user, the management terminal sends deployment instructions to the plurality of storage nodes in sequence, where each deployment instruction carries the deployment parameter requirements. Serial deployment of the storage nodes to be deployed means that the next storage node is deployed only after the current storage node has been deployed. Each storage node to be deployed receives the deployment instruction sent by the management terminal and verifies whether its own environmental resources meet the deployment parameter requirements, where the environmental resources include state information of the actual physical resources in that storage node; if so, it generates a resource view according to its own environmental resources, sets its own storage service to an externally available state, and reports the resource view to the management terminal. The management terminal receives the resource views of the plurality of storage nodes and performs state management on their actual physical resources according to the resource views. In the embodiment of the present application, the storage nodes are deployed serially, with the next storage node deployed only after the deployment of the current storage node is finished, so distributed storage that expands horizontally from a single node to multiple nodes can be supported.
In the above embodiment, in response to the user's input operation in the node information entry, the management terminal acquires information about the plurality of storage nodes that need to be serially deployed to create the distributed storage system. In one implementation, the user can manually add the IPs of the plurality of storage nodes in the node information entry, and the management terminal thereby obtains the IPs of the storage nodes to be deployed.
In the above embodiment, the management terminal responds to the storage node deployment operation by sending deployment instructions to the plurality of storage nodes in sequence, so that the nodes set their own storage services to an externally available state according to the deployment instruction. In one implementation, the interface further includes a deployment control; in response to the user triggering the deployment control, the management terminal sends a deployment instruction to one storage node, and after receiving the deployment completion message from the currently deployed storage node, it sends a deployment instruction to the next storage node, until all storage nodes have been deployed. The order in which the management terminal sends deployment instructions to the storage nodes may follow the order in which the storage node IPs were manually added, or, after the previous storage node has been deployed, a node may be selected at random from the undeployed storage nodes.
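A minimal sketch of this serial, one-node-at-a-time deployment loop on the management-terminal side is shown below; send_deploy_instruction and wait_for_completion are hypothetical placeholders for the calls made over the management network.

```python
import random

def send_deploy_instruction(node_ip: str, requirements: dict) -> None:
    """Hypothetical RPC over the management network (placeholder)."""
    print(f"deploy instruction sent to {node_ip} with {requirements}")

def wait_for_completion(node_ip: str) -> bool:
    """Hypothetical wait for the node's deployment-completion message."""
    return True

def deploy_serially(node_ips: list, requirements: dict, randomize: bool = False) -> None:
    """Deploy the storage nodes one after another: the next node is only
    deployed after the current node reports completion."""
    pending = list(node_ips)
    while pending:
        # Either follow the order the IPs were entered, or pick the next node at random.
        node = pending.pop(random.randrange(len(pending))) if randomize else pending.pop(0)
        send_deploy_instruction(node, requirements)
        if not wait_for_completion(node):
            raise RuntimeError(f"deployment of {node} failed")

deploy_serially(["10.0.0.11", "10.0.0.12", "10.0.0.13"],
                {"min_memory_gb": 32, "min_disks": 2})
```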
After the first storage node to be deployed acquires the deployment instruction sent by the management terminal, the deployment operation of the node is carried out. One way to implement the method is that a first storage node receives a deployment instruction sent by a management terminal, and the deployment instruction carries a deployment parameter requirement; verifying whether the environmental resources of the first storage node meet the requirements of deployment parameters; if so, generating a resource view according to the environment resources of the first storage node, and setting the storage service on the first storage node to be in an externally available state; and sending the resource view to a management terminal so that the management terminal can perform state management on the actual physical resources in the first storage node according to the resource view.
It should be noted that the environmental resources of a storage node include the state information of its actual physical resources, for example the parameters and performance of the node's network card resources, CPU resources, cache resources, storage resources, and other resources, as well as the association relationships between the various actual physical resources. The deployment parameter requirements include requirements on the parameters and performance of the various actual physical resources and on the association relationships between them. For example, the deployment parameter requirements may be preset according to the characteristics of the environment: for an environment that emphasizes performance, it may be checked whether the memory, the hard disk, and the network card are attached to the same CPU, whereas for an environment that emphasizes compatibility, these configuration requirements may be relaxed. The resource view is used to provide the parameters and performance of the network card resources, CPU resources, cache resources, and storage resources used by the storage service, the position of the storage node in the distributed storage system, the node IP, and so on.
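The kind of environment check described above, for example requiring that the memory, hard disk, and network card hang off the same CPU in a performance-oriented environment while relaxing the rule for a compatibility-oriented one, could look like the following sketch; the field names and the strict_affinity flag are assumptions used only for illustration.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class DeploymentRequirements:
    min_memory_gb: int
    min_disks: int
    strict_affinity: bool  # True for a performance-oriented environment

@dataclass
class NodeEnvironment:
    memory_gb: int
    disk_to_cpu: Dict[str, int]  # which CPU/NUMA node each disk attaches to
    nic_to_cpu: Dict[str, int]
    memory_cpu: int              # CPU/NUMA node the memory belongs to

def verify_environment(env: NodeEnvironment, req: DeploymentRequirements) -> bool:
    if env.memory_gb < req.min_memory_gb or len(env.disk_to_cpu) < req.min_disks:
        return False
    if req.strict_affinity:
        # Performance-oriented: memory, hard disks, and network cards must share one CPU.
        cpus = {env.memory_cpu, *env.disk_to_cpu.values(), *env.nic_to_cpu.values()}
        return len(cpus) == 1
    # Compatibility-oriented: the affinity requirement is relaxed.
    return True

env = NodeEnvironment(memory_gb=64, disk_to_cpu={"/dev/sdb": 0, "/dev/sdc": 0},
                      nic_to_cpu={"eth0": 0}, memory_cpu=0)
print(verify_environment(env, DeploymentRequirements(32, 2, strict_affinity=True)))  # -> True
```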
In the above embodiment, the first storage node generates the resource view according to its own environmental resources and sends the resource view to the management terminal. After receiving the resource view, the management terminal judges whether any hardware resource of the first storage node has a performance fault and, if so, deletes the hardware resource with the performance fault, which improves the data security of the subsequent storage service.
In the above embodiment, after receiving the resource view, the management terminal may display the resource view on the electronic display screen, so that the user can view the resource view. The management terminal may be deployed on the first storage node.
In the above embodiment, after the first storage node generates the resource view, the storage service on the first storage node is set to the externally available state. In the embodiment of the application, the distributed storage system can expand and reduce the capacity of the storage nodes of the distributed storage system according to actual requirements.
And when all the storage nodes are deployed, the creation of the distributed storage system is completed. A distributed storage system may include one storage node or a plurality of storage nodes. In the case where the distributed storage system includes a plurality of storage nodes, communication connections are established between the plurality of storage nodes for sharing of data, such as resource views and node information. The manner in which any two storage nodes establish a communication connection can be seen in the description of the subsequent embodiments. One storage node can be selected from the plurality of storage nodes to serve as a master node, and the master node manages other storage nodes (i.e. slave nodes) except the master node.
Optionally, after the creation of the distributed storage system is completed, each of the storage nodes in the distributed storage system determines whether it can become the master node. Taking a first storage node among the plurality of storage nodes as an example, the judgment proceeds as follows: the first storage node acquires the sequence tags of the plurality of storage nodes, determines from those sequence tags whether its own sequence tag meets a set condition, and, if so, acts as the master node. For example, when the number of storage nodes in an externally available state is greater than half of the total number of storage nodes, the sequence tags of all the storage nodes are acquired, and the storage node whose sequence tag has the smallest tag value is selected as the master node; if the tag value of the first storage node's sequence tag is the smallest, the first storage node is the master node.
In the above embodiment, the manner for triggering the first storage node to acquire the sequence tags of the plurality of storage nodes includes, but is not limited to, the following triggering manners:
In a first trigger mode, after the distributed storage system is started, the first storage node receives a system start instruction sent by the management terminal and counts the number of storage nodes whose storage service is in an externally available state; if that number is greater than the set number threshold, it acquires the sequence tags of the storage nodes in the externally available state.
And in the second triggering mode, after the distributed storage system is started, the first storage node receives a system starting instruction sent by the management terminal, judges whether the set time is reached, and if the set time is reached, the first storage node acquires sequence tags of the plurality of storage nodes in an externally available state.
And in a third triggering mode, when the current main node fails in the operation process of the distributed storage system, the first storage node receives a main node failure instruction and acquires sequence labels of other storage nodes in an externally available state except the current main node.
In the embodiments of the present application, the set number threshold and the set condition are not limited. The set number threshold and the set conditions can be adjusted according to actual conditions.
Capacity expansion of the storage nodes of the distributed storage system follows the deployment process of the first storage node described above, with one difference: if a second storage node has already been deployed before the first storage node, the deployment parameter requirements further include the identification information of the second storage node, so that a communication connection with the second storage node is established during the deployment of the first storage node. In one implementation, when the environmental resources of the first storage node meet the deployment parameter requirements, the first storage node sends a communication establishment request to the second storage node to request a communication connection; the communication connection between the first storage node and the second storage node is then established according to the result, returned by the second storage node, that allows the connection to be established. The identification information of the second storage node is the IP of the second storage node.
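A minimal sketch of this request/acknowledgement exchange between the node being deployed and an already-deployed node is given below; the in-process method calls stand in for messages exchanged over the cluster network, and the message format is an assumption.

```python
class AlreadyDeployedNode:
    """Second storage node: decides whether to allow the connection."""
    def __init__(self, ip: str):
        self.ip = ip
        self.peers = set()

    def handle_connect_request(self, requester_ip: str) -> dict:
        # A deployed node that is externally available accepts the new peer.
        self.peers.add(requester_ip)
        return {"allowed": True, "peer": self.ip}

class NewStorageNode:
    """First storage node: requests the connection after its environment check passes."""
    def __init__(self, ip: str):
        self.ip = ip
        self.connections = set()

    def join_cluster(self, existing: AlreadyDeployedNode) -> bool:
        reply = existing.handle_connect_request(self.ip)
        if reply.get("allowed"):
            self.connections.add(reply["peer"])  # connection established
            return True
        return False

node_new = NewStorageNode("10.0.0.14")
node_old = AlreadyDeployedNode("10.0.0.11")
print(node_new.join_cluster(node_old))  # -> True
```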
For capacity reduction of the storage nodes of the distributed storage system, one implementation is as follows: the master node receives the identification information of the node to be deleted issued by the management terminal and judges, according to that identification information, whether the node to be deleted is in an online state; if not, the master node performs the operation of deleting the node to be deleted; if so, the node to be deleted performs its own deletion operation. In this embodiment, after determining that the node to be deleted is offline, the master node executes the deletion of that node on its behalf, whereas an online node to be deleted executes the deletion itself, which makes the execution process more reasonable.
In the above embodiment, when the node to be deleted is in an online state, it is determined whether the node to be deleted is providing storage service to the outside; if so, a data transfer instruction is sent to the node to be deleted so that it transfers the data that has not yet been persisted to other storage nodes, where the other storage nodes are the storage nodes, other than the node to be deleted, whose storage service is in an externally available state. If the node to be deleted is providing storage service externally, there may be data that has not yet been written to disk; such data currently resides only in the cache and would be lost on power-down, so the cached data needs to be transferred to other storage nodes.
In the above embodiment, the master node performs the operation of deleting the node to be deleted. One implementation is that the master node sends a storage service data migration instruction to the node to be deleted, so that the node to be deleted migrates its storage service data to other storage nodes according to the migration instruction, where the other storage nodes are the storage nodes, other than the node to be deleted, whose storage service is in an externally available state. The storage service data in the node to be deleted is migrated to the other storage nodes according to a preset migration rule; the present application does not limit the preset migration rule, which can be adjusted according to the actual situation.
It should be noted that after communication connections are established between the storage nodes in the distributed storage system, information synchronization can be achieved, and each storage node can store information such as the resource views and IPs of the other storage nodes. After the node to be deleted has been deleted, each storage node needs to delete the locally stored resource view of that node.
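The scale-in flow described in the preceding paragraphs, including the final cleanup of the deleted node's resource view, can be summarized in the sketch below; the helper functions for checking the node's state and issuing the data-transfer and migration instructions are hypothetical placeholders, not the claimed procedure itself.

```python
def is_online(node_id: str) -> bool:
    """Hypothetical check of whether the node to be deleted is online."""
    return True

def is_serving_storage(node_id: str) -> bool:
    """Hypothetical check of whether the node is providing storage service externally."""
    return True

def send_data_transfer_instruction(node_id: str, targets: list) -> None:
    print(f"{node_id}: transfer un-persisted cache data to {targets}")

def send_migration_instruction(node_id: str, targets: list) -> None:
    print(f"{node_id}: migrate storage service data to {targets}")

def remove_node(node_to_delete: str, available_nodes: list) -> None:
    """Scale-in flow after the master node receives the identification
    information of the node to be deleted from the management terminal."""
    targets = [n for n in available_nodes if n != node_to_delete]
    if not is_online(node_to_delete):
        # Offline: the master node performs the deletion operation on the node's behalf.
        send_migration_instruction(node_to_delete, targets)
    else:
        if is_serving_storage(node_to_delete):
            # Data still only in the cache would be lost on power-down, so transfer it first.
            send_data_transfer_instruction(node_to_delete, targets)
        # Online: the node to be deleted then carries out its own deletion (data migration).
        send_migration_instruction(node_to_delete, targets)
    # Remaining nodes drop their locally stored resource view of the deleted node.
    print(f"delete the stored resource view of {node_to_delete} on {targets}")

remove_node("node-3", ["node-1", "node-2", "node-3"])
```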
It should be noted that the storage cluster needs to provide an external access IP (i.e., the master node IP) as a unified external access entry. A request from outside does not need to be issued to a specific storage node; it only needs to be sent to the access entry, and the access entry distributes the request. For example, the management terminal issues deployment instructions, capacity reduction instructions, and capacity expansion instructions to the master node.
In the embodiment of the present application, the storage nodes in the distributed storage system support a master-slave strategy, and the master node ensures consistency. Any node can serve as the master node, and the master node acts as a proxy for nodes that cannot provide service. A service management function is also supported, which detects service anomalies and recovers from them. The embodiment of the present application provides an abstract resource view that isolates the actual physical resources, prevents defective hardware from affecting the storage service, and allocates resources on nodes that support multiple services.
In the system embodiment of the present application, any storage node in the distributed storage system to be deployed receives a deployment instruction sent by the management terminal and verifies whether its own environmental resources meet the deployment parameter requirements; once they do, it generates a resource view according to those environmental resources and sends the view to the management terminal. The management terminal can then isolate the actual physical resources according to the resource view, shield hardware with defective functions, and improve the data security of the distributed storage system.
In addition to the distributed storage system 100 provided above, some embodiments of the present application also provide a node deployment method and a node deletion method, and the node deployment method and the node deletion method provided in the embodiments of the present application are not limited to the distributed storage system 100 described above.
From the perspective of a management terminal, fig. 2a is a schematic flowchart of a node deployment method according to an exemplary embodiment of the present application. As shown in fig. 2a, the method comprises:
s211: displaying an interface, wherein the interface comprises a node information input item;
s212: responding to the input operation in the node information input item, and acquiring information of a plurality of storage nodes needing to be serially deployed for creating the distributed storage system;
s213: responding to the storage node deployment operation, and sequentially sending deployment instructions to the plurality of storage nodes, so that the plurality of storage nodes set their own storage services to an externally available state according to the deployment instructions, wherein the deployment instructions carry deployment parameter requirements.
From the perspective of a storage node, fig. 2b is a schematic flowchart of a node deployment method provided in an exemplary embodiment of the present application. As shown in fig. 2b, the method comprises:
s221: receiving a deployment instruction sent by a user through a management terminal, wherein the deployment instruction carries a deployment parameter requirement;
s222: verifying whether the environmental resources of the first storage node meet the deployment parameter requirements, wherein the environmental resources comprise state information of the actual physical resources in the first storage node; if so, executing step S223; if not, executing step S225;
s223: generating a resource view according to the environment resources of the first storage node, and setting the storage service on the first storage node to be in an externally available state;
s224: sending the resource view to a management terminal so that the management terminal can perform state management on the actual physical resources in the first storage node according to the resource view;
s225: and setting the storage service on the first storage node to be in an unavailable state.
In this embodiment, the storage node is an active electronic device connected to a network, capable of sending, receiving or forwarding information through a communication channel, and on which one or more physical hard disks are present. The storage node may be a server, a workstation, etc., but is not limited thereto. The storage node in the embodiment of the present application may include a plurality of physical hard disks. Physical hard disks include, but are not limited to: solid state drives, mechanical drives, hybrid drives, and the like. A storage cluster refers to a system consisting of various storage devices that store programs and data, control components, and devices (hardware) and algorithms (software) that manage information scheduling. The storage system can centrally manage the storage resources (such as physical hard disks) of the storage nodes.
In this embodiment, the management terminal may be a computer device or a handheld device, and the implementation form of the management terminal may be various, for example, the management terminal may be a smart phone, a personal computer, a tablet computer, a smart speaker, and the like.
An interface is displayed on the electronic display screen of the management terminal and includes a node information entry. In response to the user's input operation in the node information entry, the management terminal acquires information about the plurality of storage nodes that need to be serially deployed to create the distributed storage system. In response to a storage node deployment operation issued by the user, the management terminal sends deployment instructions to the plurality of storage nodes in sequence, where each deployment instruction carries the deployment parameter requirements. Serial deployment of the storage nodes to be deployed means that the next storage node is deployed only after the current storage node has been deployed. Each storage node to be deployed receives the deployment instruction sent by the management terminal and verifies whether its own environmental resources meet the deployment parameter requirements, where the environmental resources include state information of the actual physical resources in that storage node; if so, it generates a resource view according to its own environmental resources, sets its own storage service to an externally available state, and reports the resource view to the management terminal. The management terminal receives the resource views of the plurality of storage nodes and performs state management on their actual physical resources according to the resource views. In the embodiment of the present application, the storage nodes are deployed serially, with the next storage node deployed only after the deployment of the current storage node is finished, so distributed storage that expands horizontally from a single node to multiple nodes can be supported.
In the above embodiment, in response to the user's input operation in the node information entry, the management terminal acquires information about the plurality of storage nodes that need to be serially deployed to create the distributed storage system. In one implementation, the user can manually add the IPs of the plurality of storage nodes in the node information entry, and the management terminal thereby obtains the IPs of the storage nodes to be deployed.
In the above embodiment, the management terminal responds to the storage node deployment operation by sending deployment instructions to the plurality of storage nodes in sequence, so that the nodes set their own storage services to an externally available state according to the deployment instruction. In one implementation, the interface further includes a deployment control; in response to the user triggering the deployment control, the management terminal sends a deployment instruction to one storage node, and after receiving the deployment completion message from the currently deployed storage node, it sends a deployment instruction to the next storage node, until all storage nodes have been deployed. The order in which the management terminal sends deployment instructions to the storage nodes may follow the order in which the storage node IPs were manually added, or, after the previous storage node has been deployed, a node may be selected at random from the undeployed storage nodes.
After the first storage node to be deployed acquires the deployment instruction sent by the management terminal, the deployment operation of the node is carried out. One way to implement the method is that a first storage node receives a deployment instruction sent by a management terminal, and the deployment instruction carries a deployment parameter requirement; verifying whether the environmental resources of the first storage node meet the requirements of deployment parameters; if so, generating a resource view according to the environment resources of the first storage node, and setting the storage service on the first storage node to be in an externally available state; and sending the resource view to a management terminal so that the management terminal can perform state management on the actual physical resources in the first storage node according to the resource view.
It should be noted that the environmental resources of a storage node include the state information of its actual physical resources, for example the parameters and performance of the node's network card resources, CPU resources, cache resources, storage resources, and other resources, as well as the association relationships between the various actual physical resources. The deployment parameter requirements include requirements on the parameters and performance of the various actual physical resources and on the association relationships between them. For example, the deployment parameter requirements may be preset according to the characteristics of the environment: for an environment that emphasizes performance, it may be checked whether the memory, the hard disk, and the network card are attached to the same CPU, whereas for an environment that emphasizes compatibility, these configuration requirements may be relaxed. The resource view is used to provide the parameters and performance of the network card resources, CPU resources, cache resources, and storage resources used by the storage service, the position of the storage node in the distributed storage system, the node IP, and so on.
In the above embodiment, the storage node generates the resource view according to the environmental resources of the first storage node and sends the resource view to the management terminal. The management terminal judges, according to the resource view, whether the hardware resources of the first storage node have performance faults and, if so, deletes the hardware resources with the performance faults, which improves the data security of the subsequent storage service.
In the above embodiment, after receiving the resource view, the management terminal may display the resource view on the electronic display screen, so that the user can view the resource view. The management terminal may be deployed on the first storage node.
In the above embodiment, after the first storage node generates the resource view, the storage service on the first storage node is set to the externally available state. And finishing the deployment of the distributed storage system after all the storage nodes are deployed. In the embodiment of the application, the distributed storage system can expand and reduce the capacity of the storage nodes of the distributed storage system according to actual requirements.
And when all the storage nodes are deployed, the creation of the distributed storage system is completed. A distributed storage system may include one storage node or a plurality of storage nodes. In the case where the distributed storage system includes a plurality of storage nodes, communication connections are established between the plurality of storage nodes for sharing of data, such as resource views and node information. For the way in which the two storage nodes establish a communication connection, reference may be made to the description of the subsequent embodiments. One storage node can be selected from the plurality of storage nodes to serve as a master node, and the master node manages other storage nodes (i.e. slave nodes) except the master node.
Optionally, after the creation of the distributed storage system is completed, each of the storage nodes in the distributed storage system determines whether it can become the master node. Taking a first storage node among the plurality of storage nodes as an example, the judgment proceeds as follows: the first storage node acquires the sequence tags of the plurality of storage nodes, determines from those sequence tags whether its own sequence tag meets a set condition, and, if so, acts as the master node. For example, when the number of storage nodes in an externally available state is greater than half of the total number of storage nodes, the sequence tags of all the storage nodes are acquired, and the storage node whose sequence tag has the smallest tag value is selected as the master node; if the tag value of the first storage node's sequence tag is the smallest, the first storage node is the master node.
In the above embodiment, the manners of triggering the first storage node to acquire the sequence tags of the plurality of storage nodes include, but are not limited to, the following (a combined sketch is given after the third mode):
In a first triggering mode, after the distributed storage system is started, the first storage node receives a system start instruction sent by the management terminal and counts the number of storage nodes whose storage services are in an externally available state; if the number is larger than a set number threshold, it acquires the sequence tags of the plurality of storage nodes in the externally available state.
In a second triggering mode, after the distributed storage system is started, the first storage node receives a system start instruction sent by the management terminal and judges whether a set time has been reached; if so, it acquires the sequence tags of the plurality of storage nodes in the externally available state.
In a third triggering mode, when the current master node fails during operation of the distributed storage system, the first storage node receives a master node failure instruction and acquires the sequence tags of the storage nodes, other than the current master node, that are in the externally available state.
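The three triggering manners can be combined, purely for illustration, in a single check; the parameter names and the half-of-total threshold used here are example assumptions, not requirements of this application.

```python
# Illustrative combination of the three triggering manners; parameter names
# are assumptions made for the example.
import time

def should_collect_sequence_tags(available_count, total_count,
                                 start_time, wait_seconds,
                                 master_failed=False):
    # Trigger mode one: enough storage services are in the externally available state.
    if available_count > total_count // 2:
        return True
    # Trigger mode two: the set time since the system start instruction has elapsed.
    if time.time() - start_time >= wait_seconds:
        return True
    # Trigger mode three: the current master node has failed during operation.
    return master_failed
```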
In the embodiments of the present application, the set number threshold and the set condition are not limited and can be adjusted according to actual conditions.
Regarding capacity expansion of the storage nodes of the distributed storage system, reference may be made to the above deployment process of the first storage node. The difference is that, if there is a second storage node that was already deployed before the first storage node, the deployment parameter requirements further include identification information of the second storage node, so that a communication connection with the second storage node is established during the deployment of the first storage node. One way to achieve this is: when the environment resources of the first storage node meet the deployment parameter requirements, a communication establishment request is sent to the second storage node to request establishment of a communication connection; the communication connection between the first storage node and the second storage node is then established according to the result, returned by the second storage node, indicating that the connection is allowed. The identification information of the second storage node is the IP of the second storage node.
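A hedged sketch of this scale-out handshake follows; the application does not specify a transport, so the HTTP endpoint, port and JSON fields below are illustrative assumptions only.

```python
# Illustrative scale-out handshake; the endpoint, port and JSON fields are
# assumptions, since the application does not specify a transport.
import json
import urllib.request

def join_cluster(second_node_ip, first_node_view):
    """Ask the already-deployed second storage node to allow a communication connection."""
    request = urllib.request.Request(
        f"http://{second_node_ip}:8080/connect",      # port and path are assumed
        data=json.dumps({"resource_view": first_node_view}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())
    # The connection is considered established only if the second node allows it.
    return result.get("allowed", False)
```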
For capacity reduction of the storage nodes of the distributed storage system, one way to implement this is as follows: the master node receives the identification information of the node to be deleted issued by the management terminal; judges, according to this identification information, whether the node to be deleted is in an online state; if not, the master node executes the operation of deleting the node to be deleted; and if so, the node to be deleted executes its own deletion operation. In this embodiment, when the node to be deleted is offline, the master node executes the deletion operation on its behalf; when the node to be deleted is online, the node to be deleted executes the deletion operation itself, which makes the execution process more reasonable.
In the above embodiment, when the node to be deleted is in an online state, it is determined whether the node to be deleted is providing storage service to the outside; if so, a data transfer instruction is sent to the node to be deleted so that it stores the data that has not yet been persisted into other storage nodes, where the other storage nodes are the storage nodes, other than the node to be deleted, that are in an externally available state. If the node to be deleted is providing storage service to the outside, there may be data that has not yet been written to disk; such data is currently held in a cache and would be lost on power-down, so the cached data needs to be transferred to other storage nodes.
In the above embodiment, the master node performs the operation of deleting the node to be deleted. One way to implement this is that the master node sends a storage service data migration instruction to the node to be deleted, so that the node to be deleted migrates its storage service data to other storage nodes according to the instruction, where the other storage nodes are the storage nodes, other than the node to be deleted, that are in an externally available state. The storage service data is migrated according to a preset migration rule; the preset migration rule is not limited in this application and can be adjusted according to actual conditions.
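For illustration, the cache flush and storage-service-data migration described above might look like the following sketch; the data structures and the round-robin rule are assumptions standing in for the unspecified preset migration rule.

```python
# Illustrative drain of a node before deletion; the data structures and the
# round-robin rule stand in for the unspecified preset migration rule.
from dataclasses import dataclass, field

@dataclass
class StorageNode:
    node_id: str
    cache: list = field(default_factory=list)    # data not yet persisted
    shards: list = field(default_factory=list)   # persisted storage service data

def drain_node(node_to_delete, other_nodes):
    """Flush cached data, then migrate storage service data to the other nodes."""
    # Cached data would be lost on power-down, so it is stored on other nodes first.
    for i, item in enumerate(node_to_delete.cache):
        other_nodes[i % len(other_nodes)].cache.append(item)
    node_to_delete.cache.clear()
    # Example preset migration rule: round-robin over the remaining available nodes.
    for i, shard in enumerate(node_to_delete.shards):
        other_nodes[i % len(other_nodes)].shards.append(shard)
    node_to_delete.shards.clear()
```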
It should be noted that after communication connections are established between the storage nodes in the distributed storage system, information synchronization can be achieved, and each storage node can store information such as the resource views and IPs of the other storage nodes. After the node to be deleted is deleted, each storage node needs to delete the locally stored resource view of that node.
It should be noted that the storage cluster needs to provide an external access IP as a unified external access entry. An external request does not need to be issued directly to a specific storage node; it only needs to be transmitted to the access entry, and the access entry distributes the request. For example, the management terminal issues deployment, capacity reduction and capacity expansion instructions to the master node.
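A minimal sketch of such a unified access entry is given below; the operation names and the hash-based distribution of ordinary requests are assumptions made for illustration only.

```python
# Illustrative unified access entry; operation names and the hash-based
# distribution of ordinary requests are assumptions for this example.

def dispatch(request, master_ip, data_node_ips):
    """Return the node IP that should handle a request arriving at the access entry."""
    if request["op"] in {"deploy", "expand", "shrink"}:
        # Management instructions (deployment, capacity expansion, capacity
        # reduction) are forwarded to the master node.
        return master_ip
    # Ordinary access requests are distributed over the available storage nodes.
    return data_node_ips[hash(request["key"]) % len(data_node_ips)]
```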
In the embodiment of the application, the plurality of storage nodes in the distributed storage system support a master-slave strategy, and the master node ensures consistency: any node can serve as the master node, the master node acts as a proxy for nodes that cannot provide service, and a service management function is supported for detecting service anomalies and recovering from them. The embodiment of the application also provides an abstract resource view that isolates the actual physical resources, prevents defective hardware from affecting the storage services, and allocates resources to nodes supporting a plurality of services.
Based on the description of the foregoing embodiments, fig. 3 is a schematic flowchart of a node deletion method provided in the embodiments of the present application. As shown in fig. 3, the node deletion method includes:
S301: acquiring sequence tags of a plurality of storage nodes in the distributed storage system;
S302: determining, according to the sequence tags of the plurality of storage nodes, whether the sequence tag of the first storage node meets a set condition;
S303: if so, taking the first storage node as the master node;
S304: the first storage node receives the identification information of the node to be deleted issued by the management terminal;
S305: judging, according to the identification information of the node to be deleted, whether the node to be deleted is in an online state; if not, executing S306; if so, executing S307;
S306: the master node executes the operation of deleting the node to be deleted;
S307: the node to be deleted executes its own deletion operation.
In this embodiment, a storage node is an active electronic device connected to a network that can send, receive or forward information through a communication channel and that carries one or more physical hard disks. The storage node may be, but is not limited to, a server or a workstation. The storage node in the embodiment of the present application may include a plurality of physical hard disks, including but not limited to solid state drives, mechanical drives and hybrid drives. A storage cluster refers to a system consisting of the storage devices that store programs and data, control components, and the devices (hardware) and algorithms (software) that manage information scheduling. The storage system can centrally manage the storage resources (such as physical hard disks) of the storage nodes.
In this embodiment, the management terminal may be a computer device or a handheld device and may take various forms, for example a smart phone, a personal computer, a tablet computer, a smart speaker, and the like.
The master node may be selected by the plurality of storage nodes in the distributed storage system, and the master node manages the other storage nodes (i.e., the slave nodes). One way to select the master node is as follows: count the number of storage nodes whose storage services are in an externally available state; if the number is larger than a set number threshold, acquire the sequence tags of the plurality of storage nodes; determine, according to these sequence tags, whether the sequence tag of the first storage node meets a set condition; and, if so, take the first storage node as the master node. For example, when the number of storage nodes in an externally available state is greater than half of the total number of storage nodes, the sequence tags of all the storage nodes are acquired, and the storage node with the minimum sequence tag value is selected as the master node; if the sequence tag value of the first storage node is the minimum, the first storage node becomes the master node.
In the embodiments of the present application, the set number threshold and the set condition are not limited and can be adjusted according to actual conditions.
The master node receives the identification information of the node to be deleted issued by the management terminal; judges, according to this identification information, whether the node to be deleted is in an online state; if not, the master node executes the operation of deleting the node to be deleted; and if so, the node to be deleted executes its own deletion operation. In this embodiment, when the node to be deleted is offline, the master node executes the deletion operation on its behalf; when the node to be deleted is online, the node to be deleted executes the deletion operation itself, which makes the execution process more reasonable.
In the above embodiment, when the node to be deleted is in an online state, it is determined whether the node to be deleted is providing storage service to the outside; if so, a data transfer instruction is sent to the node to be deleted so that it stores the data that has not yet been persisted into other storage nodes, where the other storage nodes are the storage nodes, other than the node to be deleted, that are in an externally available state. If the node to be deleted is providing storage service to the outside, there may be data that has not yet been written to disk; such data is currently held in a cache and would be lost on power-down, so the cached data needs to be transferred to other storage nodes.
In the above embodiment, the master node performs the operation of deleting the node to be deleted. One way to implement this is that the master node sends a storage service data migration instruction to the node to be deleted, so that the node to be deleted migrates its storage service data to other storage nodes according to the instruction, where the other storage nodes are the storage nodes, other than the node to be deleted, that are in an externally available state. The storage service data is migrated according to a preset migration rule; the preset migration rule is not limited in this application and can be adjusted according to actual conditions.
It should be noted that after communication connections are established between the storage nodes in the distributed storage system, information synchronization can be achieved, and each storage node can store information such as the resource views and IPs of the other storage nodes. After the node to be deleted is deleted, each storage node needs to delete the locally stored resource view of that node.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may all be the same device, or different devices may serve as the execution subjects. For example, the execution subject of steps 301 to 303 may be device A; for another example, the execution subject of steps 301 and 302 may be device A and the execution subject of step 303 may be device B; and so on.
In addition, some of the flows described in the above embodiments and drawings contain a plurality of operations in a specific order, but it should be clearly understood that these operations may be executed out of the presented order or in parallel; the sequence numbers of the operations, such as 301 and 302, are merely used to distinguish different operations and do not themselves represent any execution order. The flows may also include more or fewer operations, which may be performed sequentially or in parallel. It should be noted that descriptions such as "first" and "second" in this document are used to distinguish different messages, devices, modules, and the like; they neither represent a sequential order nor require that the "first" and "second" objects be of different types.
In the method embodiments of the present application, any storage node in the distributed storage system to be deployed receives a deployment instruction sent by the management terminal, verifies whether its own environment resources meet the deployment parameter requirements, and, once they do, generates a resource view according to its own environment resources and sends the resource view to the management terminal; the management terminal can then isolate the actual physical resources according to the resource view, shield hardware with defective functions, and improve the data security of the distributed storage system.
Fig. 4 is a schematic structural diagram of a management terminal according to an exemplary embodiment of the present application. As shown in fig. 4, the management terminal includes: memory 401 and processor 402, as well as necessary components including a communication component 403 and a power component 404.
The memory 401 is used for storing a computer program and may be configured to store other various data to support operations on the management terminal. Examples of such data include instructions for any application or method operating on the management terminal.
The memory 401 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A communication component 403 for establishing a communication connection with other devices.
Correspondingly, an embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to perform the steps in the method embodiment of fig. 2a.
Fig. 5 is a schematic structural diagram of a node deployment device according to an exemplary embodiment of the present application. As shown in fig. 5, the node deployment apparatus includes: a memory 501 and a processor 502, as well as necessary components including a communication component 503 and a power component 504.
The memory 501 is used for storing computer programs and may be configured to store other various data to support operations on the storage nodes. Examples of such data include instructions for any application or method operating on the node-deploying device.
The memory 501 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disk.
A communication component 503 for establishing communication connections with other devices.
The processor 502, which may execute computer instructions stored in the memory 501, is configured to: receiving a deployment instruction sent by a user through a management terminal, wherein the deployment instruction carries a deployment parameter requirement; verifying whether the environmental resources of the first storage node meet the requirements of deployment parameters or not, wherein the environmental resources comprise state information of actual physical resources in the first storage node; if so, generating a resource view according to the environment resources of the first storage node, and setting the storage service on the first storage node to be in an externally available state; and sending the resource view to a management terminal so that the management terminal can perform state management on the actual physical resources in the first storage node according to the resource view.
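Purely as an illustrative sketch of the flow the processor 502 is configured to execute (the field names and the two example checks are assumptions, not requirements of this application):

```python
# Illustrative sketch of the configured deployment flow; field names and the
# two example checks are assumptions made for this example.

def handle_deploy_instruction(instruction, node_ip, env):
    requirements = instruction["deployment_parameter_requirements"]
    # Verify the environment resources (state of the actual physical resources).
    disks_healthy = all(disk["healthy"] for disk in env["disks"])
    enough_memory = env["memory"]["size_gb"] >= requirements.get("min_memory_gb", 0)
    if not (disks_healthy and enough_memory):
        return {"status": "rejected", "node_ip": node_ip}
    # Generate the resource view from the node's own environment resources.
    resource_view = {"node_ip": node_ip, "cpu": env["cpu"], "memory": env["memory"],
                     "nics": env["nics"], "disks": env["disks"]}
    # Set the storage service to the externally available state and return the
    # view so the management terminal can manage the actual physical resources.
    return {"status": "deployed", "storage_service_available": True,
            "resource_view": resource_view}
```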
Optionally, after generating the resource view, the processor 502 may be further configured to display the resource view.
Optionally, the deployment parameter requirement includes identification information of a second storage node that has been deployed and whose deployment order is before the first storage node, and the processor 502 is further configured to: under the condition that the environmental resources of the first storage node meet the requirement of deployment parameters, sending a communication establishment request to a second storage node to request to establish communication connection with the second storage node; and establishing the communication connection between the first storage node and the second storage node according to the result of allowing the communication connection to be established returned by the second storage node.
Optionally, after the creation of the distributed storage system is completed, the processor 502 may further be configured to: acquiring sequence labels of a plurality of storage nodes, wherein the plurality of storage nodes comprise a first storage node; determining whether the sequence label of the first storage node meets a set condition or not according to the sequence labels of the plurality of storage nodes; and if so, taking the first storage node as a main node.
Optionally, the processor 502, before obtaining the sequence tags of the plurality of storage nodes, may be further configured to: counting the number of storage nodes with storage service in an externally available state; and if the number is larger than the set number threshold, acquiring the sequence tags of the plurality of storage nodes.
Optionally, after taking the first storage node as the master node, the processor 502 is further configured to: receive identification information of a node to be deleted issued by the management terminal; judge, according to the identification information of the node to be deleted, whether the node to be deleted is in an online state; and, if not, execute as the master node the operation of deleting the node to be deleted.
Optionally, the processor 502 may be further configured to: when the node to be deleted is in an online state, judging whether the node to be deleted provides storage service for the outside; and if so, sending a data transfer instruction to the node to be deleted so that the node to be deleted stores the data which is not stored into other storage nodes, wherein the other storage nodes are storage nodes which are in an externally available state except the node to be deleted.
Optionally, when the master node executes the operation of deleting the node to be deleted, the processor 502 is specifically configured to: send a storage service data migration instruction to the node to be deleted, so that the node to be deleted migrates its storage service data to other storage nodes according to the instruction, where the other storage nodes are the storage nodes, other than the node to be deleted, that are in an externally available state.
Optionally, after establishing the communication connection between the first storage node and the second storage node, the processor 502 may be further configured to: receive, based on the communication connection, the resource view of the node to be deleted sent by the node to be deleted, and store it locally. After the master node executes the operation of deleting the node to be deleted, the processor 502 may be further configured to: delete the resource view of the node to be deleted.
Correspondingly, an embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to perform the steps in the method embodiment of fig. 2b.
Fig. 6 is a schematic structural diagram of a node deployment device according to an exemplary embodiment of the present application. As shown in fig. 6, the node deployment apparatus includes: memory 601 and processor 602, as well as necessary components including communications component 603 and power component 604.
The memory 601 is used for storing computer programs and may be configured to store other various data to support operations on the node-deploying device. Examples of such data include instructions for any application or method operating on the storage node.
The memory 601 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disk.
A communication component 603 for establishing a communication connection with other devices.
Correspondingly, an embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to perform the steps in the method embodiment of fig. 3.
The communication components of fig. 4-6 described above are configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, or a 2G, 3G, 4G/LTE or 5G mobile communication network, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power supply components of fig. 4-6 described above provide power to the various components of the device in which the power supply component is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In addition, the management terminal and the storage node in the embodiment of the present application may further include a display and an audio component.
The display includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
Wherein, the audio component can be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
Fig. 7 is a schematic structural diagram of a node deployment apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus includes: a receiving module 71, an authentication module 72 and a sending module 73.
The receiving module 71 is configured to receive a deployment instruction sent by a user through a management terminal, where the deployment instruction carries a deployment parameter requirement;
a verification module 72, configured to verify whether an environment resource of the first storage node meets a requirement of a deployment parameter, where the environment resource includes state information of an actual physical resource in the first storage node; if so, generating a resource view according to the environment resources of the first storage node, and setting the storage service on the first storage node to be in an externally available state;
and the sending module 73 is configured to send the resource view to the management terminal, so that the management terminal performs state management on the actual physical resource in the first storage node according to the resource view.
Optionally, the deployment parameter requirement includes identification information of a second storage node that has been deployed and whose deployment order is before the first storage node, and the apparatus further includes: a communication module 74, configured to send a communication establishment request to a second storage node to request to establish a communication connection with the second storage node when the environment resource of the first storage node meets the requirement of the deployment parameter; and establishing the communication connection between the first storage node and the second storage node according to the result of allowing the communication connection to be established returned by the second storage node.
Optionally, after the creation of the distributed storage system is completed, the distributed storage system includes a plurality of storage nodes in an externally available state, and the apparatus further includes: a node management module 75, configured to obtain sequence tags of a plurality of storage nodes, where the plurality of storage nodes includes a first storage node; determining whether the sequence label of the first storage node meets a set condition or not according to the sequence labels of the plurality of storage nodes; and if so, taking the first storage node as a main node.
Optionally, before acquiring the sequence tags of the plurality of storage nodes, the node management module 75 is further configured to count the number of storage nodes whose storage services are in an externally available state; and if the number is larger than the set number threshold, acquiring the sequence tags of the plurality of storage nodes.
Optionally, after the first storage node is used as a master node, the node management module 75 is further configured to receive identification information of a node to be deleted, which is sent by the management terminal; judging whether the node to be deleted is in an online state or not according to the identification information of the node to be deleted; and if not, the main node executes the operation of the node to be deleted.
Optionally, the node management module 75 is further configured to determine whether the node to be deleted is providing a storage service to the outside when the node to be deleted is in an online state; and if so, sending a data transfer instruction to the node to be deleted so that the node to be deleted stores the data which is not stored into other storage nodes, wherein the other storage nodes are storage nodes which are in an externally available state except the node to be deleted.
Optionally, when the master node executes the operation of deleting the node to be deleted, the node management module 75 is specifically configured to: send a storage service data migration instruction to the node to be deleted, so that the node to be deleted migrates its storage service data to other storage nodes according to the instruction, where the other storage nodes are the storage nodes, other than the node to be deleted, that are in an externally available state.
Optionally, after establishing the communication connection between the first storage node and the second storage node, the node management module 75 is further configured to receive, based on the communication connection, a resource view of the node to be deleted sent by the node to be deleted, and store the resource view of the node to be deleted locally; the node management module 75 is further configured to delete the resource view of the node to be deleted after the master node executes the operation of the node to be deleted.
The apparatus shown in fig. 7 can perform the method of the embodiment shown in fig. 2b, and reference may be made to the related description of the embodiment shown in fig. 2b for a part of this embodiment which is not described in detail. The implementation process and technical effect of this technical solution refer to the description in the embodiment shown in fig. 2b, and are not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
In the above device embodiment of the present application, any storage node in the distributed storage system to be deployed receives a deployment instruction sent by the management terminal, verifies whether its own environmental resource meets the deployment parameter requirement, and after meeting the deployment parameter requirement, the storage node generates a resource view according to its own environmental resource and sends the resource view to the management terminal, and the management terminal can isolate actual physical resources according to the resource view, shield hardware with defective functions, and improve data security of the distributed storage system.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (12)
1. A node deployment method is suitable for a first storage node, and is characterized by comprising the following steps:
receiving a deployment instruction sent by a user through a management terminal, wherein the deployment instruction carries a deployment parameter requirement;
verifying whether the environmental resources of the first storage node meet the deployment parameter requirements, wherein the environmental resources comprise state information of actual physical resources in the first storage node;
if so, generating a resource view according to the environment resources of a first storage node, and setting the storage service on the first storage node to be in an externally available state;
and sending the resource view to a management terminal so that the management terminal can perform state management on the actual physical resources in the first storage node according to the resource view.
2. The method of claim 1, wherein the deployment parameter requirement comprises identification information of a second storage node that has been deployed and is located sequentially before the first storage node, and wherein the method further comprises:
under the condition that the environmental resources of the first storage node meet the requirement of the deployment parameters, sending a communication establishment request to a second storage node to request to establish communication connection with the second storage node;
and establishing the communication connection between the first storage node and the second storage node according to the result of allowing the communication connection to be established returned by the second storage node.
3. The method of claim 2, wherein after the creation of the distributed storage system is completed, the distributed storage system comprises a plurality of storage nodes in an externally available state, the method further comprising:
obtaining sequence tags of a plurality of storage nodes, wherein the plurality of storage nodes comprise a first storage node;
determining whether the sequence label of the first storage node meets a set condition or not according to the sequence labels of the plurality of storage nodes;
and if so, taking the first storage node as a main node.
4. The method of claim 3, wherein prior to obtaining the sequence tags for the plurality of storage nodes, the method further comprises:
counting the number of storage nodes with storage service in an externally available state;
and if the number is larger than a set number threshold, acquiring sequence tags of a plurality of storage nodes.
5. The method of claim 3, wherein after the first storage node is the master node, the method further comprises:
receiving identification information of a node to be deleted issued by a management terminal;
judging whether the node to be deleted is in an online state or not according to the identification information of the node to be deleted;
and if not, the main node executes the operation of the node to be deleted.
6. The method of claim 5, further comprising:
when the node to be deleted is in an online state, judging whether the node to be deleted provides storage service for the outside;
and if so, sending a data transfer instruction to the node to be deleted so that the node to be deleted stores the data which is not stored into other storage nodes, wherein the other storage nodes are storage nodes which are in an externally available state except the node to be deleted.
7. The method of claim 5, wherein the master node performing the operation of deleting the node to be deleted comprises:
and sending a storage service data migration instruction to the node to be deleted, so that the node to be deleted migrates the storage service data in the node to be deleted to other storage nodes according to the storage service data migration instruction, wherein the other storage nodes are storage nodes which are in an externally available state except the node to be deleted.
8. The method of claim 5, wherein after establishing the communication connection between the first storage node and the second storage node, the method further comprises:
receiving the resource view of the node to be deleted sent by the node to be deleted based on the communication connection, and storing the resource view of the node to be deleted locally;
after the master node executes the operation of the node to be deleted, the method further comprises the following steps:
and deleting the resource view of the node to be deleted.
9. A node deployment apparatus, comprising:
the system comprises a receiving module, a deployment module and a management module, wherein the receiving module is used for receiving a deployment instruction sent by a user through a management terminal, and the deployment instruction carries a deployment parameter requirement;
the verification module is used for verifying whether the environment resources of the first storage node meet the deployment parameter requirements, wherein the environment resources comprise state information of actual physical resources in the first storage node; if so, generating a resource view according to the environment resources of a first storage node, and setting the storage service on the first storage node to be in an externally available state;
and the sending module is used for sending the resource view to a management terminal so that the management terminal can carry out state management on the actual physical resources in the first storage node according to the resource view.
10. A node deployment system, comprising: the management terminal and a plurality of storage nodes which are in communication connection with the management terminal;
the management terminal is used for responding to the input operation of a user in the node information input item and acquiring the information of a plurality of storage nodes needing to be serially deployed for creating the distributed storage system; responding to a storage node deployment operation sent by a user, and sequentially sending deployment instructions to the plurality of storage nodes, wherein the deployment instructions carry deployment parameter requirements; receiving resource views of a plurality of storage nodes, and performing state management on actual physical resources of the plurality of storage nodes according to the resource views;
each storage node is used for receiving the deployment instruction sent by the management terminal; verifying whether its own environment resources meet the deployment parameter requirements, wherein the environment resources comprise state information of the actual physical resources in the storage node; if so, generating a resource view according to its own environment resources; and setting its own storage service to an available state and reporting the resource view to the management terminal.
11. A node deployment apparatus, comprising: a memory and a processor;
the memory for storing a computer program;
the processor is configured to execute the computer program for implementing the steps in the node deployment method according to any of claims 1 to 8.
12. A computer-readable storage medium storing a computer program, which when executed by one or more processors causes the one or more processors to perform the steps of the node deployment method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110384901.5A CN113138717B (en) | 2021-04-09 | 2021-04-09 | Node deployment method, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110384901.5A CN113138717B (en) | 2021-04-09 | 2021-04-09 | Node deployment method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113138717A true CN113138717A (en) | 2021-07-20 |
CN113138717B CN113138717B (en) | 2022-11-11 |
Family
ID=76811140
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110384901.5A Active CN113138717B (en) | 2021-04-09 | 2021-04-09 | Node deployment method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113138717B (en) |
2021-04-09: CN CN202110384901.5A, patent CN113138717B (en), status: Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1955912A (en) * | 2006-10-13 | 2007-05-02 | 清华大学 | Method for distributing resource in large scale storage system |
US20080192643A1 (en) * | 2007-02-13 | 2008-08-14 | International Business Machines Corporation | Method for managing shared resources |
CN103997513A (en) * | 2014-04-21 | 2014-08-20 | 北京邮电大学 | Programmable virtual network service system |
CN108345651A (en) * | 2018-01-22 | 2018-07-31 | 广州欧赛斯信息科技有限公司 | A kind of data integrated system and method for realizing the data virtualization to interconnect |
CN108616566A (en) * | 2018-03-14 | 2018-10-02 | 华为技术有限公司 | Raft distributed systems select main method, relevant device and system |
CN108549580A (en) * | 2018-03-30 | 2018-09-18 | 平安科技(深圳)有限公司 | Methods and terminal device of the automatic deployment Kubernetes from node |
CN110366056A (en) * | 2018-04-09 | 2019-10-22 | 中兴通讯股份有限公司 | A kind of implementation method, device, equipment and the storage medium of ASON business model |
CN110209602A (en) * | 2019-05-17 | 2019-09-06 | 北京航空航天大学 | Region division and space allocation method in cross-domain virtual data space |
CN110493357A (en) * | 2019-09-16 | 2019-11-22 | 深圳市网心科技有限公司 | A kind of calculation resource disposition method, system, device and computer storage medium |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115118779A (en) * | 2022-06-24 | 2022-09-27 | 济南浪潮数据技术有限公司 | Method, system, device and medium for building cluster based on centralized storage |
Also Published As
Publication number | Publication date |
---|---|
CN113138717B (en) | 2022-11-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111800281B (en) | Network system, management and control method, equipment and storage medium | |
WO2020207266A1 (en) | Network system, instance management method, device, and storage medium | |
CN113342478B (en) | Resource management method, device, network system and storage medium | |
CN116170317A (en) | Network system, service providing and resource scheduling method, device and storage medium | |
CN113301078B (en) | Network system, service deployment and network division method, device and storage medium | |
CN113726846A (en) | Edge cloud system, resource scheduling method, equipment and storage medium | |
CN107566165B (en) | Method and system for discovering and deploying available resources of power cloud data center | |
CN112835688A (en) | Distributed transaction processing method, device and storage medium | |
CN113296903A (en) | Edge cloud system, edge control method, control node and storage medium | |
CN111796838B (en) | Automatic deployment method and device for MPP database | |
CN111858050B (en) | Server cluster hybrid deployment method, cluster management node and related system | |
CN110908774A (en) | Resource scheduling method, device, system and storage medium | |
CN113553140B (en) | Resource scheduling method, equipment and system | |
CN112565317A (en) | Hybrid cloud system, data processing method and device thereof, and storage medium | |
JP7161560B2 (en) | Artificial intelligence development platform management method, device, medium | |
CN106790403B (en) | Method for realizing mobile cloud computing intermediate platform and method for realizing distribution | |
CN113138717B (en) | Node deployment method, device and storage medium | |
CN114996134A (en) | Containerized deployment method, electronic equipment and storage medium | |
CN111813625A (en) | Health check method and device for distributed server cluster | |
CN107770030B (en) | Stage equipment control system, control method and control device based on VPN technology | |
CN114598665A (en) | Resource scheduling method and device, computer readable storage medium and electronic equipment | |
CN113300866B (en) | Node capacity control method, device, system and storage medium | |
CN111431951B (en) | Data processing method, node equipment, system and storage medium | |
EP3349416A1 (en) | Relationship chain processing method and system, and storage medium | |
CN118051344A (en) | Method and device for distributing hardware resources and hardware resource management system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||