CN1328877C - Shared thread implementation and scheduling method - Google Patents

Shared thread implementation and scheduling method

Info

Publication number
CN1328877C
CN1328877C, CNB011390808A, CN01139080A
Authority
CN
China
Prior art keywords
thread
ready
queue
state
event
Prior art date
Legal status
Expired - Fee Related
Application number
CNB011390808A
Other languages
Chinese (zh)
Other versions
CN1423456A (en)
Inventor
谭震 (Tan Zhen)
Current Assignee
CHINA TECHNOLOGY EXCHANGE CO., LTD.
State Grid Beijing Electric Power Co Ltd
State Grid Economic and Technological Research Institute
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp
Priority to CNB011390808A
Publication of CN1423456A
Application granted
Publication of CN1328877C
Anticipated expiration
Expired - Fee Related (current status)

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The present invention provides a thread implementation and scheduling method for a data processing platform. It relates to a shared-thread implementation and scheduling technique that avoids or reduces kernel thread switching, makes full use of processor capacity, and reduces system resource consumption. The method comprises the following steps: thread creation, which sets up the thread control block and performs thread binding; thread activation, which starts the thread; thread running, in which a received event switches the thread into the running state; thread scheduling, which handles thread state switching and the management of the ready and running queues; and thread exit, which clears the thread's data, releases the thread identifier, and adjusts the number of system threads. The method substantially reduces thread switching time, makes full use of system resources, and provides a very large number of threads without increasing system resource consumption, thereby reducing the threading overhead of multithreaded programming.

Description

Shared thread implementation and scheduling method
Technical field
The present invention relates to the implementation and scheduling of shared threads in a data processing platform. More specifically, it relates to a shared-thread implementation and scheduling method that avoids or reduces kernel thread switching, makes full use of processor capacity, and reduces system resource consumption.
Background technology
In a typical multitasking operating system, the memory space of the system is divided by hardware into user space and kernel space. User programs can only run in user space; when they need kernel services they must execute a special trap instruction. A traditional process-based multitasking system is built on this structure: all the information in the process control structure resides in kernel space, and every process switch involves accessing kernel space.
Multithreading allows a program to perform several tasks at the same time while sharing the resources of the process address space. Unlike a traditional multitasking operating system whose unit of scheduling is the process, a thread is a lightweight entity, and a considerable part of the thread structure can be accessed quickly in user space. Because of this, multithreading has been widely adopted, and writing multithreaded programs brings the following benefits: it exploits the concurrency of multiprocessors to obtain excellent performance; it improves the throughput and responsiveness of a program; it reduces inter-process communication overhead and uses system resources effectively; and the same program can run on a uniprocessor or a multiprocessor without modification.
Threads are implemented by writing a thread library, and there are currently two approaches. The first is to write a user-level library that relies on existing core functions, with the most important library calls executed in user space. The second is to write a kernel-level library in which most library calls run in kernel space and require system call support. Some implementations of the POSIX standard take the first approach, while OS/2 and Win32 threads take the second. Because the operating system kernel only understands the process structure (and, later, the lightweight process, LWP), one of the main tasks of a thread library is to bind threads in some way to processes or LWPs recognized by the kernel, so that the kernel can run them concurrently. As the number of threads grows, the system faces two problems. First, more threads consume more kernel binding entities (LWPs), and kernel resources are limited. Second, more threads mean more thread switches, each of which may incur the overhead of a kernel access; the time spent switching is pure overhead. For these reasons, practical use of multithreading is constrained: the number of threads cannot be too large, and once it reaches a certain point overall system performance stops improving and may even decline. At the same time, although each thread is an independent runnable entity, a thread is not working at every moment of its lifetime; a great deal of time is spent waiting for available resources, which is clearly wasteful.
Summary of the invention
The object of the present invention is to propose a shared-thread implementation and scheduling method that substantially reduces thread switching time, makes full use of system resources without increasing resource consumption, and provides a very large number of threads, so as to reduce the threading overhead of multithreaded programming.
The method involves two kinds of threads: user threads and system threads. User threads are the threads implemented by this method; system threads are the threading mechanism provided by other thread libraries, and user threads are built on top of system threads. Unless stated otherwise, "thread" below refers to a user thread.
A thread consists of several main parts: a thread identifier, a thread entry point, and an event receiving area. These parts are managed by a thread control block, which resides in user space. The implementation and scheduling of the whole thread are as follows:
A shared-thread implementation and scheduling method comprises the following steps:
1. Thread creation, which completes the setup of the thread control block and thread binding. Setting up the thread control block includes setting the thread entry point, allocating memory for the event receiving area, and initializing the thread state;
2. Thread activation;
3. Thread running: the thread enters the running state upon being triggered by an event and binds to a system thread; after the current event has been handled, it returns to the idle state and is unbound from the system thread;
4. Thread scheduling, which comprises the following steps:
A. The thread system maintains a ready-thread queue and a running-thread queue;
B. Threads are created by the main thread and by other threads;
C. When an idle thread is triggered by an event, the sender of the event, at the moment it sends the event, switches the receiving thread from the idle state to the ready state and puts it into the ready-thread queue;
D. System threads block waiting on the ready-thread queue; as soon as a thread enters the ready queue, a system thread binds to the ready thread in order and moves it into the running-thread queue; the thread's state changes to running, control is then transferred to the thread entry point, the event that triggered this run is passed in, and the thread runs;
E. After the thread has finished handling the current event, it checks whether other events remain in its event receiving area; if so, the thread moves from the running-thread queue back to the ready-thread queue; if not, the thread leaves the running-thread queue and its state becomes idle;
F. Every thread state switch is accompanied by a recalculation of the number of system threads: from the total number of threads, the number of ready threads, the number of running threads, the number of idle threads and the number of system threads, it is computed whether system threads need to be added or removed;
5. Thread exit;
Thread exit completes the clearing of the thread's data and the release of the thread identifier, and adjusts the number of system threads.
In the thread creation step of the shared-thread implementation and scheduling method, setting the thread entry point includes determining a unique thread identifier, defining the thread entry address, and specifying the thread entry parameters.
In the thread activation step, a thread may be activated during thread creation, or afterwards by the caller.
The event that triggers a thread to run may be a message from another thread, input from the outside world, or the thread activation itself.
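To make the five steps above concrete, the following is a minimal sketch of the interface such a shared-thread library could expose. It is an illustration only: every name (ulthread_create, ulthread_id_t, event_t and so on) is an assumption made for this sketch and is not taken from the patent.

```c
/* Illustrative interface sketch only; names and signatures are assumptions,
 * not the patented implementation. */
#include <stddef.h>

typedef int ulthread_id_t;                       /* unique user-thread identifier */
typedef struct { int type; void *data; } event_t;

/* Entry point invoked each time the thread is scheduled with one event. */
typedef void (*ulthread_entry_t)(ulthread_id_t self, event_t *ev, void *arg);

/* Step 1: creation -- set up the user-space thread control block,
 * the entry point and the event receiving area. */
ulthread_id_t ulthread_create(ulthread_entry_t entry, void *arg);

/* Step 2: activation -- let the thread react to events (initial -> idle/ready). */
int ulthread_activate(ulthread_id_t tid);

/* Step 3: running is event-driven; senders trigger a thread with this call. */
int ulthread_send_event(ulthread_id_t tid, const event_t *ev);

/* Step 5: exit -- clear the thread's data, release its identifier and let the
 * scheduler re-evaluate the number of system threads. */
int ulthread_exit(ulthread_id_t tid);
```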
The thread implementation and scheduling method proposed by the present invention, applied in a multithreaded system, has the following beneficial effects:
(1) The thread structure data reside in user space, and thread operations (creation, switching, exit) involve no access to kernel space, so they are fast.
(2) The number of threads is not limited by system resources, so a very large number of threads can be provided.
(3) Only threads in the running state occupy system-thread resources, and a thread releases its system thread as soon as possible after handling the current event, so system-thread resources are fully utilized.
(4) Over the threads' lifetimes, several threads share one system thread, so the number of system threads is much smaller than the number of threads; this reduces the time spent switching system threads and improves the running efficiency of the system.
Description of drawings
Fig. 1 is a schematic diagram of the thread control block structure of the present invention;
Fig. 2 is a schematic diagram of the thread state transitions of the present invention;
Fig. 3 is a schematic diagram of the thread scheduling of the present invention;
Fig. 4 is a schematic diagram of the thread running and scheduling flow of the present invention.
In the drawings, a denotes an idle thread, b a ready thread, c a running thread, d a blocked thread, and e a system thread.
Specific implementation method
The implementation and scheduling of the whole thread are specifically as follows:
1. Thread creation
Thread creation mainly completes the setup of the thread control block and thread binding. Setting up the thread control block includes setting the thread entry point (the entry point is the starting point of the thread's code when it is scheduled, and can be represented by the address of that starting point; setting the entry point means saving this address in the thread control block) and allocating and initializing the event receiving area (the event receiving area is a memory buffer; allocating it means obtaining a free block of memory and setting its contents to the initial state). All of this work is done entirely in user space and involves no access to kernel space, so it is very efficient. Thread binding mainly decides, according to current thread usage, whether a new system thread needs to be created to take part in thread scheduling. A corresponding primitive is provided externally for this operation.
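As one illustration of the creation step, the sketch below sets up a user-space thread control block using ordinary memory operations only, with no system call. The structure layout and every name are assumptions made for this sketch, not the patented code.

```c
/* Minimal user-space sketch of thread creation (step 1); all names are assumptions. */
#include <stdlib.h>

enum ulthread_state { TH_INIT, TH_IDLE, TH_READY, TH_RUNNING, TH_BLOCKED, TH_CLOSED };

typedef struct { int type; void *data; } event_t;

typedef struct ulthread_tcb {
    int                 tid;          /* unique thread identifier                */
    void              (*entry)(struct ulthread_tcb *, event_t *); /* entry point */
    void               *entry_arg;    /* parameter passed in at creation         */
    event_t            *event_buf;    /* event receiving area (FIFO buffer)      */
    size_t              ev_head, ev_tail, ev_cap;
    enum ulthread_state state;        /* see Fig. 2                              */
} ulthread_tcb;

static int next_tid = 1;              /* simplistic unique-identifier allocator  */

/* Everything below happens in user space; no kernel access is needed. */
ulthread_tcb *ulthread_create(void (*entry)(ulthread_tcb *, event_t *), void *arg)
{
    ulthread_tcb *t = calloc(1, sizeof *t);
    if (!t) return NULL;
    t->tid       = next_tid++;        /* determine a unique thread identifier    */
    t->entry     = entry;             /* thread entry point setup                */
    t->entry_arg = arg;               /* thread entry parameter                  */
    t->ev_cap    = 16;
    t->event_buf = calloc(t->ev_cap, sizeof *t->event_buf); /* event receiving area */
    if (!t->event_buf) { free(t); return NULL; }
    t->state     = TH_INIT;           /* thread state initialization             */
    return t;
}
```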
2. Thread activation
After creation a thread is in the initial state and needs an event trigger before it can really work. This initial trigger is called thread activation; before activation a thread does not handle received events. Activation can be done during thread creation, or explicitly by the caller afterwards. Activation is a thread-state switch (from the initial state to another state).
3. Thread running
A thread runs when triggered by an event. Events can be of many kinds, mainly messages between threads and input from the outside world; the thread activation described above is also an event. When triggered by an event, the thread enters the running state and binds to a system thread; after the current event has been handled, it returns to the idle state and is unbound from the system thread.
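The sketch below illustrates, under the assumption of a mutex-protected first-in-first-out ready queue, how a sender might deliver an event and move an idle receiving thread to the ready state. All identifiers and the locking scheme are assumptions made for the illustration.

```c
/* Sketch of event-driven triggering (step 3); names are assumptions. */
#include <pthread.h>

enum state { TH_IDLE, TH_READY, TH_RUNNING };

typedef struct { int type; } event_t;

typedef struct uthread {
    enum state      st;
    event_t         inbox[16];        /* event receiving area (ring buffer)   */
    unsigned        head, tail;
    struct uthread *next;             /* link inside the ready-thread queue   */
} uthread;

static uthread        *ready_head, *ready_tail;   /* FIFO ready-thread queue  */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  have_ready = PTHREAD_COND_INITIALIZER;

void send_event(uthread *t, event_t ev)
{
    pthread_mutex_lock(&lock);
    t->inbox[t->tail++ % 16] = ev;    /* put the event into the receiving area */
    if (t->st == TH_IDLE) {           /* first pending event: idle -> ready    */
        t->st = TH_READY;
        t->next = NULL;
        if (ready_tail) ready_tail->next = t; else ready_head = t;
        ready_tail = t;
        pthread_cond_signal(&have_ready);   /* wake one blocked system thread  */
    }
    pthread_mutex_unlock(&lock);
}
```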
4. Thread scheduling
Thread scheduling is the main part of this method. It mainly performs the following functions: switching thread states, managing the ready-thread queue, managing the running-thread queue, and the corresponding enqueue and dequeue operations. The thread scheduling proposed here is not performed by a single independent entity; all the running threads in the thread system jointly carry out the scheduling of the whole threading mechanism. The specific work is as follows:
(1) The thread system maintains two data structures: the ready-thread queue and the running-thread queue. Maintaining them is purely a matter of data operations, namely adding data to a queue or pool and taking data out of a queue or pool; a queue preserves the order in which data were enqueued, following the first-in-first-out principle, whereas a pool does not need to maintain such an order.
(2) Each thread is created, by the main thread (the system thread inherent to the process) or by other threads, using the thread-creation primitive.
(3) When an idle thread is triggered by an event, the sender of the event, at the moment it sends the event (that is, puts the event into the receiving thread's event receiving area), switches the receiving thread from the idle state to the ready state and puts it into the ready-thread queue.
(4) System threads block waiting on the ready-thread queue. As soon as a thread enters the ready queue, a system thread binds to it, moves it into the running-thread queue and changes its state to running; control is then transferred to the thread entry point, the event that triggered this run is passed in, and the thread runs (handles the event). A sketch of this worker loop is given after this list.
(5) After the thread has finished handling the current event, it checks whether other events remain in its event receiving area. If so, the thread moves from the running-thread queue back to the ready-thread queue; if not, the thread leaves the running-thread queue and its state becomes idle.
(6) Every thread state switch is accompanied by a recalculation of the number of system threads: from the total number of threads, the number of ready threads, the number of running threads, the number of idle threads and the number of system threads, it is computed whether system threads need to be added or removed.
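The sketch below, referred to in step (4), shows one possible shape of the system-thread worker loop covering steps (4) to (6), assuming POSIX threads are used as the system threads. All identifiers are assumptions made for the illustration, and the recalculation of the system-thread count in step (6) is only indicated by a comment.

```c
/* Sketch of a system-thread worker loop (steps (4)-(6)); names are assumptions. */
#include <pthread.h>

enum state { TH_IDLE, TH_READY, TH_RUNNING };
typedef struct { int type; } event_t;

typedef struct uthread {
    enum state      st;
    void          (*entry)(struct uthread *, event_t *);  /* thread entry point   */
    event_t         inbox[16];                             /* event receiving area */
    unsigned        head, tail;
    struct uthread *next;
} uthread;

static uthread        *ready_head, *ready_tail;            /* FIFO ready queue */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  have_ready = PTHREAD_COND_INITIALIZER;

static void enqueue_ready(uthread *t)
{
    t->next = NULL;
    if (ready_tail) ready_tail->next = t; else ready_head = t;
    ready_tail = t;
}

void *system_thread_worker(void *unused)
{
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!ready_head)                     /* (4) block on the ready queue   */
            pthread_cond_wait(&have_ready, &lock);
        uthread *t = ready_head;                /* bind to the thread at the head */
        ready_head = t->next;
        if (!ready_head) ready_tail = NULL;
        t->st = TH_RUNNING;
        event_t ev = t->inbox[t->head++ % 16];  /* take the triggering event      */
        pthread_mutex_unlock(&lock);

        t->entry(t, &ev);                       /* run the user thread            */

        pthread_mutex_lock(&lock);
        if (t->head != t->tail) {               /* (5) more events pending?       */
            t->st = TH_READY;
            enqueue_ready(t);                   /* back to the ready-thread queue */
        } else {
            t->st = TH_IDLE;                    /* unbind and become idle         */
        }
        /* (6) here the thread counts (total, ready, running, idle, system) would
         * be re-evaluated to decide whether to add or remove system threads.     */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}
```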
5. Thread exit
Thread exit completes the clearing of the thread's data and the release of the thread identifier, and adjusts the number of system threads.
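A small sketch of the exit step under the same assumptions: the thread's data are cleared, its identifier is released for reuse, and the scheduler is asked to re-evaluate the number of system threads. The helper functions are hypothetical stubs.

```c
/* Sketch of thread exit (step 5); names and helpers are assumptions. */
#include <stdlib.h>

typedef struct { int tid; void *event_buf; } ulthread_tcb;

static void release_tid(int tid)            { (void)tid; /* mark the id reusable */ }
static void recompute_system_threads(void)  { /* re-derive the system-thread count */ }

void ulthread_exit(ulthread_tcb *t)
{
    free(t->event_buf);                      /* clear the thread's data             */
    release_tid(t->tid);                     /* release the thread identifier       */
    free(t);
    recompute_system_threads();              /* adjust the number of system threads */
}
```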
As shown in Fig. 1, the thread control block comprises the thread identifier, the thread entry address, the thread entry parameters, the address of the event receiving area, the thread state, the thread attributes and other thread data. Each part is described as follows:
The thread identifier uniquely identifies a thread; it stands in for the thread entity in queue operations and can be used for addressing when messages are sent between threads. Thread identifiers must be unique within the process, which can be achieved by a unified allocation mechanism; after a thread exits, its identifier is released and may be reused.
The thread entry address is used to pass control in when the thread runs; the form of the entry point can be defined freely.
The thread entry parameters are used to pass in the parameters specified when the thread was created.
The address of the event receiving area locates the buffer that stores the events which trigger the thread to run; the receiving area preserves the order in which events arrive, guaranteeing that events arriving earlier are handled earlier.
The thread state specifies the behavior of the thread at each stage of its life cycle; the states can be cycled through and switched, as described in detail in the explanation of Fig. 2.
The thread attributes describe some inherent properties of the thread; they are specified at thread creation, and some may be modified while the thread runs.
Other thread data: this part can be defined freely by a concrete implementation.
Fig. 2 describes the states a thread can have over its whole lifetime and the transitions between them. Over its lifetime a thread can be in one of the following states: initial, idle, ready, running, blocked, closed.
Initial state: a thread is in this state right after creation. A thread in the initial state does not run, and only the thread activation operation can take it out of the initial state. If at activation there is no event in the thread's event receiving area, the state changes to idle; if at activation there are events in the receiving area, the state changes to ready.
Idle state: this state indicates that there is no event in the thread's event receiving area. It changes to the ready state when an event triggers the thread, and to the closed state when the thread is terminated from outside.
Ready state: this state indicates that the thread has been triggered by an event after activation; the thread enters the ready-thread queue and waits to run. When the thread reaches the head of the queue and is bound to a system thread, the state changes to running; if the thread is terminated from outside while waiting to run, it changes to the closed state.
Running state: a thread enters the running state after being bound to a system thread. When it finishes running, it changes to the idle state if no events remain to be handled, otherwise to the ready state.
Blocked state: when a running thread makes a blocking call, its state changes to blocked. When the blocking ends it re-enters the running state, unless the thread exits voluntarily or is terminated from outside, in which case it switches to the closed state.
Closed state: when a thread is terminated or exits within its lifetime, it switches to the closed state.
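To restate Fig. 2 in code form, the sketch below models the six states and the activation transition out of the initial state, where the destination depends on whether events are already pending. The enum and function names are assumptions made for the illustration.

```c
/* Sketch of the Fig. 2 state machine; names are assumptions. */
#include <stdbool.h>

enum ulthread_state { TH_INIT, TH_IDLE, TH_READY, TH_RUNNING, TH_BLOCKED, TH_CLOSED };

typedef struct {
    enum ulthread_state state;
    int pending_events;                 /* events waiting in the receiving area */
} ulthread;

/* Activation: initial -> idle when no event is pending, initial -> ready otherwise. */
bool ulthread_activate(ulthread *t)
{
    if (t->state != TH_INIT)
        return false;                   /* only a thread in the initial state can be activated */
    t->state = (t->pending_events > 0) ? TH_READY : TH_IDLE;
    return true;
}
```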
Fig. 3 shows the scheduling and operation of threads, specifically as follows:
When a thread is created it is in the initial state; after activation it enters the scheduling flow. If there is currently no event trigger, the thread stays in the idle state (a in Fig. 3), separated from the system threads (e in Fig. 3).
When an event triggers an idle thread, the idle thread is appended to the ready-thread queue (b in Fig. 3) and waits in line to be bound.
One or more system threads are blocked on the ready-thread queue (d in Fig. 3). When a thread is present in the ready queue, one system thread is woken up and bound to it, the thread is moved into the running-thread queue, and the thread runs (c in Fig. 3).
When the thread returns after handling the current event, it is removed from the running-thread queue; depending on whether it has further events, its state is switched to idle or ready, and it is unbound from the system thread.
If a running thread makes a blocking call, it is marked as blocked; this mark is used in operations such as external termination.
If a running thread requests to exit or is terminated from outside, it is unbound from the system thread it is bound to and terminated according to its running situation.
From the thread scheduling described in Fig. 3 it can be clearly seen how threads operate. First, thread scheduling takes place at each state switch and never while a thread is running, so scheduling is done at user level and involves no system-thread switching, which makes it very safe. Second, thread scheduling is closely tied to the triggering of events; one can say that thread scheduling is event-driven. Third, thread scheduling involves binding threads to, and unbinding them from, system threads, and this is done dynamically.

Claims (4)

1. A shared-thread implementation and scheduling method, characterized in that the shared-thread implementation and scheduling method comprises the following steps:
(1) thread creation, which completes the setup of the thread control block and thread binding, the setup of the thread control block comprising setting the thread entry point, allocating memory for the event receiving area, and initializing the thread state;
(2) thread activation;
(3) thread running: the thread enters the running state upon being triggered by an event and binds to a system thread; after the current event has been handled, it returns to the idle state and is unbound from the system thread;
(4) thread scheduling, comprising the following steps:
A. the thread system maintains a ready-thread queue and a running-thread queue;
B. threads are created by the main thread and by other threads;
C. when an idle thread is triggered by an event, the sender of the event, at the moment it sends the event, switches the receiving thread from the idle state to the ready state and puts it into the ready-thread queue;
D. system threads block waiting on the ready-thread queue; as soon as a thread enters the ready queue, a system thread binds to the ready thread in order and moves it into the running-thread queue; the thread state changes to running, control is then transferred to the thread entry point, the event that triggered this run is passed in, and the thread runs;
E. after the thread has finished handling the current event, it checks whether other events remain in its event receiving area; if so, the thread moves from the running-thread queue back to the ready-thread queue; if not, the thread leaves the running-thread queue and its state becomes idle;
F. every thread state switch is accompanied by a recalculation of the number of system threads: from the total number of threads, the number of ready threads, the number of running threads, the number of idle threads and the number of system threads, it is computed whether system threads need to be added or removed;
(5) thread exit;
thread exit completes the clearing of the thread's data and the release of the thread identifier, and adjusts the number of system threads.
2. The shared-thread implementation and scheduling method according to claim 1, characterized in that in thread creation the setting of the thread entry point comprises: determining a unique thread identifier, defining the thread entry address, and specifying the thread entry parameters.
3. The shared-thread implementation and scheduling method according to claim 1, characterized in that in thread activation the thread may be activated during thread creation, or afterwards by the caller.
4. The shared-thread implementation and scheduling method according to claim 1, characterized in that the event that triggers a thread to run may be a message from another thread, information input from the outside world, or the thread activation.
CNB011390808A 2001-12-03 2001-12-03 Shared thread implementation and scheduling method Expired - Fee Related CN1328877C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB011390808A CN1328877C (en) 2001-12-03 2001-12-03 Shared thread implementation and scheduling method

Publications (2)

Publication Number Publication Date
CN1423456A CN1423456A (en) 2003-06-11
CN1328877C 2007-07-25

Family

ID=4675009

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB011390808A Expired - Fee Related CN1328877C (en) 2001-12-03 2001-12-03 Shared thread implementation and scheduling method

Country Status (1)

Country Link
CN (1) CN1328877C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109254834A (en) * 2017-07-13 2019-01-22 普天信息技术有限公司 A kind of multithreading starting synchronous method

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7152152B2 (en) * 2004-01-22 2006-12-19 International Business Machines Corporation Method of avoiding flush due to store queue full in a high frequency system with a stall mechanism and no reject mechanism
WO2007073628A1 (en) * 2005-12-29 2007-07-05 Intel Corporation High performance queue implementations in multiprocessor systems
CN101291233B (en) * 2007-04-20 2011-04-20 华为技术有限公司 Method and system realizing event detection
CN101299677B (en) * 2008-04-30 2010-12-01 中兴通讯股份有限公司 Method for sharing unity service course by multiple courses
CN101561764B (en) * 2009-05-18 2012-05-23 华为技术有限公司 Patching method and patching device under multi-core environment
CN103092682B (en) * 2011-10-28 2016-09-28 浙江大华技术股份有限公司 Asynchronous network applications program processing method
CN102981901A (en) * 2012-11-19 2013-03-20 北京思特奇信息技术股份有限公司 Method and device for processing connection request
CN103179294B (en) * 2013-03-12 2015-04-08 厦门亿联网络技术股份有限公司 Method for preventing task congestion in VOIP (voice over internet protocols) phone through thread pool
JP6387747B2 (en) * 2013-09-27 2018-09-12 日本電気株式会社 Information processing apparatus, failure avoidance method, and computer program
US9515901B2 (en) 2013-10-18 2016-12-06 AppDynamics, Inc. Automatic asynchronous handoff identification
CN108052392B (en) * 2017-12-26 2020-12-25 成都质数斯达克科技有限公司 Service processing method and device based on block chain
CN110119267B (en) * 2019-05-14 2020-06-02 重庆八戒电子商务有限公司 Method and device for improving performance of relation programming
CN112579515B (en) * 2019-09-27 2023-03-24 Oppo广东移动通信有限公司 Thread message processing method and related product
CN115225633B (en) * 2022-06-24 2024-04-12 浪潮软件集团有限公司 State machine state transition method and system based on opposite-end network signal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6058460A (en) * 1996-06-28 2000-05-02 Sun Microsystems, Inc. Memory allocation in a multithreaded environment
US6223204B1 (en) * 1996-12-18 2001-04-24 Sun Microsystems, Inc. User level adaptive thread blocking

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multithreading technology, Li Xinming and Li Yi, Mini-Micro Systems, Vol. 19, No. 2, 1998 *
Implementation methods of multithreading, Zhang Lixia, Journal of Henan Normal University, Vol. 29, No. 2, 2001 *
Implementation methods of multithreading, Zhang Lixia, Journal of Henan Normal University, Vol. 29, No. 2, 2001; Multithreading technology, Li Xinming and Li Yi, Mini-Micro Systems, Vol. 19, No. 2, 1998 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109254834A (en) * 2017-07-13 2019-01-22 普天信息技术有限公司 A kind of multithreading starting synchronous method
CN109254834B (en) * 2017-07-13 2021-05-14 普天信息技术有限公司 Multithreading starting synchronization method

Also Published As

Publication number Publication date
CN1423456A (en) 2003-06-11

Similar Documents

Publication Publication Date Title
CN1328877C (en) 2007-07-25 Shared thread implementation and scheduling method
US20220214930A1 (en) Systems and Methods for Performing Concurrency Restriction and Throttling over Contended Locks
CN102541653B (en) Method and system for scheduling multitasking thread pools
CN1117319C (en) Method and apparatus for altering thread priorities in multithreaded processor
US20020103847A1 (en) Efficient mechanism for inter-thread communication within a multi-threaded computer system
CN100578459C (en) Method and apparatus of thread scheduling
US8321874B2 (en) Intelligent context migration for user mode scheduling
CN108920267A (en) Task Processing Unit
JP2003518286A (en) System and method for providing a pool of reusable threads for performing queued work items
CN1801101A (en) Thread implementation and thread state switching method in Java operation system
CN101799773A (en) Memory access method of parallel computing
CN100342342C (en) Java virtual machine implementation method supporting multi-process
CN107491346A (en) A kind of task processing method of application, apparatus and system
CN110471777A (en) Multiple users share uses the method and system of Spark cluster in a kind of Python-Web environment
US20050066149A1 (en) Method and system for multithreaded processing using errands
Govindarajan et al. Design and performance evaluation of a multithreaded architecture
CN1327349C (en) Task level resource administration method for micro-kernel embedded real-time operation systems
CN100383743C (en) Real-time task scheduling method in Java operating system
CN1333343C (en) Concurrent operation of a state machine family
CN111651279A (en) Method and system for processing business process
CN113485812B (en) Partition parallel processing method and system based on large-data-volume task
CN112346835B (en) Scheduling processing method and system based on coroutine
CN101266556A (en) Multitask scheduling system
CN1225105C (en) Call processing system adapted for application server and its realizing method
CN102609306B (en) Method for processing video processing tasks by aid of multi-core processing chip and system using method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
ASS Succession or assignment of patent right

Owner name: SHENZHEN CITY ZTE CO., LTD.

Free format text: FORMER OWNER: SHENZHEN CITY ZTE CO., LTD. SHANGHAI SECOND INSTITUTE

Effective date: 20030723

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20030723

Applicant after: Zhongxing Communication Co., Ltd., Shenzhen City

Applicant before: Shanghai Inst. of No.2, Zhongxing Communication Co., Ltd., Shenzhen City

C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: STATE GRID BEIJING ELECTRIC POWER COMPANY CHINA TE

Effective date: 20140129

Owner name: BEIJING POWER ECONOMIC RESEARCH INSTITUTE

Free format text: FORMER OWNER: ZTE CORPORATION

Effective date: 20140129

COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 518057 SHENZHEN, GUANGDONG PROVINCE TO: 100055 XICHENG, BEIJING

TR01 Transfer of patent right

Effective date of registration: 20140129

Address after: 100055 No. 15 West Street, Guanganmen station, Beijing, Xicheng District

Patentee after: State Power Economic Research Institute

Patentee after: State Grid Beijing Electric Power Company

Patentee after: CHINA TECHNOLOGY EXCHANGE CO., LTD.

Address before: 518057 Department of law, Zhongxing building, South Science and technology road, Nanshan District hi tech Industrial Park, Shenzhen

Patentee before: ZTE Corporation

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070725

Termination date: 20141203

CF01 Termination of patent right due to non-payment of annual fee