US20170294091A1 - Video-based action recognition security system - Google Patents
Video-based action recognition security system
- Publication number
- US20170294091A1 US15/479,430 US201715479430A
- Authority
- US
- United States
- Prior art keywords
- attention
- processing system
- frame
- action
- live video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19695—Arrangements wherein non-video detectors start video recording or forwarding but do not generate an alarm themselves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24143—Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
-
- G06K9/00711—
-
- G06K9/00771—
-
- G06K9/6256—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- G06K2009/00738—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/045—Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19613—Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
Definitions
- the present invention generally relates to video-based recognition and more particularly to video-based action recognition in a monitoring system.
- Video-based action recognition is a core component of intelligent monitoring systems for many applications, such as public safety monitoring, shopping center and factory surveillance, and home security.
- Real-time action recognition based on video sequences produced by surveillance cameras not only detects the type of action of interest, but also detects the start and end of the searched action, which often contains a sequence of action progression stages or sub-actions, as well as the most relevant time-dependent regions within video frames.
- Previous approaches to action recognition mainly fall into the following two categories: A) Feature engineering based on individual video frames by handcrafting features from each video frame and tracking them based on displacement information from an optical flow field, and B) Machine learning approaches without considering complex long-range temporal dependencies by extracting features using convolutional neural networks (CNNs) or recurrent neural networks (RNNs), and then using standard classifiers or RNNs for action prediction without attention or with only between-frame attention.
- a video monitoring system includes a camera.
- the camera is positioned to monitor an area and capture live video to provide a live video stream.
- the video monitoring system further includes a security processing system.
- the security processing system includes a processor and memory coupled to the processor.
- the security processing system is programmed to detect and identify a target action sequence in the live video stream using a multi-layer deep long short-term memory process on an attention factor that is based on a within-frame attention and a between-frame attention.
- the security processing system is further programmed to trigger an action to alert that a target action sequence has been detected.
- a computer-implemented method for home security.
- the method includes monitoring an area with a camera.
- the method further includes capturing, by the camera, live video to provide a live video stream.
- the method also includes detecting and identifying, by a processor, a target action sequence in the live video stream using a multi-layer deep long short-term memory process on an attention factor that is based on a within-frame attention and a between-frame attention.
- the method additionally includes triggering, by the processor, an action to alert that a target action sequence has been detected.
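The claimed method can be sketched as a monitoring loop over a sliding window of frames. All names here (monitor, detect_action, alert) are hypothetical stand-ins; the patent does not specify an API, and the deep 3D attention LSTM detector is abstracted as a callback:

```python
from collections import deque

def monitor(frame_source, detect_action, alert, window=16):
    """Slide a fixed-length window over the live video stream and trigger
    an alert whenever the detector reports a target action sequence."""
    buf = deque(maxlen=window)
    for frame in frame_source:
        buf.append(frame)
        if len(buf) == window:
            label = detect_action(list(buf))  # e.g. the deep 3D attention LSTM
            if label is not None:
                alert(label)

# toy usage: integers stand in for frames, a lambda stands in for the detector
alerts = []
monitor(range(20),
        lambda clip: "intrusion" if clip[-1] == 19 else None,
        alerts.append,
        window=4)
```

In a real deployment, `frame_source` would be the camera's live video stream and `alert` would drive the speaker, light, or control-center notification described later in the disclosure.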
- FIG. 1 shows a block diagram of an exemplary processing system to which the present invention may be applied, in accordance with an embodiment of the present invention
- FIG. 2 shows a block diagram of an exemplary environment to which the present invention can be applied, in accordance with an embodiment of the present invention
- FIG. 3 shows a high-level block/flow diagram of an exemplary high-order convolutional neural network method, in accordance with an embodiment of the present invention
- FIG. 4 is a flow diagram illustrating a method for video based action recognition, in accordance with an embodiment of the present invention
- FIG. 5 shows a high-level block/flow diagram of a deep 3D attention recurrent neural network method, in accordance with an embodiment of the present invention
- FIG. 6 shows a block/flow diagram of a deep 3D attention recurrent neural network method, in accordance with an embodiment of the present invention
- FIG. 7 shows a block/flow diagram of a video monitoring system, in accordance with an embodiment of the present invention.
- FIG. 8 is a flow diagram illustrating a method for video monitoring, in accordance with an embodiment of the present invention.
- Target actions may include an intruder entering a restricted area, a confined animal escaping an enclosure, or a piece of machinery malfunctioning and endangering people or property in the machinery's vicinity. It is to be understood that the target actions listed and described herein are only illustrative of the principles of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention.
- FIG. 1 shows a block diagram of an exemplary processing system 100 to which the invention principles may be applied, in accordance with an embodiment of the present invention.
- the processing system 100 includes at least one processor (CPU) 104 operatively coupled to other components via a system bus 102 .
- a cache 106, a Read Only Memory (ROM), a Random Access Memory (RAM), an input/output (I/O) adapter 120, a sound adapter 130, a network adapter 140, a user interface adapter 150, and a display adapter 160 are operatively coupled to the system bus 102 .
- a first storage device 122 and a second storage device 124 are operatively coupled to system bus 102 by the I/O adapter 120 .
- the storage devices 122 and 124 can be any of a disk storage device (e.g., a magnetic or optical disk storage device), a solid state magnetic device, and so forth.
- the storage devices 122 and 124 can be the same type of storage device or different types of storage devices.
- a speaker 132 is operatively coupled to system bus 102 by the sound adapter 130 .
- the speaker 132 can be used to provide an audible alarm or some other indication relating to the present invention.
- a transceiver 142 is operatively coupled to system bus 102 by network adapter 140 .
- a display device 162 is operatively coupled to system bus 102 by display adapter 160 .
- a first user input device 152 , a second user input device 154 , and a third user input device 156 are operatively coupled to system bus 102 by user interface adapter 150 .
- the user input devices 152 , 154 , and 156 can be any of a keyboard, a mouse, a keypad, an image capture device, a motion sensing device, a microphone, a device incorporating the functionality of at least two of the preceding devices, and so forth. Of course, other types of input devices can also be used, while maintaining the spirit of the present invention.
- the user input devices 152 , 154 , and 156 can be the same type of user input device or different types of user input devices.
- the user input devices 152 , 154 , and 156 are used to input and output information to and from system 100 .
- processing system 100 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements.
- various other input devices and/or output devices can be included in processing system 100 , depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
- various types of wireless and/or wired input and/or output devices can be used,
- additional processors, controllers, memories, and so forth, in various configurations can also be utilized as readily appreciated by one of ordinary skill in the art.
- environment 200 described below with respect to FIG. 2 is an environment for implementing respective embodiments of the present invention. Part or all of processing system 100 may be implemented in one or more of the elements of environment 200 .
- processing system 100 may perform at least part of the method described herein including, for example, at least part of method 300 of FIG. 3 and/or at least part of method 400 of FIG. 4 and/or at least part of method 500 of FIG. 5 and/or at least part of method 600 of FIG. 6 and/or at least part of method 800 of FIG. 8 .
- part or all of system 200 may be used to perform at least part of method 300 of FIG. 3 and/or at least part of method 400 of FIG. 4 and/or at least part of method 500 of FIG. 5 and/or at least part of method 600 of FIG. 6 and/or at least part of method 800 of FIG. 8 .
- FIG. 2 shows an exemplary environment 200 to which the present invention can be applied, in accordance with an embodiment of the present invention.
- the environment 200 is representative of a computer network to which the present invention can be applied.
- the elements shown relative to FIG. 2 are set forth for the sake of illustration. However, it is to be appreciated that the present invention can be applied to other network configurations as readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein, while maintaining the spirit of the present invention.
- the environment 200 at least includes a set of computer processing systems 210 .
- the computer processing systems 210 can be any type of computer processing system including, but not limited to, servers, desktops, laptops, tablets, smart phones, media playback devices, and so forth.
- the computer processing systems 210 include server 210 A, server 210 B, and server 210 C.
- the present invention can perform a deep 3D attention recurrent neural network method on any of the computer processing systems 210 .
- any of the computer processing systems 210 can perform video analysis that can be stored in, or accessed by, any of the computer processing systems 210 .
- the output (including active video segments) of the present invention can be used to control other systems and/or devices and/or operations and/or so forth, as readily appreciated by one of ordinary skill in the art given the teachings of the present invention provided herein, while maintaining the spirit of the present invention.
- the elements thereof are interconnected by a network(s) 201 .
- a network(s) 201 may be implemented by a variety of devices, which include but are not limited to, Digital Signal Processing (DSP) circuits, programmable processors, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), and so forth.
- FIG. 3 shows a high-level block/flow diagram of an exemplary high-order convolutional neural network method 300 , in accordance with an embodiment of the present invention.
- step 310 receive an input image 311 .
- step 320 perform convolutions on the input image 311 to obtain maps 321 .
- step 330 perform sub-sampling on the high-order feature maps 321 to obtain a set of maps 331 .
- step 340 perform convolutions on the set of maps 331 to obtain another set of maps 341 .
- step 350 perform sub-sampling on the other set of maps 341 to obtain yet another set of maps 351 that form a fully connected layer 352 .
- the fully connected layer 352 provides a feature vector 352 A.
- the neurons in the fully connected layer 352 have full connections to all activations in the previous layer. Their activations can hence be computed with a matrix multiplication followed by a bias offset.
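The fully connected computation described above amounts to a matrix multiplication followed by a bias offset. A minimal numpy sketch, with illustrative shapes that are not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
prev_activations = rng.standard_normal(512)  # all activations of the previous layer
W = rng.standard_normal((10, 512))           # one weight row per fully connected neuron
b = np.zeros(10)                             # bias offset

# each neuron's activation is a dot product with the full previous layer, plus bias
feature_vector = W @ prev_activations + b    # analogous to feature vector 352A
```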
- a flow chart for a video based action recognition method 400 is illustratively shown, in accordance with an embodiment of the present invention.
- receive one or more frames from one or more video sequences.
- generate, using a deep convolutional neural network, a feature vector for each patch of the one or more frames.
- generate an attention factor for the feature vectors based on a within-frame attention and a between-frame attention.
- the target action represents at least one of the one or more video sequences.
- control an operation of a processor-based machine to change a state of the processor-based machine, responsive to the at least one of the one or more video sequences including the identified target action.
- the Deep 3D attention Long Short-Term Memory (LSTM) may contain multiple modules.
- the Deep 3D attention LSTM may include an input module.
- the input module may be a deep convolutional neural network (CNN).
- the output of the last convolutional layer is utilized, which contains K patches, each patch being a D-dimensional feature vector.
- the output of this module is a set of features x_i^t ∈ ℝ^D, where t ∈ {1, . . . , T} is the time point index of the frame and i ∈ {1, . . . , K} is the index of the patch.
- the convolution patch size is a learnable non-fixed parameter.
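Viewing the last convolutional layer's output as K patch features is essentially a reshape. The channel count D and spatial size below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# assumed illustrative shape: last conv layer output is (D, H, W),
# giving K = H * W spatial patches, each a D-dimensional feature vector
conv_out = np.random.default_rng(1).standard_normal((256, 7, 7))
D, H, W = conv_out.shape
K = H * W
x_t = conv_out.reshape(D, K).T  # x_t[i] is the feature vector x_i^t of patch i
```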
- the Deep 3D attention LSTM may include an attention module.
- the attention module may contain within-frame attention and between-frame attention, and each could either be a hard or a soft attention.
- Hard attention assesses certain aspects of the frame one feature at a time and aggregates the information.
- Soft attention assesses the frame by concentrating on certain key features based on all the features.
- the within-frame soft attention weight α_i^t for patch i of frame t is obtained by applying a softmax over learnable per-patch attention scores.
- x_t is the output of the within-frame attention at time point t.
- h⃗_t and c⃗_t are the hidden state and the cell state of the forward LSTM at time point t.
- h_t is the final hidden state, which contains information from both the future and the past.
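A minimal numpy sketch of within-frame soft attention. The single scoring vector `w_a` is a hypothetical stand-in for the patent's learnable parameters; the exact scoring function is not reproduced here:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
K, D = 49, 256
x = rng.standard_normal((K, D))  # per-patch features x_i^t of one frame
w_a = rng.standard_normal(D)     # hypothetical learnable scoring vector

alpha = softmax(x @ w_a)         # within-frame attention weights alpha_i^t
x_t = alpha @ x                  # attended frame representation x_t
```

Because the weights are produced by a softmax, they sum to one, so `x_t` is a convex combination of the patch features, concentrating on the key patches while still drawing on all of them.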
- the Deep 3D attention LSTM may include an output module.
- the output module may apply a multi-layer deep LSTM to produce q_t ∈ ℝ^C, where C is the number of action classes.
- the final output is a distribution over the action classes derived from q_t.
- the Deep 3D attention LSTM may include a domain knowledge module.
- the domain knowledge module may be achieved by embedding a target or additional knowledge followed by a dot product with the output of the input module.
- the cross-entropy loss function has three choices (N is the number of samples), and the training is performed by back-propagation.
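The standard form of cross-entropy over N samples (one of the choices mentioned; the max-neighbor variants are not reproduced here) can be sketched as the mean negative log-likelihood of the true class:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood.
    probs: (N, C) softmax outputs; labels: (N,) integer class indices."""
    N = probs.shape[0]
    return float(-np.log(probs[np.arange(N), labels]).mean())

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = cross_entropy(probs, labels)  # gradients of this loss drive back-propagation
```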
- FIG. 5 shows a high-level block/flow diagram of a deep 3D attention recurrent neural network method 500 , in accordance with an embodiment of the present invention.
- the deep 3D attention recurrent neural network method 500 may include a video 510 (with one embodiment used in step 610 in FIG. 6 ) to supply the video frames analyzed in the deep 3D attention recurrent neural network method 500 .
- the video 510 may be fed into an adaptive patch size convolution network 520 (with one embodiment used in step 620 in FIG. 6 ) to produce vectors representing the frames of the video.
- the adaptive patch size convolution network 520 may function as the input module as described above in the Deep 3D attention LSTM.
- the deep 3D attention recurrent neural network method 500 may include a domain knowledge process 540 .
- the domain knowledge process 540 may embed additional knowledge with a dot product of the vectors produced by the adaptive patch size convolution network 520 .
- the domain knowledge process 540 may function as the domain knowledge module as described above in the Deep 3D attention LSTM.
- the deep 3D attention recurrent neural network method 500 may include a 3D attention process 530 (with one embodiment used in steps 630 and 640 in FIG. 6 ).
- the 3D attention process may take the vectors from the adaptive patch size convolution network 520 to produce final 3D attention values.
- the 3D attention process may take the vectors from the adaptive patch size convolution network 520 and the additional knowledge embedded by the domain knowledge process 540 to produce final 3D attention values.
- the 3D attention process 530 may function as the attention module as described above in the Deep 3D attention LSTM.
- the deep 3D attention recurrent neural network method 500 may include a cross entropy with max-neighbor process 550 (with one embodiment used in step 650 in FIG. 6 ).
- the cross entropy with max-neighbor process 550 may apply a deep LSTM to the final 3D attention values from the 3D attention process 530 to produce the final output.
- the cross entropy with max-neighbor process 550 may utilize a cross-entropy loss function as described above.
- the cross entropy with max-neighbor process 550 may function as the output module as described above in the Deep 3D attention LSTM.
- the deep 3D attention recurrent neural network method 500 may include an action category 560 (with one embodiment used in step 660 in FIG. 6 ).
- the action category 560 represents the action the deep 3D attention recurrent neural network method 500 detected from the video 510 .
- FIG. 6 shows a block/flow diagram of a deep 3D attention recurrent neural network method 600 , in accordance with an embodiment of the present invention.
- step 610 receive video frames 612 over time 611 .
- step 620 perform convolutions 621 on the video frames 612 to obtain a set of features 622 and a set of learnable parameters 623 .
- step 630 perform softmax 631 on the set of features 622 and the set of learnable parameters 623 to obtain the within-frame level attention input 632 .
- step 640 perform bidirectional LSTM 641 and softmax 642 on the within-frame level attention input 632 to obtain the 3D attention output 643 .
- step 650 perform a deep LSTM 651 on the 3D attention output 643 to obtain the RNN output 652 .
- step 660 pass the RNN output 652 into the action category 661 .
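Steps 610-660 can be sketched end to end at the shape level. The bidirectional LSTM of step 640 and the deep LSTM of step 650 are replaced here by simple stand-ins, and all dimensions are assumed, so this illustrates only the data flow, not the patent's actual networks:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, K, D, C = 8, 49, 64, 5                 # frames, patches, feature dim, classes

# step 620 stand-in: per-frame convolutional features (set of features 622)
feats = rng.standard_normal((T, K, D))
w = rng.standard_normal(D)                # stand-in for learnable parameters 623

# step 630: softmax over patches -> within-frame level attention input 632
alpha = softmax(feats @ w, axis=1)        # (T, K) within-frame weights
frames = np.einsum('tk,tkd->td', alpha, feats)

# step 640 stand-in: attention over time -> 3D attention output 643
beta = softmax(frames @ w)                # (T,) between-frame weights
video = beta @ frames                     # (D,) attended video representation

# step 650 stand-in: linear classifier in place of the deep LSTM -> RNN output 652
q = softmax(rng.standard_normal((C, D)) @ video)

# step 660: action category 661
action = int(np.argmax(q))
```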
- FIG. 7 shows a block/flow diagram of a video monitoring system 700 , in accordance with an embodiment of the present invention.
- the video monitoring system 700 may include a security processing system 710 .
- the security processing system 710 may include the processing system 100 of FIG. 1 .
- the security processing system 710 may be equipped with computing functions and control.
- the security processing system 710 may include one or more processors 711 (hereafter “processor”).
- the security processing system 710 may include a memory storage 712 .
- the memory storage 712 may include solid state or soft storage and work in conjunction with other devices of the video monitoring system 700 to record data, run algorithms or programs, store safety procedures, a deep 3D attention recurrent neural network, etc.
- the memory storage 712 may include a Read Only Memory (ROM), random access memory (RAM), or any other type of memory useful for the present applications.
- the security processing system 710 may include a communication array 716 to handle communications between the different devices in the video monitoring system 700 .
- the communication array 716 may be equipped to communicate with a cellular network system. In this way, the security processing system 710 may contact a control center with information related to the status of the video monitoring system 700 and the property the system is securing.
- the communication array 716 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc.
- the communication array 716 may provide the security processing system 710 a communication channel 760 with other devices in the video monitoring system 700 .
- the security processing system 710 may include a power source 715 .
- the power source 715 may include or employ one or more batteries, a generator with liquid fuel (e.g., gasoline, alcohol, diesel, etc.) or other energy source.
- the power source 715 may include one or more solar cells or one or more fuel cells.
- the power source 715 may include power from the building with the video monitoring system 700 .
- the security processing system 710 may have multiple sources in the power source 715 .
- the security processing system 710 may include power directly from the building and a battery system as a back-up to ensure the video monitoring system 700 stays active if a power interruption occurs.
- the security processing system 710 may include a security light 713 .
- the security light 713 may be illuminated when the security processing system 710 detects an intruder in the area of the security light 713 to deter the intruder or give investigators improved visibility in the area of the security light 713 .
- the security processing system 710 may include a speaker 714 .
- the speaker 714 may act as an alarm when the security processing system 710 detects an intruder in a secure area to deter the intruder or notify investigators of an intruder.
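The reaction path above can be sketched as a toy class. All names are hypothetical, and plain lists stand in for the hardware drivers of the security light 713 and speaker 714:

```python
class SecurityProcessingSystem:
    """Reacts to a detected target action by driving the security light
    and the speaker (modeled here as plain lists)."""

    def __init__(self, light, speaker):
        self.light = light
        self.speaker = speaker

    def on_detection(self, action):
        if action == "intruder":
            self.light.append("on")       # illuminate the area to deter / reveal
            self.speaker.append("alarm")  # audible alarm and notification

light, speaker = [], []
SecurityProcessingSystem(light, speaker).on_detection("intruder")
```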
- the security processing system 710 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements.
- various other input devices and/or output devices can be included in the security processing system 710 , depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
- various types of wireless and/or wired input and/or output devices can be used.
- additional processors, displays, controllers, memories, and so forth, in various configurations can also be utilized as readily appreciated by one of ordinary skill in the art.
- the video monitoring system 700 may include a camera 720 .
- the camera 720 may communicate through the communication channel 760 to the security processing system 710 .
- the camera 720 may include a power source 722 .
- the power source 722 may include or employ one or more batteries or other energy source.
- the power source 722 may include one or more solar cells or one or more fuel cells.
- the power source 722 may include power from the building with the video monitoring system 700 .
- the power source 722 may include power through the communication channel 760 linking the camera 720 to the security processing system 710 .
- the camera 720 may have multiple sources in the power source 722 .
- the camera 720 may include power through the communication channel 760 and a battery system as a back-up to ensure the camera 720 stays active if a power interruption occurs.
- the camera 720 may include a communication array 724 to handle communications between the camera 720 and the security processing system 710 .
- the communication array 724 may be equipped to communicate with a cellular network system.
- the communication array 724 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc.
- the communication array 724 may connect the camera 720 to the security processing system 710 through the communication channel 760 .
- the camera 720 may include one or more motor 726 .
- the motor 726 may physically move the camera 720 , so the total area covered by the camera 720 is greater than the camera 720 's stationary field of view.
- the motor 726 may be used to zoom a lens in the camera 720 to get a zoomed-in image of the area being covered by the camera 720 .
- the motor 726 may be controlled by commands originating in the camera 720 or from commands originating in the security processing system 710 .
- the camera 720 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements.
- various other lens or lights for night vision or infrared detection may be included in the camera 720 , depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
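As an illustrative sketch only (the class and parameter names below are assumptions, not part of the original disclosure), the pan and zoom commands carried out by a motor such as motor 726, whether originating locally in the camera or remotely in the security processing system, might be modeled as:

```python
from dataclasses import dataclass

@dataclass
class CameraCommand:
    pan_degrees: float = 0.0   # rotate the camera body via the motor
    zoom_factor: float = 1.0   # drive the lens motor for optical zoom

class MotorizedCamera:
    def __init__(self):
        self.pan = 0.0          # current pan angle in degrees
        self.zoom = 1.0         # current optical zoom factor

    def apply(self, cmd: CameraCommand):
        # Clamp pan to a mechanical range and zoom to the lens limits.
        self.pan = max(-170.0, min(170.0, self.pan + cmd.pan_degrees))
        self.zoom = max(1.0, min(10.0, self.zoom * cmd.zoom_factor))
        return self.pan, self.zoom

cam = MotorizedCamera()
cam.apply(CameraCommand(pan_degrees=45.0))        # e.g., a local command
print(cam.apply(CameraCommand(zoom_factor=2.0)))  # e.g., a remote command
```

The clamping ranges are hypothetical; a real camera would use the limits of its own motor and lens.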
- the video monitoring system 700 may include an electronic lock 730 .
- the electronic lock 730 may communicate through the communication channel 760 to the security processing system 710 .
- the electronic lock 730 may include a power source 736 .
- the power source 736 may include or employ one or more batteries or other energy source.
- the power source 736 may include one or more solar cells or one or more fuel cells.
- the power source 736 may include power from the building with the video monitoring system 700 .
- the power source 736 may include power through the communication channel 760 linking the electronic lock 730 to the security processing system 710 .
- the electronic lock 730 may have multiple sources in the power source 736 .
- the electronic lock 730 may include power through the communication channel 760 and a battery system as a back-up to ensure the electronic lock 730 stays active if a power interruption occurs.
- the electronic lock 730 may include a communication array 738 to handle communications between the electronic lock 730 and the security processing system 710 .
- the communication array 738 may be equipped to communicate with a cellular network system.
- the communication array 738 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc.
- the communication array 738 may connect the electronic lock 730 to the security processing system 710 through the communication channel 760 .
- the electronic lock 730 may include a motor 734 .
- the motor 734 may physically actuate a bolt in the electronic lock 730 .
- the motor 734 actuates one or more bolts along a door to lock the door.
- the motor 734 may actuate a hook in a window to lock the window.
- the motor 734 may be controlled by commands originating in the electronic lock 730 or from commands originating in the security processing system 710 .
- the electronic lock 730 may include a solenoid 732 .
- the solenoid 732 may physically actuate a bolt in the electronic lock 730 .
- the solenoid 732 actuates one or more bolts along a door to lock the door.
- the solenoid 732 may actuate a hook in a window to lock the window.
- the solenoid 732 may be controlled by commands originating in the electronic lock 730 or from commands originating in the security processing system 710 .
- the electronic lock 730 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements.
- various other engaging mechanisms may be included in the electronic lock 730, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
- the video monitoring system 700 may include an input console 740 .
- the input console 740 may communicate through the communication channel 760 to the security processing system 710 .
- the input console 740 may include a power source 748 .
- the power source 748 may include or employ one or more batteries or other energy source.
- the power source 748 may include one or more solar cells or one or more fuel cells.
- the power source 748 may include power from the building with the video monitoring system 700 .
- the power source 748 may include power through the communication channel 760 linking the input console 740 to the security processing system 710 .
- the input console 740 may have multiple sources in the power source 748 .
- the input console 740 may include power through the communication channel 760 and a battery system as a back-up to ensure the input console 740 stays active if a power interruption occurs.
- the input console 740 may have one or more input devices 741 .
- the input devices 741 may include a keypad 742 , a retinal scanner 744 , or a fingerprint reader 746 .
- the input console 740 may include more than one of the input devices 741 .
- the input console 740 may include a keypad 742 and a fingerprint reader 746 to support two-factor authentication.
- the input console 740 may include a keypad 742, a retinal scanner 744, and a fingerprint reader 746 to support three-factor authentication.
- the input console 740 may include a communication array 749 to handle communications between the input console 740 and the security processing system 710 .
- the communication array 749 may be equipped to communicate with a cellular network system.
- the communication array 749 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc.
- the communication array 749 may connect the input console 740 to the security processing system 710 through the communication channel 760 .
- the input console 740 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements.
- various other input devices may be included in the input console 740 , such as a camera for facial recognition, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
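A minimal sketch of how multi-factor verification on an input console such as console 740 might be implemented follows; the hashing scheme and enrolled values are assumptions for illustration, not a specification from the original text:

```python
import hashlib
import hmac

def digest(value: str) -> str:
    # Store only hashes of secrets, never the secrets themselves.
    return hashlib.sha256(value.encode()).hexdigest()

# Hypothetical enrolled factors: a keypad code and a fingerprint template.
ENROLLED = {
    "keypad": digest("1234"),              # code entered on keypad 742
    "fingerprint": digest("fp-template"),  # template from reader 746
}

def authenticate(factors: dict) -> bool:
    # All enrolled factors must match for multi-factor success;
    # compare_digest avoids timing side channels.
    return all(
        hmac.compare_digest(digest(factors.get(name, "")), expected)
        for name, expected in ENROLLED.items()
    )

print(authenticate({"keypad": "1234", "fingerprint": "fp-template"}))  # True
print(authenticate({"keypad": "1234"}))                                # False
```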
- the video monitoring system 700 may include one or more sensors 750 (hereafter “sensor”).
- the sensor 750 may communicate through the communication channel 760 to the security processing system 710 .
- the sensor 750 may include a power source 756 .
- the power source 756 may include or employ one or more batteries or other energy source.
- the power source 756 may include one or more solar cells or one or more fuel cells.
- the power source 756 may include power from the building with the video monitoring system 700 .
- the power source 756 may include power through the communication channel 760 linking the sensor 750 to the security processing system 710 .
- the sensor 750 may have multiple sources in the power source 756 .
- the sensor 750 may include power through the communication channel 760 and a battery system as a back-up to ensure the sensor 750 stays active if a power interruption occurs.
- the sensor 750 may have one or more sensor types 751 .
- the sensor types 751 may include audio 752 or contact 754 .
- the sensor 750 may include more than one of the sensor types 751 .
- the sensor 750 may include both an audio sensor 752 and a contact sensor 754. This embodiment may secure a window by detecting that the window is closed with the contact sensor 754 and detecting whether the window has been broken with the audio sensor 752.
- the sensor 750 may include a communication array 758 to handle communications between the sensor 750 and the security processing system 710.
- the communication array 758 may be equipped to communicate with a cellular network system.
- the communication array 758 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc.
- the communication array 758 may connect the sensor 750 to the security processing system 710 through the communication channel 760 .
- the sensor 750 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements.
- various other types of sensors may be included in the sensor 750 , such as a temperature sensor for detecting body heat, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
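The window-securing embodiment above, which fuses a contact sensor and an audio sensor, can be sketched as a simple decision rule; the state names below are illustrative assumptions:

```python
def window_alarm(contact_closed: bool, audio_spike: bool) -> str:
    """Hypothetical fusion rule for a sensor like sensor 750 that pairs
    a contact sensor 754 with an audio sensor 752 on one window."""
    if not contact_closed:
        return "open"            # contact broken: window was opened
    if audio_spike:
        return "glass-break"     # still closed, but a loud impact heard
    return "secure"

print(window_alarm(True, False))   # secure
print(window_alarm(True, True))    # glass-break
print(window_alarm(False, False))  # open
```

The design point is that neither sensor alone distinguishes an opened window from a broken one; combining them does.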
- the security processing system 710 may take video from the camera 720 to monitor the area being secured by the video monitoring system 700 .
- the security processing system 710 may recognize action in the video that is outside normal criteria. This action may include an intruder running up to the premises or a projectile approaching the premises.
- the security processing system 710 may actuate the electronic locks 730 on the premises to secure the premises while sounding an alarm over the speaker 714 and turning on the security light 713 .
- the security processing system 710 may also clip the video of the action sequence and send it to a security monitoring station, to the home owner, or to both, to preserve evidence of the intrusion.
- the security processing system 710 may actuate the motor 734 in the electronic lock 730 to close and lock windows when the action recognized is rain. Many other actions can be recognized with the present system, with different actions having different responses.
- the security processing system 710 may use the electronic lock 730 to secure a pet door when the video shows a raccoon approaching the pet door.
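The action-to-response behavior described above can be sketched as a lookup table mapping a recognized action category to a list of responses; the category and response names below are illustrative assumptions rather than a catalog from the original disclosure:

```python
# Hypothetical mapping from recognized action categories to responses
# that a system like security processing system 710 might trigger.
RESPONSES = {
    "intruder_running":    ["lock_all", "sound_alarm", "light_on", "clip_video"],
    "projectile":          ["lock_all", "sound_alarm", "clip_video"],
    "rain":                ["close_windows"],
    "raccoon_at_pet_door": ["lock_pet_door"],
}

def respond(action: str) -> list:
    # Unrecognized actions fall back to simply recording a video clip.
    return RESPONSES.get(action, ["clip_video"])

print(respond("rain"))     # ['close_windows']
print(respond("unknown"))  # ['clip_video']
```

A table like this keeps the recognition model separate from the response policy, so new responses can be added without retraining.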
- video monitoring system 700 may perform at least part of the method described herein including, for example, at least part of method 300 of FIG. 3 and/or at least part of method 400 of FIG. 4 and/or at least part of method 500 of FIG. 5 and/or at least part of method 600 of FIG. 6 and/or at least part of method 800 of FIG. 8 .
- a flow chart for a video monitoring method 800 is illustratively shown, in accordance with an embodiment of the present invention.
- In block 810, monitor an area with a camera.
- Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements.
- the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
- Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
- a computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device.
- the medium can be a magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
- the medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
- Each computer program may be tangibly stored in a machine-readable storage medium or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage medium or device is read by the computer to perform the procedures described herein.
- the inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
- a data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus.
- the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution.
- I/O devices including but not limited to keyboards, displays, pointing devices, etc. may be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
- Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
- any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B).
- such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C).
- This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.
Abstract
A video monitoring system and method are provided. The video monitoring system includes a camera. The camera is positioned to monitor an area and capture live video to provide a live video stream. The video monitoring system also includes a security processing system. The security processing system includes a processor and memory coupled to the processor. The security processing system is programmed to detect and identify a target action sequence in the live video stream using a multi-layer deep long short-term memory process on an attention factor that is based on a within-frame attention and a between-frame attention. The security processing system is further programmed to trigger an action to alert that a target action sequence has been detected.
Description
- This application claims priority to U.S. Provisional Patent Application Ser. No. 62/318,865 filed on Apr. 6, 2016, incorporated herein by reference in its entirety. Moreover, this application is related to commonly assigned U.S. patent application Ser. No. TBD (Attorney Docket Number 15104A), filed concurrently herewith and incorporated herein by reference.
- The present invention generally relates to video-based recognition and more particularly to video-based action recognition in a monitoring system.
- Video-based action recognition is the most valuable component of intelligent monitoring systems for many applications, such as public safety monitoring, shopping center and factory surveillance, and home security. Real-time action recognition based on video sequences produced by surveillance cameras not only detects the type of action of interest, but also detects the start and end of the searched action, which often contains a sequence of action progression stages or sub-actions, as well as the most relevant time-dependent regions within video frames.
- Previous approaches to action recognition mainly fall into the following two categories: A) Feature engineering based on individual video frames by handcrafting features from each video frame and tracking them based on displacement information from an optical flow field, and B) Machine learning approaches without considering complex long-range temporal dependencies by extracting features using convolutional neural networks (CNNs) or recurrent neural networks (RNNs), and then using standard classifiers or RNNs for action prediction without attention or with only between-frame attention.
- According to an aspect of the present principles, a video monitoring system is provided. The video monitoring system includes a camera. The camera is positioned to monitor an area and capture live video to provide a live video stream. The video monitoring system further includes a security processing system. The security processing system includes a processor and memory coupled to the processor. The security processing system is programmed to detect and identify a target action sequence in the live video stream using a multi-layer deep long short-term memory process on an attention factor that is based on a within-frame attention and a between-frame attention. The security processing system is further programmed to trigger an action to alert that a target action sequence has been detected.
- According to another aspect of the present principles, a computer-implemented method is provided for home security. The method includes monitoring an area with a camera. The method further includes capturing, by the camera, live video to provide a live video stream. The method also includes detecting and identifying, by a processor, a target action sequence in the live video stream using a multi-layer deep long short-term memory process on an attention factor that is based on a within-frame attention and a between-frame attention. The method additionally includes triggering, by the processor, an action to alert that a target action sequence has been detected.
- These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
- The disclosure will provide details in the following description of preferred embodiments with reference to the following figures wherein:
-
FIG. 1 shows a block diagram of an exemplary processing system to which the present invention may be applied, in accordance with an embodiment of the present invention; -
FIG. 2 shows a block diagram of an exemplary environment to which the present invention can be applied, in accordance with an embodiment of the present invention; -
FIG. 3 shows a high-level block/flow diagram of an exemplary high-order convolutional neural network method, in accordance with an embodiment of the present invention; -
FIG. 4 is a flow diagram illustrating a method for video based action recognition, in accordance with an embodiment of the present invention; -
FIG. 5 shows a high-level block/flow diagram of a deep 3D attention recurrent neural network method, in accordance with an embodiment of the present invention; -
FIG. 6 shows a block/flow diagram of a deep 3D attention recurrent neural network method, in accordance with an embodiment of the present invention; -
FIG. 7 shows a block/flow diagram of a video monitoring system, in accordance with an embodiment of the present invention; and -
FIG. 8 is a flow diagram illustrating a method for video monitoring, in accordance with an embodiment of the present invention. - A system using Deep 3D attention Long Short-Term Memory for video-based action recognition is presented. Unlike previous approaches, this system is capable of capturing long-range complex temporal dependencies in long video sequences with both between-frame and within-frame attention. This system uses a novel objective function enabling users to easily identify key video segments for target actions. Target actions may include an intruder entering a restricted area, a confined animal escaping an enclosure, or a piece of machinery malfunctioning and endangering people or property in the machinery's vicinity, etc. It is to be understood that the target actions listed and described herein are only illustrative of the principles of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention.
-
FIG. 1 shows a block diagram of an exemplary processing system 100 to which the invention principles may be applied, in accordance with an embodiment of the present invention. The processing system 100 includes at least one processor (CPU) 104 operatively coupled to other components via a system bus 102. A cache 106, a Read Only Memory (ROM) 108, a Random Access Memory (RAM) 110, an input/output (I/O) adapter 120, a sound adapter 130, a network adapter 140, a user interface adapter 150, and a display adapter 160 are operatively coupled to the system bus 102. - A
first storage device 122 and a second storage device 124 are operatively coupled to system bus 102 by the I/O adapter 120. The storage devices 122 and 124 can be any of a disk storage device (e.g., a magnetic or optical disk storage device), a solid state device, and so forth. The storage devices 122 and 124 can be the same type of storage device or different types of storage devices. - A
speaker 132 is operatively coupled to system bus 102 by the sound adapter 130. The speaker 132 can be used to provide an audible alarm or some other indication relating to the present invention. A transceiver 142 is operatively coupled to system bus 102 by network adapter 140. A display device 162 is operatively coupled to system bus 102 by display adapter 160. - A first
user input device 152, a second user input device 154, and a third user input device 156 are operatively coupled to system bus 102 by user interface adapter 150. The user input devices 152, 154, and 156 can be any of a keyboard, a mouse, a keypad, a microphone, a device incorporating the functionality of at least two of the preceding devices, and so forth. The user input devices 152, 154, and 156 can be the same type of user input device or different types of user input devices. The user input devices 152, 154, and 156 are used to input and output information to and from system 100. - Of course, the
processing system 100 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices and/or output devices can be included in processing system 100, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used. Moreover, additional processors, controllers, memories, and so forth, in various configurations can also be utilized as readily appreciated by one of ordinary skill in the art. These and other variations of the processing system 100 are readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein. - Moreover, it is to be appreciated that
environment 200 described below with respect to FIG. 2 is an environment for implementing respective embodiments of the present invention. Part or all of processing system 100 may be implemented in one or more of the elements of environment 200. - Further, it is to be appreciated that
processing system 100 may perform at least part of the method described herein including, for example, at least part of method 300 of FIG. 3 and/or at least part of method 400 of FIG. 4 and/or at least part of method 500 of FIG. 5 and/or at least part of method 600 of FIG. 6 and/or at least part of method 800 of FIG. 8 . Similarly, part or all of system 200 may be used to perform at least part of method 300 of FIG. 3 and/or at least part of method 400 of FIG. 4 and/or at least part of method 500 of FIG. 5 and/or at least part of method 600 of FIG. 6 and/or at least part of method 800 of FIG. 8 . -
FIG. 2 shows an exemplary environment 200 to which the present invention can be applied, in accordance with an embodiment of the present invention. The environment 200 is representative of a computer network to which the present invention can be applied. The elements shown relative to FIG. 2 are set forth for the sake of illustration. However, it is to be appreciated that the present invention can be applied to other network configurations as readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein, while maintaining the spirit of the present invention. - The
environment 200 at least includes a set of computer processing systems 210. The computer processing systems 210 can be any type of computer processing system including, but not limited to, servers, desktops, laptops, tablets, smart phones, media playback devices, and so forth. For the sake of illustration, the computer processing systems 210 include server 210A, server 210B, and server 210C. - In an embodiment, the present invention performs a deep 3D attention recurrent neural network method for any of the
computer processing systems 210. Thus, any of the computer processing systems 210 can perform video analysis that can be stored in, or accessed by, any of the computer processing systems 210. Moreover, the output (including active video segments) of the present invention can be used to control other systems and/or devices and/or operations and/or so forth, as readily appreciated by one of ordinary skill in the art given the teachings of the present invention provided herein, while maintaining the spirit of the present invention. - In the embodiment shown in
FIG. 2 , the elements thereof are interconnected by a network(s) 201. However, in other embodiments, other types of connections can also be used. Additionally, one or more elements in FIG. 2 may be implemented by a variety of devices, which include but are not limited to, Digital Signal Processing (DSP) circuits, programmable processors, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), and so forth. These and other variations of the elements of environment 200 are readily determined by one of ordinary skill in the art, given the teachings of the present invention provided herein, while maintaining the spirit of the present invention. -
FIG. 3 shows a high-level block/flow diagram of an exemplary high-order convolutional neural network method 300, in accordance with an embodiment of the present invention. - At
step 310, receive an input image 311. - At
step 320, perform convolutions on the input image 311 to obtain high-order feature maps 321. - At
step 330, perform sub-sampling on the high-order feature maps 321 to obtain a set of maps 331. - At
step 340, perform convolutions on the set of maps 331 to obtain another set of maps 341. - At
step 350, perform sub-sampling on the other set of maps 341 to obtain yet another set of maps 351 that form a fully connected layer 352. The fully connected layer 352 provides a feature vector 352A.
layer 352 have full connections to all activations in the previous layer. Their activations can hence be computed with a matrix multiplication followed by a bias offset. - We can optionally have more fully connected layers rather than just 352 and more repeated steps of 320 and 330 rather than just 340 and 350 depending on different tasks.
- It is to be further appreciated that while a single image is mentioned with respect to step 310, multiple images such as in the case of one or more video sequences can be input and processed in accordance with the
method 300 ofFIG. 3 , while maintaining the spirit of the present invention. - Referring to
FIG. 4 , a flow chart for a video basedaction recognition method 400 is illustratively shown, in accordance with an embodiment of the present invention. Inblock 410, receive one or more frames from one or more video sequences. Inblock 420, generate, using a deep convolutional neural network, a feature vector for each patch of the one or more frames. Inblock 430, generate an attention factor for the feature vectors based on a within-frame attention and a between-frame attention. In block 440, identify a target action using a multi-layer deep long short-term memory process applied to the attention factor. The target action represents at least one of the one or more video sequences. Inblock 450, control an operation of a processor-based machine to change a state of the processor-based machine, responsive to the at least one of the one or more video sequences including the identified target action. - Deep 3D attention Long Short-Term Memory (LSTM) may contain multiple modules. In one embodiment, the Deep 3D attention LTSM may include an input module. The input module may be a deep convolutional neural network (CNN). For each time frame at time point t, the output of the last convolutional layer is utilized, which contains K patches and each patch is a D dimensional feature vector. The output of this module is a set of features xi t ∈ , where t ∈ {1, . . . , T} is the time point index of the frame and i ∈ {1, . . . , K} is the index of the patch. The convolution patch size is a learnable non-fixed parameter.
- In another embodiment, the Deep 3D attention LTSM may include an attention module. The attention module may contain within-frame attention and between-frame attention, and each could either be a hard or a soft attention. Hard attention assesses certain aspects of the frame one feature at a time and aggregates the information. Soft attention assesses the frame by concentrating on certain key features based on all the features. The within-frame soft attention weight αi t for patch i of frame t is achieved by:
-
αi t=softmax(w i T x i t), -
xt=Σi=1 Kαi txi t. - Other options for within-frame level attention could be multilayer perceptron (MLP) followed by a softmax layer. For between-frame soft attention, we use bidirectional LSTMs with:
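The two within-frame equations above can be sketched in pure Python; for simplicity this sketch uses a single shared weight vector w rather than per-patch weights w_i, which is a simplifying assumption:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def within_frame_attention(patches, w):
    # patches: K patch feature vectors (each length D); w: weight vector.
    # alpha_i = softmax(w . x_i); x_t = sum_i alpha_i * x_i.
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for x in patches]
    alpha = softmax(scores)
    D = len(patches[0])
    return [sum(a * x[d] for a, x in zip(alpha, patches))
            for d in range(D)]

patches = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # K=3 patches, D=2
w = [2.0, 2.0]
xt = within_frame_attention(patches, w)
print(len(xt))  # one attended D-dimensional vector for the frame
```

The third patch scores highest, so the attended vector leans toward it; the attention weights always sum to one.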
-
{right arrow over (h t)},{right arrow over (c t)}=LSTMfwd (x t,{right arrow over (h t−1)}, {right arrow over (c t−1)}), - where xt is the output of the within-frame attention at time point t, {right arrow over (ht)},{right arrow over (ct)}are the hidden state and the cell state of the forward LSTM at time point t, are the hidden state and cell state of the backward LSTM at time point t, ht is the final hidden state which contains information from both the future and the past. Given the bandwidth L (i.e. a free parameter) of between-frame attention, the between-frame attention could be calculated with:
-
-
s t=Σj=t−L t+LβjΣi=1 Kαi j x i j. -
-
- In still another embodiment, the Deep 3D attention LTSM may include a domain knowledge module. The domain knowledge module may be achieved by embedding a target or additional knowledge followed by a dot product with the output of the input module.
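The windowed between-frame aggregation s_t can be sketched as follows, taking per-frame attended vectors (the x^j from within-frame attention) and per-frame relevance scores as inputs; in the method above those scores come from the bidirectional LSTM hidden states, so the plain score list here is a stand-in assumption:

```python
import math

def between_frame(frames, scores, t, L):
    # beta_j = softmax of scores over the window [t-L, t+L];
    # s_t = sum_j beta_j * frames[j], clipped at sequence boundaries.
    lo, hi = max(0, t - L), min(len(frames) - 1, t + L)
    window = list(range(lo, hi + 1))
    m = max(scores[j] for j in window)
    exps = {j: math.exp(scores[j] - m) for j in window}
    total = sum(exps.values())
    beta = {j: exps[j] / total for j in window}
    D = len(frames[0])
    return [sum(beta[j] * frames[j][d] for j in window) for d in range(D)]

frames = [[1.0], [2.0], [3.0], [4.0], [5.0]]  # T=5 frames, D=1
scores = [0.0, 0.0, 0.0, 0.0, 0.0]            # uniform relevance
print(between_frame(frames, scores, t=2, L=1))  # approximately [3.0]
```

With uniform scores the weights β_j are equal, so s_t reduces to the mean of the frames in the window; non-uniform scores shift s_t toward the most relevant frames.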
- The cross-entropy loss function has three choices (N is the number of samples), and the training is performed by back-prorogation:
- To use the last time point:
- To use all time points:
- To use the maximum probability's time point (max-neighbor):
-
FIG. 5 shows a high-level block/flow diagram of a deep 3D attention recurrent neural network method 500, in accordance with an embodiment of the present invention. The deep 3D attention recurrent neural network method 500 may include a video 510 (with one embodiment used in step 610 in FIG. 6 ) to supply the video frames analyzed in the deep 3D attention recurrent neural network method 500. The video 510 may be fed into an adaptive patch size convolution network 520 (with one embodiment used in step 620 in FIG. 6 ) to produce vectors representing the frames of the video. In one embodiment, the adaptive patch size convolution network 520 may function as the input module as described above in the Deep 3D attention LSTM. - The deep 3D attention recurrent
neural network method 500 may include a domain knowledge process 540. The domain knowledge process 540 may embed additional knowledge with a dot product of the vectors produced by the adaptive patch size convolution network 520. In one embodiment, the domain knowledge process 540 may function as the domain knowledge module as described above in the Deep 3D attention LSTM. - The deep 3D attention recurrent
neural network method 500 may include a 3D attention process 530 (with one embodiment used in the steps of FIG. 6 ). In one embodiment, the 3D attention process may take the vectors from the adaptive patch size convolution network 520 to produce final 3D attention values. In another embodiment, the 3D attention process may take the vectors from the adaptive patch size convolution network 520 and the additional knowledge embedded by the domain knowledge process 540 to produce final 3D attention values. In yet another embodiment, the 3D attention process 530 may function as the attention module as described above in the Deep 3D attention LSTM. - The deep 3D attention recurrent
neural network method 500 may include a cross entropy with max-neighbor process 550 (with one embodiment used in step 650 in FIG. 6 ). In one embodiment, the cross entropy with max-neighbor process 550 may apply a deep LSTM to the final 3D attention values from the 3D attention process 530 to produce the final output. In another embodiment, the cross entropy with max-neighbor process 550 may utilize a cross-entropy loss function as described above. In yet another embodiment, the cross entropy with max-neighbor process 550 may function as the output module as described above in the Deep 3D attention LSTM. - The deep 3D attention recurrent
neural network method 500 may include an action category 560 (with one embodiment used in step 660 in FIG. 6 ). The action category 560 represents the action the deep 3D attention recurrent neural network method 500 detected from the video 510. -
FIG. 6 shows a block/flow diagram of a deep 3D attention recurrent neural network method 600, in accordance with an embodiment of the present invention.
- At step 610, receive video frames 612 over time 611.
- At step 620, perform convolutions 621 on the video frames 612 to obtain a set of features 622 and a set of learnable parameters 623.
- At step 630, perform softmax 631 on the set of features 622 and the set of learnable parameters 623 to obtain the within-frame level attention input 632.
- At step 640, perform bidirectional LSTM 641 and softmax 642 on the within-frame level attention input 632 to obtain the 3D attention output 643.
- At step 650, perform a deep LSTM 651 on the 3D attention output 643 to obtain the RNN output 652.
- At step 660, pass the RNN output 652 into the action category 661.
- The invention as described may be used in many different embodiments. One useful embodiment may have the invention in a video monitoring system.
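The data flow of steps 610 through 660 can be sketched as a toy NumPy pipeline. This is a minimal sketch under stated assumptions: the convolution is replaced by one shared linear filter bank, both LSTMs are replaced by simple linear stand-ins, and all shapes, weights, and names are illustrative rather than the patent's parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Step 610: T frames, each split into P patches of D raw values (toy sizes).
T, P, D, F, C = 4, 6, 8, 5, 3
frames = rng.standard_normal((T, P, D))

# Step 620: a stand-in "convolution" mapping each patch to an F-dim feature.
W_conv = rng.standard_normal((D, F))
features = frames @ W_conv                               # (T, P, F)

# Step 630: within-frame attention -- score each patch, softmax over patches.
w_attn = rng.standard_normal(F)
within = softmax(features @ w_attn, axis=1)              # (T, P)

# Step 640: between-frame attention -- a softmax over per-frame scores stands
# in for the bidirectional LSTM; together these give the 3D attention output.
frame_vecs = (within[..., None] * features).sum(axis=1)  # (T, F)
between = softmax(frame_vecs @ w_attn)                   # (T,)
attended = frame_vecs * between[:, None]                 # (T, F)

# Steps 650-660: the deep LSTM is abbreviated to a linear readout over the
# attention-weighted sequence, then an argmax gives the action category.
W_out = rng.standard_normal((F, C))
action = int(np.argmax(softmax(attended.sum(axis=0) @ W_out)))
```

The within-frame weights sum to one over patches and the between-frame weights sum to one over time, which is the sense in which the attention is "3D": spatial within each frame and temporal across frames.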
FIG. 7 shows a block/flow diagram of a video monitoring system 700, in accordance with an embodiment of the present invention. The video monitoring system 700 may include a security processing system 710. The security processing system 710 may include a processing system 100 from FIG. 1. The security processing system 710 may be equipped with computing functions and control. The security processing system 710 may include one or more processors 711 (hereafter “processor”). The security processing system 710 may include a memory storage 712. The memory storage 712 may include solid state or soft storage and work in conjunction with other devices of the video monitoring system 700 to record data, run algorithms or programs, store safety procedures, a deep 3D attention recurrent neural network, etc. The memory storage 712 may include a read-only memory (ROM), random access memory (RAM), or any other type of memory useful for the present applications.
- The
security processing system 710 may include a communication array 716 to handle communications between the different devices in the video monitoring system 700. In one embodiment, the communication array 716 may be equipped to communicate with a cellular network system. In this way, the security processing system 710 may contact a control center with information related to the status of the video monitoring system 700 and the property the system is securing. The communication array 716 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc. The communication array 716 may provide the security processing system 710 a communication channel 760 with other devices in the video monitoring system 700.
- The
security processing system 710 may include a power source 715. The power source 715 may include or employ one or more batteries, a generator with liquid fuel (e.g., gasoline, alcohol, diesel, etc.), or other energy source. In another embodiment, the power source 715 may include one or more solar cells or one or more fuel cells. In another embodiment, the power source 715 may include power from the building with the video monitoring system 700. The security processing system 710 may have multiple sources in the power source 715. In one embodiment, the security processing system 710 may include power directly from the building and a battery system as a back-up to ensure the video monitoring system 700 stays active if a power interruption occurs.
- The
security processing system 710 may include a security light 713. The security light 713 may be illuminated when the security processing system 710 detects an intruder in the area of the security light 713 to deter the intruder or give investigators improved visibility in the area of the security light 713. The security processing system 710 may include a speaker 714. The speaker 714 may act as an alarm when the security processing system 710 detects an intruder in a secure area to deter the intruder or notify investigators of an intruder.
- Of course, the
security processing system 710 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices and/or output devices can be included in the security processing system 710, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used. Moreover, additional processors, displays, controllers, memories, and so forth, in various configurations, can also be utilized as readily appreciated by one of ordinary skill in the art. These and other variations of the security processing system 710 are readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein.
- The
video monitoring system 700 may include a camera 720. The camera 720 may communicate through the communication channel 760 to the security processing system 710. The camera 720 may include a power source 722. The power source 722 may include or employ one or more batteries or other energy source. In another embodiment, the power source 722 may include one or more solar cells or one or more fuel cells. In another embodiment, the power source 722 may include power from the building with the video monitoring system 700. In yet another embodiment, the power source 722 may include power through the communication channel 760 linking the camera 720 to the security processing system 710. The camera 720 may have multiple sources in the power source 722. In one embodiment, the camera 720 may include power through the communication channel 760 and a battery system as a back-up to ensure the camera 720 stays active if a power interruption occurs.
- The
camera 720 may include a communication array 724 to handle communications between the camera 720 and the security processing system 710. In one embodiment, the communication array 724 may be equipped to communicate with a cellular network system. The communication array 724 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc. The communication array 724 may connect the camera 720 to the security processing system 710 through the communication channel 760.
- The
camera 720 may include one or more motors 726. The motor 726 may physically move the camera 720, so the area covered by the camera 720 is greater than the camera's fixed field of view. The motor 726 may be used to zoom a lens in the camera 720 to get a zoomed-in image of the area being covered by the camera 720. The motor 726 may be controlled by commands originating in the camera 720 or from commands originating in the security processing system 710.
- Of course, the
camera 720 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other lenses or lights for night vision or infrared detection may be included in the camera 720, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
- The
video monitoring system 700 may include an electronic lock 730. The electronic lock 730 may communicate through the communication channel 760 to the security processing system 710. The electronic lock 730 may include a power source 736. The power source 736 may include or employ one or more batteries or other energy source. In another embodiment, the power source 736 may include one or more solar cells or one or more fuel cells. In another embodiment, the power source 736 may include power from the building with the video monitoring system 700. In yet another embodiment, the power source 736 may include power through the communication channel 760 linking the electronic lock 730 to the security processing system 710. The electronic lock 730 may have multiple sources in the power source 736. In one embodiment, the electronic lock 730 may include power through the communication channel 760 and a battery system as a back-up to ensure the electronic lock 730 stays active if a power interruption occurs.
- The
electronic lock 730 may include a communication array 738 to handle communications between the electronic lock 730 and the security processing system 710. In one embodiment, the communication array 738 may be equipped to communicate with a cellular network system. The communication array 738 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc. The communication array 738 may connect the electronic lock 730 to the security processing system 710 through the communication channel 760.
- The
electronic lock 730 may include a motor 734. The motor 734 may physically actuate a bolt in the electronic lock 730. In one embodiment, the motor 734 actuates one or more bolts along a door to lock the door. In another embodiment, the motor 734 may actuate a hook in a window to lock the window. The motor 734 may be controlled by commands originating in the electronic lock 730 or from commands originating in the security processing system 710.
- The
electronic lock 730 may include a solenoid 732. The solenoid 732 may physically actuate a bolt in the electronic lock 730. In one embodiment, the solenoid 732 actuates one or more bolts along a door to lock the door. In another embodiment, the solenoid 732 may actuate a hook in a window to lock the window. The solenoid 732 may be controlled by commands originating in the electronic lock 730 or from commands originating in the security processing system 710.
- Of course, the
electronic lock 730 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other engaging mechanisms may be included in the electronic lock 730, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
- The
video monitoring system 700 may include an input console 740. The input console 740 may communicate through the communication channel 760 to the security processing system 710. The input console 740 may include a power source 748. The power source 748 may include or employ one or more batteries or other energy source. In another embodiment, the power source 748 may include one or more solar cells or one or more fuel cells. In another embodiment, the power source 748 may include power from the building with the video monitoring system 700. In yet another embodiment, the power source 748 may include power through the communication channel 760 linking the input console 740 to the security processing system 710. The input console 740 may have multiple sources in the power source 748. In one embodiment, the input console 740 may include power through the communication channel 760 and a battery system as a back-up to ensure the input console 740 stays active if a power interruption occurs.
- The
input console 740 may have one or more input devices 741. The input devices 741 may include a keypad 742, a retinal scanner 744, or a fingerprint reader 746. The input console 740 may include more than one of the input devices 741. In one embodiment, the input console 740 may include a keypad 742 and a fingerprint reader 746 to support two-factor authentication. In one embodiment, the input console 740 may include a keypad 742, a retinal scanner 744, and a fingerprint reader 746 to support three-factor authentication.
- The
input console 740 may include a communication array 749 to handle communications between the input console 740 and the security processing system 710. In one embodiment, the communication array 749 may be equipped to communicate with a cellular network system. The communication array 749 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc. The communication array 749 may connect the input console 740 to the security processing system 710 through the communication channel 760.
- Of course, the
input console 740 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices may be included in the input console 740, such as a camera for facial recognition, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
- The
video monitoring system 700 may include one or more sensors 750 (hereafter “sensor”). The sensor 750 may communicate through the communication channel 760 to the security processing system 710. The sensor 750 may include a power source 756. The power source 756 may include or employ one or more batteries or other energy source. In another embodiment, the power source 756 may include one or more solar cells or one or more fuel cells. In another embodiment, the power source 756 may include power from the building with the video monitoring system 700. In yet another embodiment, the power source 756 may include power through the communication channel 760 linking the sensor 750 to the security processing system 710. The sensor 750 may have multiple sources in the power source 756. In one embodiment, the sensor 750 may include power through the communication channel 760 and a battery system as a back-up to ensure the sensor 750 stays active if a power interruption occurs.
- The
sensor 750 may have one or more sensor types 751. The sensor types 751 may include an audio sensor 752 or a contact sensor 754. The sensor 750 may include more than one of the sensor types 751. In one embodiment, the sensor 750 may include an audio sensor 752 and a contact sensor 754. This embodiment may secure a window by detecting when the window is closed with the contact sensor 754 and detecting if the window is broken with the audio sensor 752.
- The
sensor 750 may include a communication array 758 to handle communications between the sensor 750 and the security processing system 710. In one embodiment, the communication array 758 may be equipped to communicate with a cellular network system. The communication array 758 may include a WIFI or equivalent radio system, a local area network (LAN), hardwired system, etc. The communication array 758 may connect the sensor 750 to the security processing system 710 through the communication channel 760.
- Of course, the
sensor 750 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other types of sensors may be included in the sensor 750, such as a temperature sensor for detecting body heat, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
- The
security processing system 710 may take video from the camera 720 to monitor the area being secured by the video monitoring system 700. The security processing system 710 may recognize action in the video that is outside normal criteria. This action may include an intruder running up to the premises or a projectile approaching the premises. In one embodiment, the security processing system 710 may actuate the electronic locks 730 on the premises to secure the premises while sounding an alarm over the speaker 714 and turning on the security light 713. The security processing system 710 may also clip the video of the action sequence and send it to a security monitoring station, the home owner, or both, to preserve evidence of the intrusion. In another embodiment, the security processing system 710 may actuate the motor 734 in the electronic lock 730 to close and lock windows when the action recognized is rain. Many other actions can be recognized with the present system, with different actions having different responses. In one embodiment, the security processing system 710 may use the electronic lock 730 to secure a pet door when the video shows a raccoon approaching the pet door.
- Moreover, it is to be appreciated that
video monitoring system 700 may perform at least part of the method described herein including, for example, at least part of method 300 of FIG. 3 and/or at least part of method 400 of FIG. 4 and/or at least part of method 500 of FIG. 5 and/or at least part of method 600 of FIG. 6 and/or at least part of method 800 of FIG. 8.
- Referring to
FIG. 8, a flow chart for a video monitoring method 800 is illustratively shown, in accordance with an embodiment of the present invention. In block 810, monitor an area with a camera. In block 820, capture, by the camera, live video to provide a live video stream. In block 830, detect and identify a target action sequence in the live video stream using a multi-layer deep long short-term memory process on an attention factor that is based on a within-frame attention and a between-frame attention. In block 840, trigger an action to alert that a target action sequence has been detected.
- Embodiments described herein may be entirely hardware, entirely software, or a combination of both hardware and software elements. In a preferred embodiment, the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
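The loop of blocks 810 through 840 in FIG. 8 can be sketched as follows. This is a minimal sketch, not the patent's implementation: `detect` stands in for the deep 3D attention LSTM classifier of block 830, `alert` stands in for the speaker, light, or lock commands of block 840, and the sliding-window buffering is an assumed detail.

```python
def monitor(frames, detect, alert, window=16):
    """Slide a fixed-length buffer over the live stream (blocks 810-820)
    and, whenever the detector flags a target action sequence in the
    current window (block 830), trigger an alert (block 840)."""
    buffer = []
    alerts = []
    for i, frame in enumerate(frames):
        buffer.append(frame)
        if len(buffer) > window:
            buffer.pop(0)  # keep only the most recent `window` frames
        if len(buffer) == window and detect(buffer):
            alerts.append(i)
            alert(i)
    return alerts
```

In a deployment, `frames` would be the live video stream from the camera and `alert` would dispatch the response chosen for the recognized action category.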
- Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. A computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. The medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
- Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
- A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) may be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
- Reference in the specification to “one embodiment” or “an embodiment” of the present invention, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
- It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.
- The foregoing is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired to be protected by Letters Patent is set forth in the appended claims.
Claims (20)
1. A video monitoring system comprising:
a camera positioned to monitor an area and capture live video to provide a live video stream;
a security processing system including a processor and memory coupled to the processor, the processing system programmed to:
detect and identify a target action sequence in the live video stream using a multi-layer deep long short-term memory process on an attention factor that is based on a within-frame attention and a between-frame attention; and
trigger an action to alert that a target action sequence has been detected.
2. The system of claim 1 , further comprising one or more sensors capable of detecting a change of state.
3. The system of claim 2 , wherein the one or more sensors include a sensor selected from the group consisting of a temperature sensor, a contact sensor, and an audio sensor.
4. The system of claim 1 , further comprising a speaker that sounds an alarm when receiving the action from the security processing system.
5. The system of claim 1 , wherein the processing system is further programmed to recognize targeted action sequences when the video monitoring system is in both an activated state and a deactivated state.
6. The system of claim 1 , wherein the processing system is further programmed to record a video clip of the live video stream when the targeted action sequence is identified.
7. The system of claim 6 , wherein the processing system is further programmed to send the video clip offsite to a user or a security monitoring station.
8. The system of claim 6 , wherein a user selects the targeted action sequence from one or more targeted action sequences, wherein the one or more targeted action sequences include an action sequence selected from the group consisting of a human intrusion, an animal intrusion, and a rain intrusion.
9. The system of claim 1 , further comprising an electronic lock capable of changing a lock state responsive to receiving the action from the processing system.
10. The system of claim 9 , wherein the electronic lock can both close and secure a door connected to the electronic lock.
11. The system of claim 1 , further comprising an input console to transmit an activation command to the processing system when the activation command is entered by a user or a deactivation command to the processing system when the deactivation command is entered by a user.
12. The system of claim 11 , wherein the input console includes an input device selected from the group consisting of a keypad, a retinal scanner, and a fingerprint reader.
13. The system of claim 11 , wherein the deactivation command requires two-factor authentication of the user.
14. The system of claim 1 , wherein the within-frame attention and the between-frame attention use at least one of a softmax layer and a bidirectional long short-term memory process.
15. The system of claim 1 , wherein the within-frame attention and the between-frame attention include an attention selected from the group consisting of a hard attention and a soft attention.
16. The system of claim 1 , wherein the multi-layer deep long short-term memory process utilizes a cross-entropy loss function.
17. The system of claim 16 , wherein the cross-entropy loss function includes a function selected from the group consisting of a last time point cross-entropy loss function, an all-time point cross-entropy loss function, and a max-neighbor cross-entropy loss function.
18. The system of claim 1 , wherein the within-frame attention includes a multilayer perceptron feeding into a softmax layer.
19. A computer-implemented method for home security, the method comprising:
monitoring an area with a camera;
capturing, by the camera, live video to provide a live video stream;
detecting and identifying, by a processor, a target action sequence in the live video stream using a multi-layer deep long short-term memory process on an attention factor that is based on a within-frame attention and a between-frame attention; and
triggering, by the processor, an action to alert that a target action sequence has been detected.
20. The method of claim 19 , wherein the within-frame attention and the between-frame attention include an attention selected from the group consisting of a hard attention and a soft attention.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/479,430 US20170294091A1 (en) | 2016-04-06 | 2017-04-05 | Video-based action recognition security system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662318865P | 2016-04-06 | 2016-04-06 | |
US15/479,430 US20170294091A1 (en) | 2016-04-06 | 2017-04-05 | Video-based action recognition security system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170294091A1 true US20170294091A1 (en) | 2017-10-12 |
Family
ID=59998834
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/479,430 Abandoned US20170294091A1 (en) | 2016-04-06 | 2017-04-05 | Video-based action recognition security system |
US15/479,408 Active 2037-12-20 US10296793B2 (en) | 2016-04-06 | 2017-04-05 | Deep 3D attention long short-term memory for video-based action recognition |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/479,408 Active 2037-12-20 US10296793B2 (en) | 2016-04-06 | 2017-04-05 | Deep 3D attention long short-term memory for video-based action recognition |
Country Status (1)
Country | Link |
---|---|
US (2) | US20170294091A1 (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108629326A (en) * | 2018-05-14 | 2018-10-09 | 中国科学院自动化研究所 | The action behavior recognition methods of objective body and device |
CN109766936A (en) * | 2018-12-28 | 2019-05-17 | 西安电子科技大学 | Image change detection method based on information transmitting and attention mechanism |
EP3499411A1 (en) * | 2017-12-15 | 2019-06-19 | Accenture Global Solutions Limited | Capturing series of events in monitoring systems |
CN110222574A (en) * | 2019-05-07 | 2019-09-10 | 杭州智尚云科信息技术有限公司 | Production operation Activity recognition method, apparatus, equipment, system and storage medium based on structuring double fluid convolutional neural networks |
CN110222828A (en) * | 2019-06-12 | 2019-09-10 | 西安交通大学 | A kind of Unsteady Flow method for quick predicting based on interacting depth neural network |
CN110443102A (en) * | 2018-05-04 | 2019-11-12 | 北京眼神科技有限公司 | Living body faces detection method and device |
CN110569773A (en) * | 2019-08-30 | 2019-12-13 | 江南大学 | Double-flow network behavior identification method based on space-time significance behavior attention |
CN111104830A (en) * | 2018-10-29 | 2020-05-05 | 富士通株式会社 | Deep learning model for image recognition, training device and method of deep learning model |
EP3640902A3 (en) * | 2018-10-17 | 2020-05-06 | Tata Consultancy Services Limited | System and method for authenticating humans based on behavioral pattern |
US10691949B2 (en) * | 2016-11-14 | 2020-06-23 | Axis Ab | Action recognition in a video sequence |
CN111738037A (en) * | 2019-03-25 | 2020-10-02 | 广州汽车集团股份有限公司 | Automatic driving method and system and vehicle |
CN112137591A (en) * | 2020-10-12 | 2020-12-29 | 平安科技(深圳)有限公司 | Target object position detection method, device, equipment and medium based on video stream |
EP3907652A1 (en) * | 2020-05-08 | 2021-11-10 | Kepler Vision Technologies B.V. | Method for adapting the quality and/or frame rate of a live video stream based upon pose |
EP3910541A1 (en) * | 2020-05-08 | 2021-11-17 | Kepler Vision Technologies B.V. | Method for adapting the quality and/or frame rate of a live video stream based upon pose |
US11735018B2 (en) | 2018-03-11 | 2023-08-22 | Intellivision Technologies Corp. | Security system with face recognition |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017136784A1 (en) * | 2016-02-05 | 2017-08-10 | Google Inc. | Generative neural networks |
US10176388B1 (en) * | 2016-11-14 | 2019-01-08 | Zoox, Inc. | Spatial and temporal information for semantic segmentation |
CN107679522B (en) * | 2017-10-31 | 2020-10-13 | 内江师范学院 | Multi-stream LSTM-based action identification method |
KR102462426B1 (en) | 2017-12-14 | 2022-11-03 | 삼성전자주식회사 | Electronic device and method for analyzing meaning of speech |
CN108182260B (en) * | 2018-01-03 | 2021-06-08 | 华南理工大学 | Multivariate time sequence classification method based on semantic selection |
CN108256451B (en) * | 2018-01-05 | 2022-09-27 | 百度在线网络技术(北京)有限公司 | Method and device for detecting human face |
CN108399454A (en) * | 2018-03-05 | 2018-08-14 | 山东领能电子科技有限公司 | A kind of completely new sectional convolution neural network target recognition |
CN108388348B (en) * | 2018-03-19 | 2020-11-24 | 浙江大学 | Myoelectric signal gesture recognition method based on deep learning and attention mechanism |
CN108710865B (en) * | 2018-05-28 | 2022-04-22 | 电子科技大学 | Driver abnormal behavior detection method based on neural network |
US11030518B2 (en) | 2018-06-13 | 2021-06-08 | United States Of America As Represented By The Secretary Of The Navy | Asynchronous artificial neural network architecture |
CN108776796B (en) * | 2018-06-26 | 2021-12-03 | 内江师范学院 | Action identification method based on global space-time attention model |
CN109409209A (en) * | 2018-09-11 | 2019-03-01 | 广州杰赛科技股份有限公司 | Human behavior recognition method and apparatus |
US10938840B2 (en) * | 2018-10-15 | 2021-03-02 | Microsoft Technology Licensing, Llc | Neural network architectures employing interrelatedness |
CN109754404B (en) * | 2019-01-02 | 2020-09-01 | 清华大学深圳研究生院 | End-to-end tumor segmentation method based on multi-attention mechanism |
CN110084259B (en) * | 2019-01-10 | 2022-09-20 | 谢飞 | Facial paralysis grading comprehensive evaluation system combining facial texture and optical flow characteristics |
KR20200116763A (en) | 2019-04-02 | 2020-10-13 | 삼성전자주식회사 | Method and apparatus for processing similarity using key-value coupling |
EP3731203B1 (en) | 2019-04-24 | 2023-05-31 | Carrier Corporation | Alarm system |
CN110263916B (en) * | 2019-05-31 | 2021-09-10 | 腾讯科技(深圳)有限公司 | Data processing method and device, storage medium and electronic device |
CN110348321A (en) * | 2019-06-18 | 2019-10-18 | 杭州电子科技大学 | Human motion recognition method based on skeletal spatio-temporal features and long short-term memory network |
US11373407B2 (en) | 2019-10-25 | 2022-06-28 | International Business Machines Corporation | Attention generation |
CN111639652B (en) * | 2020-04-28 | 2024-08-20 | 博泰车联网(南京)有限公司 | Image processing method, device and computer storage medium |
CN111506822B (en) * | 2020-05-28 | 2023-08-18 | 支付宝(杭州)信息技术有限公司 | Data coding and information recommending method, device and equipment |
CN113255616B (en) * | 2021-07-07 | 2021-09-21 | 中国人民解放军国防科技大学 | Video behavior identification method based on deep learning |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9715543B2 (en) * | 2007-02-28 | 2017-07-25 | Aol Inc. | Personalization techniques using image clouds |
US9807473B2 (en) * | 2015-11-20 | 2017-10-31 | Microsoft Technology Licensing, Llc | Jointly modeling embedding and translation to bridge video and language |
US10242266B2 (en) * | 2016-03-02 | 2019-03-26 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for detecting actions in videos |
US20170262996A1 (en) * | 2016-03-11 | 2017-09-14 | Qualcomm Incorporated | Action localization in sequential data with attention proposals from a recurrent network |
2017
- 2017-04-05 US US15/479,430 patent/US20170294091A1/en not_active Abandoned
- 2017-04-05 US US15/479,408 patent/US10296793B2/en active Active
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10691949B2 (en) * | 2016-11-14 | 2020-06-23 | Axis Ab | Action recognition in a video sequence |
EP3499411A1 (en) * | 2017-12-15 | 2019-06-19 | Accenture Global Solutions Limited | Capturing series of events in monitoring systems |
US10417502B2 (en) | 2017-12-15 | 2019-09-17 | Accenture Global Solutions Limited | Capturing series of events in monitoring systems |
US11735018B2 (en) | 2018-03-11 | 2023-08-22 | Intellivision Technologies Corp. | Security system with face recognition |
CN110443102A (en) * | 2018-05-04 | 2019-11-12 | 北京眼神科技有限公司 | Living body face detection method and device |
CN108629326A (en) * | 2018-05-14 | 2018-10-09 | 中国科学院自动化研究所 | Action behavior recognition method and device for a target body |
EP3640902A3 (en) * | 2018-10-17 | 2020-05-06 | Tata Consultancy Services Limited | System and method for authenticating humans based on behavioral pattern |
US11361190B2 (en) * | 2018-10-29 | 2022-06-14 | Fujitsu Limited | Deep learning model used for image recognition and training apparatus of the model and method thereof |
CN111104830A (en) * | 2018-10-29 | 2020-05-05 | 富士通株式会社 | Deep learning model for image recognition, training device and method of deep learning model |
EP3648007A1 (en) * | 2018-10-29 | 2020-05-06 | Fujitsu Limited | Deep learning model used for image recognition and training apparatus of the model and method thereof |
CN109766936A (en) * | 2018-12-28 | 2019-05-17 | 西安电子科技大学 | Image change detection method based on information transmitting and attention mechanism |
CN111738037A (en) * | 2019-03-25 | 2020-10-02 | 广州汽车集团股份有限公司 | Automatic driving method and system and vehicle |
CN110222574A (en) * | 2019-05-07 | 2019-09-10 | 杭州智尚云科信息技术有限公司 | Production operation activity recognition method, apparatus, device, system and storage medium based on structured two-stream convolutional neural networks |
CN110222828A (en) * | 2019-06-12 | 2019-09-10 | 西安交通大学 | Rapid unsteady-flow prediction method based on coupled deep neural networks |
CN110569773A (en) * | 2019-08-30 | 2019-12-13 | 江南大学 | Two-stream network behavior recognition method based on spatio-temporal saliency action attention |
EP3907652A1 (en) * | 2020-05-08 | 2021-11-10 | Kepler Vision Technologies B.V. | Method for adapting the quality and/or frame rate of a live video stream based upon pose |
EP3910541A1 (en) * | 2020-05-08 | 2021-11-17 | Kepler Vision Technologies B.V. | Method for adapting the quality and/or frame rate of a live video stream based upon pose |
US12039802B2 (en) | 2020-05-08 | 2024-07-16 | Kepler Vision Technologies B.V. | Method for adapting the quality and/or frame rate of a live video stream based upon pose |
CN112137591A (en) * | 2020-10-12 | 2020-12-29 | 平安科技(深圳)有限公司 | Target object position detection method, device, equipment and medium based on video stream |
Also Published As
Publication number | Publication date |
---|---|
US20170293804A1 (en) | 2017-10-12 |
US10296793B2 (en) | 2019-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10296793B2 (en) | Deep 3D attention long short-term memory for video-based action recognition | |
US11196966B2 (en) | Identifying and locating objects by associating video data of the objects with signals identifying wireless devices belonging to the objects | |
US11232685B1 (en) | Security system with dual-mode event video and still image recording | |
US20180357870A1 (en) | Behavior-aware security systems and associated methods | |
US11217076B1 (en) | Camera tampering detection based on audio and video | |
US11647165B1 (en) | Audio/video recording and communication doorbell devices including transistor assemblies, and associated systems and methods | |
US11115629B1 (en) | Confirming package delivery using audio/video recording and communication devices | |
US10593174B1 (en) | Automatic setup mode after disconnect from a network | |
US10755537B1 (en) | Implementing deterrent protocols in response to detected security events | |
WO2018157092A1 (en) | Identification of suspicious persons using audio/video recording and communication devices | |
US10212778B1 (en) | Face recognition systems with external stimulus | |
US10762769B1 (en) | Sending signals for help during an emergency event | |
US11164435B1 (en) | Audio/video recording and communication doorbell devices with supercapacitors | |
US10713928B1 (en) | Arming security systems based on communications among a network of security systems | |
US10922547B1 (en) | Leveraging audio/video recording and communication devices during an emergency situation | |
US11659144B1 (en) | Security video data processing systems and methods | |
US10943442B1 (en) | Customized notifications based on device characteristics | |
US20230386305A1 (en) | Artificial Intelligence (AI)-Based Security Systems for Monitoring and Securing Physical Locations | |
William et al. | Software Reliability Analysis with Various Metrics using Ensembling Machine Learning Approach | |
US11735017B2 (en) | Artificial intelligence (AI)-based security systems for monitoring and securing physical locations | |
Lashmi et al. | Ambient intelligence and IoT based decision support system for intruder detection | |
US10834366B1 (en) | Audio/video recording and communication doorbell devices with power control circuitry | |
US20240312254A1 (en) | Method for adapting the quality and/or frame rate of a live video stream based upon pose | |
EP3907652A1 (en) | Method for adapting the quality and/or frame rate of a live video stream based upon pose | |
Menaga et al. | A Smart Intruder Detection System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NEC LABORATORIES AMERICA, INC., NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MIN, RENQIANG;GAO, YANG;COSATTO, ERIC;REEL/FRAME:041855/0314 Effective date: 20170330 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |