CN114995698A - Image processing method and device - Google Patents
- Publication number
- CN114995698A (application CN202210689328.3A)
- Authority
- CN
- China
- Prior art keywords
- target
- image
- input
- processing
- processing mode
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F 3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F 3/04842 — Selection of displayed objects or displayed text elements
- G06F 3/04845 — GUI interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F 3/04883 — GUI interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06F 40/166 — Handling natural language data; text editing, e.g. inserting or deleting
- G06F 9/451 — Execution arrangements for user interfaces
- G06V 30/1456 — Character recognition; selective acquisition, locating or processing of specific regions based on user interactions
Abstract
The application discloses an image processing method and device, belonging to the field of image technology. The method includes: receiving a first input while a target interface is displayed; in response to the first input, generating a target image according to the display content of the target interface; receiving a second input; and in response to the second input, processing the target image in a target processing mode, where the target processing mode is at least one processing mode corresponding to the second input.
Description
Technical Field
The application belongs to the field of image technology, and in particular relates to an image processing method and device.
Background
At present, users frequently use the screen capture and screen recording functions of electronic devices. For example, while browsing a short video, a user may see text, pictures, or other content of interest and trigger the screen capture function, so that the electronic device generates a screenshot based on the content currently displayed on the screen.
In the prior art, screenshots, recorded videos, and the like are all stored uniformly in a designated album of the gallery. When a user wants to view a captured screenshot or recorded video, the target picture or video must be searched for among a large number of pictures and videos in the album, so further processing of a captured screenshot or recorded video involves cumbersome operations.
Disclosure of Invention
Embodiments of the present application aim to provide an image processing method that solves the prior-art problem of cumbersome operation when a user processes a captured screenshot or recorded video.
In a first aspect, an embodiment of the present application provides an image processing method, the method including: receiving a first input while a target interface is displayed; in response to the first input, generating a target image according to the display content of the target interface; receiving a second input; and in response to the second input, processing the target image in a target processing mode, where the target processing mode is at least one processing mode corresponding to the second input.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: a first receiving module, configured to receive a first input while a target interface is displayed; a generating module, configured to generate, in response to the first input, a target image according to the display content of the target interface; a second receiving module, configured to receive a second input; and a processing module, configured to process, in response to the second input, the target image in a target processing mode, where the target processing mode is at least one processing mode corresponding to the second input.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In this way, in the embodiments of the present application, the user triggers the screen capture function or the screen recording function through the first input, and in response to the first input a target image is generated according to the content of the currently displayed target interface. Further, if it is detected that the user has set a target processing mode through the second input, the target image is automatically processed in that mode after it is generated. Therefore, based on the embodiments of the present application, captured screenshots and recorded videos can be processed according to the processing mode entered by the user, which simplifies user operation.
Drawings
FIG. 1 is a flow chart of an image processing method according to an embodiment of the present application;
fig. 2 to 11 are display schematic diagrams of an electronic device according to an embodiment of the present application;
fig. 12 is a block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 13 is a first schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application;
fig. 14 is a second schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly below with reference to the drawings of the embodiments of the present application. It is evident that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that a person of ordinary skill in the art can derive from the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one class, and the number of such objects is not limited; for example, a first object may be one object or multiple objects. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
In the image processing method provided by the embodiments of the present application, the execution subject may be the image processing apparatus provided by the embodiments of the present application, or an electronic device integrating the image processing apparatus, where the image processing apparatus may be implemented in hardware or software.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present application, described here, by way of example, as applied to an electronic device; the method includes:
step 110: in a case where a target interface is displayed, a first input is received.
The first input includes a touch input performed by the user on the screen, such as, but not limited to, a click, slide, or drag; the first input may also be a mid-air input by the user, such as a gesture or facial action; the first input further includes the user's input to a physical key on the device, such as, but not limited to, a press. Moreover, the first input may consist of one or more inputs, where multiple inputs may be consecutive or separated in time.
In this step, the first input triggers the screen capture function, so that the electronic device performs a screen capture operation on the target interface; or the first input triggers the screen recording function, so that the electronic device performs a screen recording operation on the target interface.
Optionally, the target interface is any interface displayed by a screen of the electronic device.
For example, in the case of displaying the target interface, the user double-clicks the screen, triggering the screen capture function.
Optionally, the screen capture functions provided by the application include a conventional screenshot function, a long (scrolling) screenshot function, a region screenshot function, a split-screen-mode screenshot function, a small-window-mode screenshot function, a multi-scene stitched screenshot function, and the like; the screen recording functions include a conventional screen recording function, an animation recording function, a split-screen-mode recording function, a small-window-mode recording function, a multi-scene stitched recording function, and the like.
Further, the first input is also used to select any of the above functions.
For example, when the user long-presses the screen, referring to fig. 2, a super screenshot floating window 201 is displayed over the target interface, and the screen capture and screen recording function options supported by the electronic device are displayed within the super screenshot floating window 201. The user then clicks any option within the super screenshot floating window 201.
Step 120: in response to the first input, generate a target image according to the display content of the target interface.
Optionally, the target image is a picture; optionally, the target image is a segment of video.
The display content of the target interface may be all of the content displayed on the target interface, or only part of it.
Step 130: a second input is received.
The second input includes a touch input performed by the user on the screen, such as, but not limited to, a click, slide, or drag; the second input may also be a mid-air input by the user, such as a gesture or facial action; the second input further includes the user's input to a physical key on the device, such as, but not limited to, a press. Moreover, the second input may consist of one or more inputs, where multiple inputs may be consecutive or separated in time.
In this step, the second input is used to set the target processing mode.
Optionally, the user may set the target processing mode in advance in a setting page of the screen capture function and the screen recording function.
For example, on the settings page of the screen capture and screen recording functions, the user clicks a "purpose setting" option and a first list is displayed. Multiple purpose options are shown in the first list, each purpose option representing one processing mode; the user then clicks any purpose option, such as the "extract text" option.
Optionally, while the target interface is displayed, the user sets the target processing mode before triggering the screen capture or screen recording function.
For example, referring to fig. 2, a "purpose setting" option is provided in the super screenshot floating window 201. The user clicks the "purpose setting" option and, referring to fig. 3, a first list 301 is displayed. Multiple purpose options are shown in the first list 301, each purpose option representing one processing mode; the user then clicks any purpose option, for example the control 302 corresponding to the "extract text" option. The user then returns to the super screenshot floating window 201 and clicks the "long screenshot" function option to trigger the long screenshot function.
Step 140: in response to the second input, process the target image in a target processing mode, where the target processing mode is at least one processing mode corresponding to the second input.
Optionally, for a subsequent period of time, whenever the user's first input is received, the resulting target image is processed in the target processing mode by default, until it is detected that the user has updated the target processing mode.
In this way, in the embodiments of the present application, the user triggers the screen capture function or the screen recording function through the first input, and in response to the first input a target image is generated according to the content of the currently displayed target interface. Further, if it is detected that the user has set a target processing mode through the second input, the target image is automatically processed in that mode after it is generated. Therefore, based on the embodiments of the present application, captured screenshots and recorded videos can be processed automatically according to the processing mode entered by the user, avoiding manual processing by the user and simplifying user operation.
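Although the disclosure contains no source code, the flow of steps 110 to 140 can be pictured with a short sketch. The Kotlin below is purely illustrative: `ScreenCaptureController`, `captureTargetInterface`, `applyMode`, and the mode names are invented for this example and do not come from the patent.

```kotlin
// Minimal sketch of the claimed flow (illustrative only; all names
// here are invented, not taken from the patent text).
enum class TargetProcessingMode { EXTRACT_TEXT, STORE_NOTE, SET_WALLPAPER }

class ScreenCaptureController(
    // Hypothetical hook that captures the displayed target interface.
    private val captureTargetInterface: () -> ByteArray,
    // Hypothetical hook that applies one processing mode to the image.
    private val applyMode: (TargetProcessingMode, ByteArray) -> Unit
) {
    // Persists across captures until the user updates it (see the
    // optional default behaviour described above).
    private var targetModes: List<TargetProcessingMode> = emptyList()

    // Second input: the user selects one or more processing modes.
    fun onSecondInput(modes: List<TargetProcessingMode>) {
        targetModes = modes
    }

    // First input: generate the target image from the display content,
    // then process it with each currently selected mode.
    fun onFirstInput() {
        val targetImage = captureTargetInterface()
        targetModes.forEach { mode -> applyMode(mode, targetImage) }
    }
}
```

The point of the sketch is only the ordering: the mode set by the second input persists, so every later first input yields a capture that is processed without further user action.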
In the flow of an image processing method according to another embodiment of the present application, the target processing mode is an image processing mode supported by the electronic device, and the target processing mode includes at least one of the following:
First, extract the text content from the target image, and display the text content in an editable state.
In this processing mode, the text recognition function supported by the electronic device is used to automatically extract the text content contained in the target image, and the extracted text content is displayed.
For example, referring to fig. 4, the generated target image 401 is displayed floating at the lower left of the target interface; at the same time, a floating window 402 pops up on the target interface, and the text extracted from the target image 401 is displayed in the floating window 402.
Moreover, the displayed text content is editable: the user can delete from it or add to it.
For example, referring to FIG. 4, the user may directly copy the textual content displayed in the floating window 402 into the associated edit box.
Optionally, the processing mode option corresponding to the step is "extract text".
Second, extract the text content from the target image, generate a target note from the text content, and display prompt information.
In this processing mode, the text recognition function and the automatic note-creation function supported by the electronic device are used: the text content contained in the target image is extracted automatically, and a new note, namely the target note, is created based on the extracted text content, the note content being the extracted text.
For example, the user opens a "Notes" program and taps the newly created note; referring to fig. 5, a note page 501 is displayed, in which the text content extracted from the captured screenshot or recorded video is shown.
Optionally, when the target image is generated, a prompt indicating that the target note has been created is displayed for the user to view.
For example, referring to fig. 6, the lower left corner of the target interface displays a target image 601; a floating window 602 pops up within the target interface and displays the text content used to create the target note together with the prompt information.
Optionally, in this step, after the target note is generated, the display jumps directly from the target interface to the "Notes" program interface to show the page corresponding to the target note.
For example, referring to fig. 6, the lower left corner of the target interface displays a target image 601; a floating window 602 pops up within the target interface and displays the text content used to create the target note together with the prompt information. Next, the display jumps from the target interface to the interface shown in fig. 5 to show the note page corresponding to the target note.
In a screen recording scenario, the text content displayed in the floating window 602 is continuously updated as the interface content updates.
Optionally, the processing mode option corresponding to this step is "store notes".
Third, set the target image as a background image of the electronic device, and display prompt information.
In this processing manner, the target image is automatically set as the background image by using the function of setting the background image supported by the electronic device.
For example, the target image is set as a wallpaper image.
Alternatively, in the case of generating the target image, prompt information for setting the background image is displayed for the user to view.
Optionally, the processing method option corresponding to this step is "set to wallpaper".
Fourth, recognize an image object in the target image, and display the image object.
In this processing manner, the image object included in the target image is automatically recognized by using the function of image recognition supported by the electronic device.
Optionally, the image object includes a photograph, graphic symbol, or the like contained in the target image.
Alternatively, in the case of generating the target image, the recognized image object is displayed.
For example, the lower left corner of the target interface displays the target image, a floating window pops up within the target interface, and the floating window displays the image objects included in the target image.
Optionally, the processing mode option corresponding to this step is "identify photo".
Fifth, translate character objects of a first type in the target image, and display character objects of a second type corresponding to the character objects of the first type.
In the processing mode, the first type of character objects included in the target image are automatically translated into the corresponding second type of character objects by utilizing the character translation function supported by the electronic equipment.
For example, English appearing in the target image is translated into Chinese characters.
Optionally, when the target image is generated, the translated result is displayed; in addition, the text content before translation may be displayed alongside it.
For example, the lower left corner of the target interface displays a target image, a floating window pops up in the target interface, and the floating window displays the text content before and after translation.
Optionally, the processing mode option corresponding to the step is "word translation".
Sixth, identify a file in the target image, and display the file.
In the processing method, a file included in a target image is automatically recognized using a function of file recognition supported by an electronic device.
Optionally, the files include various types of content, such as music, videos, news pages, and official-account articles.
Alternatively, in the case of generating the target image, the identified file is displayed.
For example, referring to fig. 7, the lower left corner of the target interface displays a target image 701, a floating window 702 pops up above the target interface, and the floating window 702 displays a file included in the target image.
Optionally, a processing mode option corresponding to the step is "file identification".
In this embodiment, several processing modes that can be applied to the target image are provided using functions supported by the electronic device, so that the user no longer needs to manually process the target image with programs built into the electronic device's system or separately downloaded programs, which simplifies user operation.
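As an illustration of how the processing modes above might be modeled, the following sketch expresses three of them as interchangeable strategies. All names are invented; the `deviceOcr`, `createNote`, and `setBackgroundImage` stubs stand in for whatever text recognition, note creation, and wallpaper capabilities the electronic device actually provides.

```kotlin
// Illustrative sketch only: three of the six modes above, modeled as
// strategies. The stubs at the bottom are assumed device capabilities,
// not real APIs named by the patent.
sealed interface ProcessingMode {
    fun process(targetImage: ByteArray)
}

object ExtractText : ProcessingMode {
    override fun process(targetImage: ByteArray) {
        val text = deviceOcr(targetImage)     // text recognition (mode 1)
        showEditableFloatingWindow(text)      // editable display
    }
}

object StoreNote : ProcessingMode {
    override fun process(targetImage: ByteArray) {
        val text = deviceOcr(targetImage)
        createNote(text)                      // auto-created note (mode 2)
        showPrompt("Target note created")
    }
}

object SetWallpaper : ProcessingMode {
    override fun process(targetImage: ByteArray) {
        setBackgroundImage(targetImage)       // background image (mode 3)
        showPrompt("Set as wallpaper")
    }
}

// Stubs standing in for capabilities the electronic device provides.
fun deviceOcr(image: ByteArray): String = TODO("device text recognition")
fun createNote(text: String): Unit = TODO("device note creation")
fun setBackgroundImage(image: ByteArray): Unit = TODO("device wallpaper API")
fun showEditableFloatingWindow(text: String): Unit = TODO("UI display")
fun showPrompt(message: String): Unit = TODO("UI prompt")
```

The remaining modes (image object recognition, translation, file identification) would follow the same pattern, each wrapping a different device capability.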
In the flow of the image processing method according to another embodiment of the present application, step 140 includes:
Substep A1: store the target image in a target folder corresponding to the target processing mode.
In this embodiment, the target images are stored in the target folders corresponding to the target processing manners, so that the target images are classified and managed.
Optionally, in combination with the previous embodiment, after the target image is stored in the target folder, it is retrieved from the target folder to perform the subsequent steps, such as text extraction.
Optionally, when the target processing mode is an image processing mode not supported by the electronic device, the target image is not processed in the corresponding mode but can be stored directly in the target folder corresponding to that processing. Thus, even when the electronic device cannot perform the corresponding image processing on the target image, the user is still spared from searching through a large number of images: the images in the target folder all await the same kind of processing, so the user does not need to remember each image individually, which simplifies user operation.
Optionally, when the target processing mode includes multiple processing modes, the target image may be stored in the folder corresponding to each of those processing modes.
In this embodiment, on the one hand, captured screenshots and recorded videos can be classified and managed according to the target processing mode set by the user, sparing the user from searching through a large number of images and simplifying user operation; on the other hand, a generated target image can be processed automatically when the image processing can be completed by the electronic device, and collected for convenient centralized viewing and processing by the user when it cannot, which likewise simplifies user operation.
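A minimal sketch of this classification step, assuming one folder per processing mode; the folder layout and file naming are invented for illustration. For a mode the device cannot execute, storing the image is the only step performed.

```kotlin
import java.io.File

// Illustrative only: store the target image under the folder of each
// selected processing mode (folder names and file naming invented).
fun storeByMode(imageBytes: ByteArray, modeNames: List<String>, albumRoot: File) {
    for (mode in modeNames) {
        val folder = File(albumRoot, mode).apply { mkdirs() }
        val target = File(folder, "capture_${System.currentTimeMillis()}.png")
        target.writeBytes(imageBytes)
    }
}
```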
In the flow of the image processing method according to another embodiment of the present application, step 130 includes at least any one of the following:
Substep B1: receive a second input to a target control, the target control indicating a target processing mode.
In this step, a plurality of controls may be generated for the functions supported by the electronic device for selection by the user.
For example, referring to fig. 8, a "system capability" option 801 is provided in the first list. The user clicks the "system capability" option 801 and, referring to fig. 9, a second list 901 is displayed. Multiple controls are shown in the second list 901, each control representing one purpose whose corresponding processing mode is supported by the electronic device's system. The user then clicks any control in the second list 901 as the target control.
Substep B2: receive a second input of target processing information, the target processing information indicating a target processing mode.
In this step, the user can customize the target processing mode by describing it.
For example, referring to fig. 8, a "custom purpose" option 802 is provided in the first list. The user clicks the "custom purpose" option 802 and, referring to fig. 10, an input box 10001 is displayed, in which the user enters target processing information describing the desired target processing mode.
Optionally, after the target processing information entered by the user is received, it is identified whether the target processing mode described by the information is a processing mode supported by the electronic device; if so, the processing is performed directly, and if not, the target image is stored in a designated folder.
Referring to fig. 3, the other options provided in the first list 301 are commonly used options.
This embodiment provides two methods for setting the target processing mode, and the user can choose whichever method suits the need. Using these methods, the user can also set multiple processing modes as the target processing mode. Thus, based on this embodiment, the user can either select a processing mode supported by the electronic device or define a custom one, which meets more user needs while simplifying user operation.
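The two setting methods can be pictured with a small resolution sketch: a tapped target control names a supported mode directly (substep B1), while free-text target processing information is matched against descriptions of supported modes (substep B2). The matching rule and mode names here are invented, not taken from the patent.

```kotlin
// Illustrative mapping from purpose descriptions to supported modes;
// both the descriptions and the matching heuristic are invented.
val supportedModes = mapOf(
    "extract text" to "EXTRACT_TEXT",
    "store notes" to "STORE_NOTE",
    "set to wallpaper" to "SET_WALLPAPER"
)

fun resolveSecondInput(controlModeId: String?, customText: String?): String? =
    when {
        controlModeId != null -> controlModeId  // substep B1: target control
        customText != null ->                   // substep B2: description
            supportedModes.entries
                .firstOrNull { customText.contains(it.key, ignoreCase = true) }
                ?.value                         // null => store-only fallback
        else -> null
    }
```

A `null` result corresponds to the unsupported case above: the device skips the processing and only stores the target image in the designated folder.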
In the flow of an image processing method according to another embodiment of the present application, the at least one processing mode corresponding to the second input includes a first target processing mode and a second target processing mode; correspondingly, step 140 includes:
Substep C1: process the target image in the first target processing mode and the second target processing mode according to a target order.
In this embodiment, the user may set multiple processing modes through the second input; correspondingly, when the target image is processed, it is processed sequentially in the preset processing modes.
Optionally, when the target image is processed in multiple processing modes, the end point of the processing actions of one mode serves as the node at which the processing actions of the next mode begin, until all processing actions are completed.
Wherein the target order is associated with the input parameters of the second input.
For example, if in the second input the user sets the first target processing mode and then the second target processing mode, processing is performed first in the first target processing mode and then in the second target processing mode.
As another example, if in the second input the user sets the first and second target processing modes and then sets an order for the two modes, the set order is used as the target order.
For example, referring to fig. 11, the user clicks the "extract text" option 1101 and then the "store notes" option 1102 in the first list. After the target image is generated, it is first stored in the folder corresponding to "extract text" while the text content in it is extracted and displayed; the target image is then stored in the folder corresponding to "store notes" while the extracted text content is used to create a new note. The user can step through the processing result currently displayed in the interface using controls shown there, such as "previous step" and "next step".
This embodiment provides a method for combined processing of the target image, sparing the user multiple rounds of manual processing, further meeting the user's multi-step processing needs, and simplifying user operation.
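One way to picture this chained combination, where the end point of one mode's processing is the start point of the next, is a fold over the ordered mode list. This is an illustrative sketch with invented types, not the patent's implementation.

```kotlin
// Illustrative chain: the result of one processing step is handed to
// the next, in the user-selected target order.
data class ProcessingResult(val image: ByteArray, val text: String? = null)

fun interface ChainedMode {
    fun step(input: ProcessingResult): ProcessingResult
}

fun processInTargetOrder(
    targetImage: ByteArray,
    orderedModes: List<ChainedMode>  // e.g. [extract text, store notes]
): ProcessingResult =
    orderedModes.fold(ProcessingResult(targetImage)) { acc, mode -> mode.step(acc) }
```

In the fig. 11 example, the first step would attach the extracted text to the result, and the second step would consume that text to create the note.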
In other embodiments of the present application, the user may choose not to set a target processing mode for captured screenshots and recorded videos, in which case they are stored in the folder corresponding to the "no purpose" setting.
In other embodiments of the present application, when the user does not set a target processing mode for captured screenshots and recorded videos, they are stored using the existing storage path.
In other embodiments of the present application, after the target image is processed in the target processing mode, the processed image, the extracted text, the generated note, and so on may be stored in the target folder corresponding to the target processing mode, so that the user can view the processing results in one place.
In summary, the present application aims to provide a classification method suited to screenshots and screen recordings, which helps the user browse and search quickly and improves efficiency; it also simplifies the cumbersome steps of processing screenshot and recording images with separate software or programs; further, it provides a chained function of combined automatic processing, effectively improving image processing efficiency and saving cost. Based on this classification and automatic processing of screen captures and recordings, functions such as precise review of reading notes, temporary storage, and custom screen capture can be realized, improving the user's efficiency with the screen capture and recording functions and thereby the user experience.
In the image processing method provided by the embodiments of the present application, the execution subject may be an image processing apparatus. In the embodiments of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the image processing apparatus provided by the embodiments of the present application.
Fig. 12 shows a block diagram of an image processing apparatus of another embodiment of the present application, the apparatus including:
a first receiving module 10, configured to receive a first input in a case that a target interface is displayed;
a generating module 20, configured to generate a target image according to the display content of the target interface in response to the first input;
a second receiving module 30, configured to receive a second input;
and the processing module 40 is used for responding to the second input and processing the target image in a target processing mode, wherein the target processing mode is at least one processing mode corresponding to the second input.
In this way, in the embodiments of the present application, the user triggers the screen capture function or the screen recording function through the first input, and in response to the first input a target image is generated according to the content of the currently displayed target interface. Further, if it is detected that the user has set a target processing mode through the second input, the target image is automatically processed in that mode after it is generated. Therefore, based on the embodiments of the present application, a captured screenshot or recorded video can be processed automatically according to the processing mode entered by the user, avoiding manual processing by the user and simplifying user operation.
Optionally, the target processing mode is an image processing mode supported by the electronic device;
the target processing mode includes at least one of the following:
extracting the text content in the target image, and displaying the text content in an editable state;
extracting the text content in the target image, generating a target note according to the text content, and displaying prompt information;
setting the target image as a background image of the electronic equipment, and displaying prompt information;
identifying an image object in the target image and displaying the image object;
translating the first type of character objects in the target image, and displaying a second type of character objects corresponding to the first type of character objects;
and identifying the file in the target image and displaying the file.
Optionally, the processing module 40 includes:
and the storage unit is used for storing the target image to a target folder corresponding to the target processing mode.
Optionally, the second receiving module 30 includes at least any one of the following:
the first receiving unit is used for receiving second input of a target control, and the target control is used for indicating a target processing mode;
a second receiving unit for receiving a second input of target processing information, the target processing information being indicative of a target processing manner.
Optionally, the at least one processing manner corresponding to the second input includes: a first target processing mode and a second target processing mode;
a processing module 40 comprising:
the processing unit is used for processing the target image in a first target processing mode and a second target processing mode according to the target sequence;
wherein the target order is associated with the input parameters of the second input.
The image processing apparatus in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and may also be a server, Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited.
The image processing apparatus according to the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 13, an electronic device 100 is further provided in this embodiment of the present application, and includes a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and executable on the processor 101, where the program or the instruction is executed by the processor 101 to implement each step of any one of the above embodiments of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device according to the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 14 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (e.g., a battery) supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device: the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which will not be repeated here.
The user input unit 1007 is used for receiving a first input under the condition that a target interface is displayed; a processor 1010, configured to generate a target image according to display content of the target interface in response to the first input; a user input unit 1007 also used for receiving a second input; the processor 1010 is further configured to, in response to the second input, process the target image in a target processing manner, where the target processing manner is at least one processing manner corresponding to the second input.
In this way, in the embodiments of the present application, the user triggers the screen capture function or the screen recording function through the first input, and in response to the first input a target image is generated according to the content of the currently displayed target interface. Further, if it is detected that the user has set a target processing mode through the second input, the target image is automatically processed in that mode after it is generated. Therefore, based on the embodiments of the present application, a captured screenshot or recorded video can be processed automatically according to the processing mode entered by the user, avoiding manual processing by the user and simplifying user operation.
Optionally, the target processing mode is an image processing mode supported by the electronic device; the target processing mode comprises at least one of the following items: extracting the text content in the target image, and displaying the text content in an editable state; extracting the text content in the target image, generating a target note according to the text content, and displaying prompt information; setting the target image as a background image of the electronic equipment, and displaying prompt information; identifying an image object in the target image and displaying the image object; translating a first type of character object in the target image, and displaying a second type of character object corresponding to the first type of character object; and identifying the file in the target image and displaying the file.
Optionally, the processor 1010 is further configured to store the target image in a target folder corresponding to the target processing manner.
Optionally, the user input unit 1007 is further configured to receive the second input to a target control, where the target control is used to indicate the target processing manner; receiving the second input of target processing information, the target processing information indicating the target processing mode.
Optionally, the at least one processing manner corresponding to the second input includes: a first target processing mode and a second target processing mode; the processor 1010 is further configured to process the target image in the first target processing manner and the second target processing manner according to a target sequence; wherein the target order is associated with the input parameters of the second input.
In summary, the present application aims to provide a classification method suited to screenshots and screen recordings, which helps the user browse and search quickly and improves efficiency; it also simplifies the cumbersome steps of processing screenshot and recording images with separate software or programs; further, it provides a chained function of combined automatic processing, effectively improving image processing efficiency and saving cost. Based on this classification and automatic processing of screen captures and recordings, functions such as precise review of reading notes, temporary storage, and custom screen capture can be realized, greatly improving the user's efficiency with the screen capture and recording functions and improving the user experience.
It should be understood that, in the embodiments of the present application, the input unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042; the graphics processing unit 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which mainly handles the operating system, user interface, applications, and the like, and a modem processor, which mainly handles wireless communication. It is understood that the modem processor may alternatively not be integrated into the processor 1010.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required by at least one function (such as a sound playing function and an image playing function), and the like. Furthermore, the memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing image processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An image processing method, characterized in that the method comprises:
receiving a first input in the case of displaying a target interface;
in response to the first input, generating a target image according to the display content of the target interface;
receiving a second input;
and in response to the second input, processing the target image in a target processing mode, wherein the target processing mode is at least one processing mode corresponding to the second input.
2. The method of claim 1, wherein the target processing mode is an image processing mode supported by an electronic device;
the target processing mode comprises at least one of the following:
extracting the text content in the target image and displaying the text content in an editable state;
extracting the text content in the target image, generating a target note according to the text content, and displaying prompt information;
setting the target image as a background image of the electronic device and displaying prompt information;
identifying an image object in the target image and displaying the image object;
translating a first type of character object in the target image and displaying a second type of character object corresponding to the first type of character object; and
identifying the file in the target image and displaying the file.
3. The method of claim 1, wherein the processing the target image in a target processing mode in response to the second input comprises:
storing the target image to a target folder corresponding to the target processing mode.
4. The method of claim 1, wherein the receiving a second input comprises at least one of the following:
receiving the second input on a target control, the target control being used to indicate the target processing mode;
receiving the second input on target processing information, the target processing information being used to indicate the target processing mode.
5. The method of claim 1, wherein the at least one processing mode corresponding to the second input comprises a first target processing mode and a second target processing mode; and
the processing the target image in a target processing mode comprises:
processing the target image in the first target processing mode and the second target processing mode according to a target order,
wherein the target order is associated with an input parameter of the second input.
6. An image processing apparatus, characterized in that the apparatus comprises:
a first receiving module, configured to receive a first input while a target interface is displayed;
a generating module, configured to generate, in response to the first input, a target image according to the display content of the target interface;
a second receiving module, configured to receive a second input; and
a processing module, configured to process, in response to the second input, the target image in a target processing mode, wherein the target processing mode is at least one processing mode corresponding to the second input.
7. The apparatus of claim 6, wherein the target processing mode is an image processing mode supported by an electronic device;
the target processing mode comprises at least one of the following:
extracting the text content in the target image and displaying the text content in an editable state;
extracting the text content in the target image, generating a target note according to the text content, and displaying prompt information;
setting the target image as a background image of the electronic device and displaying prompt information;
identifying an image object in the target image and displaying the image object;
translating a first type of character object in the target image and displaying a second type of character object corresponding to the first type of character object; and
identifying the file in the target image and displaying the file.
8. The apparatus of claim 6, wherein the processing module comprises:
a storage unit, configured to store the target image to a target folder corresponding to the target processing mode.
9. The apparatus of claim 6, wherein the second receiving module comprises at least one of the following:
a first receiving unit, configured to receive the second input on a target control, wherein the target control is used to indicate the target processing mode;
a second receiving unit, configured to receive the second input on target processing information, wherein the target processing information is used to indicate the target processing mode.
10. The apparatus of claim 6, wherein the at least one processing mode corresponding to the second input comprises a first target processing mode and a second target processing mode; and
the processing module comprises:
a processing unit, configured to process the target image in the first target processing mode and the second target processing mode according to a target order,
wherein the target order is associated with an input parameter of the second input.
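For orientation only, the following is a minimal Python sketch of the flow recited in claims 1, 3, and 4: a first input captures the display content of the target interface as a target image, and a second input on a target control both selects the processing mode and determines the folder the image is stored in. All names here (`ScreenshotController`, `render_to_image`, the PIL-style `save`, the folder layout) are hypothetical stand-ins, not the claimed implementation.

```python
from pathlib import Path


def apply_mode(image, mode):
    # Placeholder dispatcher; the concrete modes of claim 2 are
    # sketched in the next example.
    print(f"processing image with mode: {mode}")


class ScreenshotController:
    """Illustrative flow for claims 1, 3 and 4; names are hypothetical."""

    def __init__(self, save_root="Screenshots"):
        self.save_root = Path(save_root)
        self.target_image = None

    def on_first_input(self, target_interface):
        # Claim 1: in response to the first input, generate a target image
        # from the display content of the target interface (assumed helper).
        self.target_image = target_interface.render_to_image()

    def on_second_input(self, target_control):
        # Claim 4: the target control indicates the target processing mode.
        mode = target_control.processing_mode
        if self.target_image is None:
            return
        # Claim 1: process the target image in the mode corresponding
        # to the second input.
        apply_mode(self.target_image, mode)
        # Claim 3: store the image in the folder corresponding to the mode.
        folder = self.save_root / mode
        folder.mkdir(parents=True, exist_ok=True)
        self.target_image.save(folder / "target_image.png")  # assumed PIL-style image
```

Keying the folder to the processing mode (claim 3) means later retrieval can be organized by how each screenshot was used.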
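Claim 2 (and its apparatus mirror, claim 7) enumerates the processing modes the device may support. One natural way to organize them is a dispatch table from mode names to handlers, as sketched below; the OCR, note, and wallpaper backends are deliberately stubbed, since the claims do not prescribe any particular engine, and all names are illustrative.

```python
def extract_text(image):
    # Stub standing in for any text-recognition (OCR) engine.
    return "recognized text"


def extract_and_edit(image):
    # Claim 2: extract the text content and display it in an editable state.
    print(f"[editable] {extract_text(image)}")


def generate_note(image):
    # Claim 2: extract text, generate a target note, display prompt information.
    note = extract_text(image)
    print(f"[prompt] note created: {note!r}")


def set_wallpaper(image):
    # Claim 2: set the target image as the device background and show a prompt.
    print("[prompt] wallpaper updated")


# Hypothetical mode names; "identify object", "translate" and
# "identify file" would follow the same handler pattern.
MODE_HANDLERS = {
    "extract_text": extract_and_edit,
    "generate_note": generate_note,
    "set_wallpaper": set_wallpaper,
}


def apply_mode(image, mode):
    MODE_HANDLERS[mode](image)
```

A dispatch table keeps each claimed mode independently testable, and adding a new supported mode is a one-entry change.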
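Claims 5 and 10 let a single second input select two processing modes and fix the order in which they run, the order being tied to an input parameter of that input. The claims leave the parameter open; the sketch below assumes, purely for illustration, that it is a gesture direction.

```python
def process_in_order(image, first_mode, second_mode, gesture_direction):
    """Apply two target processing modes in an order derived from the
    second input's parameter (here an assumed gesture direction)."""
    ordered = [first_mode, second_mode]
    if gesture_direction == "right_to_left":  # hypothetical encoding
        ordered.reverse()
    for mode in ordered:
        mode(image)  # each mode is a callable taking the target image


# Usage with two trivial stand-in modes: prints "set wallpaper",
# then "extract text", because the order is reversed.
process_in_order(
    image=None,
    first_mode=lambda img: print("extract text"),
    second_mode=lambda img: print("set wallpaper"),
    gesture_direction="right_to_left",
)
```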
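Claims 6 to 10 restate the same logic as an apparatus composed of receiving, generating, and processing modules. In software the module boundaries could look like the skeleton below; class, attribute, and helper names are again hypothetical, and `apply_mode` is reused from the claim 2 sketch above.

```python
class ImageProcessingApparatus:
    """Skeleton mirroring the module structure of claims 6-10."""

    def __init__(self):
        self.target_image = None

    # First receiving module + generating module (claim 6).
    def receive_first_input(self, target_interface):
        self.target_image = target_interface.render_to_image()  # assumed helper

    # Second receiving module + processing module (claim 6).
    def receive_second_input(self, target_control):
        mode = target_control.processing_mode  # claim 9: control indicates mode
        apply_mode(self.target_image, mode)    # dispatcher from the claim 2 sketch
```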
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---
CN202210689328.3A | 2022-06-17 | 2022-06-17 | Image processing method and device
Publications (1)
Publication Number | Publication Date |
---|---
CN114995698A (en) | 2022-09-02
Family ID: 83034688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---
CN202210689328.3A (Pending) | Image processing method and device | 2022-06-17 | 2022-06-17
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114995698A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104090762A (en) * | 2014-07-10 | 2014-10-08 | 福州瑞芯微电子有限公司 | Screenshot processing device and method |
CN104461474A (en) * | 2013-09-12 | 2015-03-25 | 北京三星通信技术研究有限公司 | Mobile terminal and screen-shooting method and device therefor |
CN105893498A (en) * | 2016-03-30 | 2016-08-24 | 努比亚技术有限公司 | Method and device for achieving screen capture and method and device for searching for images |
CN106293482A (en) * | 2016-08-24 | 2017-01-04 | 惠州Tcl移动通信有限公司 | A kind of mobile terminal and the method and system of wallpaper is set |
CN107682525A (en) * | 2017-08-29 | 2018-02-09 | 努比亚技术有限公司 | Screenshot processing method, terminal and computer-readable recording medium |
US20180060309A1 (en) * | 2016-08-24 | 2018-03-01 | International Business Machines Corporation | Automated translation of screen images for software documentation |
CN108459799A (en) * | 2018-01-26 | 2018-08-28 | 努比亚技术有限公司 | A kind of processing method of picture, mobile terminal and computer readable storage medium |
CN108921802A (en) * | 2018-06-29 | 2018-11-30 | 联想(北京)有限公司 | A kind of image processing method and device |
CN109814786A (en) * | 2019-01-25 | 2019-05-28 | 维沃移动通信有限公司 | Image storage method and terminal device |
CN110781688A (en) * | 2019-09-20 | 2020-02-11 | 华为技术有限公司 | Method and electronic device for machine translation |
CN111382289A (en) * | 2020-03-13 | 2020-07-07 | 闻泰通讯股份有限公司 | Picture display method and device, computer equipment and storage medium |
CN111601012A (en) * | 2020-05-28 | 2020-08-28 | 维沃移动通信(杭州)有限公司 | Image processing method and device and electronic equipment |
Similar Documents
Publication | Title
---|---
CN111381751A (en) | Text processing method and device
WO2016091095A1 (en) | Searching method and system based on touch operation on terminal interface
CN114302009A (en) | Video processing method, video processing device, electronic equipment and medium
CN116017043B (en) | Video generation method, device, electronic equipment and storage medium
WO2024160133A1 (en) | Image generation method and apparatus, electronic device, and storage medium
CN116910368A (en) | Content processing method, device, equipment and storage medium
CN114995698A (en) | Image processing method and device
CN112202958B (en) | Screenshot method and device and electronic equipment
CN115309487A (en) | Display method, display device, electronic equipment and readable storage medium
CN115437736A (en) | Method and device for recording notes
CN113835598A (en) | Information acquisition method and device and electronic equipment
WO2016101768A1 (en) | Terminal and touch operation-based search method and device
CN113794943A (en) | Video cover setting method and device, electronic equipment and storage medium
CN113253904A (en) | Display method, display device and electronic equipment
CN112287131A (en) | Information interaction method and information interaction device
CN113360684A (en) | Picture management method and device and electronic equipment
CN115131649A (en) | Content identification method and device and electronic equipment
CN117633273A (en) | Image display method, device, equipment and readable storage medium
CN117312595A (en) | Picture display method and device
CN117010326A (en) | Text processing method and device, and training method and device for text processing model
CN115499610A (en) | Video generation method, video generation device, electronic device, and storage medium
CN118214929A (en) | Playing progress adjusting method and device
CN115904095A (en) | Information input method and device, electronic equipment and readable storage medium
CN118885092A (en) | Information processing method, apparatus, electronic device, storage medium, and program product
CN115168078A (en) | Data processing method and device and electronic equipment
Legal Events
Date | Code | Title | Description |
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |