CN106249879A - Display method and terminal for a virtual reality image - Google Patents
Display method and terminal for a virtual reality image
- Publication number
- CN106249879A CN106249879A CN201610575613.7A CN201610575613A CN106249879A CN 106249879 A CN106249879 A CN 106249879A CN 201610575613 A CN201610575613 A CN 201610575613A CN 106249879 A CN106249879 A CN 106249879A
- Authority
- CN
- China
- Prior art keywords
- gesture information
- control area
- virtual reality
- instruction
- terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present invention provide a display method and terminal for a virtual reality image. The method includes: obtaining gesture information input in a virtual reality display area; judging whether the gesture information corresponds to a preset scaling instruction, where the preset scaling instruction includes a magnification instruction and a reduction instruction; if the gesture information corresponds to the preset scaling instruction, obtaining the control region corresponding to the gesture information, where the control region belongs to the virtual reality display area; and magnifying or reducing the image displayed in the control region according to the gesture information. The terminal of the embodiments of the present invention can achieve fine control of the virtual reality display area by gesture, and can zoom the image of a specific region.
Description
Technical field
The present invention relates to the field of electronic technology, and in particular to a display method and terminal for a virtual reality image.
Background technology
Virtual reality (VR) uses a computer to generate a simulated environment: an interactive, three-dimensional dynamic view, fused from multi-source information and combined with system simulation of entity behavior, in which the user is immersed. A user can place a terminal with a display screen, such as a smartphone or tablet computer, into virtual reality glasses to watch 3D video, tour virtual scenic spots, and so on.
Current VR glasses mainly control the movement of the VR focus by manually touching a screen, using the focus to select among the options in the dynamic view and thereby control the VR dynamic view. The VR focus is used for positioning within the dynamic view.
At present, EyeSight Technologies has developed a gesture control suitable for smartphones, which realizes contactless control input through the rear camera of the phone.
Although observing the hand's movement track through a camera is feasible, detecting and judging gestures with a camera inevitably still has a precision problem: the control instruction corresponding to a gesture cannot be obtained and judged accurately, so virtual reality (VR) glasses cannot be precisely controlled by gesture alone. Nor, when viewing photographs or video through VR glasses, can a certain region of the visual scene be magnified or reduced.
Summary of the invention
The embodiments of the present invention provide a display method and terminal for a virtual reality image, capable of zooming the image of a specific region.
In a first aspect, an embodiment of the present invention provides a display method for a virtual reality image, the method including:
obtaining gesture information input in a virtual reality display area;
judging whether the gesture information corresponds to a preset scaling instruction, where the preset scaling instruction includes a magnification instruction and a reduction instruction;
if the gesture information corresponds to the preset scaling instruction, obtaining the control region corresponding to the gesture information, where the control region belongs to the virtual reality display area;
magnifying or reducing the image displayed in the control region according to the gesture information.
In another aspect, an embodiment of the present invention provides a terminal, the terminal including:
a first acquiring unit, configured to obtain gesture information input in a virtual reality display area;
a judging unit, configured to judge whether the gesture information corresponds to a preset scaling instruction, where the preset scaling instruction includes a magnification instruction and a reduction instruction;
a second acquiring unit, configured to, if the gesture information corresponds to the preset scaling instruction, obtain the control region corresponding to the gesture information, where the control region belongs to the virtual reality display area;
a control unit, configured to magnify or reduce the image displayed in the control region according to the gesture information.
The embodiments of the present invention obtain the gesture information input in the virtual reality display area; judge whether the gesture information corresponds to a preset scaling instruction; if it does, obtain the control region corresponding to the gesture information; and magnify or reduce the image displayed in the control region according to the gesture information. Fine control of the virtual reality display area can thereby be achieved by gesture, zooming the image displayed in the virtual reality display area.
Accompanying drawing explanation
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a display method for a virtual reality image provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a display method for a virtual reality image provided by another embodiment of the present invention;
Fig. 3 is a schematic block diagram of a terminal provided by an embodiment of the present invention;
Fig. 4 is a schematic block diagram of a terminal provided by another embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be understood that, when used in this specification and the appended claims, the terms "include" and "comprise" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be construed, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be construed, depending on the context, to mean "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the present invention includes, but is not limited to, portable devices such as a mobile phone, laptop computer or tablet computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad). It should be further understood that in some embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
In the discussion below, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user-interface devices, such as a physical keyboard, a mouse and/or a joystick.
The terminal supports various application programs, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conference application, an e-mail application, an instant-messaging application, an exercise-support application, a photo-management application, a digital-camera application, a digital-video-camera application, a web-browsing application, a digital-music-player application and/or a video-player application.
The various application programs that can be executed on the terminal may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, and the corresponding information displayed on the terminal, may be adjusted and/or changed among the applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (such as the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Please refer to Fig. 1, a schematic flowchart of a display method for a virtual reality image provided by an embodiment of the present invention. In this embodiment, the executor of the display method is a terminal, namely a virtual reality display terminal equipped with a camera. The virtual reality display terminal may be virtual reality glasses, but is not limited thereto. As shown in Fig. 1, the display method for a virtual reality image may include the following steps.
S101: obtain gesture information input in the virtual reality display area.
The terminal obtains, through the camera, the gesture information that the user inputs in the virtual reality display area in a contactless manner.
S102: judge whether the gesture information corresponds to a preset scaling instruction.
The terminal judges whether the obtained gesture information corresponds to a preset scaling instruction.
The preset scaling instruction is set in advance and stored in the terminal. It may be several consecutive click operations input in any virtual reality display area, or an expansion operation or pinch operation input in any virtual reality display area, but is not limited thereto; it may be configured according to the actual situation, which is not limited here. An expansion operation is any two fingers moving apart in opposite directions. A pinch operation is any two fingers moving toward each other along the line segment formed by the two fingers.
Further, the preset scaling instruction includes a magnification instruction and a reduction instruction.
When the terminal determines that the obtained gesture information corresponds to the preset scaling instruction, it performs step S103; otherwise, it does no processing and ends this flow, or returns to step S101.
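The expansion/pinch distinction above can be sketched as a comparison of fingertip distances. The following is a minimal illustrative model, not the patent's implementation: the fingertip coordinates, the threshold, and the label strings are all assumptions.

```python
import math

def classify_gesture(start_points, end_points, threshold=10.0):
    """Classify a two-finger gesture against the preset scaling instructions.

    start_points / end_points: [(x, y), (x, y)] fingertip positions at the
    beginning and end of the gesture, as detected by the camera.
    Returns "zoom_in" (expansion), "zoom_out" (pinch), or None (no match).
    """
    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    d0 = distance(*start_points)
    d1 = distance(*end_points)
    if d1 - d0 > threshold:   # fingers moved apart: expansion operation
        return "zoom_in"
    if d0 - d1 > threshold:   # fingers closed along their segment: pinch operation
        return "zoom_out"
    return None               # neither preset scaling instruction matched
```

A gesture whose fingertip distance barely changes matches neither instruction, corresponding to the "otherwise, do no processing" branch of S102.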
S103: if the gesture information corresponds to the preset scaling instruction, obtain the control region corresponding to the gesture information, where the control region belongs to the virtual reality display area.
When the terminal determines that the obtained gesture information corresponds to the preset scaling instruction, it determines, within the virtual reality display area, the control region corresponding to the gesture information.
The control region belongs to the virtual reality display area. The control region corresponding to the gesture information is the control region selected by the gesture; it may be the region where the fingers overlap the virtual reality display area, or the contact region in the virtual reality display area corresponding to the fingers, but is not limited thereto.
It can be understood that when the terminal obtains the control region corresponding to the gesture information, the terminal may move the focus of the virtual reality display area to this control region. The focus is used to identify the region or position currently selected by the gesture information.
S104: magnify or reduce the image displayed in the control region according to the gesture information.
The terminal magnifies or reduces, according to the gesture information, the image displayed in the control region corresponding to the gesture information.
When the gesture information corresponds to the preset magnification instruction, the image displayed in the control region corresponding to the gesture information is magnified. When the gesture information corresponds to the preset reduction instruction, the image displayed in the control region corresponding to the gesture information is reduced.
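The S101–S104 flow can be sketched end to end. This is an illustrative outline only: the dictionary-based display model, the instruction labels, and the 2× / 0.5× factors are assumptions, not values stated in the patent.

```python
def handle_gesture(gesture, display):
    """Run steps S101-S104 on one captured gesture.

    gesture: dict with "instruction" and "region" keys (assumed shape).
    display: dict mapping region name -> current zoom scale of its image.
    Returns the control region that was zoomed, or None if no preset
    scaling instruction matched.
    """
    # S101: the gesture has already been captured by the camera.
    # S102: judge it against the preset scaling instructions.
    instruction = gesture.get("instruction")
    if instruction not in ("zoom_in", "zoom_out"):
        return None  # no match: end this flow (or re-enter S101)
    # S103: obtain the control region, a sub-region of the display area.
    region = gesture["region"]
    # S104: magnify or reduce the image shown in that region.
    factor = 2.0 if instruction == "zoom_in" else 0.5
    display[region] = display.get(region, 1.0) * factor
    return region
```

Running the function twice with opposite instructions returns the region's image to its original scale, which matches the symmetry of the magnification and reduction instructions.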
In the above scheme, the terminal obtains the gesture information input in the virtual reality display area; judges whether the gesture information corresponds to a preset scaling instruction; if it does, obtains the control region corresponding to the gesture information; and magnifies or reduces the image displayed in the control region according to the gesture information. Fine control of the virtual reality display area can thereby be achieved by gesture, zooming the image displayed in the control region corresponding to the gesture information within the virtual reality display area.
Please refer to Fig. 2, a schematic flowchart of a display method for a virtual reality image provided by another embodiment of the present invention. In this embodiment, the executor of the display method is a terminal, namely a virtual reality display terminal equipped with a camera. The virtual reality display terminal may be virtual reality glasses, but is not limited thereto. As shown in Fig. 2, the display method for a virtual reality image may include the following steps.
S201: obtain gesture information input in the virtual reality display area.
The terminal obtains, through the camera, the gesture information that the user inputs in the virtual reality display area in a contactless manner.
S202: judge whether the gesture information corresponds to a preset scaling instruction.
The terminal judges whether the obtained gesture information corresponds to a preset scaling instruction.
The preset scaling instruction is set in advance and stored in the terminal. The preset magnification instruction may be several consecutive click operations input in any virtual reality display area, or an expansion operation input in any virtual reality display area; the preset reduction instruction may be a pinch operation, but neither is limited thereto, and the preset magnification and reduction instructions may be configured according to the actual situation, which is not limited here. An expansion operation is any two fingers moving apart in opposite directions. A pinch operation is any two fingers moving toward each other along the line segment formed by the two fingers.
The preset scaling instruction includes a magnification instruction and a reduction instruction.
When the terminal determines that the obtained gesture information corresponds to the preset scaling instruction, it performs step S203; otherwise, it does no processing and ends this flow, or returns to step S201.
S203: if the gesture information corresponds to the preset scaling instruction, obtain the control region corresponding to the gesture information, where the control region belongs to the virtual reality display area.
When the terminal determines that the obtained gesture information corresponds to the preset scaling instruction, it determines, within the virtual reality display area, the control region corresponding to the gesture information.
The control region belongs to the virtual reality display area. The control region corresponding to the gesture information is the control region selected by the gesture; it may be the region where the fingers overlap the virtual reality display area, or the contact region in the virtual reality display area corresponding to the fingers, but is not limited thereto.
It can be understood that when the terminal obtains the control region corresponding to the gesture information, the terminal may move the focus of the virtual reality display area to this control region. The focus is used to identify the region or position currently selected by the gesture information.
Further, step S203 includes: if the gesture information corresponds to the preset scaling instruction, obtaining the control region corresponding to the gesture information, and obtaining the magnification value or reduction value corresponding to the gesture information.
For example, when the terminal determines that the obtained gesture information corresponds to the preset scaling instruction, it obtains the control region corresponding to the gesture information, and obtains the magnification value or reduction value corresponding to the gesture information.
Different magnification instructions may correspond to different magnification values, or to the same magnification value; different reduction instructions may correspond to different reduction values, or to the same reduction value. The magnification or reduction values corresponding to multiple pieces of gesture information may differ; alternatively, multiple pieces of gesture information may correspond to a preset fixed magnification value or a preset fixed reduction value.
S204: magnify or reduce the image displayed in the control region according to the gesture information.
The terminal magnifies or reduces, according to the gesture information, the image displayed in the control region corresponding to the gesture information.
When the gesture information corresponds to the preset magnification instruction, the image displayed in the control region corresponding to the gesture information is magnified; when it corresponds to the preset reduction instruction, that image is reduced.
The terminal may magnify or reduce the image displayed in the control region corresponding to the gesture information by a fixed multiple, or by the different multiples corresponding to the gesture information; this is not limited here.
The preset reduction instruction may include one instruction, at least two, or more; this is not limited.
Further, when multiple pieces of gesture information correspond to a preset fixed magnification value or a preset fixed reduction value, step S204 may include: if the gesture information corresponds to the preset magnification instruction, magnifying the image displayed in the control region by the preset fixed magnification value according to the gesture information; or, if the gesture information corresponds to the preset reduction instruction, reducing the image displayed in the control region by the preset fixed reduction value according to the gesture information.
For example, when the terminal determines that the obtained gesture information corresponds to the preset magnification instruction, the terminal magnifies the image displayed in the control region corresponding to the gesture information to the preset fixed multiple. The preset magnification instruction may include one instruction, at least two, or more; this is not limited.
When the terminal determines that the obtained gesture information corresponds to the preset reduction instruction, the terminal reduces the image displayed in the control region corresponding to the gesture information to the preset fixed multiple. The preset reduction instruction may likewise include one instruction, at least two, or more; this is not limited.
Further, when the magnification values or reduction values corresponding to multiple pieces of gesture information may differ, step S204 may include: magnifying the image displayed in the control region by the magnification value corresponding to the gesture information, or reducing the image displayed in the control region by the reduction value corresponding to the gesture information.
For example, when the magnification values or reduction values corresponding to multiple pieces of gesture information may differ, the terminal obtains the magnification value or reduction value corresponding to the current gesture information, and magnifies the image displayed in the control region by the magnification value corresponding to the gesture information, or reduces the image displayed in the control region by the corresponding reduction value.
For example, with two consecutive clicks in the control region corresponding to the gesture information, the magnification value identified by this gesture information is two, and the terminal magnifies the image displayed in the corresponding control region twofold. With three consecutive clicks in the control region corresponding to the gesture information, the magnification value identified is three, and the terminal magnifies the image displayed in the corresponding control region threefold. By analogy, with N consecutive clicks in the control region corresponding to the gesture information, the magnification value identified by the gesture information is N, and the terminal magnifies the image displayed in the corresponding control region N times. Here N is a positive integer whose value may be configured according to the actual situation and is not limited here.
With N consecutive pinch operations input in the control region corresponding to the gesture information, the reduction value identified by this gesture information is N, and the terminal reduces the image displayed in the corresponding control region N times.
It can be understood that when the user first clicks the control region corresponding to the gesture information, or first inputs a pinch operation in this control region, the terminal may move the focus of the virtual reality display area to this control region.
It can also be understood that when the terminal zooms the control region corresponding to the gesture information according to the gesture information, the area of the control region after magnification is N times the original, or the area of the control region after reduction is 1/N of the original.
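The N-click / N-pinch mapping above reduces to a small lookup. The sketch below is an illustrative reading of the example, with assumed gesture-type labels; whether the factor applies to area or linear size is left as in the text.

```python
def zoom_factor(gesture_type, n):
    """Map N consecutive inputs in the control region to a zoom factor.

    Per the example above: N consecutive clicks identify a magnification
    value of N, and N consecutive pinch operations identify a reduction
    value of N (i.e. the region shrinks to 1/N).
    """
    if not isinstance(n, int) or n < 1:
        raise ValueError("N must be a positive integer")
    if gesture_type == "click":
        return float(n)       # magnified N times
    if gesture_type == "pinch":
        return 1.0 / n        # reduced to 1/N of the original
    raise ValueError("unknown gesture type")
```

Two clicks thus yield a factor of 2, three clicks a factor of 3, and four pinches a factor of 1/4, matching the worked example in the description.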
S205: magnify or reduce, according to the gesture information, the image displayed in the virtual reality display area adjacent to the control region.
When the terminal obtains the magnification value or reduction value corresponding to the gesture information, it magnifies the image displayed in the virtual reality display area adjacent to the control region according to the magnification value, or reduces the image displayed in the virtual reality display area adjacent to the control region according to the reduction value. The area of the virtual reality display area adjacent to the control region may be determined according to the center of the control region and the total area of the virtual reality display area, and is not limited here.
Since the display size of the virtual reality display area is fixed, the scene after magnification can display only a 1/N-sized portion of the control region corresponding to the gesture information.
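The fixed-display-size constraint can be illustrated with a simple centered-crop model: after an N-times magnification, only a 1/N-sized portion of the control region still fits on screen. The rectangle representation and the centered crop are assumptions made for illustration, not details from the patent.

```python
def visible_source_rect(region, n):
    """Return the portion of the control region still visible after an
    N-times magnification, assuming a centered crop.

    region: (x, y, w, h) rectangle of the control region; with a fixed
    display size, only a 1/N-sized (per axis) portion remains visible.
    """
    x, y, w, h = region
    new_w, new_h = w / n, h / n
    cx, cy = x + w / 2, y + h / 2        # keep the crop centered
    return (cx - new_w / 2, cy - new_h / 2, new_w, new_h)
```

For a 100x100 control region magnified twofold, the visible source shrinks to the central 50x50 portion; what spills off screen comes out of the adjacent display area described in S205.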
In the above scheme, the terminal obtains the gesture information input in the virtual reality display area; judges whether the gesture information corresponds to a preset scaling instruction; if it does, obtains the control region corresponding to the gesture information; and magnifies or reduces the image displayed in the control region according to the gesture information. Fine control of the virtual reality display area can thereby be achieved by gesture, zooming the image displayed in the region of the virtual reality display area corresponding to the gesture information.
The terminal may zoom the image displayed in the control region corresponding to the gesture information by a preset fixed scaling multiple, or by the scaling multiple corresponding to each piece of gesture information; it may also zoom the image displayed in the region adjacent to the control region corresponding to the gesture information, so that the image displayed in the virtual reality display area can be controlled flexibly.
See Fig. 3, a schematic block diagram of a terminal provided by an embodiment of the present invention. The terminal may be a mobile terminal such as a mobile phone or tablet computer, but is not limited thereto; it may also be another terminal, which is not limited here. The modules included in the terminal 300 of this embodiment are configured to perform the steps in the embodiment corresponding to Fig. 1; for details, refer to Fig. 1 and the related description of that embodiment, which is not repeated here. The terminal of this embodiment includes: a first acquiring unit 310, a judging unit 320, a second acquiring unit 330 and a control unit 340.
The first acquiring unit 310 is configured to obtain the gesture information input in the virtual reality display area.
For example, the first acquiring unit 310 obtains the gesture information input in the virtual reality display area and sends the gesture information to the judging unit 320.
The judging unit 320 is configured to receive the gesture information sent by the first acquiring unit 310 and to judge whether the gesture information corresponds to a preset scaling instruction, where the preset scaling instruction includes a magnification instruction and a reduction instruction.
For example, the judging unit 320 receives the gesture information sent by the first acquiring unit 310 and judges whether the gesture information corresponds to the preset scaling instruction, which includes a magnification instruction and a reduction instruction.
The judging unit 320 sends the judgment result to the second acquiring unit 330.
The second acquiring unit 330 is configured to receive the judgment result sent by the judging unit 320 and, if the result is that the gesture information corresponds to the preset scaling instruction, to obtain the control region corresponding to the gesture information, where the control region belongs to the virtual reality display area.
For example, the second acquiring unit 330 receives the judgment result sent by the judging unit 320 and, if the result is that the gesture information corresponds to the preset scaling instruction, obtains the control region corresponding to the gesture information, the control region belonging to the virtual reality display area.
The second acquiring unit 330 sends the control-region information corresponding to the gesture information to the control unit 340.
The control unit 340 is configured to receive the control-region information corresponding to the gesture information sent by the second acquiring unit 330, and to magnify or reduce the image displayed in the control region according to the gesture information.
For example, the control unit 340 receives the control-region information corresponding to the gesture information sent by the second acquiring unit 330, and magnifies or reduces the image displayed in the control region according to the gesture information.
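The four units and their hand-offs can be sketched as one class whose methods mirror Fig. 3. This is an illustrative wiring only: the method bodies, the dictionary display model, and the 2× / 0.5× factors are assumptions, while the unit names follow the description.

```python
class Terminal:
    """Sketch of the terminal 300: four units chained as in Fig. 3."""

    def first_acquiring_unit(self, raw_input):
        # obtain the gesture information input in the VR display area
        return raw_input

    def judging_unit(self, gesture):
        # judge whether the gesture matches a preset scaling instruction
        return gesture.get("instruction") in ("zoom_in", "zoom_out")

    def second_acquiring_unit(self, gesture):
        # obtain the control region corresponding to the gesture
        return gesture["region"]

    def control_unit(self, display, region, instruction):
        # magnify or reduce the image displayed in the control region
        factor = 2.0 if instruction == "zoom_in" else 0.5
        display[region] = display.get(region, 1.0) * factor

    def handle(self, display, raw_input):
        gesture = self.first_acquiring_unit(raw_input)
        if not self.judging_unit(gesture):
            return False  # no preset scaling instruction: do nothing
        region = self.second_acquiring_unit(gesture)
        self.control_unit(display, region, gesture["instruction"])
        return True
```

Each method corresponds to one unit in the block diagram, and `handle` reproduces the send/receive chain 310 → 320 → 330 → 340.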
In the above scheme, the terminal obtains the gesture information input in the virtual reality display area; judges whether the gesture information corresponds to a preset scaling instruction; if it does, obtains the control region corresponding to the gesture information; and magnifies or reduces the image displayed in the control region according to the gesture information. Fine control of the virtual reality display area can thereby be achieved by gesture, zooming the image displayed in the region of the virtual reality display area corresponding to the gesture information.
Still referring to Fig. 3, in another embodiment the modules of the terminal 300 perform the steps of the embodiment corresponding to Fig. 2; for details, refer to the related description of Fig. 2 and its embodiment, which is not repeated here. Specifically:
The first acquisition unit 310 is configured to obtain gesture information input in the virtual reality display area.
For example, the first acquisition unit 310 obtains the gesture information input in the virtual reality display area and sends it to the judging unit 320.
The judging unit 320 is configured to receive the gesture information sent by the first acquisition unit 310 and to judge whether the gesture information corresponds to a preset zoom instruction, where the preset zoom instruction includes a zoom-in instruction and a zoom-out instruction.
For example, the judging unit 320 receives the gesture information sent by the first acquisition unit 310 and judges whether it corresponds to the preset zoom instruction, where the preset zoom instruction includes a zoom-in instruction and a zoom-out instruction.
The judging unit 320 sends the judgment result to the second acquisition unit 330.
The second acquisition unit 330 is configured to receive the judgment result sent by the judging unit 320 and, if the judgment result indicates that the gesture information corresponds to the preset zoom instruction, to obtain the control area corresponding to the gesture information; the control area belongs to the virtual reality display area.
For example, the second acquisition unit 330 receives the judgment result sent by the judging unit 320; if the gesture information corresponds to the preset zoom instruction, it obtains the control area corresponding to the gesture information, where the control area belongs to the virtual reality display area.
Further, the second acquisition unit 330 is configured to obtain the control area corresponding to the gesture information, and to obtain the magnification value or reduction value corresponding to the gesture information; the magnification value or reduction value either differs from gesture to gesture, or the gesture information corresponds to a preset fixed magnification value or a preset fixed reduction value.
For example, the second acquisition unit 330 obtains the control area corresponding to the gesture information, and obtains the magnification value or reduction value corresponding to the gesture information, where that value either differs per gesture or is a preset fixed magnification or reduction value.
The second acquisition unit 330 sends the control area information corresponding to the gesture information to the control unit 340.
The control unit 340 is configured to receive the control area information corresponding to the gesture information sent by the second acquisition unit 330, and to zoom in or out the image displayed in the control area according to the gesture information.
For example, the control unit 340 receives the control area information corresponding to the gesture information sent by the second acquisition unit 330 and zooms in or out the image displayed in the control area according to the gesture information.
Further, the control unit 340 is also configured to zoom in or out, according to the gesture information, the image displayed in the virtual reality display area adjacent to the control area.
For example, the control unit 340 zooms in or out, according to the gesture information, the image displayed in the virtual reality display area adjacent to the control area.
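As one possible illustration of zooming the areas adjacent to the control area (the grid layout and the `scale` dictionary entry are assumptions, not part of the disclosed design), the display area could be modeled as a grid of regions and the same factor applied to the control area and its edge-adjacent neighbors:

```python
def adjacent_regions(grid, row, col):
    """Return the regions sharing an edge with the control area at
    (row, col) — a simplified stand-in for 'the virtual reality display
    area adjacent to the control area'."""
    rows, cols = len(grid), len(grid[0])
    neighbors = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < rows and 0 <= c < cols:
            neighbors.append(grid[r][c])
    return neighbors

def zoom_with_neighbors(grid, row, col, factor):
    """Scale the control area and its edge-adjacent regions by the same
    factor; each region is a dict carrying a 'scale' entry."""
    for region in [grid[row][col]] + adjacent_regions(grid, row, col):
        region["scale"] *= factor
```

On a 3×3 grid, zooming the center region also scales the four edge-adjacent regions while leaving the diagonal corners unchanged.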
Further, when the magnification value or reduction value corresponding to the gesture information differs per gesture, the control unit 340 zooms in the image displayed in the control area by the magnification value corresponding to the gesture information, or zooms out the image displayed in the control area by the reduction value corresponding to the gesture information.
For example, when the magnification value or reduction value corresponding to the gesture information differs per gesture, the control unit 340 zooms in the image displayed in the control area by the corresponding magnification value, or zooms it out by the corresponding reduction value.
Further, when the gesture information corresponds to a preset fixed magnification value or a preset fixed reduction value: if the gesture information corresponds to the preset zoom-in instruction, the control unit 340 zooms in the image displayed in the control area by the preset fixed magnification value according to the gesture information; or, if the gesture information corresponds to the preset zoom-out instruction, it zooms out the image displayed in the control area by the preset fixed reduction value according to the gesture information.
For example, when the gesture information corresponds to a preset fixed magnification value or a preset fixed reduction value, the control unit 340 zooms in the image displayed in the control area by the preset fixed magnification value if the gesture corresponds to the preset zoom-in instruction, or zooms it out by the preset fixed reduction value if the gesture corresponds to the preset zoom-out instruction.
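The two factor policies just described — a value specific to each gesture versus a preset fixed value keyed only by zoom direction — can be sketched as a small lookup. All gesture names and numeric factors below are illustrative assumptions, not values disclosed by the embodiment:

```python
# Per-gesture factors: each gesture carries its own magnification/reduction value.
PER_GESTURE_FACTOR = {
    "pinch_open_small": 1.2,
    "pinch_open_large": 2.0,
    "pinch_close_small": 1 / 1.2,
    "pinch_close_large": 0.5,
}

FIXED_ZOOM_IN = 1.5      # preset fixed magnification value
FIXED_ZOOM_OUT = 1 / 1.5 # preset fixed reduction value

def zoom_factor(gesture: str, use_fixed: bool) -> float:
    """Choose the scale factor: either the preset fixed value selected only
    by zoom direction, or the value specific to this particular gesture."""
    if use_fixed:
        # Only the direction matters; the magnitude is always the preset one.
        return FIXED_ZOOM_IN if gesture.startswith("pinch_open") else FIXED_ZOOM_OUT
    # Each gesture maps to its own factor.
    return PER_GESTURE_FACTOR[gesture]
```

Under the fixed policy both pinch-open gestures yield 1.5×, while under the per-gesture policy a large pinch-open yields 2.0× and a small one 1.2×.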
With the above scheme, the terminal obtains gesture information input in the virtual reality display area; judges whether the gesture information corresponds to a preset zoom instruction; if it does, obtains the control area corresponding to the gesture information; and zooms in or out the image displayed in the control area according to the gesture information. The terminal can thus exercise fine-grained gesture control over the virtual reality display area, zooming in or out the image displayed in the part of the virtual reality display area that corresponds to the gesture information.
The terminal can zoom the image displayed in the control area corresponding to the gesture information by a preset fixed zoom factor, or by a zoom factor specific to each gesture, and can also zoom the image displayed in the region adjacent to that control area, enabling flexible control of the image displayed in the virtual reality display area.
Referring to Fig. 4, which is a schematic block diagram of a terminal provided by another embodiment of the present invention, the terminal 400 in this embodiment may include one or more processors 410, one or more input devices 420, one or more output devices 430, and a memory 440. The processor 410, input device 420, output device 430, and memory 440 are connected by a bus 450.
The memory 440 is used to store program instructions.
The processor 410 performs the following operations according to the program instructions stored in the memory 440:
The processor 410 is configured to obtain gesture information input in the virtual reality display area.
The processor 410 is further configured to judge whether the gesture information corresponds to a preset zoom instruction, where the preset zoom instruction includes a zoom-in instruction and a zoom-out instruction.
The processor 410 is further configured to obtain, if the gesture information corresponds to the preset zoom instruction, the control area corresponding to the gesture information.
The processor 410 is further configured to zoom in or out the image displayed in the control area according to the gesture information.
Further, the processor 410 is also configured to zoom in or out, according to the gesture information, the image displayed in the virtual reality display area adjacent to the control area.
Further, the processor 410 is configured to obtain the control area corresponding to the gesture information, and to obtain the magnification value or reduction value corresponding to the gesture information.
Further, when the magnification value or reduction value corresponding to the gesture information differs per gesture, the processor 410 is specifically configured to zoom in the image displayed in the control area by the magnification value corresponding to the gesture information, or to zoom out the image displayed in the control area by the reduction value corresponding to the gesture information.
Further, when the gesture information corresponds to a preset fixed magnification value or a preset fixed reduction value: if the gesture information corresponds to the preset zoom-in instruction, the processor 410 zooms in the image displayed in the control area by the preset fixed magnification value according to the gesture information; or, if the gesture information corresponds to the preset zoom-out instruction, it zooms out the image displayed in the control area by the preset fixed reduction value according to the gesture information.
With the above scheme, the terminal obtains gesture information input in the virtual reality display area; judges whether the gesture information corresponds to a preset zoom instruction; if it does, obtains the control area corresponding to the gesture information; and zooms in or out the image displayed in the control area according to the gesture information. The terminal can thus exercise fine-grained gesture control over the virtual reality display area, zooming in or out the image displayed in the part of the virtual reality display area that corresponds to the gesture information.
The terminal can zoom the image displayed in the control area corresponding to the gesture information by a preset fixed zoom factor, or by a zoom factor specific to each gesture, and can also zoom the image displayed in the region adjacent to that control area, enabling flexible control of the image displayed in the virtual reality display area.
It should be understood that in the embodiments of the present invention the processor 410 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 420 may include a trackpad, a fingerprint sensor (for collecting the user's fingerprint information and fingerprint orientation information), a microphone, and the like; the output device 430 may include a display (such as an LCD), a speaker, and the like.
The memory 440 may include read-only memory and random access memory, and provides instructions and data to the processor 410. A part of the memory 440 may also include non-volatile random access memory; for example, the memory 440 may also store device-type information.
In a specific implementation, the processor 410, input device 420, and output device 430 described in the embodiments of the present invention may execute the implementations described in the first and second embodiments of the display method for a virtual reality image provided by the embodiments of the present invention, and may also execute the implementation of the terminal described in the embodiments of the present invention, which is not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the terminal and units described above, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may also be an electrical, mechanical, or other form of connection.
The steps in the methods of the embodiments of the present invention may be reordered, combined, or deleted according to actual needs.
The units in the terminal of the embodiments of the present invention may be combined, divided, or deleted according to actual needs.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention — in essence, or the part contributing to the prior art, or all or part of the technical solution — may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A display method for a virtual reality image, characterized in that the method comprises:
obtaining gesture information input in a virtual reality display area;
judging whether the gesture information corresponds to a preset zoom instruction, wherein the preset zoom instruction comprises a zoom-in instruction and a zoom-out instruction;
if the gesture information corresponds to the preset zoom instruction, obtaining a control area corresponding to the gesture information, wherein the control area belongs to the virtual reality display area; and
zooming in or out an image displayed in the control area according to the gesture information.
2. The method according to claim 1, characterized in that the method further comprises: zooming in or out, according to the gesture information, an image displayed in a virtual reality display area adjacent to the control area.
3. The method according to claim 1 or 2, characterized in that, if the gesture information corresponds to the preset zoom instruction, obtaining the control area corresponding to the gesture information comprises:
obtaining the control area corresponding to the gesture information, and obtaining a magnification value or a reduction value corresponding to the gesture information.
4. The method according to claim 3, characterized in that the magnification value or reduction value corresponding to the gesture information differs per gesture; and
zooming in or out the image displayed in the control area according to the gesture information comprises:
zooming in the image displayed in the control area by the magnification value corresponding to the gesture information, or zooming out the image displayed in the control area by the reduction value corresponding to the gesture information.
5. The method according to claim 3, characterized in that the gesture information corresponds to a preset fixed magnification value or a preset fixed reduction value; and
zooming in or out the image displayed in the control area according to the gesture information comprises:
if the gesture information corresponds to the preset zoom-in instruction, zooming in the image displayed in the control area by the preset fixed magnification value according to the gesture information; or
if the gesture information corresponds to the preset zoom-out instruction, zooming out the image displayed in the control area by the preset fixed reduction value according to the gesture information.
6. A terminal, characterized in that the terminal comprises:
a first acquisition unit, configured to obtain gesture information input in a virtual reality display area;
a judging unit, configured to judge whether the gesture information corresponds to a preset zoom instruction, wherein the preset zoom instruction comprises a zoom-in instruction and a zoom-out instruction;
a second acquisition unit, configured to obtain, if the gesture information corresponds to the preset zoom instruction, a control area corresponding to the gesture information, wherein the control area belongs to the virtual reality display area; and
a control unit, configured to zoom in or out an image displayed in the control area according to the gesture information.
7. The terminal according to claim 6, characterized in that the control unit is further configured to zoom in or out, according to the gesture information, an image displayed in a virtual reality display area adjacent to the control area.
8. The terminal according to claim 6 or 7, characterized in that the second acquisition unit is configured to obtain the control area corresponding to the gesture information, and to obtain a magnification value or a reduction value corresponding to the gesture information.
9. The terminal according to claim 8, characterized in that the magnification value or reduction value corresponding to the gesture information differs per gesture; and
the control unit is configured to zoom in the image displayed in the control area by the magnification value corresponding to the gesture information, or to zoom out the image displayed in the control area by the reduction value corresponding to the gesture information.
10. The terminal according to claim 8, characterized in that the gesture information corresponds to a preset fixed magnification value or a preset fixed reduction value; and
the control unit is configured to, if the gesture information corresponds to the preset zoom-in instruction, zoom in the image displayed in the control area by the preset fixed magnification value according to the gesture information; or, if the gesture information corresponds to the preset zoom-out instruction, zoom out the image displayed in the control area by the preset fixed reduction value according to the gesture information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610575613.7A CN106249879A (en) | 2016-07-19 | 2016-07-19 | The display packing of a kind of virtual reality image and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610575613.7A CN106249879A (en) | 2016-07-19 | 2016-07-19 | The display packing of a kind of virtual reality image and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106249879A true CN106249879A (en) | 2016-12-21 |
Family
ID=57614026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610575613.7A Withdrawn CN106249879A (en) | 2016-07-19 | 2016-07-19 | The display packing of a kind of virtual reality image and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106249879A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106873783A (en) * | 2017-03-29 | 2017-06-20 | 联想(北京)有限公司 | Information processing method, electronic equipment and input unit |
CN107479712A (en) * | 2017-08-18 | 2017-12-15 | 北京小米移动软件有限公司 | information processing method and device based on head-mounted display apparatus |
CN108563335A (en) * | 2018-04-24 | 2018-09-21 | 网易(杭州)网络有限公司 | Virtual reality exchange method, device, storage medium and electronic equipment |
CN109511004A (en) * | 2017-09-14 | 2019-03-22 | 中兴通讯股份有限公司 | A kind of method for processing video frequency and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102945078A (en) * | 2012-11-13 | 2013-02-27 | 深圳先进技术研究院 | Human-computer interaction equipment and human-computer interaction method |
CN103942053A (en) * | 2014-04-17 | 2014-07-23 | 北京航空航天大学 | Three-dimensional model gesture touch browsing interaction method based on mobile terminal |
CN105190480A (en) * | 2013-05-09 | 2015-12-23 | 索尼电脑娱乐公司 | Information processing device and information processing method |
- 2016
  - 2016-07-19 CN CN201610575613.7A patent/CN106249879A/en not_active Withdrawn
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102945078A (en) * | 2012-11-13 | 2013-02-27 | 深圳先进技术研究院 | Human-computer interaction equipment and human-computer interaction method |
CN105190480A (en) * | 2013-05-09 | 2015-12-23 | 索尼电脑娱乐公司 | Information processing device and information processing method |
CN103942053A (en) * | 2014-04-17 | 2014-07-23 | 北京航空航天大学 | Three-dimensional model gesture touch browsing interaction method based on mobile terminal |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106873783A (en) * | 2017-03-29 | 2017-06-20 | 联想(北京)有限公司 | Information processing method, electronic equipment and input unit |
CN107479712A (en) * | 2017-08-18 | 2017-12-15 | 北京小米移动软件有限公司 | information processing method and device based on head-mounted display apparatus |
CN107479712B (en) * | 2017-08-18 | 2020-08-04 | 北京小米移动软件有限公司 | Information processing method and device based on head-mounted display equipment |
CN109511004A (en) * | 2017-09-14 | 2019-03-22 | 中兴通讯股份有限公司 | A kind of method for processing video frequency and device |
CN109511004B (en) * | 2017-09-14 | 2023-09-01 | 中兴通讯股份有限公司 | Video processing method and device |
CN108563335A (en) * | 2018-04-24 | 2018-09-21 | 网易(杭州)网络有限公司 | Virtual reality exchange method, device, storage medium and electronic equipment |
CN108563335B (en) * | 2018-04-24 | 2021-03-23 | 网易(杭州)网络有限公司 | Virtual reality interaction method and device, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10409366B2 (en) | Method and apparatus for controlling display of digital content using eye movement | |
CN106575203B (en) | Hover-based interaction with rendered content | |
US10503255B2 (en) | Haptic feedback assisted text manipulation | |
US20130215018A1 (en) | Touch position locating method, text selecting method, device, and electronic equipment | |
US20130002706A1 (en) | Method and apparatus for customizing a display screen of a user interface | |
US20110248939A1 (en) | Apparatus and method for sensing touch | |
CN106293395A (en) | A kind of virtual reality glasses and interface alternation method thereof | |
CN105745612B (en) | For showing the readjustment size technology of content | |
US8780059B2 (en) | User interface | |
MX2014002955A (en) | Formula entry for limited display devices. | |
KR20140078629A (en) | User interface for editing a value in place | |
US11204653B2 (en) | Method and device for handling event invocation using a stylus pen | |
CN106249879A (en) | The display packing of a kind of virtual reality image and terminal | |
US20140181737A1 (en) | Method for processing contents and electronic device thereof | |
CN106294549A (en) | A kind of image processing method and terminal | |
US20160124618A1 (en) | Managing content displayed on a touch screen enabled device | |
CN113268182A (en) | Application icon management method and electronic equipment | |
CN109271027B (en) | Page control method and device and electronic equipment | |
CN108228024A (en) | A kind of method of application control, terminal and computer-readable medium | |
CN106201222A (en) | The display packing of a kind of virtual reality interface and terminal | |
CN106155554A (en) | A kind of multi-screen display method and terminal | |
CN108491152A (en) | Touch screen terminal control method, terminal and medium based on virtual cursor | |
CN106227752A (en) | A kind of photograph sharing method and terminal | |
US20200341608A1 (en) | Method of panning image | |
KR101840196B1 (en) | Mobile terminal and method for controlling thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20161221 |