WO2015105879A1 - Drag and drop user interface for purchasing media content


Info

Publication number
WO2015105879A1
WO2015105879A1 (PCT/US2015/010487)
Authority
WO
WIPO (PCT)
Prior art keywords
content item
type
content
user interface
input data
Prior art date
Application number
PCT/US2015/010487
Other languages
French (fr)
Inventor
Juan M. NOGUEROL
Shaun Kohei WESTBROOK
Original Assignee
Thomson Licensing
Priority date
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2015105879A1 publication Critical patent/WO2015105879A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/08 Payment architectures
    • G06Q 20/12 Payment architectures specially adapted for electronic shopping systems
    • G06Q 20/123 Shopping for digital content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486 Drag-and-drop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/47815 Electronic shopping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8549 Creating video summaries, e.g. movie trailer

Definitions

  • the present disclosure generally relates to selecting content items that are displayed in a user interface. More specifically, exemplary embodiments of the present disclosure provide a system and method for displaying a user interface that detects input from a user to enable the user to purchase and access various forms of media content.
  • Entertainment systems including televisions, media centers, set top boxes, and other multimedia devices enable content to be purchased and delivered to an end user in a variety of forms and in a variety of manners.
  • content can now be streamed to a television, a computing device, and the like directly from a content provider.
  • the delivered content can include video content such as movies, television programs and so on.
  • Exemplary embodiments of the present disclosure provide a method for selecting media content from a media content provider using a user interface.
  • the user interface presents a number of content items.
  • a first type of input data is received which causes a selection mechanism to move among the plurality of displayed content items.
  • a second type of input data which is different from the first type of input data, is subsequently received.
  • the second type of input data causes the selection mechanism to select a particular content item from the plurality of displayed content items.
  • a third type of input data is received.
  • the third type of input data is used to drag or move the content item from a first location on the user interface to a second location on the user interface.
  • the third type of input data is a combination of the first type of input data and the second type of input data and enables the content item to subsequently be viewed.
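For illustration only, the three input types described above can be sketched as a small state holder. The class, item grid, and coordinate handling below are assumptions made for this sketch, not the implementation disclosed in the application:

```python
# Hypothetical sketch: first input navigates, second input selects ("grabs"),
# and the third input (a combination of the first two) drags the selected item.

class ContentGrid:
    """Tracks the highlighted, selected, and dragged content item."""

    def __init__(self, items):
        self.items = list(items)
        self.highlighted = 0      # index moved by the first input type
        self.selected = None      # item chosen by the second input type
        self.position = (0, 0)    # assumed on-screen location of a dragged item

    def move_selection(self, step):
        """First input type: move the selection mechanism among the items."""
        self.highlighted = (self.highlighted + step) % len(self.items)

    def select(self):
        """Second input type: select the highlighted item (e.g. a 'grab')."""
        self.selected = self.items[self.highlighted]

    def drag(self, dx, dy):
        """Third input type: movement while a selection is held drags the
        item from one location on the interface to another."""
        if self.selected is None:
            raise RuntimeError("drag requires a prior selection")
        x, y = self.position
        self.position = (x + dx, y + dy)

grid = ContentGrid(["Movie A", "Movie B", "Movie C"])
grid.move_selection(2)   # first input type: navigate to the third item
grid.select()            # second input type: grab "Movie C"
grid.drag(120, 80)       # third input type: move it on the user interface
```

The point of the sketch is the separation of concerns: navigation and selection are independent inputs, and the drag is only valid while a selection is held.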
  • a computing device having a processor and a memory.
  • the computing device stores instructions which, when executed by the processor, perform a method for selecting media content from a media content provider. More specifically, the computing device is configured to display a user interface having a plurality of content items. One of the content items is selected from the plurality of content items when a first type of input is received. In response to receiving the first type of input, the user interface is divided into a plurality of regions/quadrants. The selected content item can then be moved to a first region of the plurality of regions in response to a second type of received input. Playback of the content item is then initiated when the content item is in the first region for a first predetermined amount of time.
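The dwell-triggered playback described above can be sketched as a simple timing check; the two-second threshold and the use of explicit timestamps are assumptions for demonstration:

```python
# Minimal sketch: playback begins only once the dragged content item has
# remained in the target region for a predetermined amount of time.

def should_start_playback(entered_at, now, dwell_threshold=2.0):
    """Return True once the item has stayed in the region for at least
    `dwell_threshold` seconds (all times in seconds)."""
    return (now - entered_at) >= dwell_threshold

# Assume the item was dropped into the region at t = 10.0 s:
assert not should_start_playback(entered_at=10.0, now=11.0)   # too soon
assert should_start_playback(entered_at=10.0, now=12.5)       # dwell met
```

In a real interface the `entered_at` timestamp would be reset whenever the item leaves the region, so only an uninterrupted dwell triggers playback.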
  • a non-transitory tangible computer-readable storage medium includes computer executable instructions which, when executed by at least one processor, perform a method for selecting media content from a media content provider.
  • a user interface having a plurality of content items is displayed.
  • a first type of input data is received.
  • the first type of input data causes a selection mechanism to move among the plurality of displayed content items.
  • a second type of input data is then received.
  • the second type of input data causes the selection mechanism to select a content item from the plurality of content items.
  • the user interface is divided into a first plurality of regions/quadrants.
  • each region of the first plurality of regions enables a different type of playback option for the content item.
  • a third type of input data is then received.
  • the third type of input data is used to drag the content item to a first region of the first plurality of regions.
  • the third type of input data is a combination of the first type of input data and the second type of input data.
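The region/quadrant division described above can be sketched as a hit test on the drop point. The four playback-option labels and the screen geometry are illustrative assumptions, not options named in the application:

```python
# Hypothetical sketch: divide the interface into four quadrants, each
# enabling a different playback option, and resolve which quadrant a
# dragged content item was dropped into.

PLAYBACK_OPTIONS = ["rent", "buy", "preview", "watchlist"]  # assumed labels

def quadrant_for(x, y, width, height):
    """Map a drop point to a quadrant index:
    0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right."""
    col = 0 if x < width / 2 else 1
    row = 0 if y < height / 2 else 1
    return row * 2 + col

def option_for_drop(x, y, width=1920, height=1080):
    """Return the playback option enabled by the region the item lands in."""
    return PLAYBACK_OPTIONS[quadrant_for(x, y, width, height)]

# Dropping the item in the upper-left quadrant selects the first option:
assert option_for_drop(100, 100) == "rent"
assert option_for_drop(1800, 900) == "watchlist"
```

The same lookup generalizes to any number of regions by replacing the quadrant math with per-region bounding boxes.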
  • FIG. 1 illustrates a block diagram of an exemplary system for delivering video content according to one or more embodiments of the present disclosure
  • FIG. 2 is a block diagram of an exemplary receiving device according to one or more embodiments of the present disclosure
  • FIG. 3 illustrates an exemplary input device according to one or more embodiments of the present disclosure
  • FIG. 4 illustrates another exemplary input device according to one or more embodiments of the present disclosure
  • FIG. 5 illustrates an exemplary user interface for providing content to a user according to one or more embodiments of the present disclosure
  • FIG. 6 illustrates the exemplary user interface of FIG. 5 in which a particular content item in the user interface has been selected according to one or more embodiments of the present disclosure
  • FIG. 7 and FIG. 8 illustrate the exemplary user interface of FIG. 5 that presents various purchasing options for the selected content item according to one or more embodiments of the present disclosure
  • FIG. 9 illustrates the exemplary user interface of FIG. 8 that presents various viewing options for the selected content item according to one or more embodiments of the present disclosure
  • FIG. 10 illustrates a method for selecting a content item according to one or more embodiments of the present disclosure.
  • a user interface for selecting content is provided. More specifically, the user interface of the present disclosure enables a user of an electronic device on which the user interface is displayed, to use various hand gestures, other types of actions, or selection mechanisms to scroll through and select content for purchase and ultimate display on the electronic device.
  • the user interface provides information regarding the content that is to be purchased as well as different purchasing options and viewing options. For example, when a user selects a particular content item, the user interface presents the user with a variety of purchasing options. Once the purchasing options have been displayed and ultimately selected, the user interface then presents the user with a variety of viewing options such as, for example, a resolution at which the content is to be displayed.
  • FIG. 1 illustrates a block diagram of an exemplary system 100 for delivering video content in accordance with one or more embodiments of the present disclosure.
  • content that is to be delivered to an end user, a computing device, a media content player, and the like originates from a content source 102.
  • the content source 102 can include a video server containing video content, an audio server containing audio content, a multimedia server containing video/audio content, a video game server, a movie studio, a production house, and the like.
  • the content can be supplied in a variety of forms.
  • the content can be in the form of broadcast content.
  • the broadcast content can be provided to a broadcast affiliate manager 104 which collects and stores the content from the content source 102.
  • the broadcast affiliate manager 104 can also schedule delivery of the content over a delivery network such as, for example, delivery network 1 (106).
  • exemplary broadcast affiliate managers 104 can include various national broadcast services, such as the American Broadcasting Company (ABC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS) and so on.
  • the delivery network 1 (106) of the system 100 can include a satellite link transmission from a national center to one or more regional or local centers. Delivery network 1 (106) can also provide local content delivery using, for example, a local delivery system such as an over the air (OTA) broadcast, a satellite broadcast, or a cable broadcast.
  • the delivery network 1 (106) can provide content to a receiving device 108.
  • the receiving device 108 can be in a user's home and can enable the user to search through various types and forms of content.
  • the receiving device 108 can be a set top box, a digital video recorder (DVR), a gateway, a modem, a television, a gaming system, a computer, a mobile phone, a tablet computer and so on.
  • the receiving device 108 can act as an entry point, or gateway, for a home network system that includes additional devices configured as either client, server, or peer devices in the home network.
  • the special content can be delivered from the content source 102 to a content manager 110.
  • the content manager 110 can be a service provider, such as an Internet website, that is separate from, or affiliated with, a content provider, broadcast service, or delivery network service.
  • the content manager 110 can also incorporate Internet content 111 into the delivery system.
  • the content manager 110 can be configured to deliver the content to a receiving device 108 over delivery network 1 (106) or a separate delivery network, such as, for example, delivery network 2 (112).
  • Delivery network 2 (112) can include high-speed broadband Internet type communications systems.
  • content from the broadcast affiliate manager 104 can also be delivered using delivery network 2 (112).
  • content from the content manager 110 can be delivered using delivery network 1 (106).
  • the user can also obtain content directly from the Internet via delivery network 1 (106) and/or delivery network 2 (112) without necessarily having the content managed by the content manager 110.
  • the special content can be provided as an augmentation to the broadcast content. Accordingly, the special content can provide alternative displays as well as providing purchasing and merchandising options, enhancement materials and the like. In another exemplary embodiment, the special content can partially or completely replace some programming content provided as broadcast content. In yet another exemplary embodiment, the special content can be completely separate from the broadcast content, and can be an alternative that the user can choose to utilize. For instance, the special content can be a library of movies, previews, music, and other such content that is not yet available as broadcast content.
  • the system 100 also includes a receiving device 108.
  • the receiving device 108 can receive various types of content from one or both of delivery network 1 (106) and delivery network 2 (112).
  • the receiving device 108 utilizes a processor to process the received content, and separates the content based on user preferences and received commands.
  • the receiving device 108 can also include a storage device, such as a hard drive, solid state memory, volatile memory, nonvolatile memory, or an optical disk drive, for recording and playing back audio and video content. Further details of the operation of the receiving device 108 and features associated with playing back stored content will be described below in relation to FIG. 2.
  • the processed content is provided to a display device 114.
  • the display device 114 can be a 2-D type display or a 3-D display or other medium capable of showing content to a user.
  • the receiving device 108 can be configured to interface with a second screen such as a touch screen control device 116.
  • the touch screen control device 116 can be adapted to provide user control for the receiving device 108 and/or the display device 114.
  • the touch screen device 116 can also be capable of displaying or otherwise providing content.
  • although a touch screen device 116 is specifically shown and discussed, other input mediums can be used to control the receiving device. Examples include an image sensor, a voice sensor, a controller, and the like.
  • the content can be graphics entries, such as user interface entries, or can be a portion of the content, or the entirety of the content that is delivered to the display device 114.
  • the touch screen control device 116 can interface with the receiving device 108 using any signal transmission system, such as infrared (IR) or radio frequency (RF) communications.
  • Other examples include standard protocols such as the Infrared Data Association (IrDA) standard, Wi-Fi, Bluetooth, and any other such communication protocols.
  • the system 100 can include a backend server 118 and a usage database 120.
  • the backend server 118 can include a personalization engine that analyzes the usage habits of a user and makes recommendations for content based on those usage habits.
  • the usage database 120 can be part of the backend server 118.
  • the backend server 118, as well as the usage database 120, can be connected to the system 100 and accessed through either delivery network 1 (106) and/or delivery network 2 (112).
  • the usage database 120 and backend server 118 can be part of or accessible by the receiving device 108.
  • the backend server 118 and usage database 120 can be part of or accessible by a local area network to which the receiving device 108 is connected.
  • FIG. 2 illustrates a block diagram of an exemplary receiving device 200 according to one or more embodiments of the present disclosure.
  • Receiving device 200 can be similar to the receiving device 108 described above with respect to FIG. 1.
  • the receiving device 200 can be part of a gateway device, modem, set-top box, or other similar communications device.
  • the receiving device 200 can be part of a video game system, television, computing device, DVD player or other electronic device and the like.
  • the receiving device 200 can also be incorporated into other systems including an audio device or any other display device.
  • the receiving device 200 can be a standalone device such as a set top box coupled to a display device, such as, for example, a television.
  • Device 200 can include an input signal receiver 202 that is configured to receive content.
  • the input signal receiver 202 can be used for receiving, demodulating, and decoding signals provided over one or more networks described above.
  • the input signal can be selected and retrieved by the input signal receiver 202 based on user input provided through one or more sensors, a control interface, or a touch panel interface 222.
  • the touch panel interface 222 can also be adapted to interface with a cellular phone, a tablet, a mouse, a remote control or the like.
  • the decoded output signal is provided to an input stream processor 204.
  • the input stream processor 204 performs signal selection and processing that includes separation of video content from audio content for a content stream.
  • the audio content is then provided to an audio processor 206 for conversion from the received format, such as, for example, from a compressed digital signal, to an analog waveform signal.
  • the analog waveform signal can then be provided to an audio interface 208 and further to the display device or audio amplifier.
  • the audio interface 208 can provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or alternate audio interface such as via a Sony/Philips Digital Interconnect Format (SPDIF).
  • the audio interface can also include amplifiers for driving one or more sets of speakers.
  • the audio processor 206 also performs any necessary conversion for the storage of the audio signals.
  • the video output from the input stream processor 204 can be provided to a video processor 210.
  • the video signal can be one of several formats.
  • the video processor 210 provides, as necessary, a conversion of the video content, based on the input signal format.
  • the video processor 210 can also perform any other conversions for the storage and/or playback of the video signals.
  • Device 200 can also include a storage device 212 configured to store the audio and video content output by the input stream processor 204.
  • the storage device 212 enables the audio and video content to be retrieved and output based on commands received from a controller 214.
  • the content can also be retrieved and output based on commands received from a user interface 216, the touch panel interface 222 and/or user gestures or audible commands received and/or sensed from a camera or other such sensor.
  • the storage device 212 can be a hard disk drive, volatile memory, non-volatile memory, one or more large capacity integrated electronic memories, such as static RAM (SRAM), or dynamic RAM (DRAM), or can be an interchangeable optical disk storage system such as a compact disk (CD) drive or digital video disk (DVD) drive.
  • the video processor 210 can provide the content to a display interface 218.
  • the display interface 218 provides the content to a display device such as described above.
  • the device 200 can also include a controller 214.
  • the controller 214 can be interconnected to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, and a user interface 216 via a bus.
  • the controller 214 is configured to manage the conversion process for converting the input stream signal into a signal that is suitable for storage on the storage device 212 or into a signal that is suitable for display on the user interface 216.
  • the controller 214 can also be configured to manage the retrieval and playback of stored content or content that is streamed from a content provider. Furthermore, the controller 214 can perform searching of content that is stored or that is to be delivered via the delivery networks.
  • controller 214 can be adapted to extract metadata, criteria, characteristics or the like from audio and video media by using audio processor 206 and video processor 210, respectively. That is, metadata, criteria, characteristics or the like that is contained in the vertical blanking interval, auxiliary data fields associated with video, or in other areas in the video signal can be harvested by using the video processor 210 with controller 214 to generate metadata that can be used for functions such as generating an electronic program guide having descriptive information about received video, supporting an auxiliary information service, and the like.
  • the audio processor 206 working with controller 214 can be adapted to recognize audio watermarks that can be in an audio signal.
  • Such audio watermarks can then be used to perform some action such as the recognition of the audio signal, provide security which identifies the source of an audio signal, or perform some other service.
  • metadata, criteria, characteristics, or the like to support the actions listed above can also come from a network source and be processed by controller 214.
  • FIG. 3 illustrates an exemplary input device 300 according to one or more embodiments of the present disclosure.
  • the input device 300 can be used as an input device for use with the systems and devices described above with respect to FIG. 1 and FIG. 2.
  • the input device 300 can enable a user to interact with a user interface, such as, for example, the user interface 550 described below with respect to FIG. 5.
  • the input device 300 can also be used to initiate and/or select various functions that are available to a user to enable the user to acquire, consume or otherwise access or modify multimedia content.
  • FIG. 3 illustrates an exemplary electronic device such as, for example, a tablet or touch panel input device.
  • input device 300 can be similar to touch screen device 116 shown and described with respect to FIG. 1.
  • the input device 300 can be combined with or communicate with the user interface 216 and/or touch panel interface 222 of the receiving device 200 shown and described with respect to FIG. 2.
  • the input device 300 can enable operation of a receiving device or set top box based on detected hand movements, gestures, and/or actions translated through the panel and/or by one or more sensors (e.g., a camera).
  • the commands sensed or received by the input device 300 can be used by the controller 214 (FIG. 2) to enable selection of a particular content item that is displayed in a user interface.
  • the input device 300 can be included as part of a remote control device 400 such as shown in FIG. 4.
  • the touch panel 300 can also include one or more sensors for detecting audio, video, and combinations thereof.
  • the input device 300 can be configured to sense various gestures from a user.
  • a touch screen of the input device 300 can enable a user to provide a number of different types of user interaction.
  • a configuration of sensors associated with the input device can enable a user to define various movements that are each associated with a specific command or operation. More specifically, the input device 300 can sense movement of a user's finger, fingers or hand either on the touch screen or by use of another sensor such as a camera. In another embodiment, the input device 300 can determine its own movement in a variety of directions and update the display of the user interface accordingly.
  • the input device 300 can use one or more sensors, such as a gyroscope, to determine two-dimensional motion and/or three dimensional motion of the input device 300.
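Translating sensed device motion into movement of the on-screen selection mechanism can be sketched as follows; the sensitivity scaling, screen size, and clamping behavior are illustrative assumptions rather than details from the disclosure:

```python
# Hypothetical sketch: convert motion deltas reported by a sensor such as a
# gyroscope into movement of the selection mechanism, clamped to the screen.

def update_cursor(cursor, delta, screen=(1920, 1080), sensitivity=10.0):
    """Move the cursor by a scaled motion delta, keeping it on screen."""
    x = min(max(cursor[0] + delta[0] * sensitivity, 0), screen[0])
    y = min(max(cursor[1] + delta[1] * sensitivity, 0), screen[1])
    return (x, y)

pos = (960, 540)                         # assumed starting position
pos = update_cursor(pos, (2.0, -1.5))    # device tilted right and up
# pos is now (980.0, 525.0)
```

Clamping keeps the selection mechanism within the user interface even for large or rapid device movements.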
  • the input device 300 can recognize alphanumeric input traces which can be converted into alphanumeric text that is displayable on the user interface or on the input device 300 itself.
  • FIG. 4 illustrates another exemplary input device 400 according to one or more embodiments of the present disclosure.
  • input device 400 can be used to interact with the various user interfaces of the present disclosure.
  • the input device 400 can be a conventional remote control having an alphanumerical key pad 404 and a navigation section 402.
  • the input device 400 can also include a set of function buttons 406 that, when selected, initiate a particular system function.
  • the input device 400 can also include a set of programmable application specific buttons 408 that, when selected, initiate a defined function associated with a particular application executed by the electronic device.
  • the input device 400 can also include a touch panel 410 that can operate in the manner described above with respect to FIG. 3. Although specific features and buttons of the input device 400 have been discussed, the input device 400 can be a controller with any number of buttons or functionalities. For example, the input device 400 can be a video game controller, wand, or any other such device capable of providing a signal to a user interface of an electronic device. Additionally, it should be noted that either or both of the input devices 300 and 400 depicted and described in FIG. 3 and FIG. 4 can be used substantially simultaneously and/or sequentially to interact with the user interface and/or systems described herein.
  • the input device 400 can include at least one of an audio sensor and/or a visual sensor.
  • the audio sensor can sense audible commands issued from a user and translate the audible commands into functions to be executed by the system.
  • the visual sensor can be used to sense the presence of a user and match user information of the sensed user to stored visual data in the usage database 120 (FIG. 1).
  • input device 400 can use a visual sensor and/or an audio sensor as a primary input mechanism; buttons are present in some embodiments and absent in others.
  • matching visual data sensed by the visual sensor can enable the system to recognize one or more users that are present and retrieve user profile information associated with the sensed user. In other exemplary embodiments, this can include content previously purchased by the user, content that the user is watching, settings of the user interface and the like.
  • the visual sensor can sense physical movements of a user and translate those movements into control commands for controlling the operation of the system or the user interface.
  • the system can have a set of pre-stored command gestures that, if sensed, enable a controller (e.g., controller 214 of FIG. 2) to execute a particular feature or function of the system.
  • An exemplary type of gesture command can include the user waving a hand in a rightward direction, which can initiate a fast-forward or next-screen command, or in a leftward direction, which can initiate a rewind or previous-screen command, depending on the current context.
  • the gesture can include closing an open hand to simulate "grabbing" a particular content item that is displayed or opening a closed hand to simulate a "release" of the held content item.
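A set of pre-stored command gestures like the one described above amounts to a lookup table from sensed gestures to system commands. The gesture names and command strings below are assumptions made for illustration:

```python
# Hypothetical sketch of a pre-stored gesture-to-command table: wave
# gestures navigate or scrub, while closing/opening the hand simulates
# "grabbing" and "releasing" a displayed content item.

GESTURE_COMMANDS = {
    ("wave", "right"): "fast_forward",   # or "next_screen", per context
    ("wave", "left"):  "rewind",         # or "previous_screen", per context
    ("hand", "close"): "grab_item",      # select the highlighted content item
    ("hand", "open"):  "release_item",   # deselect / drop the held item
}

def command_for(gesture, motion):
    """Return the command for a sensed gesture, or None if unrecognized."""
    return GESTURE_COMMANDS.get((gesture, motion))

assert command_for("hand", "close") == "grab_item"
assert command_for("wave", "up") is None   # unrecognized gestures are ignored
```

Keeping the table data-driven makes it straightforward to swap commands per context (e.g. fast-forward during playback versus next-screen while browsing).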
  • the input device 300 and input device 400 can enable a user to interact with the exemplary user interfaces described below.
  • the exemplary user interfaces can contain different types of content, such as multimedia content, that can be output on a display.
  • multimedia content refers to audio, video, and data that can be acquired or otherwise received and which can be at least one of output for display to a user and stored in a storage device for subsequent viewing.
  • FIG. 5 illustrates an exemplary user interface 550 for providing content, for example multimedia content, to a user according to one or more embodiments of the present disclosure.
  • the user interface 550 can be presented on a television, computer, tablet computer, phone, or other portable device and the like such as described above.
  • the user interface 550 can include a menu 510 that enables a user to select various types of media content.
  • the types of media content can include special content and normal content such as described above with respect to FIG. 1 . More specifically, the menu 510 can enable a user to select different types of content including movies, television shows, music, print media, and the like.
  • the menu item "Option 2" has been selected.
  • the user interface 550 displays various content items 520 in the form of movies.
  • the content items displayed in the user interface 550 can include various types of content items.
  • the content items displayed can include both movies and music.
  • the content items 520 can include both print media and movies.
  • the content items 520 can be divided into various categories 530.
  • the categories 530 can be based on recent or new releases, genre, actor, title, user preferences, content that is currently trending and so on.
  • the categories 530 that are displayed, or the content items 520 in each category can be based, at least in part, on a particular user that is using or accessing the user interface 550.
  • an image sensor associated with a device on which the user interface 550 is displayed can sense the presence of a particular user.
  • various settings, preferences, and content (e.g., previously purchased content, or content matching or related to the user's preferences) can then be presented in the user interface 550 for the sensed user.
  • FIG. 6 illustrates the exemplary user interface of FIG. 5 in which a particular content item 540 in the user interface 650 has been selected by a selection mechanism 600 according to one or more exemplary embodiments of the present disclosure.
  • a device on which the user interface 650 is displayed can include an image sensor or other input device that recognizes gestures and/or movements provided by a user of the device.
  • the input data can be provided by actuation of a button on a controller, movement of the controller, various touch patterns on a touch sensitive device and the like.
  • the various forms of input data described above provided by the user and sensed by the image sensor or the input device can be used to determine a selection of one or more of the displayed content items 520.
  • Selection mechanism 600 can be presented as a cursor, arrow, indicator, representation of a hand, representation of a body part and the like.
  • a selection mechanism 600 can be presented and moved around the user interface 650.
  • an image sensor associated with the device on which the user interface 650 is displayed can be configured to detect movement of a user's hand from a first location to a second location.
  • the user interface 650 can output a representation of a selection mechanism 600 on the user interface 650. Further, as the user's hand moves from the first location to the second location, the selection mechanism 600 can mirror the movement of the user's hand. In response to the movement of the selection mechanism 600, different content items 520 in the user interface 650 can be highlighted for selection or otherwise displayed in prominence on the user interface 650. For example, as shown in FIG. 6, the content item 540 labeled "Movie A" is highlighted or otherwise shown to be the selected content item. As the selection mechanism 600 moves between the content items 520, different content items 520 can be highlighted or otherwise be displayed in prominence on the user interface 650.
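The highlighting behavior described above reduces to a hit test: as the selection mechanism mirrors the user's hand, the content item whose on-screen bounds contain the cursor is highlighted. The item layout below is an assumption for illustration:

```python
# Hypothetical sketch: determine which displayed content item the selection
# mechanism is over, so it can be highlighted or shown in prominence.

def highlighted_item(cursor, items):
    """Return the title of the item whose rectangle contains the cursor,
    or None if the cursor is over empty space.

    `items` maps a title to its (x, y, width, height) screen bounds."""
    cx, cy = cursor
    for title, (x, y, w, h) in items.items():
        if x <= cx < x + w and y <= cy < y + h:
            return title
    return None

layout = {
    "Movie A": (0, 0, 300, 200),     # assumed tile positions and sizes
    "Movie B": (320, 0, 300, 200),
}
assert highlighted_item((150, 100), layout) == "Movie A"
assert highlighted_item((310, 100), layout) is None   # in the gutter
```

Running this test on every cursor update is what lets the interface highlight "Movie A" while the selection mechanism hovers over it and switch highlights as the hand moves on.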
  • a second type of input data can be provided to the device on which the user interface 650 is displayed to select the highlighted content item.
  • the input data can be based on a gesture provided by a user. For example, a user can close an open hand to simulate "grabbing" or picking up the highlighted content item.
  • a button on a controller associated with the user interface 650 can be actuated to select the highlighted content item 540.
  • a user can issue a voice command to select the highlighted content item 540.
  • the input data that causes the selection of the highlighted content item 540 can be any suitable movement, actuation of a button, audible command or combinations thereof.
  • the icon or graphic that represents the selection mechanism 600 can be updated accordingly.
  • the selection mechanism can be seen as an open hand as the user is navigating through the various content items 520 that are displayed on the user interface 650.
  • the image of selection mechanism 600 can be updated to show a closed hand, show the selected content item 540 in the hand and so on.
  • an image sensor associated with the device on which the user interface 650 is displayed can also detect alternate movements, gestures and the like that cause a selected or highlighted content item 540 to be deselected. For example, if a user closes an open hand to simulate a "grab," which causes the highlighted content item 540 to be selected, the user can open the hand to simulate a "release" or deselection of the highlighted content item 540. In other embodiments, the deselection of the selected content item can also occur based on other gestures, voice commands, touch input and combinations thereof.
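The open-hand/closed-hand behavior described above amounts to a small selection state machine. The following is a minimal illustrative sketch, not an implementation from the patent; the class and attribute names (`SelectionMechanism`, `icon`, and so on) are invented for the example:

```python
class SelectionMechanism:
    """Illustrative sketch of the grab/release selection behavior.

    An open hand moves the cursor and highlights items; closing the
    hand ("grab") selects the highlighted item; opening it again
    ("release") deselects it and restores the open-hand icon.
    """

    def __init__(self, items):
        self.items = items          # content items shown on the UI
        self.highlighted = None     # item currently under the cursor
        self.selected = None        # item currently "grabbed"
        self.icon = "open_hand"     # cursor graphic mirrors hand state

    def move_to(self, index):
        # First type of input: cursor movement highlights an item.
        self.highlighted = self.items[index]

    def grab(self):
        # Second type of input: closing the hand selects the
        # highlighted item and updates the cursor graphic.
        if self.highlighted is not None:
            self.selected = self.highlighted
            self.icon = "closed_hand"

    def release(self):
        # Opening the hand deselects and restores the open-hand icon.
        self.selected = None
        self.icon = "open_hand"
```

Under this sketch, moving the cursor over "Movie A" and closing the hand leaves `selected == "Movie A"` with a closed-hand icon, matching the behavior described for FIG. 6.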
  • FIG. 7 illustrates an exemplary user interface 750 that presents various purchasing options for the selected content item 560 according to one or more embodiments of the present disclosure.
  • the received input data can include a sensed gesture such as closing an open hand to simulate a "grab", a button actuation, a voice command or various combinations thereof.
  • the selected content item 560 can be presented on the user interface 750 in prominence with respect to the non-selected content items 520.
  • selection of a particular content item in the user interface 750 can cause the user interface to be divided into various regions/quadrants that present various purchasing options to the user.
  • in response to detecting that a content item 520 has been selected using a particular gesture, button actuation, voice command and the like, the user interface 750 presents different regions such as a "Buy" option 710, a "Trailer" option 720, a "Rent" option 730 and a "Cancel" option 740.
  • the "Buy" option 710 enables a user to purchase the selected content item 540
  • the "Trailer" option 720 enables a user to preview the selected content item 540
  • the "Rent" option 730 enables the user to rent the selected content item 540 for a period of time
  • the "Cancel" option 740 causes deselection of the selected content item 560.
  • although specific purchasing options are mentioned, other purchasing or viewing options can be presented in response to a particular sensed gesture, button actuation, voice command and so on.
  • a third type of input data can be received by the device on which the user interface 750 is displayed.
  • the third type of input data can be a combination of the first type of input data and the second type of input data. More specifically, the third type of input data can be a "drag" operation.
  • the selected content item 560 and/or the selection mechanism 600 can be moved to one of the regions such as the "Buy" option 710, the "Trailer" option 720, the "Rent" option 730 or the "Cancel" option 740. Movement of the selected content item 560 can be based on a detected movement of a user, input from a controller, input on a touch sensitive device such as a swipe or other gesture, a voice command and combinations thereof.
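Deciding which region a dragged item has been dropped into is a simple hit test over the quadrants of the interface. A hedged sketch follows; the quadrant layout (Buy top-left, Trailer top-right, Rent bottom-left, Cancel bottom-right) is assumed for illustration and is not specified by the text:

```python
def quadrant_for(x, y, width, height):
    """Return the purchasing option for a drop position (x, y)
    on a display of the given width and height.

    Assumed layout: Buy top-left, Trailer top-right,
    Rent bottom-left, Cancel bottom-right.
    """
    left = x < width / 2
    top = y < height / 2
    if top:
        return "Buy" if left else "Trailer"
    return "Rent" if left else "Cancel"
```

For example, on a 1920x1080 display a drop near the upper-left corner maps to the "Buy" region, while one near the lower-right corner maps to "Cancel".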
  • a timer 800 can be displayed on the user interface 850.
  • the timer 800 displays a predetermined or threshold amount of time until the selected purchasing option completes. For example, if the user drags the selected content item 560 to the "Buy" option 710, the timer 800 appears and starts a countdown. Once the countdown is complete, the user interface 850 can initiate playback of the selected content item 560.
  • the selected content item 560 can be stored for later viewing.
  • the timer 800 can reset or be removed from the user interface 850 when the selected content item 560 is removed from the region in which it has been placed.
  • the timer 800 can be removed from the display or reset in response to received input data such as, for example, a user opening a closed hand to simulate a "release" gesture.
  • the user interface 850 can return to a state in which a user can navigate through the various content items 520 as shown in FIG. 5.
  • FIG. 9 illustrates an exemplary user interface 950 that presents various viewing options for the selected content item 560 according to one or more embodiments of the present disclosure. More specifically, FIG. 9 illustrates a user interface 950 once the timer 800 of FIG. 8 has expired.
  • the user interface 950 can also be configured to display a second set of regions (quadrants) that correspond to various viewing options once a user has selected a desired purchasing option.
  • the viewing options can include a particular type of resolution for the selected content item 560.
  • the second set of regions can include a "Standard Definition" option 910, a "High Definition" option 920, a "4K" option 930 or a "Cancel" option 940.
  • a timer 900 can be displayed on the user interface 950. As with the timer 800 of FIG. 8, the timer 900 displays a predetermined or threshold amount of time that must lapse before the selected action occurs.
  • the timer 900 appears and starts a countdown. Once the countdown is complete, the user interface 950 can initiate playback of the selected content item 560 in the desired resolution.
  • the selected content item 560 can be stored for later viewing at the selected resolution. It should be noted that the selected viewing option is associated with the previously selected purchasing option such as described with respect to FIG. 8. As such, if the user selected the "Rent" option 730 and subsequently selected the "HD" option 920, the selected content item would be rented for a specified amount of time and would be presented in high definition when viewed.
  • the timer 900 can reset or be removed from the user interface 950 when the selected content item 560 is removed from the region in which it has been placed.
  • the timer 900 can be removed from the display or reset in response to received input data such as, for example, a user opening a closed hand to simulate a "release" gesture.
  • the user interface 950 can return to a state in which a user can navigate through the various content items 520.
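The viewing option chosen in FIG. 9 is combined with the purchasing option previously chosen in FIG. 8 (e.g., "Rent" plus "High Definition") to form a single transaction. A minimal sketch of that combination, with the function and dictionary shape invented for illustration:

```python
def build_transaction(purchase_option, viewing_option):
    """Combine the purchasing choice (FIG. 8) with the viewing
    choice (FIG. 9) into one order, e.g. Rent + High Definition.
    A "Cancel" in either step deselects and returns to browsing.
    """
    if purchase_option == "Cancel" or viewing_option == "Cancel":
        return None  # deselection: back to navigating content items
    return {"purchase": purchase_option, "resolution": viewing_option}
```

So renting a title and then choosing the "HD" option yields one order that rents the item for a specified time and presents it in high definition when viewed, as the text describes.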
  • FIG. 10 illustrates a method 1000 for selecting a content item from a user interface according to one or more embodiments of the present disclosure.
  • the method 1000 can be used with the exemplary user interfaces described above with respect to FIG. 5 - FIG. 9.
  • Method 1000 begins when input data is received 1010 that enables a user to navigate through various content items that are displayed on a user interface.
  • the content items can be video content, music content, written content and the like.
  • the received input data can be data that is sensed by one or more sensors associated with a device that is configured to display the user interface.
  • the input data can be a gesture, actuation of a button or directional pad on a controller, input on a touch sensitive device and the like such as described above.
  • the input data for selecting the content item can be a gesture from a user that is sensed by one or more sensors of a device that is displaying the user interface.
  • the gesture can be an action of a user's hand, actuation of a button on a controller, an audible command, input from a touch-sensitive device and the like.
  • the image of the selection mechanism can change based on the received input. For example, if the selection mechanism is a hand, the image of the hand can change from an open hand to a hand that is closed. In another example the hand can be shown to be holding the selected content item.
  • Operation 1030 provides that purchasing options are then displayed on the user interface.
  • the purchasing options can be shown in response to the input data that is received as described in operation 1020.
  • the purchasing options can be displayed in various portions of the user interface and can include an option to buy the selected content item, rent the selected content item or preview the selected content item.
  • the input data can include a hand gesture, arm motion, voice command, controller movement and so on.
  • a timer can be displayed on the user interface. The timer can show how many seconds must lapse before the indicated action takes place.
  • the user interface can display one or more viewing options for the selected content item.
  • the viewing options correspond to a resolution at which the content item is to be viewed.
  • Example viewing options include standard definition, high definition and so on.
  • a user can select 1060 one of the viewing options in the manner described above. More specifically, a user can cause input data to be received by the device that indicates where the selection mechanism should move.
  • a timer can be displayed on the user interface. The timer can show an amount of time that needs to lapse before the selection of the viewing option is complete.
  • the playback corresponds to the purchasing option that was selected in combination with the viewing option that was selected. For example, if the purchase option was a preview option and the viewing option that was selected was a high definition option, a preview of the selected content item is output on the user interface in high definition format.
  • the playback can start once the transaction is complete or can be started at a later time.
  • Embodiments of the present disclosure are described above with reference to block diagrams and operational illustrations of methods and the like.
  • the operations described can occur out of the order as shown in any of the figures. Additionally, one or more operations can be removed or executed substantially concurrently. For example, two blocks shown in succession can be executed substantially concurrently. Additionally, the blocks can be executed in the reverse order.
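The sequence of operations 1010-1070 described above can be summarized as a linear flow. The sketch below is illustrative only: the operation numbers come from FIG. 10, but the function signature and the callables standing in for each input-handling step are invented for the example:

```python
def method_1000(navigate, select, purchase_choice, viewing_choice, play):
    """Sketch of FIG. 10: navigate (1010), select (1020), choose a
    purchasing option (1030/1040), choose a viewing option
    (1050/1060), then initiate playback (1070).
    """
    item = navigate()                  # 1010: first type of input data
    select(item)                       # 1020: second type ("grab")
    purchase = purchase_choice(item)   # 1030/1040: drag to a region
    if purchase == "Cancel":
        return None                    # deselection; back to browsing
    resolution = viewing_choice(item)  # 1050/1060: pick a resolution
    return play(item, purchase, resolution)  # 1070: output playback
```

For instance, wiring the steps to stubs that navigate to "Movie A", rent it, and choose high definition returns a playback request for that title in that format, consistent with the rent-plus-HD example above.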

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A user interface used for purchasing or renting a content item is presented where the user interface displays a number of content items. A first type of input data 1010 is received which causes a selection mechanism of the user interface to move among the plurality of content items. A second type of input data can then be received 1020 that causes the selection mechanism to select a particular content item from the plurality of displayed content items. When the content item is selected, a third type of input data is received 1040. The third type of input data is used to drag or move the content item from a first location on the user interface to a second location on the user interface and subsequently initiates playback of the selected content item 1070.

Description

DRAG AND DROP USER INTERFACE FOR PURCHASING MEDIA CONTENT
REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application Serial No. 61/924,643 filed January 7, 2014 and U.S. Provisional Application Serial No. 62/013,897 filed June 18, 2014, which are incorporated by reference herein in their entirety.
TECHNICAL FIELD OF THE INVENTION
The present disclosure generally relates to selecting content items that are displayed in a user interface. More specifically, exemplary embodiments of the present disclosure provide a system and method for displaying a user interface that detects input from a user to enable the user to purchase and access various forms of media content.
BACKGROUND OF THE INVENTION
Entertainment systems, including televisions, media centers, set top boxes, and other multimedia devices enable content to be purchased and delivered to an end user in a variety of forms and in a variety of manners. For example, content can now be streamed directly to a television, computing device and the like directly from a content provider. The delivered content can include video content such as movies, television programs and so on.
Due to the large amount of content available, it can be difficult for a user to navigate and purchase a particular content item in an easily discernable way. It is with respect to these and other general considerations that embodiments of the present disclosure have been made. Also, although relatively specific problems have been discussed, it should be understood that the embodiments should not be limited to solving the specific problems identified in the background.
SUMMARY
This summary is provided to introduce a selection of concepts in a simplified form that are further described in the Description of the Embodiments section below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Exemplary embodiments of the present disclosure provide a method for selecting media content from a media content provider using a user interface. The user interface presents a number of content items. A first type of input data is received which causes a selection mechanism to move among the plurality of displayed content items. A second type of input data, which is different from the first type of input data, is subsequently received. The second type of input data causes the selection mechanism to select a particular content item from the plurality of displayed content items. When the content item is selected, a third type of input data is received. The third type of input data is used to drag or move the content item from a first location on the user interface to a second location on the user interface. The third type of input data is a combination of the first type of input data and the second type of input data and enables the content item to subsequently be viewed.
In a further exemplary embodiment, a computing device having a processor and a memory is described. The computing device, and more specifically the memory of the computing device, stores instructions which, when executed by the processor, perform a method for selecting media content from a media content provider. More specifically, the computing device is configured to display a user interface having a plurality of content items. One of the content items is selected from the plurality of content items when a first type of input is received. In response to receiving the first type of input, the user interface is divided into a plurality of regions/quadrants. The selected content item can then be moved to a first region of the plurality of regions in response to a second type of received input. Playback of the content item is then initiated when the content item is in the first region for a first predetermined amount of time.
In yet another exemplary embodiment, a non-transitory tangible computer-readable storage medium is provided. The computer-readable storage medium includes computer executable instructions which, when executed by at least one processor, perform a method for selecting media content from a media content provider. As part of this method, a user interface having a plurality of content items is displayed. Once the user interface is displayed, a first type of input data is received. The first type of input data causes a selection mechanism to move among the plurality of displayed content items. A second type of input data is then received. The second type of input data causes the selection mechanism to select a content item from the plurality of content items. When the content item is selected, the user interface is divided into a first plurality of regions/quadrants. Further, each region of the first plurality of regions enables a different type of playback option for the content item. A third type of input data is then received. The third type of input data is used to drag the content item to a first region of the first plurality of regions. In other exemplary embodiments, the third type of input data is a combination of the first type of input data and the second type of input data. When the content item is in a particular region, the content item can be output based on the type of playback option associated with the selected region.
BRIEF DESCRIPTION OF THE FIGURES
Exemplary embodiments are described by the figures in which:
FIG. 1 illustrates a block diagram of an exemplary system for delivering video content according to one or more embodiments of the present disclosure;
FIG. 2 is a block diagram of an exemplary receiving device according to one or more embodiments of the present disclosure;
FIG. 3 illustrates an exemplary input device according to one or more embodiments of the present disclosure;
FIG. 4 illustrates another exemplary input device according to one or more embodiments of the present disclosure;
FIG. 5 illustrates an exemplary user interface for providing content to a user according to one or more embodiments of the present disclosure;
FIG. 6 illustrates the exemplary user interface of FIG. 5 in which a particular content item in the user interface has been selected according to one or more embodiments of the present disclosure;
FIG. 7 and FIG. 8 illustrate the exemplary user interface of FIG. 5 that presents various purchasing options for the selected content item according to one or more embodiments of the present disclosure;
FIG. 9 illustrates the exemplary user interface of FIG. 8 that presents various viewing options for the selected content item according to one or more embodiments of the present disclosure; and FIG. 10 illustrates a method for selecting a content item according to one or more embodiments of the present disclosure.
DESCRIPTION OF THE EMBODIMENTS
Various embodiments are described more fully below with reference to the accompanying drawings, which form a part hereof, and which show specific exemplary embodiments. However, embodiments can be implemented in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the embodiments to those skilled in the art. The following detailed description is, therefore, not to be taken in a limiting sense.
In various embodiments of the present disclosure, a user interface for selecting content is provided. More specifically, the user interface of the present disclosure enables a user of an electronic device on which the user interface is displayed, to use various hand gestures, other types of actions, or selection mechanisms to scroll through and select content for purchase and ultimate display on the electronic device. The user interface provides information regarding the content that is to be purchased as well as different purchasing options and viewing options. For example, when a user selects a particular content item, the user interface presents the user with a variety of purchasing options. Once the purchasing options have been displayed and ultimately selected, the user interface then presents the user with a variety of viewing options such as, for example, a resolution at which the content is to be displayed.
FIG. 1 illustrates a block diagram of an exemplary system 100 for delivering video content in accordance with one or more embodiments of the present disclosure. In certain embodiments, content that is to be delivered to an end user, a computing device, a media content player and the like originates from a content source 102. The content source 102 can include a video server containing video content, an audio server containing audio content, a multimedia server containing video/audio content, a video game server, a movie studio, a production house and the like. In certain embodiments, the content can be supplied in a variety of forms. In one example, the content can be in the form of broadcast content. The broadcast content can be provided to a broadcast affiliate manager 104 which collects and stores the content from the content source 102. The broadcast affiliate manager 104 can also schedule delivery of the content over a delivery network such as, for example, delivery network 1 (106). Exemplary broadcast affiliate managers 104 can include various national broadcast services, such as the American Broadcasting Company (ABC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS) and so on.
The delivery network 1 (106) of the system 100 can include a satellite link transmission from a national center to one or more regional or local centers. Delivery network 1 (106) can also provide local content delivery using, for example, a local delivery system such as an over the air (OTA) broadcast, a satellite broadcast, or a cable broadcast. The delivery network 1 (106) can provide content to a receiving device 108. The receiving device 108 can be in a user's home and can enable the user to search through various types and forms of content. In various embodiments, the receiving device 108 can be a set top box, a digital video recorder (DVR), a gateway, a modem, a television, a gaming system, a computer, a mobile phone, a tablet computer and so on. Further, the receiving device 108 can act as an entry point, or gateway, for a home network system that includes additional devices configured as either client, server or peer devices in the home network.
In addition to delivering local content, the system 100 can also be configured to deliver special content in an exemplary embodiment. Special content can include content that is delivered as premium viewing, pay-per-view, or other content otherwise not provided to the broadcast affiliate manager 104. Examples of special content can include movies, video games, music, videos, live streaming events, magazines and other such content. In some instances, special content can be directly requested by a user of the electronic device.
In exemplary embodiments, the special content can be delivered from the content source 102 to a content manager 110. The content manager 110 can be a service provider, such as an Internet website, that is separate from, or affiliated with, a content provider, broadcast service, or delivery network service. The content manager 110 can also incorporate Internet content 111 into the delivery system. The content manager 110 can be configured to deliver the content to a receiving device 108 over delivery network 1 (106) or a separate delivery network, such as, for example, delivery network 2 (112). Delivery network 2 (112) can include high-speed broadband Internet type communications systems. Although specific delivery networks are discussed, in certain exemplary embodiments content from the broadcast affiliate manager 104 can also be delivered using delivery network 2 (112). Likewise, content from the content manager 110 can be delivered using delivery network 1 (106). In addition, the user can also obtain content directly from the Internet via delivery network 1 (106) and/or delivery network 2 (112) without necessarily having the content managed by the content manager 110.
In exemplary embodiments, the special content can be provided as an augmentation to the broadcast content. Accordingly, the special content can provide alternative displays as well as providing purchasing and merchandising options, enhancement materials and the like. In another exemplary embodiment, the special content can partially or completely replace some programming content provided as broadcast content. In yet another exemplary embodiment, the special content can be completely separate from the broadcast content, and can be an alternative that the user can choose to utilize. For instance, the special content can be a library of movies, previews, music, and other such content that is not yet available as broadcast content.
System 100 also includes a receiving device 108. The receiving device 108 can receive various types of content from one or both of delivery network 1 (106) and delivery network 2 (112). The receiving device 108 utilizes a processor to process the received content, and separates the content based on user preferences and received commands. The receiving device 108 can also include a storage device, such as a hard drive, solid state memory, volatile memory, nonvolatile memory, or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the receiving device 108 and features associated with playing back stored content will be described below in relation to FIG. 2. The processed content is provided to a display device 114. The display device 114 can be a 2-D type display or a 3-D display or other medium capable of showing content to a user.
In certain embodiments, the receiving device 108 can be configured to interface with a second screen such as a touch screen control device 116. The touch screen control device 116 can be adapted to provide user control for the receiving device 108 and/or the display device 114. The touch screen device 116 can also be capable of displaying or otherwise providing content. Although a touch screen device 116 is specifically shown and discussed, other input mediums can be used to control the receiving device. Examples include an image sensor, a voice sensor, a controller and the like.
The content can be graphics entries, such as user interface entries, or can be a portion of the content, or the entirety of the content that is delivered to the display device 114. The touch screen control device 116 can interface with the receiving device 108 using any signal transmission system, such as infrared (IR) or radio frequency (RF) communications. Other examples include standard protocols such as the infrared data association (IRDA) standard, Wi-Fi, Bluetooth and any other such communication protocols.
As also shown in FIG. 1, the system 100 can include a backend server 118 and a usage database 120. The backend server 118 can include a personalization engine that analyzes the usage habits of a user and makes recommendations for content based on the usage habits of the user. In some cases, the usage database 120 can be part of the backend server 118. In embodiments, the backend server 118, as well as the usage database 120, can be connected to the system 100 and accessed through either delivery network 1 (106) and/or delivery network 2 (112). In another exemplary embodiment, the usage database 120 and backend server 118 can be part of or accessible by the receiving device 108. In still yet another embodiment, the backend server 118 and usage database 120 can be part of or accessible by a local area network to which the receiving device 108 is connected.
FIG. 2 illustrates a block diagram of an exemplary receiving device 200 according to one or more embodiments of the present disclosure. Receiving device 200 can be similar to the receiving device 108 described above with respect to FIG. 1. As such, the receiving device 200 can be part of a gateway device, modem, set-top box, or other similar communications device. In addition, the receiving device 200 can be part of a video game system, television, computing device, DVD player or other electronic device and the like. The receiving device 200 can also be incorporated into other systems including an audio device or any other display device. For example, the receiving device 200 can be a standalone device such as a set top box coupled to a display device, such as, for example, a television. Device 200 can include an input signal receiver 202 that is configured to receive content. The input signal receiver 202 can be used for receiving, demodulating, and decoding signals provided over one or more networks described above. In exemplary embodiments, the input signal can be selected and retrieved by the input signal receiver 202 based on user input provided through one or more sensors, a control interface, or a touch panel interface 222. The touch panel interface 222 can also be adapted to interface with a cellular phone, a tablet, a mouse, a remote control or the like.
Once content is received by the input signal receiver 202 and decoded, the decoded output signal is provided to an input stream processor 204. The input stream processor 204 performs signal selection and processing that includes separation of video content from audio content for a content stream. The audio content is then provided to an audio processor 206 for conversion from the received format, such as, for example, from a compressed digital signal, to an analog waveform signal.
The analog waveform signal can then be provided to an audio interface 208 and further to the display device or audio amplifier. Alternatively, the audio interface 208 can provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or an alternate audio interface such as the Sony/Philips Digital Interconnect Format (SPDIF). The audio interface can also include amplifiers for driving one or more sets of speakers. The audio processor 206 also performs any necessary conversion for the storage of the audio signals.
The video output from the input stream processor 204 can be provided to a video processor 210. The video signal can be one of several formats. The video processor 210 provides, as necessary, a conversion of the video content, based on the input signal format. The video processor 210 can also perform any other conversions for the storage and/or playback of the video signals.
Device 200 can also include a storage device 212 configured to store the audio and video content output by the input stream processor 204. The storage device 212 enables the audio and video content to be retrieved and output based on commands received from a controller 214. The content can also be retrieved and output based on commands received from a user interface 216, the touch panel interface 222 and/or user gestures or audible commands received and/or sensed from a camera or other such sensor. The storage device 212 can be a hard disk drive, volatile memory, non-volatile memory, one or more large capacity integrated electronic memories, such as static RAM (SRAM), or dynamic RAM (DRAM), or can be an interchangeable optical disk storage system such as a compact disk (CD) drive or digital video disk (DVD) drive.
Once the video processor 210 has converted the video signal to appropriate or desired formats, the video processor 210 can provide the content to a display interface 218. The display interface 218 provides the content to a display device such as described above.
The device 200 can also include a controller 214. The controller 214 can be interconnected to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, and a user interface 216 via a bus. The controller 214 is configured to manage the conversion process for converting the input stream signal into a signal that is suitable for storage on the storage device 212 or into a signal that is suitable for display on the user interface 216. The controller 214 can also be configured to manage the retrieval and playback of stored content or content that is streamed from a content provider. Furthermore, the controller 214 can perform searching of content stored or to be delivered via the delivery networks.
As shown in FIG. 2, the controller 214 can be coupled to control memory 220 (e.g., volatile or non-volatile memory, including RAM, SRAM, DRAM, ROM, programmable ROM (PROM), flash memory, erasable programmable ROM (EPROM), electronically erasable programmable ROM (EEPROM) and the like) and be used for storing information related to the device 200 and/or related to the user interface 216. Further, the control memory 220 can be a single memory device or more than one memory circuit communicatively connected or coupled together to form a shared or common memory. Still further, the memory 220 can be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.
In other exemplary embodiments, controller 214 can be adapted to extract metadata, criteria, characteristics or the like from audio and video media by using audio processor 206 and video processor 210, respectively. That is, metadata, criteria, characteristics or the like that is contained in the vertical blanking interval, auxiliary data fields associated with video, or in other areas in the video signal can be harvested by using the video processor 210 with controller 214 to generate metadata that can be used for functions such as generating an electronic program guide having descriptive information about received video, supporting an auxiliary information service, and the like. Similarly, the audio processor 206 working with controller 214 can be adapted to recognize audio watermarks that can be embedded in an audio signal. Such audio watermarks can then be used to perform some action such as recognizing the audio signal, providing security that identifies the source of the audio signal, or performing some other service. Furthermore, metadata, criteria, characteristics or the like to support the actions listed above can come from a network source and be processed by controller 214.
FIG. 3 illustrates an exemplary input device 300 according to one or more embodiments of the present disclosure. The input device 300 can be used as an input device for use with the systems and devices described above with respect to FIG. 1 and FIG. 2.
In some embodiments, the input device 300 can enable a user to interact with a user interface, such as, for example, the user interface 550 described below with respect to FIG. 5. The input device 300 can also be used to initiate and/or select various functions that are available to a user to enable the user to acquire, consume or otherwise access or modify multimedia content.
Specifically, FIG. 3 illustrates an exemplary electronic device such as, for example, a tablet or touch panel input device. In one embodiment, input device 300 can be similar to touch screen device 116 shown and described with respect to FIG. 1. In another embodiment, the input device 300 can be combined with or communicate with the user interface 216 and/or touch panel interface 222 of the receiving device 200 shown and described with respect to FIG. 2.
In some embodiments, the input device 300 can enable operation of a receiving device or set top box based on detected hand movements, gestures, and/or actions translated through the panel and/or by one or more sensors (e.g., a camera). In other embodiments, the commands sensed or received by the input device 300 can be used by the controller 214 (FIG. 2) to enable selection of a particular content item that is displayed in a user interface.
In other embodiments, the input device 300 can be included as part of a remote control device 400 such as shown in FIG. 4. The touch panel 300 can also include one or more sensors for detecting audio, video, and combinations thereof.
In certain exemplary embodiments, the input device 300 can be configured to sense various gestures from a user. In other exemplary embodiments, a touch screen of the input device 300 can enable a user to provide a number of different types of user interaction. For example, a configuration of sensors associated with the input device can enable a user to define various movements that are each associated with a specific command or operation. More specifically, the input device 300 can sense movement of a user's finger, fingers or hand either on the touch screen or by use of another sensor such as a camera. In another embodiment, the input device 300 can determine its own movement in a variety of directions and update the display of the user interface accordingly.
For example, the input device 300 can use one or more sensors, such as a gyroscope, to determine two-dimensional motion and/or three dimensional motion of the input device 300. In another example, the input device 300 can recognize alphanumeric input traces which can be converted into alphanumeric text that is displayable on the user interface or on the input device 300 itself.
FIG. 4 illustrates another exemplary input device 400 according to one or more embodiments of the present disclosure. As with input device 300 described above, input device 400 can be used to interact with the various user interfaces of the present disclosure.
In certain exemplary embodiments, the input device 400 can be a conventional remote control having an alphanumerical key pad 404 and a navigation section 402. The input device 400 can also include a set of function buttons 406 that, when selected, initiate a particular system function. Additionally, the input device 400 can also include a set of programmable application specific buttons 408 that, when selected, initiate a defined function associated with a particular application executed by the electronic device.
The input device 400 can also include a touch panel 410 that can operate in the manner described above with respect to FIG. 3. Although specific features and buttons of the input device 400 have been discussed, the input device 400 can be a controller with any number of buttons or functionalities. For example, the input device 400 can be a video game controller, wand, or any other such device capable of providing a signal to a user interface of an electronic device. Additionally, it should be noted that either or both of the input devices 300 and 400 depicted and described in FIG. 3 and FIG. 4 can be used substantially simultaneously and/or sequentially to interact with the user interface and/or systems described herein.
In another embodiment, the input device 400 can include at least one of an audio sensor and/or a visual sensor. In this embodiment, the audio sensor can sense audible commands issued from a user and translate the audible commands into functions to be executed by the device. The visual sensor can be used to sense the presence of a user and match user information of the sensed user to stored visual data in the usage database 120 (FIG. 1). For some exemplary embodiments, input device 400 can use a visual sensor and/or audio sensor as a primary input mechanism; physical buttons can be present in some embodiments and absent in others.
In some exemplary embodiments, matching visual data sensed by the visual sensor can enable the system to recognize one or more users that are present and retrieve user profile information associated with the sensed user. In other exemplary embodiments, the retrieved profile information can include content previously purchased by the user, content that the user is watching, settings of the user interface and the like.
In additional exemplary embodiments, the visual sensor can sense physical movements of a user and translate those movements into control commands for controlling the operation of the system or the user interface. In such embodiments, the system can have a set of pre-stored command gestures that, if sensed, enable a controller (e.g., controller 214 of FIG. 2) to execute a particular feature or function of the system. An exemplary gesture command can include the user waving a hand in a rightward direction, which can initiate a fast forward or next screen command, or in a leftward direction, which can initiate a rewind or previous screen command, depending on the current context. In another exemplary embodiment, the gesture can include closing an open hand to simulate "grabbing" a particular content item that is displayed or opening a closed hand to simulate a "release" of the held content item.
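The mapping from sensed gestures to context-dependent commands described above can be sketched as a small lookup table. This is an illustrative sketch only; the gesture names, command names, and contexts below are assumptions, not taken from the disclosure:

```python
# Hypothetical gesture-to-command table; names are illustrative only.
GESTURE_COMMANDS = {
    "wave_right": "fast_forward",   # remapped to "next_screen" while browsing
    "wave_left": "rewind",          # remapped to "previous_screen" while browsing
    "close_hand": "grab_item",      # pick up the displayed content item
    "open_hand": "release_item",    # release the held content item
}

def dispatch_gesture(gesture, context="playback"):
    """Translate a sensed gesture into a command for the current context."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return None  # unrecognized gestures are ignored
    if context == "browse":
        # The same wave navigates between screens when browsing content.
        command = {"fast_forward": "next_screen",
                   "rewind": "previous_screen"}.get(command, command)
    return command
```

As the surrounding text notes, any physical gesture could be tied to an executable function by extending such a table.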
The description of the physical gestures above is merely exemplary and should not be taken as limiting. Rather, this description is intended to illustrate the general concept of physical gesture control that can be recognized by the system. Further, although specific gestures are described, the system, and more particularly the user interface described herein, can be programmed to recognize any physical gesture and allow that gesture to be tied to at least one executable function of the system.
In some exemplary embodiments the input device 300 and input device 400 can enable a user to interact with the exemplary user interfaces described below. As will be discussed in detail below, the exemplary user interfaces can contain different types of content, such as multimedia content, that can be output on a display.
As used herein, the term multimedia content refers to audio, video, and data that can be acquired or otherwise received and which can be at least one of output for display to a user and stored in a storage device for subsequent viewing.
FIG. 5 illustrates an exemplary user interface 550 for providing content, for example multimedia content, to a user according to one or more embodiments of the present disclosure. In some exemplary embodiments, the user interface 550 can be presented on a television, computer, tablet computer, phone, or other portable device and the like such as described above.
As shown in FIG. 5, the user interface 550 can include a menu 510 that enables a user to select various types of media content. The types of media content can include special content and normal content such as described above with respect to FIG. 1. More specifically, the menu 510 can enable a user to select different types of content including movies, television shows, music, print media, and the like.
In the user interface 550 shown in FIG. 5, the menu item "Option 2" has been selected. As a result, the user interface 550 displays various content items 520 in the form of movies. Although movies are specifically mentioned, the content items displayed in the user interface 550 can include various types of content items. For example, the content items displayed can include both movies and music. In another example, the content items 520 can include both print media and movies.
As also shown in FIG. 5, the content items 520 can be divided into various categories 530. The categories 530 can be based on recent or new releases, genre, actor, title, user preferences, content that is currently trending and so on. In another embodiment, the categories 530 that are displayed, or the content items 520 in each category can be based, at least in part, on a particular user that is using or accessing the user interface 550.
For example, as discussed above, an image sensor associated with a device on which the user interface 550 is displayed can sense the presence of a particular user. In response to determining the presence of the particular user, various settings, preferences and content (e.g., previously purchased content, content matching or related to the user's preferences) can be presented on the user interface 550.
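As a rough sketch of this personalization step, the following assumes a hypothetical user profile containing a `preferred_genres` list; the profile and category field names are invented for illustration:

```python
# Sketch: reorder displayed categories so a recognized user's preferred
# genres appear first. Profile/category field names are hypothetical.
def personalize_categories(categories, profile):
    """Return the categories with the user's preferred genres listed first,
    preserving the default order otherwise (sorted() is stable)."""
    preferred = set(profile.get("preferred_genres", []))
    return sorted(categories, key=lambda c: c["genre"] not in preferred)
```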
FIG. 6 illustrates the exemplary user interface of FIG. 5 in which a particular content item 540 in the user interface 650 has been selected by a selection mechanism 600 according to one or more exemplary embodiments of the present disclosure. In embodiments, and as discussed above, a device on which the user interface 650 is displayed can include an image sensor or other input device that recognizes gestures and/or movements provided by a user of the device. In other exemplary embodiments, the input data can be provided by actuation of a button on a controller, movement of the controller, various touch patterns on a touch sensitive device and the like. The various forms of input data described above provided by the user and sensed by the image sensor or the input device can be used to determine a selection of one or more of the displayed content items 520. Selection mechanism 600 can be presented as a cursor, arrow, indicator, representation of a hand, representation of a body part and the like.
More specifically, as input data is received by the device on which the user interface 650 is displayed, a selection mechanism 600 can be presented and moved around the user interface 650. For example, an image sensor associated with the device on which the user interface 650 is displayed can be configured to detect movement of a user's hand from a first location to a second location.
In response to the detected movement, the user interface 650 can output a representation of a selection mechanism 600 on the user interface 650. Further, as the user's hand moves from the first location to the second location, the selection mechanism 600 can mirror the movement of the user's hand. In response to the movement of the selection mechanism 600, different content items 520 in the user interface 650 can be highlighted for selection or otherwise displayed in prominence on the user interface 650. For example, as shown in FIG. 6, the content item 540 labeled "Movie A" is highlighted or otherwise shown to be the selected content item. As the selection mechanism 600 moves between the content items 520, different content items 520 can be highlighted or otherwise be displayed in prominence on the user interface 650.
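One simple way to implement this highlighting behavior is to map the selection mechanism's screen coordinates onto the grid of displayed content items. The following is a minimal sketch assuming a uniform grid layout; the cell dimensions and grid structure are illustrative assumptions:

```python
def highlighted_item(pointer_x, pointer_y, grid, cell_width, cell_height):
    """Return the content item under the pointer, or None if the pointer
    is outside the grid of displayed items."""
    col = int(pointer_x // cell_width)
    row = int(pointer_y // cell_height)
    if 0 <= row < len(grid) and 0 <= col < len(grid[row]):
        return grid[row][col]
    return None
```

Because the selection mechanism mirrors the user's hand, evaluating this on every position update determines which content item to display in prominence.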
Once a desired content item has been highlighted, a second type of input data can be provided to the device on which the user interface 650 is displayed to select the highlighted content item. In certain embodiments, the input data can be based on a gesture provided by a user. For example, a user can close an open hand to simulate "grabbing" or picking up the highlighted content item. In another embodiment, a button on a controller associated with the user interface 650 can be actuated to select the highlighted content item 540. In yet another embodiment, a user can issue a voice command to select the highlighted content item 540. Although specific examples are given, the input data that causes the selection of the highlighted content item 540 can be any suitable movement, actuation of a button, audible command or combinations thereof.
When the second type of input data is received, the icon or graphic that represents the selection mechanism 600 can be updated accordingly. For example, the selection mechanism can be seen as an open hand as the user is navigating through the various content items 520 that are displayed on the user interface 650. However, once the user's hand is closed or a button on a controller is actuated, the image of selection mechanism 600 can be updated to show a closed hand, show the selected content item 540 in the hand and so on.
In another exemplary embodiment, an image sensor associated with the device on which the user interface 650 is displayed can also detect alternate movements, gestures and the like to cause a selected or highlighted content item 540 to be deselected. For example, if a user closes an open hand to simulate a "grab," which causes the highlighted content item 540 to be selected, the user can open the hand to simulate a "release" or deselection of the highlighted content item 540. In other embodiments, the deselection of the selected content item can also occur based on other gestures, voice commands, touch input and combinations thereof.

FIG. 7 illustrates an exemplary user interface 750 that presents various purchasing options for the selected content item 560 according to one or more embodiments of the present disclosure. FIG. 7 shows the user interface 750 after the highlighted content item 540 has been selected based on received input data. As discussed above, the received input data can include a sensed gesture such as closing an open hand to simulate a "grab", a button actuation, a voice command or various combinations thereof.
As also shown in FIG. 7, when the highlighted content item 540 of FIG. 6 is selected using the detected input, the selected content item 560 can be presented on the user interface 750 in prominence with respect to the non-selected content items 520. In addition, selection of a particular content item in the user interface 750 can cause the user interface to be divided into various regions/quadrants that present various purchasing options to the user.
For example, and as shown by FIG. 7, in response to detecting that a content item 520 has been selected using a particular gesture, button actuation, voice command and the like, the user interface 750 presents different regions such as a "Buy" option 710, a "Trailer" option 720, a "Rent" option 730 and a "Cancel" option 740. As can be appreciated, the "Buy" option 710 enables a user to purchase the selected content item 540, the "Trailer" option 720 enables a user to preview the selected content item 540, the "Rent" option 730 enables the user to rent the selected content item 540 for a period of time and the "Cancel" option 740 causes deselection of the selected content item 560. Although specific purchasing options are mentioned, other purchasing or viewing options can be presented in response to a particular sensed gesture, button actuation, voice commands and so on.
Once the various regions have been displayed on the user interface 750, a third type of input data can be received by the device on which the user interface 750 is displayed. In embodiments, the third type of input data can be a combination of the first type of input data and the second type of input data. More specifically, the third type of input data can be a "drag" operation.
For example, and as shown in user interface 850 of FIG. 8, once a particular content item has been selected using, for example, a "grab" such as described above, the selected content item 560 and/or the selection mechanism 600 can be moved to one of the regions such as "Buy" option 710, the "Trailer" option 720, the "Rent" option 730 or the "Cancel" option 740. Movement of the selected content item 560 can be based on a detected movement of a user, input from a controller, input on a touch sensitive device such as a swipe or other gesture, a voice command and combinations thereof.
When the selected content item 560 has been moved to the desired region, quadrant, and the like, a timer 800 can be displayed on the user interface 850. The timer 800 displays a predetermined or threshold amount of time until the selected purchasing option completes. For example, if the user drags the selected content item 560 to the "Buy" option 710, the timer 800 appears and starts a countdown. Once the countdown is complete, the user interface 850 can initiate playback of the selected content item 560. In another exemplary embodiment, the selected content item 560 can be stored for later viewing.
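The behavior of timer 800 can be modeled as a dwell timer that starts counting down when the selected item enters a drop region and resets when it leaves. A minimal sketch follows; the 3-second default threshold is an assumption, as the disclosure does not specify a duration:

```python
import time

class DwellTimer:
    """Counts down while an item dwells in a drop region; the associated
    action (e.g., buy or rent) completes when the countdown expires."""

    def __init__(self, threshold=3.0, clock=time.monotonic):
        self.threshold = threshold  # assumed default; not specified above
        self.clock = clock
        self.entered_at = None

    def enter_region(self):
        self.entered_at = self.clock()

    def leave_region(self):
        # Removing the item from the region resets the countdown.
        self.entered_at = None

    def remaining(self):
        if self.entered_at is None:
            return self.threshold
        return max(0.0, self.threshold - (self.clock() - self.entered_at))

    def expired(self):
        return self.entered_at is not None and self.remaining() == 0.0
```

The injectable `clock` parameter is a design convenience for testing; a real interface would redraw the remaining time on each frame and trigger the purchase action when `expired()` becomes true.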
In some exemplary embodiments, the timer 800 can reset or be removed from the user interface 850 when the selected content item 560 is removed from the region in which it has been placed. In another embodiment, the timer 800 can be removed from the display or reset in response to received input data such as, for example, a user opening a closed hand to simulate a "release" gesture. In response to the received input data, the user interface 850 can return to a state in which a user can navigate through the various content items 520 as shown in FIG. 5.
FIG. 9 illustrates an exemplary user interface 950 that presents various viewing options for the selected content item 560 according to one or more embodiments of the present disclosure. More specifically, FIG. 9 illustrates a user interface 950 once the timer 800 of FIG. 8 has expired.
As shown in FIG. 9, the user interface 950 can also be configured to display a second set of regions (quadrants) that correspond to various viewing options once a user has selected a desired purchasing option. The viewing options can include a particular type of resolution for the selected content item 560. For example, the second set of regions can include a "Standard Definition" option 910, a "High Definition" option 920, a "4K" option 930 or a "Cancel" option 940.
When the selected content item 560 has been moved to the region that represents the desired viewing option, a timer 900 can be displayed on the user interface 950. As with the timer 800 of FIG. 8, the timer 900 displays a predetermined or threshold amount of time that needs to lapse before the selected action occurs.
For example, if the user drags the selected content item 560 to the "4K" option 930, the timer 900 appears and starts a countdown. Once the countdown is complete, the user interface 950 can initiate playback of the selected content item 560 in the desired resolution. In another embodiment, the selected content item 560 can be stored for later viewing at the selected resolution. It should be noted that the selected viewing option is associated with the previously selected purchasing option such as described with respect to FIG. 8. As such, if the user selected the "Rent" option 730 and subsequently selected the "HD" option 920, the selected content item would be rented for a specified amount of time and would be presented in high definition when viewed.
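Since the viewing option is tied to the previously chosen purchasing option, the two selections can be combined into a single transaction record once both timers have expired. The following is a hedged sketch; the option names and the 48-hour rental window are invented for illustration:

```python
def build_transaction(purchase_option, viewing_option):
    """Combine a purchasing option (buy/rent/trailer) with a viewing
    option (resolution) into one transaction description."""
    if "cancel" in (purchase_option, viewing_option):
        return None  # either cancel region deselects the content item
    transaction = {"action": purchase_option, "resolution": viewing_option}
    if purchase_option == "rent":
        transaction["rental_period_hours"] = 48  # assumed rental window
    return transaction
```

Under this sketch, dragging to "Rent" and then "High Definition" yields a rental transaction to be played back in high definition, matching the example in the text above.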
In some exemplary embodiments, the timer 900 can reset or be removed from the user interface 950 when the selected content item 560 is removed from the region in which it has been placed. In another exemplary embodiment, the timer 900 can be removed from the display or reset in response to received input data such as, for example, a user opening a closed hand to simulate a "release" gesture. In response to the received input data, the user interface 950 can return to a state in which a user can navigate through the various content items 520.
FIG. 10 illustrates a method 1000 for selecting a content item from a user interface according to one or more embodiments of the present disclosure. In some exemplary embodiments, the method 1000 can be used with the exemplary user interfaces described above with respect to FIG. 5 - FIG. 9.
Method 1000 begins when input data is received 1010 that enables a user to navigate through various content items that are displayed on a user interface. The content items can be video content, music content, written content and the like. Further, the received input data can be data that is sensed by one or more sensors associated with a device that is configured to display the user interface. The input data can be a gesture, actuation of a button or directional pad on a controller, input on a touch sensitive device and the like such as described above.
Flow then proceeds to operation 1020 in which input data is received that selects one of the content items displayed in the user interface. In embodiments, the input data for selecting the content item can be a gesture from a user that is sensed by one or more sensors of a device that is displaying the user interface. The gesture can be an action of a user's hand, actuation of a button on a controller, an audible command, input from a touch-sensitive device and the like.
In exemplary embodiments where the user interface shows a selection mechanism, the image of the selection mechanism can change based on the received input. For example, if the selection mechanism is a hand, the image of the hand can change from an open hand to a hand that is closed. In another example the hand can be shown to be holding the selected content item.
Operation 1030 provides that purchasing options are then displayed on the user interface. The purchasing options can be shown in response to the input data that is received as described in operation 1020. The purchasing options can be displayed in various portions of the user interface and can include an option to buy the selected content item, rent the selected content item or preview the selected content item.
Flow then proceeds to operation 1040 in which input data is received that moves the selected content item to one of the purchasing options. The input data can include a hand gesture, arm motion, voice command, controller movement and so on. Once the content item is adjacent to or hovers over a particular purchasing option, a timer can be displayed on the user interface. The timer can present the number of seconds that need to lapse before the indicated action takes place.
Once the timer expires, flow then proceeds to operation 1050 and the user interface can display one or more viewing options for the selected content item. In embodiments, the viewing options correspond to a resolution at which the content item is to be viewed. Example viewing options include standard definition, high definition and so on. As with the purchasing option, a user can select 1060 one of the viewing options in the manner described above. More specifically, a user can cause input data to be received by the device that indicates where the selection mechanism should move. Once the desired viewing option has been selected, a timer can be displayed on the user interface. The timer can show an amount of time that needs to lapse before the selection of the viewing option is complete.
Flow then proceeds to operation 1070 and playback of the selected content item is initiated. The playback corresponds to the purchasing option that was selected in combination with the viewing option that was selected. For example, if the purchase option was a preview option and the viewing option that was selected was a high definition option, a preview of the selected content item is output on the user interface in high definition format. The playback can start once the transaction is complete or can be started at a later time.
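The flow of operations 1010 through 1070 can be summarized as a small state machine; the state and event names below are illustrative labels, not part of the disclosed method:

```python
# Sketch of method 1000 as a state machine. An unrecognized event
# leaves the interface in its current state.
TRANSITIONS = {
    ("browse", "select_item"): "purchase_options",             # operations 1010-1030
    ("purchase_options", "timer_expired"): "viewing_options",  # operations 1040-1050
    ("viewing_options", "timer_expired"): "playback",          # operations 1060-1070
    ("purchase_options", "release_item"): "browse",            # deselection resets
    ("viewing_options", "release_item"): "browse",
}

def next_state(state, event):
    return TRANSITIONS.get((state, event), state)
```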
Embodiments of the present disclosure are described above with reference to block diagrams and operational illustrations of methods and the like. The operations described can occur out of the order as shown in any of the figures. Additionally, one or more operations can be removed or executed substantially concurrently. For example, two blocks shown in succession can be executed substantially concurrently. Additionally, the blocks can be executed in the reverse order.
The description and illustration of one or more exemplary embodiments provided in this disclosure are not intended to limit or restrict the scope of the present disclosure as claimed. The embodiments, examples, and details provided in this disclosure are considered sufficient to convey possession and enable others to make and use the best mode of the claimed embodiments. Additionally, the claimed embodiments should not be construed as being limited to any embodiment, example, or detail provided above. Regardless of whether shown and described in combination or separately, the various features, including structural features and methodological features, are intended to be selectively included or omitted to produce an embodiment with a particular set of features. Having been provided with the description and illustration of the present application, one skilled in the art can envision variations, modifications, and alternate embodiments falling within the spirit of the broader aspects of the embodiments described herein that do not depart from the broader scope of the claimed embodiments.

Claims

1. A method for selecting media content from a content provider, the method comprising:
displaying a user interface (750) having a plurality of content items;
showing movement of a selection mechanism (600) among the plurality of content items (520) in response to a receipt of a first type of input data (750); showing movement of the selection mechanism (600) selecting a content item (560) from the plurality of content items in response to a receipt of a second type of input data, the second type of input data being different from the first type of input data; and
when the content item is selected (560), moving the content item from a first region (710) of the user interface to a second region (910) on the user interface in response to receipt of a third type of input data.
2. The method of claim 1, further comprising dividing the user interface into a first plurality of regions in response to receiving the second type of input data.
3. The method of claim 2, wherein at least one region of the plurality of regions corresponds to the second location.
4. The method of claim 2, wherein at least one region of the plurality of regions enables playback of the at least one content item.
5. The method of claim 4, further comprising initiating playback of the content item when the selection mechanism causes the content item to be located at the second location for a first determined amount of time.
6. The method of claim 5, wherein the user interface displays a first timer associated with the first determined amount of time.
7. The method of claim 4, further comprising displaying a second plurality of regions in response to selection of the at least one region that initiates playback of the content item.
8. The method of claim 7, wherein the second plurality of regions display resolution options for playback of the content item.
9. The method of claim 8, further comprising displaying a second timer in response to the selection mechanism moving the content item to one of the second plurality of regions.
10. The method of claim 1, wherein the third type of input data is a combination of the first type of input data and the second type of input data.
11. The method of claim 1, further comprising receiving a fourth type of input data to deselect the selected content item.
12. An apparatus comprising:
a processor (214); and
a memory (220) coupled to said processor, the memory for storing instructions which,
when executed by the processor, perform the operations of:
displaying a user interface (750) having a plurality of content items;
showing movement of a selection mechanism (600) among the plurality of content items (520) in response to a receipt of a first type of input data (750); showing movement of the selection mechanism (600) selecting a content item (560) from the plurality of content items in response to a receipt of a second type of input data, the second type of input data being different from the first type of input data; and
when the content item is selected (560), moving the content item from a first region (710) of the user interface to a second region (910) on the user interface in response to receipt of a third type of input data.
13. The apparatus of claim 12, further comprising instructions for dividing the user interface into a first plurality of regions in response to receiving the second type of input data.
14. The apparatus of claim 13, wherein at least one region of the plurality of regions corresponds to the second location.
15. The apparatus of claim 13, wherein at least one region of the plurality of regions enables playback of the at least one content item.
16. The apparatus of claim 15, further comprising instructions for playing back the content item when the selection mechanism causes the content item to be located at the second location for a first determined amount of time.
17. The apparatus of claim 16, wherein the user interface displays a first timer associated with the first determined amount of time.
18. The apparatus of claim 15, further comprising instructions for displaying a second plurality of regions in response to selection of the at least one region that initiates playback of the content item.
19. The apparatus of claim 18, wherein the second plurality of regions display resolution options for playback of the content item.
20. The apparatus of claim 19, further comprising instructions for displaying a second timer in response to the selection mechanism moving the content item to one of the second plurality of regions.
21. The apparatus of claim 12, wherein the third type of input data is a combination of the first type of input data and the second type of input data.
22. The apparatus of claim 12, further comprising instructions for receiving a fourth type of input data to deselect the selected content item.
23. A method comprising:
displaying a user interface (550) having a plurality of content items using a video processor (210);
displaying a selection of a content item (560) from the plurality of content items based on a first type of received input using said video processor (210);
displaying the user interface (550) as a first plurality of regions in response to the selection of the content item (560) using said video processor (210);
displaying a movement of the content item (560) to a first region of the plurality of regions in response to a second type of received input using said video processor (210); and
initiating playback of the content item (560) when the content item is in the first region for a first determined amount of time using said video processor (210).
24. The method of claim 23, further comprising displaying a second plurality of regions in response to the content item being placed in the first region for a threshold amount of time.
25. The method of claim 24, wherein the second plurality of regions enables a selection of a resolution of the playback of the content item.
26. The method of claim 25, further comprising selecting a resolution of the playback of the content item when the content item is placed in at least one region of the second plurality of regions for a second determined amount of time.
27. The method of claim 26, further comprising displaying a timer for selecting the resolution when the content item is placed in the at least one region of the second plurality of regions for the second determined amount of time.
28. The method of claim 23, further comprising displaying a timer for initiating playback of the content item when the content item is in the first region for the first determined amount of time.
29. The method of claim 23, wherein the first type of received input is associated with a first type of detected movement and the second type of received input is associated with a second type of detected movement.
30. An apparatus comprising:
a processor (214); and
a memory (220) coupled to the processor (214), the memory for storing instructions which, when executed by the processor, perform the operations comprising:
displaying a user interface (550) having a plurality of content items using a video processor (210);
displaying a selection of a content item (560) from the plurality of content items based on a first type of received input using said video processor (210);
displaying the user interface (550) as a first plurality of regions in response to the selection of the content item (560) using said video processor (210);
displaying a movement of the content item (560) to a first region of the plurality of regions in response to a second type of received input using said video processor (210); and
initiating playback of the content item (560) when the content item is in the first region for a first determined amount of time using said video processor (210).
31. The apparatus of claim 30, further comprising instructions for displaying a second plurality of regions in response to the content item being placed in the first region for a threshold amount of time.
32. The apparatus of claim 31, wherein the second plurality of regions enables a selection of a resolution of the playback of the content item.
33. The apparatus of claim 32, further comprising instructions for selecting a resolution of the playback of the content item when the content item is placed in at least one region of the second plurality of regions for a second determined amount of time.
34. The apparatus of claim 33, further comprising instructions for displaying a timer for selecting the resolution when the content item is placed in the at least one region of the second plurality of regions for the second determined amount of time.
35. The apparatus of claim 30, further comprising instructions for displaying a timer for initiating playback of the content item when the content item is in the first region for the first determined amount of time.
36. The apparatus of claim 30, wherein the first type of received input is associated with a first type of detected movement and the second type of received input is associated with a second type of detected movement.
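To summarize the input-type distinctions running through the claim sets, here is a minimal illustrative sketch (again, not part of the published claims): a first input type moves the selection mechanism among content items, a second type selects an item, a third type — recited in claims 10 and 21 as a combination of the first two — drags the selected item between regions, and a fourth type (claims 11 and 22) deselects it. The enum and class names are assumptions for illustration only.

```python
from enum import Enum, auto

class InputType(Enum):
    MOVE = auto()      # first type: move the selection mechanism
    SELECT = auto()    # second type: select a content item
    DRAG = auto()      # third type: MOVE + SELECT combined
    DESELECT = auto()  # fourth type: release the selected item

class SelectionMechanism:
    """Hypothetical dispatcher for the four input types."""

    def __init__(self):
        self.position = (0, 0)  # where the selection mechanism is drawn
        self.selected = None    # currently held content item, if any

    def handle(self, input_type, payload=None):
        if input_type is InputType.MOVE:
            self.position = payload        # move among the content items
        elif input_type is InputType.SELECT:
            self.selected = payload        # pick up a content item
        elif input_type is InputType.DRAG and self.selected:
            self.position = payload        # item travels with the mechanism
        elif input_type is InputType.DESELECT:
            self.selected = None           # drop or cancel the selection
        return self.position, self.selected
```

The point of the sketch is the dispatch structure: DRAG only moves the item when something is already selected, mirroring the claims' requirement that the third input type operates on a selected content item.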
PCT/US2015/010487 2014-01-07 2015-01-07 Drag and drop user interface for purchasing media content WO2015105879A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201461924643P 2014-01-07 2014-01-07
US61/924,643 2014-01-07
US201462013897P 2014-06-18 2014-06-18
US62/013,897 2014-06-18

Publications (1)

Publication Number Publication Date
WO2015105879A1 true WO2015105879A1 (en) 2015-07-16

Family

ID=52432944

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/010487 WO2015105879A1 (en) 2014-01-07 2015-01-07 Drag and drop user interface for purchasing media content

Country Status (1)

Country Link
WO (1) WO2015105879A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020049978A1 (en) * 2000-10-20 2002-04-25 Rodriguez Arturo A. System and method for access and placement of media content information items on a screen display with a remote control device
US20100037261A1 (en) * 2008-08-07 2010-02-11 Sony Corporation Display apparatus and display method


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017033180A (en) * 2015-07-30 2017-02-09 キヤノン株式会社 Information processing apparatus, control method thereof, and program
EP3726366A1 (en) * 2019-04-15 2020-10-21 Konica Minolta, Inc. Operation receiving apparatus, control method, image forming system, and program
CN111835935A (en) * 2019-04-15 2020-10-27 柯尼卡美能达株式会社 Operation reception device, control method, image forming system, and recording medium
JP2020177261A (en) * 2019-04-15 2020-10-29 コニカミノルタ株式会社 Operation acceptance device, control method, image formation system, and program
US11350001B2 (en) 2019-04-15 2022-05-31 Konica Minolta, Inc. Operation receiving apparatus, control method, image forming system, and recording medium
CN111835935B (en) * 2019-04-15 2022-11-15 柯尼卡美能达株式会社 Operation reception device, control method, image forming system, and recording medium
JP7275795B2 (en) 2019-04-15 2023-05-18 コニカミノルタ株式会社 OPERATION RECEIVING DEVICE, CONTROL METHOD, IMAGE FORMING SYSTEM AND PROGRAM
CN115309309A (en) * 2022-08-17 2022-11-08 维沃移动通信有限公司 Content sharing method and device, electronic equipment and medium


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15701608

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15701608

Country of ref document: EP

Kind code of ref document: A1