US20100031149A1 - Content preparation systems and methods for interactive video systems - Google Patents
- Publication number
- US20100031149A1 (application US 12/495,548)
- Authority
- US
- United States
- Prior art keywords
- original
- content
- video
- frames
- character
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
- G11B27/32—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
- G11B27/322—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier used signal is digitally coded
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Definitions
- Embodiments of the invention generally relate to interactive systems and methods for performing video compositing in an entertainment environment.
- Interactive entertainment is a popular leisure activity for people across the globe.
- One favorite activity for many is karaoke, which temporarily turns lay persons into "stars" as they sing the lyrics to a favorite song.
- Karaoke machines play the music of a selected song while simultaneously displaying the song lyrics to a user.
- an interactive role performance system allows users to select a role to play in a movie scene and replace the original actor of that role with their own performance.
- with the interactive role performance system, if a participant wants to reenact scenes from a favorite movie, the participant can select a scene from that movie, record his or her own performance, and the system inserts that performance in place of the original character, creating the appearance that the participant is interacting with the other characters in the movie scene. For example, if a participant wants to reenact a scene from STAR WARS, he can record his own performance as LUKE SKYWALKER, and that performance is combined into the scene in place of the original actor's (e.g., Mark Hamill's) performance.
- a content preparation system is used to generate the scenes used by the interactive role performance system.
- Original media content from a variety of sources, such as movies, television, and commercials, can be used to provide participants with a wide variety of scenes and roles.
- the content preparation system takes an original media content, removes a character from the content, and recreates the background. By recreating the background after removing the character, the user is given greater freedom to perform as the user can perform anywhere within the scene. For example, a scene from STAR WARS is generated by removing the LUKE SKYWALKER character from the scene, and recreating the background behind LUKE SKYWALKER, leaving a clear, recreated background where the participant's performance can be inserted.
- a method for preparing media content for use with a video image combining system.
- the method includes receiving original video content comprising multiple frames having a plurality of original characters associated therewith and selecting particular frames of the multiple frames displaying at least one of the plurality of original characters. For each of the particular frames displaying the at least one original character, the method comprises receiving the particular frame, wherein the particular frame displays a background image in which the at least one original character occupies a position therein, and modifying the particular frame to erase the at least one original character, wherein the modifying comprises digitally removing the at least one character by extending the background image of the particular frame to fill the position of the at least one original character to allow for subsequent insertion of a replacement character in the position.
- the method further comprises combining the modified particular frames with remaining frames of the multiple frames to create modified video content and generating metadata associated with the modified video content, the metadata being configured to direct the subsequent insertion of the replacement character into the modified video content, the metadata indicating at least: a first frame and a last frame of the particular frames and the position the at least one original character occupied in the original video content.
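The insertion metadata described above (first frame, last frame, and original character position) can be sketched as a small record. This is purely illustrative: the class and field names are assumptions, since the patent only requires that the metadata identify these three items.

```python
from dataclasses import dataclass

@dataclass
class WashMetadata:
    """Illustrative metadata record for one washed clip (names are assumptions)."""
    first_frame: int    # first frame displaying the original character
    last_frame: int     # last such frame
    position: tuple     # (x, y) the character occupied in the original content

    def covers(self, frame_index: int) -> bool:
        """True if a replacement character should be inserted at this frame."""
        return self.first_frame <= frame_index <= self.last_frame

meta = WashMetadata(first_frame=120, last_frame=480, position=(310, 95))
print(meta.covers(200))   # → True: frame 200 is inside the insertion window
```

A compositor could consult such a record for each frame to decide whether, and where, to overlay the participant's image.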
- a system for preparing media content for use with a video image combining system.
- the system comprises a database, an editing module and a processing module.
- the database is configured to store original video content, the original video content comprising multiple frames having a plurality of original characters associated therewith.
- the editing module is configured to execute on a computing device and is further configured to: extract consecutive select frames of the multiple frames that display at least one of the plurality of original characters within a background image; modify the select frames to remove the at least one original character, wherein the modifying comprises extending the background image in each of the select frames over a position of the at least one original character; and arrange the modified select frames with other frames of the multiple frames to generate modified video content.
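The "extending the background image over a position of the original character" step can be illustrated with a deliberately crude sketch: here the background is recreated by replicating the pixel column just outside the character's bounding box. This is a stand-in for the manual and automated background recreation an operator would actually perform; the function name, grayscale representation, and values are all hypothetical.

```python
def wash_frame(frame, box):
    """Erase a character by extending the background over its position.

    frame: 2-D list of grayscale pixel values; box: (top, bottom, left, right)
    bounds of the character. The background is "extended" by copying the pixel
    column immediately left of the box across it -- a crude illustrative
    stand-in for real background recreation.
    """
    top, bottom, left, right = box
    washed = [row[:] for row in frame]          # leave the original frame intact
    for r in range(top, bottom):
        for c in range(left, right):
            washed[r][c] = washed[r][left - 1]  # replicate the background column
    return washed

frame = [[5, 5, 9, 9, 5],        # 9s mark the character to be washed out
         [5, 5, 9, 9, 5]]
print(wash_frame(frame, (0, 2, 2, 4)))
# → [[5, 5, 5, 5, 5], [5, 5, 5, 5, 5]]
```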
- the processing module is configured to generate metadata associated with the modified video content to coordinate a subsequent combination of a replacement character image with the modified video content, the metadata further comprising: first data identifying at least a first frame and a last frame of the select frames; and second data indicating the position of the at least one original character in the original video content.
- a system for preparing media content for use in interactive video entertainment.
- the system comprises: means for receiving original video content comprising multiple frames having an original character associated therewith; means for selecting particular frames of the multiple frames displaying at least the original character within a background image; means for modifying the particular frames to remove the original character by extending the background image to replace the original character and to allow for subsequent real-time insertion of a replacement character; means for combining the modified particular frames with remaining frames of the multiple frames to create modified video content; and means for generating metadata associated with the modified video content and usable for the subsequent real-time insertion of the replacement character, the metadata indicating at least, a first frame and a last frame of the particular frames, and a position of the original character within the particular frames of the original video content.
- a computer-readable medium for an interactive video system.
- the computer-readable medium comprises: modified media content comprising a first plurality of frames representing original video content having a background video image, and a second plurality of consecutive frames representing modified original video content having the background video image from which an image of at least one original character has been replaced by a continuation of the background video image over a position of the at least one original character.
- the computer-readable medium also comprises metadata associated with the modified media content, the metadata comprising first data indicating a beginning frame and an end frame of the second plurality of consecutive frames and second data indicating the position of the at least one original character.
- the above-described system and methods can comprise original video or media content including a single original character and/or metadata that does not include information that identifies the position of the original character.
- the systems and methods summarized above can advantageously be implemented using computer software.
- the system is implemented as a number of software modules that comprise computer executable code for performing the functions described herein.
- any module that can be implemented using software to be executed on a general purpose computer can also be implemented using a different combination of hardware, software, and/or firmware.
- FIG. 1 illustrates an exemplary embodiment of an interactive role performance system according to certain embodiments of the invention.
- FIG. 2 illustrates a flowchart of an exemplary embodiment of a video compositing process according to certain embodiments of the invention.
- FIG. 3 illustrates a flowchart of an exemplary embodiment of a media content preparation process according to certain embodiments of the invention.
- FIGS. 4A-4B illustrate alternative embodiments of the media content preparation process of FIG. 3.
- FIGS. 5A-5D illustrate a frame of media content during various phases of the content preparation process in which a single actor is washed out of the scene.
- FIGS. 6A-6B illustrate an exemplary matte layer created during the media content preparation process of FIG. 3 .
- FIG. 7 illustrates an embodiment of a data flow diagram of an interactive role performance system configured to operate with multiple players in different geographic locations.
- FIG. 8 illustrates an embodiment of a wireframe for a video compositing interface of the interactive role performance system of FIG. 1 .
- FIG. 9 illustrates an exemplary screen display of one embodiment of a cascade interface for a video compositing interface.
- FIG. 10 illustrates an exemplary screen display of one embodiment of the movement and selection process of the cascade interface of FIG. 9 .
- FIG. 11 illustrates an exemplary screen display of one embodiment of a performance screen of a video compositing interface.
- FIG. 12 illustrates an exemplary screen display of one embodiment of the role selection screen of a video compositing interface.
- FIG. 13 illustrates an exemplary screen display of one embodiment of a large screen view of a display window of a video compositing interface.
- FIG. 14 illustrates an exemplary screen display of one embodiment of a script printing screen of a video compositing interface.
- FIG. 15 illustrates an exemplary screen display of one embodiment of the camera setup screen of a video compositing interface.
- FIG. 16 illustrates an exemplary screen display of one embodiment of a reference frame setup screen of a video compositing interface.
- FIG. 17 illustrates an exemplary screen display of one embodiment of an add introduction screen of a video compositing interface.
- FIGS. 18-20 illustrate exemplary screen displays of one embodiment of the setting screens of a video compositing interface.
- Certain interactive role performance systems and methods are disclosed herein that allow users to select a role to play in a movie scene and replace an original actor of that role with their own performance.
- the participant can select a scene from that movie, record his or her own performance, and the interactive role performance system inserts that performance in place of the original character, creating the appearance that the participant is interacting with the other characters in the movie scene.
- content preparation systems and methods are provided that generate the scenes used by the interactive role performance system.
- Original media content from a variety of sources, such as movies, television, and commercials, can be used to provide participants with a wide variety of scenes and roles.
- the content preparation system takes original media content, removes a character from the content, and recreates the background. By recreating the background after removing the character, the participant is given greater freedom to perform as the user can perform anywhere within the scene.
- the present disclosure is not limited by the source of the media content, and other media content sources may be used, such as, for example, video games, animation, sports clips, newscasts, music videos, commercials, television, documentaries, combinations of the same or the like. Neither is the present disclosure limited by the format of the media content, and other formats may be used, such as, for example, still images, computer generated graphics, posters, music, three-dimensional (3D) images, holograms, combinations of the above or the like. It is also recognized that in other embodiments, the systems and methods may be implemented as a single module and/or implemented in conjunction with a variety of other modules and the like. Moreover, the specific implementations described herein are set forth in order to illustrate, and not to limit, the invention.
- Conditional language such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that some embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
- "actor" and "character" as used herein are broad terms and are used in their ordinary sense and include, without limitation, any replaceable element in a media content, such as still or video content.
- an "actor" or "character" can be a person (live or animated), an animal, an avatar, a computer-generated character, a game character, a cartoon character, and/or a thing.
- video "scene," "clip," "image," and "content" are broad terms and are used in their ordinary sense and include, without limitation, any type of media content.
- media content can include pictures, videos, film, television, documentaries, commercials, sports, music, music videos, games, posters, original content, user-generated content, licensed content, royalty free content, any pre-existing moving image or graphic content, still images, digital avatars, online content, combinations of the above, or the like.
- the media content may or may not include audio, dialogue, and/or effects.
- the media content can be in English or any other language.
- compositing is a broad term and is used in its ordinary sense and includes, without limitation, the superimposing or combining of multiple signals, such as, for example, video and/or audio signals, to form a combined signal or display. Furthermore, compositing does not require two signals and/or video images to be stored as a single signal, file and/or image. Rather, "compositing" can include the simultaneous, or substantially simultaneous, playing of two or more signals (for example, video files) such that the signals are output via a single display or interface.
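A minimal sketch of compositing in this sense: combining a washed background frame with a participant's foreground wherever a mask flags the participant's silhouette. The grayscale representation and all names are illustrative, not the patent's implementation.

```python
def composite(background, foreground, mask):
    """Overlay foreground pixels onto the washed background wherever mask is 1.

    background, foreground: 2-D lists of grayscale pixel values;
    mask: 2-D list of 0/1 flags marking the participant's silhouette.
    """
    return [
        [fg if m else bg for bg, fg, m in zip(brow, frow, mrow)]
        for brow, frow, mrow in zip(background, foreground, mask)
    ]

washed = [[10, 10, 10],
          [10, 10, 10]]
performer = [[99, 99, 99],
             [99, 99, 99]]
silhouette = [[0, 1, 0],
              [0, 1, 1]]
print(composite(washed, performer, silhouette))
# → [[10, 99, 10], [10, 99, 99]]
```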
- "compositor" refers to any device or system, implemented in hardware, software, or firmware, or any combination thereof, that performs in whole or in part a compositing function.
- real time is a broad term and is used in its ordinary sense and includes, without limitation, a state or period of time during which some event or response takes place.
- a real-time system or application can produce a response to a particular stimulus or input without intentional delay such that the response is generated during, or shortly thereafter, the receiving of the stimulus or input.
- a device processing data in real time may process the data as it is received by the device.
- a real-time signal is one that is capable of being displayed, played back, or processed within a particular time after being received or captured by a particular device or system, wherein said particular time can include non-intentional delay(s).
- this particular time is on the order of one millisecond. In other embodiments, the particular time may be more or less than one millisecond.
- âreal timeâ refers to events simulated at a speed similar to the speed at which the events would occur in real life.
- database as used herein is a broad term and is used in its ordinary sense and includes without limitation any data source.
- a database may be implemented using a relational database, such as DB2, Sybase, Oracle, CodeBase and MICROSOFT SQL SERVER as well as other types of databases such as, for example, a flat file database, an entity-relationship database, and object-oriented database, and/or a record-based database.
- a database may also be one or more files stored on a storage device, such as a hard drive or optical device.
- Metadata is a broad term and is used in its ordinary sense and includes without limitation any information associated with a media content.
- the information can comprise control data providing an interactive role performance system with directions on how to process the media content or the information can be descriptive information identifying the media content.
- the metadata can comprise in and/or out points of characters, actions in the original scene, audio levels, camera movement, switching, positions, zoom, pan, camera control signals, lighting information, color and hue information, titles, descriptions, category, tags, combinations of the same or the like. Metadata can be recorded in a text document, database, eXtensible Markup Language (XML) file, and/or embedded within the washed or customized content.
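As one illustration of the XML option mentioned above, metadata for a washed clip could be serialized with Python's standard library. The element and attribute names here are assumptions, since the patent does not specify a schema.

```python
import xml.etree.ElementTree as ET

# Build an illustrative metadata document for one washed clip.
clip = ET.Element("washedClip", title="Scene 12")
character = ET.SubElement(clip, "character", name="lead")
ET.SubElement(character, "inPoint").text = "120"    # first washed frame
ET.SubElement(character, "outPoint").text = "480"   # last washed frame
ET.SubElement(character, "position", x="310", y="95")

xml = ET.tostring(clip, encoding="unicode")
print(xml)
```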
- FIG. 1 illustrates an exemplary embodiment of an interactive role performance system 100 according to certain embodiments of the invention.
- the interactive role performance system 100 is configured to selectively insert an image of one or more users into prerecorded media, such as a movie.
- the image of the one or more users is recorded and/or inserted in real time.
- the prerecorded content database 101 stores "washed" media content 104, or content wherein an actor or character has been removed for replacement, and/or metadata 105.
- a video source 102 receives, processes, and/or stores media content received from a studio, comprising the source media files.
- a media content processing system 103 prepares or "washes" media content clips of actors and/or objects and creates corresponding metadata 105 for the washed media content 104.
- the completed washed content 104 is then sent to the prerecorded content database 101 .
- the washed media content 104 is available for use in the video compositing process.
- the video recorder 110 captures an image and/or video of the user.
- the feed from the video recorder 110 is sent to the video compositor 120 and/or an optional user content database 115 for storage.
- the video compositor 120 accesses the washed content 104 stored on the prerecorded content database 101 and combines the washed content with the feed from the video recorder 110 .
- the final combined output is shown on the display 125 .
- the interactive role performance system 100 comprises a prerecorded content database 101 .
- the database 101 comprises data, video files, audio files, metadata and/or other information usable to control the video compositing process.
- the washed media content 104 of the database 101 can comprise one or more video clips, such as movie clips comprising video and/or audio content, usable in the background of a combined video image.
- the media content 104 comprises washed content, as described in more detail below, wherein a character or other object has been removed from the media content 104 .
- the media content 104 can comprise unaltered video scenes from a movie or other audiovisual work.
- the video content comprises a QUICKTIME file, an MPEG file, a WMA file, a WMV file, an MP4 file, an MKV file, a JPEG file, and/or the like.
- the database 101 can also comprise one or more matte files, as described in more detail below, usable to overlay an inserted user image.
- each matte file can be associated with one or more particular background clips.
- the matte files can be integrated with the background clips.
- the database 101 comprises metadata 105 usable to control the selective combination of one or more character images with the background media content 104 and/or matte files and/or to control the display of such images.
- metadata 105 can comprise reference data files to control the display and/or removal of user images, subtitle information, color/hue reference information (for example, color or black and white output), information for moving matte files, resize data, movie poster-frame information for producing still-frame movie posters, combinations of the same or the like.
- the metadata 105 comprises descriptive information associated with the video content, such as actor information, key art, studio logos, titles, combinations of the same or the like.
- the database 101 comprises any type of media device(s) and/or memory capable of storing the above described information.
- the database 101 can comprise one or more of the following: servers, hard drives, personal computers, DVDs, optical disks, flash memory, USB storage devices, thumb drives, tapes, magnetic disks, combinations of the same or the like.
- the database 101 can comprise multiple databases located remote to each other.
- the interactive role performance system 100 further comprises a video source 102 and/or a media content processing system 103 . These components can be used during content development to process original source content to produce washed content to be stored in the prerecorded content database. After washing the source content, some or all of the washed content can be, in certain embodiments, stored in the prerecorded content database as the washed media content 104 .
- the video source 102 can comprise one or more computers, workstations, servers, combinations of the same or the like for processing and/or storing original source content.
- Source media from studios can be acquired in a variety of formats, such as digibeta tapes, digital files, DVDs, video tapes and/or the like.
- Source media from the studios can be "ingested" by the video source 102 to create a copy for use in the washing process and/or formatted into an uncompressed digital file.
- the digital files are stored on a redundant array of hard drives connected to the video source directly or through a network.
- a playback machine such as a digibeta playback deck, DVD player, video player, and/or the like may be used to play back the source media with the video source ingesting the output.
- the video source 102 can further comprise a âhelperâ video card with a Serial Digital Interface (SDI) and/or Audio Engineering Society digital audio (AES3) inputs/outputs to assist the video source 102 in processing media content.
- the video source stores digital files in a database.
- One or more hard drives can be used for storing the master source.
- master files can be backed up on tape archival systems and/or stored on additional hard drives. Finished washed content can then be copied to one or more prerecorded content database 101 as media content 104 for distribution to and/or use by participants.
- the media content processing system 103 processes the video source 102 digital files.
- the media content processing system 103 can comprise, for example, computer workstations equipped with speakers and/or headphones, video/audio editing software, timecode readers/generators, and/or house sync boxes.
- the editing software comprises FINAL CUT PRO, PHOTOSHOP, AFTER EFFECTS, ADOBE AUDITION, SOUNDTRACK PRO, and/or the like. Operators can use the workstations and/or editing software to wash individual frames of scenes from the digital files of selected elements. Operators can further use the workstations and/or editing software to recreate the backgrounds behind the washed elements of the scenes.
- the media content processing system 103 further comprises a database and workflow manager to check the accuracy of information, provide accessibility for production management, track financial/royalty requirements, and/or provide archival security.
- the interactive role performance system 100 further comprises a video recorder 110 , such as, for example, a digital video camera, a web camera, a smart phone camera, combinations of the same or the like, for obtaining one or more images to be inserted into the combined video image.
- the video recorder 110 obtains a real-time video image of a participant that is selectively "inserted" into a scene of the media content 104 from the prerecorded content database 101 to produce a real-time, interactive video image at a display.
- the video recorder 110 further comprises, or is associated with, a video content processor that modifies a video image obtained through the recorder 110 .
- such embodiments of the video recorder 110 can be used with a green screen, a blue screen, and/or other similar chroma-key equipment to prepare the obtained video image for compositing with the media content 104 .
- video image captured through the video recorder 110 can be digitally modified to remove certain portions of the captured image.
- background subtraction techniques as discussed in more detail herein, can be used to isolate a foreground element, such as an image of the user.
- the video recorder 110 further comprises a wired or wireless microphone, a remote control, a tripod, combinations of the same or the like. Multiple video recorders located together or remotely from each other can be used to capture multiple participants and/or multiple angles.
- the video recorder 110 captures images in 3D and/or infrared formats.
- the illustrated interactive role performance system 100 can also comprise an optional user content database 115 .
- the user content database 115 stores video and/or audio data captured through the video recorder 110 . Such data can be stored directly to the user content database 115 and/or can be further processed, as discussed herein, prior to such storage. Examples of such processing include, but are not limited to, removing, replacing, and/or enhancing audio and/or video elements, such as music, vocals, score, sound effects, special effects, combinations of the same or the like.
- the user content database 115 allows for the repeated and/or later playing of a combined video by storing a participant's "performance." In other embodiments, the user content database 115 stores avatar or other computer-generated images of a participant or other character for use in the interactive role performance system 100 .
- the prerecorded content database 101 and/or the user content database 115 can communicate with other components and/or modules of the interactive role performance system 100 via a wired or wireless network such as, for example, a local area network, a wide area network, the Internet, an intranet, a fiber optic network, combinations of the same or the like.
- the interactive role performance system 100 further comprises a video compositor module 120 configured to combine video images received from the prerecorded content database 101 with video images from the video recorder 110 and/or the user content database 115 .
- the video compositor 120 comprises at least a processor and a memory.
- Such memory can include, for example, SDRAM, EEPROM, flash, non-volatile memory, volatile memory, a hard drive, an optical drive, combinations of the above or the like.
- the video compositor 120 further comprises a graphics processing unit (GPU).
- the video compositor module 120 advantageously combines and/or causes the display of multiple video images during playback without saving such images in a combined format.
- the media content 104 from the prerecorded content database 101 can be combined with the user video image from the video recorder 110 and/or the user content database 115 to form a combined image without storing such content in a combined file.
- the combined image is stored in a single file or location for later playback, comment, use and/or the like.
- the video compositor module 120 is advantageously configured to cause the display 125 to output the combined video image.
- the display 125 can comprise a television, a monitor, a liquid crystal display (LCD), a cellular phone display, a computer display, combinations of the same or the like.
- the video compositor 120 can be integrated with the display 125 . Additional components can also be integrated together.
- a smart phone, PDA, or other mobile device could comprise the prerecorded content database 101 , the video recorder 110 in the form of a camera, the video compositor 120 in the form of compositing software, and/or the display 125 . It is understood that components can be integrated in other ways, such as a camera with a memory for prerecorded content and a processor for running compositing software.
- the prerecorded content database 101 , video recorder 110 , user content database 115 , video compositor 120 , and/or display 125 are used during video compositing when a user interacts with the interactive role performance system 100 .
- the components used during video compositing can be provided to the user separately from those components used in initially preparing and developing the prerecorded content.
- the content can be delivered to the user either through a physical medium or by online delivery.
- a user receives the video compositing elements without a prerecorded content database 101 .
- the prerecorded content database 101 can be available online for the video compositor 120 to access over a network and/or the Internet.
- the user receives a prerecorded content database 101 containing a limited number of media content 104 files on a CD/DVD or other physical medium, with additional content available separately.
- washed media content 104 can be stored on a central database from which additional content can be downloaded by the video compositor 120 .
- connection media may be used to link elements of the interactive role performance system 100 .
- the elements are directly connected with fiber optic channels, Ethernet, and/or the like.
- Elements of the interactive role performance system 100 can also be spread out in remote locations and connected through the Internet and/or a virtual private network.
- Elements of the interactive role performance system 100 can also be wirelessly connected with other elements.
- elements, such as the video recorder 110 , video compositor 120 , and/or display 125 are connected directly with audio and/or video cables and/or through a wireless connection, such as via BLUETOOTH or other radio frequency communications.
- although the interactive compositing system 100 has been described herein with reference to video technology, it will be understood from the disclosure herein that other types of media can be used by the system 100 .
- such media can comprise video games, animation, still pictures, posters, combinations of the same or the like.
- FIG. 2 illustrates a flowchart of an exemplary embodiment of a video compositing process 200 , according to certain embodiments of the invention.
- the video compositing process 200 is executed by the interactive role performance systems described herein to perform video compositing using washed content.
- the video compositing process 200 is described hereinafter with reference to the components of the interactive role performance system 100 of FIG. 1 .
- the video compositing process 200 begins with Block 205 , during which the user of the interactive role performance system 100 can select an available washed scene from the media content 104 stored on the prerecorded content database 101 .
- the user selection is made using some form of input device, such as a remote control, a mouse, a keyboard, a touch screen, a keypad and/or the like.
- the video compositor 120 communicates with the prerecorded content database 101 to receive the washed media content 104 .
- the interactive role performance system 100 can further provide the user or participant with a script of the lines or dialogue in a scene.
- the interactive role performance system 100 further comprises a printer and provides the user with a printed script.
- the video compositor 120 displays the scene on the display 125 with directions, prompts or instructions to the user for properly acting out the scene.
- the directions can be text prompts and/or an outline on the display 125 directing the user to assume a specified stance, position himself or herself in a specified location, and/or face a specified direction.
- Directions can also comprise lines of dialogue that the user repeats, visual prompts on the screen, and/or voice directions of actions to be taken, such as the directions that a real director would give to an actor.
- the participant acts out the scene selected at Block 205 .
- the participant follows the directional prompts given at Block 210 .
- the directional prompts can be interactive and/or can be given out while the user is acting out the scene.
- a real-time feed of the participant is displayed while the participant acts out the scene.
- the real-time feed gives the participant feedback on what the participant's actions look like on screen.
- the prerecorded washed scene is combined with the real-time feed to provide the participant with real-time feedback on what the final scene will look like.
- graphics are superimposed over the real-time feed of the user to provide clearer directions to the user.
- the graphics can range from text to computer-generated graphics.
- directional prompts can consist of interactive mini-games, directing the user to punch blocks, hit balls, hula dance, hula hoop, and/or the like, wherein the participant can act out the scene based on the directional prompts.
- the image of the participant is captured by the video recorder 110 .
- the participant acts out the scene in front of a green screen.
- screens of different colors or no screen can be used.
- Various techniques can also be used to isolate the image of the participant from the background. For example, chroma-key techniques can be used to separate the user from the background by the video compositor 120 and/or video recorder 110 .
- a background processing technique is used to allow the participant to act out the scene in front of any background, with the background screen being optional.
- the video compositor 120 can use background subtraction, where a previously recorded reference image of the background is compared to the captured video image to identify a new element in the captured image, thereby isolating the image of the user. The new element is then identified as the foreground and/or is separated out from the background for insertion into the media content 104 scene.
- the user-generated content captured by the video recorder 110 is stored on the user content database 115 .
- the video recorder 110 can also record sound with the video.
- pre-existing sound clips are used and/or no sound is recorded.
- the metadata 105 associated with the washed media content 104 directs the video recorder 110 to turn on or off at certain times.
- the metadata 105 can be information contained in an XML file that controls the video recorder 110 .
- the video recorder 110 can be powered off or otherwise temporarily disabled to prevent extraneous sound and/or video from being recorded.
- the sound and the video capture can be controlled independently of each other, providing greater control over how sound and/or video is captured by the video recorder 110 .
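As a purely illustrative sketch of the XML-driven capture control described above, the snippet below parses a hypothetical metadata file and reports whether video and audio capture should be enabled at a given playback time. The element and attribute names are assumptions for illustration; the patent does not specify a schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata schema -- element and attribute names are invented.
SAMPLE_METADATA = """
<scene id="example_scene">
  <capture start="0.0" end="4.5" video="on" audio="on"/>
  <capture start="4.5" end="9.0" video="off" audio="on"/>
  <capture start="9.0" end="12.0" video="off" audio="off"/>
</scene>
"""

def capture_state(xml_text, t):
    """Return (video_on, audio_on) for playback time t, defaulting to off
    outside any cue, so the recorder stays disabled between defined spans."""
    root = ET.fromstring(xml_text)
    for cue in root.findall("capture"):
        if float(cue.get("start")) <= t < float(cue.get("end")):
            return (cue.get("video") == "on", cue.get("audio") == "on")
    return (False, False)
```

Because each cue carries independent video and audio flags, the sound and video capture can be toggled separately, matching the independent control described above.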
- the image of the participant is inserted into the washed media content 104 either as a video or a still image.
- the video compositor 120 receives the user image from the video recorder 110 and/or from the user content database 115 .
- Various insertion techniques can be used by the video compositor 120 to insert the participant's image into the washed scene.
- the image of the participant can be played concurrently with and overlaid over the washed content, or the image can be incorporated into the washed content.
- the image insertion is unrestricted, with the image of the participant being capable of appearing anywhere within the washed scene.
- the metadata 105 or scene information directs the video compositor 120 on where the user image is to be inserted into the scene.
- This metadata 105 or scene information can further comprise display and removal points recording where a replaceable (or removed) actor appears in and/or exits the original scene corresponding to the washed scene.
- the display and removal points comprise the beginning and end frames of the scenes having the replaceable actor.
- the metadata 105 also controls the insertion of audio, such as speech from the participant, into the washed content.
- the washed content also contains mattes, which determine whether elements should appear in front of the inserted user image.
- the participant is inserted as an extra or additional actor in a scene, without replacing an existing character.
- the processing is offloaded to the GPU in the video compositor 120 to reduce load on a processor of the video compositor 120 .
- the video compositing process 200 can comprise a multi-pass processing of the media content.
- a two-pass process can be used in which a first pass determines which pixels of the elements that are to be added to or combined with the washed content 104 (e.g., the user image) should be designated as transparent or opaque.
- these pixel values are identified through background subtraction processes described in more detail herein.
- the user content is then inserted into the washed content scene.
- further processing can be performed that blends user-generated content more cleanly with the washed media content 104 .
- a border of several pixels around the inserted content can be blended into the washed content by applying a gradient of opaqueness to the border to create a more seamless integration with the washed content.
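The border-blending step described above can be sketched as follows. This is a minimal CPU implementation assuming a binary foreground mask given as a 2D list; a production compositor would likely perform the equivalent operation on the GPU.

```python
from collections import deque

def feather_alpha(mask, border=3):
    """Turn a binary foreground mask into per-pixel alpha values, ramping
    opacity from 0 to 1 over `border` pixels at the mask edge."""
    h, w = len(mask), len(mask[0])
    # Breadth-first search: distance from each pixel to the nearest
    # transparent (background) pixel.
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:
                dist[y][x] = 0
                queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    alpha = []
    for y in range(h):
        row = []
        for x in range(w):
            if not mask[y][x]:
                row.append(0.0)  # background stays fully transparent
            else:
                # interior pixels (or an all-opaque mask) saturate at 1.0
                d = dist[y][x] if dist[y][x] is not None else border
                row.append(min(d, border) / border)
        alpha.append(row)
    return alpha
```

The linear ramp here stands in for the "gradient of opaqueness" mentioned above; other falloff curves could be substituted.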
- Additional processing can be applied to the combined video to improve the image. For example, pixel sampling can be conducted to determine and correct the green levels in the image. Shadows, outlines, and/or color correction can also be applied to the combined video.
- the following annotated source code illustrates one embodiment of the background subtracting program used to process the user-generated image.
- the exemplary program disclosed above takes the inputs of two images: a source image (e.g., a video image including the user) and a reference background image (e.g., without the image of the user) to be removed from each frame of the source image.
- the interactive role performance system 100 can record the reference background image after the user steps out of the field of view of the video recorder 110 .
- the program processes one pixel or textel at a time and returns a value that is fully transparent or opaque based on color channel differences between the source image and the reference background.
- the portions of the source image matching portions of the reference background are set transparent, while the portions of the source image that are different from the reference background are kept opaque.
- the pixels in a frame that are opaque determine the actor and/or object that was added to the scene after the reference background image was captured.
- certain established threshold values determine if a particular pixel is to be designated as opaque or not.
- the threshold value can compensate for small variations in the source image and the reference background image due to inconsistencies in how the source and background images were recorded. For example, if the lighting is inconsistent during the image acquisition process, the background from the source and background reference may not be recorded identically.
- the threshold level could be set as a higher value to compensate for greater differences.
- the threshold value can be a set value, set by the user, or adaptively set by the role performance system 100 .
- the program further determines the RGB pixel color of the source and reference background images and then converts the RGB values to HSV color space.
- the program determines the difference in each RGB and HSV color channel, wherein RGB color channels are red, green, and blue and HSV color channels are hue, saturation, and value.
- the program determines the greatest difference of all RGB color channels. The RGB and HSV difference is measured against the threshold value to determine if the pixel should be set as opaque. Otherwise, the pixel is set to transparent.
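Since the annotated source code referenced above is not reproduced in this excerpt, the following Python sketch illustrates the described per-pixel test under stated assumptions: normalized RGB values in [0, 1], a single combined threshold, and the greater of the RGB and HSV channel differences deciding opacity. The exact combination rule and threshold handling in the original program may differ.

```python
import colorsys

def pixel_alpha(src_rgb, ref_rgb, threshold=0.1):
    """Classify one pixel as opaque (1.0) or transparent (0.0) by comparing
    the source frame against the reference background image. The threshold
    value here is an illustrative default, not taken from the patent."""
    # Greatest per-channel difference in RGB space.
    rgb_diff = max(abs(s - r) for s, r in zip(src_rgb, ref_rgb))
    # Convert both pixels to HSV and take the greatest channel difference.
    src_hsv = colorsys.rgb_to_hsv(*src_rgb)
    ref_hsv = colorsys.rgb_to_hsv(*ref_rgb)
    hsv_diff = max(abs(s - r) for s, r in zip(src_hsv, ref_hsv))
    # Pixels differing from the reference beyond the threshold are kept
    # opaque (foreground); matching pixels are made transparent.
    return 1.0 if max(rgb_diff, hsv_diff) > threshold else 0.0
```

Raising the threshold tolerates larger lighting variations between the source and reference captures, at the cost of eroding genuine foreground detail.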
- an additional reference frame is taken of the participant within the background.
- the output quality of the background processing can be checked using the two reference frames.
- a background subtraction process can be performed on the participant reference frame instead of the entire user content.
- the process outputs an isolated image of the participant which, in certain embodiments, is representative of the quality of the output from processing the entire user content.
- Using the participant reference frame allows the output quality of the background processing to be tested with smaller files and less processing.
- video images can be processed by the video compositor 120 and/or by a remote transcoding server.
- the combined video can be encoded by the video compositor 120 while the user content is captured.
- user content is captured in raw format and encoded at a later time by a transcoding server located remotely from the video recorder 110 . After capture, the user content can be uploaded to the transcoding server.
- the transcoding server is a virtual server running on a cluster of servers. As such, the transcoding server has significantly more processing power than the video compositor 120 . The additional processing power allows the transcoding server to engage in additional processing passes over the user content and the washed content to provide a higher quality video.
- the transcoding server transcodes the content into a flash video file, MPEG, JPEG, and/or other file type for viewing at a display 125 and/or for submission to a content sharing website.
- the transcoding server in certain embodiments, can further apply a watermark to the user-generated content and/or the displayed combined content for copy control purposes.
- the combined video is shown on the display 125 .
- the combined video is shown in real-time with respect to the capturing of the user image by the video recorder 110 .
- Custom filters can further be applied during playback to improve the displayed image.
- the combined video is saved for later playback and/or displayed at a later time.
- the video can also be displayed remotely at a location different from the user. It is understood that the combined video is not necessarily stored in a single file.
- the combined video can exist as separate files that are overlaid onto each other, played back, and/or synchronized to generate a combined image.
- the combined video can comprise a matte file, a subtitle file, a washed content file, and/or a user content file.
- the combined video or elements of the combined video are uploaded to a website for searching, sharing, and/or viewing combined video content.
- the user content can be sent to the website and played back along with a corresponding washed content stored on the website to generate the combined video.
- the participant can create an introduction for the combined video using the interactive role performance system 100 using a process similar to that used to create the combined video.
- the media content preparation process or "washing" process is a processing development for audio and/or video that increases the realism of the interactive experience.
- without washing, a user playing the role of DARTH VADER in STAR WARS would need to be positioned precisely in front of the villain before starting the scene.
- otherwise, the original DARTH VADER character would remain visible behind the user image, detracting from the supposed realism of the experience.
- the washing processes described herein advantageously remove the original DARTH VADER character from the scene such that the participant image need not be limited to a particular area and is free to move within the scene, thereby increasing the realism of the experience.
- the washing process can be applied to the audio of a scene.
- data files accompanying the clip supply audio switch data in real time, turning various audio tracks on or off in order to silence the replaceable character so the user can play the part uninterrupted.
- the character's audio is filtered out of the original audio to create a modified audio used in the washed content, allowing the participant increased freedom in the timing of his or her lines.
- more content versatility is provided by the disclosed systems and methods, which simplify the compositing process by moving more of the content processing to the content development phase, improving control over both audio and video manipulation, and thereby improving the "replacement" effect.
- editing software can be used to entirely remove actors from a scene, allowing the washed content to be used as the prerecorded background into which the user is inserted and simplifying the user image insertion step during the compositing process.
- FIG. 3 illustrates a flowchart of an exemplary embodiment of a media content preparation process 300 , used during content development according to certain embodiments of the invention.
- the process 300 is executed by embodiments of the interactive role performance systems described herein, usually by the video source 102 and/or media content processing system 103 .
- the media content preparation 300 is described hereinafter with reference to the components of the interactive role performance system 100 of FIG. 1 .
- a scene is selected from media content stored on the video source 102 .
- a scene is identified by one or more watchers viewing the entire source media content to select scenes that can viably be used as washed content. For instance, the one or more watchers can log the start/end times of the scene.
- media content is copied onto a network media server and then reviewed by one or more watchers.
- scenes are selected based on certain predetermined criteria and/or the ease with which the source content can be washed.
- selection criteria can comprise the duration of the scene, the visibility of the primary actor, the immobility of the background, the minimal motion of the foreground, a clear view of the actors with little or no blocking objects, and/or the consistency of the background.
- scenes are generally avoided if the camera is in motion, the background is in motion, there is a large amount of foreground action, there are many camera angles, or there is a great deal of action or overlapping dialogue.
- a media content clip comprising a selected scene is captured from the media content.
- the frame selection is accomplished by a program implementing one or more selection criteria.
- the media content processing system 103 extracts individual frames from the selected media content clip.
- the media content clip is exported into individual consecutive frames, such as 24 to 30 frames per second of playback.
- clips can contain more frames or fewer frames, depending on the format of the source media content.
- the media content processing system 103 identifies and/or selects the particular frames that contain a selected character and/or object and washes the frames through a series of manipulations to remove the selected character from the scene.
- manipulations extend or continue the background image to remove the character and can comprise borrowing pixels from a background where the actor and/or object is not present, retouching the areas with consistent background materials, fabricating pixels by filling areas with appropriate artwork from within the frame or other sources, and/or blending the areas into the surrounding background.
- the process is repeated for every play option in each scene, breaking the clips into multiple video tracks and/or using editing software to bundle the different tracks into unique "prerecorded" background clips for each option.
- different tracks can have unique data file triggers or metadata that correspond to different "in" and/or "out" points within the scene. For example, one set of data file triggers can determine when a user image is to be on or off the screen; another can dictate when a customized special effects layer is activated; a third can command a particular background matte layer to appear or disappear as needed.
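A minimal sketch of such data file triggers might look like the following; the frame ranges and layer names are invented purely for illustration.

```python
# Illustrative trigger tables: each layer lists (in_frame, out_frame)
# half-open spans during which it is active.
TRIGGERS = {
    "user_image":  [(0, 120), (240, 360)],  # when the participant is on screen
    "special_fx":  [(100, 140)],            # customized special effects layer
    "matte_layer": [(0, 360)],              # foreground matte visibility
}

def active_layers(frame, triggers=TRIGGERS):
    """Return the set of layers whose in/out points cover `frame`."""
    return {name for name, spans in triggers.items()
            if any(lo <= frame < hi for lo, hi in spans)}
```

During playback the compositor would query this table once per frame to decide which layers to draw.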
- a more robust clip development process provides an increased ability to separate audio tracks and/or isolate sound effects, musical scores, and/or the voices of different characters for individual manipulation.
- Media content received from the studios can contain multiple audio tracks separate from the video. For example, tracks 1 and 2 can contain mixed audio, while tracks 3 and 4 contain the music and/or effects.
- Certain embodiments of the interactive role performance system 100 can either control audio data that has been delivered in separate tracks and/or mix separate tracks together, or can break audio tracks apart if the source material has them combined. Creating separate audio tracks allows for the editing of some tracks while not touching others. Certain embodiments can substitute and/or remove movie score audio, alter and/or remove actor audio, and/or enhance, alter, and/or remove sound effect data, then later recombine the tracks for association with different user play options.
- certain embodiments of the invention can separate the audio tracks from STAR WARS to remove DARTH VADER'S speaking parts, replace the John Williams score with a royalty-free track, and/or enhance the light saber sound effects.
- the system can condense the separate tracks down to one master track to be played when the user chooses to replace DARTH VADER.
- a similar approach could be taken to alter different tracks for a LUKE SKYWALKER play option.
- the resulting experience can have better audio accompaniment because the sound elements can be better manipulated during content development than they could be on-the-fly.
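The track-separation approach described above can be sketched as a simple downmix that omits the replaceable character's dialogue track. Track names and normalized sample values here are assumptions for illustration.

```python
def mix_tracks(tracks, exclude=()):
    """Sum per-sample audio across named tracks, skipping any listed in
    `exclude` (e.g. the replaceable character's dialogue), and clip the
    result to the normalized [-1.0, 1.0] range."""
    kept = [samples for name, samples in tracks.items() if name not in exclude]
    length = max(len(s) for s in kept)
    master = []
    for i in range(length):
        # tracks may differ in length; shorter tracks contribute silence
        total = sum(s[i] for s in kept if i < len(s))
        master.append(max(-1.0, min(1.0, total)))
    return master
```

For the STAR WARS example above, the dialogue track of the replaced character would be listed in `exclude` when condensing the separated tracks to one master track.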
- the media content processing system 103 creates mattes from the media content.
- compositing systems involve superimposing a new video layer, or "matte," of the user over the original background content in order to create the illusion that the user is "in" the prerecorded content. While this effect works well in many cases, certain prerecorded backgrounds contain foreground elements, such as desks, podiums, castle walls, other actors and/or the like, that appear in front of the actor to be replaced. In many cases, these foreground elements also move, such as when a bird flies across the frame, a person walks in front of the actor, and/or a camera move effectively changes the position of the stationary wall or desk relative to the actor in the frame. In order to create a more 3D interactive experience, these foreground elements can be recreated or somehow moved so as to be visible in front of the superimposed user's image.
- mattes can comprise, but are not limited to, video files that contain transparency information such that white space allows subordinate video layers to show through and/or black space prevents subordinate video layers from showing through.
- Certain mattes can be created based on elements of the target prerecorded clip such that any element which should be "in front" of the user, such as a desk, is black, and/or the elements that should be "behind" the user are white.
- the matte layers cause portions of a background image to come to the foreground in front of an inserted user image.
- a moving matte is required for a motion scene. The matte creation process is described in further detail below. Once a matte is created, it can be synchronized to the media content clip to match up with the motion of the object that appears in the foreground.
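The matte layering described above can be sketched per pixel: a black matte pixel forces the washed content's foreground element in front of the user image, while a white matte pixel lets the background-subtracted user pixel show through where it is opaque. This is an illustrative simplification of the compositing order, not the patent's implementation.

```python
def composite_pixel(washed, user, user_alpha, matte_white):
    """Layer one output pixel from the washed content, the user image,
    the user's alpha (from background subtraction), and the matte."""
    if not matte_white:       # black matte: foreground element (e.g. a desk)
        return washed         # stays in front of the inserted user image
    if user_alpha >= 0.5:     # opaque user pixel
        return user
    return washed             # transparent user pixel: show the washed scene
```

For a motion scene the matte value would be read from the synchronized moving matte frame by frame.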
- the video recorder 110 captures the user image without making any camera moves, pans, or zooms. These functions can be accomplished through the software of the video compositor 120 .
- the original scene can be analyzed and metadata 105 can be recorded that captures the in and/or out points, actions in the original scene, audio levels, camera movement, switching, positions, zoom and/or pan.
- the metadata 105 can further instruct the video recorder 110 to scale, move within the x-y coordinates of the overall combined frame, and/or switch to a different angle.
- Metadata 105 can be recorded in a text document, database, XML file, and/or embedded within the washed content.
- the media content processing system 103 records actor position, size information, and/or other metadata 105 associated with the washed media content 104 .
- processing software in the media content processing system 103 analyzes the media content clip to generate metadata, such as the position and size information.
- the actor position and/or size information are used during the setup of the camera, lights and/or green screen to determine the orientation and/or size of the inserted user in the scene. Using this information allows the inserted user image to match as closely as possible with the character that is being replaced.
- the media content processing system 103 creates an outline graphic representing the removed character's position in the washed scene.
- the participant uses the outline graphic to determine where he/she should position himself/herself during recording of his/her performance of the particular scene.
- an outline graphic is not included in the washed scene. Moreover, a user can freely move around within the scene and/or is not required to appear in a specific position.
- the media processing system 103 transcribes and/or prepares subtitles of the dialogue for each scene or clip.
- subtitles appear when the removed character would be speaking and disappear when the actor is not.
- subtitles may not be required and/or are already available and do not need to be created.
- the media processing system 103 outputs a washed scene after processing of the media content is complete.
- the media processing system saves the washed content into a local storage device and/or saves the washed content directly to the prerecorded content database 101 as the media content 104 .
- the washed content can further undergo a quality control process to ensure that the washed content has been properly created.
- the washed content may also be saved into a backup storage system.
- poster art for display can be created by washing actors out of media content.
- FIG. 4A illustrates an alternative embodiment of the media content preparation process of FIG. 3 .
- a scene is selected and frames from the scene are created.
- an actor is removed from one frame.
- a background, such as a wall, is recreated behind the actor.
- the washed frame is extended or repeated for the rest of the frames in the scene. In some scenes, the background is similar from one frame to another, and reusing the washed frame saves additional effort.
- a track or file with the data triggers for the in and/or out points of the actor and/or other metadata is created. In some embodiments, the in and/or out points are represented by the first and/or last frames the actor appears in.
- If more than one actor is selected for removal from the scene, the process can return to Block 410 and repeat Blocks 415, 420, and 425 for the next actor. The process can be repeated as many times as necessary for the number of actors to be washed.
- At Block 430, one or more tracks with the associated data triggers are bundled into a single washed media content scene.
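The FIG. 4A flow, extending a single washed frame across the scene and attaching one trigger track per removed actor, can be sketched as follows. This is a minimal sketch under assumed data structures; the dictionary layout and function name are not from the patent:

```python
# Hypothetical sketch of the FIG. 4A pipeline: reuse one washed frame for the
# whole scene and record an in/out trigger track per removed actor. The
# in/out points are the first and last frames in which the actor appeared.

def build_washed_scene(num_frames, washed_frame, actor_spans):
    """actor_spans maps actor name -> (first_frame, last_frame).
    Returns a single bundled washed-scene structure."""
    frames = [washed_frame] * num_frames          # extend the static background
    tracks = [
        {"actor": name, "in": first, "out": last}
        for name, (first, last) in actor_spans.items()
    ]
    return {"frames": frames, "tracks": tracks}   # one bundled scene

scene = build_washed_scene(
    num_frames=240,
    washed_frame="frame_000_washed.png",
    actor_spans={"actor_a": (0, 239), "actor_b": (48, 180)},
)
print(len(scene["frames"]), len(scene["tracks"]))  # 240 2
```

Reusing the one washed frame, as the text notes, only works when the background changes little from frame to frame; otherwise the FIG. 4B variant (reshooting or digitally recreating the background) applies.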
- FIG. 4B illustrates another alternative embodiment of the media content preparation process of FIG. 3 .
- a scene is selected and frames from the scene are created.
- elements of the set are reshot and/or a background is digitally recreated either entirely or by combining the newly shot set elements with the original content at Block 460 .
- a track or file with the data triggers and/or other metadata for the scene is recorded.
- one or more tracks are bundled into one washed scene.
- the media content preparation process can be accomplished by using any existing or new technologies that can allow for the altering of video content, such as the ability to map or track camera movements from the original content and/or recreate them with an altered background.
- any of the described media content preparation processes can be used singly or in combination to create the washed content.
- Embodiments of the content development process also allow for customization and/or alteration of other elements affecting the interactive experience.
- These elements can comprise, but are not limited to, subtitle data, colors, fonts, placement, actor cues and/or suggestions, audio and/or video special effects, information about user image size, location, dynamic movement, color hue, saturation, distortion, play pattern interactivity such as voting, ranking, and/or commenting, properties for online uploading, sharing, and/or blogging, particulars about creating, sharing, printing movie stills and/or posters based on each scene, gaming elements, pitch, vocals, accuracy, volume, clapping, combinations of the same or the like.
- certain analysis can be performed that suggests users appearing in a scene from LORD OF THE RINGS should appear more orange than users appearing in a scene from THE MATRIX.
- Color saturation, lighting, hue data and/or other metadata can be written into the data files or metadata 105 for each respective scene, such that during the performance, the interactive role performance system 100 can use the data files or metadata 105 to manipulate the live image in order to more realistically blend the user into the background footage.
- digital resizing and/or movement data can be programmed into each scene that dictates where the user appears in the frame of prerecorded content, and/or the size of the user image relative to the rest of the scene. This information can be used to create dynamic effects, such as digitally simulating camera movement over the course of the scene. This data could also be written into the metadata 105 associated with the piece of washed media content 104 .
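The color and placement adjustments described in the preceding two paragraphs can be sketched as metadata-driven interpolation. This is an assumed illustration; the metadata field names ("start", "end", "saturation") and the linear interpolation are choices made for the sketch, not details from the patent:

```python
# Hypothetical sketch: per-frame placement and color data for the user image,
# interpolated across the scene to simulate camera movement, with a scene-wide
# saturation factor for blending the live image into the background footage.

def place_and_tint(frame_index, num_frames, meta):
    """meta["start"]/meta["end"] are assumed (x, y, scale) keyframes for the
    user image; meta["saturation"] is an assumed color-blend factor."""
    t = frame_index / (num_frames - 1)            # 0.0 .. 1.0 through the scene
    (x0, y0, s0), (x1, y1, s1) = meta["start"], meta["end"]
    x = x0 + (x1 - x0) * t                        # simulated camera drift
    y = y0 + (y1 - y0) * t
    scale = s0 + (s1 - s0) * t                    # simulated push-in/pull-out
    return {"x": x, "y": y, "scale": scale, "saturation": meta["saturation"]}

meta = {"start": (100, 50, 1.0), "end": (140, 50, 0.8), "saturation": 1.2}
print(place_and_tint(0, 5, meta))   # user starts at x=100, full size
print(place_and_tint(4, 5, meta))   # user ends at x=140, slightly smaller
```

In the system described above, values like these would live in the metadata 105 bundled with the washed media content 104, so the live image can be manipulated during playback rather than baked into the clip.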
- control data or metadata 105 for these elements is bundled with the associated washed media content 104 and/or matte layers during content development.
- These elements can be referenced and/or controlled with data files which are invisible to the user, but can be embedded in software elements and/or included in digital files (for example, an Internet downloaded file or XML file) or the like, appropriately associated with the original content purchased by the user.
- FIGS. 5A to 5D illustrate a frame from a media content during various phases of certain embodiments of the washing process in which a single actor is washed out of the scene.
- the frames, as illustrated, are described hereinafter with reference to the components of the interactive role performance system 100 of FIG. 1 .
- FIG. 5A illustrates a frame from a media content clip processed by the media content processing system 103 .
- the frame depicts two actors: the first actor 505 is the target actor to be washed from the frame while the second actor 510 is retained in the frame.
- FIG. 5B illustrates the frame of FIG. 5A after the actor 505 has been washed from the scene.
- an outline graphic 515 is added to the washed content to depict the location of the washed actor.
- the retained actor 510 remains unchanged in the scene.
- Individual washed frames comprise the complete washed content scenes 104 stored on the prerecorded content database 101 .
- FIG. 5C illustrates a real-time feed of a user from a video recorder 110 superimposed over a washed content, wherein the user image 520 is added onto the scene.
- the user can use an outline graphic to position himself in the scene. That is, the user can move into a position such that the user is generally within the position of the washed actor as indicated by the outline graphic 515 .
- the video compositor 120 automatically positions the feed from the video recorder 110 in a frame such that an outline graphic is unnecessary by using previously recorded actor position data to determine where the user image is placed.
- FIG. 5D illustrates a frame from a completed combined video.
- the user 520 is inserted into the scene alongside the retained actor 510 .
- the completed combined video is displayed on the display 125 .
- the combined video can also be saved for future playback, or the combined video can be recreated from the washed scene and user content without saving the combined video.
- FIGS. 6A and 6B illustrate an exemplary matte layer created during the media content preparation process of FIG. 3 .
- FIG. 6A illustrates a matte layer created from the frame illustrated in FIG. 6B .
- the flight attendant 620 is part of the foreground scene and appears in front of the passenger 630 selected for the washing process.
- the matte creation can be performed by "tracing" the particular figure with a digital pointer, frame-by-frame, or by using other software means available to track and/or trace the elements.
- the resulting matte layer 610 can be either a moving or stationary video file used during playback of the washed content to delineate a foreground element of the original source content. Associating this matte with the real-time user image from the video recorder 110 essentially "blocks" the user's image where a foreground object, such as the flight attendant, covers the user image, and thereby creates the illusion that the user is positioned between background and foreground elements.
- With a moving matte layer, the foreground element can be kept in front of the participant's image even when the foreground element moves, such as when the flight attendant moves in front of the user.
- the resulting composition advantageously creates a more realistic, multi-dimensional interactive experience.
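The matte layering just described can be sketched per pixel as a three-layer stack: background at the bottom, the keyed user image above it, and the matte-delineated foreground element on top. This is an illustrative sketch under assumed inputs, not the patent's compositing code:

```python
# Hypothetical per-pixel sketch of the matte layering. matte_alpha is 1.0
# where the foreground object (e.g. the flight attendant) covers the frame
# and 0.0 elsewhere; user is None where the chroma key removed the user's
# backdrop, letting the washed background show through.

def composite_pixel(background, user, foreground, matte_alpha):
    """Layer order: background < user image < matte foreground."""
    base = user if user is not None else background
    # the foreground matte "blocks" the user image where it overlaps
    return foreground if matte_alpha >= 0.5 else base

print(composite_pixel("cabin", "user", "attendant", 1.0))  # foreground occludes user
print(composite_pixel("cabin", "user", "attendant", 0.0))  # user visible over cabin
print(composite_pixel("cabin", None, "attendant", 0.0))    # background shows through
```

A real implementation would blend with fractional alpha rather than threshold it, but the hard cutoff keeps the layering logic visible.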
- additional features can be employed that utilize components of the interactive role performance system 100 hosted and/or deployed in an online environment.
- one method of hosting the content online allows a party or user to control the storage, filtering, and/or distribution of the finished video output.
- a new video file is generated with the combined image of the user and the prerecorded content.
- This "output" file could be saved for later playback, shared online, or sold to the user as a DVD in a variety of fashions.
- While outputting the composition as a single, cohesive video stream is relatively efficient, certain problems can also arise with such an arrangement.
- certain systems and methods isolate the user's recorded performance from the prerecorded background throughout the entire process, such that the images are not combined, except visually during performance playback.
- the washed clip is not altered or re-recorded during a performance. Rather, the washed clip can be merely referenced again if a playback option is selected, then replayed in tandem with the user's overlay segment.
- the video files can be protected in the disclosed interactive role performance systems.
- the prerecorded background content and/or the recorded performance is stored in a non-standard video format such that it is unplayable by standard video players.
- the fact that the images are separate, or that the background content is an individual file, is concealed.
- the background and/or user media files are stored separately on the user's local system.
- One method is to lock each background content clip to a specific operating system, and/or render them non-transferable between systems. Another method is to make only the user file uploadable to a website for hosting and/or sharing, and render the background video unsharable.
- an online system runs an auto query each time an offline system becomes web enabled in order to register the software operating system and/or lock the content to that system.
- Another method is to use a dynamic URL for a website, and/or change it regularly.
- the uploaded clips are digitally "watermarked" in order to track their use should they be found outside controlled channels.
- combined content is stored only on a secure storage location, such as a controlled server, and only links or references to the protected content are allowed from programs or applets.
- the programs can stream the files from the secure storage location without saving a copy of the content.
- the programs are authorized by the secure storage location before access to the protected content is allowed.
- the user-generated content can be filtered in order to remove objectionable material.
- One approach is to establish nudity and/or profanity filters in the finished file upload process. During upload, each performance can be filtered in real time for nudity and/or profanity, and then assigned a numerical score based on its evaluation. Scores below a certain benchmark can be manually reviewed by screeners, and/or scores below a certain lower benchmark can be automatically rejected and discarded. Another approach can be a complete manual review of the user-generated content.
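The two-threshold triage just described can be sketched as follows. The thresholds and score scale are illustrative assumptions; the patent specifies only that scores below one benchmark get manual review and scores below a lower benchmark are rejected:

```python
# Hypothetical sketch of the upload triage: route each performance based on
# its content-filter score. The 0-100 scale and cutoffs are assumptions.

def triage(score, review_floor=40, reject_floor=20):
    """Scores below reject_floor are discarded automatically; scores below
    review_floor are queued for a human screener; the rest pass."""
    if score < reject_floor:
        return "reject"          # automatically rejected and discarded
    if score < review_floor:
        return "manual_review"   # manually reviewed by screeners
    return "accept"

print([triage(s) for s in (10, 30, 90)])  # ['reject', 'manual_review', 'accept']
```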
- One advantage to utilizing the Internet or other network as a platform is the ability to engage multiple users from multiple remote locations with multiple cameras in numerous forms of interaction.
- FIG. 7 illustrates an embodiment of a data flow diagram of an interactive role performance system configured to operate with multiple players in different geographic locations. For instance, a user in New York and a user in California can mutually or individually select a scene from STAR WARS to perform, such as with opposite roles.
- the California user selects the scene on his or her interactive role performance system.
- the California user selects the role of LUKE SKYWALKER for playing.
- the New York user selects the same scene on his or her interactive role performance system.
- the New York user chooses the role of DARTH VADER.
- the resulting composition is a single ensemble scene, even though the users are geographically distant.
- California user data and New York user data are combined to produce a single ensemble scene, wherein both participant images are combined in the same background scene.
- more complex media bundles and/or data files can also be quickly accessed and/or executed, making more intricate user experiences possible.
- The above multi-player effect, for instance, can require the use of additional background content bundles of completely washed scenes (see above), driven by data files or metadata that trigger the camera inputs from each respective user.
- the multi-camera use could also be executed such that a user in New York selects a previously performed clip posted by his friend in California, and decides to act opposite his friend after the fact.
- this process can require controlled switching of the California clip (where the user performed as LUKE SKYWALKER) with a washed content prepared for DARTH VADER in order to constitute the background for the new, live user image streaming from New York.
- These multi-player scenes can thus either be performed live by both parties, or live by one party and prerecorded by the other party. The users can also play either the opposite or the same character, and either replace characters or simply be inserted into the same scene. In some embodiments, three or more users can work together to create a single scene. Multi-camera, multi-location video games can also function well in this environment. It is understood that the interactive role performance system can also be used by multiple players in the same location (e.g., participants in the same living room).
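The multi-player composition described above can be sketched as stacking each remote user's keyed feed over one fully washed background. This is a schematic illustration under assumed inputs; layer ordering and data shapes are choices made for the sketch:

```python
# Hypothetical sketch of the multi-player ensemble frame: one fully washed
# background plus one keyed feed per remote role. A feed value of None means
# that user is keyed out (absent) in this frame.

def ensemble_frame(washed_frame, feeds):
    """feeds maps role name -> the pixels surviving that user's chroma key.
    Returns the layer stack for one combined frame, background first."""
    layers = [washed_frame]
    for role in sorted(feeds):                 # deterministic layer order
        if feeds[role] is not None:
            layers.append((role, feeds[role]))
    return layers

frame = ensemble_frame(
    "star_wars_bg",
    {"LUKE": "california_feed", "VADER": "new_york_feed"},
)
print(frame)
```

In the after-the-fact variant described above, one of the feeds would simply be a previously recorded overlay segment rather than a live camera stream.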
- the online environment can be a website for sharing combined video and/or buying additional washed content.
- the website allows users to share their combined videos with other viewers. Users can rate videos, allowing videos to be ranked based on popularity. Videos can also be ranked based on number of views, age, and/or other selection criteria. Users can compete in contests using their performances. Users can choose to share videos with select individuals or can choose to make videos publicly available to anyone. Users can also build social networks with each other.
- the website can comprise a home page which displays user information after the user logs in.
- User information can comprise messages, comments, invites, uploads, downloads, viewing statistics, and/or popularity of performances.
- the website can further comprise performance gallery pages where combined videos are displayed and where users may search for combined videos based on associated metadata.
- the website can further comprise store pages, where additional content may be purchased for the interactive role performance system 100 . The purchased content can then be downloaded to the interactive role performance system 100 .
- the Internet offers several advantages. These comprise, but are not limited to, the ability to generate and monetize script print-outs, teleprompters, and application text output for scripts and lyrics; the ability to generate a video introduction to be used to introduce emails and postings; the ability to select between output devices, including various computer platforms, various multimedia and mobile devices, set-top boxes, and video gaming consoles; the ability to download clips with embedded data files; the ability to perform clips with the use of an online interface; the ability to upload files into a sharing forum, vote on clips, share comments, feedback, and ranking information, and award prizes; the ability to select the sharing/playback information between private/public and limited/mass distribution options; the ability to select between output options and platforms; the ability to generate still frames and order customized products, such as T-shirts containing the generated still frames; the ability to utilize 3D rendering and avatar elements to enhance the production value; and the ability to use video and audio special effects before, during, or after a performance.
- the interactive role performance system 100 provides a user interface for the user to control the video compositing process.
- FIG. 8 illustrates an embodiment of a wireframe 800 of various pages of a video compositing interface.
- the interactive role performance system 100 provides a graphical user interface for the user to view and/or select washed scenes and/or combined video scenes.
- a cascade user interface can advantageously allow the user to view a plurality of scenes or data tiles on one screen (Block 805 ).
- the cascade interface comprises a plurality of rows and columns of images of scenes. The scenes can be still or static images and/or video clips.
- FIG. 9 illustrates an exemplary screen display of one embodiment of the cascade interface.
- the display 900 includes four columns and five rows of screen or data tiles arranged in a three dimensional array. Each of the tiles further includes a graphical representation of the media content that it represents, such as still images of movies.
- the illustrated bottom, front or first row position 905 displays the "closest" scenes or screen tiles to the user. Close scenes can be denoted by a color image (unless the scene is from a black-and-white movie), a larger size, and/or a title. Scenes on "farther" rows are progressively grayed out and/or smaller. The "closer" scenes partially overlay the subsequent "farther" scenes.
- Additional information can be superimposed on the image, such as the number of washed scenes, the run-time of scenes, the number of combined videos created using washed scenes from the movie 915, and/or the like.
- Scene ordering can be context based. For example, the most recently selected scenes can appear in the first row position 905, with less used scenes displayed on progressively farther rows.
- the interface is "focused" on the first row of data tiles; that is, the selected scene is one from the first row.
- Keystrokes or other user controls can send a selection command to the interface that can move the focus from one selected scene to another on the first row.
- Focus can be shifted to another row by moving the cascade and selecting a new first row and/or by using a mouse to select a clip on another row.
- up to 20 scenes can be displayed at one time.
- Other scenes are displayed by "rolling" or shifting the cascade.
- the first row position 905 consists of scenes 1-4,
- the second row position 907 consists of scenes 5-8, and so on, until the fifth row position 913 consists of scenes 17-20.
- Scenes above 20 are not displayed.
- the user can use an input device, such as a keyboard, keypad, touch screen, mouse, remote, and/or the like to send a navigation command to the interface to roll down the cascade.
- the first row of data tiles can be rolled or shifted out of the current selection, with the second row of scenes 5-8 appearing in the first or front row position 905.
- Subsequent rows move to "closer" row positions.
- a new fifth row with scenes 21-24 appears in the furthest, end or back row position 913.
- the cascade can be rolled until the undisplayed scenes are sequentially displayed to the user.
- the cascade can stop rolling once the last scene is displayed, or it can loop back to the initial first row, with scenes 1-4 appearing in the fifth row position 913, allowing the user to keep rolling the cascade and repeating the display of the scenes.
- the cascade can also be rolled up, with new scenes appearing as the closest row 905 instead of the farthest or end row 913 . It is understood that fewer or greater number of scenes can be displayed by using fewer or greater numbers of rows and/or columns. In certain embodiments, more than four columns can be displayed. In some embodiments, less than four columns can be displayed. In certain embodiments, more than five rows can be displayed. In some embodiments, fewer than five rows can be displayed. The number of rows and columns used can depend on the number of scenes to be displayed on a single screen.
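The row arithmetic behind the rolling cascade described above can be sketched as follows. This is an illustrative sketch only; the function name and the modular wrap-around are assumptions consistent with the 4-column, 5-row, 20-tile example in the text:

```python
# Hypothetical sketch of the cascade's rolling index math: which scene
# numbers occupy each row after the cascade has been rolled so that
# `top_scene` (1-based) starts the front row.

def visible_rows(total_scenes, top_scene, rows=5, cols=4, loop=True):
    grid = []
    for r in range(rows):
        start = top_scene - 1 + r * cols
        row = []
        for c in range(cols):
            idx = start + c
            if loop:
                idx %= total_scenes        # wrap past the last scene (looping)
            elif idx >= total_scenes:
                break                      # or stop once the last scene shows
            row.append(idx + 1)
        if row:
            grid.append(row)
    return grid

# 24 scenes, rolled down once: front row shows 5-8, back row 21-24
print(visible_rows(24, 5))
```

With `loop=True` this reproduces the looping behavior described above, where scenes 1-4 reappear in the back row position after the last scene has been shown.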
- filters can further be applied to the scenes such that only certain categories of scenes are displayed.
- selectable filters 930 , 935 are displayed at the top of the cascade interface.
- Scenes can be filtered based on categories such as available movie clips, movie content ratings (e.g., "G," "PG," "R," etc.), and/or performances of combined videos. Scenes can also be filtered based on categories such as movies, TV, commercials, sports, emotives, combinations of the above, or the like.
- a search bar can also allow the user to search for specific scenes. Searches can be based on actors, movie titles, scene names, descriptions, and/or the like.
- FIG. 10 illustrates an exemplary screen display of one embodiment of the movement and selection process of the cascade interface of FIG. 9 .
- the user can roll down the cascade, causing new images to be displayed.
- the mouse pointer changes to a gripping hand, indicating the user has grabbed the cascade and can now roll the cascade.
- Dragging up can roll the cascade up, while dragging down can roll the cascade down.
- the cascade can roll through multiple rows depending on how far the user moves the mouse.
- the displayed scenes appear in the normal cascade configuration of FIG. 9 .
- the user can then select an image.
- other input devices can be used to control the cascade, including, but not limited to, a keyboard, arrow keys, a mouse, a remote control, a touch pad, or the like.
- a selected image can display a play icon so that the user can play the scene corresponding to the image.
- the select screen of FIG. 10 illustrates one embodiment where selection converts the image to a video clip so that the movie scene is played in the cascade. In some embodiments, hovering a cursor over the scene can cause the scene to automatically play. Selecting a scene can also cause the cascade interface to proceed to another screen, such as the performance screen at Block 810 in FIG. 8, which displays the washed content from the movie and the selectable actors. Selecting a row and/or clip can also cause the cascade to "fold down" into a single row, with the farther rows being folded into the first row, simulating a stack of cards or a ROLODEX.
- the cascade can operate in various manners and is not limited to displaying scenes.
- the cascade could display the closest images in the top row instead of the bottom row.
- the cascade could be rolled horizontally instead of vertically.
- the user could use a keyboard, touch screen, keypad and/or remote to move the cascade.
- the user could select the number of rows and columns that make up the cascade.
- the user could re-order the images by moving images into different positions. Closer and farther images could be indicated using other visual cues or no cues could be used.
- the cascade could be used to display titles, DVD covers, album covers, photographs, icons, and/or other images.
- FIG. 11 illustrates an exemplary screen display of one embodiment of the performance screen.
- a cascade interface 1105 displays the available washed content from the selected movie.
- the cascade interface 1105 of FIG. 11 can behave similarly to the cascade interfaces of FIGS. 9 and 10 .
- a large display window 1110 can display the washed content scene in a higher resolution.
- Scene information 1115 associated with the washed content can also be displayed, and may comprise, for example, editable title, description, category, and/or tags associated with the washed content.
- the wireframe 800 proceeds to a role selection screen at Block 815 .
- the role selection screen allows a user to select an actor to play, to be added in the scene as an extra, and/or to select a completely washed clip where no actors are left.
- FIG. 12 illustrates one embodiment of the role selection screen.
- the user can choose to display a larger view of the display window when viewing a scene.
- FIG. 13 illustrates one embodiment of a large screen view of the display window.
- FIG. 14 illustrates an exemplary screen display of one embodiment of a script printing screen.
- the script can be provided as a PDF, text, Word document, image, and/or other file type.
- FIG. 15 illustrates an exemplary screen display of one embodiment of the camera setup screen.
- the instructions can comprise positioning information of the user relative to the camera, green screen and/or monitor.
- the camera can capture a reference frame of the scene.
- FIG. 16 illustrates an exemplary screen display of one embodiment of the reference frame setup. The user can be instructed to step out of the scene and press record to capture a reference frame of the background.
- the camera auto focus and/or white balance may be turned off to get a more consistent background image.
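The reference-frame setup described above supports a simple form of keying without a green screen: pixels in the live feed that match the captured empty-background frame are treated as transparent. The following is a hedged, toy-scale sketch of that idea, not the patent's method; frames are flattened to grayscale lists and the threshold is an assumption:

```python
# Hypothetical sketch of difference keying against the captured reference
# frame. Locking focus/white balance matters because any drift in the
# background pixels would defeat the comparison below.

def difference_key(reference, live, threshold=30):
    """Keep only live pixels that differ enough from the reference
    background frame; None marks transparent (background) pixels."""
    return [
        pixel if abs(pixel - ref) > threshold else None
        for pixel, ref in zip(live, reference)
    ]

reference = [100, 100, 100, 100]        # empty background, user stepped out
live      = [102, 180, 95, 240]         # user now standing in frame
print(difference_key(reference, live))  # [None, 180, None, 240]
```

The surviving (non-None) pixels are the user image that would then be overlaid on the washed content during the compositing process.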
- the wireframe 800 moves to a record video screen, wherein the participant records a video of himself or herself to be combined with the washed content.
- the video combining process can include the compositing process 200 described above with reference to FIG. 2.
- another role can be selected, allowing one participant to play multiple roles or more than one participant to play roles in the same scene.
- FIG. 17 illustrates an exemplary screen display of one embodiment of an add introduction screen.
- a cascade displays available backgrounds.
- the background can be a message, advertisement, product placement, logo, still image, combinations of the above, or the like.
- a display window shows a larger image of the selected background.
- the user can record an introduction using a process similar to the video compositing process 200 described in FIG. 2 .
- the user can add metadata to the introduction, such as title, description, category, and/or tags.
- the user can upload the video to a central storage for sharing, such as a website.
- FIGS. 18-20 illustrate exemplary screen displays of one embodiment of the setting screens.
- the user can determine recording settings, user account settings, and/or parental control settings.
- the interactive role performance system 100 can be provided in a self-contained, mobile unit.
- the mobile unit can be a movable kiosk, an automobile, and/or a portable device.
- the mobile units can be set up at college campuses, high schools, movie theaters, retailers and/or other public venues. Users can use the mobile units to create performances without having to purchase their own system.
- the interactive role performance system 100 is provided in a mobile device, such as a laptop, PDA, cell phone, smart phone, or the like.
- the mobile device can be used to view, preview, and/or record media content.
- the mobile device is connected to an online content database from which the mobile device can upload participant performances and download washed content and other users' performances.
- the interactive role performance system 100 can be provided as a package comprising a green screen, a stand for the screen, a USB camera, a camera hook or clip, a remote, a tripod, and/or a CD or DVD containing software implementing the functions of the interactive role performance system and a selection of prerecorded content.
- systems and methods disclosed herein can be advantageously used with the video compositing systems and method disclosed in U.S. Pat. No. 7,528,890, issued May 5, 2009, which is hereby incorporated herein by reference to be considered part of this specification.
- the interactive role performance system 100 can be used in a gaming system.
- a gamer can use the interactive role performance system 100 to record his actions and insert them into a game.
- the game could be a music video game where the gamer is playing a musical instrument.
- the gamer's image could be recorded and inserted into the game as a band member playing a song onstage.
- the gamer could also be inserted into a music video for the song that the gamer is playing.
- the interactive role performance system 100 can be used in other types of games, such as a movie making game, a fighting game, and/or a role playing game.
- the system can be used in a variety of markets or distribution channels, such as education, airlines, prisons, or for gaming, dating, corporate training, education, professional services, and/or entertainment use, in either the U.S. or internationally. It can be used for advertising or promotions, product placement, viral marketing, on-line sharing, contests, surveys, consumer products, affiliate programs, clothing and apparel, still photographs, avatars, greeting cards, mash-ups, hardware, software, or licensing.
- the content may be, but is not limited to film, television, music, music videos, documentaries, news, sports, video games, original content, user-generated content, licensed content, royalty free content, any pre-existing moving image or graphic content, still images, digital avatars, and/or online content.
- a user can replace a sports commentator in a sports clip and provide alternate commentary, giving his own analysis and/or opinion of the game.
- the content may or may not include audio, dialogue, and/or effects.
- the content can be in English or any other language.
- the user experience might include, but would not be limited to, a keyboard, mouse, manual, or remote user interface, the use of a wired or wireless webcam, camera positioning via manual or digital means, sound recording by means of one or more wired, wireless, or built-in microphones, accessories such as props, costumes, a colored green screen with or without a stand, no green screen, coin-operated kiosks with or without an operator or operators, automated interface navigation with manual or automatic data entry, automated demos, tutorials, and explanations, any type of compositing, with or without a chroma key, and/or any type of output on any platform.
- the systems and methods described herein can advantageously be implemented using computer software, hardware, firmware, or any combination of software, hardware, and firmware.
- the system is implemented as a number of software modules that comprise computer executable code for performing the functions described herein.
- the computer-executable code is executed on one or more general purpose computers.
- any module that can be implemented using software to be executed on a general purpose computer can also be implemented using a different combination of hardware, software or firmware.
- such a module can be implemented completely in hardware using a combination of integrated circuits.
- such a module can be implemented completely or partially using specialized computers designed to perform the particular functions described herein rather than by general purpose computers.
- These computer program instructions can be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the acts specified herein.
- the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the acts specified herein.
Abstract
Content preparation systems and methods are disclosed that generate scenes used by an interactive role performance system for inserting a user image as a character in the scene. Original media content from a variety of sources, such as movies, television, and commercials, can provide participants with a wide variety of scenes and roles. In some examples, the content preparation system removes an original character from the selected media content and recreates the background to enable an image of a user to be inserted therein. By recreating the background after removing the character, the user is given greater freedom to perform, as the image of the user can perform anywhere within the scene. Moreover, systems and methods can generate and store metadata associated with the modified media content that facilitates the combining of the modified media content and the user image to replace the removed character image.
Description
- This application claims the benefit of priority under 35 U.S.C. §119(e) of U.S. Provisional Patent Application No. 61/077,363, filed Jul. 1, 2008, and entitled "INTERACTIVE SYSTEMS AND METHODS FOR VIDEO COMPOSITING," and U.S. Provisional Patent Application No. 61/144,383, filed Jan. 13, 2009, and entitled "INTERACTIVE SYSTEMS AND METHODS FOR VIDEO COMPOSITING," the entirety of each of which is hereby incorporated herein by reference to be considered part of this specification.
- The present application is also related to the following applications filed on even date herewith, each of which is hereby incorporated herein by reference in its entirety to be considered part of this specification: U.S. patent application No. ##/###,###, entitled "INTERACTIVE SYSTEMS AND METHODS FOR VIDEO COMPOSITING" (Attorney Docket: YOOSTR.012A2); and U.S. patent application No. ##/###,###, entitled "USER INTERFACE SYSTEMS AND METHODS FOR INTERACTIVE VIDEO SYSTEMS" (Attorney Docket: YOOSTR.012A3).
- 1. Field
- Embodiments of the invention generally relate to interactive systems and methods for performing video compositing in an entertainment environment.
- 2. Description of the Related Art
- Interactive entertainment is a popular leisure activity for people across the globe. One favorite activity for many is karaoke, which temporarily turns lay persons into "stars" as they sing the lyrics to a favorite song. Karaoke machines play the music of a selected song while simultaneously displaying the song lyrics to a user.
- Another favorite leisure activity for millions is watching movies. Billions of dollars are spent each year on movie purchases and rentals for home use. Home movie watching, however, has predominantly been a passive activity, wherein there is little if any viewer interaction. Furthermore, although one may watch the same movie repeatedly, each time the same characters appear and recite the same lines and perform the same actions.
- In view of the foregoing, a need exists for interactive systems and methods for video compositing allowing a more seamless integration with existing video scenes. Moreover, there is a need for systems and methods that can provide a real-time output of a combined video. Further, there is a need for systems and methods that users can operate with little skill or experience. Finally, there is a need for systems and methods that can generate media content, such as content wherein a character has been removed, for interactive role performance systems.
- In certain embodiments, an interactive role performance system allows users to select a role to play in a movie scene and replace the original actor of that role with their own performance. Using the interactive role performance system, if a participant wants to reenact scenes from a favorite movie, the participant can select a scene from that movie, record his or her own performance, and the system inserts that performance in place of the original character, creating the appearance that the participant is interacting with the other characters in the movie scene. For example, if a participant wants to reenact a scene from STAR WARS, he can record his own performance as LUKE SKYWALKER and that performance is combined into the scene in place of the actor's (e.g., Mark Hamill) performance.
- In some embodiments, a content preparation system is used to generate the scenes used by the interactive role performance system. Original media content from a variety of sources, such as movies, television, and commercials, can be used to provide participants with a wide variety of scenes and roles. The content preparation system takes an original media content, removes a character from the content, and recreates the background. By recreating the background after removing the character, the user is given greater freedom to perform as the user can perform anywhere within the scene. For example, a scene from STAR WARS is generated by removing the LUKE SKYWALKER character from the scene, and recreating the background behind LUKE SKYWALKER, leaving a clear, recreated background where the participant's performance can be inserted.
- In certain embodiments, a method is disclosed for preparing media content for use with a video image combining system. The method includes receiving original video content comprising multiple frames having a plurality of original characters associated therewith and selecting particular frames of the multiple frames displaying at least one of the plurality of original characters. For each of the particular frames displaying the at least one original character, the method comprises receiving the particular frame, wherein the particular frame displays a background image in which the at least one original character occupies a position therein, and modifying the particular frame to erase the at least one original character, wherein the modifying comprises digitally removing the at least one character by extending the background image of the particular frame to fill the position of the at least one original character to allow for subsequent insertion of a replacement character in the position. The method further comprises combining the modified particular frames with remaining frames of the multiple frames to create modified video content and generating metadata associated with the modified video content, the metadata being configured to direct the subsequent insertion of the replacement character into the modified video content, the metadata indicating at least: a first frame and a last frame of the particular frames and the position the at least one original character occupied in the original video content.
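The wash-and-record flow recited above can be sketched in a few lines. The sketch below is only an illustration of the idea, not the patent's implementation: it erases a character from a rectangular region by replicating an adjacent background column (actual content preparation would use hand-painted cleanup or inpainting), and it records the first and last washed frames and the character positions as metadata. All function and field names are hypothetical.

```python
import numpy as np

def wash_frame(frame, char_box):
    """Erase the character inside char_box (x0, y0, x1, y1) by
    extending the background over the character's position.
    A crude nearest-column fill; assumes x0 > 0."""
    washed = frame.copy()
    x0, y0, x1, y1 = char_box
    # Replicate the background column just left of the character
    # region across the erased area (broadcasts over the width).
    fill_column = washed[y0:y1, x0 - 1:x0]
    washed[y0:y1, x0:x1] = fill_column
    return washed

def wash_clip(frames, char_boxes):
    """Wash only the frames in which the character appears, and
    record metadata directing the later re-insertion of a
    replacement character."""
    washed_frames = []
    first = last = None
    for i, frame in enumerate(frames):
        box = char_boxes.get(i)
        if box is None:
            washed_frames.append(frame)  # frame left untouched
            continue
        if first is None:
            first = i
        last = i
        washed_frames.append(wash_frame(frame, box))
    metadata = {"first_frame": first, "last_frame": last,
                "positions": char_boxes}
    return washed_frames, metadata
```

The per-frame dictionary of character boxes stands in for whatever tracking or rotoscoping data a real editing workflow would produce.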
- In some embodiments, a system is disclosed for preparing media content for use with a video image combining system. The system comprises a database, an editing module and a processing module. The database is configured to store original video content, the original video content comprising multiple frames having a plurality of original characters associated therewith. The editing module is configured to execute on a computing device and is further configured to: extract consecutive select frames of the multiple frames that display at least one of the plurality of original characters within a background image; modify the select frames to remove the at least one original character, wherein the modifying comprises extending the background image in each of the select frames over a position of the at least one original character; and arrange the modified select frames with other frames of the multiple frames to generate modified video content. The processing module is configured to generate metadata associated with the modified video content to coordinate a subsequent combination of a replacement character image with the modified video content, the metadata further comprising: first data identifying at least a first frame and a last frame of the select frames; and second data indicating the position of the at least one original character in the original video content.
- In certain embodiments, a system is disclosed for preparing media content for use in interactive video entertainment. The system comprises: means for receiving original video content comprising multiple frames having an original character associated therewith; means for selecting particular frames of the multiple frames displaying at least the original character within a background image; means for modifying the particular frames to remove the original character by extending the background image to replace the original character and to allow for subsequent real-time insertion of a replacement character; means for combining the modified particular frames with remaining frames of the multiple frames to create modified video content; and means for generating metadata associated with the modified video content and usable for the subsequent real-time insertion of the replacement character, the metadata indicating at least, a first frame and a last frame of the particular frames, and a position of the original character within the particular frames of the original video content.
- In some embodiments, a computer-readable medium is disclosed for an interactive video system. The computer-readable medium comprises: modified media content comprising a first plurality of frames representing original video content having a background video image, and a second plurality of consecutive frames representing modified original video content having the background video image from which an image of at least one original character has been replaced by a continuation of the background video image over a position of the at least one original character. The computer-readable medium also comprises metadata associated with the modified media content, the metadata comprising first data indicating a beginning frame and an end frame of the second plurality of consecutive frames and second data indicating the position of the at least one original character.
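On the playback side, a compositor could consume that metadata to decide, frame by frame, whether the replacement performance needs to be inserted. The following sketch assumes a simple metadata shape with hypothetical field names; the disclosure does not prescribe a concrete format.

```python
from dataclasses import dataclass

@dataclass
class WashMetadata:
    """Illustrative shape of the stored metadata; the field names
    are assumptions, not the patent's actual file format."""
    first_frame: int
    last_frame: int
    position: tuple  # (x, y) the original character occupied

def frames_needing_insertion(meta, total_frames):
    """Yield (index, insert?) pairs so a compositor knows which
    frames of the modified content take the replacement image."""
    for i in range(total_frames):
        yield i, meta.first_frame <= i <= meta.last_frame

meta = WashMetadata(first_frame=2, last_frame=4, position=(120, 80))
plan = [insert for _, insert in frames_needing_insertion(meta, 7)]
```

Frames outside the washed span play back unmodified, which is why the metadata only needs the beginning and end of the consecutive washed frames.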
- In yet other embodiments, the above-described system and methods can comprise original video or media content including a single original character and/or metadata that does not include information that identifies the position of the original character.
- Furthermore, in certain embodiments, the systems and methods summarized above can advantageously be implemented using computer software. In one embodiment, the system is implemented as a number of software modules that comprise computer executable code for performing the functions described herein. However, a skilled artisan will appreciate that any module that can be implemented using software to be executed on a general purpose computer can also be implemented using a different combination of hardware, software, and/or firmware.
- For purposes of summarizing the disclosure, certain aspects, advantages and novel features of the inventions have been described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the invention. Thus, the invention may be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.
- The drawings, associated descriptions, and specific implementations are provided to illustrate embodiments of the invention and not to limit the scope of the disclosure. In addition, methods and functions described herein are not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state.
- FIG. 1 illustrates an exemplary embodiment of an interactive role performance system according to certain embodiments of the invention.
- FIG. 2 illustrates a flowchart of an exemplary embodiment of a video compositing process according to certain embodiments of the invention.
- FIG. 3 illustrates a flowchart of an exemplary embodiment of a media content preparation process according to certain embodiments of the invention.
- FIGS. 4A-4B illustrate alternative embodiments of the media content preparation process of FIG. 3.
- FIGS. 5A-5D illustrate a frame of media content during various phases of the content preparation process in which a single actor is washed out of the scene.
- FIGS. 6A-6B illustrate an exemplary matte layer created during the media content preparation process of FIG. 3.
- FIG. 7 illustrates an embodiment of a data flow diagram of an interactive role performance system configured to operate with multiple players in different geographic locations.
- FIG. 8 illustrates an embodiment of a wireframe for a video compositing interface of the interactive role performance system of FIG. 1.
- FIG. 9 illustrates an exemplary screen display of one embodiment of a cascade interface for a video compositing interface.
- FIG. 10 illustrates an exemplary screen display of one embodiment of the movement and selection process of the cascade interface of FIG. 9.
- FIG. 11 illustrates an exemplary screen display of one embodiment of a performance screen of a video compositing interface.
- FIG. 12 illustrates an exemplary screen display of one embodiment of the role selection screen of a video compositing interface.
- FIG. 13 illustrates an exemplary screen display of one embodiment of a large screen view of a display window of a video compositing interface.
- FIG. 14 illustrates an exemplary screen display of one embodiment of a script printing screen of a video compositing interface.
- FIG. 15 illustrates an exemplary screen display of one embodiment of the camera setup screen of a video compositing interface.
- FIG. 16 illustrates an exemplary screen display of one embodiment of a reference frame setup screen of a video compositing interface.
- FIG. 17 illustrates an exemplary screen display of one embodiment of an add introduction screen of a video compositing interface.
- FIGS. 18-20 illustrate exemplary screen displays of one embodiment of the setting screens of a video compositing interface.
- Certain interactive role performance systems and methods are disclosed herein that allow users to select a role to play in a movie scene and replace an original actor of that role with their own performance. In certain embodiments, if a participant wants to reenact scenes from a favorite movie, the participant can select a scene from that movie, record his or her own performance, and the interactive role performance system inserts that performance in place of the original character, creating the appearance that the participant is interacting with the other characters in the movie scene.
- In some embodiments, content preparation systems and methods are provided that generate the scenes used by the interactive role performance system. Original media content from a variety of sources, such as movies, television, and commercials, can be used to provide participants with a wide variety of scenes and roles. In some embodiments, the content preparation system takes original media content, removes a character from the content, and recreates the background. By recreating the background after removing the character, the participant is given greater freedom to perform as the user can perform anywhere within the scene.
- The features of the systems and methods will now be described with reference to the drawings summarized above. Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. The drawings, associated descriptions, and specific implementations are provided to illustrate embodiments of the invention and not to limit the scope of the disclosure.
- For purposes of illustration, some embodiments will be described in the context of video formats and movie scenes. However, the present disclosure is not limited by the source of the media content, and other media content sources may be used, such as, for example, video games, animation, sports clips, newscasts, music videos, commercials, television, documentaries, combinations of the same or the like. Neither is the present disclosure limited by the format of the media content, and other formats may be used, such as, for example, still images, computer generated graphics, posters, music, three-dimensional (3D) images, holograms, combinations of the above or the like. It is also recognized that in other embodiments, the systems and methods may be implemented as a single module and/or implemented in conjunction with a variety of other modules and the like. Moreover, the specific implementations described herein are set forth in order to illustrate, and not to limit, the invention.
- Conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that some embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
- The terms "actor" or "character" as used herein are broad terms and are used in their ordinary sense and include, without limitation, any replaceable element in a media content, such as still or video content. For example, an "actor" or "character" can be a person (live or animated), an animal, an avatar, a computer-generated character, a game character, a cartoon character, and/or a thing.
- The terms "video," "scene," "clip," "image," and "content" are broad terms and are used in their ordinary sense and include, without limitation, any type of media content. For example, media content can include pictures, videos, film, television, documentaries, commercials, sports, music, music videos, games, posters, original content, user-generated content, licensed content, royalty free content, any pre-existing moving image or graphic content, still images, digital avatars, online content, combinations of the above, or the like. The media content may or may not include audio, dialogue, and/or effects. The media content can be in English or any other language.
- The term "compositing" as used herein is a broad term and is used in its ordinary sense and includes, without limitation, the superimposing or combining of multiple signals, such as, for example, video and/or audio signals, to form a combined signal or display. Furthermore, compositing does not require two signals and/or video images to be stored as a single signal, file and/or image. Rather, "compositing" can include the simultaneous, or substantially simultaneous, playing of two or more signals (for example, video files) such that the signals are output via a single display or interface. The term "compositor" refers to any device or system, implemented in hardware, software, or firmware, or any combination thereof, that performs in whole or in part a compositing function.
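In the sense used here, compositing can be as simple as a per-pixel blend of two simultaneously played signals. A minimal sketch, assuming an alpha value derived from a matte layer or chroma key (the function name and conventions are illustrative, not from the disclosure):

```python
def composite_pixel(user, background, matte_alpha):
    """Blend one pixel of the user feed over the washed background.
    matte_alpha is 0.0 where the background (or an overlaying matte)
    should win and 1.0 where the user image should show through."""
    return tuple(
        round(matte_alpha * u + (1.0 - matte_alpha) * b)
        for u, b in zip(user, background)
    )
```

Applied per pixel per frame during playback, this produces a combined display without ever writing the two sources into a single stored file, which matches the definition's point that compositing does not require combined storage.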
- The term "real time" as used herein is a broad term and is used in its ordinary sense and includes, without limitation, a state or period of time during which some event or response takes place. A real-time system or application can produce a response to a particular stimulus or input without intentional delay such that the response is generated during, or shortly after, receipt of the stimulus or input. For example, a device processing data in real time may process the data as it is received by the device.
- Moreover, a real-time signal is one that is capable of being displayed, played back, or processed within a particular time after being received or captured by a particular device or system, wherein said particular time can include non-intentional delay(s). In one embodiment, this particular time is on the order of one millisecond. In other embodiments, the particular time may be more or less than one millisecond. In yet other embodiments, "real time" refers to events simulated at a speed similar to the speed at which the events would occur in real life.
- The term "database" as used herein is a broad term and is used in its ordinary sense and includes, without limitation, any data source. A database may be implemented using a relational database, such as DB2, Sybase, Oracle, CodeBase, and MICROSOFT SQL SERVER, as well as other types of databases such as, for example, a flat file database, an entity-relationship database, an object-oriented database, and/or a record-based database. A database may also be one or more files stored on a storage device, such as a hard drive or optical device.
- The term "metadata" as used herein is a broad term and is used in its ordinary sense and includes, without limitation, any information associated with a media content. For example, the information can comprise control data providing an interactive role performance system with directions on how to process the media content, or the information can be descriptive information identifying the media content. The metadata can comprise in and/or out points of characters, actions in the original scene, audio levels, camera movement, switching, positions, zoom, pan, camera control signals, lighting information, color and hue information, titles, descriptions, category, tags, combinations of the same or the like. Metadata can be recorded in a text document, database, eXtensible Markup Language (XML) file, and/or embedded within the washed or customized content.
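As one possible container for such metadata, an XML file could record a character's in/out points and position. The element and attribute names below are hypothetical; the disclosure names XML only as one option alongside text documents, databases, and embedding in the content itself.

```python
import xml.etree.ElementTree as ET

def build_metadata_xml(clip_id, first, last, x, y):
    """Serialize illustrative wash metadata for one clip.
    Schema is an assumption, not the patent's actual format."""
    root = ET.Element("washedClip", id=clip_id)
    character = ET.SubElement(root, "character")
    ET.SubElement(character, "inPoint").text = str(first)
    ET.SubElement(character, "outPoint").text = str(last)
    # Position the original character occupied, for re-insertion.
    ET.SubElement(character, "position", x=str(x), y=str(y))
    return ET.tostring(root, encoding="unicode")

xml_doc = build_metadata_xml("example-scene-12", 48, 311, 240, 160)
```

A playback system would parse this to learn which frames are washed and where to place the replacement performance.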
- FIG. 1 illustrates an exemplary embodiment of an interactive role performance system 100 according to certain embodiments of the invention. In certain embodiments, the interactive role performance system 100 is configured to selectively insert an image of one or more users into prerecorded media, such as a movie. In some embodiments, the image of the one or more users is recorded and/or inserted in real time. - Referring to
FIG. 1, the prerecorded content database 101 stores "washed" media content 104, or content wherein an actor or character has been removed for replacement, and/or metadata 105. During content development, a video source 102 receives, processes, and/or stores media content received from a studio, comprising the source media files. A media content processing system 103 prepares or "washes" media content clips of actors and/or objects and creates corresponding metadata 105 for the washed media content 104. The completed washed content 104 is then sent to the prerecorded content database 101. - From the
content database 101, the washed media content 104 is available for use in the video compositing process. The video recorder 110 captures an image and/or video of the user. The feed from the video recorder 110 is sent to the video compositor 120 and/or an optional user content database 115 for storage. The video compositor 120 accesses the washed content 104 stored on the prerecorded content database 101 and combines the washed content with the feed from the video recorder 110. The final combined output is shown on the display 125. - As illustrated, the interactive
role performance system 100 comprises a prerecorded content database 101. The database 101 comprises data, video files, audio files, metadata, and/or other information usable to control the video compositing process. For instance, the washed media content 104 of the database 101 can comprise one or more video clips, such as movie clips comprising video and/or audio content, usable in the background of a combined video image. In certain embodiments, the media content 104 comprises washed content, as described in more detail below, wherein a character or other object has been removed from the media content 104. In other embodiments, the media content 104 can comprise unaltered video scenes from a movie or other audiovisual work. In certain embodiments, the video content comprises a QUICKTIME file, an MPEG file, a WMA file, a WMV file, an MP4 file, an MKV file, a JPEG file, and/or the like. - The
database 101 can also comprise one or more matte files, as described in more detail below, usable to overlay an inserted user image. In certain embodiments, each matte file can be associated with one or more particular background clips. In other embodiments, the matte files can be integrated with the background clips. - Referring to
FIG. 1, the database 101 comprises metadata 105 usable to control the selective combination of one or more character images with the background media content 104 and/or matte files and/or to control the display of such images. For instance, such metadata 105 can comprise reference data files to control the display and/or removal of user images, subtitle information, color/hue reference information (for example, color or black and white output), information for moving matte files, resize data, movie poster-frame information for producing still-frame movie posters, combinations of the same or the like. In some embodiments, the metadata 105 comprises descriptive information associated with the video content, such as actor information, key art, studio logos, titles, combinations of the same or the like. - The
database 101 comprises any type of media device(s) and/or memory capable of storing the above-described information. For instance, the database 101 can comprise one or more of the following: servers, hard drives, personal computers, DVDs, optical disks, flash memory, USB storage devices, thumb drives, tapes, magnetic disks, combinations of the same or the like. Moreover, the database 101 can comprise multiple databases located remote to each other. - In certain embodiments, the interactive
role performance system 100 further comprises a video source 102 and/or a media content processing system 103. These components can be used during content development to process original source content to produce washed content to be stored in the prerecorded content database. After washing the source content, some or all of the washed content can be, in certain embodiments, stored in the prerecorded content database as the washed media content 104. - In certain embodiments, the
video source 102 can comprise one or more computers, workstations, servers, combinations of the same or the like for processing and/or storing original source content. Source media from studios can be acquired in a variety of formats, such as digibeta tapes, digital files, DVDs, video tapes, and/or the like. Source media from the studios can be "ingested" by the video source 102 to create a copy for use in the washing process and/or formatted into an uncompressed digital file. In some embodiments, the digital files are stored on a redundant array of hard drives connected to the video source directly or through a network. - In certain embodiments, a playback machine, such as a digibeta playback deck, DVD player, video player, and/or the like, may be used to play back the source media with the video source ingesting the output. In certain embodiments, the
video source 102 can further comprise a "helper" video card with Serial Digital Interface (SDI) and/or Audio Engineering Society (AES3) digital audio inputs/outputs to assist the video source 102 in processing media content. In certain embodiments, the video source stores digital files in a database. One or more hard drives can be used for storing the master source. Additionally, master files can be backed up on tape archival systems and/or stored on additional hard drives. Finished washed content can then be copied to one or more prerecorded content databases 101 as media content 104 for distribution to and/or use by participants. - In certain embodiments, the media
content processing system 103 processes the video source 102 digital files. The media content processing system 103 can comprise, for example, computer workstations equipped with speakers and/or headphones, video/audio editing software, timecode readers/generators, and/or house sync boxes. In some embodiments, the editing software comprises FINAL CUT PRO, PHOTOSHOP, AFTER EFFECTS, ADOBE AUDITION, SOUNDTRACK PRO, and/or the like. Operators can use the workstations and/or editing software to wash selected elements from individual frames of scenes in the digital files. Operators can further use the workstations and/or editing software to recreate the backgrounds behind the washed elements of the scenes. In some embodiments, the media content processing system 103 further comprises a database and workflow manager to check the accuracy of information, provide accessibility for production management, track financial/royalty requirements, and/or provide archival security. - Referring to
FIG. 1, the interactive role performance system 100 further comprises a video recorder 110, such as, for example, a digital video camera, a web camera, a smart phone camera, combinations of the same or the like, for obtaining one or more images to be inserted into the combined video image. In certain embodiments, the video recorder 110 obtains a real-time video image of a participant that is selectively "inserted" into a scene of the media content 104 from the prerecorded content database 101 to produce a real-time, interactive video image at a display. - In certain embodiments, the
video recorder 110 further comprises, or is associated with, a video content processor that modifies a video image obtained through the recorder 110. For instance, such embodiments of the video recorder 110 can be used with a green screen, a blue screen, and/or other similar chroma-key equipment to prepare the obtained video image for compositing with the media content 104. - In other embodiments, a video image captured through the
video recorder 110 can be digitally modified to remove certain portions of the captured image. For example, background subtraction techniques, as discussed in more detail herein, can be used to isolate a foreground element, such as an image of the user. In certain embodiments, the video recorder 110 further comprises a wired or wireless microphone, a remote control, a tripod, combinations of the same or the like. Multiple video recorders located together or remotely from each other can be used to capture multiple participants and/or multiple angles. In some embodiments, the video recorder 110 captures images in 3D and/or infrared formats. - The illustrated interactive
role performance system 100 can also comprise an optional user content database 115. In certain embodiments, the user content database 115 stores video and/or audio data captured through the video recorder 110. Such data can be stored directly to the user content database 115 and/or can be further processed, as discussed herein, prior to such storage. Examples of such processing include, but are not limited to, removing, replacing, and/or enhancing video elements, such as music, vocals, score, sound effects, special effects, combinations of the same or the like. The user content database 115, in certain embodiments, allows for the repeated and/or later playing of a combined video by storing a participant's "performance." In other embodiments, the user content database 115 stores avatar or other computer-generated images of a participant or other character for use in the interactive role performance system 100. - In certain embodiments, the
prerecorded content database 101 and/or the user content database 115 can communicate with other components and/or modules of the interactive role performance system 100 via a wired or wireless network, such as, for example, a local area network, a wide area network, the Internet, an intranet, a fiber optic network, combinations of the same or the like. - As illustrated in
FIG. 1, the interactive role performance system 100 further comprises a video compositor module 120 configured to combine video images received from the prerecorded content database 101 with video images from the video recorder 110 and/or the user content database 115. In certain embodiments, the video compositor 120 comprises at least a processor and a memory. Such memory can include, for example, SDRAM, EEPROM, flash, non-volatile memory, volatile memory, a hard drive, an optical drive, combinations of the above, or the like. In certain embodiments, the video compositor 120 further comprises a graphics processing unit (GPU). - In certain embodiments, the
video compositor module 120 advantageously combines and/or causes the display of multiple video images during playback without saving such images in a combined format. For instance, the media content 104 from the prerecorded content database 101 can be combined with the user video image from the video recorder 110 and/or the user content database 115 to form a combined image without storing such content in a combined file. In other embodiments, the combined image is stored in a single file or location for later playback, comment, use, and/or the like. - The
video compositor module 120 is advantageously configured to cause the display 125 to output the combined video image. In certain embodiments, the display 125 can comprise a television, a monitor, a liquid crystal display (LCD), a cellular phone display, a computer display, combinations of the same or the like. In certain further embodiments, the video compositor 120 can be integrated with the display 125. Additional components can also be integrated together. For example, a smart phone, PDA, or other mobile device could comprise the prerecorded content database 101, the video recorder 110 in the form of a camera, the video compositor 120 in the form of compositing software, and/or the display 125. It is understood that components can be integrated in other ways, such as a camera with a memory for prerecorded content and a processor for running compositing software. - In certain embodiments, the
prerecorded content database 101, video recorder 110, user content database 115, video compositor 120, and/or display 125 are used during video compositing when a user interacts with the interactive role performance system 100. In certain embodiments, the components used during video compositing can be provided to the user separately from those components used in initially preparing and developing the prerecorded content. - In certain embodiments, the content can be delivered to the user either through a physical medium or by online delivery. For instance, in certain embodiments, a user receives the video compositing elements without a
prerecorded content database 101. Instead, the prerecorded content database 101 can be available online for the video compositor 120 to access over a network and/or the Internet. In other embodiments, the user receives a prerecorded content database 101 containing a limited number of media content 104 files on a CD/DVD or other physical medium, with additional content available separately. For example, washed media content 104 can be stored on a central database from which additional content can be downloaded by the video compositor 120. - A variety of connection media may be used to link elements of the interactive
role performance system 100. In some embodiments, the elements are directly connected with fiber optic channels, Ethernet, and/or the like. Elements of the interactive role performance system 100 can also be spread out in remote locations and connected through the Internet and/or a virtual private network. Elements of the interactive role performance system 100 can also be wirelessly connected with other elements. In some embodiments, elements, such as the video recorder 110, video compositor 120, and/or display 125 are connected directly with audio and/or video cables and/or through a wireless connection, such as via BLUETOOTH or other radio frequency communications. - Although the
interactive compositing system 100 has been described herein with reference to video technology, it will be understood from the disclosure herein that other types of media can be used by the system 100. For instance, such media can comprise video games, animation, still pictures, posters, combinations of the same or the like. -
FIG. 2 illustrates a flowchart of an exemplary embodiment of a video compositing process 200, according to certain embodiments of the invention. In certain embodiments, the video compositing process 200 is executed by the interactive role performance systems described herein to perform video compositing using washed content. For exemplary purposes, the video compositing process 200 is described hereinafter with reference to the components of the interactive role performance system 100 of FIG. 1 . - The
video compositing process 200 begins with Block 205, during which the user of the interactive role performance system 100 can select an available washed scene from the media content 104 stored on the prerecorded content database 101. In certain embodiments, the user selection is made using some form of input device, such as a remote control, a mouse, a keyboard, a touch screen, a keypad and/or the like. In certain embodiments, the video compositor 120 communicates with the prerecorded content database 101 to receive the washed media content 104. The interactive role performance system 100 can further provide the user or participant with a script of the lines or dialogue in a scene. In some embodiments, the interactive role performance system 100 further comprises a printer and provides the user with a printed script. - At
Block 210, the scene selected by the user begins playing. In certain embodiments, the video compositor 120 displays the scene on the display 125 with directions, prompts or instructions to the user for properly acting out the scene. For instance, the directions can be text prompts and/or an outline on the display 125 directing the user to assume a specified stance, position himself or herself in a specified location, and/or face a specified direction. Directions can also comprise lines of dialogue that the user repeats, visual prompts on the screen, and/or voice directions of actions to be taken, such as the directions that a real director would give to an actor. - At
Block 220, the participant acts out the scene selected at Block 205. In certain embodiments, the participant follows the directional prompts given at Block 210. The directional prompts can be interactive and/or can be given out while the user is acting out the scene. - In some embodiments, a real-time feed of the participant is displayed while the participant acts out the scene. The real-time feed gives the participant feedback on what the participant's actions look like on screen. In certain embodiments, the prerecorded washed scene is combined with the real-time feed to provide the participant with real-time feedback on what the final scene will look like. In some embodiments, graphics are superimposed over the real-time feed of the user to provide clearer directions to the user. The graphics can range from text to computer-generated graphics. In some embodiments, directional prompts can consist of interactive mini-games, directing the user to punch blocks, hit balls, hula dance, hula hoop, and/or the like, wherein the participant can act out the scene based on the directional prompts.
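The timed directional prompts described above can be sketched as a schedule keyed to playback time; the prompt texts, times, and schedule structure below are illustrative assumptions, not data from the disclosure.

```python
# Hypothetical prompt schedule: (start_second, end_second, prompt_text).
PROMPT_SCHEDULE = [
    (0.0, 3.0, "Stand inside the outline"),
    (3.0, 7.5, "Face the camera and repeat the line"),
    (7.5, 9.0, "Exit to the left"),
]

def active_prompt(seconds, schedule=PROMPT_SCHEDULE):
    """Return the direction to overlay at a given playback time, if any."""
    for start, end, text in schedule:
        if start <= seconds < end:
            return text
    return None
```

A display loop would call `active_prompt` with the current playback position and superimpose the returned text (or graphic) over the real-time feed.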
- At
Block 230, the image of the participant is captured by the video recorder 110. In certain embodiments, the participant acts out the scene in front of a green screen. However, in other embodiments, screens of different colors or no screen can be used. Various techniques can also be used to isolate the image of the participant from the background. For example, chroma-key techniques can be used to separate the user from the background by the video compositor 120 and/or video recorder 110. - In some embodiments, a background processing technique is used to allow the participant to act out the scene in front of any background, with the background screen being optional. For example, the
video compositor 120 can use background subtraction, where a previously recorded reference image of the background is compared to the captured video image to identify a new element in the captured image, thereby isolating the image of the user. The new element is then identified as the foreground and/or is separated out from the background for insertion into the media content 104 scene. - In certain embodiments, the user-generated content captured by the video recorder 110 is stored on the
user content database 115. The video recorder 110 can also record sound with the video. Moreover, in some embodiments, pre-existing sound clips are used and/or no sound is recorded. - In some embodiments, the
metadata 105 associated with the washed media content 104 directs the video recorder 110 to turn on or off at certain times. The metadata 105 can be information contained in an XML file that controls the video recorder 110. For example, when the image of the participant is not currently being inserted in the scene, the video recorder 110 can be powered off or otherwise temporarily disabled to prevent extraneous sound and/or video from being recorded. In certain embodiments, the sound and the video capture can be controlled independently of each other, providing greater control over how sound and/or video is captured by the video recorder 110. - At
Block 240, the image of the participant is inserted into the washed media content 104 either as a video or a still image. The video compositor 120 receives the user image from the video recorder 110 and/or from the user content database 115. Various insertion techniques can be used by the video compositor 120 to insert the participant's image into the washed scene. For example, the image of the participant can be played concurrently with and overlaid over the washed content, or the image can be incorporated into the washed content. - In some embodiments, the image insertion is unrestricted, with the image of the participant being capable of appearing anywhere within the washed scene. In certain embodiments, the
metadata 105 or scene information directs the video compositor 120 on where the user image is to be inserted into the scene. This metadata 105 or scene information can further comprise display and removal points recording where a replaceable (or removed) actor appears in and/or exits the original scene corresponding to the washed scene. In some embodiments, the display and removal points comprise the beginning and end frames of the scenes having the replaceable actor. - In certain embodiments, the
metadata 105 also controls the insertion of audio, such as speech from the participant, into the washed content. In some embodiments, the washed content also contains mattes, which determine whether elements should appear in front of the inserted user image. In some embodiments, the participant is inserted as an extra or additional actor in a scene, without replacing an existing character. - In certain embodiments, the processing is offloaded to the GPU in the
video compositor 120 to reduce load on a processor of the video compositor 120. - In certain embodiments, the
video compositing process 200 can comprise a multi-pass processing of the media content. For example, a two-pass process can be used in which a first pass determines which pixels of the user-generated content should be designated as transparent or opaque. In particular, the elements that are to be added to or combined with the washed content 104 (e.g., the user image) are composed of opaque pixels. In certain embodiments, these pixel values are identified through background subtraction processes described in more detail herein. In a second pass, the user content is then inserted into the washed content scene. - In some embodiments, further processing can be performed that blends user-generated content more cleanly with the washed
media content 104. For example, a border of several pixels around the inserted content can be blended into the washed content by applying a gradient of opaqueness to the border to create a more seamless integration with the washed content. Additional processing can be applied to the combined video to improve the image. For example, pixel sampling can be conducted to determine and correct the green levels in the image. Shadows, outlines, and/or color correction can also be applied to the combined video. - The following annotated source code illustrates one embodiment of the background subtracting program used to process the user-generated image.
-
// All values are normalized between 0.0 and 1.0.
float4 main( float2 theTextel : TEXCOORD0, float4 theColor : COLOR0 ) : COLOR0
{
  // grab the RGBA pixel color of the source and the background
  float4 aSrcColor = tex2D(srcTex, theTextel);
  float4 aBackColor = tex2D(tex1, theTextel);
  // convert the RGB values to HSV color space (hue, saturation, value)
  float3 aSrcHSV = RGBtoHSV((float3)aSrcColor);
  float3 aBackHSV = RGBtoHSV((float3)aBackColor);
  float3 aRGBDiff, aHSVDiff;
  float aMax;
  // find the difference in each RGB color channel
  aRGBDiff.r = abs(aSrcColor.r - aBackColor.r);
  aRGBDiff.g = abs(aSrcColor.g - aBackColor.g);
  aRGBDiff.b = abs(aSrcColor.b - aBackColor.b);
  // find the greatest difference of all RGB color channels
  aMax = max(max(aRGBDiff.r, aRGBDiff.g), aRGBDiff.b);
  // find the difference in each HSV color channel
  aHSVDiff[0] = abs(aSrcHSV[0] - aBackHSV[0]);
  aHSVDiff[1] = abs(aSrcHSV[1] - aBackHSV[1]);
  aHSVDiff[2] = abs(aSrcHSV[2] - aBackHSV[2]);
  // the next lines return an opaque color value for the source pixel if it
  // matches one of the conditional criteria below
  // determine if the hue value differs from the threshold
  if (aHSVDiff[0] > 0.075)
    return float4(aSrcColor.r, aSrcColor.g, aSrcColor.b, 1.0);
  // determine if the red value differs from the threshold
  if (aRGBDiff.r > 0.25 && aMax == aRGBDiff.r)
    return float4(aSrcColor.r, aSrcColor.g, aSrcColor.b, 1.0);
  // determine if the green value differs from the threshold
  if (aRGBDiff.g > 0.20 && aMax == aRGBDiff.g)
    return float4(aSrcColor.r, aSrcColor.g, aSrcColor.b, 1.0);
  // determine if the blue value differs from the threshold
  if (aRGBDiff.b > 0.18 && aMax == aRGBDiff.b)
    return float4(aSrcColor.r, aSrcColor.g, aSrcColor.b, 1.0);
  // if no value is determined to be opaque, then set it to transparent by default
  return float4(aSrcColor.r, aSrcColor.g, aSrcColor.b, 0.0);
}
- The exemplary program disclosed above takes as inputs two images: a source image (e.g., a video image including the user) and a reference background image (e.g., without the image of the user) to be removed from each frame of the source image. For instance, the interactive
role performance system 100 can record the reference background image after the user steps out of the view of the video recorder 110. - In certain embodiments, the program processes one pixel or textel at a time and returns a value that is fully transparent or opaque based on color channel differences between the source image and the reference background. In the described source code, the portions of the source image matching portions of the reference background are set transparent, while the portions of the source image that are different from the reference background are kept opaque. Collectively, the pixels in a frame that are opaque determine the actor and/or object that was added to the scene after the reference background image was captured.
- In certain embodiments, established threshold values determine whether a particular pixel is to be designated as opaque. Moreover, the threshold value can compensate for small variations in the source image and the reference background image due to inconsistencies in how the source and background images were recorded. For example, if the lighting is inconsistent during the image acquisition process, the background from the source and background reference may not be recorded identically. By using a threshold value, small differences between the backgrounds are ignored and the background of the source image is set transparent. Depending on the level of inconsistencies between the source image and the reference background, the threshold level could be set as a higher value to compensate for greater differences. In some embodiments, the threshold value can be a set value, set by the user, or adaptively set by the
role performance system 100. - As described, the program further determines the RGB pixel color of the source and reference background images and then converts the RGB values to HSV color space. The program then determines the difference in each RGB and HSV color channel, wherein RGB color channels are red, green, and blue and HSV color channels are hue, saturation, and value. The program determines the greatest difference of all RGB color channels. The RGB and HSV difference is measured against the threshold value to determine if the pixel should be set as opaque. Otherwise, the pixel is set to transparent.
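The per-pixel decision just described can be sketched in Python. This is a rough transliteration, not the patented implementation: the standard-library `colorsys` conversion stands in for the shader's `RGBtoHSV` helper (an assumption about its behavior), and the thresholds are reused from the listing above.

```python
import colorsys

def pixel_alpha(src, back):
    """Return 1.0 (opaque) if the source pixel differs enough from the
    reference background pixel in hue or in its dominant RGB channel,
    else 0.0 (transparent). Pixels are (r, g, b) tuples in 0.0-1.0."""
    rgb_diff = [abs(s - b) for s, b in zip(src, back)]
    max_diff = max(rgb_diff)
    src_hsv = colorsys.rgb_to_hsv(*src)
    back_hsv = colorsys.rgb_to_hsv(*back)
    if abs(src_hsv[0] - back_hsv[0]) > 0.075:            # hue differs
        return 1.0
    if rgb_diff[0] > 0.25 and max_diff == rgb_diff[0]:   # red dominates
        return 1.0
    if rgb_diff[1] > 0.20 and max_diff == rgb_diff[1]:   # green dominates
        return 1.0
    if rgb_diff[2] > 0.18 and max_diff == rgb_diff[2]:   # blue dominates
        return 1.0
    return 0.0                                           # default: transparent
```

As in the shader, a pixel identical to the reference background falls through every branch and comes out transparent.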
- In certain further embodiments, an additional reference frame is taken of the participant within the background. By using a participant reference frame and the reference background image, the output quality of the background processing can be checked using the two reference frames. For example, a background subtraction process can be performed on the participant reference frame instead of the entire user content. The process outputs an isolated image of the participant which, in certain embodiments, is representative of the quality of the output from processing the entire user content. Using the participant reference frame allows the output quality of the background processing to be tested with smaller files and less processing.
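The single-frame quality check described above might be sketched as follows; the simplified subtraction, the threshold, and the opaque-pixel-fraction metric are illustrative assumptions, not details from the disclosure.

```python
# Quality-check sketch: run background subtraction on just the participant
# reference frame and report what fraction of its pixels came out opaque.
# Frames are flat lists of (r, g, b) pixels; the threshold is an assumption.

def subtract(frame, reference, threshold=0.1):
    """1.0 where the frame differs from the reference background, else 0.0."""
    return [1.0 if max(abs(a - b) for a, b in zip(p, r)) > threshold else 0.0
            for p, r in zip(frame, reference)]

def opaque_fraction(frame, reference):
    """Proxy for output quality: share of pixels kept as foreground."""
    mask = subtract(frame, reference)
    return sum(mask) / len(mask)

reference = [(0.1, 0.8, 0.1)] * 4                    # plain background
participant_ref = [(0.1, 0.8, 0.1), (0.7, 0.5, 0.4),
                   (0.6, 0.4, 0.3), (0.1, 0.8, 0.1)]  # participant in frame
```

An implausibly high or low fraction on the small reference frame would flag a problem (bad lighting, moved camera) before the full user content is processed.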
- In certain embodiments, video images can be processed by the
video compositor 120 and/or by a remote transcoding server. For example, the combined video can be encoded by the video compositor 120 while the user content is captured. In some embodiments, user content is captured in raw format and encoded at a later time by a transcoding server located remote to the video recorder 110. After capture, the user content can be uploaded to the transcoding server. In some embodiments, the transcoding server is a virtual server running on a cluster of servers. As such, the transcoding server has significantly more processing power than the video compositor 120. The additional processing power allows the transcoding server to engage in additional processing passes over the user content and the washed content to provide a higher quality video. In some embodiments, the transcoding server transcodes the content into a flash video file, MPEG, JPEG, and/or other file type for viewing at a display 125 and/or for submission to a content sharing website. The transcoding server, in certain embodiments, can further apply a watermark to the user-generated content and/or the displayed combined content for copy control purposes. - At
Block 250, the combined video is shown on the display 125. In certain embodiments, the combined video is shown in real-time with respect to the capturing of the user image by the video recorder 110. Custom filters can further be applied during playback to improve the displayed image. In some embodiments, the combined video is saved for later playback and/or displayed at a later time. The video can also be displayed remotely at a location different from the user. It is understood that the combined video is not necessarily stored in a single file. The combined video can exist as separate files that are overlaid onto each other, played back, and/or synchronized to generate a combined image. The combined video can comprise a matte file, a subtitle file, a washed content file, and/or a user content file. - In some embodiments, the combined video or elements of the combined video, such as the user content file, are uploaded to a website for searching, sharing, and/or viewing combined video content. For example, the user content can be sent to the website and played back along with a corresponding washed content stored on the website to generate the combined video. In addition, the participant can create an introduction for the combined video using the interactive
role performance system 100 using a process similar to that used to create the combined video. - The media content preparation process or "washing" process is a processing development for audio and/or video that increases the realism of the interactive experience. For example, in certain conventional compositing systems, a user playing the role of DARTH VADER in STAR WARS would be positioned precisely in front of the villain before starting the scene. During the scene, if the user were to move from side to side, the original DARTH VADER character would be visible behind the user image, detracting from the supposed realism of the experience. However, the washing processes described herein advantageously remove the original DARTH VADER character from the scene such that the participant image need not be limited to a particular area and is free to move within the scene, thereby increasing the realism of the experience.
- Likewise, the washing process can be applied to the audio of a scene. In certain conventional compositing systems, data files accompanying the clip supply audio switch data in real time, turning various audio tracks on or off in order to silence the replaceable character so the user can play the part uninterrupted. In one embodiment of the washing process, the character's audio is filtered out of the original audio to create a modified audio used in the washed content, allowing the participant increased freedom in the timing of his or her lines.
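The audio-washing step just described can be sketched as silencing the samples that fall inside the replaceable character's line intervals; the sample rate, the interval list, and the flat sample representation are illustrative assumptions.

```python
# Audio-washing sketch: zero out the samples where the replaceable
# character speaks, leaving the rest of the track untouched.

def wash_audio(samples, speech_intervals, sample_rate=8):
    """Return a copy of `samples` silenced over each (start_s, end_s)."""
    washed = list(samples)
    for start_s, end_s in speech_intervals:
        for i in range(int(start_s * sample_rate), int(end_s * sample_rate)):
            if 0 <= i < len(washed):
                washed[i] = 0.0
    return washed

track = [0.5] * 24                # three seconds at 8 samples/sec (toy rate)
washed = wash_audio(track, [(1.0, 2.0)])   # silence seconds 1-2
```

Because the character's lines are removed from the track itself rather than toggled off at playback, the participant can deliver the dialogue on his or her own timing.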
- In certain embodiments, more content versatility is provided by disclosed systems and methods that are able to simplify the compositing process by moving more of the content processing to the content development phase, improving control over both audio and/or video manipulation, and thereby improving the "replacement" effect. For example, with video, editing software can be used to entirely remove actors from a scene, allowing the washed content to be used as the prerecorded background into which the user is inserted and simplifying the user image insertion step during the compositing process.
-
FIG. 3 illustrates a flowchart of an exemplary embodiment of a media content preparation process 300, used during content development according to certain embodiments of the invention. In certain embodiments, the process 300 is executed by embodiments of the interactive role performance systems described herein, usually by the video source 102 and/or media content processing system 103. For exemplary purposes, the media content preparation process 300 is described hereinafter with reference to the components of the interactive role performance system 100 of FIG. 1 . - At
Block 305, a scene is selected from media content stored on the video source 102. In certain embodiments, a scene is identified by one or more watchers viewing the entire source media content to select scenes that can viably be used as washed content. For instance, the one or more watchers can log the start/end times of the scene. In some embodiments, media content is copied onto a network media server and then reviewed by one or more watchers. - In some embodiments, scenes are selected based on certain predetermined criteria and/or the ease with which the source content can be washed. Such selection criteria can comprise the duration of the scene, the visibility of the primary actor, the immobility of the background, the minimal motion of the foreground, a clear view of the actors with little or no blocking objects, and/or the consistency of the background. In certain embodiments, scenes are generally avoided if the camera is in motion, the background is in motion, there is a large amount of foreground action, there are many camera angles, the scene has a large amount of action, or the scene has extensive overlapping dialogue. In some embodiments, a media content clip, comprising a selected scene, is captured from the media content. In certain embodiments, the frame selection is accomplished by a program implementing one or more selection criteria.
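A program applying the selection criteria might score candidate scenes as sketched below; the boolean field names and the equal weighting are assumptions made for illustration, not details from the disclosure.

```python
# Illustrative scoring of a candidate scene against the washing criteria
# listed above; higher scores suggest the scene is easier to wash.

def scene_score(scene):
    """Count how many of five selection criteria the scene satisfies (0-5)."""
    criteria = [
        not scene["camera_in_motion"],
        not scene["background_in_motion"],
        not scene["heavy_foreground_action"],
        scene["primary_actor_visible"],
        scene["consistent_background"],
    ]
    return sum(criteria)

candidate = {
    "camera_in_motion": False,
    "background_in_motion": False,
    "heavy_foreground_action": True,   # fails one criterion
    "primary_actor_visible": True,
    "consistent_background": True,
}
```

Scenes scoring below some cutoff would be passed over, mirroring the "generally avoided" cases in the text.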
- At
Block 310, the media content processing system 103 extracts individual frames from the selected media content clip. In certain embodiments, the media content clip is exported into individual consecutive frames, such as 24 to 30 frames per second of playback. In other embodiments, clips can contain more frames or fewer frames, depending on the format of the source media content. - At
Block 315, the media content processing system 103 identifies and/or selects the particular frames that contain a selected character and/or object and washes the frames through a series of manipulations to remove the selected character from the scene. In certain embodiments, such manipulations extend or continue the background image to remove the character and can comprise borrowing pixels from a background where the actor and/or object is not present, retouching the areas with consistent background materials, fabricating pixels by filling areas with appropriate artwork from within the frame or other sources, and/or blending the areas into the surrounding background. - In certain embodiments, the process is repeated for every play option in each scene, breaking the clips into multiple video tracks and/or using editing software to bundle the different tracks into unique "prerecorded" background clips for each option. Within each bundle, different tracks can have unique data file triggers or metadata that correspond to different "in" and/or "out" points within the scene. For example, one set of data file triggers can determine when a user image is to be on or off the screen; another can dictate when a customized special effects layer is activated; a third can command a particular background matte layer to appear or disappear as needed.
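One way to picture the bundled tracks and their data file triggers is as a map from track name to frame intervals; the track names, frame numbers, and dictionary layout here are illustrative assumptions.

```python
# Hypothetical bundle: each track carries its own trigger intervals
# ("in"/"out" frame points), as described above.

BUNDLE = {
    "user_image":  [(10, 200)],            # user image on screen
    "special_fx":  [(50, 80), (120, 140)], # custom effects layer active
    "matte_layer": [(0, 200)],             # background matte visible
}

def active_layers(frame, bundle=BUNDLE):
    """Names of the tracks whose triggers cover this frame."""
    return sorted(name for name, spans in bundle.items()
                  if any(start <= frame <= end for start, end in spans))
```

During playback, the compositor would consult `active_layers` each frame to decide which layers to render.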
- With audio, a more robust clip development process provides an increased ability to separate audio tracks and/or isolate sound effects, musical scores, and/or the voices of different characters for individual manipulation. Media content received from the studios can contain multiple audio tracks separate from the video. For example, tracks 1 and 2 can contain mixed audio, while
tracks 3 and 4 contain the music and/or effects. Certain embodiments of the interactive role performance system 100 can either control audio data that has been delivered in separate tracks and/or mix separate tracks together, or can break audio tracks apart if the source material has them combined. Creating separate audio tracks allows for the editing of some tracks while not touching others. Certain embodiments can substitute and/or remove movie score audio, alter and/or remove actor audio, and/or enhance, alter, and/or remove sound effect data, then later recombine the tracks for association with different user play options. - For instance, certain embodiments of the invention can separate the audio tracks from STAR WARS to remove DARTH VADER'S speaking parts, replace the John Williams score with a royalty-free track, and/or enhance the light saber sound effects. When finished, the system can condense the separate tracks down to one master track to be played when the user chooses to replace DARTH VADER. A similar approach could be taken to alter different tracks for a LUKE SKYWALKER play option. The resulting experience can have better audio accompaniment because the sound elements can be better manipulated during content development than they could be on-the-fly.
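Condensing separated tracks into one master track, minus the replaceable character's dialogue, can be sketched as below; the track names, toy sample values, and simple summing mix are assumptions for illustration.

```python
# Mixing sketch: drop the replaceable character's dialogue track, keep the
# remaining tracks, and condense them to one master track by summing samples.

TRACKS = {
    "dialogue_vader": [0.3, 0.3, 0.3, 0.3],  # removed for the play option
    "dialogue_other": [0.2, 0.0, 0.2, 0.0],
    "score":          [0.1, 0.1, 0.1, 0.1],
    "effects":        [0.0, 0.4, 0.0, 0.4],
}

def master_track(tracks, remove=("dialogue_vader",)):
    """Sum all kept tracks sample-by-sample into a single master track."""
    kept = [t for name, t in tracks.items() if name not in remove]
    return [round(sum(samples), 6) for samples in zip(*kept)]
```

Substituting a royalty-free score would simply mean replacing the `score` entry before mixing; a different `remove` tuple yields the master track for a different play option.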
- At
Block 320, the media content processing system 103 creates mattes from the media content. In some embodiments, compositing systems involve superimposing a new video layer, or "matte," of the user over the original background content in order to create the illusion that the user is "in" the prerecorded content. While this effect works well in many cases, certain prerecorded backgrounds contain foreground elements, such as desks, podiums, castle walls, other actors and/or the like, that appear in front of the actor to be replaced. In many cases, these foreground elements also move, such as when a bird flies across the frame, a person walks in front of the actor, and/or a camera move effectively changes the position of the stationary wall or desk relative to the actor in the frame. In order to create a more 3D interactive experience, these foreground elements can be recreated or somehow moved so as to be visible in front of the superimposed user's image. - One way for an element to appear in the foreground is by creating additional video matte layers during content development. For purposes of this disclosure, mattes can comprise, but are not limited to, video files that contain transparency information such that white space allows subordinate video layers to show through and/or black space prevents subordinate video layers from showing through. Certain mattes can be created based on elements of the target prerecorded clip such that any element which should be "in front" of the user, such as a desk, is black, and/or the elements that should be "behind" the user are white. Thus, in certain embodiments, the matte layers cause portions of a background image to come to the foreground in front of an inserted user image. In some embodiments, a moving matte is required for a motion scene. The matte creation process is described in further detail below.
Once a matte is created, it can be synchronized to the media content clip to match up with the motion of the object that appears in the foreground.
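The black/white matte behavior described above can be sketched per pixel as follows; the 1-D pixel rows, string pixel labels, and binary alpha are simplifying assumptions made for illustration.

```python
# Matte sketch: a black (0.0) matte pixel keeps the washed scene's
# foreground element (e.g., a desk) in front of the user image, while a
# white (1.0) matte pixel lets the user layer show through.

def composite_with_matte(washed, user, user_alpha, matte):
    """Combine one row of pixels from the washed scene and the user layer."""
    out = []
    for bg, fg, alpha, m in zip(washed, user, user_alpha, matte):
        if m == 0.0:            # black matte: element stays in front
            out.append(bg)
        elif alpha == 1.0:      # white matte and opaque user pixel
            out.append(fg)
        else:                   # white matte, transparent user pixel
            out.append(bg)
    return out

row = composite_with_matte(
    washed=["desk", "wall", "wall"],
    user=["user", "user", "user"],
    user_alpha=[1.0, 1.0, 0.0],
    matte=[0.0, 1.0, 1.0],
)
```

For a motion scene, the matte would be a video layer evaluated frame by frame, keeping it synchronized with the moving foreground object.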
- In certain embodiments, the
video recorder 110 captures the user image without making any camera moves, pans or zooms. These functions can be accomplished through the software of the video compositor 120 system. In order to match the original scene, the original scene can be analyzed and metadata 105 can be recorded that captures the in and/or out points, actions in the original scene, audio levels, camera movement, switching, positions, zoom and/or pan. The metadata 105 can further instruct the video recorder 110 to scale, move within the x-y coordinates of the overall combined frame, and/or switch to a different angle. Metadata 105 can be recorded in a text document, database, XML file, and/or embedded within the washed content. - At
Block 325, the media content processing system 103 records actor position, size information, and/or other metadata 105 associated with the washed media content 104. In some embodiments, processing software in the media content processing system 103 analyzes the media content clip to generate metadata, such as the position and size information. In certain embodiments, the actor position and/or size information are used during the setup of the camera, lights and/or green screen to determine the orientation and/or size of the inserted user in the scene. Using this information allows the inserted user image to match as closely as possible with the character that is being replaced. - At
Block 330, the media content processing system 103 creates an outline graphic representing the removed character's position in the washed scene. In certain embodiments, the participant uses the outline graphic to determine where he/she should position himself/herself during recording of his/her performance of the particular scene. In some embodiments, an outline graphic is not included in the washed scene. Moreover, a user can freely move around within the scene and/or is not required to appear in a specific position. - At
Block 335, the media processing system 103 transcribes and/or prepares subtitles of the dialogue for each scene or clip. In certain embodiments, subtitles appear when the removed character would be speaking and disappear when the actor is not. In some embodiments, subtitles may not be required and/or are already available and do not need to be created. - At
Block 340, the media processing system 103 outputs a washed scene after completing processing of the media content. In certain embodiments, the media processing system saves the washed content into a local storage device and/or saves the washed content directly to the prerecorded content database 101 as the media content 104. The washed content can further undergo a quality control process to ensure that the washed content has been properly created. The washed content may also be saved into a backup storage system. In certain embodiments, poster art for display can be created by washing actors out of media content. -
FIG. 4A illustrates an alternative embodiment of the media content preparation process of FIG. 3 . At Block 405, a scene is selected and frames from the scene are created. At Block 410, an actor is removed from one frame. At Block 415, a background, such as a wall, is recreated behind the actor. At Block 420, the washed frame is extended or repeated for the rest of the frames in the scene. In some scenes, the background is similar from one frame to another, and reusing the washed frame saves additional effort. At Block 425, a track or file with the data triggers for the in and/or out points of the actor and/or other metadata is created. In some embodiments, the in and/or out points are represented by the first and/or last frames the actor appears in. If more than one actor is selected for removal from the scene, the process can go back to Block 410 and repeat Blocks 410 through 425. At Block 430, one or more tracks with the associated data triggers are bundled into a single washed media content scene. -
FIG. 4B illustrates another alternative embodiment of the media content preparation process of FIG. 3 . At Block 450, a scene is selected and frames from the scene are created. At Block 455, elements of the set are reshot and/or a background is digitally recreated, either entirely or by combining the newly shot set elements with the original content at Block 460. At Block 465, a track or file with the data triggers and/or other metadata for the scene is recorded. At Block 470, one or more tracks are bundled into one washed scene. - It will be understood that the media content preparation process can be accomplished by using any existing or new technologies that can allow for the altering of video content, such as the ability to map or track camera movements from the original content and/or recreate them with an altered background. In addition, any of the described media content preparation processes can be used singly or in combination to create the washed content.
- Embodiments of the content development process also allow for customization and/or alteration of other elements affecting the interactive experience. These elements can comprise, but are not limited to, subtitle data, colors, fonts, placement, actor cues and/or suggestions, audio and/or video special effects, information about user image size, location, dynamic movement, color hue, saturation, distortion, play pattern interactivity such as voting, ranking, and/or commenting, properties for online uploading, sharing, and/or blogging, particulars about creating, sharing, printing movie stills and/or posters based on each scene, gaming elements, pitch, vocals, accuracy, volume, clapping, combinations of the same or the like.
- For example, certain analysis can be performed that suggests users appearing in a scene from LORD OF THE RINGS should appear more orange than users appearing in a scene from THE MATRIX. Color saturation, lighting, hue data and/or other metadata can be written into the data files or
metadata 105 for each respective scene, such that during the performance, the interactive role performance system 100 can use the data files or metadata 105 to manipulate the live image in order to more realistically blend the user into the background footage. - Likewise, digital resizing and/or movement data can be programmed into each scene that dictates where the user appears in the frame of prerecorded content, and/or the size of the user image relative to the rest of the scene. This information can be used to create dynamic effects, such as digitally simulating camera movement over the course of the scene. This data could also be written into the
metadata 105 associated with the piece of washed media content 104. - In certain embodiments, the control data or
metadata 105 for these elements is bundled with the associated washed media content 104 and/or matte layers during content development. These elements can be referenced and/or controlled with data files which are invisible to the user, but can be embedded in software elements and/or included in digital files (for example, an Internet downloaded file or XML file) or the like, appropriately associated with the original content purchased by the user. These improvements to the content development process can make the interactive experience more realistic, more immersive, and ultimately more enjoyable to the user. -
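As a concrete illustration of the color metadata described above, the sketch below applies a per-scene hue shift and saturation scale to a single pixel of the live image before compositing. This is a minimal sketch under stated assumptions: the function name, parameter names, and the example values are hypothetical, not taken from the specification.

```python
import colorsys

def apply_scene_color(rgb, hue_shift=0.0, saturation_scale=1.0):
    """Adjust one RGB pixel (components in 0..1) using per-scene metadata."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + hue_shift) % 1.0           # rotate the hue toward the scene's palette
    s = min(1.0, s * saturation_scale)  # push saturation toward the scene's look
    return colorsys.hsv_to_rgb(h, s, v)

# A warmer, more saturated grade, e.g. for a LORD OF THE RINGS scene.
warmer = apply_scene_color((0.5, 0.4, 0.3), hue_shift=0.02, saturation_scale=1.3)
print(tuple(round(c, 3) for c in warmer))
```

In practice the same adjustment would run over every pixel of each live frame, with the hue_shift and saturation_scale values read from the scene's data file or metadata 105.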
FIGS. 5A to 5D illustrate a frame from a media content during various phases of certain embodiments of the washing process in which a single actor is washed out of the scene. For exemplary purposes, the frames, as illustrated, are described hereinafter with reference to the components of the interactive role performance system 100 of FIG. 1 . -
FIG. 5A illustrates a frame from a media content clip processed by the media content processing system 103. The frame depicts two actors: the first actor 505 is the target actor to be washed from the frame, while the second actor 510 is retained in the frame. -
FIG. 5B illustrates the frame of FIG. 5A after the actor 505 has been washed from the scene. In certain embodiments, an outline graphic 515 is added to the washed content to depict the location of the washed actor. The retained actor 510 remains unchanged in the scene. Individual washed frames comprise the complete washed content scenes 104 stored on the prerecorded content database 101. -
FIG. 5C illustrates a real-time feed of a user from a video recorder 110 superimposed over a washed content, wherein the user image 520 is added onto the scene. In certain embodiments, the user can use an outline graphic to position himself in the scene. That is, the user can move into a position such that the user is generally within the position of the washed actor as indicated by the outline graphic 515. In some embodiments, the video compositor 120 automatically positions the feed from the video recorder 110 in a frame by using previously recorded actor position data to determine where the user image is placed, such that an outline graphic is unnecessary. -
FIG. 5D illustrates a frame from a completed combined video. The user 520 is inserted into the scene alongside the retained actor 510. In certain embodiments, the completed combined video is displayed on the display 125. The combined video can also be saved for future playback, or the combined video can be recreated from the washed scene and user content without saving the combined video. -
FIGS. 6A and 6B illustrate an exemplary matte layer created during the media content preparation process of FIG. 3 . In particular, FIG. 6A illustrates a matte layer created from the frame illustrated in FIG. 6B . In FIG. 6B , the flight attendant 620 is part of the foreground scene and appears in front of the passenger 630 selected for the washing process. In certain embodiments, the matte creation can be performed by "tracing" the particular figure with a digital pointer, frame-by-frame, or by using other software means available to track and/or trace the elements. - The resulting
matte layer 610 can be either a moving or stationary video file used during playback of the washed content to delineate a foreground element of the original source content. Associating this matte with the real-time user image from the video recorder 110 essentially "blocks" the user's image wherever a foreground object, such as the flight attendant, covers the user image, and thereby creates the illusion that the user is positioned between background and foreground elements. By using a moving matte layer, the foreground element can be kept in front of the participant's image even when the foreground element moves, such as when the flight attendant moves in front of the user. The resulting composition advantageously creates a more realistic, multi-dimensional interactive experience. - In yet other embodiments of the invention, additional features can be employed that utilize components of the interactive
role performance system 100 hosted and/or deployed in an online environment. For instance, one method of hosting the content online allows a party or user to control the storage, filtering, and/or distribution of the finished video output. In certain embodiments of the technology, a new video file is generated with the combined image of the user and the prerecorded content. This "output" file could be saved for later playback, shared online, or sold as a DVD to the user in a variety of fashions. Though outputting the composition as a single, cohesive video stream is relatively efficient, certain problems can also arise with such an arrangement. - First, in spite of advanced video encryption techniques, users could find ways to copy and/or share their recorded files at will. Second, without control over the output content, it can be difficult to police or filter which output files could be shared online. Third, generating fully-integrated output files with each user experience can create redundancies in the storage process, increase hosting expenses, and/or decrease overall system capacity and/or efficiency.
- To address these issues, certain systems and methods isolate the user's recorded performance from the prerecorded background throughout the entire process, such that the images are not combined, except visually during performance playback. In certain embodiments, the washed clip is not altered or re-recorded during a performance. Rather, the washed clip can be merely referenced again if a playback option is selected, then replayed in tandem with the user's overlay segment.
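One way to picture this playback-time arrangement, together with the matte layering described earlier: the washed background, the user overlay, and any matted foreground are blended per pixel only at display time, so the layers stay as separate files and no combined video is ever written. The sketch below is an assumption-laden illustration, not the specification's implementation; names and the simple alpha blend are illustrative.

```python
def display_pixel(background, user, foreground, user_alpha, matte_alpha):
    """Blend one pixel at playback time; the source layers stay separate on disk.

    background: washed-scene pixel; user: live/recorded overlay pixel;
    foreground: matted foreground pixel (e.g. the flight attendant).
    Each pixel is an (r, g, b) tuple with components in 0..1.
    """
    out = background
    # The user overlay sits on top of the washed background...
    out = tuple(u * user_alpha + o * (1 - user_alpha) for u, o in zip(user, out))
    # ...but the matte "blocks" the user wherever the foreground covers it.
    out = tuple(f * matte_alpha + o * (1 - matte_alpha) for f, o in zip(foreground, out))
    return out

# Where the matte is fully opaque, the foreground hides the user entirely.
print(display_pixel((0.1, 0.1, 0.1), (1.0, 1.0, 1.0), (0.5, 0.5, 0.5),
                    user_alpha=1.0, matte_alpha=1.0))  # → (0.5, 0.5, 0.5)
```

Because the blend is recomputed on every playback, replaying a performance only requires referencing the washed clip and the user's overlay segment again.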
- There are several additional or alternative ways that the video files can be protected in the disclosed interactive role performance systems. In one embodiment, the prerecorded background content and/or the recorded performance is stored in a non-standard video format such that it is unplayable in standard video players. In some embodiments, the fact that the images are separate or that the background content is an individual file is concealed. In certain embodiments, the background and/or user media files are stored separately on the user's local system.
- Other content protection methods can also be used. One method is to lock each background content clip to a specific operating system, and/or render it non-transferable between systems. Another method is to make only the user file uploadable to a website for hosting and/or sharing, and to render the background video unsharable. In some embodiments, an online system runs an auto query each time an offline system becomes web enabled in order to register the software operating system and/or lock the content to that system. Another method is to use a dynamic URL for a website, and/or change it regularly. In some embodiments, the uploaded clips are digitally "watermarked" in order to track their use should they be found outside controlled channels. In one embodiment, combined content is stored only on a secure storage location, such as a controlled server, and only links or references to the protected content are allowed from programs or applets. The programs can stream the files from the secure storage location without saving a copy of the content. In some embodiments, the programs are authorized by the secure storage location before access to the protected content is allowed.
- Furthermore, there are processes contemplated in which the user-generated content can be filtered in order to remove objectionable material. One approach is to establish nudity and/or profanity filters in the finished file upload process. During upload, each performance can be filtered in real time for nudity and/or profanity, and then assigned a numerical score based on its evaluation. Scores below a certain benchmark can be manually reviewed by screeners, and/or scores below a certain lower benchmark can be automatically rejected and discarded. Alternatively, the user-generated content can undergo a complete manual review.
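The two-benchmark triage just described reduces to a simple threshold routine. A minimal sketch follows; the benchmark values and names are hypothetical, since the specification gives no concrete numbers.

```python
# Hypothetical benchmark values; the specification does not prescribe numbers.
REVIEW_BENCHMARK = 70   # below this, a human screener takes a look
REJECT_BENCHMARK = 40   # below this, the upload is discarded automatically

def triage(score: int) -> str:
    """Route an uploaded performance based on its nudity/profanity filter score."""
    if score < REJECT_BENCHMARK:
        return "reject"         # automatically rejected and discarded
    if score < REVIEW_BENCHMARK:
        return "manual_review"  # queued for manual review by screeners
    return "accept"

print(triage(85), triage(55), triage(20))  # accept manual_review reject
```

Tightening or loosening the filter is then only a matter of moving the two benchmarks.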
- One advantage to utilizing the Internet or other network as a platform is the ability to engage multiple users from multiple remote locations with multiple cameras in numerous forms of interaction.
-
FIG. 7 illustrates an embodiment of a data flow diagram of an interactive role performance system configured to operate with multiple players in different geographic locations. For instance, a user in New York and a user in California can mutually or individually select a scene from STAR WARS to perform, such as with opposite roles. At Block 705, the California user selects the scene on his or her interactive role performance system. At Block 710, the California user selects the role of LUKE SKYWALKER to play. At Block 720, the New York user selects the same scene on his or her interactive role performance system. At Block 725, the New York user chooses the role of DARTH VADER. When the players start and play out the scene, the resulting composition is a single ensemble scene, even though the users are geographically distant. At Block 730, California user data and New York user data are combined to produce a single ensemble scene, wherein both participant images are combined in the same background scene. - Using an online platform, more complex media bundles and/or data files can also be quickly accessed and/or executed, making more intricate user experiences possible. The above multi-player effect, for instance, can require the use of additional background content bundles of completely washed scenes (see above), driven by data files or metadata which trigger the camera inputs from each respective user. The multi-camera use could also be executed such that a user in New York selects a previously performed clip posted by his friend in California, and decides to act opposite his friend after the fact.
- In certain embodiments, this process can require controlled switching of the California clip (where the user performed as LUKE SKYWALKER) with a washed content prepared for DARTH VADER in order to constitute the background for the new, live user image streaming from New York. These multi-player scenes can thus be performed either live by both parties, or live by one party and prerecorded by the other party. The users can also play either opposite or the same characters, and can either replace characters or simply be inserted into the same scene. In some embodiments, three or more users can work together to create a single scene. Multi-camera, multi-location video games can also function well in this environment. It is understood that the interactive role performance system can also be used by multiple players in the same location (e.g., participants in the same living room).
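The clip-switching step above amounts to a lookup from (scene, role) to the washed clip in which that role has been removed. The sketch below is purely illustrative; the scene identifiers, file names, and mapping structure are assumptions, not part of the specification.

```python
# Hypothetical catalog mapping (scene, role-to-be-played) -> washed background clip.
WASHED_CLIPS = {
    ("star_wars_scene_12", "LUKE SKYWALKER"): "sw12_washed_for_luke.mov",
    ("star_wars_scene_12", "DARTH VADER"): "sw12_washed_for_vader.mov",
    ("star_wars_scene_12", None): "sw12_fully_washed.mov",  # all roles removed
}

def background_for(scene_id, role):
    """Return the washed clip in which the chosen role has been removed."""
    return WASHED_CLIPS[(scene_id, role)]

# The New York user acts as DARTH VADER opposite the prerecorded California
# performance, so the system switches in the clip washed for DARTH VADER.
print(background_for("star_wars_scene_12", "DARTH VADER"))
```

A fully washed entry (the `None` key here) would back the multi-player case in which every participant streams in live.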
- In certain embodiments, the online environment can be a website for sharing combined video and/or buying additional washed content. The website allows users to share their combined videos with other viewers. Users can rate videos, allowing videos to be ranked based on popularity. Videos can also be ranked based on number of views, age, and/or other selection criteria. Users can compete in contests using their performances. Users can choose to share videos with select individuals or can choose to make videos publicly available to anyone. Users can also build social networks with each other.
- The website can comprise a home page which displays user information after the user logs in. User information can comprise messages, comments, invites, uploads, downloads, viewing statistics, and/or popularity of performances. The website can further comprise performance gallery pages where combined videos are displayed and where users may search for combined videos based on associated metadata. The website can further comprise store pages, where additional content may be purchased for the interactive
role performance system 100. The purchased content can then be downloaded to the interactive role performance system 100. - In addition to allowing increased protection, filtering, efficiency, and/or multi-camera playability, the Internet offers several advantages. These comprise, but are not limited to, the ability to generate and monetize script print-outs, teleprompters and application text output for scripts and lyrics, the ability to generate a video introduction to be used to introduce emails and postings, the ability to select between output devices including various computer platforms, various multimedia and mobile devices, set-top boxes, and video gaming consoles, the ability to download clips with embedded data files, the ability to perform clips with the use of an online interface, the ability to upload files into a sharing forum, vote on clips, share comments, feedback and ranking information, and award prizes, the ability to select the sharing/playback information between private/public and limited/mass distribution options, the ability to select between output options and platforms, the ability to generate still frames and order customized products such as T-shirts containing the generated still frames, the ability to utilize 3D rendering and avatar elements to enhance the production value, the ability to use video and audio special effects either before, during, or after a performance, the ability to include animation of any kind, the ability to create or utilize video mash-ups, the ability to select additional levels of parental controls and content filtering, the ability to manipulate content through audio and video mixing tools, editing suites, mash-up controls, and the like, and/or the ability to record new content such as audio information to mix into the clips.
- In certain embodiments, the interactive
role performance system 100 provides a user interface for the user to control the video compositing process. FIG. 8 illustrates an embodiment of a wireframe 800 of various pages of a video compositing interface. - In certain embodiments, the interactive
role performance system 100 provides a graphical user interface for the user to view and/or select washed scenes and/or combined video scenes. A cascade user interface can advantageously allow the user to view a plurality of scenes or data tiles on one screen (Block 805). In some embodiments, the cascade interface comprises a plurality of rows and columns of images of scenes. The scenes can be still or static images and/or video clips. FIG. 9 illustrates an exemplary screen display of one embodiment of the cascade interface. - As illustrated in
FIG. 9 , the display 900 includes four columns and five rows of screen or data tiles arranged in a three dimensional array. Each of the tiles further includes a graphical representation of the media content that it represents, such as still images from movies. The illustrated bottom, front or first row position 905 displays the "closest" scenes or screen tiles to the user. Close scenes can be denoted by a color image (unless the scene is from a black and white movie), a larger size, and/or a title. Scenes on "farther" rows are progressively grayed out and/or smaller. The "closer" scenes partially overlay the subsequent "farther" scenes. Additional information can be superimposed on the image, such as the number of washed scenes, the run-time of scenes, the number of combined videos created using washed scenes from the movie 915, and/or the like. Scene ordering can be contextually based. For example, the most recently selected scenes can appear on the first row position 905, with less used scenes displayed on progressively farther rows. - In
FIG. 9 , the interface is "focused" on the first row of data tiles; that is, the selected scene is one from the first row. Keystrokes or other user controls can send a selection command to the interface that can move the focus from one selected scene to another on the first row. Focus can be shifted to another row by moving the cascade and selecting a new first row and/or by using a mouse to select a clip on another row. - In
FIG. 9 , up to 20 scenes can be displayed at one time. Other scenes are displayed by "rolling" or shifting the cascade. For example, the first row position 905 consists of scenes 1-4, the second row position 907 consists of scenes 5-8, and so on until the fifth row position 913 of scenes 17-20. Scenes beyond 20 are not displayed. The user can use an input device, such as a keyboard, keypad, touch screen, mouse, remote, and/or the like to send a navigation command to the interface to roll down the cascade. The first row of data tiles can be rolled or shifted out of the current selection, with the second row of scenes 5-8 appearing in the first or front row position 905. Subsequent rows move to "closer" row positions. A new fifth row with scenes 21-24 appears in the furthest, end or back row position 913. - In certain embodiments, the cascade can be rolled until the undisplayed scenes are sequentially displayed to the user. The cascade can stop rolling once the last scene is displayed or it can loop back to the initial first row, with scenes 1-4 appearing in the
fifth row position 913, with the user able to keep rolling the cascade and repeating the display of the scenes. The cascade can also be rolled up, with new scenes appearing in the closest row 905 instead of the farthest or end row 913. It is understood that a fewer or greater number of scenes can be displayed by using fewer or greater numbers of rows and/or columns. In certain embodiments, more than four columns can be displayed. In some embodiments, fewer than four columns can be displayed. In certain embodiments, more than five rows can be displayed. In some embodiments, fewer than five rows can be displayed. The number of rows and columns used can depend on the number of scenes to be displayed on a single screen. - In
FIG. 9 , filters can further be applied to the scenes such that only certain categories of scenes are displayed. In certain embodiments, selectable filters -
FIG. 10 illustrates an exemplary screen display of one embodiment of the movement and selection process of the cascade interface of FIG. 9 . The user can roll down the cascade, causing new images to be displayed. In certain embodiments, when the user holds down a button on the mouse while the mouse pointer is over the cascade, the mouse pointer changes to a gripping hand, indicating that the user has grabbed the cascade and can now roll it. Dragging up can roll the cascade up, while dragging down can roll the cascade down. The cascade can roll through multiple rows depending on how far the user moves the mouse. After the user finishes rolling the cascade, the displayed scenes appear in the normal cascade configuration of FIG. 9 . The user can then select an image. It is understood that other input devices can be used to control the cascade, including, but not limited to, a keyboard, arrow keys, a mouse, a remote control, a touch pad, or the like. - A selected image can display a play icon so that the user can play the scene corresponding to the image. The select screen of
FIG. 10 illustrates one embodiment where selection converts the image to a video clip so that the movie scene is played in the cascade. In some embodiments, hovering a cursor over the scene can cause the scene to automatically play. Selecting a scene can also cause the cascade interface to proceed to another screen, such as the performance screen at Block 810 in FIG. 8 , which displays the washed content from the movie and the selectable actors. Selecting a row and/or clip can also cause the cascade to "fold down" into a single row, with the farther rows being folded into the first row, simulating a stack of cards or a ROLODEX. - It will be recognized that the cascade can operate in various manners and is not limited to displaying scenes. For example, the cascade could display the closest images in the top row instead of the bottom row. The cascade could be rolled horizontally instead of vertically. The user could use a keyboard, touch screen, keypad and/or remote to move the cascade. The user could select the number of rows and columns that make up the cascade. The user could re-order the images by moving images into different positions. Closer and farther images could be indicated using other visual cues, or no cues could be used. The cascade could be used to display titles, DVD covers, album covers, photographs, icons, and/or other images.
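The rolling behavior described for the cascade reduces to simple index arithmetic: with four columns and five visible rows, rolling down by one row shifts every row one position closer and wraps past the last row. The layout constants below come from the example above; the function name and the looping details are illustrative assumptions.

```python
COLUMNS, VISIBLE_ROWS = 4, 5  # layout from the example: scenes 1-4 on the first row

def visible_rows(scenes, roll_offset):
    """Return the rows shown after rolling the cascade down roll_offset rows,
    looping back to the first row once the last scene has been displayed."""
    n_rows = (len(scenes) + COLUMNS - 1) // COLUMNS
    rows = []
    for r in range(VISIBLE_ROWS):
        start = ((roll_offset + r) % n_rows) * COLUMNS
        rows.append(scenes[start:start + COLUMNS])
    return rows

scenes = list(range(1, 25))        # 24 scenes; only 20 are visible at once
print(visible_rows(scenes, 0)[0])  # front row before rolling: [1, 2, 3, 4]
print(visible_rows(scenes, 1)[0])  # after rolling down once: [5, 6, 7, 8]
print(visible_rows(scenes, 1)[4])  # the new back row: [21, 22, 23, 24]
```

Rolling up is the same computation with a negative offset, and a non-looping cascade would simply clamp roll_offset instead of taking it modulo the row count.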
- Referring back to
FIG. 8 , after an image in the cascade is selected, the wireframe 800 moves to Block 810, where a scene to be performed can be selected. FIG. 11 illustrates an exemplary screen display of one embodiment of the performance screen. A cascade interface 1105 displays the available washed content from the selected movie. The cascade interface 1105 of FIG. 11 can behave similarly to the cascade interfaces of FIGS. 9 and 10 . A large display window 1110 can display the washed content scene in a higher resolution. Scene information 1115 associated with the washed content can also be displayed, and may comprise, for example, editable title, description, category, and/or tags associated with the washed content. - After a performance is selected, the
wireframe 800 proceeds to a role selection screen at Block 815. The role selection screen allows a user to select an actor to play, to be added to the scene as an extra, and/or to select a completely washed clip where no actors are left. FIG. 12 illustrates one embodiment of the role selection screen. - Moreover, in certain embodiments, the user can choose to display a larger view of the display window when viewing a scene.
FIG. 13 illustrates one embodiment of a large screen view of the display window. - After selecting a role, the user can print out a script of the lines in the scene.
FIG. 14 illustrates an exemplary screen display of one embodiment of a script printing screen. In certain embodiments, the script can be provided as a PDF, text, Word document, image, and/or other file type. - Referring back to
FIG. 8 , at Block 820, the user is instructed on how to set up the camera. FIG. 15 illustrates an exemplary screen display of one embodiment of the camera setup screen. The instructions can comprise positioning information of the user relative to the camera, green screen and/or monitor. Before recording can proceed, the camera can capture a reference frame of the scene. FIG. 16 illustrates an exemplary screen display of one embodiment of the reference frame setup. The user can be instructed to step out of the scene and press record to capture a reference frame of the background. In certain embodiments, the camera auto focus and/or white exposure may be turned off to get a more consistent background image. - At
Block 825, the wireframe 800 moves to a record video screen, wherein the participant records a video of himself or herself to be combined with the washed content. For instance, the video combining process can include the compositing process 200 described above with reference to FIG. 2 . In some embodiments, another role can be selected, allowing one participant to play multiple roles or more than one participant to play roles in the same scene. - At
Block 830, the user can add an introduction for the combined video. FIG. 17 illustrates an exemplary screen display of one embodiment of an add introduction screen. In certain embodiments, a cascade displays available backgrounds. The background can be a message, advertisement, product placement, logo, still image, combinations of the above, or the like. A display window shows a larger image of the selected background. The user can record an introduction using a process similar to the video compositing process 200 described in FIG. 2 . The user can add metadata to the introduction, such as title, description, category, and/or tags. Once the combined video is complete, the user can upload the video to a central storage for sharing, such as a website. - The user can access the settings screen, at
Block 835, from many of the interface screens. FIGS. 18-20 illustrate exemplary screen displays of one embodiment of the settings screens. The user can determine recording settings, user account settings, and/or parental control settings. - It should be noted that the above developments would accompany any embodiment of the system, whether as a stand-alone hardware device for the living room, a computer-based system of any platform, on video game systems of any video game platform, any mobile technology, any public venue system or kiosk, or any other foreseeable embodiment.
- In certain embodiments, the interactive
role performance system 100 can be provided in a self-contained, mobile unit. The mobile unit can be a movable kiosk, an automobile, and/or a portable device. The mobile units can be set up at college campuses, high schools, movie theaters, retailers and/or other public venues. Users can use the mobile units to create performances without having to purchase their own system. - In some embodiments, the interactive
role performance system 100 is provided in a mobile device, such as a laptop, PDA, cell phone, smart phone, or the like. The mobile device can be used to view, preview, and/or record media content. In some embodiments, the mobile device is connected to an online content database to which the mobile device can upload participant performances and from which it can download washed content and other users' performances. - In certain embodiments, the interactive
role performance system 100 can be provided as a package comprising a green screen, a stand for the screen, a USB camera, a camera hook or clip, a remote, a tripod, and/or a CD or DVD containing software implementing the functions of the interactive role performance system and a selection of prerecorded content. Moreover, systems and methods disclosed herein can be advantageously used with the video compositing systems and methods disclosed in U.S. Pat. No. 7,528,890, issued May 5, 2009, which is hereby incorporated herein by reference to be considered part of this specification. - In some embodiments, the interactive
role performance system 100 can be used in a gaming system. For example, a gamer can use the interactive role performance system 100 to record his actions and insert them into a game. The game could be a music video game where the gamer is playing a musical instrument. The gamer's image could be recorded and inserted into the game as a band member playing a song onstage. The gamer could also be inserted into a music video for the song that the gamer is playing. The interactive role performance system 100 can be used in other types of games, such as a movie making game, a fighting game, and/or a role playing game. - Similarly, the system can be used in a variety of markets or distribution channels, such as education, airlines, prisons, or for gaming, dating, corporate training, professional services, and/or entertainment use, in either the U.S. or internationally. It can be used for advertising or promotions, product placement, viral marketing, on-line sharing, contests, surveys, consumer products, affiliate programs, clothing and apparel, still photographs, avatars, greeting cards, mash-ups, hardware, software, or licensing.
- The content may be, but is not limited to, film, television, music, music videos, documentaries, news, sports, video games, original content, user-generated content, licensed content, royalty free content, any pre-existing moving image or graphic content, still images, digital avatars, and/or online content. For example, a user can replace a sports commentator in a sports clip and provide alternate commentary, giving his own analysis and/or opinion of the game. The content may or may not include audio, dialogue, and/or effects. The content can be in English or any other language.
- The user experience might include, but would not be limited to, a keyboard, mouse, manual, or remote user interface, the use of a wired or wireless webcam, camera positioning via manual or digital means, sound recording by means of one or more wired, wireless, or built-in microphones, accessories such as props, costumes, a colored green screen with or without a stand, no green screen, coin-operated kiosks with or without an operator or operators, automated interface navigation with manual or automatic data entry, automated demos, tutorials, and explanations, any type of compositing (with or without a chroma key), and/or any type of output on any platform.
- Furthermore, in certain embodiments, the systems and methods described herein can advantageously be implemented using computer software, hardware, firmware, or any combination of software, hardware, and firmware. In one embodiment, the system is implemented as a number of software modules that comprise computer executable code for performing the functions described herein. In certain embodiments, the computer-executable code is executed on one or more general purpose computers. However, a skilled artisan will appreciate, in light of this disclosure, that any module that can be implemented using software to be executed on a general purpose computer can also be implemented using a different combination of hardware, software or firmware. For example, such a module can be implemented completely in hardware using a combination of integrated circuits. Alternatively or additionally, such a module can be implemented completely or partially using specialized computers designed to perform the particular functions described herein rather than by general purpose computers.
- Moreover, certain embodiments of the invention are described with reference to methods, apparatus (systems) and computer program products that can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the acts specified herein to transform data from a first state to a second state.
- These computer program instructions can be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the acts specified herein.
- The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the acts specified herein.
- While certain embodiments of the inventions have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure.
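As a concrete, purely hypothetical illustration of the kind of processing such software modules might perform, the sketch below erases a character's rectangular region by extending adjacent background pixels across it and emits the coordinating metadata (first frame, last frame, character position) described in this disclosure. The function names, the grid-of-pixels frame representation, and the simple row-wise fill are invented for illustration; they are not the patented implementation.

```python
# Hypothetical sketch: frames are grids of pixel values; the "character"
# occupies a rectangular box that is erased by extending the background
# pixel bordering the box across each row (a deliberately simple fill).

def erase_character(frame, box):
    """Return a copy of `frame` with `box` (top, left, bottom, right,
    half-open) filled by extending adjacent background pixels."""
    top, left, bottom, right = box
    out = [row[:] for row in frame]          # leave the original frame intact
    for r in range(top, bottom):
        # Pick the background pixel bordering the box on this row.
        fill = out[r][left - 1] if left > 0 else out[r][right]
        for c in range(left, right):
            out[r][c] = fill
    return out

def prepare_content(frames, box, selected):
    """Erase the character from the `selected` frame indices and build
    metadata that directs later insertion of a replacement character."""
    modified = [erase_character(f, box) if i in selected else f
                for i, f in enumerate(frames)]
    metadata = {
        "first_frame": min(selected),
        "last_frame": max(selected),
        "position": {"top": box[0], "left": box[1],
                     "bottom": box[2], "right": box[3]},
    }
    return modified, metadata
```

A production pipeline would use proper inpainting or hand-painted clean plates rather than this one-directional fill, and would store the metadata alongside the modified clip on a common medium.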
Claims (20)
1. A method for preparing media content for use with a video image combining system, the method comprising:
receiving original video content comprising multiple frames having a plurality of original characters associated therewith;
selecting particular frames of the multiple frames displaying at least one of the plurality of original characters;
for each of the particular frames displaying the at least one original character,
receiving the particular frame, wherein the particular frame displays a background image in which the at least one original character occupies a position therein, and
modifying the particular frame to erase the at least one original character, wherein said modifying comprises digitally removing the at least one character by extending the background image of the particular frame to fill the position of the at least one original character to allow for subsequent insertion of a replacement character in the position;
combining the modified particular frames with remaining frames of the multiple frames to create modified video content; and
generating metadata associated with the modified video content, the metadata being configured to direct the subsequent insertion of the replacement character into the modified video content, the metadata indicating at least,
a first frame and a last frame of the particular frames, and
the position the at least one original character occupied in the original video content.
2. The method of claim 1 , further comprising for at least a portion of the particular frames displaying the at least one original character:
generating a matte layer corresponding to the particular frame, wherein the matte layer delineates an element of the particular frame in a foreground of the at least one original character; and
associating the matte layer with the modified video content.
3. The method of claim 1 , wherein the metadata comprises data for generating a visual outline of the position of the at least one original character.
4. The method of claim 1 , further comprising:
receiving original audio content comprising audio associated with the plurality of original characters; and
modifying the original audio content to remove audio associated with the at least one original character to create modified audio content.
5. The method of claim 4 , further comprising generating subtitle data that corresponds to the original audio content.
6. The method of claim 5 , further comprising combining the subtitle data, the modified audio content, and the modified video content to create a bundled media content file.
7. The method of claim 1 , additionally comprising storing the modified video content and the metadata on a common computer-readable medium.
8. The method of claim 1 , wherein the original video content comprises at least one of a portion of a movie, a television show, and a commercial.
9. The method of claim 1 , additionally comprising:
selecting second particular frames of the multiple frames displaying a second one of the plurality of original characters; and
for each of the second particular frames, modifying the second particular frame to erase the second original character, wherein said modifying comprises digitally removing the second original character by extending the background image to fill a position of the second original character to allow for subsequent insertion of a second replacement character in the position of the second original character.
10. A system for preparing media content for use with a video image combining system, the system comprising:
a database configured to store original video content, the original video content comprising multiple frames having a plurality of original characters associated therewith;
an editing module configured to execute on a computing device, the editing module being configured to,
extract consecutive select frames of the multiple frames that display at least one of the plurality of original characters within a background image,
modify the select frames to remove the at least one original character, wherein said modifying comprises extending the background image in each of the select frames over a position of the at least one original character, and
arrange the modified select frames with other frames of the multiple frames to generate modified video content; and
a processing module configured to generate metadata associated with the modified video content to coordinate a subsequent combination of a replacement character image with the modified video content, the metadata further comprising,
first data identifying at least a first frame and a last frame of the select frames, and
second data indicating the position of the at least one original character in the original video content.
11. The system of claim 10 , wherein the metadata is further indicative of at least one of hue, color, and lighting information of the modified video content.
12. The system of claim 10 , wherein the metadata is further indicative of at least one of a camera location, a camera distance, a camera selection, and a camera angle of the modified video content.
13. The system of claim 10 , wherein the metadata comprises an eXtensible Markup Language (XML) file.
14. The system of claim 10 , wherein the second data is indicative of an outline delineating the position of the at least one original character.
15. The system of claim 10 , wherein the metadata further comprises script data associated with a dialogue of the at least one original character in the original video content.
16. The system of claim 10 , wherein the metadata further comprises user instruction data associated with movement of the at least one original character within the original video content.
17. The system of claim 10 , wherein the original video content comprises a video game.
18. A system for preparing media content for use in interactive video entertainment, the system comprising:
means for receiving original video content comprising multiple frames having an original character associated therewith;
means for selecting particular frames of the multiple frames displaying at least the original character within a background image;
means for modifying the particular frames to remove the original character by extending the background image to replace the original character and to allow for subsequent real-time insertion of a replacement character;
means for combining the modified particular frames with remaining frames of the multiple frames to create modified video content; and
means for generating metadata associated with the modified video content and usable for the subsequent real-time insertion of the replacement character, the metadata indicating at least,
a first frame and a last frame of the particular frames, and
a position of the original character within the particular frames of the original video content.
19. The system of claim 18 , further comprising means for generating at least one matte layer corresponding to the particular frames, wherein the at least one matte layer delineates an object of the particular frames in a foreground of the at least one original character.
20. A computer-readable medium for an interactive video system, the computer-readable medium comprising:
modified media content comprising,
a first plurality of frames representing original video content having a background video image, and
a second plurality of consecutive frames representing modified original video content having the background video image from which an image of at least one original character has been replaced by a continuation of the background video image over a position of the at least one original character; and
metadata associated with the modified media content, the metadata comprising,
first data indicating a beginning frame and an end frame of the second plurality of consecutive frames, and
second data indicating the position of the at least one original character.
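Claim 13 states that the metadata may comprise an XML file. As a hypothetical illustration only (the element and attribute names below are invented, not taken from the patent), the frame range, character position, and optional script data of claims 10 and 15 might be serialized as follows:

```python
# Hypothetical metadata serialization; element/attribute names are invented.
import xml.etree.ElementTree as ET

def build_metadata_xml(first_frame, last_frame, position, script=None):
    """Serialize insertion metadata: the first/last frames in which the
    original character appeared, its position, and optional dialogue."""
    root = ET.Element("insertionMetadata")
    ET.SubElement(root, "frames", first=str(first_frame), last=str(last_frame))
    ET.SubElement(root, "position", {k: str(v) for k, v in position.items()})
    if script is not None:
        ET.SubElement(root, "script").text = script   # script data (claim 15)
    return ET.tostring(root, encoding="unicode")

doc = build_metadata_xml(120, 480,
                         {"x": 200, "y": 96, "width": 180, "height": 320},
                         script="Alternate commentary goes here.")
```

Hue, lighting, and camera data of the kind recited in claims 11 and 12 could slot in as further child elements of the same document.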
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/495,548 US20100031149A1 (en) | 2008-07-01 | 2009-06-30 | Content preparation systems and methods for interactive video systems |
US13/869,341 US9143721B2 (en) | 2008-07-01 | 2013-04-24 | Content preparation systems and methods for interactive video systems |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US7736308P | 2008-07-01 | 2008-07-01 | |
US14438309P | 2009-01-13 | 2009-01-13 | |
US12/495,548 US20100031149A1 (en) | 2008-07-01 | 2009-06-30 | Content preparation systems and methods for interactive video systems |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/869,341 Continuation US9143721B2 (en) | 2008-07-01 | 2013-04-24 | Content preparation systems and methods for interactive video systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100031149A1 true US20100031149A1 (en) | 2010-02-04 |
Family
ID=41055364
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/495,517 Active 2031-07-10 US8824861B2 (en) | 2008-07-01 | 2009-06-30 | Interactive systems and methods for video compositing |
US12/495,590 Abandoned US20100035682A1 (en) | 2008-07-01 | 2009-06-30 | User interface systems and methods for interactive video systems |
US12/495,548 Abandoned US20100031149A1 (en) | 2008-07-01 | 2009-06-30 | Content preparation systems and methods for interactive video systems |
US13/869,341 Active 2030-05-16 US9143721B2 (en) | 2008-07-01 | 2013-04-24 | Content preparation systems and methods for interactive video systems |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/495,517 Active 2031-07-10 US8824861B2 (en) | 2008-07-01 | 2009-06-30 | Interactive systems and methods for video compositing |
US12/495,590 Abandoned US20100035682A1 (en) | 2008-07-01 | 2009-06-30 | User interface systems and methods for interactive video systems |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/869,341 Active 2030-05-16 US9143721B2 (en) | 2008-07-01 | 2013-04-24 | Content preparation systems and methods for interactive video systems |
Country Status (3)
Country | Link |
---|---|
US (4) | US8824861B2 (en) |
TW (1) | TW201005583A (en) |
WO (1) | WO2010002921A1 (en) |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090132900A1 (en) * | 2007-11-20 | 2009-05-21 | Steven Zielke | Method for producing and outputting web pages via a computer network, and web page produced thereby |
US20100008639A1 (en) * | 2008-07-08 | 2010-01-14 | Sceneplay, Inc. | Media Generating System and Method |
US20100027961A1 (en) * | 2008-07-01 | 2010-02-04 | Yoostar Entertainment Group, Inc. | Interactive systems and methods for video compositing |
US20100142928A1 (en) * | 2005-08-06 | 2010-06-10 | Quantum Signal, Llc | Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream |
US20100162315A1 (en) * | 2008-12-24 | 2010-06-24 | Samsung Electronics Co., Ltd. | Program information displaying method and display apparatus using the same |
US20100278450A1 (en) * | 2005-06-08 | 2010-11-04 | Mike Arthur Derrenberger | Method, Apparatus And System For Alternate Image/Video Insertion |
US20100287491A1 (en) * | 2009-04-06 | 2010-11-11 | Robby Gurdan | Desktop control for a host apparatus of a digital multimedia network |
US20110025918A1 (en) * | 2003-05-02 | 2011-02-03 | Megamedia, Llc | Methods and systems for controlling video compositing in an interactive entertainment system |
US20110219076A1 (en) * | 2010-03-04 | 2011-09-08 | Tomas Owen Roope | System and method for integrating user generated content |
WO2011143615A1 (en) * | 2010-05-14 | 2011-11-17 | Robert Patton Stribling | Systems and methods for providing event-related video sharing services |
US20120030376A1 (en) * | 2010-07-30 | 2012-02-02 | Verizon Patent And Licensing Inc. | User-based prioritization for content transcoding |
WO2012023951A1 (en) * | 2010-08-16 | 2012-02-23 | Boardwalk Technology Group, Llc | Mobile replacement-dialogue recording system |
WO2012051585A1 (en) * | 2010-10-14 | 2012-04-19 | Fixmaster, Inc. | System and method for creating and analyzing interactive experiences |
US20120151341A1 (en) * | 2010-12-10 | 2012-06-14 | Ko Steve S | Interactive Screen Saver Method and Apparatus |
US20120190456A1 (en) * | 2011-01-21 | 2012-07-26 | Rogers Henk B | Systems and methods for providing an interactive multiplayer story |
US20120265859A1 (en) * | 2011-04-14 | 2012-10-18 | Audish Ltd. | Synchronized Video System |
US20120281114A1 (en) * | 2011-05-03 | 2012-11-08 | Ivi Media Llc | System, method and apparatus for providing an adaptive media experience |
US20130176379A1 (en) * | 2010-12-02 | 2013-07-11 | Polycom, Inc. | Removing a Self Image From a Continuous Presence Video Image |
US20140022396A1 (en) * | 2012-07-20 | 2014-01-23 | Geoffrey Dowd | Systems and Methods for Live View Photo Layer in Digital Imaging Applications |
WO2014091484A1 (en) * | 2012-12-11 | 2014-06-19 | Scooltv, Inc. | A system and method for creating a video |
US9031375B2 (en) | 2013-04-18 | 2015-05-12 | Rapt Media, Inc. | Video frame still image sequences |
EP2711851A3 (en) * | 2012-09-25 | 2016-07-27 | Samsung Electronics Co., Ltd | Display apparatus and control method thereof |
WO2016134415A1 (en) * | 2015-02-23 | 2016-09-01 | Zuma Beach Ip Pty Ltd | Generation of combined videos |
US20170024098A1 (en) * | 2014-10-25 | 2017-01-26 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US9912721B2 (en) | 2010-05-14 | 2018-03-06 | Highlight Broadcast Network, Llc | Systems and methods for providing event-related video sharing services |
US10165245B2 (en) | 2012-07-06 | 2018-12-25 | Kaltura, Inc. | Pre-fetching video content |
US20190012053A1 (en) * | 2017-07-07 | 2019-01-10 | Open Text Sa Ulc | Systems and methods for content sharing through external systems |
US10410266B2 (en) | 2012-08-08 | 2019-09-10 | Lowe's Companies, Inc. | Systems and methods for recording transaction and product customization information |
US11081140B1 (en) * | 2020-06-24 | 2021-08-03 | Facebook, Inc. | Systems and methods for generating templates for short-form media content |
US11145109B1 (en) * | 2020-10-05 | 2021-10-12 | Weta Digital Limited | Method for editing computer-generated images to maintain alignment between objects specified in frame space and objects specified in scene space |
US11347387B1 (en) * | 2021-06-30 | 2022-05-31 | At&T Intellectual Property I, L.P. | System for fan-based creation and composition of cross-franchise content |
US20220172401A1 (en) * | 2020-11-27 | 2022-06-02 | Canon Kabushiki Kaisha | Image processing apparatus, image generation method, and storage medium |
US20220286758A1 (en) * | 2020-07-17 | 2022-09-08 | Beijing Bytedance Network Technology Co., Ltd. | Video recording method, apparatus, electronic device and non-transitory storage medium |
US20230076000A1 (en) * | 2021-08-31 | 2023-03-09 | JBF Interlude 2009 LTD | Shader-based dynamic video manipulation |
US11653072B2 (en) * | 2018-09-12 | 2023-05-16 | Zuma Beach Ip Pty Ltd | Method and system for generating interactive media content |
US20230186015A1 (en) * | 2014-10-25 | 2023-06-15 | Yieldmo, Inc. | Methods for serving interactive content to a user |
Families Citing this family (147)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8074248B2 (en) | 2005-07-26 | 2011-12-06 | Activevideo Networks, Inc. | System and method for providing video content associated with a source image to a television in a communication network |
US9826197B2 (en) | 2007-01-12 | 2017-11-21 | Activevideo Networks, Inc. | Providing television broadcasts over a managed network and interactive content over an unmanaged network to a client device |
WO2008088741A2 (en) | 2007-01-12 | 2008-07-24 | Ictv, Inc. | Interactive encoded content system including object models for viewing on a remote device |
WO2009120616A1 (en) | 2008-03-25 | 2009-10-01 | Wms Gaming, Inc. | Generating casino floor maps |
KR20100028344A (en) * | 2008-09-04 | 2010-03-12 | Samsung Electronics Co., Ltd. | Method and apparatus for editing image of portable terminal |
CA2774649A1 (en) * | 2008-09-18 | 2010-03-25 | Screen Test Studios, Llc | Interactive entertainment system for recording performance |
US8549404B2 (en) | 2009-04-30 | 2013-10-01 | Apple Inc. | Auditioning tools for a media editing application |
US9564173B2 (en) | 2009-04-30 | 2017-02-07 | Apple Inc. | Media editing application for auditioning different types of media clips |
US8881013B2 (en) * | 2009-04-30 | 2014-11-04 | Apple Inc. | Tool for tracking versions of media sections in a composite presentation |
US20100293465A1 (en) * | 2009-05-14 | 2010-11-18 | Kleinschmidt Paul E | Teleprompter System, Method, And Device |
KR101377235B1 (en) * | 2009-06-13 | 2014-04-10 | Rolr, Inc. | System for sequential juxtaposition of separately recorded scenes |
US10636413B2 (en) | 2009-06-13 | 2020-04-28 | Rolr, Inc. | System for communication skills training using juxtaposition of recorded takes |
US9424534B2 (en) * | 2009-07-19 | 2016-08-23 | Infomedia Services Limited | Voting system with content |
KR101624648B1 (en) * | 2009-08-05 | 2016-05-26 | Samsung Electronics Co., Ltd. | Digital image signal processing method, medium for recording the method, digital image signal processing apparatus |
KR101437626B1 (en) * | 2009-08-12 | 2014-09-03 | Thomson Licensing | System and method for region-of-interest-based artifact reduction in image sequences |
US9165605B1 (en) * | 2009-09-11 | 2015-10-20 | Lindsay Friedman | System and method for personal floating video |
KR101648339B1 (en) * | 2009-09-24 | 2016-08-17 | Samsung Electronics Co., Ltd. | Apparatus and method for providing service using a sensor and image recognition in portable terminal |
US9981193B2 (en) | 2009-10-27 | 2018-05-29 | Harmonix Music Systems, Inc. | Movement based recognition and evaluation |
US8702485B2 (en) | 2010-06-11 | 2014-04-22 | Harmonix Music Systems, Inc. | Dance game and tutorial |
WO2011056657A2 (en) * | 2009-10-27 | 2011-05-12 | Harmonix Music Systems, Inc. | Gesture-based user interface |
US9628722B2 (en) * | 2010-03-30 | 2017-04-18 | Personify, Inc. | Systems and methods for embedding a foreground video into a background feed based on a control input |
US9358456B1 (en) | 2010-06-11 | 2016-06-07 | Harmonix Music Systems, Inc. | Dance competition game |
US20120017150A1 (en) * | 2010-07-15 | 2012-01-19 | MySongToYou, Inc. | Creating and disseminating of user generated media over a network |
US8381246B2 (en) * | 2010-08-27 | 2013-02-19 | Telefonaktiebolaget L M Ericsson (Publ) | Methods and apparatus for providing electronic program guides |
US8649592B2 (en) | 2010-08-30 | 2014-02-11 | University Of Illinois At Urbana-Champaign | System for background subtraction with 3D camera |
US8463773B2 (en) * | 2010-09-10 | 2013-06-11 | Verizon Patent And Licensing Inc. | Social media organizer for instructional media |
US9264585B2 (en) * | 2010-09-22 | 2016-02-16 | Cisco Technology Inc. | Enriched digital photographs |
US9542975B2 (en) * | 2010-10-25 | 2017-01-10 | Sony Interactive Entertainment Inc. | Centralized database for 3-D and other information in videos |
US9773059B2 (en) * | 2010-11-09 | 2017-09-26 | Storagedna, Inc. | Tape data management |
US9172943B2 (en) * | 2010-12-07 | 2015-10-27 | At&T Intellectual Property I, L.P. | Dynamic modification of video content at a set-top box device |
JP5706718B2 (en) * | 2011-03-02 | 2015-04-22 | KDDI Corporation | Movie synthesis system and method, movie synthesis program and storage medium thereof |
EP2695388B1 (en) | 2011-04-07 | 2017-06-07 | ActiveVideo Networks, Inc. | Reduction of latency in video distribution networks using adaptive bit rates |
KR20120119504A (en) * | 2011-04-21 | 2012-10-31 | Electronics and Telecommunications Research Institute | System for servicing game streaming according to game client device and method |
US8823745B2 (en) * | 2011-06-02 | 2014-09-02 | Yoostar Entertainment Group, Inc. | Image processing based on depth information and color data of a scene |
US9210208B2 (en) * | 2011-06-21 | 2015-12-08 | The Nielsen Company (Us), Llc | Monitoring streaming media content |
US8966402B2 (en) * | 2011-06-29 | 2015-02-24 | National Taipei University Of Education | System and method for editing interactive three-dimension multimedia, and online editing and exchanging architecture and method thereof |
TWI575457B (en) * | 2011-07-06 | 2017-03-21 | ç§ççČ | System and method for online editing and exchanging interactive three dimension multimedia, and computer-readable medium thereof |
US8943396B2 (en) | 2011-07-18 | 2015-01-27 | At&T Intellectual Property I, Lp | Method and apparatus for multi-experience adaptation of media content |
US9084001B2 (en) | 2011-07-18 | 2015-07-14 | At&T Intellectual Property I, Lp | Method and apparatus for multi-experience metadata translation of media content with metadata |
US8942412B2 (en) | 2011-08-11 | 2015-01-27 | At&T Intellectual Property I, Lp | Method and apparatus for controlling multi-experience translation of media content |
US9237362B2 (en) | 2011-08-11 | 2016-01-12 | At&T Intellectual Property I, Lp | Method and apparatus for multi-experience translation of media content with sensor sharing |
US20130073964A1 (en) * | 2011-09-20 | 2013-03-21 | Brian Meaney | Outputting media presentations using roles assigned to content |
US9240215B2 (en) * | 2011-09-20 | 2016-01-19 | Apple Inc. | Editing operations facilitated by metadata |
JP5882683B2 (en) * | 2011-11-02 | 2016-03-09 | Canon Inc. | Information processing apparatus and method |
US20130279605A1 (en) * | 2011-11-30 | 2013-10-24 | Scott A. Krig | Perceptual Media Encoding |
CN104169941A (en) * | 2011-12-01 | 2014-11-26 | è±çčć æ怫çčç§ææéèŽŁä»»ć Źćž | Automatic tracking matte system |
US9626798B2 (en) | 2011-12-05 | 2017-04-18 | At&T Intellectual Property I, L.P. | System and method to digitally replace objects in images or video |
EP2815582B1 (en) * | 2012-01-09 | 2019-09-04 | ActiveVideo Networks, Inc. | Rendering of an interactive lean-backward user interface on a television |
US9363540B2 (en) | 2012-01-12 | 2016-06-07 | Comcast Cable Communications, Llc | Methods and systems for content control |
WO2013106916A1 (en) * | 2012-01-20 | 2013-07-25 | Karaoke Reality Video Inc. | Interactive audio/video system and method |
US9800945B2 (en) | 2012-04-03 | 2017-10-24 | Activevideo Networks, Inc. | Class-based intelligent multiplexing over unmanaged networks |
US9123084B2 (en) | 2012-04-12 | 2015-09-01 | Activevideo Networks, Inc. | Graphical application integration with MPEG objects |
US8490006B1 (en) * | 2012-09-04 | 2013-07-16 | State Farm Mutual Automobile Insurance Company | Scene creation for building automation systems |
US20140089804A1 (en) * | 2012-09-24 | 2014-03-27 | Burkiberk Ltd. | Interactive creation of a movie |
TWI506576B (en) * | 2012-11-06 | 2015-11-01 | Hung Yao Yeh | A digital media processing device and karaoke account system |
US8948509B2 (en) * | 2012-11-15 | 2015-02-03 | Adobe Systems Incorporated | Blending with multiple blend modes for image manipulation |
US10220303B1 (en) | 2013-03-15 | 2019-03-05 | Harmonix Music Systems, Inc. | Gesture-based music game |
US9292280B2 (en) | 2013-03-15 | 2016-03-22 | Google Inc. | Systems and methods for multi-tiered format registration for applications |
US10275128B2 (en) | 2013-03-15 | 2019-04-30 | Activevideo Networks, Inc. | Multiple-mode system and method for providing user selectable video content |
US10008238B2 (en) | 2013-05-02 | 2018-06-26 | Waterston Entertainment (Pty) Ltd | System and method for incorporating digital footage into a digital cinematographic template |
CA2911553C (en) | 2013-05-06 | 2021-06-08 | Noo Inc. | Audio-video compositing and effects |
US9298778B2 (en) * | 2013-05-14 | 2016-03-29 | Google Inc. | Presenting related content in a stream of content |
US9489430B2 (en) * | 2013-05-14 | 2016-11-08 | Google Inc. | System and method for identifying applicable third-party applications to associate with a file |
US9805113B2 (en) * | 2013-05-15 | 2017-10-31 | International Business Machines Corporation | Intelligent indexing |
US9367568B2 (en) * | 2013-05-15 | 2016-06-14 | Facebook, Inc. | Aggregating tags in images |
US9342210B2 (en) * | 2013-05-17 | 2016-05-17 | Public Picture LLC | Video mixing method and system |
US9219922B2 (en) | 2013-06-06 | 2015-12-22 | Activevideo Networks, Inc. | System and method for exploiting scene graph information in construction of an encoded video sequence |
US9294785B2 (en) | 2013-06-06 | 2016-03-22 | Activevideo Networks, Inc. | System and method for exploiting scene graph information in construction of an encoded video sequence |
US9326047B2 (en) | 2013-06-06 | 2016-04-26 | Activevideo Networks, Inc. | Overlay rendering of user interface onto source video |
JP6260809B2 (en) * | 2013-07-10 | 2018-01-17 | Sony Corporation | Display device, information processing method, and program |
US9575621B2 (en) | 2013-08-26 | 2017-02-21 | Venuenext, Inc. | Game event display with scroll bar and play event icons |
US10500479B1 (en) | 2013-08-26 | 2019-12-10 | Venuenext, Inc. | Game state-sensitive selection of media sources for media coverage of a sporting event |
US10282068B2 (en) * | 2013-08-26 | 2019-05-07 | Venuenext, Inc. | Game event display with a scrollable graphical game play feed |
WO2015031886A1 (en) * | 2013-09-02 | 2015-03-05 | Thankavel Suresh T | Ar-book |
US10346624B2 (en) | 2013-10-10 | 2019-07-09 | Elwha Llc | Methods, systems, and devices for obscuring entities depicted in captured images |
US10289863B2 (en) | 2013-10-10 | 2019-05-14 | Elwha Llc | Devices, methods, and systems for managing representations of entities through use of privacy beacons |
US9799036B2 (en) | 2013-10-10 | 2017-10-24 | Elwha Llc | Devices, methods, and systems for managing representations of entities through use of privacy indicators |
US10013564B2 (en) | 2013-10-10 | 2018-07-03 | Elwha Llc | Methods, systems, and devices for handling image capture devices and captured images |
US20150104004A1 (en) | 2013-10-10 | 2015-04-16 | Elwha Llc | Methods, systems, and devices for delivering image data from captured images to devices |
US20150106195A1 (en) | 2013-10-10 | 2015-04-16 | Elwha Llc | Methods, systems, and devices for handling inserted data into captured images |
CN104581398B (en) * | 2013-10-15 | 2019-03-15 | Fu Tai Hua Industry (Shenzhen) Co., Ltd. | Data cached management system and method |
US9578377B1 (en) | 2013-12-03 | 2017-02-21 | Venuenext, Inc. | Displaying a graphical game play feed based on automatically detecting bounds of plays or drives using game related data sources |
KR102138521B1 (en) * | 2013-12-12 | 2020-07-28 | LG Electronics Inc. | Mobile terminal and method for controlling the same |
US9774548B2 (en) * | 2013-12-18 | 2017-09-26 | Personify, Inc. | Integrating user personas with chat sessions |
US9485433B2 (en) | 2013-12-31 | 2016-11-01 | Personify, Inc. | Systems and methods for iterative adjustment of video-capture settings based on identified persona |
US9414016B2 (en) | 2013-12-31 | 2016-08-09 | Personify, Inc. | System and methods for persona identification using combined probability maps |
US9233308B2 (en) * | 2014-01-02 | 2016-01-12 | Ubitus Inc. | System and method for delivering media over network |
US9788029B2 (en) | 2014-04-25 | 2017-10-10 | Activevideo Networks, Inc. | Intelligent multiplexing using class-based, multi-dimensioned decision logic for managed networks |
US10600245B1 (en) | 2014-05-28 | 2020-03-24 | Lucasfilm Entertainment Company Ltd. | Navigating a virtual environment of a media content item |
US9710972B2 (en) * | 2014-05-30 | 2017-07-18 | Lucasfilm Entertainment Company Ltd. | Immersion photography with dynamic matte screen |
US10264320B2 (en) * | 2014-06-10 | 2019-04-16 | Microsoft Technology Licensing, Llc | Enabling user interactions with video segments |
US10182187B2 (en) | 2014-06-16 | 2019-01-15 | Playvuu, Inc. | Composing real-time processed video content with a mobile device |
US9225527B1 (en) | 2014-08-29 | 2015-12-29 | Coban Technologies, Inc. | Hidden plug-in storage drive for data integrity |
US9307317B2 (en) | 2014-08-29 | 2016-04-05 | Coban Technologies, Inc. | Wireless programmable microphone apparatus and system for integrated surveillance system devices |
CA2964944A1 (en) * | 2014-10-23 | 2016-04-28 | Visa International Service Association | Algorithm for user interface background selection |
KR102128319B1 (en) * | 2014-10-24 | 2020-07-09 | SK Telecom Co., Ltd. | Method and Apparatus for Playing Video by Using Pan-Tilt-Zoom Camera |
US9697595B2 (en) | 2014-11-26 | 2017-07-04 | Adobe Systems Incorporated | Content aware fill based on similar images |
CN104537703A (en) * | 2014-12-01 | 2015-04-22 | Suzhou Lemi Information Technology Co., Ltd. | Mobile phone game animation size reducing method |
CN104537702A (en) * | 2014-12-01 | 2015-04-22 | Suzhou Lemi Information Technology Co., Ltd. | Animation simulation method for mobile phone software |
US9754355B2 (en) | 2015-01-09 | 2017-09-05 | Snap Inc. | Object recognition based photo filters |
EP3276943A4 (en) * | 2015-03-26 | 2018-11-21 | Sony Corporation | Information processing apparatus, information processing method, and program |
US10026450B2 (en) | 2015-03-31 | 2018-07-17 | Jaguar Land Rover Limited | Content processing and distribution system and method |
US9402050B1 (en) | 2015-05-07 | 2016-07-26 | SnipMe, Inc. | Media content creation application |
US9329748B1 (en) | 2015-05-07 | 2016-05-03 | SnipMe, Inc. | Single media player simultaneously incorporating multiple different streams for linked content |
US9563962B2 (en) | 2015-05-19 | 2017-02-07 | Personify, Inc. | Methods and systems for assigning pixels distance-cost values using a flood fill technique |
EP3099081B1 (en) * | 2015-05-28 | 2020-04-29 | Samsung Electronics Co., Ltd. | Display apparatus and control method thereof |
CN105187885A (en) * | 2015-07-27 | 2015-12-23 | 怩èèæșïŒćäșŹïŒç§ææéć Źćž | Method and device for generating interactive information of interactive TV system |
DE102015112435A1 (en) * | 2015-07-29 | 2017-02-02 | Petter.Letter Gmbh | Method and device for providing individualized video films |
US9916497B2 (en) * | 2015-07-31 | 2018-03-13 | Sony Corporation | Automated embedding and blending head images |
US20170069349A1 (en) * | 2015-09-07 | 2017-03-09 | Bigvu Inc | Apparatus and method for generating a video file by a presenter of the video |
US10276210B2 (en) | 2015-11-18 | 2019-04-30 | International Business Machines Corporation | Video enhancement |
US10846895B2 (en) * | 2015-11-23 | 2020-11-24 | Anantha Pradeep | Image processing mechanism |
US10165171B2 (en) | 2016-01-22 | 2018-12-25 | Coban Technologies, Inc. | Systems, apparatuses, and methods for controlling audiovisual apparatuses |
CN107204034B (en) * | 2016-03-17 | 2019-09-13 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and terminal |
US9641818B1 (en) * | 2016-04-01 | 2017-05-02 | Adobe Systems Incorporated | Kinetic object removal from camera preview image |
US10370102B2 (en) | 2016-05-09 | 2019-08-06 | Coban Technologies, Inc. | Systems, apparatuses and methods for unmanned aerial vehicle |
US10789840B2 (en) | 2016-05-09 | 2020-09-29 | Coban Technologies, Inc. | Systems, apparatuses and methods for detecting driving behavior and triggering actions based on detected driving behavior |
US10152858B2 (en) | 2016-05-09 | 2018-12-11 | Coban Technologies, Inc. | Systems, apparatuses and methods for triggering actions based on data capture and characterization |
CN112188036B (en) * | 2016-06-09 | 2023-06-09 | Google LLC | Taking pictures through visual impairment |
US9883155B2 (en) | 2016-06-14 | 2018-01-30 | Personify, Inc. | Methods and systems for combining foreground video and background video using chromatic matching |
US10706889B2 (en) * | 2016-07-07 | 2020-07-07 | Oath Inc. | Selective content insertion into areas of media objects |
US10311917B2 (en) * | 2016-07-21 | 2019-06-04 | Disney Enterprises, Inc. | Systems and methods for featuring a person in a video using performance data associated with the person |
US10367865B2 (en) | 2016-07-28 | 2019-07-30 | Verizon Digital Media Services Inc. | Encodingless transmuxing |
US9881207B1 (en) | 2016-10-25 | 2018-01-30 | Personify, Inc. | Methods and systems for real-time user extraction using deep learning networks |
US20180376225A1 (en) * | 2017-06-23 | 2018-12-27 | Metrolime, Inc. | Music video recording kiosk |
US10270986B2 (en) | 2017-09-22 | 2019-04-23 | Feedback, LLC | Near-infrared video compositing |
US10674096B2 (en) | 2017-09-22 | 2020-06-02 | Feedback, LLC | Near-infrared video compositing |
US10560645B2 (en) | 2017-09-22 | 2020-02-11 | Feedback, LLC | Immersive video environment using near-infrared video compositing |
JP7215690B2 (en) | 2018-01-11 | 2023-01-31 | End Cue, LLC | Script writing and content generation tools and improved operation of same |
US10896294B2 (en) * | 2018-01-11 | 2021-01-19 | End Cue, Llc | Script writing and content generation tools and improved operation of same |
CN112805675A (en) * | 2018-05-21 | 2021-05-14 | æćŠć Źćž | Non-linear media segment capture and editing platform |
US10885691B1 (en) * | 2018-11-08 | 2021-01-05 | Electronic Arts Inc. | Multiple character motion capture |
CN109597562A (en) * | 2018-11-30 | 2019-04-09 | æ·±ćłćžäžçç§ææéć Źćž | Multipoint operation processing method and system for a single-touch screen |
CN109587549B (en) * | 2018-12-05 | 2021-08-13 | Guangzhou Kugou Computer Technology Co., Ltd. | Video recording method, device, terminal and storage medium |
CN110047334A (en) * | 2019-04-29 | 2019-07-23 | ćć·éżèčæèČç§ææéć Źćž | Tutoring system based on subject mode |
SI25621A (en) * | 2019-06-24 | 2019-10-30 | Moralot Storitveno Podjetje D O O | System and process for preparing interactive electronic objects by video processing, for use in systems of electronic devices and/or electronic toys, and interactive electronic objects prepared therefrom |
US11270415B2 (en) | 2019-08-22 | 2022-03-08 | Adobe Inc. | Image inpainting with geometric and photometric transformations |
US11792246B2 (en) * | 2019-10-23 | 2023-10-17 | Inner-City Movement, Inc. | System and method for coordinating live acting performances at venues |
US11302038B2 (en) * | 2019-11-19 | 2022-04-12 | Brightline Interactive, LLC | System and method for generating an augmented reality experience |
US11869135B2 (en) * | 2020-01-16 | 2024-01-09 | Fyusion, Inc. | Creating action shot video from multi-view capture data |
CN111726536B (en) * | 2020-07-03 | 2024-01-05 | Tencent Technology (Shenzhen) Co., Ltd. | Video generation method, device, storage medium and computer equipment |
CN112307925B (en) * | 2020-10-23 | 2023-11-28 | Tencent Technology (Shenzhen) Co., Ltd. | Image detection method, image display method, related device and storage medium |
CA3198731A1 (en) * | 2020-11-13 | 2022-05-19 | Sreekanth Sunil THANKAMUSHY | Gameplay evaluation method and system |
US11800056B2 (en) | 2021-02-11 | 2023-10-24 | Logitech Europe S.A. | Smart webcam system |
CN112995600A (en) * | 2021-02-26 | 2021-06-18 | ć€©æŽ„ćŸźèżȘć ç§ææéć Źćž | Integrated video and audio acquisition method and system based on software and hardware |
CN113778419B (en) * | 2021-08-09 | 2023-06-02 | Beijing Youzhuju Network Technology Co., Ltd. | Method and device for generating multimedia data, readable medium and electronic equipment |
TWI803404B (en) * | 2022-08-02 | 2023-05-21 | ćŽć±±ç§æć€§ćž | System and method for producing composite streaming video for handicraft display |
Citations (89)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4122490A (en) * | 1976-11-09 | 1978-10-24 | Lish Charles A | Digital chroma-key circuitry |
US4599611A (en) * | 1982-06-02 | 1986-07-08 | Digital Equipment Corporation | Interactive computer-based information display system |
US4688105A (en) * | 1985-05-10 | 1987-08-18 | Bloch Arthur R | Video recording system |
US4800432A (en) * | 1986-10-24 | 1989-01-24 | The Grass Valley Group, Inc. | Video Difference key generator |
US4827344A (en) * | 1985-02-28 | 1989-05-02 | Intel Corporation | Apparatus for inserting part of one video image into another video image |
US4891748A (en) * | 1986-05-30 | 1990-01-02 | Mann Ralph V | System and method for teaching physical skills |
US5099337A (en) * | 1989-10-31 | 1992-03-24 | Cury Brian L | Method and apparatus for producing customized video recordings |
US5144454A (en) * | 1989-10-31 | 1992-09-01 | Cury Brian L | Method and apparatus for producing customized video recordings |
US5151793A (en) * | 1990-02-26 | 1992-09-29 | Pioneer Electronic Corporation | Recording medium playing apparatus |
US5184295A (en) * | 1986-05-30 | 1993-02-02 | Mann Ralph V | System and method for teaching physical skills |
US5249967A (en) * | 1991-07-12 | 1993-10-05 | George P. O'Leary | Sports technique video training device |
US5381184A (en) * | 1991-12-30 | 1995-01-10 | U.S. Philips Corporation | Method of and arrangement for inserting a background signal into parts of a foreground signal fixed by a predetermined key color |
US5428401A (en) * | 1991-05-09 | 1995-06-27 | Quantel Limited | Improvements in or relating to video image keying systems and methods |
US5500684A (en) * | 1993-12-10 | 1996-03-19 | Matsushita Electric Industrial Co., Ltd. | Chroma-key live-video compositing circuit |
US5553864A (en) * | 1992-05-22 | 1996-09-10 | Sitrick; David H. | User image integration into audiovisual presentation system and methodology |
US5566251A (en) * | 1991-09-18 | 1996-10-15 | David Sarnoff Research Center, Inc | Video merging employing pattern-key insertion |
US5681223A (en) * | 1993-08-20 | 1997-10-28 | Inventures Inc | Training video method and display |
US5751337A (en) * | 1994-09-19 | 1998-05-12 | Telesuite Corporation | Teleconferencing method and system for providing face-to-face, non-animated teleconference environment |
US5764306A (en) * | 1997-03-18 | 1998-06-09 | The Metaphor Group | Real-time method of digitally altering a video data stream to remove portions of the original image and substitute elements to create a new image |
US5861881A (en) * | 1991-11-25 | 1999-01-19 | Actv, Inc. | Interactive computer system for providing an interactive presentation with personalized video, audio and graphics responses for multiple viewers |
US5953076A (en) * | 1995-06-16 | 1999-09-14 | Princeton Video Image, Inc. | System and method of real time insertions into video using adaptive occlusion with a synthetic reference image |
US6061532A (en) * | 1995-02-24 | 2000-05-09 | Eastman Kodak Company | Animated image presentations with personalized digitized images |
US6072933A (en) * | 1995-03-06 | 2000-06-06 | Green; David | System for producing personalized video recordings |
US6072537A (en) * | 1997-01-06 | 2000-06-06 | U-R Star Ltd. | Systems for producing personalized video clips |
US6086380A (en) * | 1998-08-20 | 2000-07-11 | Chu; Chia Chen | Personalized karaoke recording studio |
US6122013A (en) * | 1994-04-29 | 2000-09-19 | Orad, Inc. | Chromakeying system |
US6126449A (en) * | 1999-03-25 | 2000-10-03 | Swing Lab | Interactive motion training device and method |
US6285408B1 (en) * | 1998-04-09 | 2001-09-04 | Lg Electronics Inc. | Digital audio/video system and method integrates the operations of several digital devices into one simplified system |
US6283858B1 (en) * | 1997-02-25 | 2001-09-04 | Bgk International Incorporated | Method for manipulating images |
US6335765B1 (en) * | 1999-11-08 | 2002-01-01 | Weather Central, Inc. | Virtual presentation system and method |
US20020007718A1 (en) * | 2000-06-20 | 2002-01-24 | Isabelle Corset | Karaoke system |
US6351265B1 (en) * | 1993-10-15 | 2002-02-26 | Personalized Online Photo Llc | Method and apparatus for producing an electronic image |
US6350199B1 (en) * | 1999-03-16 | 2002-02-26 | International Game Technology | Interactive gaming machine and method with customized game screen presentation |
US20020051009A1 (en) * | 2000-07-26 | 2002-05-02 | Takashi Ida | Method and apparatus for extracting object from video image |
US6384821B1 (en) * | 1999-10-04 | 2002-05-07 | International Business Machines Corporation | Method and apparatus for delivering 3D graphics in a networked environment using transparent video |
US6425825B1 (en) * | 1992-05-22 | 2002-07-30 | David H. Sitrick | User image integration and tracking for an audiovisual presentation system and methodology |
US20020130889A1 (en) * | 2000-07-18 | 2002-09-19 | David Blythe | System, method, and computer program product for real time transparency-based compositing |
US6522787B1 (en) * | 1995-07-10 | 2003-02-18 | Sarnoff Corporation | Method and system for rendering and combining images to form a synthesized view of a scene containing image information from a second image |
US20030051255A1 (en) * | 1993-10-15 | 2003-03-13 | Bulman Richard L. | Object customization and presentation system |
US6535269B2 (en) * | 2000-06-30 | 2003-03-18 | Gary Sherman | Video karaoke system and method of use |
US20030108329A1 (en) * | 2001-12-12 | 2003-06-12 | Meric Adriansen | Advertising method and system |
US6624853B1 (en) * | 1998-03-20 | 2003-09-23 | Nurakhmed Nurislamovich Latypov | Method and system for creating video programs with interaction of an actor with objects of a virtual space and the objects to one another |
US20040100581A1 (en) * | 2002-11-27 | 2004-05-27 | Princeton Video Image, Inc. | System and method for inserting live video into pre-produced video |
US6750919B1 (en) * | 1998-01-23 | 2004-06-15 | Princeton Video Image, Inc. | Event linked insertion of indicia into video |
US20040152058A1 (en) * | 2002-06-11 | 2004-08-05 | Browne H. Lee | Video instructional system and method for teaching motor skills |
US20040218100A1 (en) * | 2003-05-02 | 2004-11-04 | Staker Allan Robert | Interactive system and method for video compositing |
US20050028193A1 (en) * | 2002-01-02 | 2005-02-03 | Candelore Brant L. | Macro-block based content replacement by PID mapping |
US6881067B2 (en) * | 1999-01-05 | 2005-04-19 | Personal Pro, Llc | Video instructional system and method for teaching motor skills |
US20050155086A1 (en) * | 2001-11-13 | 2005-07-14 | Microsoft Corporation | Method and apparatus for the display of still images from image files |
US20050151743A1 (en) * | 2000-11-27 | 2005-07-14 | Sitrick David H. | Image tracking and substitution system and methodology for audio-visual presentations |
US6919892B1 (en) * | 2002-08-14 | 2005-07-19 | Avaworks, Incorporated | Photo realistic talking head creation system and method |
US6937295B2 (en) * | 2001-05-07 | 2005-08-30 | Junaid Islam | Realistic replication of a live performance at remote locations |
US20050215319A1 (en) * | 2004-03-23 | 2005-09-29 | Harmonix Music Systems, Inc. | Method and apparatus for controlling a three-dimensional character in a three-dimensional gaming environment |
US6954498B1 (en) * | 2000-10-24 | 2005-10-11 | Objectvideo, Inc. | Interactive video manipulation |
US7015978B2 (en) * | 1999-12-13 | 2006-03-21 | Princeton Video Image, Inc. | System and method for real time insertion into video with occlusion on areas containing multiple colors |
US7027054B1 (en) * | 2002-08-14 | 2006-04-11 | Avaworks, Incorporated | Do-it-yourself photo realistic talking head creation system and method |
US7034537B2 (en) * | 2001-03-14 | 2006-04-25 | Hitachi Medical Corporation | MRI apparatus correcting vibratory static magnetic field fluctuations, by utilizing the static magnetic fluctuation itself |
US20060136979A1 (en) * | 2004-11-04 | 2006-06-22 | Staker Allan R | Apparatus and methods for encoding data for video compositing |
US7079176B1 (en) * | 1991-11-25 | 2006-07-18 | Actv, Inc. | Digital interactive system for providing full interactivity with live programming events |
US7106906B2 (en) * | 2000-03-06 | 2006-09-12 | Canon Kabushiki Kaisha | Moving image generation apparatus, moving image playback apparatus, their control method, and storage medium |
US7181081B2 (en) * | 2001-05-04 | 2007-02-20 | Legend Films Inc. | Image sequence enhancement system and method |
US20070064126A1 (en) * | 2005-09-16 | 2007-03-22 | Richard Didow | Chroma-key event photography |
US20070064120A1 (en) * | 2005-09-16 | 2007-03-22 | Richard Didow | Chroma-key event photography |
US20070064125A1 (en) * | 2005-09-16 | 2007-03-22 | Richard Didow | Chroma-key event photography |
US7209181B2 (en) * | 2000-03-08 | 2007-04-24 | Mitchell Kriegman | System and method for compositing of two or more real images in a cinematographic puppetry production |
US20070107015A1 (en) * | 2005-09-26 | 2007-05-10 | Hisashi Kazama | Video contents display system, video contents display method, and program for the same |
US7221395B2 (en) * | 2000-03-14 | 2007-05-22 | Fuji Photo Film Co., Ltd. | Digital camera and method for compositing images |
US20070122786A1 (en) * | 2005-11-29 | 2007-05-31 | Broadcom Corporation | Video karaoke system |
US7230653B1 (en) * | 1999-11-08 | 2007-06-12 | Vistas Unlimited | Method and apparatus for real time insertion of images into video |
US20070189737A1 (en) * | 2005-10-11 | 2007-08-16 | Apple Computer, Inc. | Multimedia control center |
US7268834B2 (en) * | 2003-02-05 | 2007-09-11 | Axis, Ab | Method and apparatus for combining video signals to one comprehensive video signal |
US7285047B2 (en) * | 2003-10-17 | 2007-10-23 | Hewlett-Packard Development Company, L.P. | Method and system for real-time rendering within a gaming environment |
US7319493B2 (en) * | 2003-03-25 | 2008-01-15 | Yamaha Corporation | Apparatus and program for setting video processing parameters |
US7324166B1 (en) * | 2003-11-14 | 2008-01-29 | Contour Entertainment Inc | Live actor integration in pre-recorded well known video |
US7400752B2 (en) * | 2002-02-21 | 2008-07-15 | Alcon Manufacturing, Ltd. | Video overlay system for surgical apparatus |
US7495689B2 (en) * | 2002-01-15 | 2009-02-24 | Pelco, Inc. | Multiple simultaneous language display system and method |
US20090059094A1 (en) * | 2007-09-04 | 2009-03-05 | Samsung Techwin Co., Ltd. | Apparatus and method for overlaying image in video presentation system having embedded operating system |
US20090163262A1 (en) * | 2007-12-21 | 2009-06-25 | Sony Computer Entertainment America Inc. | Scheme for inserting a mimicked performance into a scene and providing an evaluation of same |
US7559841B2 (en) * | 2004-09-02 | 2009-07-14 | Sega Corporation | Pose detection method, video game apparatus, pose detection program, and computer-readable medium containing computer program |
US20090195638A1 (en) * | 2008-02-04 | 2009-08-06 | Siemens Communications, Inc. | Method and apparatus for face recognition enhanced video mixing |
US20090199078A1 (en) * | 2008-02-04 | 2009-08-06 | Siemens Communications, Inc. | Method and apparatus for enhanced video mixing |
US20090202114A1 (en) * | 2008-02-13 | 2009-08-13 | Sebastien Morin | Live-Action Image Capture |
US20090208181A1 (en) * | 2008-02-15 | 2009-08-20 | David Cottrell | System and Method for Automated Creation of Video Game Highlights |
US20090237564A1 (en) * | 2008-03-18 | 2009-09-24 | Invism, Inc. | Interactive immersive virtual reality and simulation |
US20100027961A1 (en) * | 2008-07-01 | 2010-02-04 | Yoostar Entertainment Group, Inc. | Interactive systems and methods for video compositing |
US7716604B2 (en) * | 2005-04-19 | 2010-05-11 | Hitachi, Ltd. | Apparatus with thumbnail display |
US7720283B2 (en) * | 2005-12-09 | 2010-05-18 | Microsoft Corporation | Background removal in a live video |
US7752648B2 (en) * | 2003-02-11 | 2010-07-06 | Nds Limited | Apparatus and methods for handling interactive applications in broadcast networks |
US20100171848A1 (en) * | 2007-09-12 | 2010-07-08 | Event Mall, Inc. | System, apparatus, software and process for integrating video images |
Family Cites Families (105)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4357624A (en) | 1979-05-15 | 1982-11-02 | Combined Logic Company | Interactive video production system |
US4710873A (en) | 1982-07-06 | 1987-12-01 | Marvin Glass & Associates | Video game incorporating digitized images of being into game graphics |
US4521014A (en) | 1982-09-30 | 1985-06-04 | Sitrick David H | Video game including user visual image |
JPS60190078U (en) | 1984-05-28 | 1985-12-16 | Oki Electric Industry Co., Ltd. | Sealed welded structure of electronic equipment storage container |
JPH0514810Y2 (en) | 1986-11-17 | 1993-04-20 | ||
JPH0649980Y2 (en) | 1987-05-13 | 1994-12-14 | æ ȘćŒäŒç€Ÿé·șćźźèŁœäœæ | solenoid valve |
JPH01228288A (en) | 1988-03-08 | 1989-09-12 | Fujitsu Ltd | Picture processor |
US4968132A (en) | 1989-05-24 | 1990-11-06 | Bran Ferren | Traveling matte extraction system |
JPH0322620Y2 (en) | 1989-12-18 | 1991-05-16 | ||
JPH03261279A (en) | 1990-03-12 | 1991-11-21 | Sanyo Electric Co Ltd | Video synthesizer |
FR2662009B1 (en) * | 1990-05-09 | 1996-03-08 | Apple Computer | Manipulable icon with multiple faces for display on a computer |
US5526034A (en) * | 1990-09-28 | 1996-06-11 | Ictv, Inc. | Interactive home information system with signal assignment |
JPH04220885A (en) | 1990-12-21 | 1992-08-11 | Nippon Telegr & Teleph Corp <Ntt> | Background elimination method and its execution device |
EP0595808B1 (en) | 1991-07-19 | 1999-06-23 | Princeton Video Image, Inc. | Television displays having selected inserted indicia |
EP0560979A1 (en) | 1991-10-07 | 1993-09-22 | Eastman Kodak Company | A compositer interface for arranging the components of special effects for a motion picture production |
JPH05284522A (en) | 1992-04-03 | 1993-10-29 | Nippon Telegr & Teleph Corp <Ntt> | Video signal mixing processing method |
US7137892B2 (en) | 1992-05-22 | 2006-11-21 | Sitrick David H | System and methodology for mapping and linking based user image integration |
US7849393B1 (en) * | 1992-12-09 | 2010-12-07 | Discovery Communications, Inc. | Electronic book connection to world watch live |
US7509270B1 (en) * | 1992-12-09 | 2009-03-24 | Discovery Communications, Inc. | Electronic Book having electronic commerce features |
US5499330A (en) * | 1993-09-17 | 1996-03-12 | Digital Equipment Corp. | Document display system for organizing and displaying documents as screen objects organized along strand paths |
US7861166B1 (en) * | 1993-12-02 | 2010-12-28 | Discovery Patent Holding, Llc | Resizing document pages to fit available hardware screens |
US7865567B1 (en) * | 1993-12-02 | 2011-01-04 | Discovery Patent Holdings, Llc | Virtual on-demand electronic book |
JP3261279B2 (en) | 1995-05-15 | 2002-02-25 | Matsushita Electric Works, Ltd. | Opening/closing device for receipt input port in courier service stamping device |
US5678015A (en) * | 1995-09-01 | 1997-10-14 | Silicon Graphics, Inc. | Four-dimensional graphical user interface |
JPH09219836A (en) | 1996-02-14 | 1997-08-19 | Matsushita Electric Ind Co Ltd | Image information recording method and image compositing device |
US5880733A (en) * | 1996-04-30 | 1999-03-09 | Microsoft Corporation | Display system and method for displaying windows of an operating system to provide a three-dimensional workspace for a computer system |
KR20000064931A (en) * | 1996-04-30 | 2000-11-06 | ë°ëŹ ì 늏 ììŽ | User interface for browsing, organizing, and running programs, files, and data within computer systems |
US6369313B2 (en) | 2000-01-13 | 2002-04-09 | John R. Devecka | Method and apparatus for simulating a jam session and instructing a user in how to play the drums |
JPH10150585A (en) | 1996-11-19 | 1998-06-02 | Sony Corp | Image compositing device |
IL123733A0 (en) | 1997-03-27 | 1998-10-30 | Rt Set Ltd | Method for compositing an image of a real object with a virtual scene |
JPH1169228A (en) | 1997-08-19 | 1999-03-09 | Brother Ind Ltd | Music sound reproducing machine |
US6262736B1 (en) * | 1997-11-15 | 2001-07-17 | Theodor Holm Nelson | Interactive connection, viewing, and maneuvering system for complex data |
US6134346A (en) * | 1998-01-16 | 2000-10-17 | Ultimatte Corp | Method for removing from an image the background surrounding a selected object |
JPH11252459A (en) | 1998-02-26 | 1999-09-17 | Toshiba Corp | Image compositing device |
JP2000105772A (en) * | 1998-07-28 | 2000-04-11 | Sharp Corp | Information managing device |
EP1051034A4 (en) * | 1998-11-30 | 2007-10-17 | Sony Corp | Information providing device and method |
US6621509B1 (en) * | 1999-01-08 | 2003-09-16 | Ati International Srl | Method and apparatus for providing a three dimensional graphical user interface |
JP2000209500A (en) | 1999-01-14 | 2000-07-28 | Daiichikosho Co Ltd | Method for synthesizing portrait image separately photographed with recorded background video image and for outputting the synthesized image for display and karaoke machine adopting this method |
US6598054B2 (en) * | 1999-01-26 | 2003-07-22 | Xerox Corporation | System and method for clustering data objects in a collection |
US6754906B1 (en) * | 1999-03-29 | 2004-06-22 | The Directv Group, Inc. | Categorical electronic program guide |
US7987431B2 (en) * | 1999-10-29 | 2011-07-26 | Surfcast, Inc. | System and method for simultaneous display of multiple information sources |
US7028264B2 (en) * | 1999-10-29 | 2006-04-11 | Surfcast, Inc. | System and method for simultaneous display of multiple information sources |
EP1247255A4 (en) | 1999-11-24 | 2007-04-25 | Dartfish Sa | Coordination and combination of video sequences with spatial and temporal normalization |
US7434177B1 (en) * | 1999-12-20 | 2008-10-07 | Apple Inc. | User interface for providing consolidation and access |
GB0004165D0 (en) | 2000-02-22 | 2000-04-12 | Digimask Limited | System for virtual three-dimensional object creation and use |
US6636246B1 (en) * | 2000-03-17 | 2003-10-21 | Vizible.Com Inc. | Three dimensional spatial user interface |
JP4325075B2 (en) * | 2000-04-21 | 2009-09-02 | Sony Corporation | Data object management device |
JP2002074322A (en) * | 2000-08-31 | 2002-03-15 | Sony Corp | Information processor, method for processing information and data recording medium |
JP3667217B2 (en) | 2000-09-01 | 2005-07-06 | Nippon Telegraph and Telephone Corporation | System and method for supplying advertisement information in video, and recording medium storing this program |
US7266768B2 (en) * | 2001-01-09 | 2007-09-04 | Sharp Laboratories Of America, Inc. | Systems and methods for manipulating electronic information using a three-dimensional iconic representation |
JP2002232783A (en) | 2001-02-06 | 2002-08-16 | Sony Corp | Image processor, method therefor and record medium for program |
US6819344B2 (en) * | 2001-03-12 | 2004-11-16 | Microsoft Corporation | Visualization of multi-dimensional data having an unbounded dimension |
JP2002281465A (en) | 2001-03-16 | 2002-09-27 | Matsushita Electric Ind Co Ltd | Security protection processor |
CA2385401C (en) * | 2001-05-07 | 2012-09-25 | Vizible.Com Inc. | Method of representing information on a three-dimensional user interface |
US7299418B2 (en) * | 2001-09-10 | 2007-11-20 | International Business Machines Corporation | Navigation method for visual presentations |
US7680817B2 (en) * | 2001-10-15 | 2010-03-16 | Maya-Systems Inc. | Multi-dimensional locating system and method |
US8010508B2 (en) * | 2001-10-15 | 2011-08-30 | Maya-Systems Inc. | Information elements locating system and method |
US7606819B2 (en) * | 2001-10-15 | 2009-10-20 | Maya-Systems Inc. | Multi-dimensional locating system and method |
US20060262696A1 (en) | 2003-08-20 | 2006-11-23 | Woerlee Pierre H | Method and device for recording information on a multilayer information carrier |
US6816159B2 (en) | 2001-12-10 | 2004-11-09 | Christine M. Solazzi | Incorporating a personalized wireframe image in a computer software application |
US7034833B2 (en) | 2002-05-29 | 2006-04-25 | Intel Corporation | Animated photographs |
JP4066162B2 (en) | 2002-09-27 | 2008-03-26 | Fuji Photo Film Co., Ltd. | Image editing apparatus, image editing program, and image editing method |
US8009966B2 (en) | 2002-11-01 | 2011-08-30 | Synchro Arts Limited | Methods and apparatus for use in sound replacement with automatic synchronization to images |
US7139006B2 (en) * | 2003-02-04 | 2006-11-21 | Mitsubishi Electric Research Laboratories, Inc | System and method for presenting and browsing images serially |
GB2400514B (en) * | 2003-04-11 | 2006-07-26 | Hewlett Packard Development Co | Image capture method |
US7324512B2 (en) * | 2003-06-12 | 2008-01-29 | International Business Machines Corporation | MAC layer bridging of network frames between isolated and external networks |
JP4220885B2 (en) | 2003-11-11 | 2009-02-04 | Askul Corporation | Proper order quantity calculation method, system, and program |
GB2412802A (en) | 2004-02-05 | 2005-10-05 | Sony Uk Ltd | System and method for providing customised audio/video sequences |
US7292257B2 (en) * | 2004-06-28 | 2007-11-06 | Microsoft Corporation | Interactive viewpoint video system and process |
US8001476B2 (en) * | 2004-11-16 | 2011-08-16 | Open Text Inc. | Cellular user interface |
AU2004231206A1 (en) * | 2004-11-19 | 2006-06-08 | Canon Kabushiki Kaisha | Displaying a plurality of images in a stack arrangement |
US7616264B1 (en) | 2004-12-06 | 2009-11-10 | Pixelworks, Inc. | Cropped and scaled picture-in-picture system and method |
WO2006074266A2 (en) * | 2005-01-05 | 2006-07-13 | Hillcrest Laboratories, Inc. | Scaling and layout methods and systems for handling one-to-many objects |
US7409248B2 (en) * | 2005-04-15 | 2008-08-05 | Autodesk Canada Co. | Layer based paint operations |
US8768099B2 (en) | 2005-06-08 | 2014-07-01 | Thomson Licensing | Method, apparatus and system for alternate image/video insertion |
JP2007035856A (en) * | 2005-07-26 | 2007-02-08 | Freescale Semiconductor Inc | Manufacturing method of, measuring device for, and wafer for integrated circuit |
JP2009515375A (en) * | 2005-09-16 | 2009-04-09 | Flixor, Inc. | Personalizing a video |
US8769408B2 (en) * | 2005-10-07 | 2014-07-01 | Apple Inc. | Intelligent media navigation |
US7675520B2 (en) * | 2005-12-09 | 2010-03-09 | Digital Steamworks, Llc | System, method and computer program for creating two dimensional (2D) or three dimensional (3D) computer animation from video |
GB0525789D0 (en) | 2005-12-19 | 2006-01-25 | Landesburg Andrew | Live performance entertainment apparatus and method |
US20070157232A1 (en) * | 2005-12-30 | 2007-07-05 | Dunton Randy R | User interface with software lensing |
RU2008149112A (en) | 2006-06-30 | 2010-06-20 | Tele Atlas North America, Inc. (US) | Method and system for collecting user update requests regarding geographic data to support automated analysis, processing and updates of geographic data |
US7840979B2 (en) * | 2006-06-30 | 2010-11-23 | Microsoft Corporation | Graphical tile-based expansion cell guide |
US20080059896A1 (en) * | 2006-08-30 | 2008-03-06 | Microsoft Corporation | Mobile Device User Interface |
US7581186B2 (en) * | 2006-09-11 | 2009-08-25 | Apple Inc. | Media manager with integrated browsers |
CA2565756A1 (en) * | 2006-10-26 | 2008-04-26 | Daniel Langlois | Interface system |
WO2008091693A2 (en) * | 2007-01-23 | 2008-07-31 | Jostens, Inc. | Method and system for creating customized output |
USD613300S1 (en) * | 2007-06-28 | 2010-04-06 | Apple Inc. | Animated graphical user interface for a display screen or portion thereof |
JP5296337B2 (en) * | 2007-07-09 | 2013-09-25 | Nintendo Co., Ltd. | Image processing program, image processing apparatus, image processing system, and image processing method |
US9602757B2 (en) * | 2007-09-04 | 2017-03-21 | Apple Inc. | Display of video subtitles |
KR101397541B1 (en) * | 2007-09-05 | 2014-05-27 | Alticast Corp. | Method and apparatus for controlling scene structure in a digital broadcast receiver |
US8560337B2 (en) * | 2007-09-28 | 2013-10-15 | Cerner Innovation, Inc. | User interface for generating and managing medication tapers |
CN101946500B (en) | 2007-12-17 | 2012-10-03 | äŒć éČèżȘæ§èĄć Źćž | Real time video inclusion system |
US8230360B2 (en) * | 2008-01-04 | 2012-07-24 | Apple Inc. | User interface for selection from media collection |
US20090280897A1 (en) | 2008-01-14 | 2009-11-12 | Simon Fitzmaurice | Method and Apparatus for Producing Interactive Video Content |
US8151215B2 (en) * | 2008-02-07 | 2012-04-03 | Sony Corporation | Favorite GUI for TV |
US8904430B2 (en) | 2008-04-24 | 2014-12-02 | Sony Computer Entertainment America, LLC | Method and apparatus for real-time viewer interaction with a media presentation |
KR101443637B1 (en) | 2008-05-20 | 2014-09-23 | LG Electronics Inc. | Mobile terminal and method of generating contents therein |
KR101469520B1 (en) * | 2008-06-13 | 2014-12-08 | Samsung Electronics Co., Ltd. | Control device and controlling method thereof |
CA128767S (en) * | 2008-09-17 | 2009-09-09 | Kabushiki Kaisha Toshiba (also trading as Toshiba Corp) | Display screen |
CA128768S (en) * | 2008-09-17 | 2009-09-09 | Kabushiki Kaisha Toshiba (also trading as Toshiba Corp) | Display screen |
CA128769S (en) * | 2008-09-17 | 2009-09-09 | Kabushiki Kaisha Toshiba (also trading as Toshiba Corp) | Display screen |
US20100302376A1 (en) * | 2009-05-27 | 2010-12-02 | Pierre Benoit Boulanger | System and method for high-quality real-time foreground/background separation in tele-conferencing using self-registered color/infrared input images and closed-form natural image matting techniques |
EP2343883B1 (en) * | 2010-01-06 | 2017-12-06 | Orange | Data processing for an improved display |
EP2873914A4 (en) | 2012-07-10 | 2016-02-10 | Posco Led Co Ltd | Optical semiconductor illumination device |
2009
- 2009-06-30 US US12/495,517 patent/US8824861B2/en active Active
- 2009-06-30 US US12/495,590 patent/US20100035682A1/en not_active Abandoned
- 2009-06-30 WO PCT/US2009/049303 patent/WO2010002921A1/en active Application Filing
- 2009-06-30 US US12/495,548 patent/US20100031149A1/en not_active Abandoned
- 2009-06-30 TW TW098122142A patent/TW201005583A/en unknown
2013
- 2013-04-24 US US13/869,341 patent/US9143721B2/en active Active
Patent Citations (101)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4122490A (en) * | 1976-11-09 | 1978-10-24 | Lish Charles A | Digital chroma-key circuitry |
US4599611A (en) * | 1982-06-02 | 1986-07-08 | Digital Equipment Corporation | Interactive computer-based information display system |
US4827344A (en) * | 1985-02-28 | 1989-05-02 | Intel Corporation | Apparatus for inserting part of one video image into another video image |
US4688105A (en) * | 1985-05-10 | 1987-08-18 | Bloch Arthur R | Video recording system |
US4688105B1 (en) * | 1985-05-10 | 1992-07-14 | Short Takes Inc | |
US5184295A (en) * | 1986-05-30 | 1993-02-02 | Mann Ralph V | System and method for teaching physical skills |
US4891748A (en) * | 1986-05-30 | 1990-01-02 | Mann Ralph V | System and method for teaching physical skills |
US4800432A (en) * | 1986-10-24 | 1989-01-24 | The Grass Valley Group, Inc. | Video difference key generator |
US5144454A (en) * | 1989-10-31 | 1992-09-01 | Cury Brian L | Method and apparatus for producing customized video recordings |
US5099337A (en) * | 1989-10-31 | 1992-03-24 | Cury Brian L | Method and apparatus for producing customized video recordings |
US5151793A (en) * | 1990-02-26 | 1992-09-29 | Pioneer Electronic Corporation | Recording medium playing apparatus |
US5428401A (en) * | 1991-05-09 | 1995-06-27 | Quantel Limited | Improvements in or relating to video image keying systems and methods |
US5249967A (en) * | 1991-07-12 | 1993-10-05 | George P. O'Leary | Sports technique video training device |
US5566251A (en) * | 1991-09-18 | 1996-10-15 | David Sarnoff Research Center, Inc | Video merging employing pattern-key insertion |
US5861881A (en) * | 1991-11-25 | 1999-01-19 | Actv, Inc. | Interactive computer system for providing an interactive presentation with personalized video, audio and graphics responses for multiple viewers |
US7079176B1 (en) * | 1991-11-25 | 2006-07-18 | Actv, Inc. | Digital interactive system for providing full interactivity with live programming events |
US5381184A (en) * | 1991-12-30 | 1995-01-10 | U.S. Philips Corporation | Method of and arrangement for inserting a background signal into parts of a foreground signal fixed by a predetermined key color |
US20080085766A1 (en) * | 1992-05-22 | 2008-04-10 | Sitrick David H | Image integration with replaceable content |
US7867086B2 (en) * | 1992-05-22 | 2011-01-11 | Sitrick David H | Image integration with replaceable content |
US20030148811A1 (en) * | 1992-05-22 | 2003-08-07 | Sitrick David H. | Image integration, mapping and linking system and methodology |
US5553864A (en) * | 1992-05-22 | 1996-09-10 | Sitrick; David H. | User image integration into audiovisual presentation system and methodology |
US6425825B1 (en) * | 1992-05-22 | 2002-07-30 | David H. Sitrick | User image integration and tracking for an audiovisual presentation system and methodology |
US5681223A (en) * | 1993-08-20 | 1997-10-28 | Inventures Inc | Training video method and display |
US6198503B1 (en) * | 1993-08-20 | 2001-03-06 | Steve Weinreich | Infra-red video key |
US20030051255A1 (en) * | 1993-10-15 | 2003-03-13 | Bulman Richard L. | Object customization and presentation system |
US6351265B1 (en) * | 1993-10-15 | 2002-02-26 | Personalized Online Photo Llc | Method and apparatus for producing an electronic image |
US5500684A (en) * | 1993-12-10 | 1996-03-19 | Matsushita Electric Industrial Co., Ltd. | Chroma-key live-video compositing circuit |
US6122013A (en) * | 1994-04-29 | 2000-09-19 | Orad, Inc. | Chromakeying system |
US5751337A (en) * | 1994-09-19 | 1998-05-12 | Telesuite Corporation | Teleconferencing method and system for providing face-to-face, non-animated teleconference environment |
US6061532A (en) * | 1995-02-24 | 2000-05-09 | Eastman Kodak Company | Animated image presentations with personalized digitized images |
US6072933A (en) * | 1995-03-06 | 2000-06-06 | Green; David | System for producing personalized video recordings |
US5953076A (en) * | 1995-06-16 | 1999-09-14 | Princeton Video Image, Inc. | System and method of real time insertions into video using adaptive occlusion with a synthetic reference image |
US6522787B1 (en) * | 1995-07-10 | 2003-02-18 | Sarnoff Corporation | Method and system for rendering and combining images to form a synthesized view of a scene containing image information from a second image |
US6072537A (en) * | 1997-01-06 | 2000-06-06 | U-R Star Ltd. | Systems for producing personalized video clips |
US6283858B1 (en) * | 1997-02-25 | 2001-09-04 | Bgk International Incorporated | Method for manipulating images |
US5764306A (en) * | 1997-03-18 | 1998-06-09 | The Metaphor Group | Real-time method of digitally altering a video data stream to remove portions of the original image and substitute elements to create a new image |
US6750919B1 (en) * | 1998-01-23 | 2004-06-15 | Princeton Video Image, Inc. | Event linked insertion of indicia into video |
US6624853B1 (en) * | 1998-03-20 | 2003-09-23 | Nurakhmed Nurislamovich Latypov | Method and system for creating video programs with interaction of an actor with objects of a virtual space and the objects to one another |
US6285408B1 (en) * | 1998-04-09 | 2001-09-04 | Lg Electronics Inc. | Digital audio/video system and method integrates the operations of several digital devices into one simplified system |
US6086380A (en) * | 1998-08-20 | 2000-07-11 | Chu; Chia Chen | Personalized karaoke recording studio |
US6881067B2 (en) * | 1999-01-05 | 2005-04-19 | Personal Pro, Llc | Video instructional system and method for teaching motor skills |
US7780450B2 (en) * | 1999-01-05 | 2010-08-24 | Personal Pro, Llc | Video instructional system and method for teaching motor skills |
US6350199B1 (en) * | 1999-03-16 | 2002-02-26 | International Game Technology | Interactive gaming machine and method with customized game screen presentation |
US6126449A (en) * | 1999-03-25 | 2000-10-03 | Swing Lab | Interactive motion training device and method |
US6384821B1 (en) * | 1999-10-04 | 2002-05-07 | International Business Machines Corporation | Method and apparatus for delivering 3D graphics in a networked environment using transparent video |
US6335765B1 (en) * | 1999-11-08 | 2002-01-01 | Weather Central, Inc. | Virtual presentation system and method |
US7230653B1 (en) * | 1999-11-08 | 2007-06-12 | Vistas Unlimited | Method and apparatus for real time insertion of images into video |
US7015978B2 (en) * | 1999-12-13 | 2006-03-21 | Princeton Video Image, Inc. | System and method for real time insertion into video with occlusion on areas containing multiple colors |
US7106906B2 (en) * | 2000-03-06 | 2006-09-12 | Canon Kabushiki Kaisha | Moving image generation apparatus, moving image playback apparatus, their control method, and storage medium |
US7209181B2 (en) * | 2000-03-08 | 2007-04-24 | Mitchell Kriegman | System and method for compositing of two or more real images in a cinematographic puppetry production |
US7221395B2 (en) * | 2000-03-14 | 2007-05-22 | Fuji Photo Film Co., Ltd. | Digital camera and method for compositing images |
US20020007718A1 (en) * | 2000-06-20 | 2002-01-24 | Isabelle Corset | Karaoke system |
US6535269B2 (en) * | 2000-06-30 | 2003-03-18 | Gary Sherman | Video karaoke system and method of use |
US20020130889A1 (en) * | 2000-07-18 | 2002-09-19 | David Blythe | System, method, and computer program product for real time transparency-based compositing |
US20020051009A1 (en) * | 2000-07-26 | 2002-05-02 | Takashi Ida | Method and apparatus for extracting object from video image |
US6954498B1 (en) * | 2000-10-24 | 2005-10-11 | Objectvideo, Inc. | Interactive video manipulation |
US7827488B2 (en) * | 2000-11-27 | 2010-11-02 | Sitrick David H | Image tracking and substitution system and methodology for audio-visual presentations |
US20050151743A1 (en) * | 2000-11-27 | 2005-07-14 | Sitrick David H. | Image tracking and substitution system and methodology for audio-visual presentations |
US7034537B2 (en) * | 2001-03-14 | 2006-04-25 | Hitachi Medical Corporation | MRI apparatus correcting vibratory static magnetic field fluctuations, by utilizing the static magnetic fluctuation itself |
US7181081B2 (en) * | 2001-05-04 | 2007-02-20 | Legend Films Inc. | Image sequence enhancement system and method |
US6937295B2 (en) * | 2001-05-07 | 2005-08-30 | Junaid Islam | Realistic replication of a live performance at remote locations |
US20050155086A1 (en) * | 2001-11-13 | 2005-07-14 | Microsoft Corporation | Method and apparatus for the display of still images from image files |
US20030108329A1 (en) * | 2001-12-12 | 2003-06-12 | Meric Adriansen | Advertising method and system |
US20050028193A1 (en) * | 2002-01-02 | 2005-02-03 | Candelore Brant L. | Macro-block based content replacement by PID mapping |
US7495689B2 (en) * | 2002-01-15 | 2009-02-24 | Pelco, Inc. | Multiple simultaneous language display system and method |
US7400752B2 (en) * | 2002-02-21 | 2008-07-15 | Alcon Manufacturing, Ltd. | Video overlay system for surgical apparatus |
US20040152058A1 (en) * | 2002-06-11 | 2004-08-05 | Browne H. Lee | Video instructional system and method for teaching motor skills |
US7027054B1 (en) * | 2002-08-14 | 2006-04-11 | Avaworks, Incorporated | Do-it-yourself photo realistic talking head creation system and method |
US6919892B1 (en) * | 2002-08-14 | 2005-07-19 | Avaworks, Incorporated | Photo realistic talking head creation system and method |
US20040100581A1 (en) * | 2002-11-27 | 2004-05-27 | Princeton Video Image, Inc. | System and method for inserting live video into pre-produced video |
US7268834B2 (en) * | 2003-02-05 | 2007-09-11 | Axis, Ab | Method and apparatus for combining video signals to one comprehensive video signal |
US7752648B2 (en) * | 2003-02-11 | 2010-07-06 | Nds Limited | Apparatus and methods for handling interactive applications in broadcast networks |
US7319493B2 (en) * | 2003-03-25 | 2008-01-15 | Yamaha Corporation | Apparatus and program for setting video processing parameters |
US20090040385A1 (en) * | 2003-05-02 | 2009-02-12 | Megamedia, Llc | Methods and systems for controlling video compositing in an interactive entertainment system |
US7528890B2 (en) * | 2003-05-02 | 2009-05-05 | Yoostar Entertainment Group, Inc. | Interactive system and method for video compositing |
US20040218100A1 (en) * | 2003-05-02 | 2004-11-04 | Staker Allan Robert | Interactive system and method for video compositing |
US7649571B2 (en) * | 2003-05-02 | 2010-01-19 | Yoostar Entertainment Group, Inc. | Methods for interactive video compositing |
US20090041422A1 (en) * | 2003-05-02 | 2009-02-12 | Megamedia, Llc | Methods and systems for controlling video compositing in an interactive entertainment system |
US7646434B2 (en) * | 2003-05-02 | 2010-01-12 | Yoostar Entertainment Group, Inc. | Video compositing systems for providing interactive entertainment |
US7285047B2 (en) * | 2003-10-17 | 2007-10-23 | Hewlett-Packard Development Company, L.P. | Method and system for real-time rendering within a gaming environment |
US7324166B1 (en) * | 2003-11-14 | 2008-01-29 | Contour Entertainment Inc | Live actor integration in pre-recorded well known video |
US20050215319A1 (en) * | 2004-03-23 | 2005-09-29 | Harmonix Music Systems, Inc. | Method and apparatus for controlling a three-dimensional character in a three-dimensional gaming environment |
US7559841B2 (en) * | 2004-09-02 | 2009-07-14 | Sega Corporation | Pose detection method, video game apparatus, pose detection program, and computer-readable medium containing computer program |
US20060136979A1 (en) * | 2004-11-04 | 2006-06-22 | Staker Allan R | Apparatus and methods for encoding data for video compositing |
US7716604B2 (en) * | 2005-04-19 | 2010-05-11 | Hitachi, Ltd. | Apparatus with thumbnail display |
US20070064126A1 (en) * | 2005-09-16 | 2007-03-22 | Richard Didow | Chroma-key event photography |
US20070064120A1 (en) * | 2005-09-16 | 2007-03-22 | Richard Didow | Chroma-key event photography |
US20070064125A1 (en) * | 2005-09-16 | 2007-03-22 | Richard Didow | Chroma-key event photography |
US20070107015A1 (en) * | 2005-09-26 | 2007-05-10 | Hisashi Kazama | Video contents display system, video contents display method, and program for the same |
US20070189737A1 (en) * | 2005-10-11 | 2007-08-16 | Apple Computer, Inc. | Multimedia control center |
US20070122786A1 (en) * | 2005-11-29 | 2007-05-31 | Broadcom Corporation | Video karaoke system |
US7720283B2 (en) * | 2005-12-09 | 2010-05-18 | Microsoft Corporation | Background removal in a live video |
US20090059094A1 (en) * | 2007-09-04 | 2009-03-05 | Samsung Techwin Co., Ltd. | Apparatus and method for overlaying image in video presentation system having embedded operating system |
US20100171848A1 (en) * | 2007-09-12 | 2010-07-08 | Event Mall, Inc. | System, apparatus, software and process for integrating video images |
US20090163262A1 (en) * | 2007-12-21 | 2009-06-25 | Sony Computer Entertainment America Inc. | Scheme for inserting a mimicked performance into a scene and providing an evaluation of same |
US20090199078A1 (en) * | 2008-02-04 | 2009-08-06 | Siemens Communications, Inc. | Method and apparatus for enhanced video mixing |
US20090195638A1 (en) * | 2008-02-04 | 2009-08-06 | Siemens Communications, Inc. | Method and apparatus for face recognition enhanced video mixing |
US20090202114A1 (en) * | 2008-02-13 | 2009-08-13 | Sebastien Morin | Live-Action Image Capture |
US20090208181A1 (en) * | 2008-02-15 | 2009-08-20 | David Cottrell | System and Method for Automated Creation of Video Game Highlights |
US20090237564A1 (en) * | 2008-03-18 | 2009-09-24 | Invism, Inc. | Interactive immersive virtual reality and simulation |
US20100027961A1 (en) * | 2008-07-01 | 2010-02-04 | Yoostar Entertainment Group, Inc. | Interactive systems and methods for video compositing |
Cited By (71)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110025918A1 (en) * | 2003-05-02 | 2011-02-03 | Megamedia, Llc | Methods and systems for controlling video compositing in an interactive entertainment system |
US20100278450A1 (en) * | 2005-06-08 | 2010-11-04 | Mike Arthur Derrenberger | Method, Apparatus And System For Alternate Image/Video Insertion |
US8768099B2 (en) * | 2005-06-08 | 2014-07-01 | Thomson Licensing | Method, apparatus and system for alternate image/video insertion |
US8625845B2 (en) | 2005-08-06 | 2014-01-07 | Quantum Signal, Llc | Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream |
US20100142928A1 (en) * | 2005-08-06 | 2010-06-10 | Quantum Signal, Llc | Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream |
US20090132900A1 (en) * | 2007-11-20 | 2009-05-21 | Steven Zielke | Method for producing and outputting web pages via a computer network, and web page produced thereby |
US20100027961A1 (en) * | 2008-07-01 | 2010-02-04 | Yoostar Entertainment Group, Inc. | Interactive systems and methods for video compositing |
US20100035682A1 (en) * | 2008-07-01 | 2010-02-11 | Yoostar Entertainment Group, Inc. | User interface systems and methods for interactive video systems |
US9143721B2 (en) | 2008-07-01 | 2015-09-22 | Noo Inc. | Content preparation systems and methods for interactive video systems |
US8824861B2 (en) | 2008-07-01 | 2014-09-02 | Yoostar Entertainment Group, Inc. | Interactive systems and methods for video compositing |
US9002177B2 (en) * | 2008-07-08 | 2015-04-07 | Sceneplay, Inc. | Media generating system and method |
US10346001B2 (en) | 2008-07-08 | 2019-07-09 | Sceneplay, Inc. | System and method for describing a scene for a piece of media |
US10936168B2 (en) | 2008-07-08 | 2021-03-02 | Sceneplay, Inc. | Media presentation generating system and method using recorded splitscenes |
US20100008639A1 (en) * | 2008-07-08 | 2010-01-14 | Sceneplay, Inc. | Media Generating System and Method |
US20100162315A1 (en) * | 2008-12-24 | 2010-06-24 | Samsung Electronics Co., Ltd. | Program information displaying method and display apparatus using the same |
US20100287491A1 (en) * | 2009-04-06 | 2010-11-11 | Robby Gurdan | Desktop control for a host apparatus of a digital multimedia network |
US20110219076A1 (en) * | 2010-03-04 | 2011-09-08 | Tomas Owen Roope | System and method for integrating user generated content |
US9912721B2 (en) | 2010-05-14 | 2018-03-06 | Highlight Broadcast Network, Llc | Systems and methods for providing event-related video sharing services |
US9942591B2 (en) * | 2010-05-14 | 2018-04-10 | Highlight Broadcast Network, Llc | Systems and methods for providing event-related video sharing services |
WO2011143615A1 (en) * | 2010-05-14 | 2011-11-17 | Robert Patton Stribling | Systems and methods for providing event-related video sharing services |
US20110279677A1 (en) * | 2010-05-14 | 2011-11-17 | Robert Patton Stribling | Systems and Methods for Providing Event-Related Video Sharing Services |
US20120030376A1 (en) * | 2010-07-30 | 2012-02-02 | Verizon Patent And Licensing Inc. | User-based prioritization for content transcoding |
US8862733B2 (en) * | 2010-07-30 | 2014-10-14 | Verizon Patent And Licensing Inc. | User-based prioritization for content transcoding |
US8802957B2 (en) | 2010-08-16 | 2014-08-12 | Boardwalk Technology Group, Llc | Mobile replacement-dialogue recording system |
WO2012023951A1 (en) * | 2010-08-16 | 2012-02-23 | Boardwalk Technology Group, Llc | Mobile replacement-dialogue recording system |
WO2012051585A1 (en) * | 2010-10-14 | 2012-04-19 | Fixmaster, Inc. | System and method for creating and analyzing interactive experiences |
US20130176379A1 (en) * | 2010-12-02 | 2013-07-11 | Polycom, Inc. | Removing a Self Image From a Continuous Presence Video Image |
US8970657B2 (en) * | 2010-12-02 | 2015-03-03 | Polycom, Inc. | Removing a self image from a continuous presence video image |
US20120151341A1 (en) * | 2010-12-10 | 2012-06-14 | Ko Steve S | Interactive Screen Saver Method and Apparatus |
US20120190456A1 (en) * | 2011-01-21 | 2012-07-26 | Rogers Henk B | Systems and methods for providing an interactive multiplayer story |
US8782176B2 (en) * | 2011-04-14 | 2014-07-15 | Fusic Ltd. | Synchronized video system |
US20120265859A1 (en) * | 2011-04-14 | 2012-10-18 | Audish Ltd. | Synchronized Video System |
US20120281114A1 (en) * | 2011-05-03 | 2012-11-08 | Ivi Media Llc | System, method and apparatus for providing an adaptive media experience |
US10165245B2 (en) | 2012-07-06 | 2018-12-25 | Kaltura, Inc. | Pre-fetching video content |
US8934044B2 (en) * | 2012-07-20 | 2015-01-13 | Adobe Systems Incorporated | Systems and methods for live view photo layer in digital imaging applications |
US20140022396A1 (en) * | 2012-07-20 | 2014-01-23 | Geoffrey Dowd | Systems and Methods for Live View Photo Layer in Digital Imaging Applications |
US10410266B2 (en) | 2012-08-08 | 2019-09-10 | Lowe's Companies, Inc. | Systems and methods for recording transaction and product customization information |
US11715141B2 (en) | 2012-08-08 | 2023-08-01 | Lowe's Companies, Inc. | Systems and methods for recording transaction and product customization information |
EP2711851A3 (en) * | 2012-09-25 | 2016-07-27 | Samsung Electronics Co., Ltd | Display apparatus and control method thereof |
WO2014091484A1 (en) * | 2012-12-11 | 2014-06-19 | Scooltv, Inc. | A system and method for creating a video |
US9236088B2 (en) | 2013-04-18 | 2016-01-12 | Rapt Media, Inc. | Application communication |
US9031375B2 (en) | 2013-04-18 | 2015-05-12 | Rapt Media, Inc. | Video frame still image sequences |
US20170194030A1 (en) * | 2014-10-25 | 2017-07-06 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US10789983B2 (en) * | 2014-10-25 | 2020-09-29 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US20180211692A1 (en) * | 2014-10-25 | 2018-07-26 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US9852759B2 (en) * | 2014-10-25 | 2017-12-26 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US20230186015A1 (en) * | 2014-10-25 | 2023-06-15 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US11809811B2 (en) * | 2014-10-25 | 2023-11-07 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US20170024098A1 (en) * | 2014-10-25 | 2017-01-26 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US9966109B2 (en) * | 2014-10-25 | 2018-05-08 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US10789984B2 (en) * | 2014-10-25 | 2020-09-29 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US10832729B2 (en) * | 2014-10-25 | 2020-11-10 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US10832730B2 (en) * | 2014-10-25 | 2020-11-10 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US20180075874A1 (en) * | 2014-10-25 | 2018-03-15 | Yieldmo, Inc. | Methods for serving interactive content to a user |
US20180075875A1 (en) * | 2014-10-25 | 2018-03-15 | Yieldmo, Inc. | Methods for serving interactive content to a user |
WO2016134415A1 (en) * | 2015-02-23 | 2016-09-01 | Zuma Beach Ip Pty Ltd | Generation of combined videos |
US20190012053A1 (en) * | 2017-07-07 | 2019-01-10 | Open Text Sa Ulc | Systems and methods for content sharing through external systems |
US11829583B2 (en) * | 2017-07-07 | 2023-11-28 | Open Text Sa Ulc | Systems and methods for content sharing through external systems |
US11635879B2 (en) | 2017-07-07 | 2023-04-25 | Open Text Corporation | Systems and methods for content sharing through external systems |
US11653072B2 (en) * | 2018-09-12 | 2023-05-16 | Zuma Beach Ip Pty Ltd | Method and system for generating interactive media content |
US11081140B1 (en) * | 2020-06-24 | 2021-08-03 | Facebook, Inc. | Systems and methods for generating templates for short-form media content |
EP4050887A4 (en) * | 2020-07-17 | 2023-06-21 | Beijing Bytedance Network Technology Co., Ltd. | Video recording method and apparatus, electronic device, and storage medium |
US20220286758A1 (en) * | 2020-07-17 | 2022-09-08 | Beijing Bytedance Network Technology Co., Ltd. | Video recording method, apparatus, electronic device and non-transitory storage medium |
US11641512B2 (en) * | 2020-07-17 | 2023-05-02 | Beijing Bytedance Network Technology Co., Ltd. | Video recording method, apparatus, electronic device and non-transitory storage medium |
US11145109B1 (en) * | 2020-10-05 | 2021-10-12 | Weta Digital Limited | Method for editing computer-generated images to maintain alignment between objects specified in frame space and objects specified in scene space |
US11417048B2 (en) | 2020-10-05 | 2022-08-16 | Unity Technologies Sf | Computer graphics system user interface for obtaining artist inputs for objects specified in frame space and objects specified in scene space |
US11393155B2 (en) | 2020-10-05 | 2022-07-19 | Unity Technologies Sf | Method for editing computer-generated images to maintain alignment between objects specified in frame space and objects specified in scene space |
US20220172401A1 (en) * | 2020-11-27 | 2022-06-02 | Canon Kabushiki Kaisha | Image processing apparatus, image generation method, and storage medium |
US20230004283A1 (en) * | 2021-06-30 | 2023-01-05 | At&T Intellectual Property I, L.P. | System for fan-based creation and composition of cross-franchise content |
US11347387B1 (en) * | 2021-06-30 | 2022-05-31 | At&T Intellectual Property I, L.P. | System for fan-based creation and composition of cross-franchise content |
US20230076000A1 (en) * | 2021-08-31 | 2023-03-09 | JBF Interlude 2009 LTD | Shader-based dynamic video manipulation |
Also Published As
Publication number | Publication date |
---|---|
US9143721B2 (en) | 2015-09-22 |
US8824861B2 (en) | 2014-09-02 |
TW201005583A (en) | 2010-02-01 |
US20100035682A1 (en) | 2010-02-11 |
US20130236160A1 (en) | 2013-09-12 |
US20100027961A1 (en) | 2010-02-04 |
WO2010002921A1 (en) | 2010-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9143721B2 (en) | Content preparation systems and methods for interactive video systems | |
RU2460233C2 (en) | System of inserting video online | |
AU2004237705B2 (en) | Interactive system and method for video compositing | |
US8341525B1 (en) | System and methods for collaborative online multimedia production | |
US6335765B1 (en) | Virtual presentation system and method | |
US20090046097A1 (en) | Method of making animated video | |
JPH11219446A (en) | Video/sound reproducing system | |
JP7011206B2 (en) | Amusement photography equipment, image processing equipment, and image processing methods | |
de Lima et al. | Video-based interactive storytelling using real-time video compositing techniques | |
Hu | The effects of digital video technology on modern film | |
Dsouza | Think in 3D: Food For Thought for Directors, Cinematographers and Stereographers | |
Perritt Jr | Technologies of Storytelling: New Models for Movies | |
US20230326161A1 (en) | Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product | |
WO2007073010A1 (en) | Media making method and storage medium | |
JP2008236708A (en) | Medium production apparatus for virtual film studio | |
TWI677240B (en) | System for using multimedia symbols to present storyboard and method thereof | |
Ogden | Cinema technologies | |
Allen | Digital cinema: Virtual screens | |
WO2015173828A1 (en) | Methods, circuits, devices, systems and associated computer executable code for composing composite content | |
KR20020057916A (en) | The movie composition vending machine with chroma key | |
JP2023166836A (en) | Multi-viewpoint image management device | |
KR100724620B1 (en) | Entertainer experience stage | |
WeseliĆski | A Dictionary of Film Terms and Film Studies | |
JP2020072415A (en) | Video/audio synthesis method | |
Ming | Post-Production of Digital Film and Television with Development of Virtual Reality Image Technology-Advance Research Analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: YOOSTAR ENTERTAINMENT GROUP, INC., NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: GENTILE, ANTHONY; GENTILE, JOHN; WILKER, SCOTT; AND OTHERS; REEL/FRAME: 023376/0026 Effective date: 20090804 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |