
Jump Cutter


Download:

Chrome Web Store, Firefox Browser Add-ons, Microsoft Edge Add-ons, or from GitHub: Chromium / Gecko (Firefox)

Skips silent parts in videos, in real time.

Can be useful for watching lectures, stream recordings (VODs), webinars, podcasts, and other unedited videos.

Demo:

demo.mp4

Inspired by this video by carykh.

How it works

Simple (mostly).

Currently there are 2 separate algorithms in place.

The first one we call "the stretching algorithm", and it's in this file. It simply looks at the output audio of a media element, determines its current loudness and, when it's not loud, increases its playbackRate. (We're using Web Audio API's createMediaElementSource and AudioWorkletProcessor for this).
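
For a feel of the basic idea, here is a minimal sketch. It is not the extension's actual code: it uses an AnalyserNode and made-up threshold/speed values, whereas the extension uses an AudioWorkletProcessor and user-configurable settings.

    // Minimal sketch of the basic idea, not the extension's actual code.
    // Assumed values; the real thresholds and speeds are user-configurable.
    function attachBasicSilenceSpeedup(el: HTMLMediaElement) {
      const ctx = new AudioContext(); // may need ctx.resume() after a user gesture
      const source = ctx.createMediaElementSource(el);
      const analyser = ctx.createAnalyser();
      source.connect(analyser);
      analyser.connect(ctx.destination); // keep the audio audible

      const samples = new Float32Array(analyser.fftSize);
      const SILENCE_THRESHOLD = 0.01; // RMS loudness, hypothetical value
      const SOUNDED_SPEED = 1;
      const SILENCE_SPEED = 2.5;

      setInterval(() => {
        analyser.getFloatTimeDomainData(samples);
        const rms = Math.sqrt(
          samples.reduce((sum, s) => sum + s * s, 0) / samples.length
        );
        el.playbackRate = rms < SILENCE_THRESHOLD ? SILENCE_SPEED : SOUNDED_SPEED;
      }, 50);
    }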

Details: why it's called "stretching"

The algorithm we just described cannot "look ahead" in the audio timeline. It only looks at the current loudness, at samples that we have already sent to the audio output device.

But looking ahead (a.k.a. "Margin before") is important because, for example, some of the sounds a word can start with are not very loud, and it's not good to skip them just because of that: the speech would become harder to understand. For example, "throb" would become "rob".

Here is where the "stretching" part comes in. It's about how we're able to "look ahead" and slow down shortly before a loud part. Basically it involves slightly (~200ms) delaying the audio before outputting it (and that is for a purpose!).

Imagine that we're currently playing a silent part, so the playback rate is higher. Now, when we encounter a loud part, we go "aha! That might be a word, and it might start with 'th'".

As said above, we always delay (buffer) the audio for ~200ms before outputting it. So we know that these 200ms of buffered audio must contain that "th" sound, and we want the user to hear it. But remember: at the time we recorded that sound, the video was playing at a high speed, yet we want to play back that 'th' at normal speed. So we can't just output it as is. What do we do?

What we do is we take that buffered (delayed) audio, and we slow it down (stretch and pitch-shift it) so that it appears to have been played at normal speed! Only then do we pass it to the system (which then passes it to your speakers).
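
As a rough illustration of the arithmetic (with assumed numbers: a ~200 ms buffer and a 2.5× speed during silent parts):

    // Toy numbers, only to illustrate the idea; the real values are configurable.
    const DELAY_S = 0.2;          // we always buffer ~200 ms of output audio
    const RECORDED_AT_RATE = 2.5; // playbackRate while that buffer was being filled

    // 0.2 s of buffered output covers 0.2 * 2.5 = 0.5 s of media time.
    const mediaTimeCovered = DELAY_S * RECORDED_AT_RATE; // 0.5 s

    // To make it sound as if it had been played at normal speed, stretch the
    // buffer by that same factor, so those 0.5 s of media take 0.5 s to play.
    const stretchedDuration = DELAY_S * RECORDED_AT_RATE; // 0.5 s, same as above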

And that, kids, is why we call it "the stretching algorithm".

For more details, you can check out the comments in its source code.

The second algorithm is "the cloning algorithm", and it's here. It creates a hidden clone of the target media element and plays it ahead of the original element, looking for silent parts and writing down where they are. When the target element reaches a silent part, we increase its playbackRate, or skip (seek) the silent part entirely. Currently you can enable this algorithm by checking the "Use the experimental algorithm" checkbox.
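
Here is a rough sketch of that idea, with hypothetical names and heavily simplified logic (the actual ElementPlaybackControllerCloning handles many more cases):

    // Hypothetical, simplified sketch of the cloning idea.
    type TimeRange = [start: number, end: number];

    // A hidden, muted clone that can be played ahead of the original element.
    function createClone(original: HTMLMediaElement): HTMLMediaElement {
      const clone = document.createElement('video');
      clone.src = original.currentSrc;
      clone.muted = true;
      clone.playbackRate = 5; // scan ahead as fast as we reasonably can
      return clone;
    }

    // Once silence ranges are known, seek the original element past them.
    function skipKnownSilence(original: HTMLMediaElement, silenceRanges: TimeRange[]) {
      original.addEventListener('timeupdate', () => {
        const range = silenceRanges.find(
          ([start, end]) => original.currentTime >= start && original.currentTime < end
        );
        if (range) {
          original.currentTime = range[1]; // jump to the end of the silent part
        }
      });
    }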

We look for video elements by injecting a script into all pages and simply calling document.getElementsByTagName('video'). But new video elements can get inserted after the page has already loaded, so we also watch for new elements with a MutationObserver.
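
In code, that roughly looks like the sketch below (simplified; onNewMediaElements is a placeholder for handing the elements to the rest of the extension):

    // Simplified sketch; `onNewMediaElements` is a placeholder.
    function onNewMediaElements(elements: HTMLMediaElement[]) {
      console.log('found media elements:', elements);
    }

    // Elements that are already on the page.
    onNewMediaElements([...document.getElementsByTagName('video')]);

    // Elements that get inserted later.
    const observer = new MutationObserver((mutations) => {
      for (const mutation of mutations) {
        for (const node of mutation.addedNodes) {
          if (node instanceof HTMLVideoElement) {
            onNewMediaElements([node]);
          } else if (node instanceof Element) {
            const nested = node.getElementsByTagName('video');
            if (nested.length) {
              onNewMediaElements([...nested]);
            }
          }
        }
      }
    });
    observer.observe(document.documentElement, { childList: true, subtree: true });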

High-level architecture chart

If you see a block of text below instead of a chart, go here.

graph
%%graph TD

    %% TODO add links https://mermaid.js.org/syntax/flowchart.html#interaction

    watchAllElements["watchAllElements
        looks for media elements
        on the page"]
    click watchAllElements "https://github.com/WofWca/jumpcutter/blob/890a2b25948f39f1553cb9afb06c4cc10c9d2a19/src/entry-points/content/watchAllElements.ts"

    AllMediaElementsController["AllMediaElementsController
        the orchestrator"]
    click AllMediaElementsController "https://github.com/WofWca/jumpcutter/blob/44fadb1982fbe7dd20c64741ae9e754ba9261042/src/entry-points/content/AllMediaElementsController.ts"

    watchAllElements -->|"onNewMediaElements(...elements)"| AllMediaElementsController
    AllMediaElementsController -->|original HTMLMediaElement| chooseController{choose
        appropriate
        controller}
    chooseController -->|original HTMLMediaElement| ElementPlaybackControllerCloning & ElementPlaybackControllerStretching


    %% ElementPlaybackControllerCloning

    %% subgraph "ElementPlaybackControllerCloning"

    ElementPlaybackControllerCloning["ElementPlaybackControllerCloning
        controls playbackRate
        of the original
        HTMLMediaElement"]
    click ElementPlaybackControllerCloning "https://github.com/WofWca/jumpcutter/blob/3ff011318dc9af407eaf9f4cc8d33dfafaf0b53e/src/entry-points/content/ElementPlaybackControllerCloning/ElementPlaybackControllerCloning.ts"

    Lookahead["Lookahead
        plays back the clone element
        to look for silence ranges"]
    click Lookahead "https://github.com/WofWca/jumpcutter/blob/988ec301bf2e6c07e6cc328f73a4177f7504f1e1/src/entry-points/content/ElementPlaybackControllerCloning/Lookahead.ts"

    ElementPlaybackControllerCloning --> | original HTMLMediaElement reference| Lookahead
    Lookahead --> |silence ranges| ElementPlaybackControllerCloning

    createCloneElementWithSameSrc --> |HTMLMediaElement clone| Lookahead
    Lookahead --> |original HTMLMediaElement reference| createCloneElementWithSameSrc
    click createCloneElementWithSameSrc "https://github.com/WofWca/jumpcutter/blob/e9daff122f12263a50fb1c4a10e4b13c7fd190cf/src/entry-points/content/ElementPlaybackControllerCloning/createCloneElementWithSameSrc.ts"

    cloneMediaSources["`cloneMediaSources
        intercepts all MediaSources
        and holds a clone
        HTMLMediaElement`"]
    click cloneMediaSources "https://github.com/WofWca/jumpcutter/blob/5bcfdaf53066c5ac4f664e089b916118afe37ae2/src/entry-points/content/cloneMediaSources/lib.ts"

    cloneMediaSources -->|HTMLMediaElement clone| Lookahead
    Lookahead -->|getMediaSourceCloneElement| cloneMediaSources

    SilenceDetector1["SilenceDetector
        utilizes Web Audio API
        to detect silence"]
    click SilenceDetector1 "https://github.com/WofWca/jumpcutter/blob/e3283500aeefe994a8be5bb7fdd8f7308e895f4f/src/entry-points/content/SilenceDetector/SilenceDetectorProcessor.ts"
    Lookahead --> |clone HTMLMediaElement audio| SilenceDetector1
    SilenceDetector1 --> |
        SILENCE_END &
        SILENCE_START events
        with timestamps
    | Lookahead

    %% end
    
    %% ElementPlaybackControllerStretching

    %% subgraph "ElementPlaybackControllerStretching"

    ElementPlaybackControllerStretching["ElementPlaybackControllerStretching
        controls playbackRate
        of the original
        HTMLMediaElement"]
    click ElementPlaybackControllerStretching "https://github.com/WofWca/jumpcutter/blob/3ff011318dc9af407eaf9f4cc8d33dfafaf0b53e/src/entry-points/content/ElementPlaybackControllerStretching/ElementPlaybackControllerStretching.ts"

    SilenceDetector2["SilenceDetector
        utilizes Web Audio API
        to detect silence"]
    click SilenceDetector2 "https://github.com/WofWca/jumpcutter/blob/e3283500aeefe994a8be5bb7fdd8f7308e895f4f/src/entry-points/content/SilenceDetector/SilenceDetectorProcessor.ts"
    ElementPlaybackControllerStretching --> |original HTMLMediaElement audio| SilenceDetector2
    SilenceDetector2 --> |
        SILENCE_END &
        SILENCE_START events
        with timestamps
    | ElementPlaybackControllerStretching

    %% end


    %% Telemetry

    %% ElementPlaybackControllerCloning & ElementPlaybackControllerStretching --> |telemetry| AllMediaElementsController

    Popup["Popup
        (UI, chart)"]
    click Popup "https://github.com/WofWca/jumpcutter/blob/44fadb1982fbe7dd20c64741ae9e754ba9261042/src/entry-points/popup/App.svelte"
    AllMediaElementsController --> |telemetry| Popup

Contribute

Build

  1. Install base tools: Node.js and Yarn.

  2. Run

    yarn install
  3. Fill the src/_locales directory with localization files. Skip this step if they're already there. Either:

    • If you're using git:

      git submodule update --init

    • If you don't want to use git, download them from the translations branch and put them in src/_locales manually.

  4. Build:

    • To build for Gecko (e.g. Firefox):

      yarn build:gecko

    • To build for Chromium (e.g. Chrome, Edge):

      yarn build:chromium

    Bundled files will appear in ./dist-gecko (or ./dist-chromium).

For a development build, see CONTRIBUTING.md.

Then you can install it on the extensions management page of your browser (Chromium, Gecko).

Privacy & security

In short: it's fine.

As with practically any other extension, websites you visit may detect that you're using this (or a similar) extension, as well as your settings for it, by observing:

  • playback rate changes of an element.
  • the fact that createMediaElementSource has been called for an element.
  • increased frequency of media chunk requests resulting from the increased playback rate. This cannot be mitigated by disabling JavaScript.
  • the fact of requesting the same media twice, as a result of using the cloning algorithm.

However, I doubt that any services currently do specifically this. But there may be some.

Other than that, there are no known concerns. The extension doesn't interact with third parties or try to do other creepy stuff.

Why is it free?

It started out as a hobby project in 2019 and you could say it remains such today. It feels good to write software that thousands of people use, to give back to humanity.

However, I am still thinking of monetizing it, in a liberal way. I really like FUTO's take on it, and the "infinite free trial" approach they take with e.g. FUTO Keyboard.

However, with the current number of users, I don't think it's worth the effort right now.

About donations

Donations are great, but what they do is tell the user "we don't really want money, but if you insist, you can send some", or even "nobody pays for this product, so you'll be one of the generous few, and if you donate $5 I'll only be able to buy one coffee with it and will not really be incentivized to continue the development".

A lot of people are willing to pay. They want to really purchase the product and be done with it fair and square instead of running a charity and throwing money at the bottomless pit of "coffees", wondering "did I give enough?".

Of course, by this I'm not saying I don't appreciate donations. They mean a lot to me. They are more personal. I am just pointing out that, in terms of revenue, they're not as powerful.

Anyways, it is extremely unlikely that this software will go closed-source or be sold out, especially given that I am not its only contributor. I have already been offered $1000 for it, but this doesn't even cover the hours I spent on it. So, unless it's a life-changing amount of money (in which case I'd be able to fund another project like this!), I am not really considering it.

Donate

  • https://antiwarcommittee.info/en/sunrise/#help

  • Monero (XMR):

    monero:88yzE5FbDoMVLXUXkbJXVHjNpP5S3xkMaTwBSxmetBDvQMbecMtVCXnQ44W6WRYsPGCPoAYp74ER9aDgBLYDGAAiSt2wu8a?tx_amount=0.050000000000&recipient_name=WofWca%20(https%3A//github.com/WofWca)&tx_description=Donation%20for%20Jump%20Cutter%20extension%20development

  • Bitcoin (BTC):

    bitcoin:bc1qdfz74882mlk64pj4ctpdegvxv9r7jgq8xs2qkxpv3gkv5xqygvgs0fyzm9




License: AGPLv3