Hi everyone!

A few days ago I released Whishper, a new version of a project I’ve been working on for about a year now.

It’s a self-hosted audio transcription suite: you can transcribe audio to text, generate subtitles, translate subtitles, and edit them, all from one UI and 100% locally (it even works offline).

I hope you like it. Check out the website for self-hosting instructions: https://whishper.net

    • pluja@lemmy.world (OP) · 1 year ago

      No, it’s completely independent, it does not rely on any third-party APIs or anything else. It can function entirely offline once the models have been downloaded.

    • pluja@lemmy.world (OP) · 1 year ago

      Whishper uses faster-whisper in the backend.

      Simply put, it is a complete UI for Faster-Whisper with extra features like transcription translation, editing, download options, etc.
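
      For anyone curious, this is roughly what a Faster-Whisper call looks like in Python (the model size, file name and options here are just illustrative, not Whishper’s actual code):

      ```python
      from faster_whisper import WhisperModel

      # Load a quantized model on CPU; Whishper's real model/config may differ.
      model = WhisperModel("small", device="cpu", compute_type="int8")

      # Transcribe an audio file; segments are yielded lazily as they are decoded.
      segments, info = model.transcribe("episode.mp3", beam_size=5)

      print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
      for segment in segments:
          print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
      ```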

  • ares35@kbin.social · 1 year ago

    How does Whisper do at transcribing technical material, like for lawyers, doctors, engineers and whatnot? Or speakers with heavy accents?

  • micha@lemmy.sdf.org · 1 year ago

    Congratulations on the launch and thanks for making this open-source! Not sure if this supports searching through all transcriptions yet, but that’s what I’d find really helpful. E.g. search for a keyword in all podcast episodes.

    • pluja@lemmy.world (OP) · 1 year ago

      That’s a great idea! I’ll attempt to implement that feature when I find some time to work on it.

  • Axiochus@lemmy.world · 1 year ago

    Oh, awesome! Does it do speaker detection? That’s been one of my main gripes with Whisper.

    • pluja@lemmy.world (OP) · edited · 1 year ago

      Unfortunately, not yet. Whisper per se is not able to do that. Currently, there are few viable solutions for integration, and I’m looking at one in particular, but all the solutions I know about need a GPU for this.
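
      For a rough idea of what such an integration involves, here is a sketch using pyannote.audio, one commonly cited diarization library. This is only an illustration, not necessarily the solution that will end up in Whishper, and the model name, file names and token are placeholders:

      ```python
      from faster_whisper import WhisperModel
      from pyannote.audio import Pipeline

      # Diarization: figure out "who spoke when". Needs a Hugging Face token
      # to download the gated model; placeholder token below.
      pipeline = Pipeline.from_pretrained(
          "pyannote/speaker-diarization-3.1",
          use_auth_token="hf_...",  # placeholder
      )
      diarization = pipeline("interview.wav")

      # Transcription: figure out "what was said".
      model = WhisperModel("small", device="cpu", compute_type="int8")
      segments, _ = model.transcribe("interview.wav")

      # Naive merge: label each transcribed segment with the speaker whose
      # diarization turn contains the segment's midpoint.
      for seg in segments:
          mid = (seg.start + seg.end) / 2
          speaker = next(
              (label for turn, _, label in diarization.itertracks(yield_label=True)
               if turn.start <= mid <= turn.end),
              "unknown",
          )
          print(f"[{speaker}] {seg.text.strip()}")
      ```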

      • jherazob@kbin.social · 1 year ago

        VERY understandable, requiring a GPU would limit its application and spread. I hope a good GPU-less solution is found eventually.

  • Morethanevil@lemmy.fedifriends.social · 1 year ago

    I saw your project on Codeberg before, back when it was called Whisper+. Since the switch to Whisper+ it stopped working for me: I uploaded a file and the transcription never started. The old Whisper worked. I haven’t tried Whisper+ again in months.

    Maybe I’ll give it another try. Can I use bind mounts, or are there special permissions needed? Anyway, thanks for your work.

    • pluja@lemmy.world (OP) · 1 year ago

      Whisper+ had some problems; that’s why I rewrote everything. This new version should fix almost everything (though there may be some bugs I haven’t found yet).

      If you take a look at the docker-compose file, you’ll see it already uses bind mounts. The only special permission needed is for the LibreTranslate models folder, since that container runs as a non-root user with UID 1032.
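
      As a rough sketch of the relevant part (the service name, image tag and host paths below are placeholders, not copied verbatim from the real docker-compose.yml, so go by the actual file):

      ```yaml
      # Hypothetical excerpt for illustration only, not the actual Whishper compose file.
      services:
        libretranslate:
          image: libretranslate/libretranslate:latest
          volumes:
            # Bind mount for the translation models; the container-side path may
            # differ between image versions, so copy it from the real compose file.
            - ./whishper_data/libretranslate:/home/libretranslate/.local
      # Because the LibreTranslate container runs as UID 1032, the host folder must
      # be writable by that user, e.g.:
      #   sudo chown -R 1032:1032 ./whishper_data/libretranslate
      ```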

  • optissima@lemmy.world · 1 year ago

    I am looking for open-source live transcription software. Does this offer that, or is it only file-based?

  • Railcar8095@lemm.ee · 1 year ago

    Massive kudos. I needed something like this in the past and it would have been a blessing. Surely it will be for somebody else.

  • UberMentch@lemmy.world · edited · 1 year ago

    Would love to deploy this, but unfortunately I’m running server equipment that apparently doesn’t support MongoDB 5 (error message: “MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!”). I tried deploying with both 4.4.18 and 4.4.6 and can’t get it to work. If anybody has some recommendations, I’d appreciate hearing them!

    Edit: Changed my Proxmox VM’s processor type to “host”, which fixed my issue.
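
    (For anyone else hitting this: the equivalent from the Proxmox shell is something like `qm set <vmid> --cpu host`, and the VM needs a full stop and start before the new CPU type, and thus AVX, is visible to the guest.)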

    • pluja@lemmy.world (OP) · 1 year ago

      I’m glad you were able to solve the problem. I’ll repeat the comment I made to another user with the same problem:

      Didn’t know about this problem. I’ll try to add a MariaDB alternative database option soon.

  • tvcvt@lemmy.ml · 1 year ago

    This is excellent timing for me. I was just taking a break from working on setting up whisper.cpp with a web front end to transcribe interviews. This is a much nicer package than I ever had a chance of pulling together. Nice work!

  • orizuru@lemmy.sdf.org · 1 year ago

    Congrats, and thank you for releasing this!

    Maybe there are a couple of personal projects I could use it for…

  • Konraddo@lemmy.world · 1 year ago

    Just tried this out but couldn’t get it to work until downgrading mongo to 4.4.6, because my NAS doesn’t have AVX support. But even then, mongo stays unhealthy. No idea why.

    • pluja@lemmy.world (OP) · 1 year ago

      Didn’t know about this problem. I’ll try to add a MariaDB alternative database option soon to solve this.