How to create subtitles for a scene (even if you don't understand the language)

You forgot to include the “IPython” module in your command to install the needed modules.


ChatGPT seems to combine two lines into one when they’re logically connected, which makes it pretty annoying sometimes. Does anyone else have this problem? I’ve tried different prompts, but it just keeps happening.

Did something go wrong? When I pull in the subtitles in step 4, they all seem to be just a dot. At first I thought it was because the video I chose had too much background noise, but I tried another video and got the same thing.

If something is wrong, any idea how to fix the issue, or what I should try redoing?

No, you didn’t do anything wrong. It’s normal to have a dot (“.”) at step 4.
The first 4 steps only identify where there is voice in the audio, without trying to figure out what is said.
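
For example, right after step 4 the .srt entries look something like this (timestamps made up for illustration), with a dot as the only text for each detected voice segment:

1
00:00:12,400 --> 00:00:14,050
.

2
00:00:15,200 --> 00:00:18,700
.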

It’s in steps 5 to 11 that we “replace” the dots with a transcription of the audio (using AI), in the original language (e.g. Japanese).

In steps 12 to 21, we translate the original language texts to English.

Oh, okay. I was getting paranoid that I had done something wrong. Thanks! Rereading the tutorial, I just realized that after step 2 it says “entries for each voice detected (with no text)”. I missed that.

Curious if anyone has any new best practices for translating Japanese to English for JAV subtitles?

I’ve been using whisper-faster with the large-v2 model. The results are okay for some scenes, but for others it’s just the same odd sentence over and over and over. I’m guessing it’s picking up some background static or something.

I haven’t found anything that works perfectly yet. There are some models/tools (stable-ts, whisper-faster, whisperX, etc.) that do a better job than “vanilla” Whisper for the timing of subtitles, but they still have trouble with voice detection and repetition, as you said. The nice part is that whisper-faster can now be used directly in Subtitle Edit (i.e. Purfview’s Faster-Whisper seems to be based on whisper-faster).
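
For anyone who wants to play with it outside Subtitle Edit, here is a minimal sketch of calling the faster-whisper Python library with its built-in VAD filter and with condition_on_previous_text disabled, which sometimes reduces the “same sentence repeated over and over” problem (the file name and parameter values are only examples):

# Minimal sketch: transcribe Japanese audio with faster-whisper,
# using the built-in Silero VAD filter to skip parts with no detected voice.
from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")

segments, info = model.transcribe(
    "scene.wav",                       # example file name
    language="ja",
    vad_filter=True,                   # drop portions where no voice is detected
    vad_parameters={"min_silence_duration_ms": 500},
    condition_on_previous_text=False,  # helps against repeated/looping lines
)

for seg in segments:
    print(f"[{seg.start:.2f} -> {seg.end:.2f}] {seg.text}")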

I tested large-v3 a little bit. For Japanese, it didn’t seem better than large-v2.

As for the translation, I’m trying to use a local LLM (since it’s harder to bypass ChatGPT’s censoring now). The one that I prefer so far is TheBloke/Orca-2-13B-SFT_v5-GPTQ · Hugging Face, with a prompt like this:

I need you to translate the subtitles of a porn movie from Japanese to English.
I have the following requirements.

Requirements:
{
1- The target audience is adults, so it can contain explicitly sexual concepts.
2- Try to spot any sentences containing double meanings or wordplay.
3- All the provided lines are said by a girl to a guy.
4- Only translate the meaning of the original text. Don't expand. Don't give explanations.
}

Please translate each line one by one:
{
[00000] 阿修羅?
[00001] わりとくいるよな
[00002] なぁごめんやけどさ、あと10分だけ泳がしてくれへん?
[00003] 最近さああ
[00004] 仕事が忙しくて
[00005] 全然暗号できてへんね〜んか
[00006] なぁ、ごめんやからお願い
[00007] この通り
[00008] っていうの、自分だけ
}

But it’s not working very well. For the same input, it’s clear that ChatGPT is a lot better. Hopefully, Llama 3 will be even better than version 2.
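
In case anyone wants to try the same setup locally, a rough sketch with the Hugging Face transformers library looks like this (it assumes auto-gptq/optimum is installed; the prompt file and generation settings are placeholders, not a definitive recipe):

# Rough sketch: load the GPTQ model and generate a translation for the prompt above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Orca-2-13B-SFT_v5-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# translation_prompt.txt would contain the requirements + numbered lines shown above
prompt = open("translation_prompt.txt", encoding="utf-8").read()

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))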


Appreciate the updated thoughts! I found the same with large-v3.

Sounds like we just need to let the AI models continue to evolve. It’s still remarkable how much we can do today!

Thanks for your guide!
I encountered some problems when proceeding to step 10. Can you help me?

--- subtitles.wavchunks2srt ---
.\kavr-074.temp.perfect-vad.srt: Loading 218 .srt file from folder kavr-074.temp.perfect-vad_wav_chunks_9FA13526...
.\kavr-074.temp.perfect-vad.srt: Exception occured: System.FormatException: 28: Invalid format for start/end time:
   at FunscriptToolbox.Core.SubtitleFile.<ReadSrtSubtitles>d__3.MoveNext()
   at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
   at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
   at FunscriptToolbox.Core.SubtitleFile..ctor(String filepath, IEnumerable`1 subtitles)
   at FunscriptToolbox.Core.SubtitleFile.FromSrtFile(String filepath)
   at FunscriptToolbox.SubtitlesVerb.VerbSubtitlesWavChunks2Srt.Execute()

--- subtitles.gpt2srt ---

--- subtitles.srt2gpt ---

--- subtitles.singlewav2srt ---
.\kavr-074.temp.perfect-vad.whisper.wav: Skipping because can't find a file named 'kavr-074.temp.perfect-vad.whisper.srt' and 'kavr-074.temp.perfect-vad.whisper.offset' and 'kavr-074.temp.perfect-vad.whisper.chunks.srt'.

I’m using FunscriptToolbox 1.2.7.

Hmm, one of the 218 .srt files is invalid somehow, but I didn’t write the offending file name in my exception handling.

Could you zip all .srt files in the folder “kavr-074.temp.perfect-vad_wav_chunks_9FA13526” and link it here?
You might have to rename the file to .zip.funscript or something like this.

Or create an issue here and link the file there.

I was legit thinking of learning Japanese just to enjoy JAV scenes more, lol. Funny that I’ve been playing the Yakuza games for years and haven’t learned a single bit yet, lol.

Thank you for your reply.
kavr-074.temp.perfect-vad_wav_chunks_9FA13526srt.zip.funscript (53.1 KB)
This is the .srt zip link.

Ok, there is a bug in my .srt handling if the text is only a number, e.g.:

7                                <= entry index (part of the SRT format)
00:00:16,240 --> 00:00:19,180   <= timing (part of the SRT format)
2                                <= subtitle text (just a number)
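
(Just to illustrate the ambiguity, in a Python-style sketch rather than the actual FunscriptToolbox C# code: a parser that treats any digits-only line as the start of a new entry will choke here, so it has to peek ahead and only treat the digits as an index when a timing line follows.)

import re

TIMING = re.compile(r"\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3}")

def parse_srt(lines):
    """Tiny .srt parser that tolerates subtitle text that is only a number."""
    entries, i = [], 0
    while i < len(lines):
        line = lines[i].strip()
        # Digits are an entry index ONLY if the next line is a timing line;
        # otherwise they are subtitle text that just happens to be a number ("2").
        if line.isdigit() and i + 1 < len(lines) and TIMING.match(lines[i + 1].strip()):
            timing = lines[i + 1].strip()
            text = []
            i += 2
            while i < len(lines) and lines[i].strip():
                text.append(lines[i].strip())
                i += 1
            entries.append((timing, "\n".join(text)))
        i += 1
    return entries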

I’ll fix it. In the meantime, replace these files in your folder:

135___9fa13526__0326505_1720__wav-subs.srt (568 Bytes)
176___9fa13526__0703714_442__wav-subs.srt (39 Bytes)

Thank you, it works.

Has your workflow improved or changed since then, if you don’t mind me asking? I’m currently diving into the subtitle rabbit hole and find myself using some combination of ChatGPT + DeepL + Gemini (with the occasional “unable to translate due to explicit content” warning), and using Subtitle Edit with large-v2. For some reason, the Audio-to-Text “Auto adjust timings” setting throws some of the timings off, so I have that turned off. Not sure if you experienced the same or experimented with that.

I was also experiencing inaccurate timings with what I’m guessing are called hallucinations, simply due to the model? Basically, issues in the timings caused by breaks/brief pauses in the audio, or the audio not being loud enough for it to pick up the words… (I believe there is something called a VAD filter to help with this? But I’m a total noob when it comes to using the command line and its various parameters.)

I know stable-ts is also included in the software (Subtitle Edit) to help remedy these issues, but the efficiency of whisper-faster is really a game changer.
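
(For reference, stable-ts can also be run directly in Python without touching many parameters; a minimal sketch, with placeholder file names, looks roughly like this:)

# Minimal stable-ts sketch: transcribe with Silero VAD enabled and export an .srt.
import stable_whisper

model = stable_whisper.load_model("large-v2")
result = model.transcribe("scene.wav", language="ja", vad=True)
result.to_srt_vtt("scene.srt", word_level=False)  # segment-level timings only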

I’m currently working on a new ‘subtitleV2’ verb in FunscriptToolbox that will be more automated and flexible (see the subtitleV2 branch on GitHub).

I’m testing other tools/methods right now:

For me, unfortunately, as long as the translation is not helped by a ‘computer vision AI’ that can understand what’s going on in the scene, manual work will always be needed for some part of the process.


Has anyone tried Claude-3 and Gemini-Pro from Poe? They have fewer restrictions than the ones on the official websites. I translated the Japanese into Traditional Chinese, and the results were great, especially with Claude: its output can be used directly.

GUI subtrans is also a good choice. You need to have a GPT-3.5 API key, but the results are not as good as Claude-3 and Gemini-Pro.

Awesome, can’t wait to see it deployed (if you decide to fully publish it!). Your software is great.

I’m not too knowledgeable about these things, but could you point me to the ‘draft’ version of Silero VAD? Is that one of its models?
Also, I just tried Mistral, and I have to say it’s pretty concise… Not sure if it’s due to me refining my prompt or just the nature of the model. Thanks for the share!

This gave me an idea, maybe you could use something like this:

  1. screencap/movie thumbnailer (one with ffmpeg)
  2. feed the images into an img2txt stable diffusion prompt (Prompt Text Prediction from a Generated Image) or something like this, or this?
  3. feed result from 2 into your JSON prompt
  4. profit?

Just an idea… you have probably already thought of something like this to help automate the process. Let me know your thoughts, otherwise, I’m back to experimenting :wink:
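
For what it’s worth, steps 1 and 2 could be roughed out like this (a sketch only: it assumes ffmpeg is on the PATH and uses the BLIP captioning model purely as a stand-in for the img2txt step; none of this is part of FunscriptToolbox):

# Sketch of steps 1-2: grab a thumbnail every 30 s with ffmpeg, then caption it.
import glob
import subprocess
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# 1. screencaps with ffmpeg (one frame every 30 seconds)
subprocess.run(
    ["ffmpeg", "-i", "scene.mp4", "-vf", "fps=1/30", "thumb_%04d.jpg"],
    check=True,
)

# 2. img2txt: generate a short caption for each thumbnail
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

for path in sorted(glob.glob("thumb_*.jpg")):
    inputs = processor(Image.open(path).convert("RGB"), return_tensors="pt")
    caption = processor.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True)
    print(path, "->", caption)  # 3. these captions could then be fed into the translation prompt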

Yes, I did. Great suggestion. I like that Poe allows NSFW (even if the underlying models from other companies might refuse to answer).
All 3 models of Claude-3 (Haiku, Sonnet and Opus) are great. They give good translations, but Opus is a lot more cooperative than Haiku & Sonnet.
Gemini-Pro, and even GPT-4, were not as good.

This was an option for my old process. Now I simply use Whisper to do the draft; it does a better job than Silero VAD.

Mistral Next was too concise; it didn’t translate the ‘whole JSON’ that I gave it to translate. Mistral Large translated everything. But I don’t know about other use cases, though.

My new process allows me to ‘fake’ having vision by telling the AI who’s talking (which could be done in the future with a good speaker diarization AI), or by giving a general description of what will happen on screen in the next portion of the video, to see if it improves the translation.
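
To give an idea of what that extra context can look like, here is a made-up example of the kind of information that could be prepended to a translation request (the exact format used by FunscriptToolbox may differ):

Context for the next part of the video (illustrative only):
{
1- Speakers: the WOMAN (older sister character) talks to the MAN (POV viewer), who stays silent.
2- Scene: they are alone in a bedroom; she is teasing him about being late.
}

Each line to translate can then be tagged with its speaker, e.g. "[00042] WOMAN: ...", instead of being sent as bare text.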


Ok, this process is now obsolete.

I created a new topic with an updated tool/process: here