Creating this since I get asked a lot what my process is for video cleaning.
The funny history behind this is - I joined this forum having no idea what to expect and no intention to participate. So when I picked my username I just mashed on my keyboard. First 5 characters happened to be g90ak.
Fast forward a few months and I picked up a Handy and was really starting to appreciate the community that was built here and wanted to begin giving back in some way - I didn’t have enough time to script myself unfortunately, but did have electricity and good GPUs. Asked the mods to shorten my name, whipped up a quick avatar, and got to work.
The term “g90ak’ed” was literally a personal tag for me to keep track of what I processed, so I wouldn’t get confused on my hard drive between original and processed versions. I started sharing with the community and things have just kinda grown from there.
There is no secret sauce to my processed videos. My workflow consists of:
Avisynth - cropping, color correction, levels correction, deinterlacing, edge enhancement. I mostly use SmoothTweak, Hysteria, QTGMC, SmoothLevels, AutoLevels, Spline16Resize, and a few others for special use cases (see the sketch after this list).
Topaz Video Enhance AI - upscaling, using Artemis High Quality (AHQ) for live-action video and Proteus at 50/15/15/15/0/0 for anime or CGI content. “Add Grain” off, and export losslessly into an .mp4 container. The other models aren’t worth messing around with.
Flowframes - framedoubling/HFR’ing. Export as h.264 at a quality of 19, and set scene change handling to “duplicate last frame.”
Xmedia Recode - simply to rejoin the original clip’s audio to the processed video.
For VR videos, I skip Flowframes since they’re already 50-60fps, and use Ripbot264 to encode into an h.265 .mp4 container, 2-pass with the “slow” preset.
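To give a flavor of the Avisynth step, here’s a bare-bones sketch - not my actual script, just the general shape for a hypothetical interlaced SD source. Every number is a placeholder you’d tune per video, and QTGMC, Hysteria, and the SmoothAdjust functions are plugins/scripts you install separately along with their dependencies:

```
# Load the source via L-SMASH Works (FFmpegSource2/AVISource also work)
LWLibavVideoSource("source.mp4")
ConvertToYV12()

AssumeTFF()                    # declare field order before deinterlacing
QTGMC(Preset="Slower")         # deinterlace to double-rate progressive

Crop(8, 0, -8, 0)              # trim black side borders (placeholder values)
Levels(16, 1.0, 235, 0, 255, coring=false)  # stretch levels; SmoothLevels does the same with less banding
Hysteria()                     # line darkening / edge enhancement for anime
Spline16Resize(960, 720)       # clean resize before handing off to VEAI
```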
That’s it. There is no strict process I follow, as each video needs its own settings based on inherent issues or lack thereof. Things I take into consideration are digital noise, issues with white balance, logos that will interfere with framedoubling, compression artifacts, etc.
This is a very CPU- and GPU-intensive process. My rig will consume between 350 and 550 watts continuously while processing a video. A half-hour video can take between 5 and 8 hours depending on complexity (10th-gen i9 @ 5 GHz + RTX 3080). I would not recommend getting into this without at least a 6th-gen i7 CPU and a 1080 Ti unless you’re only doing clips of 5 minutes or shorter.
Learning how to use Avisynth is the hardest part. There’s a good wiki and some tutorials out there; I won’t be able to recreate them here. You can skip it and still get results that are pretty good, but that’s really the big difference between my versions and most of the others.
Requesting clarification on your use of x265 vs x264 here. I like your encodes, and I like to play with encodes and upscales myself as well. Props for using RF19. One thing I’ve always found with upscales is detail or sharpness loss on x265 compared to x264 for some reason in other software, maybe less so with 2D hentai OVAs.
I prefer h.264 as well; however, many players have hardware acceleration designed around h.265 (since that is the “modern” codec). You’ll notice that high-resolution files may have a very hard time playing back in an h.264 wrapper vs h.265. That’s why I use h.265 for VR or anything more than 5K.
You shouldn’t be seeing a quality difference. h.265 should be better quality at the same bitrate/filesize, but as mentioned, the decode chain for h.265 may be different on your system, so it may be going through a less accurate decode, resulting in the difference.
Another thing to think about: you’ll always want to use software encoding. Avoid NVENC or Quick Sync for encoding - slower, but better quality.
I recently discovered your work and was bummed that I missed out on so much of your past stuff by not downloading it in time.
Now I just follow you and make sure I check in to see what content you’re working on.
Thank you for all that you do. The community is better for it!
I just wanted to pop in and say that I truly appreciate what you bring to this community. Not just your upscales, which are always absolutely gorgeous, but you’re an all-around outstanding community member who brings a lot of positivity to this place. Thank you.
Thanks for sending me here @g90ak! I was just using Topaz for everything and then trying to compress a 200+ GB file, which just doesn’t turn into something that anyone would ever download. I’m only really interested in upscaling because I use a 65″ 4K TV.
Couple questions:
Which AI model do you use in Flowframes?
Is the last step necessary? I seem to be getting results that sound fine.
With regard to Topaz, does “denoise” mean Proteus set to manual with the slider cranked all the way up, processed separately from the upscale? (I’ve been doing it that way with a bit of anti-aliasing.)
Flowframes - RIFE 2.3 is the most accurate. The subsequent models are faster, but have more artifacts.
Last step - I think you may be referring to the audio joining step? VEAI kinda sucks at passing through some types of audio and will actually re-encode it sometimes, from what I understand. I don’t like taking chances, so I just join the original audio for maximum preservation.
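If you’re already comfortable with Avisynth, the built-in AudioDub() can do the same join - a quick sketch with placeholder filenames; you’d still feed the result to your encoder of choice:

```
video = LWLibavVideoSource("upscaled_output.mp4")   # processed video
audio = LWLibavAudioSource("original_source.mp4")   # untouched original audio
AudioDub(video, audio)                              # pair them back together
```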
Not sure about your denoise question. If you’re dealing with anime or CGI content, use Proteus with settings of 50/15/15/15/0/0 as a starting point. I do not recommend doing multiple runs of VEAI on a single file except for extreme cases, and even then I would recommend using Avisynth to do some of the heavy lifting rather than VEAI with multiple passes.
I also get a lot of questions - why do color/levels correction?
I think most people do not want to learn how to utilize Avisynth. It requires some brain and effort. Most people want a 1-click solution, it seems. That’s fine, but you won’t be able to get results like this.
90% of the improvement below is from Avisynth, not VEAI. Yes, VEAI will make things look crisper and sharper, but it won’t do the extreme correction some files need. Just sayin’ - results are based on effort.
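As a concrete (hypothetical) example of what that correction looks like in Avisynth - the numbers here are placeholders you’d read off a histogram of your own source:

```
LWLibavVideoSource("washed_out_source.mp4")
ConvertToYV12()
Levels(30, 1.0, 220, 0, 255, coring=false)  # blacks sitting at ~30 and whites at ~220
                                            # get stretched back to full range
Tweak(sat=1.15)                             # mild saturation boost to taste
# Histogram(mode="levels")                  # uncomment to eyeball levels while tuning
```

SmoothLevels/SmoothTweak from the SmoothAdjust plugin are the smoother equivalents of Levels/Tweak I listed above.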
So I had been doing a pass with Reduce Noise at 100 and anti-alias/deblur at 15-20 to get a better-looking image, then plugging that video back in and running an upscale using Artemis.
Also, another question about Flowframes: have you ever gotten an error from it saying the file type might not be supported? I’ll post a pic next time it pops up, but it gets stuck figuring out the frame count.
I find that stacking AI models on top of each other can come up with some funky results, but if it looks better to you, that’s great. Sounds like you’re dealing with a pretty noisy source - I would go with QTGMC’s EZDenoise option instead of an AI pass. It’s very effective in my experience.
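Something along these lines - a sketch assuming an interlaced source, with the strength value a placeholder to tune:

```
AssumeTFF()
# EZDenoise drives QTGMC's motion-compensated temporal denoiser; higher = stronger.
# DenoiseMC=true improves denoise quality at a speed cost.
QTGMC(Preset="Slower", EZDenoise=2.0, DenoiseMC=true)
```

For an already-progressive source you can add InputType=1 so QTGMC cleans up without deinterlacing.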
Nope, I typically only feed it files that are output from VEAI or wrapped in an Avisynth wrapper, so I haven’t come across that issue.
Are you still using classic AviSynth?
From the little research you inspired me to do, it seems you should update to AviSynth+ (no obvious downsides: x64, multithreading, still maintained…).
I’ll probably take a look at Vapoursynth first as I hope it’s easier to get into, do you have any thoughts on that?
Thanks for everything you give to this community - I hope that clarifying the color correction and further image-enhancing aspects of your pipeline will lead me to results that finally get my 3070 mobile to do what it was chosen for.
Yeah, I use Avisynth+.
Never tried Vapoursynth - heard good things about it.
And take it easy on that laptop!!! Upscaling/HFR’ing is a pretty beefy, constant load that most laptops aren’t designed for - especially gaming ones, which are built more for short, bursty loads.