What is g90ak? How do I upscale/clean videos?

Creating this since I get asked a lot what my process is for video cleaning.

The funny history behind this is - I joined this forum having no idea what to expect and no intention to participate. So when I picked my username I just mashed on my keyboard. First 5 characters happened to be g90ak.

Fast forward a few months and I picked up a Handy and was really starting to appreciate the community that was built here and wanted to begin giving back in some way - I didn’t have enough time to script myself unfortunately, but did have electricity and good GPUs. Asked the mods to shorten my name, whipped up a quick avatar, and got to work.

The term “g90ak’ed” was literally a personal tag I used to keep track of what I’d processed, so I wouldn’t confuse original and processed versions on my hard drive. I started sharing with the community and things have just kinda gone from there.

There is no secret sauce to my processed videos. My workflow consists of:

  • Avisynth - cropping, color correction, levels correction, deinterlacing, edge enhancement. I mostly use SmoothTweak, Hysteria, QTGMC, SmoothLevels, AutoLevels, Spline16Resize, and a few more for special use cases.
  • Topaz Video Enhance AI 2.6.4 - upscaling using Artemis HQ for live video, or Proteus at 50/15/15/15/0/0 for anime or CGI content. Set Add Grain to off, and export into a lossless .mp4 container. The other models aren’t worth messing around with. Try not to run multiple AI models in series either - it doesn’t help things.
  • Flowframes - framedoubling/HFR’ing. For videos 4K and under, export at h.264 with a CRF quality of 19. Set scene changes to “duplicate last frame”. Use RIFE model 2.3 (slowest, but fewest artifacts).
  • Xmedia Recode - simply to rejoin the original clip’s audio to the processed video.
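For anyone who hasn’t touched Avisynth before, a step-1 script might look roughly like this. This is a hedged sketch only - the filename, crop amounts, and correction values are placeholders, not my actual per-video settings, and every video needs its own tuning:

```
# Rough sketch - filter choices and values are illustrative
FFVideoSource("source.mp4")        # load the clip (FFMS2 plugin)
Crop(8, 0, -8, 0)                  # trim black side borders
QTGMC(Preset="Slower")             # deinterlace (interlaced sources only)
Tweak(sat=1.1, cont=1.05)          # mild color/contrast correction
Levels(16, 1.0, 235, 0, 255)       # expand TV-range levels to full range
Spline16Resize(1920, 1080)         # clean resize before handing off to VEAI
```

Save it as a .avs file and feed that straight into the next tool in the chain.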

For VR videos, I skip Flowframes since they’re already 50-60fps, and use Ripbot264 to encode into an h.265 .mp4 container, single pass at CRF 19. Do not use h.264 for VR videos - most hardware acceleration won’t work at those resolutions, making the video very hard to play back.

That’s it. There is no strict process I follow, as each video needs its own settings based on inherent issues or lack thereof. Things I take into consideration are digital noise, issues with white balance, logos that will interfere with framedoubling, compression artifacts, etc.

This is a very CPU- and GPU-intensive process. My rig will consume between 350 and 550 watts continuously while processing a video. A half-hour video can take between 5 and 8 hours depending on complexity (10th-gen i9 @ 5GHz + RTX 3080). I would not recommend getting into this without at least a 6th-gen i7 CPU and a 1080 Ti, unless you’re only doing clips of 5 minutes or shorter.

Learning how to use Avisynth is the hardest part. There’s a good wiki and some tutorials out there; I won’t be able to recreate them here. You can skip it and still get results that are pretty good. However, that’s really the big difference between my versions and most of the others.

74 Likes

Requesting clarification on your use of x265 vs x264 here. I like your encodes, and I like to play with encodes and upscales myself as well. Props for using RF19. One thing I always found with upscales is detail or sharpness loss on x265 compared to x264 for some reason in other software, maybe less so with hentai 2D OVAs.

For sure.

I prefer h.264 as well, however many players have hardware acceleration designed around h.265 (since that is the “modern” codec). You’ll notice that high-resolution files may have a very hard time playing back in an h.264 wrapper vs h.265. That’s why I use h.265 for VR or anything more than 5K.

You shouldn’t be seeing a quality difference. h.265 should be better quality at the same bitrate/filesize, but as mentioned, the decode chain for h.265 may be different on your system, so it may be going through a less accurate decode, resulting in the difference.

Another thing to think about: you’ll always want to use software encoding. Avoid using NVENC or Quick Sync for encoding. Slower, but better quality.

that story was beautiful

People here asking what is g90ak, but not how is g90ak. How are you?

8 Likes

I’m good buddy :slight_smile: Living the dream. Working on some new scripts too. Hopefully good stuff for the community.

6 Likes

I recently discovered your work and was bummed that I missed out on so much of your past work by not downloading it in time.
Now I just follow you and make sure I check in to see what content you’re doing :rofl:
Thank you for all that you do. The community is better for it!

1 Like

I just wanted to pop in and say that I truly appreciate what you bring to this community. Not just your upscales, which are always absolutely gorgeous, but you’re an all-around outstanding community member who brings a lot of positivity to this place. Thank you.

2 Likes

Thanks for sending me here @g90ak! I was just using topaz for everything and then trying to compress a 200+gb file which just doesn’t turn into something that anyone would ever download. Only really interested in upscaling cause I use a 65" 4K tv.
Couple Questions
Which AI model do you use in flow frames?
Is the last step necessary as I seem to be getting results that sound fine?
With regards to Topaz, does “denoise” mean Proteus set to manual with the slider cranked all the way up, processed separately from the upscale? (I’ve been doing it that way with a bit of antialiasing.)

Thanks again for making this guide!

Thanks for the questions :slight_smile: Happy to try and answer.

Flowframes - RIFE 2.3 is the most accurate. The subsequent models are faster, but have more artifacts.

Last step - I think you may be referring to the audio joining step? VEAI kinda sucks at passing through some types of audio and will actually re-encode them sometimes from what I understand. I don’t like taking chances, so I just join the original audio for maximum preservation.

Not sure about your denoise question. If you’re dealing with anime or CGI content, use Proteus with settings of 50/15/15/15/0/0 as a starting point. I do not recommend doing multiple runs of VEAI on a single file except in extreme cases, and even then I would recommend using Avisynth to do some of the heavy lifting rather than multiple VEAI passes.

1 Like

I also get a lot of questions - why do color/levels correction?

I think most people do not want to learn how to use Avisynth. It requires some brains and effort. Most people want a 1-click solution, it seems. This is fine, but you won’t be able to get results like this.

90% of the improvement below is from Avisynth, not VEAI. Yes, VEAI will get things to look crisper and sharper, but won’t do extreme correction like some files need. Just sayin’ - results are based on effort :slight_smile:
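To make that concrete, here’s a minimal sketch of what a levels/color fix can look like in a script. The numbers are placeholders for a hypothetical washed-out source - always tune by eye with a histogram:

```
# Illustrative values for a hypothetical faded source - not universal settings
FFVideoSource("washed_out_source.mp4")
Levels(32, 1.0, 220, 0, 255, coring=false)  # stretch crushed blacks/whites to full range
Tweak(hue=-5, sat=1.15)                     # correct a slight color cast, restore saturation
# Histogram("levels")                       # uncomment while tuning to inspect the waveform
```

Two lines of correction like this, done *before* VEAI, is where most of the visible improvement comes from.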

3 Likes

So I had been doing a pass with 100 reduce noise and 15-20 anti/deblur to get a better-looking image, then plugging that video back in and running an upscale using Artemis.

Also, another question about Flowframes: have you ever gotten an error from it saying the file type might not be supported? I’ll post a pic next time it pops up, but it gets stuck figuring out the frame count.

I find that stacking AI models on top of each other can come up with some funky results, but if it looks better to you, that’s great. Sounds like you’re dealing with a pretty noisy source - I would go with QTGMC’s EZDenoise option instead of an AI pass. It’s very effective in my experience.
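For reference, EZDenoise is a parameter on the QTGMC call itself, so it rides along with the deinterlace pass. A hedged sketch - the strength value is a placeholder to tune per source, and InputType=1 assumes your source is already progressive:

```
# EZDenoise strength is per-source; 2.0 is just a starting guess
FFVideoSource("noisy_source.mp4")
QTGMC(Preset="Slow", InputType=1, EZDenoise=2.0, DenoiseMC=true)
```

Drop InputType=1 if the source is actually interlaced and you want the deinterlace too.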

Nope, I typically only feed it files that are output from VEAI, or wrapped in an Avisynth script, so I haven’t come across that issue.

1 Like

Are you still using classic AviSynth?
From the little research you inspired me to do it seems you should update to AviSynth+ (no obvious downsides, x64, multithreading, still maintained…).
I’ll probably take a look at Vapoursynth first as I hope it’s easier to get into, do you have any thoughts on that?

Thanks for everything you give to this community - I hope that clarifying the color correction and further image-enhancing aspects of your pipeline will lead me to results that finally get my 3070 mobile to do what it was chosen for :slight_smile:

Yeah, I use avisynth+ :slight_smile:
Never tried Vapoursynth - heard good things about it.

And take it easy on that laptop!!! Upscaling/HFR’ing is a pretty beefy, constant load that most laptops aren’t designed for, especially gaming ones, which are designed more for peaky loads.

How do you keep the file size down? I cleaned up a 1080p video with Topaz (cut it down from 1hr to 18 minutes), keeping it at 1080p, and it’s 20 gigabytes! The original was 1GB haha. Can’t imagine what would have happened if I upscaled to 4K…

Out of Topaz, I do lossless. Yes, the files are fucking huge. But once you run it through Flowframes with a CRF of 19, it compresses down to a normal size. You want to compress the file as few times as you can during processing, so if Topaz is spitting out huge files, you’re doing it right.

If you aren’t going to HFR and are just going to use Topaz, you can use something like Handbrake to compress Topaz’s output files. Or simply have Topaz compress them on output and save yourself a step. Topaz can be a bit funky with its output compression - strangely large files sometimes - so it’s something to keep an eye on.

Dude, handbrake literally saved the day. I am now in the process of upscaling/cleaning up a metric shitton of videos now :smile:

1 Like

Can you provide the exact settings for Proteus for Anime upscaling?
They’ve added more parameters, so for:
50/15/15/15/0/0
I have two parameters left at the end :person_shrugging:

It seems there’s an Estimate button as well that’s pretty neat for generating some values.

Leave those at 0. The Estimate button doesn’t work too well. Never add noise. Leave Recover Original Detail at the default.

Not sure if you’re doing 2D anime - but for those I do a lot of work before Topaz - edge enhancement, deinterlacing, etc. via Avisynth over several passes. LMK if you need details on that.
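A bare-bones version of that anime prep pass might look like this. Hedged sketch only - it assumes an interlaced source and Hysteria at its defaults; real sources usually need several passes and per-episode tuning:

```
# Placeholder filename and resolution - adjust per source
FFVideoSource("anime_episode.mp4")
QTGMC(Preset="Slow")        # deinterlace
Hysteria()                  # line darkening / edge enhancement at defaults
Spline16Resize(1440, 1080)  # normalize to a clean resolution before VEAI
```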

Lastly, if you can get your hands on Topaz 2.6.4 - it works a lot better. Slower, but better. Especially if you’re going to hand off to Flowframes - the uncompressed .mp4 wrapper is much easier to deal with. Not sure why they dropped so much export control/functionality in the latest version.