This AI Performs Super Resolution in Less Than a Second

M Siemons : As a physicist, let me remark: the upsampling and making up of new pixels is very nice, but it is NOT super resolution. The resolution of an image is very well defined in terms of the diffraction limit (the largest spatial frequency transmitted by your imaging system), and there are ways to overcome this limit with super resolution imaging techniques (often in microscopy). AI does not improve this resolution, but should be seen as a smart interpolation of data/pixels.
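For a sense of the diffraction limit this comment refers to, a minimal sketch using the Abbe formula d = λ / (2·NA) (the function name and the example numbers are illustrative, not from the video):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable feature size (Abbe diffraction limit), in nm."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (~550 nm) through a high-NA (1.4) oil-immersion objective:
print(round(abbe_limit_nm(550, 1.4)))  # 196 nm
```

No amount of pixel upsampling changes this optical limit; super resolution microscopy techniques work around it physically, which is the distinction the comment draws.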

Bonnie Le : Sell your technology to Photoshop

CooleoBrad : Wow. "Zoom. Enhance." might actually be feasible now!


Asenstiven Odin Other : Krishna Mohan, 1 month ago (edited): Okay. Time to stop buying $1000 phones. Thank you, AI. Just don't kill me in the future.

John Doe : Enhance 224 to 176. Enhance. Stop. Move in. Stop. Pull out, track right, stop. Center and pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute. Go right. Stop. Enhance 57-19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.

Asenstiven Odin Other : kastakan, 1 month ago: Finally, we'll be able to watch Japanese porn! :D

Asenstiven Odin Other : Translation (RUS→EN):
Moby Motion: Incredible. Fast computation is crucial for this technology to see mainstream use, especially since many people may want to use it on video and get rid of 480p once and for all.
Vlad Kostin: For video you would need specialized, temporally coherent variants of this technique.
David Hagler: I assume that without temporal coherence you essentially see noise between two frames, reducing the overall apparent resolution to something close to the source material?
Joe Ch: I would call this revolutionary IF temporal and VR consistency were there. VR videos are especially expensive because you need to transmit (or render in the first place!) enough pixels for a 360° view per eye. They also carry a lot of redundant, potentially subtractable information, so they would benefit greatly from AI super resolution. But if it hallucinated different pixels per eye, that would make it unusable for VR content even if temporal coherence existed. NVidia's denoiser had the same problem: I tried using it on content rendered with Arnold, and the documentation itself says not to use it for video because it will flicker.
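The flicker problem described in this thread can be illustrated with a toy sketch (all names and numbers here are made up): if the upscaler's hallucinated detail is not a deterministic function of the input, the same frame comes out differently each time, which viewers perceive as flicker.

```python
import random

def upscale_pixel(value, seed=None):
    """Toy 'upscaler': hallucinates detail as small random noise."""
    rng = random.Random(seed)
    return value + rng.uniform(-5, 5)

frame = [100, 120, 140]  # the same content appearing in two consecutive frames

# Independent per-frame hallucination: identical inputs produce different
# outputs, which shows up as frame-to-frame flicker.
out_a = [upscale_pixel(v) for v in frame]
out_b = [upscale_pixel(v) for v in frame]
print(out_a == out_b)  # False (with overwhelming probability): flicker

# A temporally coherent variant makes the hallucination a deterministic
# function of the input, so identical frames stay identical.
out_c = [upscale_pixel(v, seed=v) for v in frame]
out_d = [upscale_pixel(v, seed=v) for v in frame]
print(out_c == out_d)  # True: no flicker
```

Real temporally coherent super resolution methods condition on neighboring frames rather than seeding a random generator, but the failure mode they prevent is the one sketched here.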

Jack Sparrow : You need a 12 GB GPU to upscale a 520x520 picture 2x. So for "1 sec" of 30 fps video it takes 24 s -> 1 minute takes 24 minutes.
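The commenter's arithmetic works out if one frame takes about 0.8 s to upscale (an assumption implied by their numbers, not stated in the video):

```python
SECONDS_PER_FRAME = 0.8  # assumed per-frame upscaling time
FPS = 30

def upscale_time_s(clip_seconds):
    """Wall-clock seconds needed to upscale every frame of a clip."""
    return clip_seconds * FPS * SECONDS_PER_FRAME

print(round(upscale_time_s(1)))   # 24   (1 s of video -> 24 s)
print(round(upscale_time_s(60)))  # 1440 (1 min of video -> 24 min)
```

Per-frame latency therefore multiplies quickly for video, which is why "less than a second" per image is fast for stills but still far from real time at 30 fps.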

Vladimir Mesherin : This is amazing, thank you for your work! This could be applied to building an AI architecture that gradually improves itself. One could also build a scientific-research approximation method based on your algorithm, I think... right?

Shall NotWither : This will definitely create legal problems for the photo industry and copyright in the short term, such as removing watermarks and enhancing images. Not to mention a future where you can download a low-resolution video and then upscale it to HD while enjoying copyright-free images, because the AI only approximated the video. Amazing.

davidsirmons : Anti-blur.

eternalchao11 : A.I. painter. Nice, it's about time.

TheCrazyFightClub : Wait. So you can render an LD video in HD?

Chillin 420 : Hell ya, this makes me think we will eventually get to the point where we can actually "enhance" images like in the movies, where you can zoom in and the photo won't just be pixels. AI is amazing!

21EC : oh wow... this in real time could be used for an old game like Doom. The game is really pixelated, so it could be pretty cool to play it with higher-res graphics.

sdziscool : All these people remaking waifu2x and not even citing it in their papers; all of the papers are post-2016 and no mention is ever made. Rather disappointing.

Melroy van den Berg : Does this mean we can play high-res NES, SNES games?

John Leonard : Mind blowing

Shaken0071 : This is how we can play Super Nintendo games again without stretching pixels. Awesome!

21EC : damn... that's beyond crazy! It's like magic. But when do you think we will see this being used in, for example, Google Images?

NPC #1337 : "ENHANCE"

William Taylor : It's weird why it's crooked, it's like they were shaking the camera. Bars aren't left then right then left; go, oh, bars are straight. Surely, even shaking the camera, the shutter speed doesn't matter; all the light hit at the same time. First you have implied hue direction, then gamut falloff, that's four times. I gotta think of some spooky shadow detection for the bars trick. You're really looking for a three-standard-deviation improvement. So you've got green 0 to 255; now 40 would be way too noticeable. It might be another object, but 120 has to be the same object. So you've got any 9 points with RGB. The fox has no improvement, but I can see how each wispy tuft should have a solid grey, 124,124,124. This is where gamut augmentation comes in. You can sharply take the 5-point parabolas and add them together. But now Mr. camera shaker let in 3D light.

Joe Garth : Already does this. I've been using it for the past year. :)

EvilStreaks : There's something uncomfortable about things being tailored in terms of detail-compromise to human perception.

lIIlIllIlIl : You can download YouTube videos in 144p and watch them in 4K

Rasto k0r5n4k : You could just point out which of the images shown were actually processed with the algorithm... otherwise it's misleading.

sevencolours : Use this on Star Wars IV, V, VI.

Cynosure : I watched it in 144p and was wondering why I couldn't see the differences

brachypelmasmith : when your eyesight is really bad so you can barely see any differences

Ville Pakarinen : I love how he always calls me a "fellow scholar" even though I don't do shit

SuperRand13 : So is curriculum learning what the deep fakes program is doing when I leave it on for 12 hours and it slowly makes the face turn from a blurred mess into the actual face I wanted?

Mic2000 : Could you please make a demo reel with a lot of pictures you upscale? This is just awesome.

Saunterblugget Hampterfuppinshire : Wait, can you imagine if we can somehow turn a 144p video to a 4k video. What are the possibilities!
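For scale, a quick back-of-the-envelope on the 144p-to-4K idea (the resolutions used are the standard nominal ones, 256x144 and 3840x2160, not figures from the video):

```python
def axis_scale(src_height, dst_height):
    """Per-axis enlargement factor between two video resolutions."""
    return dst_height / src_height

per_axis = axis_scale(144, 2160)   # 144p -> 4K UHD
print(per_axis)       # 15.0x enlargement per axis
print(per_axis ** 2)  # 225.0x more pixels, all of which must be invented
```

A 15x-per-axis upscale means 224 out of every 225 output pixels are hallucinated, which puts the "what are the possibilities" question in perspective.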

Harry Baba : Meanwhile Nvidia with their DLSS "It just works" and "the more you buy, the more you save"

skierpage : I don't think the AI uses related images to supersample; the paper says "All presented models are trained with the DIV2K [34] training set, which contains 800 high-resolution images." If that set includes lots of alligator pictures, then the results seem unfair 🙂. But what if the neural network recognized the kind of source picture and then supersampled based on related high-res images? Find higher-resolution pictures of fox fur or alligator skin or that particular village and supersample from training based on those.

MajkaSrajka : This is some CSI-level Bullshit! Now I wait for DNA identification through upscaled video :P

Marcus Lee : Amazing technology. If they can do this to video as well, instead of just still images, this technology would be perfect for reducing bandwidth consumption on the network.

last shadow : But how authentic are the little details revealed in the enhanced higher resolution? Does the AI make them up by comparing the image to other images in its database?

Hero Of AndromedaTV : With this technology, there is now no excuse to sell 4K Blu-rays with shit quality; this would suit many movies...

kastakan : Finally we'll be able to watch Japanese porn! :D

Alex4Lolz : Are there any practical projects, like a website or software, that use this, except for "Let's Enhance"? I've found their results to be pretty lackluster.

Simon Barth : Can I use the trained program? Or do I need to train it myself? I don't know much about this kind of stuff.

MetaPixel_ : So it's like waifu2x but for normies, nice

Guilherme sampaio de oliveira : Okay, but can it run at, like... 240 hertz?

Neoshaman Fulgurant : Next step: do it in under 0.2 ms on a Tegra GPU, or better, a Mali 400MP. Then gaming is changed forever. The step after that: take a low-quality render and turn it into CG.

nin10dorox : zoom in... ENHANCE

Nathan Gamble : Wow. It looks like "zoom and enhance" isn't so unrealistic after all.

PugmaPug : With this, you wouldn't need to upgrade cameras in the future.

Oscar Barda : I watched this video in 144p, and I have to say, I'm not impressed by the results of this algorithm.