Inference Result #13
Hi, thank you for your interest in our work!
Thank you for your reply! The data I've used is
Thanks a lot for sharing! I think the problem is caused by the resolution-change code in the red box. The current code randomly crops the high-resolution image to 160x320, which produces a different field of view from the events. The correct approach is to downscale the high-resolution image to 160x320 with an image-downsampling technique such as bicubic interpolation. For example, you can replace the red-box part with the following:
You might also need to add one line
Yeah! It works! Thank you for your help!
By the way, this means that directly feeding images at their original resolution into the model may not yield good results, possibly because the pre-trained model does not generalize well across resolutions. I'm not sure if I understand this correctly. I've seen your GEM work on resolution generalization, which is excellent, but it seems to handle only deblurring, not frame interpolation. I'm curious whether there are plans to release an advanced version of this work (EVDI++). I'm really looking forward to it!
Glad to hear that works :) |
Another question I'd like to ask: is it necessary to have both left- and right-view blurry images during inference? I noticed that using two left-view blurry images can also achieve deblurring.
Nice observation! Indeed, it's not mandatory to use the left and right images as inputs. In principle, you can use any two images, because the final result is fused in a hand-crafted manner. But we found that using both the left and right images gives the best overall performance :)
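For intuition, a hand-crafted fusion can be as simple as a per-pixel weighted average of the two candidate reconstructions. This is only an illustrative sketch of the idea, not EVDI's actual fusion rule; the function and variable names are hypothetical:

```python
import numpy as np

def fuse_reconstructions(recon_a, recon_b, w_a=0.5, w_b=0.5):
    """Hand-crafted fusion: weighted average of two reconstructions.

    recon_a, recon_b : float arrays in [0, 1] with identical shapes.
    w_a, w_b         : weights (scalar or per-pixel) that sum to 1.
    """
    return w_a * recon_a + w_b * recon_b

# Two candidate reconstructions of the same latent sharp frame,
# e.g. one derived from the left view and one from the right view.
recon_left = np.full((160, 320), 0.4)
recon_right = np.full((160, 320), 0.6)

fused = fuse_reconstructions(recon_left, recon_right)
print(fused.mean())  # 0.5
```

Because the fusion is hand-crafted rather than learned, the two inputs need not come from specific views, which is consistent with the observation above.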
Great work! However, I encountered an issue. When I test your provided example data with the command

`python Test.py --test_ts=0.5 --model_path=./PreTrained/EVDI-GoPro-Color.pth --test_path=./Database/GoPro-Color/ --save_path=./Result/EVDI-GoPro-Color/ --color_flag=1`

frame interpolation and deblurring work normally. But when I run it on the downloaded dataset, the results become strange: for the same scene, the deblurring effect is barely noticeable. I noticed that the resolutions of the two datasets differ. After adjusting the resolution, the deblurring improved, but the image colors became distorted. Could you please help me identify where the problem might be? Thank you!
[Screenshots: example.png, go-pro-test.png, go-pro-test-change-resolution.png]

Code:
```python
class test_dataset(Dataset):
    def __init__(self, data_path, num_bins, target_ts):
        '''
        Parameters
        ----------
        data_path : str
            path of target data.
        num_bins : int
            the number of bins in the event frame.
        target_ts : float
            target reconstruction timestamp, normalized to [0,1].
        '''
```