Hello, and welcome to our data and code :).

You can open the PowerPoint file "EEG2Video_reconstruction_results.pptx" to view the reconstructed GIF animations!

As the raw EEG data (.cnt files) are too large to upload, this supplementary material contains only one EEG NumPy file, which holds the first block of EEG signals from one subject after preprocessing, i.e., down-sampling to 200 Hz and filtering with a 0.1 Hz - 100 Hz band-pass filter.
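The preprocessing described above can be sketched as follows. Only the target rate (200 Hz) and pass-band (0.1-100 Hz) are stated, so the raw sampling rate (1000 Hz) and the filter order (4) below are our assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

# Assumed raw sampling rate; the stated target rate is 200 Hz.
fs_raw, fs_target = 1000, 200
# Dummy 2-second, 62-channel recording standing in for the raw .cnt data.
raw = np.random.default_rng(0).standard_normal((62, fs_raw * 2))

# 0.1-100 Hz band-pass (4th-order Butterworth, applied zero-phase).
b, a = butter(4, [0.1, 100.0], btype="bandpass", fs=fs_raw)
filtered = filtfilt(b, a, raw, axis=-1)

# Down-sample 1000 Hz -> 200 Hz (factor 5).
eeg = resample_poly(filtered, up=1, down=fs_raw // fs_target, axis=-1)
print(eeg.shape)  # 2 s at 200 Hz -> 400 samples per channel
```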

File description:
	EEG_sub01.npy:
		a NumPy array with shape (40, 5, 62, 400): there are 40 classes in the block and 5 two-second video clips for each class. Each corresponding two-second EEG segment has 62 channels and 400 time points (2 s at 200 Hz).

	All_Video_Label.npy:
		a NumPy array with shape (7, 40), indicating the order of the 40 video classes in each of the 7 video blocks.
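	The two arrays above can be combined to pair each EEG segment with its class label. A minimal sketch with synthetic stand-ins of the documented shapes (the real files would be loaded with np.load("EEG_sub01.npy") and np.load("All_Video_Label.npy"); the identity label order below is a placeholder):

	```python
	import numpy as np

	# Stand-ins with the documented shapes.
	eeg = np.zeros((40, 5, 62, 400))                  # first block of subject 1
	labels = np.tile(np.arange(40), (7, 1))           # (7, 40) class order per block

	# EEG_sub01.npy covers the first block, so its 40 class slots follow
	# the class order given by labels[0].
	block, class_slot, clip = 0, 3, 2                 # 4th class slot, 3rd clip
	segment = eeg[class_slot, clip]                   # (62, 400) EEG segment
	class_id = labels[block, class_slot]              # class shown in that slot
	print(segment.shape, class_id)
	```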

	Meta Information Files:
		All_video_color.npy
		All_video_face_apperance.npy
		All_video_human_apperance.npy
		All_video_obj_number.npy
		All_video_optical_flow_score.npy

		Each has shape (7, 200), holding the meta information of every video clip (7 blocks, 200 clips per block).
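	With 40 classes and 5 clips per class, the 200-clip axis of these meta arrays can be viewed per class slot. A sketch on a random stand-in (e.g. for All_video_optical_flow_score.npy); the assumption that the 200 clips enumerate the 40 class slots x 5 clips in order is ours, not stated above:

	```python
	import numpy as np

	# Random stand-in; load the real file with np.load instead.
	meta = np.random.default_rng(0).random((7, 200))

	# Assumed ordering: 200 = 40 class slots x 5 clips, row-major.
	per_class = meta.reshape(7, 40, 5)
	block, class_slot, clip = 0, 3, 2
	score = per_class[block, class_slot, clip]        # same as meta[0, 3 * 5 + 2]
	print(score == meta[0, 17])
	```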

	model.py:
		Implementations of all baselines and our GLMNet model on our EEG-VP benchmark, adapted from NICE-EEG.

	EEG_VP_train_test.py:
		The training and testing implementation on our EEG-VP benchmark.

	40_class_run_metrics.py:
		The code for computing the evaluation metrics, adapted from the MinD-Video code base.

	BLIP_text_10subset.txt:
		Sample text captions generated by the BLIP model for the smallest subset with 10 classes.

	train_batch_tunemultivideo.py:
		The code for fine-tuning video diffusion models, adapted from the Tune-A-Video code base.

	pipeline_tuneeeg2video.py:
		The inference pipeline of our EEG2Video, adapted from the Tune-A-Video code base.

Acknowledgement

We thank the authors of NICE-EEG [1], MinD-Video [2], and Tune-A-Video [3] for open-sourcing their code.
[1] Song Y, Liu B, Li X, et al. Decoding Natural Images from EEG for Object Recognition[C]//The Twelfth International Conference on Learning Representations. 2024.
[2] Chen Z, Qing J, Zhou J H. Cinematic mindscapes: High-quality video reconstruction from brain activity[J]. Advances in Neural Information Processing Systems, 2024, 36.
[3] Wu J Z, Ge Y, Wang X, et al. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 7623-7633.