ckadirt committed
Commit 70dbcfd · verified · 1 Parent(s): 691c902

Add files using upload-large-folder tool

recon_inference.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
wandb/debug-internal.log ADDED
The diff for this file is too large to render. See raw diff
 
wandb/debug.log ADDED
@@ -0,0 +1,132 @@
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_setup.py:_flush():76] Current SDK version is 0.17.2
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_setup.py:_flush():76] Configure stats pid to 20009
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_setup.py:_flush():76] Loading settings from /home/ubuntu/.config/wandb/settings
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_setup.py:_flush():76] Loading settings from /home/ubuntu/real_time_mindEye2/wandb/settings
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_setup.py:_flush():76] Loading settings from environment variables: {}
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False}
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program': '<python with no main file>'}
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_setup.py:_flush():76] Applying login settings: {}
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_init.py:_log_setup():520] Logging user logs to /home/ubuntu/real_time_mindEye2/wandb/run-20250809_153455-sdxl_turbo-MST/logs/debug.log
+ 2025-08-09 15:34:55,969 INFO MainThread:20009 [wandb_init.py:_log_setup():521] Logging internal logs to /home/ubuntu/real_time_mindEye2/wandb/run-20250809_153455-sdxl_turbo-MST/logs/debug-internal.log
+ 2025-08-09 15:34:55,970 INFO MainThread:20009 [wandb_init.py:_jupyter_setup():466] configuring jupyter hooks <wandb.sdk.wandb_init._WandbInit object at 0x7f50b17131d0>
+ 2025-08-09 15:34:55,970 INFO MainThread:20009 [wandb_init.py:init():560] calling init triggers
+ 2025-08-09 15:34:55,970 INFO MainThread:20009 [wandb_init.py:init():567] wandb.init called with sweep_config: {}
+ config: {'model_name': 'sdxl_turbo-MST', 'global_batch_size': 8, 'batch_size': 24, 'num_epochs': 150, 'num_sessions': 0, 'num_params': 119187688, 'clip_scale': 1.0, 'prior_scale': 30.0, 'blur_scale': 0.5, 'use_image_aug': False, 'max_lr': 0.0003, 'mixup_pct': 0.33, 'num_samples_per_epoch': 1138, 'ckpt_interval': 999, 'ckpt_saving': True, 'seed': 0, 'distributed': False, 'num_devices': 1, 'world_size': 1}
+ 2025-08-09 15:34:55,970 INFO MainThread:20009 [wandb_init.py:init():610] starting backend
+ 2025-08-09 15:34:55,970 INFO MainThread:20009 [wandb_init.py:init():614] setting up manager
+ 2025-08-09 15:34:55,973 INFO MainThread:20009 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
+ 2025-08-09 15:34:55,975 INFO MainThread:20009 [wandb_init.py:init():622] backend started and connected
+ 2025-08-09 15:34:55,993 INFO MainThread:20009 [wandb_run.py:_label_probe_notebook():1334] probe notebook
+ 2025-08-09 15:34:55,994 INFO MainThread:20009 [wandb_run.py:_label_probe_notebook():1344] Unable to probe notebook: 'NoneType' object has no attribute 'get'
+ 2025-08-09 15:34:55,994 INFO MainThread:20009 [wandb_init.py:init():711] updated telemetry
+ 2025-08-09 15:34:56,004 INFO MainThread:20009 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout
+ 2025-08-09 15:34:56,545 INFO MainThread:20009 [wandb_run.py:_on_init():2402] communicating current version
+ 2025-08-09 15:34:56,705 INFO MainThread:20009 [wandb_run.py:_on_init():2411] got version response upgrade_message: "wandb version 0.21.1 is available! To upgrade, please run:\n $ pip install wandb --upgrade"
+
+ 2025-08-09 15:34:56,705 INFO MainThread:20009 [wandb_init.py:init():795] starting run threads in backend
+ 2025-08-09 15:34:57,218 INFO MainThread:20009 [wandb_run.py:_console_start():2380] atexit reg
+ 2025-08-09 15:34:57,218 INFO MainThread:20009 [wandb_run.py:_redirect():2235] redirect: wrap_raw
+ 2025-08-09 15:34:57,218 INFO MainThread:20009 [wandb_run.py:_redirect():2300] Wrapping output streams.
+ 2025-08-09 15:34:57,218 INFO MainThread:20009 [wandb_run.py:_redirect():2325] Redirects installed.
+ 2025-08-09 15:34:57,220 INFO MainThread:20009 [wandb_init.py:init():838] run started, returning control to user process
+ 2025-08-09 15:34:57,223 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 15:34:57,224 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 15:34:57,975 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 15:34:57,978 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 15:34:57,979 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 15:34:58,316 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 15:34:58,318 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 15:34:58,318 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 15:34:58,595 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 15:34:58,597 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 15:34:58,597 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 15:34:58,931 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 15:34:59,386 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 15:34:59,386 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 15:34:59,670 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 15:35:13,283 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 15:35:13,283 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 15:47:19,635 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:50,915 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:50,915 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:51,344 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:51,349 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:51,350 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:51,650 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:51,652 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:51,652 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:52,007 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:52,009 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:52,009 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:52,341 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:52,349 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:52,349 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:52,646 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:52,652 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:52,652 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:52,979 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:54,012 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:54,012 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:54,354 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:54,356 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:54,356 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:54,687 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:54,689 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:54,689 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:55,044 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:55,050 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:55,050 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:55,374 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:58,719 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:58,719 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:59,022 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:59,025 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:59,025 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:59,327 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:59,330 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:59,330 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:59,652 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:59,655 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:59,655 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:08:59,957 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:08:59,959 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:08:59,959 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:00,340 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:00,343 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:00,343 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:00,657 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:00,663 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:00,664 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:00,964 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:00,972 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:00,973 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:01,959 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:01,961 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:01,961 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:02,589 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:02,679 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:02,679 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:03,880 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:03,882 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:03,882 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:05,439 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:05,441 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:05,441 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:05,730 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:05,732 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:05,732 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:06,067 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:06,069 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:06,069 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:06,366 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:06,368 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:06,369 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:06,704 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:06,707 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:06,707 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:07,026 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:07,027 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:07,028 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
+ 2025-08-09 16:09:07,328 INFO MainThread:20009 [wandb_init.py:_resume_backend():436] resuming backend
+ 2025-08-09 16:09:07,330 INFO MainThread:20009 [jupyter.py:_save_ipynb():383] looking for notebook: None
+ 2025-08-09 16:09:07,330 INFO MainThread:20009 [wandb_init.py:_pause_backend():431] pausing backend
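The debug.log entries above share a fixed layout: timestamp, level, thread name and PID, a bracketed source location, then the message. A minimal stdlib sketch of a parser for that layout (the helper name `parse_wandb_log_line` is my own, not part of wandb):

```python
import re

# Layout observed in the log above:
# "<timestamp> <LEVEL> <thread>:<pid> [<file>:<func>():<line>] <message>"
LOG_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"(?P<level>[A-Z]+) "
    r"(?P<thread>\w+):(?P<pid>\d+) "
    r"\[(?P<src>[^\]]+)\] "
    r"(?P<msg>.*)$"
)

def parse_wandb_log_line(line: str):
    """Parse one debug.log line into a dict of fields, or None if it doesn't match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

rec = parse_wandb_log_line(
    "2025-08-09 15:34:55,969 INFO MainThread:20009 "
    "[wandb_setup.py:_flush():76] Current SDK version is 0.17.2"
)
```

This is handy for, e.g., measuring the pause/resume gaps between cells by diffing the `ts` fields of consecutive `pausing backend` / `resuming backend` entries.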
wandb/run-20250809_151110-vit-h-MST/files/code/_session_history.ipynb ADDED
@@ -0,0 +1,2365 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "680cb740",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(\"importing modules\")\n",
+ "import os\n",
+ "import sys\n",
+ "import json\n",
+ "import argparse\n",
+ "import numpy as np\n",
+ "import time\n",
+ "import random\n",
+ "import string\n",
+ "import h5py\n",
+ "from tqdm import tqdm\n",
+ "import webdataset as wds\n",
+ "from PIL import Image\n",
+ "import pandas as pd\n",
+ "import nibabel as nib\n",
+ "import nilearn\n",
+ "\n",
+ "import matplotlib.pyplot as plt\n",
+ "import torch\n",
+ "import torch.nn as nn\n",
+ "from torchvision import transforms\n",
+ "\n",
+ "# tf32 data type is faster than standard float32\n",
+ "torch.backends.cuda.matmul.allow_tf32 = True\n",
+ "\n",
+ "import utils\n",
+ "from utils import load_preprocess_betas, resample, applyxfm, apply_thresh, resample_betas\n",
+ "\n",
+ "# imports utils from mindeye_preproc as \"preproc\"\n",
+ "import importlib.util\n",
+ "parent_utils_path = \"/home/ubuntu/mindeye_preproc/analysis/utils.py\" # \"/home/ri4541/mindeye_preproc/analysis/utils.py\" \n",
+ "spec = importlib.util.spec_from_file_location(\"utils\", parent_utils_path)\n",
+ "preproc = importlib.util.module_from_spec(spec)\n",
+ "parent_dir = os.path.dirname(parent_utils_path)\n",
+ "if parent_dir not in sys.path:\n",
+ " sys.path.append(parent_dir)\n",
+ "spec.loader.exec_module(preproc)\n",
+ "\n",
+ "if utils.is_interactive():\n",
+ " from IPython.display import clear_output # function to clear print outputs in cell\n",
+ " %load_ext autoreload \n",
+ " # this allows you to change functions in models.py or utils.py and have this notebook automatically update with your revisions\n",
+ " %autoreload 2 \n",
+ " \n",
+ "seed = utils.get_slurm_seed()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "6213ef9f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if utils.is_interactive():\n",
+ " sub = \"sub-005\"\n",
+ " session = \"all\"\n",
+ " task = 'C' # 'study' or 'A'; used to search for functional run in bids format\n",
+ " func_task_name = 'C'\n",
+ "else:\n",
+ " sub = os.environ[\"SUB\"]\n",
+ " session = os.environ[\"SESSION\"]\n",
+ " task = os.environ[\"TASK\"]\n",
+ " func_task_name = 'C'\n",
+ "\n",
+ "if session == \"all\":\n",
+ " ses_list = [\"ses-01\", \"ses-02\"] # list of actual session IDs\n",
+ " design_ses_list = [\"ses-01\", \"ses-02\"] # list of session IDs to search for design matrix\n",
+ "else:\n",
+ " ses_list = [session]\n",
+ " design_ses_list = [session]\n",
+ " \n",
+ "task_name = f\"_task-{task}\" if task != 'study' else ''\n",
+ "resample_voxel_size = False\n",
+ "resample_post_glmsingle = False # do you want to do voxel resampling here? if resample_voxel_size = True and resample_post_glmsingle = False, assume the resampling has been done prior to GLMsingle, so just use resampled directory but otherwise proceed as normal\n",
+ "load_from_resampled_file = False # do you want to load resampled data from file? if True, assume resampling was done in this notebook before, and that we're not using the GLMsingle resampled data\n",
+ " \n",
+ "train_test_split = 'MST' # 'MST', 'orig', 'unique'\n",
+ "remove_close_to_MST = False\n",
+ "remove_random_n = False\n",
+ "\n",
+ "if remove_close_to_MST or remove_random_n:\n",
+ " assert remove_close_to_MST != remove_random_n # don't remove both sets of images\n",
+ "\n",
+ "n_to_remove = 0\n",
+ "if remove_random_n:\n",
+ " assert train_test_split == 'MST' # MST images are excluded from the n images removed, so only makes sense if they're not in the training set\n",
+ " n_to_remove = 150\n",
+ " \n",
+ "if resample_voxel_size:\n",
+ " # voxel size was unchanged in glmsingle, want to perform resampling here\n",
+ " resampled_vox_size = 2.5\n",
+ " resample_method = \"sinc\" # {trilinear,nearestneighbour,sinc,spline}, credit: https://johnmuschelli.com/fslr/reference/flirt.help.html\n",
+ " \n",
+ " # file name helper variables\n",
+ " vox_dim_str = str(resampled_vox_size).replace('.', '_') # in case the voxel size has a decimal, replace with an underscore\n",
+ " resampled_suffix = f\"resampled_{vox_dim_str}mm_{resample_method}\"\n",
+ " mask_resampled_suffix = resampled_suffix\n",
+ " if resample_post_glmsingle:\n",
+ " resampled_suffix += '_postglmsingle'\n",
+ " else:\n",
+ " resampled_suffix += '_preglmsingle'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "7511be2d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "session_label = preproc.get_session_label(ses_list)\n",
+ "print('session label:', session_label)\n",
+ "n_runs, _ = preproc.get_runs_per_session(sub, session, ses_list)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "d57d05fa",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if utils.is_interactive():\n",
+ " glmsingle_path = f\"/home/ubuntu/glmsingle/glmsingle_{sub}_{session_label}_task-{task}\"\n",
+ "else:\n",
+ " glmsingle_path = os.environ[\"glmsingle_path\"]\n",
+ " \n",
+ "designdir = \"/home/ubuntu/real_time_mindEye2\" #\"/home/ri4541/real_time_mindEye2\"\n",
+ "print(glmsingle_path)\n",
+ "\n",
+ "if resample_voxel_size:\n",
+ " # option 1: we are using original (non-resampled) GLMsingle outputs and doing the resampling here\n",
+ " # option 2: doing resampling pre-GLMsingle and using those outputs; no resampling involved here\n",
+ " if resample_post_glmsingle:\n",
+ " # option 1\n",
+ " orig_glmsingle_path = glmsingle_path\n",
+ " glmsingle_path += f\"_{resampled_suffix}\"\n",
+ " print(\"resampled glmsingle path:\", glmsingle_path)\n",
+ " if load_from_resampled_file:\n",
+ " # resampling is already done; load from file\n",
+ " assert os.path.exists(glmsingle_path) # the new directory must have been created if we reached here\n",
+ " else:\n",
+ " # don't load from file; do resampling here\n",
+ " os.makedirs(glmsingle_path,exist_ok=True)\n",
+ " else:\n",
+ " # option 2\n",
+ " glmsingle_path += f\"_{resampled_suffix}\"\n",
+ " print(\"glmsingle path:\", glmsingle_path)\n",
+ "\n",
+ "assert os.path.exists(glmsingle_path)\n",
+ "print(\"glmsingle path exists!\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "074a6b10",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "data, starts, images, is_new_run, image_names, unique_images, len_unique_images = preproc.load_design_files(\n",
+ " sub=sub,\n",
+ " session=session,\n",
+ " func_task_name=task,\n",
+ " designdir=designdir,\n",
+ " design_ses_list=design_ses_list\n",
+ ")\n",
+ "\n",
+ "if sub == 'sub-001':\n",
+ " if session == 'ses-01':\n",
+ " assert image_names[0] == 'images/image_686_seed_1.png'\n",
+ " elif session in ('ses-02', 'all'):\n",
+ " assert image_names[0] == 'all_stimuli/special515/special_40840.jpg'\n",
+ " elif session == 'ses-03':\n",
+ " assert image_names[0] == 'all_stimuli/special515/special_69839.jpg'\n",
+ " elif session == 'ses-04':\n",
+ " assert image_names[0] == 'all_stimuli/rtmindeye_stimuli/image_686_seed_1.png'\n",
+ "elif sub == 'sub-003':\n",
+ " assert image_names[0] == 'all_stimuli/rtmindeye_stimuli/image_686_seed_1.png'\n",
+ "\n",
+ "unique_images = np.unique(image_names.astype(str))\n",
+ "unique_images = unique_images[(unique_images!=\"nan\")]\n",
+ "len_unique_images = len(unique_images)\n",
+ "print(\"n_runs\",n_runs)\n",
+ "\n",
+ "if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'):\n",
+ " assert len(unique_images) == 851\n",
+ "\n",
+ "print(image_names[:4])\n",
+ "print(starts[:4])\n",
+ "print(is_new_run[:4])\n",
+ "\n",
+ "if remove_random_n:\n",
+ " # want to remove 150 imgs\n",
+ " # 100 special515 imgs are repeated 3x (300 total)\n",
+ " # all other train imgs are only shown once (558 total)\n",
+ " # of the 150, want to sample proportionally since we're cutting all repeats for special515\n",
+ " # so take out 51 (17 unique) from special515 and 99 from rest = removing 150 total\n",
+ " np.random.seed(seed)\n",
+ " options_to_remove = [x for x in set(image_names) if str(x) != 'nan' and x != 'blank.jpg' and 'MST_pairs' not in x and 'special515' not in x and list(image_names).count(x)==1] # all the imgs that only appear once (this is O(N^2) b/c of count() within list comprehension but image_names is a relatively small list)\n",
+ " options_to_remove_special515 = [x for x in set(image_names) if str(x) != 'nan' and x != 'blank.jpg' and 'MST_pairs' not in x and 'special515' in x and list(image_names).count(x)>1] # all the special515 images that are repeated (count()>1 necessary because there are special515 that are not repeated)\n",
+ " imgs_to_remove = np.random.choice(options_to_remove, size=99, replace=False)\n",
+ " imgs_to_remove = np.append(imgs_to_remove, np.random.choice(options_to_remove_special515, size=17, replace=False))\n",
+ "\n",
+ "image_idx = np.array([]) # contains the unique index of each presented image\n",
+ "vox_image_names = np.array([]) # contains the names of the images corresponding to image_idx\n",
+ "all_MST_images = dict()\n",
+ "for i, im in enumerate(image_names):\n",
+ " # skip if blank, nan\n",
+ " if im == \"blank.jpg\":\n",
+ " i+=1\n",
+ " continue\n",
+ " if str(im) == \"nan\":\n",
+ " i+=1\n",
+ " continue\n",
+ " vox_image_names = np.append(vox_image_names, im)\n",
+ " if remove_close_to_MST: # optionally skip close_to_MST images \n",
+ " if \"closest_pairs\" in im:\n",
+ " i+=1\n",
+ " continue\n",
+ " elif remove_random_n:\n",
+ " if im in imgs_to_remove:\n",
+ " i+=1\n",
+ " continue\n",
+ " \n",
+ " image_idx_ = np.where(im==unique_images)[0].item()\n",
+ " image_idx = np.append(image_idx, image_idx_)\n",
+ " \n",
+ " if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'): # MST images are ones that matched these image titles\n",
+ " import re\n",
+ " if ('w_' in im or 'paired_image_' in im or re.match(r'all_stimuli/rtmindeye_stimuli/\\d{1,2}_\\d{1,3}\\.png$', im) or re.match(r'images/\\d{1,2}_\\d{1,3}\\.png$', im)): \n",
+ " # the regexp here looks for **_***.png, allows 1-2 chars before underscore and 1-3 chars after it\n",
+ " # print(im)\n",
+ " all_MST_images[i] = im\n",
+ " i+=1 \n",
+ " elif 'MST' in im:\n",
+ " all_MST_images[i] = im\n",
+ " i+=1\n",
+ " \n",
+ "image_idx = torch.Tensor(image_idx).long()\n",
+ "# for im in new_image_names[MST_images]:\n",
+ "# assert 'MST_pairs' in im\n",
+ "# assert len(all_MST_images) == 300\n",
+ "\n",
+ "unique_MST_images = np.unique(list(all_MST_images.values())) \n",
+ "\n",
+ "MST_ID = np.array([], dtype=int)\n",
+ "if remove_close_to_MST:\n",
+ " close_to_MST_idx = np.array([], dtype=int)\n",
+ "if remove_random_n:\n",
+ " random_n_idx = np.array([], dtype=int)\n",
+ "\n",
+ "vox_idx = np.array([], dtype=int)\n",
+ "j=0 # this is a counter keeping track of the remove_random_n used later to index vox based on the removed images; unused otherwise\n",
+ "for i, im in enumerate(image_names): # need unique_MST_images to be defined, so repeating the same loop structure\n",
+ " # skip if blank, nan\n",
+ " if im == \"blank.jpg\":\n",
+ " i+=1\n",
+ " continue\n",
+ " if str(im) == \"nan\":\n",
+ " i+=1\n",
+ " continue\n",
+ " if remove_close_to_MST: # optionally skip close_to_MST images \n",
+ " if \"closest_pairs\" in im:\n",
+ " close_to_MST_idx = np.append(close_to_MST_idx, i)\n",
+ " i+=1\n",
+ " continue\n",
+ " if remove_random_n:\n",
+ " if im in imgs_to_remove:\n",
+ " vox_idx = np.append(vox_idx, j)\n",
+ " i+=1\n",
+ " j+=1\n",
+ " continue\n",
+ " j+=1\n",
+ " curr = np.where(im == unique_MST_images)\n",
+ " # print(curr)\n",
+ " if curr[0].size == 0:\n",
+ " MST_ID = np.append(MST_ID, np.array(len(unique_MST_images))) # add a value that should be out of range based on the for loop, will index it out later\n",
+ " else:\n",
+ " MST_ID = np.append(MST_ID, curr)\n",
+ " \n",
+ "assert len(MST_ID) == len(image_idx)\n",
+ "# assert len(np.argwhere(pd.isna(data['current_image']))) + len(np.argwhere(data['current_image'] == 'blank.jpg')) + len(image_idx) == len(data)\n",
+ "# MST_ID = torch.tensor(MST_ID[MST_ID != len(unique_MST_images)], dtype=torch.uint8) # torch.tensor (lowercase) allows dtype kwarg, Tensor (uppercase) is an alias for torch.FloatTensor\n",
+ "print(MST_ID.shape)\n",
+ "if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'):\n",
+ " assert len(all_MST_images) == 100"
+ ]
+ },
300
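The MST test above hinges on a filename regex. A minimal sketch with made-up filenames (the pattern itself is the `images/` variant used in the cell) shows exactly which names `re.match` accepts:

```python
import re

# The cell above treats a trial as an MST image when its filename matches
# this pattern: 1-2 digits, an underscore, 1-3 digits, then ".png".
pattern = re.compile(r'images/\d{1,2}_\d{1,3}\.png$')

names = [
    "images/1_2.png",      # 1 digit / 1 digit -> matches
    "images/12_345.png",   # 2 digits / 3 digits -> matches
    "images/123_4.png",    # 3 digits before the underscore -> no match
    "images/1_2.jpeg",     # wrong extension -> no match
]
matches = [bool(pattern.match(n)) for n in names]
print(matches)  # [True, True, False, False]
```

Note that `re.match` anchors only at the start of the string, so the trailing `$` is what rejects longer suffixes.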
+ {
301
+ "cell_type": "code",
302
+ "execution_count": 6,
303
+ "id": "4af150a8",
304
+ "metadata": {},
305
+ "outputs": [],
306
+ "source": [
307
+ "import imageio.v2 as imageio\n",
308
+ "resize_transform = transforms.Resize((224, 224))\n",
309
+ "MST_images = []\n",
310
+ "images = None\n",
311
+ "for im_name in tqdm(image_idx):\n",
312
+ " if sub == 'sub-001' and session == 'ses-01':\n",
313
+ " image_file = f\"all_stimuli/rtmindeye_stimuli/{unique_images[im_name]}\"\n",
314
+ " else:\n",
315
+ " image_file = f\"{unique_images[im_name]}\"\n",
316
+ " im = imageio.imread(image_file)\n",
317
+ " im = torch.Tensor(im / 255).permute(2,0,1)\n",
318
+ " im = resize_transform(im.unsqueeze(0))\n",
319
+ " if images is None:\n",
320
+ " images = im\n",
321
+ " else:\n",
322
+ " images = torch.vstack((images, im))\n",
323
+ " if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'):\n",
324
+ " if ('w_' in image_file or 'paired_image_' in image_file or re.match(r'all_stimuli/rtmindeye_stimuli/\\d{1,2}_\\d{1,3}\\.png$', image_file) or re.match(r'all_stimuli/rtmindeye_stimuli/images/\\d{1,2}_\\d{1,3}\\.png$', image_file)): \n",
325
+ " MST_images.append(True)\n",
326
+ " else:\n",
327
+ " MST_images.append(False)\n",
328
+ " else: \n",
329
+ " if (\"MST_pairs\" in image_file): # (\"_seed_\" not in unique_images[im_name]) and (unique_images[im_name] != \"blank.jpg\") \n",
330
+ " MST_images.append(True)\n",
331
+ " else:\n",
332
+ " MST_images.append(False)\n",
333
+ "\n",
334
+ "print(\"images\", images.shape)\n",
335
+ "MST_images = np.array(MST_images)\n",
336
+ "print(\"MST_images\", len(MST_images))\n",
337
+ "if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'):\n",
338
+ " assert len(MST_images[MST_images==True]) == 100\n",
339
+ "print(\"MST_images==True\", len(MST_images[MST_images==True]))"
340
+ ]
341
+ },
342
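The image-loading loop above scales pixel values to [0, 1] and moves the channel axis first before resizing. A NumPy-only sketch of that step (a synthetic 4x6 RGB array stands in for `imageio.imread` output; `transpose` mirrors torch's `permute`):

```python
import numpy as np

# Stand-in for imageio.imread output: a 4x6 RGB image of uint8 values.
im = np.arange(4 * 6 * 3, dtype=np.uint8).reshape(4, 6, 3)

# Mirror torch.Tensor(im / 255).permute(2, 0, 1) from the cell above:
# scale to [0, 1], then move the channel axis first (HWC -> CHW).
chw = (im / 255.0).transpose(2, 0, 1)
print(chw.shape)  # (3, 4, 6)
```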
+ {
343
+ "cell_type": "code",
344
+ "execution_count": 7,
345
+ "id": "4937263a",
346
+ "metadata": {},
347
+ "outputs": [],
348
+ "source": [
349
+ "# want IDs of pairmates based on MST_images\n",
350
+ "# create \"MST_pairmates\" which is a 25x2 array with indices of the 25 pairs based on MST_images == True\n",
351
+ "\n",
352
+ "assert unique_MST_images.shape[0] % 2 == 0 # make sure it's divisible by 2\n",
353
+ "MST_pairmate_names = unique_MST_images.reshape(int(unique_MST_images.shape[0]/2),2)\n",
354
+ "# print(MST_pairmate_names)\n",
355
+ "\n",
356
+ "MST_pairmate_indices = np.empty(shape=MST_pairmate_names.shape, dtype=int)\n",
357
+ "for p, pair in enumerate(MST_pairmate_names):\n",
358
+ " for i, im in enumerate(pair):\n",
359
+ " MST_pairmate_indices[p][i] = np.where(np.isin(list(all_MST_images.values()), im))[0][0] # just take the first repeated instance of an image\n",
360
+ " \n",
361
+ "print(MST_pairmate_indices.shape, MST_pairmate_indices)"
362
+ ]
363
+ },
364
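The pairmate cell relies on `np.unique` returning a sorted array, so consecutive entries are assumed to be pairmates, and on taking the first repeated occurrence of each image. A small sketch with hypothetical names:

```python
import numpy as np

# Hypothetical sorted unique MST names; np.unique returns a sorted array,
# so consecutive entries are assumed to be pairmates, as the reshape relies on.
unique_MST = np.array(["pair1_a.png", "pair1_b.png", "pair2_a.png", "pair2_b.png"])
assert unique_MST.shape[0] % 2 == 0
pairmate_names = unique_MST.reshape(-1, 2)

# First presentation index of each pairmate within a (made-up) trial list.
trial_names = ["pair2_a.png", "pair1_a.png", "pair1_b.png", "pair2_a.png", "pair2_b.png"]
first_idx = np.array([[np.where(np.isin(trial_names, im))[0][0] for im in pair]
                      for pair in pairmate_names])
print(first_idx.tolist())  # [[1, 2], [0, 4]]
```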
+ {
365
+ "cell_type": "code",
366
+ "execution_count": 8,
367
+ "id": "108a3210",
368
+ "metadata": {},
369
+ "outputs": [],
370
+ "source": [
371
+ "if (sub == 'sub-001' and session in ('ses-02', 'ses-03', 'all')):\n",
372
+ " # MST_pairs contains the indices of repeats based on all_MST_images\n",
373
+ " # all_MST_images contains the indices of images from image_names\n",
374
+ " MST_pairs = utils.find_paired_indices(torch.tensor(MST_ID))\n",
375
+ " MST_pairs = np.array(sorted(MST_pairs[:-1], key=lambda x: x[0])) # we added a fake value as a placeholder so index out the last group of pairs\n",
376
+ "\n",
377
+ " # assert images[MST_pairs]\n",
378
+ "\n",
379
+ " fig, ax = plt.subplots(1, 3, figsize=(10,4))\n",
380
+ " fig.suptitle('Sample MST pairs')\n",
381
+ "\n",
382
+ " ax[0].imshow(images[MST_pairs[-1][0]].permute(1,2,0).numpy())\n",
383
+ " ax[0].set_title(f\"Trial 0\")\n",
384
+ "\n",
385
+ " ax[1].imshow(images[MST_pairs[-1][1]].permute(1,2,0).numpy())\n",
386
+ " ax[1].set_title(f\"Trial 1\")\n",
387
+ "\n",
388
+ " ax[2].imshow(images[MST_pairs[-1][2]].permute(1,2,0).numpy())\n",
389
+ " ax[2].set_title(f\"Trial 2\")\n",
390
+ "\n",
391
+ " plt.setp(ax, xticks=[], yticks=[])\n",
392
+ " plt.tight_layout()\n",
393
+ " plt.show()"
394
+ ]
395
+ },
396
+ {
397
+ "cell_type": "code",
398
+ "execution_count": 9,
399
+ "id": "d502b890",
400
+ "metadata": {},
401
+ "outputs": [],
402
+ "source": [
403
+ "# pairs has the indices of all repeated images\n",
404
+ "pairs = utils.find_paired_indices(image_idx)\n",
405
+ "pairs = sorted(pairs, key=lambda x: x[0])\n",
406
+ "\n",
407
+ "fig, axes = plt.subplots(1, 3, figsize=(6, 2)) # 1 row, 3 columns\n",
408
+ "for i, ax in enumerate(axes):\n",
409
+ " ax.imshow(images[i].permute(1, 2, 0).numpy())\n",
410
+ " ax.set_title(f\"Trial {i}\")\n",
411
+ " ax.axis(\"off\") # Hide axes for better visualization\n",
412
+ "\n",
413
+ "plt.tight_layout()\n",
414
+ "# output_path = os.path.join(output_dir, \"trials_plot.png\")\n",
415
+ "# plt.savefig(output_path, dpi=300) # Save figure\n",
416
+ "plt.show()"
417
+ ]
418
+ },
419
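`utils.find_paired_indices` is not shown in this notebook; a plausible minimal version (an assumption, not the actual implementation) groups the positions of every repeated ID, matching how `pairs` is post-processed above:

```python
from collections import defaultdict

def find_paired_indices(ids):
    """A guess at utils.find_paired_indices: group the positions of every
    ID that occurs more than once, in order of first appearance."""
    positions = defaultdict(list)
    for pos, i in enumerate(ids):
        positions[i].append(pos)
    return [p for p in positions.values() if len(p) > 1]

pairs = find_paired_indices([5, 3, 5, 7, 3, 5])
pairs = sorted(pairs, key=lambda x: x[0])  # same post-processing as the cell above
print(pairs)  # [[0, 2, 5], [1, 4]]
```

Note the groups can have different lengths (two or more repeats), which is why downstream code builds `pairs_homog` from just the first two entries of each group.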
+ {
420
+ "cell_type": "code",
421
+ "execution_count": 10,
422
+ "id": "cfc6a1f4",
423
+ "metadata": {},
424
+ "outputs": [],
425
+ "source": [
426
+ "p=0\n",
427
+ "\n",
428
+ "# plot 2 repeats (anything in pairs should have 2 repeats, even if there's more)\n",
429
+ "fig, ax = plt.subplots(1, 2, figsize=(10,8))\n",
430
+ "\n",
431
+ "ax[0].imshow(images[pairs[p][0]].permute(1,2,0).numpy())\n",
432
+ "ax[0].set_title(f\"Repeat 1\")\n",
433
+ "\n",
434
+ "ax[1].imshow(images[pairs[p][1]].permute(1,2,0).numpy())\n",
435
+ "ax[1].set_title(f\"Repeat 2\")\n",
436
+ "\n",
437
+ "plt.setp(ax, xticks=[], yticks=[])\n",
438
+ "plt.tight_layout()\n",
439
+ "plt.show()"
440
+ ]
441
+ },
442
+ {
443
+ "cell_type": "code",
444
+ "execution_count": 11,
445
+ "id": "c5fe984b",
446
+ "metadata": {},
447
+ "outputs": [],
448
+ "source": [
449
+ "def get_image_pairs(sub, session, func_task_name, designdir):\n",
450
+ " \"\"\"Loads design files and processes image pairs for a given session.\"\"\"\n",
451
+ " _, _, _, _, image_names, unique_images, _ = preproc.load_design_files(\n",
452
+ " sub=sub,\n",
453
+ " session=session,\n",
454
+ " func_task_name=func_task_name,\n",
455
+ " designdir=designdir,\n",
456
+ " design_ses_list=[session] # Ensure it's a list\n",
457
+ " )\n",
458
+ " return utils.process_images(image_names, unique_images)"
459
+ ]
460
+ },
461
+ {
462
+ "cell_type": "code",
463
+ "execution_count": 12,
464
+ "id": "f759b5d3",
465
+ "metadata": {},
466
+ "outputs": [],
467
+ "source": [
468
+ "from collections import defaultdict\n",
469
+ "\n",
470
+ "all_dicts = []\n",
471
+ "for s_idx, s in enumerate(ses_list):\n",
472
+ " im, vo, _ = get_image_pairs(sub, s, func_task_name, designdir)\n",
473
+ " assert len(im) == len(vo)\n",
474
+ " all_dicts.append({k:v for k,v in enumerate(vo)})\n",
475
+ "\n",
476
+ "# for the train set (ses-01-02 non-MST)\n",
477
+ "image_to_indices = defaultdict(lambda: [[] for _ in range(len(ses_list))])\n",
478
+ "for ses_idx, idx_to_name in enumerate(all_dicts):\n",
479
+ " for idx, name in idx_to_name.items():\n",
480
+ " image_to_indices[name][ses_idx].append(idx)\n",
481
+ " \n",
482
+ "image_to_indices = dict(image_to_indices)\n",
483
+ "\n",
484
+ "# for the test set (ses-03)\n",
485
+ "# test_image_to_indices = defaultdict(lambda: [[] for _ in range(len([ses_list[-1]]))])\n",
486
+ "# for ses_idx, idx_to_name in enumerate([all_dicts[-1]]):\n",
487
+ "# for idx, name in idx_to_name.items():\n",
488
+ "# test_image_to_indices[name][ses_idx].append(idx)\n",
489
+ " \n",
490
+ "# test_image_to_indices = dict(test_image_to_indices)\n",
491
+ "\n",
492
+ "if sub == 'sub-005' and len(ses_list) > 1:\n",
493
+ " session_length = 693\n",
494
+ " for image, session_indices_list in image_to_indices.items():\n",
495
+ " new_indices_list = []\n",
496
+ " for idx, indices in enumerate(session_indices_list):\n",
497
+ " offset = idx * session_length\n",
498
+ " new_indices = [i + offset for i in indices]\n",
499
+ " new_indices_list.append(new_indices)\n",
500
+ " image_to_indices[image] = new_indices_list\n",
501
+ " \n",
502
+ " import itertools\n",
503
+ " assert max(itertools.chain.from_iterable(list(image_to_indices.values())))[0] == (len(ses_list)*session_length) - 1"
504
+ ]
505
+ },
506
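The `image_to_indices` construction above, plus the per-session offsetting, can be sketched end-to-end with two tiny hypothetical sessions (2 trials per session here instead of the notebook's `session_length = 693`):

```python
from collections import defaultdict

# Two hypothetical sessions, each mapping trial index -> image name.
all_dicts = [{0: "a.png", 1: "b.png"}, {0: "b.png", 1: "c.png"}]

# Build image -> per-session trial indices, as in the cell above.
image_to_indices = defaultdict(lambda: [[] for _ in range(len(all_dicts))])
for ses_idx, idx_to_name in enumerate(all_dicts):
    for idx, name in idx_to_name.items():
        image_to_indices[name][ses_idx].append(idx)
image_to_indices = dict(image_to_indices)

# Offset later sessions so indices address the concatenated voxel array.
session_length = 2
for name, per_ses in image_to_indices.items():
    image_to_indices[name] = [[i + s * session_length for i in idxs]
                              for s, idxs in enumerate(per_ses)]
print(image_to_indices["b.png"])  # [[1], [2]]
```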
+ {
507
+ "cell_type": "code",
508
+ "execution_count": 13,
509
+ "id": "2be1079a",
510
+ "metadata": {},
511
+ "outputs": [],
512
+ "source": [
513
+ "if resample_voxel_size:\n",
514
+ " from nilearn.masking import apply_mask, unmask\n",
515
+ " ref_name = f'{glmsingle_path}/boldref_resampled.nii.gz'\n",
516
+ " omat_name = f'{glmsingle_path}/boldref_omat'"
517
+ ]
518
+ },
519
+ {
520
+ "cell_type": "code",
521
+ "execution_count": 14,
522
+ "id": "28bf7f64",
523
+ "metadata": {},
524
+ "outputs": [],
525
+ "source": [
526
+ "from nilearn.plotting import plot_roi\n",
527
+ "\n",
528
+ "print('loading brain mask')\n",
529
+ "avg_mask = nib.load(f'{orig_glmsingle_path}/glmsingle_sub-005_task-C/sub-005_final_brain.nii.gz')\n",
530
+ "final_mask = nib.load(f'{orig_glmsingle_path}/glmsingle_sub-005_task-C/sub-005_final_mask.nii.gz')\n",
531
+ "\n",
532
+ "# mask info\n",
533
+ "dimsize=avg_mask.header.get_zooms()\n",
534
+ "affine_mat = avg_mask.affine\n",
535
+ "brain=avg_mask.get_fdata()\n",
536
+ "xyz=brain.shape #xyz dimensionality of brain mask and epi data\n",
537
+ "\n",
538
+ "print('Mask dimensions:', dimsize)\n",
539
+ "print('')\n",
540
+ "print('Affine:')\n",
541
+ "print(affine_mat)\n",
542
+ "print('')\n",
543
+ "print(f'There are {int(np.sum(brain))} voxels in the included brain mask\\n')\n",
544
+ "\n",
545
+ "plot_roi(final_mask, bg_img=avg_mask)\n",
546
+ "plt.show()"
547
+ ]
548
+ },
549
+ {
550
+ "cell_type": "code",
551
+ "execution_count": 15,
552
+ "id": "ca124946",
553
+ "metadata": {},
554
+ "outputs": [],
555
+ "source": [
556
+ "glm_single_path"
557
+ ]
558
+ },
559
+ {
560
+ "cell_type": "code",
561
+ "execution_count": 16,
562
+ "id": "844c2b1f",
563
+ "metadata": {},
564
+ "outputs": [
565
+ {
566
+ "name": "stdout",
567
+ "output_type": "stream",
568
+ "text": [
569
+ "'/home/ubuntu/glmsingle/glmsingle_sub-005_ses-01-02_task-C'"
570
+ ]
571
+ }
572
+ ],
573
+ "source": [
574
+ "glmsingle_path"
575
+ ]
576
+ },
577
+ {
578
+ "cell_type": "code",
579
+ "execution_count": 17,
580
+ "id": "fee56ca8",
581
+ "metadata": {},
582
+ "outputs": [],
583
+ "source": [
584
+ "base_glm_single_path = os.environ[\"glmsingle_path\"]\n",
585
+ "base_glm_single_path"
586
+ ]
587
+ },
588
+ {
589
+ "cell_type": "code",
590
+ "execution_count": 18,
591
+ "id": "610317a3",
592
+ "metadata": {},
593
+ "outputs": [],
594
+ "source": [
595
+ "# take all paths exept last dir\n",
596
+ "base_glm_single_path = glmsingle_path.split('/')[:-1]\n",
597
+ "base_glm_single_path = '/'.join(base_glm_single_path)"
598
+ ]
599
+ },
600
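The split/join in the cell above is equivalent to `os.path.dirname`, which handles the same case in one call:

```python
import os

glmsingle_path = '/home/ubuntu/glmsingle/glmsingle_sub-005_ses-01-02_task-C'

# Same result as '/'.join(glmsingle_path.split('/')[:-1]) in the cell above.
base_glm_single_path = os.path.dirname(glmsingle_path)
print(base_glm_single_path)  # /home/ubuntu/glmsingle
```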
+ {
601
+ "cell_type": "code",
602
+ "execution_count": 19,
603
+ "id": "82cae662",
604
+ "metadata": {},
605
+ "outputs": [],
606
+ "source": [
607
+ "from nilearn.plotting import plot_roi\n",
608
+ "\n",
609
+ "print('loading brain mask')\n",
610
+ "avg_mask = nib.load(f'{base_glm_single_path}/glmsingle_sub-005_task-C/sub-005_final_brain.nii.gz')\n",
611
+ "final_mask = nib.load(f'{base_glm_single_path}/glmsingle_sub-005_task-C/sub-005_final_mask.nii.gz')\n",
612
+ "\n",
613
+ "# mask info\n",
614
+ "dimsize=avg_mask.header.get_zooms()\n",
615
+ "affine_mat = avg_mask.affine\n",
616
+ "brain=avg_mask.get_fdata()\n",
617
+ "xyz=brain.shape #xyz dimensionality of brain mask and epi data\n",
618
+ "\n",
619
+ "print('Mask dimensions:', dimsize)\n",
620
+ "print('')\n",
621
+ "print('Affine:')\n",
622
+ "print(affine_mat)\n",
623
+ "print('')\n",
624
+ "print(f'There are {int(np.sum(brain))} voxels in the included brain mask\\n')\n",
625
+ "\n",
626
+ "plot_roi(final_mask, bg_img=avg_mask)\n",
627
+ "plt.show()"
628
+ ]
629
+ },
630
+ {
631
+ "cell_type": "code",
632
+ "execution_count": 20,
633
+ "id": "e6d4d01a",
634
+ "metadata": {},
635
+ "outputs": [],
636
+ "source": [
637
+ "# # create union of ses-01 and ses-02 reliability masks and plot against avg_mask \n",
638
+ "# rel_masks = []\n",
639
+ "# rel_masks.append(np.load('/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/glmsingle_sub-005_task-C/rel_mask_from_ses-01_to_ses-03.npy'))\n",
640
+ "# rel_masks.append(np.load('/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/glmsingle_sub-005_task-C/rel_mask_from_ses-02_to_ses-03.npy'))\n",
641
+ "# rel_masks = np.array(rel_masks)\n",
642
+ "# for r in rel_masks:\n",
643
+ "# assert r.shape[0] == int(final_mask.get_fdata().sum())\n",
644
+ "# assert r.dtype == bool\n",
645
+ " \n",
646
+ "# assert len(rel_masks) == 2 # should be the case if there's 2 training sessions\n",
647
+ "# union_mask = np.logical_or(rel_masks[0], rel_masks[1])\n",
648
+ "# assert union_mask.sum() > rel_masks[0].sum()\n",
649
+ "# assert union_mask.sum() > rel_masks[1].sum()\n",
650
+ "# print(f'there are {union_mask.sum()} reliable voxels based on the union mask out of {int(final_mask.get_fdata().sum())} voxels in the nsdgeneral roi')\n",
651
+ "# print(f'{(union_mask.sum() / int(final_mask.get_fdata().sum())):.2%} of the voxels in the roi were selected')\n",
652
+ "# path = f'/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/glmsingle_sub-005_task-C/union_mask_from_{session_label}.npy'\n",
653
+ "path = f'{base_glm_single_path}/glmsingle_sub-005_task-C/union_mask_from_ses-01-02.npy'\n",
654
+ "# np.save(f'/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/glmsingle_sub-005_task-C/union_mask_from_{session_label}.npy', union_mask)\n",
655
+ "# print(f'saved union mask to {path}!')\n",
656
+ "union_mask = np.load(path)"
657
+ ]
658
+ },
659
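The commented-out code above builds the union mask as a logical OR of per-session boolean reliability masks. A small sketch with made-up masks over six voxels:

```python
import numpy as np

# Two hypothetical boolean reliability masks over the same six voxels.
rel_masks = np.array([
    [True, False, True, False, False, True],
    [True, True, False, False, False, False],
])

# Union of reliable voxels, as in the commented-out code above.
union_mask = np.logical_or(rel_masks[0], rel_masks[1])
print(union_mask.sum(), f"{union_mask.sum() / union_mask.size:.2%}")  # 4 66.67%
```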
+ {
660
+ "cell_type": "code",
661
+ "execution_count": 21,
662
+ "id": "8f372fed",
663
+ "metadata": {},
664
+ "outputs": [],
665
+ "source": [
666
+ "ses_mask = []\n",
667
+ "\n",
668
+ "for s in ses_list:\n",
669
+ " ses_mask_path = f'{base_glm_single_path}/glmsingle_sub-005_{s}_task-C/sub-005_{s}_task-C_brain.nii.gz'\n",
670
+ " ses_mask.append(nib.load(ses_mask_path))\n",
671
+ " \n",
672
+ " assert np.all(ses_mask[-1].affine == final_mask.affine)\n",
673
+ " assert np.all(ses_mask[-1].shape == final_mask.shape)"
674
+ ]
675
+ },
676
+ {
677
+ "cell_type": "code",
678
+ "execution_count": 22,
679
+ "id": "36d2591a",
680
+ "metadata": {},
681
+ "outputs": [],
682
+ "source": [
683
+ "ses_vox = []\n",
684
+ "vox = None\n",
685
+ "needs_postprocessing = False\n",
686
+ "params = (session, ses_list, remove_close_to_MST, image_names, remove_random_n, vox_idx)\n",
687
+ "\n",
688
+ "if resample_post_glmsingle == True:\n",
689
+ " glm_save_path_resampled = f\"{glmsingle_path}/vox_resampled.nii.gz\"\n",
690
+ " if load_from_resampled_file == True:\n",
691
+ " # resampling was done in this notebook so we can load from file\n",
692
+ " vox = nib.load(glm_save_path_resampled)\n",
693
+ " else:\n",
694
+ " # do resampling here\n",
695
+ " assert os.path.exists(ref_name) and os.path.exists(omat_name), \"need to generate the boldref and omat separately since we don't have access to the functional data here; either do so using flirt on the command line or copy over the glmsingle resampled outputs\"\n",
696
+ " vox = load_preprocess_betas(orig_glmsingle_path, *params)\n",
697
+ " vox = resample_betas(orig_glmsingle_path, sub, session, task_name, vox, glmsingle_path, glm_save_path_resampled, ref_name, omat_name)\n",
698
+ " needs_postprocessing = True\n",
699
+ "\n",
700
+ "if vox is None: \n",
701
+ " for i, s in enumerate(ses_list):\n",
702
+ " # either resampling was done in glmsingle or we aren't resampling \n",
703
+ " ses_vox_path = f'{glmsingle_path}/glmsingle_sub-005_{s}_task-C'\n",
704
+ " assert os.path.exists(ses_vox_path)\n",
705
+ " ses_vox.append(load_preprocess_betas(ses_vox_path, *params))\n",
706
+ " v = nilearn.masking.unmask(ses_vox[i], ses_mask[i])\n",
707
+ " ses_vox[i] = nilearn.masking.apply_mask(v, final_mask)\n",
708
+ " vox = np.concatenate(ses_vox)\n",
709
+ " print(\"applied final brain mask\")\n",
710
+ " print(vox.shape)\n",
711
+ " vox = vox[:, union_mask]\n",
712
+ " print(\"applied union roi mask\")\n",
713
+ " print(vox.shape)\n",
714
+ " \n",
715
+ " \n",
716
+ "if needs_postprocessing == True:\n",
717
+ " vox = apply_mask(vox, avg_mask)\n",
718
+ " vox = vox.reshape(-1, vox.shape[-1]) # flatten the 3D image into np array with shape (voxels, images)\n",
719
+ " print(vox.shape)\n",
720
+ "\n",
721
+ "assert len(vox) == len(image_idx)"
722
+ ]
723
+ },
724
+ {
725
+ "cell_type": "code",
726
+ "execution_count": 23,
727
+ "id": "5aca9065",
728
+ "metadata": {},
729
+ "outputs": [],
730
+ "source": [
731
+ "ses_vox = []\n",
732
+ "vox = None\n",
733
+ "needs_postprocessing = False\n",
734
+ "params = (session, ses_list, remove_close_to_MST, image_names, remove_random_n, vox_idx)\n",
735
+ "\n",
736
+ "if resample_post_glmsingle == True:\n",
737
+ " glm_save_path_resampled = f\"{glmsingle_path}/vox_resampled.nii.gz\"\n",
738
+ " if load_from_resampled_file == True:\n",
739
+ " # resampling was done in this notebook so we can load from file\n",
740
+ " vox = nib.load(glm_save_path_resampled)\n",
741
+ " else:\n",
742
+ " # do resampling here\n",
743
+ " assert os.path.exists(ref_name) and os.path.exists(omat_name), \"need to generate the boldref and omat separately since we don't have access to the functional data here; either do so using flirt on the command line or copy over the glmsingle resampled outputs\"\n",
744
+ " vox = load_preprocess_betas(orig_glmsingle_path, *params)\n",
745
+ " vox = resample_betas(orig_glmsingle_path, sub, session, task_name, vox, glmsingle_path, glm_save_path_resampled, ref_name, omat_name)\n",
746
+ " needs_postprocessing = True\n",
747
+ "\n",
748
+ "if vox is None: \n",
749
+ " for i, s in enumerate(ses_list):\n",
750
+ " # either resampling was done in glmsingle or we aren't resampling \n",
751
+ " ses_vox_path = f'{base_glm_single_path}/glmsingle_sub-005_{s}_task-C'\n",
752
+ " assert os.path.exists(ses_vox_path)\n",
753
+ " ses_vox.append(load_preprocess_betas(ses_vox_path, *params))\n",
754
+ " v = nilearn.masking.unmask(ses_vox[i], ses_mask[i])\n",
755
+ " ses_vox[i] = nilearn.masking.apply_mask(v, final_mask)\n",
756
+ " vox = np.concatenate(ses_vox)\n",
757
+ " print(\"applied final brain mask\")\n",
758
+ " print(vox.shape)\n",
759
+ " vox = vox[:, union_mask]\n",
760
+ " print(\"applied union roi mask\")\n",
761
+ " print(vox.shape)\n",
762
+ " \n",
763
+ " \n",
764
+ "if needs_postprocessing == True:\n",
765
+ " vox = apply_mask(vox, avg_mask)\n",
766
+ " vox = vox.reshape(-1, vox.shape[-1]) # flatten the 3D image into np array with shape (voxels, images)\n",
767
+ " print(vox.shape)\n",
768
+ "\n",
769
+ "assert len(vox) == len(image_idx)"
770
+ ]
771
+ },
772
+ {
773
+ "cell_type": "code",
774
+ "execution_count": 24,
775
+ "id": "a8e1b076",
776
+ "metadata": {},
777
+ "outputs": [],
778
+ "source": [
779
+ "# # get vox into the same shape as the union mask\n",
780
+ "# v = nilearn.masking.unmask(vox, ses_mask) # move back to 3D based on own session mask\n",
781
+ "# final_mask = nilearn.masking.intersect_masks([avg_mask, roi])\n",
782
+ "# vox = nilearn.masking.apply_mask(vox, final_mask) # re-flatten based on final mask so everything is in the same shape now\n",
783
+ "# print(vox.shape)"
784
+ ]
785
+ },
786
+ {
787
+ "cell_type": "code",
788
+ "execution_count": 25,
789
+ "id": "c309fabe",
790
+ "metadata": {},
791
+ "outputs": [],
792
+ "source": [
793
+ "pairs_homog = np.array([[p[0], p[1]] for p in pairs])"
794
+ ]
795
+ },
796
+ {
797
+ "cell_type": "code",
798
+ "execution_count": 26,
799
+ "id": "04d838b7",
800
+ "metadata": {},
801
+ "outputs": [],
802
+ "source": [
803
+ "same_corrs = []\n",
804
+ "diff_corrs = []\n",
805
+ "for isamp, samp in enumerate(vox[pairs_homog]):\n",
806
+ " avg_same_img = []\n",
807
+ " for i in range(samp.shape[0]):\n",
808
+ " for j in range(i, samp.shape[0]):\n",
809
+ " if i != j:\n",
810
+ " avg_same_img.append(np.array([np.corrcoef(samp[i, :], samp[j, :])[0,1]]))\n",
811
+ " \n",
812
+ " same_corrs.append(np.mean(avg_same_img))\n",
813
+ " \n",
814
+ " avg_diff_img = []\n",
815
+ " for isamp_j, samp_j in enumerate(vox[pairs_homog]):\n",
816
+ " if isamp_j != isamp:\n",
817
+ " for i in range(samp_j.shape[0]):\n",
818
+ " for j in range(i, samp_j.shape[0]):\n",
819
+ " if i != j:\n",
820
+ " avg_diff_img.append(np.array([np.corrcoef(samp[i, :], samp_j[j, :])[0,1]]))\n",
821
+ " \n",
822
+ " # print(len(avg_diff_img))\n",
823
+ " diff_corrs.append(np.mean(avg_diff_img))\n",
824
+ "\n",
825
+ "\n",
826
+ "print(len(same_corrs), len(diff_corrs))\n",
827
+ "same_corrs = np.array(same_corrs)\n",
828
+ "diff_corrs = np.array(diff_corrs)\n",
829
+ "\n",
830
+ "\n",
831
+ "plt.figure(figsize=(5,4))\n",
832
+ "plt.title(f\"{sub}_{session} same/diff Pearson corr.\")\n",
833
+ "plt.plot(np.sort(same_corrs),c='blue',label='same')\n",
834
+ "plt.plot(np.sort(diff_corrs),c='cyan',label='diff')\n",
835
+ "plt.axhline(0,c='k',ls='--')\n",
836
+ "plt.legend()\n",
837
+ "plt.xlabel(\"sample\")\n",
838
+ "plt.ylabel(\"Pearson R\")\n",
839
+ "plt.show()"
840
+ ]
841
+ },
842
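The same/diff reliability analysis above can be demonstrated on synthetic data: build a few "image" patterns, add noise to make two repeats of each, and compare within-image to between-image Pearson correlations (data here are entirely made up):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=(3, 50))               # three "image" response patterns
vox_pairs = np.stack([np.stack([s + 0.1 * rng.normal(size=50) for _ in range(2)])
                      for s in signal])         # two noisy repeats per image

same_corrs, diff_corrs = [], []
for a, samp_a in enumerate(vox_pairs):
    same_corrs.append(np.corrcoef(samp_a[0], samp_a[1])[0, 1])
    for b, samp_b in enumerate(vox_pairs):
        if a != b:
            diff_corrs.append(np.corrcoef(samp_a[0], samp_b[1])[0, 1])

# Repeats of the same image should correlate far better than different images.
print(np.mean(same_corrs) > np.mean(diff_corrs))  # True
```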
+ {
843
+ "cell_type": "code",
844
+ "execution_count": 27,
845
+ "id": "3ddc8bdb",
846
+ "metadata": {},
847
+ "outputs": [],
848
+ "source": [
849
+ "vox_pairs = utils.zscore(vox[pairs_homog])\n",
850
+ "plt.figure(figsize=(5,4))\n",
851
+ "plt.title(f\"{sub}_{session} same minus diff difference Pearson corr.\")\n",
852
+ "plt.plot(np.sort(same_corrs) - np.sort(diff_corrs),c='cyan',label='difference')\n",
853
+ "plt.axhline(0,c='k',ls='--')\n",
854
+ "plt.legend()\n",
855
+ "plt.xlabel(\"sample\")\n",
856
+ "plt.ylabel(\"Pearson R\")\n",
857
+ "plt.show()"
858
+ ]
859
+ },
860
+ {
861
+ "cell_type": "code",
862
+ "execution_count": 28,
863
+ "id": "5fd964cd",
864
+ "metadata": {},
865
+ "outputs": [],
866
+ "source": [
867
+ "utils.seed_everything(seed)\n",
868
+ "\n",
869
+ "if train_test_split == 'orig':\n",
870
+ " # train = all images except images that were repeated\n",
871
+ " # test = average of the same-image presentations\n",
872
+ " imageTrain = np.arange(len(images))\n",
873
+ " train_image_indices = np.array([item for item in imageTrain if item not in pairs.flatten()])\n",
874
+ " test_image_indices = pairs\n",
875
+ " print(len(train_image_indices), len(test_image_indices))\n",
876
+ " assert len(train_image_indices) + len(test_image_indices) == len(image_idx)\n",
877
+ "elif train_test_split == 'MST':\n",
878
+ " # non-MST images are the train split\n",
879
+ " # MST images are the test split\n",
880
+ " MST_idx = np.array([v for k,v in image_to_indices.items() if 'MST_pairs' in k])\n",
881
+ " non_MST_idx = [v for k,v in image_to_indices.items() if 'MST_pairs' not in k]\n",
882
+ " non_MST_idx = np.array([z for y in non_MST_idx for x in y for z in x]) # flatten the indices\n",
883
+ " train_image_indices = non_MST_idx\n",
884
+ " test_image_indices = MST_idx.flatten() # MST_idx contains the mapping for the different test sets; test_image_indices has all MST indices combined\n",
885
+ " print(len(train_image_indices), len(test_image_indices))\n",
886
+ " assert len(train_image_indices) + len(test_image_indices) == len(vox)\n",
887
+ "elif train_test_split == 'unique':\n",
888
+ " imageTest = np.arange(len(images))\n",
889
+ " train_image_indices = pairs.flatten()\n",
890
+ " test_image_indices = np.array([item for item in imageTest if item not in pairs.flatten()])\n",
891
+ " print(len(train_image_indices), len(test_image_indices))\n",
892
+ " assert len(train_image_indices) + len(test_image_indices) == len(image_idx)\n",
893
+ "else:\n",
894
+ " raise Exception(\"invalid train_test_split\")\n",
895
+ "\n",
896
+ "# TODO add assertion that verifies file names in train and test don't overlap, guards against repeats\n",
897
+ "\n",
898
+ "for i in train_image_indices:\n",
899
+ " assert i not in test_image_indices"
900
+ ]
901
+ },
902
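The split logic above must leave train and test indices disjoint and jointly exhaustive; a compact sketch of the 'orig'-style split with hypothetical repeat groups:

```python
import numpy as np

n_trials = 8
pairs = np.array([[1, 4], [2, 6]])  # hypothetical repeated-trial index groups

# 'orig'-style split: train on trials shown once, test on the repeats.
train_image_indices = np.array([i for i in range(n_trials)
                                if i not in pairs.flatten()])
test_image_indices = pairs.flatten()

assert len(train_image_indices) + len(test_image_indices) == n_trials
assert not set(train_image_indices) & set(test_image_indices)  # disjoint splits
print(train_image_indices.tolist())  # [0, 3, 5, 7]
```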
+ {
903
+ "cell_type": "code",
904
+ "execution_count": 29,
905
+ "id": "98927cca",
906
+ "metadata": {},
907
+ "outputs": [],
908
+ "source": [
909
+ "ses_split = vox[train_image_indices].shape[0] // 2\n",
910
+ "\n",
911
+ "train_mean_s1 = np.mean(vox[train_image_indices][:ses_split], axis=0)\n",
912
+ "train_std_s1 = np.std(vox[train_image_indices][:ses_split], axis=0)\n",
913
+ "train_mean_s2 = np.mean(vox[train_image_indices][ses_split:], axis=0)\n",
914
+ "train_std_s2 = np.std(vox[train_image_indices][ses_split:], axis=0)\n",
915
+ "\n",
916
+ "print('shape of train mean from ses-01:', train_mean_s1.shape)\n",
917
+ "print('shape of train std from ses-01:', train_std_s1.shape)\n",
918
+ "print('shape of train mean from ses-02:', train_mean_s2.shape)\n",
919
+ "print('shape of train std from ses-02:', train_std_s2.shape)\n",
920
+ "\n",
921
+ "\n",
922
+ "vox[:ses_split] = utils.zscore(vox[:ses_split],train_mean=train_mean_s1,train_std=train_std_s1)\n",
923
+ "vox[ses_split:] = utils.zscore(vox[ses_split:],train_mean=train_mean_s2,train_std=train_std_s2)\n",
924
+ "\n",
925
+ "print(\"voxels have been zscored\")\n",
926
+ "print(\"ses-01:\", vox[:ses_split,0].mean(), vox[:ses_split,0].std())\n",
927
+ "print(\"ses-02:\", vox[ses_split:,0].mean(), vox[ses_split:,0].std())\n",
928
+ "print(\"vox\", vox.shape)"
929
+ ]
930
+ },
931
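The z-scoring above normalizes each session half with its own statistics. A NumPy sketch of the same idea (the `utils.zscore` signature here is an assumption; note the cell above computes the mean/std from the training trials only, while this sketch uses each half's own statistics for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
vox = rng.normal(loc=5.0, scale=2.0, size=(10, 4))  # trials x voxels
ses_split = vox.shape[0] // 2

def zscore(x, train_mean, train_std):
    # assumed to match utils.zscore(..., train_mean=..., train_std=...)
    return (x - train_mean) / train_std

# Normalize each session half separately, per voxel.
for sl in (slice(None, ses_split), slice(ses_split, None)):
    vox[sl] = zscore(vox[sl], vox[sl].mean(axis=0), vox[sl].std(axis=0))

print(np.allclose(vox[:ses_split].mean(axis=0), 0.0))  # True
```

Normalizing per session guards against session-to-session scanner drift dominating the combined data.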
+ {
932
+ "cell_type": "code",
933
+ "execution_count": 30,
934
+ "id": "c7a289d5",
935
+ "metadata": {},
936
+ "outputs": [],
937
+ "source": [
938
+ "# save the mean and std from ses-01 and 02\n",
939
+ "train_test_mean_s1 = np.mean(vox[:ses_split], axis=0)\n",
940
+ "train_test_std_s1 = np.std(vox[:ses_split], axis=0)\n",
941
+ "train_test_mean_s2 = np.mean(vox[ses_split:], axis=0)\n",
942
+ "train_test_std_s2 = np.std(vox[ses_split:], axis=0)\n",
943
+ "print(train_test_mean_s1.shape)\n",
944
+ "assert np.all(train_test_mean_s1.shape == train_test_std_s1.shape)\n",
945
+ "assert np.all(train_test_mean_s1.shape == train_test_mean_s2.shape)\n",
946
+ "assert np.all(train_test_mean_s1.shape == train_test_std_s2.shape)"
947
+ ]
948
+ },
949
+ {
950
+ "cell_type": "code",
951
+ "execution_count": 31,
952
+ "id": "242a0f0c",
953
+ "metadata": {},
954
+ "outputs": [],
955
+ "source": [
956
+ "# for idx in deleted_indices:\n",
957
+ "# # check image names to be deleted match\n",
958
+ "# original_name = vox_image_dict[idx]\n",
959
+ "# matching_indices = [i for i in deleted_indices if vox_image_dict[i] == original_name]\n",
960
+ "# assert all(vox_image_dict[i] == original_name for i in matching_indices), \\\n",
961
+ "# f\"Mismatch in image names for deleted indices {matching_indices}\"\n",
962
+ "\n",
963
+ "# # check image data to be deleted match\n",
964
+ "# base_image = images[matching_indices[0]] # Reference image\n",
965
+ "# for i in matching_indices[1:]:\n",
966
+ "# assert np.array_equal(base_image, images[i]), \\\n",
967
+ "# f\"Mismatch in image data for {vox_image_dict[i]} at index {i}\"\n",
968
+ "\n",
969
+ "# images = images[kept_indices]"
970
+ ]
971
+ },
972
+ {
973
+ "cell_type": "code",
974
+ "execution_count": 32,
975
+ "id": "1644ff68",
976
+ "metadata": {},
977
+ "outputs": [],
978
+ "source": [
979
+ "images = torch.Tensor(images)\n",
980
+ "vox = torch.Tensor(vox)\n",
981
+ "assert len(images) == len(vox)"
982
+ ]
983
+ },
984
+ {
985
+ "cell_type": "code",
986
+ "execution_count": 33,
987
+ "id": "f5eff44d",
988
+ "metadata": {},
989
+ "outputs": [],
990
+ "source": [
991
+ "### Multi-GPU config ###\n",
992
+ "from accelerate import Accelerator, DeepSpeedPlugin\n",
993
+ "\n",
994
+ "local_rank = os.getenv('RANK')\n",
995
+ "if local_rank is None: \n",
996
+ " local_rank = 0\n",
997
+ "else:\n",
998
+ " local_rank = int(local_rank)\n",
999
+ "print(\"LOCAL RANK \", local_rank) \n",
1000
+ "\n",
1001
+ "data_type = torch.float32 # change depending on your mixed_precision\n",
1002
+ "\n",
1003
+ "accelerator = Accelerator(split_batches=False)\n",
1004
+ "batch_size = 8 "
1005
+ ]
1006
+ },
1007
+ {
1008
+ "cell_type": "code",
1009
+ "execution_count": 34,
1010
+ "id": "13696477",
1011
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(\"PID of this process =\",os.getpid())\n",
+ "device = accelerator.device\n",
+ "print(\"device:\",device)\n",
+ "world_size = accelerator.state.num_processes\n",
+ "distributed = not accelerator.state.distributed_type == 'NO'\n",
+ "num_devices = torch.cuda.device_count()\n",
+ "global_batch_size = batch_size * num_devices\n",
+ "print(\"global_batch_size\", global_batch_size)\n",
+ "if num_devices==0 or not distributed: num_devices = 1\n",
+ "num_workers = num_devices\n",
+ "print(accelerator.state)\n",
+ "\n",
+ "# set data_type to match your mixed precision (automatically set based on deepspeed config)\n",
+ "if accelerator.mixed_precision == \"bf16\":\n",
+ " data_type = torch.bfloat16\n",
+ "elif accelerator.mixed_precision == \"fp16\":\n",
+ " data_type = torch.float16\n",
+ "else:\n",
+ " data_type = torch.float32\n",
+ "\n",
+ "print(\"distributed =\",distributed, \"num_devices =\", num_devices, \"local rank =\", local_rank, \"world size =\", world_size, \"data_type =\", data_type)\n",
+ "print = accelerator.print # only print if local_rank=0"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 35,
+ "id": "3076e4cc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# if running this interactively, can specify jupyter_args here for argparser to use\n",
+ "if utils.is_interactive():\n",
+ " model_name = 'vit-h' # 'sub-001_multi_bs24_MST_rishab_MSTsplit_remove_150_random_seed_0'\n",
+ " print(\"model_name:\", model_name)\n",
+ " \n",
+ " # global_batch_size and batch_size should already be defined in the above cells\n",
+ " # other variables can be specified in the following string:\n",
+ " # jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 --model_name={model_name}\"\n",
+ " batch_size = 24\n",
+ " jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 \\\n",
+ " --model_name={model_name} \\\n",
+ " --no-multi_subject --subj=1 --batch_size={batch_size} \\\n",
+ " --hidden_dim=1024 --clip_scale=1. \\\n",
+ " --no-blurry_recon --blur_scale=.5 \\\n",
+ " --no-use_prior --prior_scale=30 \\\n",
+ " --n_blocks=4 --max_lr=3e-4 --mixup_pct=.33 --num_epochs=30 --no-use_image_aug \\\n",
+ " --ckpt_interval=999 --ckpt_saving --new_test \\\n",
+ " --multisubject_ckpt=None\"\n",
+ " print(jupyter_args)\n",
+ " jupyter_args = jupyter_args.split()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 36,
+ "id": "d8c4b5e2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "parser = argparse.ArgumentParser(description=\"Model Training Configuration\")\n",
+ "parser.add_argument(\n",
+ " \"--model_name\", type=str, default=\"testing\",\n",
+ " help=\"name of model, used for ckpt saving and wandb logging (if enabled)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--data_path\", type=str, default=\"/weka/proj-fmri/shared/natural-scenes-dataset\",\n",
+ " help=\"Path to where NSD data is stored / where to download it to\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--subj\",type=int, default=1, choices=[1,2,3,4,5,6,7,8],\n",
+ " help=\"Validate on which subject?\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--multisubject_ckpt\", type=str, default=None,\n",
+ " help=\"Path to pre-trained multisubject model to finetune a single subject from. multisubject must be False.\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--num_sessions\", type=int, default=0,\n",
+ " help=\"Number of training sessions to include (if multi_subject, this variable doesnt matter)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--use_prior\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to train diffusion prior (True) or just rely on retrieval part of the pipeline (False)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--batch_size\", type=int, default=32,\n",
+ " help=\"Batch size can be increased by 10x if only training v2c and not diffusion diffuser\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--wandb_log\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to log to wandb\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--resume_from_ckpt\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"if not using wandb and want to resume from a ckpt\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--wandb_project\",type=str,default=\"stability\",\n",
+ " help=\"wandb project name\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--mixup_pct\",type=float,default=.33,\n",
+ " help=\"proportion of way through training when to switch from BiMixCo to SoftCLIP\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--low_mem\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to preload images to cpu to speed things up but consume more memory\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--blurry_recon\",action=argparse.BooleanOptionalAction,default=True,\n",
+ " help=\"whether to output blurry reconstructions\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--blur_scale\",type=float,default=.5,\n",
+ " help=\"multiply loss from blurry recons by this number\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--clip_scale\",type=float,default=1.,\n",
+ " help=\"multiply contrastive loss by this number\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--prior_scale\",type=float,default=30,\n",
+ " help=\"multiply diffusion prior loss by this\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--use_image_aug\",action=argparse.BooleanOptionalAction,default=True,\n",
+ " help=\"whether to use image augmentation\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--num_epochs\",type=int,default=120,\n",
+ " help=\"number of epochs of training\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--multi_subject\",action=argparse.BooleanOptionalAction,default=False,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--new_test\",action=argparse.BooleanOptionalAction,default=True,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--n_blocks\",type=int,default=2,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--hidden_dim\",type=int,default=1024,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seq_past\",type=int,default=0,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seq_future\",type=int,default=0,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--lr_scheduler_type\",type=str,default='cycle',choices=['cycle','linear'],\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--ckpt_saving\",action=argparse.BooleanOptionalAction,default=True,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--ckpt_interval\",type=int,default=5,\n",
+ " help=\"save backup ckpt and reconstruct every x epochs\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seed\",type=int,default=42,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--max_lr\",type=float,default=3e-4,\n",
+ ")\n",
+ "\n",
+ "if utils.is_interactive():\n",
+ " args = parser.parse_args(jupyter_args)\n",
+ "else:\n",
+ " args = parser.parse_args()\n",
+ "\n",
+ "# create global variables without the args prefix\n",
+ "for attribute_name in vars(args).keys():\n",
+ " globals()[attribute_name] = getattr(args, attribute_name)\n",
+ " \n",
+ "outdir = os.path.abspath(f'/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/train_logs/{model_name}')\n",
+ "if not os.path.exists(outdir) and ckpt_saving:\n",
+ " os.makedirs(outdir,exist_ok=True)\n",
+ " \n",
+ "if use_image_aug or blurry_recon:\n",
+ " import kornia\n",
+ " import kornia.augmentation as K\n",
+ " from kornia.augmentation.container import AugmentationSequential\n",
+ "if use_image_aug:\n",
+ " img_augment = AugmentationSequential(\n",
+ " kornia.augmentation.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.3),\n",
+ " same_on_batch=False,\n",
+ " data_keys=[\"input\"],\n",
+ " )\n",
+ " # Define the blurring augmentations\n",
+ " blur_augment = K.RandomGaussianBlur(kernel_size=(21, 21), sigma=(51.0, 51.0), p=1.)\n",
+ " \n",
+ "if multi_subject:\n",
+ " subj_list = np.arange(1,9)\n",
+ " subj_list = subj_list[subj_list != subj]\n",
+ "else:\n",
+ " subj_list = [subj]\n",
+ "\n",
+ "print(\"subj_list\", subj_list, \"num_sessions\", num_sessions)"
+ ]
+ },
1217
+ {
+ "cell_type": "code",
+ "execution_count": 37,
+ "id": "9f6cbde6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "parser = argparse.ArgumentParser(description=\"Model Training Configuration\")\n",
+ "parser.add_argument(\n",
+ " \"--model_name\", type=str, default=\"testing\",\n",
+ " help=\"name of model, used for ckpt saving and wandb logging (if enabled)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--data_path\", type=str, default=\"/weka/proj-fmri/shared/natural-scenes-dataset\",\n",
+ " help=\"Path to where NSD data is stored / where to download it to\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--subj\",type=int, default=1, choices=[1,2,3,4,5,6,7,8],\n",
+ " help=\"Validate on which subject?\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--multisubject_ckpt\", type=str, default=None,\n",
+ " help=\"Path to pre-trained multisubject model to finetune a single subject from. multisubject must be False.\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--num_sessions\", type=int, default=0,\n",
+ " help=\"Number of training sessions to include (if multi_subject, this variable doesnt matter)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--use_prior\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to train diffusion prior (True) or just rely on retrieval part of the pipeline (False)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--batch_size\", type=int, default=32,\n",
+ " help=\"Batch size can be increased by 10x if only training v2c and not diffusion diffuser\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--wandb_log\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to log to wandb\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--resume_from_ckpt\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"if not using wandb and want to resume from a ckpt\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--wandb_project\",type=str,default=\"stability\",\n",
+ " help=\"wandb project name\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--mixup_pct\",type=float,default=.33,\n",
+ " help=\"proportion of way through training when to switch from BiMixCo to SoftCLIP\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--low_mem\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to preload images to cpu to speed things up but consume more memory\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--blurry_recon\",action=argparse.BooleanOptionalAction,default=True,\n",
+ " help=\"whether to output blurry reconstructions\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--blur_scale\",type=float,default=.5,\n",
+ " help=\"multiply loss from blurry recons by this number\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--clip_scale\",type=float,default=1.,\n",
+ " help=\"multiply contrastive loss by this number\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--prior_scale\",type=float,default=30,\n",
+ " help=\"multiply diffusion prior loss by this\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--use_image_aug\",action=argparse.BooleanOptionalAction,default=True,\n",
+ " help=\"whether to use image augmentation\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--num_epochs\",type=int,default=120,\n",
+ " help=\"number of epochs of training\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--multi_subject\",action=argparse.BooleanOptionalAction,default=False,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--new_test\",action=argparse.BooleanOptionalAction,default=True,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--n_blocks\",type=int,default=2,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--hidden_dim\",type=int,default=1024,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seq_past\",type=int,default=0,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seq_future\",type=int,default=0,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--lr_scheduler_type\",type=str,default='cycle',choices=['cycle','linear'],\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--ckpt_saving\",action=argparse.BooleanOptionalAction,default=True,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--ckpt_interval\",type=int,default=5,\n",
+ " help=\"save backup ckpt and reconstruct every x epochs\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seed\",type=int,default=42,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--max_lr\",type=float,default=3e-4,\n",
+ ")\n",
+ "\n",
+ "if utils.is_interactive():\n",
+ " args = parser.parse_args(jupyter_args)\n",
+ "else:\n",
+ " args = parser.parse_args()\n",
+ "\n",
+ "# create global variables without the args prefix\n",
+ "for attribute_name in vars(args).keys():\n",
+ " globals()[attribute_name] = getattr(args, attribute_name)\n",
+ " \n",
+ "outdir = os.path.abspath(f'./train_logs/{model_name}')\n",
+ "if not os.path.exists(outdir) and ckpt_saving:\n",
+ " os.makedirs(outdir,exist_ok=True)\n",
+ " \n",
+ "if use_image_aug or blurry_recon:\n",
+ " import kornia\n",
+ " import kornia.augmentation as K\n",
+ " from kornia.augmentation.container import AugmentationSequential\n",
+ "if use_image_aug:\n",
+ " img_augment = AugmentationSequential(\n",
+ " kornia.augmentation.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.3),\n",
+ " same_on_batch=False,\n",
+ " data_keys=[\"input\"],\n",
+ " )\n",
+ " # Define the blurring augmentations\n",
+ " blur_augment = K.RandomGaussianBlur(kernel_size=(21, 21), sigma=(51.0, 51.0), p=1.)\n",
+ " \n",
+ "if multi_subject:\n",
+ " subj_list = np.arange(1,9)\n",
+ " subj_list = subj_list[subj_list != subj]\n",
+ "else:\n",
+ " subj_list = [subj]\n",
+ "\n",
+ "print(\"subj_list\", subj_list, \"num_sessions\", num_sessions)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 38,
+ "id": "957e3d21",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if ckpt_saving:\n",
+ " # save MST_ID for 2-alternative forced-choice retrieval evaluation \n",
+ " if 'MST' in model_name:\n",
+ " eval_dir = os.environ[\"eval_dir\"]\n",
+ " print('saving MST info in', eval_dir)\n",
+ " # Saving ##\n",
+ " if not os.path.exists(eval_dir):\n",
+ " os.mkdir(eval_dir)\n",
+ "\n",
+ " np.save(f\"{eval_dir}/MST_ID.npy\", MST_ID)\n",
+ " np.save(f\"{eval_dir}/MST_pairmate_indices.npy\", MST_pairmate_indices)\n",
+ "\n",
+ " if remove_random_n:\n",
+ " np.save(f\"{eval_dir}/imgs_to_remove.npy\", imgs_to_remove)\n",
+ "\n",
+ " np.save(f\"{eval_dir}/train_image_indices.npy\", train_image_indices)\n",
+ " np.save(f\"{eval_dir}/test_image_indices.npy\", test_image_indices)\n",
+ " np.save(f\"{eval_dir}/images.npy\", images)\n",
+ " np.save(f\"{eval_dir}/vox.npy\", vox)\n",
+ " \n",
+ " np.save(f'{eval_dir}/train_test_mean_s1.npy', train_test_mean_s1)\n",
+ " np.save(f'{eval_dir}/train_test_std_s1.npy', train_test_std_s1)\n",
+ " np.save(f'{eval_dir}/train_test_mean_s2.npy', train_test_mean_s2)\n",
+ " np.save(f'{eval_dir}/train_test_std_s2.npy', train_test_std_s2)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 39,
+ "id": "7fec6e0b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if ckpt_saving:\n",
+ " # save MST_ID for 2-alternative forced-choice retrieval evaluation \n",
+ " if 'MST' in model_name or True:\n",
+ " eval_dir = os.environ[\"eval_dir\"]\n",
+ " print('saving MST info in', eval_dir)\n",
+ " # Saving ##\n",
+ " if not os.path.exists(eval_dir):\n",
+ " os.mkdir(eval_dir)\n",
+ "\n",
+ " np.save(f\"{eval_dir}/MST_ID.npy\", MST_ID)\n",
+ " np.save(f\"{eval_dir}/MST_pairmate_indices.npy\", MST_pairmate_indices)\n",
+ "\n",
+ " if remove_random_n:\n",
+ " np.save(f\"{eval_dir}/imgs_to_remove.npy\", imgs_to_remove)\n",
+ "\n",
+ " np.save(f\"{eval_dir}/train_image_indices.npy\", train_image_indices)\n",
+ " np.save(f\"{eval_dir}/test_image_indices.npy\", test_image_indices)\n",
+ " np.save(f\"{eval_dir}/images.npy\", images)\n",
+ " np.save(f\"{eval_dir}/vox.npy\", vox)\n",
+ " \n",
+ " np.save(f'{eval_dir}/train_test_mean_s1.npy', train_test_mean_s1)\n",
+ " np.save(f'{eval_dir}/train_test_std_s1.npy', train_test_std_s1)\n",
+ " np.save(f'{eval_dir}/train_test_mean_s2.npy', train_test_mean_s2)\n",
+ " np.save(f'{eval_dir}/train_test_std_s2.npy', train_test_std_s2)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 40,
+ "id": "f9bb9d1c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# if running this interactively, can specify jupyter_args here for argparser to use\n",
+ "if utils.is_interactive():\n",
+ " model_name = 'vit-h-MST' # 'sub-001_multi_bs24_MST_rishab_MSTsplit_remove_150_random_seed_0'\n",
+ " print(\"model_name:\", model_name)\n",
+ " \n",
+ " # global_batch_size and batch_size should already be defined in the above cells\n",
+ " # other variables can be specified in the following string:\n",
+ " # jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 --model_name={model_name}\"\n",
+ " batch_size = 24\n",
+ " jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 \\\n",
+ " --model_name={model_name} \\\n",
+ " --no-multi_subject --subj=1 --batch_size={batch_size} \\\n",
+ " --hidden_dim=1024 --clip_scale=1. \\\n",
+ " --no-blurry_recon --blur_scale=.5 \\\n",
+ " --no-use_prior --prior_scale=30 \\\n",
+ " --n_blocks=4 --max_lr=3e-4 --mixup_pct=.33 --num_epochs=30 --no-use_image_aug \\\n",
+ " --ckpt_interval=999 --ckpt_saving --new_test \\\n",
+ " --multisubject_ckpt=None\"\n",
+ " print(jupyter_args)\n",
+ " jupyter_args = jupyter_args.split()"
+ ]
+ },
1462
+ {
+ "cell_type": "code",
+ "execution_count": 41,
+ "id": "d112b218",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "parser = argparse.ArgumentParser(description=\"Model Training Configuration\")\n",
+ "parser.add_argument(\n",
+ " \"--model_name\", type=str, default=\"testing\",\n",
+ " help=\"name of model, used for ckpt saving and wandb logging (if enabled)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--data_path\", type=str, default=\"/weka/proj-fmri/shared/natural-scenes-dataset\",\n",
+ " help=\"Path to where NSD data is stored / where to download it to\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--subj\",type=int, default=1, choices=[1,2,3,4,5,6,7,8],\n",
+ " help=\"Validate on which subject?\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--multisubject_ckpt\", type=str, default=None,\n",
+ " help=\"Path to pre-trained multisubject model to finetune a single subject from. multisubject must be False.\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--num_sessions\", type=int, default=0,\n",
+ " help=\"Number of training sessions to include (if multi_subject, this variable doesnt matter)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--use_prior\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to train diffusion prior (True) or just rely on retrieval part of the pipeline (False)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--batch_size\", type=int, default=32,\n",
+ " help=\"Batch size can be increased by 10x if only training v2c and not diffusion diffuser\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--wandb_log\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to log to wandb\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--resume_from_ckpt\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"if not using wandb and want to resume from a ckpt\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--wandb_project\",type=str,default=\"stability\",\n",
+ " help=\"wandb project name\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--mixup_pct\",type=float,default=.33,\n",
+ " help=\"proportion of way through training when to switch from BiMixCo to SoftCLIP\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--low_mem\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to preload images to cpu to speed things up but consume more memory\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--blurry_recon\",action=argparse.BooleanOptionalAction,default=True,\n",
+ " help=\"whether to output blurry reconstructions\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--blur_scale\",type=float,default=.5,\n",
+ " help=\"multiply loss from blurry recons by this number\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--clip_scale\",type=float,default=1.,\n",
+ " help=\"multiply contrastive loss by this number\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--prior_scale\",type=float,default=30,\n",
+ " help=\"multiply diffusion prior loss by this\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--use_image_aug\",action=argparse.BooleanOptionalAction,default=True,\n",
+ " help=\"whether to use image augmentation\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--num_epochs\",type=int,default=120,\n",
+ " help=\"number of epochs of training\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--multi_subject\",action=argparse.BooleanOptionalAction,default=False,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--new_test\",action=argparse.BooleanOptionalAction,default=True,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--n_blocks\",type=int,default=2,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--hidden_dim\",type=int,default=1024,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seq_past\",type=int,default=0,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seq_future\",type=int,default=0,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--lr_scheduler_type\",type=str,default='cycle',choices=['cycle','linear'],\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--ckpt_saving\",action=argparse.BooleanOptionalAction,default=True,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--ckpt_interval\",type=int,default=5,\n",
+ " help=\"save backup ckpt and reconstruct every x epochs\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seed\",type=int,default=42,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--max_lr\",type=float,default=3e-4,\n",
+ ")\n",
+ "\n",
+ "if utils.is_interactive():\n",
+ " args = parser.parse_args(jupyter_args)\n",
+ "else:\n",
+ " args = parser.parse_args()\n",
+ "\n",
+ "# create global variables without the args prefix\n",
+ "for attribute_name in vars(args).keys():\n",
+ " globals()[attribute_name] = getattr(args, attribute_name)\n",
+ " \n",
+ "outdir = os.path.abspath(f'./train_logs/{model_name}')\n",
+ "if not os.path.exists(outdir) and ckpt_saving:\n",
+ " os.makedirs(outdir,exist_ok=True)\n",
+ " \n",
+ "if use_image_aug or blurry_recon:\n",
+ " import kornia\n",
+ " import kornia.augmentation as K\n",
+ " from kornia.augmentation.container import AugmentationSequential\n",
+ "if use_image_aug:\n",
+ " img_augment = AugmentationSequential(\n",
+ " kornia.augmentation.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.3),\n",
+ " same_on_batch=False,\n",
+ " data_keys=[\"input\"],\n",
+ " )\n",
+ " # Define the blurring augmentations\n",
+ " blur_augment = K.RandomGaussianBlur(kernel_size=(21, 21), sigma=(51.0, 51.0), p=1.)\n",
+ " \n",
+ "if multi_subject:\n",
+ " subj_list = np.arange(1,9)\n",
+ " subj_list = subj_list[subj_list != subj]\n",
+ "else:\n",
+ " subj_list = [subj]\n",
+ "\n",
+ "print(\"subj_list\", subj_list, \"num_sessions\", num_sessions)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 42,
+ "id": "4846c60d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if ckpt_saving:\n",
+ " # save MST_ID for 2-alternative forced-choice retrieval evaluation \n",
+ " if 'MST' in model_name:\n",
+ " if utils.is_interactive():\n",
+ " eval_dir = os.path.join(outdir, \"eval_dir\")\n",
+ " else:\n",
+ " eval_dir = os.environ[\"eval_dir\"]\n",
+ " print('saving MST info in', eval_dir)\n",
+ " # Saving ##\n",
+ " if not os.path.exists(eval_dir):\n",
+ " os.mkdir(eval_dir)\n",
+ "\n",
+ " np.save(f\"{eval_dir}/MST_ID.npy\", MST_ID)\n",
+ " np.save(f\"{eval_dir}/MST_pairmate_indices.npy\", MST_pairmate_indices)\n",
+ "\n",
+ " if remove_random_n:\n",
+ " np.save(f\"{eval_dir}/imgs_to_remove.npy\", imgs_to_remove)\n",
+ "\n",
+ " np.save(f\"{eval_dir}/train_image_indices.npy\", train_image_indices)\n",
+ " np.save(f\"{eval_dir}/test_image_indices.npy\", test_image_indices)\n",
+ " np.save(f\"{eval_dir}/images.npy\", images)\n",
+ " np.save(f\"{eval_dir}/vox.npy\", vox)\n",
+ " \n",
+ " np.save(f'{eval_dir}/train_test_mean_s1.npy', train_test_mean_s1)\n",
+ " np.save(f'{eval_dir}/train_test_std_s1.npy', train_test_std_s1)\n",
+ " np.save(f'{eval_dir}/train_test_mean_s2.npy', train_test_mean_s2)\n",
+ " np.save(f'{eval_dir}/train_test_std_s2.npy', train_test_std_s2)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 43,
+ "id": "b0d9d4bd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if ckpt_saving:\n",
+ " # save MST_ID for 2-alternative forced-choice retrieval evaluation \n",
+ " if 'MST' in model_name:\n",
+ " if utils.is_interactive():\n",
+ " eval_dir = os.path.join(outdir, \"eval_dir\")\n",
+ " else:\n",
+ " eval_dir = os.environ[\"eval_dir\"]\n",
+ " print('saving MST info in', eval_dir)\n",
+ " # Saving ##\n",
+ " if not os.path.exists(eval_dir):\n",
+ " os.mkdir(eval_dir)\n",
+ "\n",
+ " np.save(f\"{eval_dir}/MST_ID.npy\", MST_ID)\n",
+ " np.save(f\"{eval_dir}/MST_pairmate_indices.npy\", MST_pairmate_indices)\n",
+ "\n",
+ " if remove_random_n:\n",
+ " np.save(f\"{eval_dir}/imgs_to_remove.npy\", imgs_to_remove)\n",
+ "\n",
+ " np.save(f\"{eval_dir}/train_image_indices.npy\", train_image_indices)\n",
+ " np.save(f\"{eval_dir}/test_image_indices.npy\", test_image_indices)\n",
+ " np.save(f\"{eval_dir}/images.npy\", images)\n",
+ " np.save(f\"{eval_dir}/vox.npy\", vox)\n",
+ " \n",
+ " np.save(f'{eval_dir}/train_test_mean_s1.npy', train_test_mean_s1)\n",
+ " np.save(f'{eval_dir}/train_test_std_s1.npy', train_test_std_s1)\n",
+ " np.save(f'{eval_dir}/train_test_mean_s2.npy', train_test_mean_s2)\n",
+ " np.save(f'{eval_dir}/train_test_std_s2.npy', train_test_std_s2)"
+ ]
+ },
1684
+ {
+ "cell_type": "code",
+ "execution_count": 44,
+ "id": "8f59503d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def my_split_by_node(urls): return urls\n",
+ "num_voxels_list = []\n",
+ "\n",
+ "if multi_subject:\n",
+ " nsessions_allsubj=np.array([40, 40, 32, 30, 40, 32, 40, 30])\n",
+ " num_samples_per_epoch = (750*40) // num_devices \n",
+ "else:\n",
+ " # num_samples_per_epoch = (750*num_sessions) // num_devices \n",
+ " num_samples_per_epoch = len(train_image_indices)\n",
+ "\n",
+ "print(\"dividing batch size by subj_list, which will then be concatenated across subj during training...\") \n",
+ "batch_size = batch_size // len(subj_list)\n",
+ "\n",
+ "num_iterations_per_epoch = num_samples_per_epoch // (batch_size*len(subj_list))\n",
+ "\n",
+ "print(\"batch_size =\", batch_size, \"num_iterations_per_epoch =\",num_iterations_per_epoch, \"num_samples_per_epoch =\",num_samples_per_epoch)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 45,
+ "id": "5e5ffb53",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "train_data = {}\n",
+ "train_dl = {}\n",
+ "\n",
+ "train_data[f'subj0{subj}'] = torch.utils.data.TensorDataset(torch.tensor(train_image_indices))\n",
+ "test_data = torch.utils.data.TensorDataset(torch.tensor(test_image_indices))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 46,
+ "id": "4c12edab",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "num_voxels = {}\n",
+ "voxels = {}\n",
+ "for s in subj_list:\n",
+ " print(f\"Training with {num_sessions} sessions\")\n",
+ " train_dl = torch.utils.data.DataLoader(train_data[f'subj0{s}'], batch_size=batch_size, shuffle=True, drop_last=True, pin_memory=True)\n",
+ "\n",
+ " num_voxels_list.append(vox[0].shape[-1])\n",
+ " num_voxels[f'subj0{s}'] = vox[0].shape[-1]\n",
+ " voxels[f'subj0{s}'] = vox\n",
+ " print(f\"num_voxels for subj0{s}: {num_voxels[f'subj0{s}']}\")\n",
+ "\n",
+ "print(\"Loaded all subj train dls and vox!\\n\")\n",
+ "\n",
+ "# Validate only on one subject\n",
+ "if multi_subject: \n",
+ " subj = subj_list[0] # cant validate on the actual held out person so picking first in subj_list\n",
+ "test_dl = torch.utils.data.DataLoader(test_data, batch_size=24, shuffle=False, drop_last=True, pin_memory=True)\n",
+ "\n",
+ "print(f\"Loaded test dl for subj{subj}!\\n\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 47,
+ "id": "e0a00122",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "## USING OpenCLIP ViT-bigG ###\n",
+ "sys.path.append('generative_models/')\n",
+ "import sgm\n",
+ "from generative_models.sgm.modules.encoders.modules import FrozenOpenCLIPImageEmbedder\n",
+ "# from generative_models.sgm.models.diffusion import DiffusionEngine\n",
+ "# from omegaconf import OmegaConf\n",
+ "\n",
+ "try:\n",
+ " print(clip_img_embedder)\n",
+ "except:\n",
+ " clip_img_embedder = FrozenOpenCLIPImageEmbedder(\n",
+ " arch=\"ViT-bigG-14\",\n",
+ " version=\"laion2b_s39b_b160k\",\n",
+ " output_tokens=True,\n",
+ " only_tokens=True,\n",
+ " )\n",
+ " clip_img_embedder.to(device)\n",
+ "clip_seq_dim = 256\n",
+ "clip_emb_dim = 1664\n",
+ "\n",
+ "# ## USING OPEN AI CLIP ViT-L ###\n",
+ "# import clip\n",
+ "# try:\n",
+ "# print(clip_model)\n",
+ "# except:\n",
+ "# clip_model, preprocess = clip.load(\"ViT-L/14\", device=device)\n",
+ "# preprocess = transforms.Compose([\n",
+ "# transforms.Resize(224, interpolation=transforms.InterpolationMode.BILINEAR),\n",
+ "# transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],\n",
+ "# std=[0.26862954, 0.26130258, 0.27577711]),\n",
+ "# ])\n",
+ "# def clip_img_embedder(image):\n",
+ "# preproc_img = preprocess(image)\n",
+ "# return clip_model.encode_image(preproc_img)\n",
+ "# clip_seq_dim = 1\n",
+ "# clip_emb_dim = 768"
+ ]
+ },
1796
+ {
+ "cell_type": "code",
+ "execution_count": 48,
+ "id": "c308f889",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# ## USING OpenCLIP ViT-bigG ###\n",
+ "# sys.path.append('generative_models/')\n",
+ "# import sgm\n",
+ "# from generative_models.sgm.modules.encoders.modules import FrozenOpenCLIPImageEmbedder\n",
+ "# # from generative_models.sgm.models.diffusion import DiffusionEngine\n",
+ "# # from omegaconf import OmegaConf\n",
+ "\n",
+ "try:\n",
+ " print(clip_img_embedder)\n",
+ "except NameError: # only build the embedder if it doesn't already exist\n",
+ " clip_img_embedder = FrozenOpenCLIPImageEmbedder(\n",
+ " arch=\"ViT-H-14\",\n",
+ " version=\"laion2b_s32b_b79k\",\n",
+ " output_tokens=True,\n",
+ " only_tokens=True,\n",
+ " )\n",
+ " clip_img_embedder.to(device)\n",
+ "clip_seq_dim = 256\n",
+ "clip_emb_dim = 1280\n",
+ "\n",
+ "# # ## USING OPEN AI CLIP ViT-L ###\n",
+ "# # import clip\n",
+ "# # try:\n",
+ "# # print(clip_model)\n",
+ "# # except:\n",
+ "# # clip_model, preprocess = clip.load(\"ViT-L/14\", device=device)\n",
+ "# # preprocess = transforms.Compose([\n",
+ "# # transforms.Resize(224, interpolation=transforms.InterpolationMode.BILINEAR),\n",
+ "# # transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],\n",
+ "# # std=[0.26862954, 0.26130258, 0.27577711]),\n",
+ "# # ])\n",
+ "# # def clip_img_embedder(image):\n",
+ "# # preproc_img = preprocess(image)\n",
+ "# # return clip_model.encode_image(preproc_img)\n",
+ "# # clip_seq_dim = 1\n",
+ "# # clip_emb_dim = 768"
+ ]
+ },
1841
+ {
+ "cell_type": "code",
+ "execution_count": 49,
+ "id": "af081f8c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# if running this interactively, can specify jupyter_args here for argparser to use\n",
+ "if utils.is_interactive():\n",
+ " model_name = 'vit-h-MST' # 'sub-001_multi_bs24_MST_rishab_MSTsplit_remove_150_random_seed_0'\n",
+ " print(\"model_name:\", model_name)\n",
+ " \n",
+ " # global_batch_size and batch_size should already be defined in the above cells\n",
+ " # other variables can be specified in the following string:\n",
+ " # jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 --model_name={model_name}\"\n",
+ " batch_size = 24\n",
+ " jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 \\\n",
+ " --model_name={model_name} \\\n",
+ " --no-multi_subject --subj=1 --batch_size={batch_size} \\\n",
+ " --hidden_dim=1024 --clip_scale=1. \\\n",
+ " --no-blurry_recon --blur_scale=.5 \\\n",
+ " --no-use_prior --prior_scale=30 \\\n",
+ " --n_blocks=4 --max_lr=3e-4 --mixup_pct=.33 --num_epochs=30 --no-use_image_aug \\\n",
+ " --ckpt_interval=999 --ckpt_saving --new_test \\\n",
+ " --multisubject_ckpt=None --wandb_log\"\n",
+ " print(jupyter_args)\n",
+ " jupyter_args = jupyter_args.split()"
+ ]
+ },
1870
+ {
+ "cell_type": "code",
+ "execution_count": 50,
+ "id": "d5b9cf29",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "parser = argparse.ArgumentParser(description=\"Model Training Configuration\")\n",
+ "parser.add_argument(\n",
+ " \"--model_name\", type=str, default=\"testing\",\n",
+ " help=\"name of model, used for ckpt saving and wandb logging (if enabled)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--data_path\", type=str, default=\"/weka/proj-fmri/shared/natural-scenes-dataset\",\n",
+ " help=\"Path to where NSD data is stored / where to download it to\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--subj\",type=int, default=1, choices=[1,2,3,4,5,6,7,8],\n",
+ " help=\"Validate on which subject?\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--multisubject_ckpt\", type=str, default=None,\n",
+ " help=\"Path to pre-trained multisubject model to finetune a single subject from. multisubject must be False.\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--num_sessions\", type=int, default=0,\n",
+ " help=\"Number of training sessions to include (if multi_subject, this variable doesn't matter)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--use_prior\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to train diffusion prior (True) or just rely on retrieval part of the pipeline (False)\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--batch_size\", type=int, default=32,\n",
+ " help=\"Batch size can be increased by 10x if only training v2c and not diffusion diffuser\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--wandb_log\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to log to wandb\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--resume_from_ckpt\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"if not using wandb and want to resume from a ckpt\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--wandb_project\",type=str,default=\"stability\",\n",
+ " help=\"wandb project name\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--mixup_pct\",type=float,default=.33,\n",
+ " help=\"proportion of way through training when to switch from BiMixCo to SoftCLIP\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--low_mem\",action=argparse.BooleanOptionalAction,default=False,\n",
+ " help=\"whether to preload images to cpu to speed things up but consume more memory\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--blurry_recon\",action=argparse.BooleanOptionalAction,default=True,\n",
+ " help=\"whether to output blurry reconstructions\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--blur_scale\",type=float,default=.5,\n",
+ " help=\"multiply loss from blurry recons by this number\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--clip_scale\",type=float,default=1.,\n",
+ " help=\"multiply contrastive loss by this number\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--prior_scale\",type=float,default=30,\n",
+ " help=\"multiply diffusion prior loss by this\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--use_image_aug\",action=argparse.BooleanOptionalAction,default=True,\n",
+ " help=\"whether to use image augmentation\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--num_epochs\",type=int,default=120,\n",
+ " help=\"number of epochs of training\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--multi_subject\",action=argparse.BooleanOptionalAction,default=False,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--new_test\",action=argparse.BooleanOptionalAction,default=True,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--n_blocks\",type=int,default=2,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--hidden_dim\",type=int,default=1024,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seq_past\",type=int,default=0,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seq_future\",type=int,default=0,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--lr_scheduler_type\",type=str,default='cycle',choices=['cycle','linear'],\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--ckpt_saving\",action=argparse.BooleanOptionalAction,default=True,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--ckpt_interval\",type=int,default=5,\n",
+ " help=\"save backup ckpt and reconstruct every x epochs\",\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--seed\",type=int,default=42,\n",
+ ")\n",
+ "parser.add_argument(\n",
+ " \"--max_lr\",type=float,default=3e-4,\n",
+ ")\n",
+ "\n",
+ "if utils.is_interactive():\n",
+ " args = parser.parse_args(jupyter_args)\n",
+ "else:\n",
+ " args = parser.parse_args()\n",
+ "\n",
+ "# create global variables without the args prefix\n",
+ "for attribute_name in vars(args).keys():\n",
+ " globals()[attribute_name] = getattr(args, attribute_name)\n",
+ " \n",
+ "outdir = os.path.abspath(f'./train_logs/{model_name}')\n",
+ "if not os.path.exists(outdir) and ckpt_saving:\n",
+ " os.makedirs(outdir,exist_ok=True)\n",
+ " \n",
+ "if use_image_aug or blurry_recon:\n",
+ " import kornia\n",
+ " import kornia.augmentation as K\n",
+ " from kornia.augmentation.container import AugmentationSequential\n",
+ "if use_image_aug:\n",
+ " img_augment = AugmentationSequential(\n",
+ " kornia.augmentation.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.3),\n",
+ " same_on_batch=False,\n",
+ " data_keys=[\"input\"],\n",
+ " )\n",
+ " # Define the blurring augmentations\n",
+ " blur_augment = K.RandomGaussianBlur(kernel_size=(21, 21), sigma=(51.0, 51.0), p=1.)\n",
+ " \n",
+ "if multi_subject:\n",
+ " subj_list = np.arange(1,9)\n",
+ " subj_list = subj_list[subj_list != subj]\n",
+ "else:\n",
+ " subj_list = [subj]\n",
+ "\n",
+ "print(\"subj_list\", subj_list, \"num_sessions\", num_sessions)"
+ ]
+ },
2020
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "id": "925f533f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model = utils.prepare_model_and_training(\n",
+ " num_voxels_list=num_voxels_list,\n",
+ " n_blocks=n_blocks,\n",
+ " hidden_dim=hidden_dim,\n",
+ " clip_emb_dim=clip_emb_dim,\n",
+ " clip_seq_dim=clip_seq_dim,\n",
+ " use_prior=use_prior,\n",
+ " clip_scale=clip_scale\n",
+ ")"
+ ]
+ },
2038
+ {
+ "cell_type": "code",
+ "execution_count": 52,
+ "id": "4572d154",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# test on subject 1 with fake data\n",
+ "b = torch.randn((2,1,num_voxels_list[0]))\n",
+ "print(b.shape, model.ridge(b,0).shape)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 53,
+ "id": "fed5fade",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# test that the model works on some fake data\n",
+ "b = torch.randn((2,1,hidden_dim))\n",
+ "print(\"b.shape\",b.shape)\n",
+ "\n",
+ "backbone_, clip_, blur_ = model.backbone(b)\n",
+ "print(backbone_.shape, clip_.shape, blur_[0].shape, blur_[1].shape)"
+ ]
+ },
2065
+ {
+ "cell_type": "code",
+ "execution_count": 54,
+ "id": "ca55bf63",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if use_prior:\n",
+ " from models import *\n",
+ "\n",
+ " # setup diffusion prior network\n",
+ " out_dim = clip_emb_dim\n",
+ " depth = 6\n",
+ " dim_head = 52\n",
+ " heads = clip_emb_dim//52 # heads * dim_head = clip_emb_dim\n",
+ " timesteps = 100\n",
+ "\n",
+ " prior_network = VersatileDiffusionPriorNetwork(\n",
+ " dim=out_dim,\n",
+ " depth=depth,\n",
+ " dim_head=dim_head,\n",
+ " heads=heads,\n",
+ " causal=False,\n",
+ " num_tokens = clip_seq_dim,\n",
+ " learned_query_mode=\"pos_emb\"\n",
+ " )\n",
+ "\n",
+ " model.diffusion_prior = BrainDiffusionPrior(\n",
+ " net=prior_network,\n",
+ " image_embed_dim=out_dim,\n",
+ " condition_on_text_encodings=False,\n",
+ " timesteps=timesteps,\n",
+ " cond_drop_prob=0.2,\n",
+ " image_embed_scale=None,\n",
+ " )\n",
+ " \n",
+ " utils.count_params(model.diffusion_prior)\n",
+ " utils.count_params(model)"
+ ]
+ },
2105
+ {
+ "cell_type": "code",
+ "execution_count": 55,
+ "id": "04a6fed8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']\n",
+ "\n",
+ "opt_grouped_parameters = [\n",
+ " {'params': [p for n, p in model.ridge.named_parameters()], 'weight_decay': 1e-2},\n",
+ " {'params': [p for n, p in model.backbone.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 1e-2},\n",
+ " {'params': [p for n, p in model.backbone.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0},\n",
+ "]\n",
+ "# model.backbone.requires_grad_(False)\n",
+ "\n",
+ "if use_prior:\n",
+ " opt_grouped_parameters.extend([\n",
+ " {'params': [p for n, p in model.diffusion_prior.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 1e-2},\n",
+ " {'params': [p for n, p in model.diffusion_prior.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}\n",
+ " ])\n",
+ "\n",
+ "optimizer = torch.optim.AdamW(opt_grouped_parameters, lr=max_lr)\n",
+ "\n",
+ "if lr_scheduler_type == 'linear':\n",
+ " lr_scheduler = torch.optim.lr_scheduler.LinearLR(\n",
+ " optimizer,\n",
+ " total_iters=int(np.floor(num_epochs*num_iterations_per_epoch)),\n",
+ " last_epoch=-1\n",
+ " )\n",
+ "elif lr_scheduler_type == 'cycle':\n",
+ " if num_iterations_per_epoch==0:\n",
+ " num_iterations_per_epoch=1\n",
+ " total_steps=int(np.floor(num_epochs*num_iterations_per_epoch))\n",
+ " print(\"total_steps\", total_steps)\n",
+ " lr_scheduler = torch.optim.lr_scheduler.OneCycleLR(\n",
+ " optimizer, \n",
+ " max_lr=max_lr,\n",
+ " total_steps=total_steps,\n",
+ " final_div_factor=1000,\n",
+ " last_epoch=-1, pct_start=2/num_epochs\n",
+ " )\n",
+ " \n",
+ "def save_ckpt(tag):\n",
+ " ckpt_path = outdir+f'/{tag}.pth'\n",
+ " if accelerator.is_main_process:\n",
+ " unwrapped_model = accelerator.unwrap_model(model)\n",
+ " torch.save({\n",
+ " 'epoch': epoch,\n",
+ " 'model_state_dict': unwrapped_model.state_dict(),\n",
+ " 'optimizer_state_dict': optimizer.state_dict(),\n",
+ " 'lr_scheduler': lr_scheduler.state_dict(),\n",
+ " 'train_losses': losses,\n",
+ " 'test_losses': test_losses,\n",
+ " 'lrs': lrs,\n",
+ " }, ckpt_path)\n",
+ " print(f\"\\n---saved {outdir}/{tag} ckpt!---\\n\")\n",
+ "\n",
+ "def load_ckpt(tag,load_lr=True,load_optimizer=True,load_epoch=True,strict=True,outdir=outdir,multisubj_loading=False): \n",
+ " print(f\"\\n---loading {outdir}/{tag}.pth ckpt---\\n\")\n",
+ " checkpoint = torch.load(outdir+f'/{tag}.pth', map_location='cpu') # load the requested tag, not hardcoded last.pth\n",
+ " state_dict = checkpoint['model_state_dict']\n",
+ " if multisubj_loading: # remove incompatible ridge layer that will otherwise error\n",
+ " state_dict.pop('ridge.linears.0.weight',None)\n",
+ " model.load_state_dict(state_dict, strict=strict)\n",
+ " if load_epoch:\n",
+ " globals()[\"epoch\"] = checkpoint['epoch']\n",
+ " print(\"Epoch\",epoch)\n",
+ " if load_optimizer:\n",
+ " optimizer.load_state_dict(checkpoint['optimizer_state_dict'])\n",
+ " if load_lr:\n",
+ " lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])\n",
+ " del checkpoint\n",
+ "\n",
+ "print(\"\\nDone with model preparations!\")\n",
+ "num_params = utils.count_params(model)"
+ ]
+ },
2183
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "id": "0d2a0961",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if local_rank==0 and wandb_log: # only use main process for wandb logging\n",
+ " import wandb\n",
+ " import time\n",
+ " \n",
+ " wandb_project = 'rtmindeye'\n",
+ " print(f\"wandb {wandb_project} run {model_name}\")\n",
+ "\n",
+ " # Need to configure wandb beforehand in terminal with \"wandb init\"!\n",
+ " wandb_config = {\n",
+ " \"model_name\": model_name,\n",
+ " \"global_batch_size\": global_batch_size,\n",
+ " \"batch_size\": batch_size,\n",
+ " \"num_epochs\": num_epochs,\n",
+ " \"num_sessions\": num_sessions,\n",
+ " \"num_params\": num_params,\n",
+ " \"clip_scale\": clip_scale,\n",
+ " \"prior_scale\": prior_scale,\n",
+ " \"blur_scale\": blur_scale,\n",
+ " \"use_image_aug\": use_image_aug,\n",
+ " \"max_lr\": max_lr,\n",
+ " \"mixup_pct\": mixup_pct,\n",
+ " \"num_samples_per_epoch\": num_samples_per_epoch,\n",
+ " \"ckpt_interval\": ckpt_interval,\n",
+ " \"ckpt_saving\": ckpt_saving,\n",
+ " \"seed\": seed, # SLURM array task ID\n",
+ " \"distributed\": distributed,\n",
+ " \"num_devices\": num_devices,\n",
+ " \"world_size\": world_size,\n",
+ " }\n",
+ " print(\"wandb_config:\\n\", wandb_config)\n",
+ " print(\"wandb_id:\", model_name)\n",
+ "\n",
+ " # Initialize wandb\n",
+ " wandb.init(\n",
+ " id=model_name,\n",
+ " project=wandb_project,\n",
+ " name=model_name,\n",
+ " config=wandb_config,\n",
+ " resume=\"allow\",\n",
+ " save_code=True,\n",
+ " )\n",
+ "\n",
+ " # Get SLURM job & array ID\n",
+ " slurm_job_id = utils.get_slurm_job()\n",
+ " slurm_array_id = seed # seed corresponds to SLURM_ARRAY_TASK_ID\n",
+ "\n",
+ " # Define SLURM log paths\n",
+ " log_dir = \"slurms\"\n",
+ " log_files = [\n",
+ " f\"{log_dir}/{slurm_job_id}_{slurm_array_id}.out\",\n",
+ " f\"{log_dir}/{slurm_job_id}_{slurm_array_id}.err\",\n",
+ " ]\n",
+ "\n",
+ " # Ensure logs exist before logging them\n",
+ " for log_file in log_files:\n",
+ " wait_time = 0\n",
+ " while not os.path.exists(log_file) and wait_time < 60: # Wait max 60s\n",
+ " time.sleep(5)\n",
+ " wait_time += 5\n",
+ "\n",
+ " # Log SLURM logs as artifacts\n",
+ " artifact = wandb.Artifact(f\"slurm_logs_{slurm_job_id}_{slurm_array_id}\", type=\"logs\")\n",
+ " for log_file in log_files:\n",
+ " if os.path.exists(log_file):\n",
+ " artifact.add_file(log_file)\n",
+ "\n",
+ " wandb.log_artifact(artifact)\n",
+ "else:\n",
+ " wandb_log = False"
+ ]
+ },
2261
+ {
+ "cell_type": "code",
+ "execution_count": 57,
+ "id": "ea0b850a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if local_rank==0 and wandb_log: # only use main process for wandb logging\n",
+ " import wandb\n",
+ " import time\n",
+ " \n",
+ " wandb_project = 'rtmindeye'\n",
+ " print(f\"wandb {wandb_project} run {model_name}\")\n",
+ "\n",
+ " # Need to configure wandb beforehand in terminal with \"wandb init\"!\n",
+ " wandb_config = {\n",
+ " \"model_name\": model_name,\n",
+ " \"global_batch_size\": global_batch_size,\n",
+ " \"batch_size\": batch_size,\n",
+ " \"num_epochs\": num_epochs,\n",
+ " \"num_sessions\": num_sessions,\n",
+ " \"num_params\": num_params,\n",
+ " \"clip_scale\": clip_scale,\n",
+ " \"prior_scale\": prior_scale,\n",
+ " \"blur_scale\": blur_scale,\n",
+ " \"use_image_aug\": use_image_aug,\n",
+ " \"max_lr\": max_lr,\n",
+ " \"mixup_pct\": mixup_pct,\n",
+ " \"num_samples_per_epoch\": num_samples_per_epoch,\n",
+ " \"ckpt_interval\": ckpt_interval,\n",
+ " \"ckpt_saving\": ckpt_saving,\n",
+ " \"seed\": seed, # SLURM array task ID\n",
+ " \"distributed\": distributed,\n",
+ " \"num_devices\": num_devices,\n",
+ " \"world_size\": world_size,\n",
+ " }\n",
+ " print(\"wandb_config:\\n\", wandb_config)\n",
+ " print(\"wandb_id:\", model_name)\n",
+ "\n",
+ " # Initialize wandb\n",
+ " wandb.init(\n",
+ " id=model_name,\n",
+ " project=wandb_project,\n",
+ " name=model_name,\n",
+ " config=wandb_config,\n",
+ " resume=\"allow\",\n",
+ " save_code=True,\n",
+ " )\n",
+ "\n",
+ " # Get SLURM job & array ID\n",
+ " try:\n",
+ " slurm_job_id = utils.get_slurm_job()\n",
+ " slurm_array_id = seed # seed corresponds to SLURM_ARRAY_TASK_ID\n",
+ "\n",
+ " # Define SLURM log paths\n",
+ " log_dir = \"slurms\"\n",
+ " log_files = [\n",
+ " f\"{log_dir}/{slurm_job_id}_{slurm_array_id}.out\",\n",
+ " f\"{log_dir}/{slurm_job_id}_{slurm_array_id}.err\",\n",
+ " ]\n",
+ "\n",
+ " # Ensure logs exist before logging them\n",
+ " for log_file in log_files:\n",
+ " wait_time = 0\n",
+ " while not os.path.exists(log_file) and wait_time < 60: # Wait max 60s\n",
+ " time.sleep(5)\n",
+ " wait_time += 5\n",
+ "\n",
+ " # Log SLURM logs as artifacts\n",
+ " artifact = wandb.Artifact(f\"slurm_logs_{slurm_job_id}_{slurm_array_id}\", type=\"logs\")\n",
+ " for log_file in log_files:\n",
+ " if os.path.exists(log_file):\n",
+ " artifact.add_file(log_file)\n",
+ "\n",
+ " wandb.log_artifact(artifact)\n",
+ " \n",
+ " except Exception: # not running under SLURM; skip log-file upload\n",
+ " print(\"Alert: wandb is not being logged locally.\")\n",
+ "else:\n",
+ " wandb_log = False"
+ ]
+ },
2343
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }
wandb/run-20250809_151110-vit-h-MST/files/config.yaml ADDED
@@ -0,0 +1,112 @@
+ wandb_version: 1
+
+ model_name:
+ desc: null
+ value: vit-h-MST
+ global_batch_size:
+ desc: null
+ value: 8
+ batch_size:
+ desc: null
+ value: 24
+ num_epochs:
+ desc: null
+ value: 30
+ num_sessions:
+ desc: null
+ value: 0
+ num_params:
+ desc: null
+ value: 358038808
+ clip_scale:
+ desc: null
+ value: 1.0
+ prior_scale:
+ desc: null
+ value: 30.0
+ blur_scale:
+ desc: null
+ value: 0.5
+ use_image_aug:
+ desc: null
+ value: false
+ max_lr:
+ desc: null
+ value: 0.0003
+ mixup_pct:
+ desc: null
+ value: 0.33
+ num_samples_per_epoch:
+ desc: null
+ value: 1138
+ ckpt_interval:
+ desc: null
+ value: 999
+ ckpt_saving:
+ desc: null
+ value: true
+ seed:
+ desc: null
+ value: 42
+ distributed:
+ desc: null
+ value: false
+ num_devices:
+ desc: null
+ value: 1
+ world_size:
+ desc: null
+ value: 1
+ _wandb:
+ desc: null
+ value:
+ python_version: 3.11.13
+ cli_version: 0.17.2
+ framework: huggingface
+ huggingface_version: 4.37.2
+ is_jupyter_run: true
+ is_kaggle_kernel: false
+ start_time: 1754752270
+ t:
+ 1:
+ - 1
+ - 5
+ - 9
+ - 11
+ - 41
+ - 49
+ - 53
+ - 55
+ - 63
+ - 71
+ - 79
+ - 83
+ - 103
+ 2:
+ - 1
+ - 5
+ - 9
+ - 11
+ - 41
+ - 49
+ - 53
+ - 55
+ - 63
+ - 71
+ - 79
+ - 83
+ - 103
+ 3:
+ - 2
+ - 13
+ - 14
+ - 16
+ - 23
+ 4: 3.11.13
+ 5: 0.17.2
+ 6: 4.37.2
+ 8:
+ - 1
+ - 5
+ 13: linux-x86_64
+ session_history: code/_session_history.ipynb
wandb/run-20250809_151110-vit-h-MST/files/diff.patch ADDED
The diff for this file is too large to render. See raw diff
 
wandb/run-20250809_151110-vit-h-MST/files/requirements.txt ADDED
@@ -0,0 +1,230 @@
+ CoCa-pytorch==0.1.0
+ Django==5.2.5
+ GitPython==3.1.45
+ Jinja2==3.1.6
+ MarkupSafe==3.0.2
+ PyYAML==6.0.2
+ Pygments==2.19.2
+ Send2Trash==1.8.3
+ accelerate==0.24.1
+ aiohappyeyeballs==2.6.1
+ aiohttp==3.12.15
+ aiosignal==1.4.0
+ annotated-types==0.7.0
+ antlr4-python3-runtime==4.9.3
+ ants==0.0.7
+ anyio==4.10.0
+ argon2-cffi-bindings==25.1.0
+ argon2-cffi==25.1.0
+ arrow==1.3.0
+ asgiref==3.9.1
+ asttokens==3.0.0
+ async-lru==2.0.5
+ attrs==25.3.0
+ autocommand==2.2.2
+ babel==2.17.0
+ backports.tarfile==1.2.0
+ beartype==0.21.0
+ beautifulsoup4==4.13.4
+ bleach==6.2.0
+ braceexpand==0.1.7
+ certifi==2025.8.3
+ cffi==1.17.1
+ charset-normalizer==3.4.3
+ click==8.2.1
+ clip-anytorch==2.6.0
+ clip==0.2.0
+ comm==0.2.3
+ contourpy==1.3.3
+ cycler==0.12.1
+ dalle2-pytorch==1.15.6
+ debugpy==1.8.16
+ decorator==5.2.1
+ defusedxml==0.7.1
+ diffusers==0.23.0
+ docker-pycreds==0.4.0
+ einops==0.7.0
+ einx==0.3.0
+ ema-pytorch==0.7.7
+ embedding-reader==1.7.0
+ executing==2.2.0
+ fastjsonschema==2.21.1
+ filelock==3.18.0
+ fonttools==4.59.0
+ fqdn==1.5.1
+ frozendict==2.4.6
+ frozenlist==1.7.0
+ fsspec==2025.7.0
+ ftfy==6.3.1
+ gevent==25.5.1
+ gitdb==4.0.12
+ greenlet==3.2.4
+ h11==0.16.0
+ h5py==3.10.0
+ hf-xet==1.1.7
+ httpcore==1.0.9
+ httpx==0.28.1
+ huggingface-hub==0.34.4
+ idna==3.10
+ imageio==2.37.0
+ importlib_metadata==8.0.0
+ importlib_metadata==8.7.0
+ inflect==7.3.1
+ ipykernel==6.30.1
+ ipython==9.4.0
+ ipython_pygments_lexers==1.1.1
+ ipywidgets==8.1.7
+ isoduration==20.11.0
+ jaraco.collections==5.1.0
+ jaraco.context==5.3.0
+ jaraco.functools==4.0.1
+ jaraco.text==3.12.1
+ jedi==0.19.2
+ joblib==1.5.1
+ json5==0.12.0
+ jsonpointer==3.0.0
+ jsonschema-specifications==2025.4.1
+ jsonschema==4.25.0
+ jupyter-console==6.6.3
+ jupyter-events==0.12.0
+ jupyter-lsp==2.2.6
+ jupyter==1.1.1
+ jupyter_client==8.6.3
+ jupyter_core==5.8.1
+ jupyter_server==2.16.0
+ jupyter_server_terminals==0.5.3
+ jupyterlab==4.4.5
+ jupyterlab_nvdashboard==0.13.0
+ jupyterlab_pygments==0.3.0
+ jupyterlab_server==2.27.3
+ jupyterlab_widgets==3.0.15
+ kiwisolver==1.4.8
+ kornia==0.8.1
+ kornia_rs==0.1.9
+ lark==1.2.2
+ lazy_loader==0.4
+ lightning-utilities==0.15.2
+ lxml==6.0.0
+ matplotlib-inline==0.1.7
+ matplotlib==3.8.2
+ mistune==3.1.3
+ more-itertools==10.3.0
+ mpmath==1.3.0
+ multidict==6.6.3
+ nbclient==0.10.2
+ nbconvert==7.16.6
+ nbformat==5.10.4
+ nest-asyncio==1.6.0
+ networkx==3.5
+ nibabel==5.2.1
+ nilearn==0.12.0
+ notebook==7.4.5
+ notebook_shim==0.2.4
+ numpy==1.26.4
+ nvidia-cublas-cu12==12.4.5.8
+ nvidia-cuda-cupti-cu12==12.4.127
+ nvidia-cuda-nvrtc-cu12==12.4.127
+ nvidia-cuda-runtime-cu12==12.4.127
+ nvidia-cudnn-cu12==9.1.0.70
+ nvidia-cufft-cu12==11.2.1.3
+ nvidia-curand-cu12==10.3.5.147
+ nvidia-cusolver-cu12==11.6.1.9
+ nvidia-cusparse-cu12==12.3.1.170
+ nvidia-ml-py==12.575.51
+ nvidia-nccl-cu12==2.21.5
+ nvidia-nvjitlink-cu12==12.4.127
+ nvidia-nvtx-cu12==12.4.127
+ omegaconf==2.3.0
+ open-clip-torch==2.24.0
+ overrides==7.7.0
+ packaging==24.2
+ packaging==25.0
+ pandas==2.2.0
+ pandocfilters==1.5.1
+ parso==0.8.4
+ pexpect==4.9.0
+ pillow==10.2.0
+ platformdirs==4.2.2
+ platformdirs==4.3.8
+ prometheus_client==0.22.1
+ prompt_toolkit==3.0.51
+ propcache==0.3.2
+ protobuf==5.29.5
+ psutil==7.0.0
+ ptyprocess==0.7.0
+ pure_eval==0.2.3
+ pyarrow==15.0.2
+ pycparser==2.22
+ pydantic==2.11.7
+ pydantic_core==2.33.2
+ pynvml==12.0.0
+ pyparsing==3.2.3
+ python-dateutil==2.9.0.post0
+ python-json-logger==3.3.0
+ pytorch-lightning==2.5.2
+ pytorch-warmup==0.2.0
+ pytz==2025.2
+ pyzmq==27.0.1
+ referencing==0.36.2
+ regex==2025.7.34
+ requests==2.32.4
+ resize-right==0.0.2
+ rfc3339-validator==0.1.4
+ rfc3986-validator==0.1.1
+ rfc3987-syntax==1.1.0
+ rotary-embedding-torch==0.8.9
+ rpds-py==0.27.0
+ safetensors==0.6.2
+ scikit-image==0.25.2
+ scikit-learn==1.4.1.post1
+ scipy==1.12.0
+ sentencepiece==0.2.0
+ sentry-sdk==2.34.1
+ setproctitle==1.3.6
+ setuptools==80.9.0
+ six==1.17.0
+ smmap==5.0.2
+ sniffio==1.3.1
+ soupsieve==2.7
+ sqlparse==0.5.3
+ stack-data==0.6.3
+ sympy==1.13.1
+ terminado==0.18.1
+ threadpoolctl==3.6.0
+ tifffile==2025.6.11
+ timm==1.0.19
+ tinycss2==1.4.0
+ tokenizers==0.15.2
+ tomli==2.0.1
+ torch-fidelity==0.3.0
+ torch==2.5.1
+ torchmetrics==1.8.1
+ torchvision==0.20.1
+ tornado==6.5.2
+ tqdm==4.66.2
+ traitlets==5.14.3
+ transformers==4.37.2
+ triton==3.1.0
+ typeguard==4.3.0
+ types-python-dateutil==2.9.0.20250809
+ typing-inspection==0.4.1
+ typing_extensions==4.12.2
+ typing_extensions==4.14.1
+ tzdata==2025.2
+ uri-template==1.3.0
+ urllib3==2.5.0
+ vector_quantize_pytorch==1.14.7
+ wandb==0.17.2
+ wcwidth==0.2.13
+ webcolors==24.11.1
+ webdataset==0.2.73
+ webencodings==0.5.1
+ websocket-client==1.8.0
+ wheel==0.45.1
+ widgetsnbextension==4.0.14
+ x-clip==0.14.4
+ yarl==1.20.1
+ zipp==3.19.2
+ zipp==3.23.0
+ zope.event==5.1.1
+ zope.interface==7.2
wandb/run-20250809_151110-vit-h-MST/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
+ {"_wandb": {"runtime": 0}}
wandb/run-20250809_151110-vit-h-MST/logs/debug.log ADDED
@@ -0,0 +1,61 @@
+ 2025-08-09 15:11:10,707 INFO MainThread:9111 [wandb_setup.py:_flush():76] Current SDK version is 0.17.2
+ 2025-08-09 15:11:10,707 INFO MainThread:9111 [wandb_setup.py:_flush():76] Configure stats pid to 9111
+ 2025-08-09 15:11:10,707 INFO MainThread:9111 [wandb_setup.py:_flush():76] Loading settings from /home/ubuntu/.config/wandb/settings
+ 2025-08-09 15:11:10,708 INFO MainThread:9111 [wandb_setup.py:_flush():76] Loading settings from /home/ubuntu/real_time_mindEye2/wandb/settings
+ 2025-08-09 15:11:10,708 INFO MainThread:9111 [wandb_setup.py:_flush():76] Loading settings from environment variables: {}
+ 2025-08-09 15:11:10,708 INFO MainThread:9111 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False}
+ 2025-08-09 15:11:10,708 INFO MainThread:9111 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program': '<python with no main file>'}
+ 2025-08-09 15:11:10,708 INFO MainThread:9111 [wandb_setup.py:_flush():76] Applying login settings: {}
+ 2025-08-09 15:11:10,708 INFO MainThread:9111 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
+ 2025-08-09 15:11:10,708 INFO MainThread:9111 [wandb_init.py:_log_setup():520] Logging user logs to /home/ubuntu/real_time_mindEye2/wandb/run-20250809_151110-vit-h-MST/logs/debug.log
+ 2025-08-09 15:11:10,708 INFO MainThread:9111 [wandb_init.py:_log_setup():521] Logging internal logs to /home/ubuntu/real_time_mindEye2/wandb/run-20250809_151110-vit-h-MST/logs/debug-internal.log
+ 2025-08-09 15:11:10,708 INFO MainThread:9111 [wandb_init.py:_jupyter_setup():466] configuring jupyter hooks <wandb.sdk.wandb_init._WandbInit object at 0x7fe32c9b2a50>
+ 2025-08-09 15:11:10,709 INFO MainThread:9111 [wandb_init.py:init():560] calling init triggers
+ 2025-08-09 15:11:10,709 INFO MainThread:9111 [wandb_init.py:init():567] wandb.init called with sweep_config: {}
config: {'model_name': 'vit-h-MST', 'global_batch_size': 8, 'batch_size': 24, 'num_epochs': 30, 'num_sessions': 0, 'num_params': 358038808, 'clip_scale': 1.0, 'prior_scale': 30.0, 'blur_scale': 0.5, 'use_image_aug': False, 'max_lr': 0.0003, 'mixup_pct': 0.33, 'num_samples_per_epoch': 1138, 'ckpt_interval': 999, 'ckpt_saving': True, 'seed': 42, 'distributed': False, 'num_devices': 1, 'world_size': 1}
+ 2025-08-09 15:11:10,709 INFO MainThread:9111 [wandb_init.py:init():610] starting backend
+ 2025-08-09 15:11:10,709 INFO MainThread:9111 [wandb_init.py:init():614] setting up manager
+ 2025-08-09 15:11:10,711 INFO MainThread:9111 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
+ 2025-08-09 15:11:10,715 INFO MainThread:9111 [wandb_init.py:init():622] backend started and connected
+ 2025-08-09 15:11:10,734 INFO MainThread:9111 [wandb_run.py:_label_probe_notebook():1334] probe notebook
+ 2025-08-09 15:11:10,736 INFO MainThread:9111 [wandb_run.py:_label_probe_notebook():1344] Unable to probe notebook: 'NoneType' object has no attribute 'get'
+ 2025-08-09 15:11:10,736 INFO MainThread:9111 [wandb_init.py:init():711] updated telemetry
+ 2025-08-09 15:11:10,744 INFO MainThread:9111 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout
+ 2025-08-09 15:11:11,170 INFO MainThread:9111 [wandb_run.py:_on_init():2402] communicating current version
+ 2025-08-09 15:11:11,323 INFO MainThread:9111 [wandb_run.py:_on_init():2411] got version response upgrade_message: "wandb version 0.21.1 is available! To upgrade, please run:\n $ pip install wandb --upgrade"
26
+
27
+ 2025-08-09 15:11:11,323 INFO MainThread:9111 [wandb_init.py:init():795] starting run threads in backend
28
+ 2025-08-09 15:11:11,823 INFO MainThread:9111 [wandb_run.py:_console_start():2380] atexit reg
29
+ 2025-08-09 15:11:11,823 INFO MainThread:9111 [wandb_run.py:_redirect():2235] redirect: wrap_raw
30
+ 2025-08-09 15:11:11,824 INFO MainThread:9111 [wandb_run.py:_redirect():2300] Wrapping output streams.
31
+ 2025-08-09 15:11:11,824 INFO MainThread:9111 [wandb_run.py:_redirect():2325] Redirects installed.
32
+ 2025-08-09 15:11:11,832 INFO MainThread:9111 [wandb_init.py:init():838] run started, returning control to user process
33
+ 2025-08-09 15:11:11,988 INFO MainThread:9111 [jupyter.py:_save_ipynb():383] looking for notebook: None
34
+ 2025-08-09 15:11:11,988 INFO MainThread:9111 [wandb_init.py:_pause_backend():431] pausing backend
35
+ 2025-08-09 15:11:47,935 INFO MainThread:9111 [wandb_init.py:_resume_backend():436] resuming backend
36
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_setup.py:_flush():76] Current SDK version is 0.17.2
37
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_setup.py:_flush():76] Configure stats pid to 9111
38
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_setup.py:_flush():76] Loading settings from /home/ubuntu/.config/wandb/settings
39
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_setup.py:_flush():76] Loading settings from /home/ubuntu/real_time_mindEye2/wandb/settings
40
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_setup.py:_flush():76] Loading settings from environment variables: {}
41
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False}
42
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program': '<python with no main file>'}
43
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_setup.py:_flush():76] Applying login settings: {}
44
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
45
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_init.py:_log_setup():520] Logging user logs to /home/ubuntu/real_time_mindEye2/wandb/run-20250809_151147-vit-h-MST/logs/debug.log
46
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_init.py:_log_setup():521] Logging internal logs to /home/ubuntu/real_time_mindEye2/wandb/run-20250809_151147-vit-h-MST/logs/debug-internal.log
47
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_init.py:init():560] calling init triggers
48
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_init.py:init():567] wandb.init called with sweep_config: {}
49
+ config: {'model_name': 'vit-h-MST', 'global_batch_size': 8, 'batch_size': 24, 'num_epochs': 30, 'num_sessions': 0, 'num_params': 358038808, 'clip_scale': 1.0, 'prior_scale': 30.0, 'blur_scale': 0.5, 'use_image_aug': False, 'max_lr': 0.0003, 'mixup_pct': 0.33, 'num_samples_per_epoch': 1138, 'ckpt_interval': 999, 'ckpt_saving': True, 'seed': 42, 'distributed': False, 'num_devices': 1, 'world_size': 1}
50
+ 2025-08-09 15:11:47,951 INFO MainThread:9111 [wandb_init.py:init():585] re-initializing run, found existing run on stack: vit-h-MST
51
+ 2025-08-09 15:11:47,952 INFO MainThread:9111 [wandb_run.py:_finish():2109] finishing run ckadirt/rtmindeye/vit-h-MST
52
+ 2025-08-09 15:11:48,010 INFO MainThread:9111 [jupyter.py:save_history():473] saving 57 cells to _session_history.ipynb
53
+ 2025-08-09 15:11:48,012 INFO MainThread:9111 [wandb_run.py:_config_callback():1382] config_cb ('_wandb', 'session_history') code/_session_history.ipynb None
54
+ 2025-08-09 15:11:48,022 INFO MainThread:9111 [jupyter.py:_save_ipynb():383] looking for notebook: None
55
+ 2025-08-09 15:11:48,022 INFO MainThread:9111 [wandb_init.py:_jupyter_teardown():448] cleaning up jupyter logic
56
+ 2025-08-09 15:11:48,022 INFO MainThread:9111 [wandb_run.py:_atexit_cleanup():2349] got exitcode: 0
57
+ 2025-08-09 15:11:48,022 INFO MainThread:9111 [wandb_run.py:_restore():2332] restore
58
+ 2025-08-09 15:11:48,022 INFO MainThread:9111 [wandb_run.py:_restore():2338] restore done
59
+ 2025-08-09 15:11:51,294 INFO MainThread:9111 [wandb_run.py:_footer_history_summary_info():3988] rendering history
60
+ 2025-08-09 15:11:51,294 INFO MainThread:9111 [wandb_run.py:_footer_history_summary_info():4020] rendering summary
61
+ 2025-08-09 15:11:51,303 INFO MainThread:9111 [wandb_run.py:_footer_sync_info():3947] logging synced files
wandb/run-20250809_151110-vit-h-MST/run-vit-h-MST.wandb ADDED
Binary file (15.4 kB)
wandb/run-20250809_151110-vit-h-MST/tmp/code/_session_history.ipynb ADDED
@@ -0,0 +1,2365 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "680cb740",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(\"importing modules\")\n",
+ "import os\n",
+ "import sys\n",
+ "import json\n",
+ "import argparse\n",
+ "import numpy as np\n",
+ "import time\n",
+ "import random\n",
+ "import string\n",
+ "import h5py\n",
+ "from tqdm import tqdm\n",
+ "import webdataset as wds\n",
+ "from PIL import Image\n",
+ "import pandas as pd\n",
+ "import nibabel as nib\n",
+ "import nilearn\n",
+ "\n",
+ "import matplotlib.pyplot as plt\n",
+ "import torch\n",
+ "import torch.nn as nn\n",
+ "from torchvision import transforms\n",
+ "\n",
+ "# tf32 data type is faster than standard float32\n",
+ "torch.backends.cuda.matmul.allow_tf32 = True\n",
+ "\n",
+ "import utils\n",
+ "from utils import load_preprocess_betas, resample, applyxfm, apply_thresh, resample_betas\n",
+ "\n",
+ "# imports utils from mindeye_preproc as \"preproc\"\n",
+ "import importlib.util\n",
+ "parent_utils_path = \"/home/ubuntu/mindeye_preproc/analysis/utils.py\" # \"/home/ri4541/mindeye_preproc/analysis/utils.py\"\n",
+ "spec = importlib.util.spec_from_file_location(\"utils\", parent_utils_path)\n",
+ "preproc = importlib.util.module_from_spec(spec)\n",
+ "parent_dir = os.path.dirname(parent_utils_path)\n",
+ "if parent_dir not in sys.path:\n",
+ "    sys.path.append(parent_dir)\n",
+ "spec.loader.exec_module(preproc)\n",
+ "\n",
+ "if utils.is_interactive():\n",
+ "    from IPython.display import clear_output # function to clear print outputs in cell\n",
+ "    %load_ext autoreload\n",
+ "    # this allows you to change functions in models.py or utils.py and have this notebook automatically update with your revisions\n",
+ "    %autoreload 2\n",
+ "\n",
+ "seed = utils.get_slurm_seed()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "6213ef9f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if utils.is_interactive():\n",
+ "    sub = \"sub-005\"\n",
+ "    session = \"all\"\n",
+ "    task = 'C' # 'study' or 'A'; used to search for functional run in bids format\n",
+ "    func_task_name = 'C'\n",
+ "else:\n",
+ "    sub = os.environ[\"SUB\"]\n",
+ "    session = os.environ[\"SESSION\"]\n",
+ "    task = os.environ[\"TASK\"]\n",
+ "    func_task_name = 'C'\n",
+ "\n",
+ "if session == \"all\":\n",
+ "    ses_list = [\"ses-01\", \"ses-02\"] # list of actual session IDs\n",
+ "    design_ses_list = [\"ses-01\", \"ses-02\"] # list of session IDs to search for design matrix\n",
+ "else:\n",
+ "    ses_list = [session]\n",
+ "    design_ses_list = [session]\n",
+ "\n",
+ "task_name = f\"_task-{task}\" if task != 'study' else ''\n",
+ "resample_voxel_size = False\n",
+ "resample_post_glmsingle = False # do you want to do voxel resampling here? if resample_voxel_size = True and resample_post_glmsingle = False, assume the resampling has been done prior to GLMsingle, so just use resampled directory but otherwise proceed as normal\n",
+ "load_from_resampled_file = False # do you want to load resampled data from file? if True, assume resampling was done in this notebook before, and that we're not using the GLMsingle resampled data\n",
+ "\n",
+ "train_test_split = 'MST' # 'MST', 'orig', 'unique'\n",
+ "remove_close_to_MST = False\n",
+ "remove_random_n = False\n",
+ "\n",
+ "if remove_close_to_MST or remove_random_n:\n",
+ "    assert remove_close_to_MST != remove_random_n # don't remove both sets of images\n",
+ "\n",
+ "n_to_remove = 0\n",
+ "if remove_random_n:\n",
+ "    assert train_test_split == 'MST' # MST images are excluded from the n images removed, so only makes sense if they're not in the training set\n",
+ "    n_to_remove = 150\n",
+ "\n",
+ "if resample_voxel_size:\n",
+ "    # voxel size was unchanged in glmsingle, want to perform resampling here\n",
+ "    resampled_vox_size = 2.5\n",
+ "    resample_method = \"sinc\" # {trilinear,nearestneighbour,sinc,spline}, credit: https://johnmuschelli.com/fslr/reference/flirt.help.html\n",
+ "\n",
+ "    # file name helper variables\n",
+ "    vox_dim_str = str(resampled_vox_size).replace('.', '_') # in case the voxel size has a decimal, replace with an underscore\n",
+ "    resampled_suffix = f\"resampled_{vox_dim_str}mm_{resample_method}\"\n",
+ "    mask_resampled_suffix = resampled_suffix\n",
+ "    if resample_post_glmsingle:\n",
+ "        resampled_suffix += '_postglmsingle'\n",
+ "    else:\n",
+ "        resampled_suffix += '_preglmsingle'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "7511be2d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "session_label = preproc.get_session_label(ses_list)\n",
+ "print('session label:', session_label)\n",
+ "n_runs, _ = preproc.get_runs_per_session(sub, session, ses_list)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "d57d05fa",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if utils.is_interactive():\n",
+ "    glmsingle_path = f\"/home/ubuntu/glmsingle/glmsingle_{sub}_{session_label}_task-{task}\"\n",
+ "else:\n",
+ "    glmsingle_path = os.environ[\"glmsingle_path\"]\n",
+ "\n",
+ "designdir = \"/home/ubuntu/real_time_mindEye2\" #\"/home/ri4541/real_time_mindEye2\"\n",
+ "print(glmsingle_path)\n",
+ "\n",
+ "if resample_voxel_size:\n",
+ "    # option 1: we are using original (non-resampled) GLMsingle outputs and doing the resampling here\n",
+ "    # option 2: doing resampling pre-GLMsingle and using those outputs; no resampling involved here\n",
+ "    if resample_post_glmsingle:\n",
+ "        # option 1\n",
+ "        orig_glmsingle_path = glmsingle_path\n",
+ "        glmsingle_path += f\"_{resampled_suffix}\"\n",
+ "        print(\"resampled glmsingle path:\", glmsingle_path)\n",
+ "        if load_from_resampled_file:\n",
+ "            # resampling is already done; load from file\n",
+ "            assert os.path.exists(glmsingle_path) # the new directory must have been created if we reached here\n",
+ "        else:\n",
+ "            # don't load from file; do resampling here\n",
+ "            os.makedirs(glmsingle_path,exist_ok=True)\n",
+ "    else:\n",
+ "        # option 2\n",
+ "        glmsingle_path += f\"_{resampled_suffix}\"\n",
+ "        print(\"glmsingle path:\", glmsingle_path)\n",
+ "\n",
+ "assert os.path.exists(glmsingle_path)\n",
+ "print(\"glmsingle path exists!\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "074a6b10",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "data, starts, images, is_new_run, image_names, unique_images, len_unique_images = preproc.load_design_files(\n",
+ "    sub=sub,\n",
+ "    session=session,\n",
+ "    func_task_name=task,\n",
+ "    designdir=designdir,\n",
+ "    design_ses_list=design_ses_list\n",
+ ")\n",
+ "\n",
+ "if sub == 'sub-001':\n",
+ "    if session == 'ses-01':\n",
+ "        assert image_names[0] == 'images/image_686_seed_1.png'\n",
+ "    elif session in ('ses-02', 'all'):\n",
+ "        assert image_names[0] == 'all_stimuli/special515/special_40840.jpg'\n",
+ "    elif session == 'ses-03':\n",
+ "        assert image_names[0] == 'all_stimuli/special515/special_69839.jpg'\n",
+ "    elif session == 'ses-04':\n",
+ "        assert image_names[0] == 'all_stimuli/rtmindeye_stimuli/image_686_seed_1.png'\n",
+ "elif sub == 'sub-003':\n",
+ "    assert image_names[0] == 'all_stimuli/rtmindeye_stimuli/image_686_seed_1.png'\n",
+ "\n",
+ "unique_images = np.unique(image_names.astype(str))\n",
+ "unique_images = unique_images[(unique_images!=\"nan\")]\n",
+ "len_unique_images = len(unique_images)\n",
+ "print(\"n_runs\",n_runs)\n",
+ "\n",
+ "if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'):\n",
+ "    assert len(unique_images) == 851\n",
+ "\n",
+ "print(image_names[:4])\n",
+ "print(starts[:4])\n",
+ "print(is_new_run[:4])\n",
+ "\n",
+ "if remove_random_n:\n",
+ "    # want to remove 150 imgs\n",
+ "    # 100 special515 imgs are repeated 3x (300 total)\n",
+ "    # all other train imgs are only shown once (558 total)\n",
+ "    # of the 150, want to sample proportionally since we're cutting all repeats for special515\n",
+ "    # so take out 51 (17 unique) from special515 and 99 from rest = removing 150 total\n",
+ "    np.random.seed(seed)\n",
+ "    options_to_remove = [x for x in set(image_names) if str(x) != 'nan' and x != 'blank.jpg' and 'MST_pairs' not in x and 'special515' not in x and list(image_names).count(x)==1] # all the imgs that only appear once (this is O(N^2) b/c of count() within list comprehension but image_names is a relatively small list)\n",
+ "    options_to_remove_special515 = [x for x in set(image_names) if str(x) != 'nan' and x != 'blank.jpg' and 'MST_pairs' not in x and 'special515' in x and list(image_names).count(x)>1] # all the special515 images that are repeated (count()>1 necessary because there are special515 that are not repeated)\n",
+ "    imgs_to_remove = np.random.choice(options_to_remove, size=99, replace=False)\n",
+ "    imgs_to_remove = np.append(imgs_to_remove, np.random.choice(options_to_remove_special515, size=17, replace=False))\n",
+ "\n",
+ "image_idx = np.array([]) # contains the unique index of each presented image\n",
+ "vox_image_names = np.array([]) # contains the names of the images corresponding to image_idx\n",
+ "all_MST_images = dict()\n",
+ "for i, im in enumerate(image_names):\n",
+ "    # skip if blank, nan\n",
+ "    if im == \"blank.jpg\":\n",
+ "        i+=1\n",
+ "        continue\n",
+ "    if str(im) == \"nan\":\n",
+ "        i+=1\n",
+ "        continue\n",
+ "    vox_image_names = np.append(vox_image_names, im)\n",
+ "    if remove_close_to_MST: # optionally skip close_to_MST images\n",
+ "        if \"closest_pairs\" in im:\n",
+ "            i+=1\n",
+ "            continue\n",
+ "    elif remove_random_n:\n",
+ "        if im in imgs_to_remove:\n",
+ "            i+=1\n",
+ "            continue\n",
+ "\n",
+ "    image_idx_ = np.where(im==unique_images)[0].item()\n",
+ "    image_idx = np.append(image_idx, image_idx_)\n",
+ "\n",
+ "    if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'): # MST images are ones that matched these image titles\n",
+ "        import re\n",
+ "        if ('w_' in im or 'paired_image_' in im or re.match(r'all_stimuli/rtmindeye_stimuli/\\d{1,2}_\\d{1,3}\\.png$', im) or re.match(r'images/\\d{1,2}_\\d{1,3}\\.png$', im)):\n",
+ "            # the regexp here looks for **_***.png, allows 1-2 chars before underscore and 1-3 chars after it\n",
+ "            # print(im)\n",
+ "            all_MST_images[i] = im\n",
+ "            i+=1\n",
+ "    elif 'MST' in im:\n",
+ "        all_MST_images[i] = im\n",
+ "        i+=1\n",
+ "\n",
+ "image_idx = torch.Tensor(image_idx).long()\n",
+ "# for im in new_image_names[MST_images]:\n",
+ "#     assert 'MST_pairs' in im\n",
+ "# assert len(all_MST_images) == 300\n",
+ "\n",
+ "unique_MST_images = np.unique(list(all_MST_images.values()))\n",
+ "\n",
+ "MST_ID = np.array([], dtype=int)\n",
+ "if remove_close_to_MST:\n",
+ "    close_to_MST_idx = np.array([], dtype=int)\n",
+ "if remove_random_n:\n",
+ "    random_n_idx = np.array([], dtype=int)\n",
+ "\n",
+ "vox_idx = np.array([], dtype=int)\n",
+ "j=0 # this is a counter keeping track of the remove_random_n used later to index vox based on the removed images; unused otherwise\n",
+ "for i, im in enumerate(image_names): # need unique_MST_images to be defined, so repeating the same loop structure\n",
+ "    # skip if blank, nan\n",
+ "    if im == \"blank.jpg\":\n",
+ "        i+=1\n",
+ "        continue\n",
+ "    if str(im) == \"nan\":\n",
+ "        i+=1\n",
+ "        continue\n",
+ "    if remove_close_to_MST: # optionally skip close_to_MST images\n",
+ "        if \"closest_pairs\" in im:\n",
+ "            close_to_MST_idx = np.append(close_to_MST_idx, i)\n",
+ "            i+=1\n",
+ "            continue\n",
+ "    if remove_random_n:\n",
+ "        if im in imgs_to_remove:\n",
+ "            vox_idx = np.append(vox_idx, j)\n",
+ "            i+=1\n",
+ "            j+=1\n",
+ "            continue\n",
+ "        j+=1\n",
+ "    curr = np.where(im == unique_MST_images)\n",
+ "    # print(curr)\n",
+ "    if curr[0].size == 0:\n",
+ "        MST_ID = np.append(MST_ID, np.array(len(unique_MST_images))) # add a value that should be out of range based on the for loop, will index it out later\n",
+ "    else:\n",
+ "        MST_ID = np.append(MST_ID, curr)\n",
+ "\n",
+ "assert len(MST_ID) == len(image_idx)\n",
+ "# assert len(np.argwhere(pd.isna(data['current_image']))) + len(np.argwhere(data['current_image'] == 'blank.jpg')) + len(image_idx) == len(data)\n",
+ "# MST_ID = torch.tensor(MST_ID[MST_ID != len(unique_MST_images)], dtype=torch.uint8) # torch.tensor (lowercase) allows dtype kwarg, Tensor (uppercase) is an alias for torch.FloatTensor\n",
+ "print(MST_ID.shape)\n",
+ "if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'):\n",
+ "    assert len(all_MST_images) == 100"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "4af150a8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import imageio.v2 as imageio\n",
+ "resize_transform = transforms.Resize((224, 224))\n",
+ "MST_images = []\n",
+ "images = None\n",
+ "for im_name in tqdm(image_idx):\n",
+ "    if sub == 'sub-001' and session == 'ses-01':\n",
+ "        image_file = f\"all_stimuli/rtmindeye_stimuli/{unique_images[im_name]}\"\n",
+ "    else:\n",
+ "        image_file = f\"{unique_images[im_name]}\"\n",
+ "    im = imageio.imread(image_file)\n",
+ "    im = torch.Tensor(im / 255).permute(2,0,1)\n",
+ "    im = resize_transform(im.unsqueeze(0))\n",
+ "    if images is None:\n",
+ "        images = im\n",
+ "    else:\n",
+ "        images = torch.vstack((images, im))\n",
+ "    if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'):\n",
+ "        if ('w_' in image_file or 'paired_image_' in image_file or re.match(r'all_stimuli/rtmindeye_stimuli/\\d{1,2}_\\d{1,3}\\.png$', image_file) or re.match(r'all_stimuli/rtmindeye_stimuli/images/\\d{1,2}_\\d{1,3}\\.png$', image_file)):\n",
+ "            MST_images.append(True)\n",
+ "        else:\n",
+ "            MST_images.append(False)\n",
+ "    else:\n",
+ "        if (\"MST_pairs\" in image_file): # (\"_seed_\" not in unique_images[im_name]) and (unique_images[im_name] != \"blank.jpg\")\n",
+ "            MST_images.append(True)\n",
+ "        else:\n",
+ "            MST_images.append(False)\n",
+ "\n",
+ "print(\"images\", images.shape)\n",
+ "MST_images = np.array(MST_images)\n",
+ "print(\"MST_images\", len(MST_images))\n",
+ "if (sub == 'sub-001' and session == 'ses-04') or (sub == 'sub-003' and session == 'ses-01'):\n",
+ "    assert len(MST_images[MST_images==True]) == 100\n",
+ "print(\"MST_images==True\", len(MST_images[MST_images==True]))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "4937263a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# want IDs of pairmates based on MST_images\n",
+ "# create \"MST_pairmates\" which is a 25x2 array with indices of the 25 pairs based on MST_images == True\n",
+ "\n",
+ "assert unique_MST_images.shape[0] % 2 == 0 # make sure it's divisible by 2\n",
+ "MST_pairmate_names = unique_MST_images.reshape(int(unique_MST_images.shape[0]/2),2)\n",
+ "# print(MST_pairmate_names)\n",
+ "\n",
+ "MST_pairmate_indices = np.empty(shape=MST_pairmate_names.shape, dtype=int)\n",
+ "for p, pair in enumerate(MST_pairmate_names):\n",
+ "    for i, im in enumerate(pair):\n",
+ "        MST_pairmate_indices[p][i] = np.where(np.isin(list(all_MST_images.values()), im))[0][0] # just take the first repeated instance of an image\n",
+ "\n",
+ "print(MST_pairmate_indices.shape, MST_pairmate_indices)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "108a3210",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if (sub == 'sub-001' and session in ('ses-02', 'ses-03', 'all')):\n",
+ "    # MST_pairs contains the indices of repeats based on all_MST_images\n",
+ "    # all_MST_images contains the indices of images from image_names\n",
+ "    MST_pairs = utils.find_paired_indices(torch.tensor(MST_ID))\n",
+ "    MST_pairs = np.array(sorted(MST_pairs[:-1], key=lambda x: x[0])) # we added a fake value as a placeholder so index out the last group of pairs\n",
+ "\n",
+ "    # assert images[MST_pairs]\n",
+ "\n",
+ "    fig, ax = plt.subplots(1, 3, figsize=(10,4))\n",
+ "    fig.suptitle('Sample MST pairs')\n",
+ "\n",
+ "    ax[0].imshow(images[MST_pairs[-1][0]].permute(1,2,0).numpy())\n",
+ "    ax[0].set_title(f\"Trial 0\")\n",
+ "\n",
+ "    ax[1].imshow(images[MST_pairs[-1][1]].permute(1,2,0).numpy())\n",
+ "    ax[1].set_title(f\"Trial 1\")\n",
+ "\n",
+ "    ax[2].imshow(images[MST_pairs[-1][2]].permute(1,2,0).numpy())\n",
+ "    ax[2].set_title(f\"Trial 2\")\n",
+ "\n",
+ "    plt.setp(ax, xticks=[], yticks=[])\n",
+ "    plt.tight_layout()\n",
+ "    plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "d502b890",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# pairs has the indices of all repeated images\n",
+ "pairs = utils.find_paired_indices(image_idx)\n",
+ "pairs = sorted(pairs, key=lambda x: x[0])\n",
+ "\n",
+ "fig, axes = plt.subplots(1, 3, figsize=(6, 2)) # 1 row, 3 columns\n",
+ "for i, ax in enumerate(axes):\n",
+ "    ax.imshow(images[i].permute(1, 2, 0).numpy())\n",
+ "    ax.set_title(f\"Trial {i}\")\n",
+ "    ax.axis(\"off\") # Hide axes for better visualization\n",
+ "\n",
+ "plt.tight_layout()\n",
+ "# output_path = os.path.join(output_dir, \"trials_plot.png\")\n",
+ "# plt.savefig(output_path, dpi=300) # Save figure\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "cfc6a1f4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "p=0\n",
+ "\n",
+ "# plot 2 repeats (anything in pairs should have 2 repeats, even if there's more)\n",
+ "fig, ax = plt.subplots(1, 2, figsize=(10,8))\n",
+ "\n",
+ "ax[0].imshow(images[pairs[p][0]].permute(1,2,0).numpy())\n",
+ "ax[0].set_title(f\"Repeat 1\")\n",
+ "\n",
+ "ax[1].imshow(images[pairs[p][1]].permute(1,2,0).numpy())\n",
+ "ax[1].set_title(f\"Repeat 2\")\n",
+ "\n",
+ "plt.setp(ax, xticks=[], yticks=[])\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "c5fe984b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_image_pairs(sub, session, func_task_name, designdir):\n",
+ "    \"\"\"Loads design files and processes image pairs for a given session.\"\"\"\n",
+ "    _, _, _, _, image_names, unique_images, _ = preproc.load_design_files(\n",
+ "        sub=sub,\n",
+ "        session=session,\n",
+ "        func_task_name=func_task_name,\n",
+ "        designdir=designdir,\n",
+ "        design_ses_list=[session] # Ensure it's a list\n",
+ "    )\n",
+ "    return utils.process_images(image_names, unique_images)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "f759b5d3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from collections import defaultdict\n",
+ "\n",
+ "all_dicts = []\n",
+ "for s_idx, s in enumerate(ses_list):\n",
+ "    im, vo, _ = get_image_pairs(sub, s, func_task_name, designdir)\n",
+ "    assert len(im) == len(vo)\n",
+ "    all_dicts.append({k:v for k,v in enumerate(vo)})\n",
+ "\n",
+ "# for the train set (ses-01-02 non-MST)\n",
+ "image_to_indices = defaultdict(lambda: [[] for _ in range(len(ses_list))])\n",
+ "for ses_idx, idx_to_name in enumerate(all_dicts):\n",
+ "    for idx, name in idx_to_name.items():\n",
+ "        image_to_indices[name][ses_idx].append(idx)\n",
+ "\n",
+ "image_to_indices = dict(image_to_indices)\n",
+ "\n",
+ "# for the test set (ses-03)\n",
+ "# test_image_to_indices = defaultdict(lambda: [[] for _ in range(len([ses_list[-1]]))])\n",
+ "# for ses_idx, idx_to_name in enumerate([all_dicts[-1]]):\n",
+ "#     for idx, name in idx_to_name.items():\n",
+ "#         test_image_to_indices[name][ses_idx].append(idx)\n",
+ "\n",
+ "# test_image_to_indices = dict(test_image_to_indices)\n",
+ "\n",
+ "if sub == 'sub-005' and len(ses_list) > 1:\n",
+ "    session_length = 693\n",
+ "    for image, session_indices_list in image_to_indices.items():\n",
+ "        new_indices_list = []\n",
+ "        for idx, indices in enumerate(session_indices_list):\n",
+ "            offset = idx * session_length\n",
+ "            new_indices = [i + offset for i in indices]\n",
+ "            new_indices_list.append(new_indices)\n",
+ "        image_to_indices[image] = new_indices_list\n",
+ "\n",
+ "    import itertools\n",
+ "    assert max(itertools.chain.from_iterable(list(image_to_indices.values())))[0] == (len(ses_list)*session_length) - 1"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "2be1079a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if resample_voxel_size:\n",
+ "    from nilearn.masking import apply_mask, unmask\n",
+ "    ref_name = f'{glmsingle_path}/boldref_resampled.nii.gz'\n",
+ "    omat_name = f'{glmsingle_path}/boldref_omat'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "28bf7f64",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nilearn.plotting import plot_roi\n",
+ "\n",
+ "print('loading brain mask')\n",
+ "avg_mask = nib.load(f'{orig_glmsingle_path}/glmsingle_sub-005_task-C/sub-005_final_brain.nii.gz')\n",
530
+ "final_mask = nib.load(f'{orig_glmsingle_path}/glmsingle_sub-005_task-C/sub-005_final_mask.nii.gz')\n",
531
+ "\n",
532
+ "# mask info\n",
533
+ "dimsize=avg_mask.header.get_zooms()\n",
534
+ "affine_mat = avg_mask.affine\n",
535
+ "brain=avg_mask.get_fdata()\n",
536
+ "xyz=brain.shape #xyz dimensionality of brain mask and epi data\n",
537
+ "\n",
538
+ "print('Mask dimensions:', dimsize)\n",
539
+ "print('')\n",
540
+ "print('Affine:')\n",
541
+ "print(affine_mat)\n",
542
+ "print('')\n",
543
+ "print(f'There are {int(np.sum(brain))} voxels in the included brain mask\\n')\n",
544
+ "\n",
545
+ "plot_roi(final_mask, bg_img=avg_mask)\n",
546
+ "plt.show()"
547
+ ]
548
+ },
549
+ {
550
+ "cell_type": "code",
551
+ "execution_count": 15,
552
+ "id": "ca124946",
553
+ "metadata": {},
554
+ "outputs": [],
555
+ "source": [
556
+ "glmsingle_path"
557
+ ]
558
+ },
559
+ {
560
+ "cell_type": "code",
561
+ "execution_count": 16,
562
+ "id": "844c2b1f",
563
+ "metadata": {},
564
+ "outputs": [
565
+ {
566
+ "name": "stdout",
567
+ "output_type": "stream",
568
+ "text": [
569
+ "'/home/ubuntu/glmsingle/glmsingle_sub-005_ses-01-02_task-C'"
570
+ ]
571
+ }
572
+ ],
573
+ "source": [
574
+ "glmsingle_path"
575
+ ]
576
+ },
577
+ {
578
+ "cell_type": "code",
579
+ "execution_count": 17,
580
+ "id": "fee56ca8",
581
+ "metadata": {},
582
+ "outputs": [],
583
+ "source": [
584
+ "base_glm_single_path = os.environ[\"glmsingle_path\"]\n",
585
+ "base_glm_single_path"
586
+ ]
587
+ },
588
+ {
589
+ "cell_type": "code",
590
+ "execution_count": 18,
591
+ "id": "610317a3",
592
+ "metadata": {},
593
+ "outputs": [],
594
+ "source": [
595
+ "# take all path components except the last dir\n",
596
+ "base_glm_single_path = glmsingle_path.split('/')[:-1]\n",
597
+ "base_glm_single_path = '/'.join(base_glm_single_path)"
598
+ ]
599
+ },
600
+ {
601
+ "cell_type": "code",
602
+ "execution_count": 19,
603
+ "id": "82cae662",
604
+ "metadata": {},
605
+ "outputs": [],
606
+ "source": [
607
+ "from nilearn.plotting import plot_roi\n",
608
+ "\n",
609
+ "print('loading brain mask')\n",
610
+ "avg_mask = nib.load(f'{base_glm_single_path}/glmsingle_sub-005_task-C/sub-005_final_brain.nii.gz')\n",
611
+ "final_mask = nib.load(f'{base_glm_single_path}/glmsingle_sub-005_task-C/sub-005_final_mask.nii.gz')\n",
612
+ "\n",
613
+ "# mask info\n",
614
+ "dimsize=avg_mask.header.get_zooms()\n",
615
+ "affine_mat = avg_mask.affine\n",
616
+ "brain=avg_mask.get_fdata()\n",
617
+ "xyz=brain.shape #xyz dimensionality of brain mask and epi data\n",
618
+ "\n",
619
+ "print('Mask dimensions:', dimsize)\n",
620
+ "print('')\n",
621
+ "print('Affine:')\n",
622
+ "print(affine_mat)\n",
623
+ "print('')\n",
624
+ "print(f'There are {int(np.sum(brain))} voxels in the included brain mask\\n')\n",
625
+ "\n",
626
+ "plot_roi(final_mask, bg_img=avg_mask)\n",
627
+ "plt.show()"
628
+ ]
629
+ },
630
+ {
631
+ "cell_type": "code",
632
+ "execution_count": 20,
633
+ "id": "e6d4d01a",
634
+ "metadata": {},
635
+ "outputs": [],
636
+ "source": [
637
+ "# # create union of ses-01 and ses-02 reliability masks and plot against avg_mask \n",
638
+ "# rel_masks = []\n",
639
+ "# rel_masks.append(np.load('/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/glmsingle_sub-005_task-C/rel_mask_from_ses-01_to_ses-03.npy'))\n",
640
+ "# rel_masks.append(np.load('/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/glmsingle_sub-005_task-C/rel_mask_from_ses-02_to_ses-03.npy'))\n",
641
+ "# rel_masks = np.array(rel_masks)\n",
642
+ "# for r in rel_masks:\n",
643
+ "# assert r.shape[0] == int(final_mask.get_fdata().sum())\n",
644
+ "# assert r.dtype == bool\n",
645
+ " \n",
646
+ "# assert len(rel_masks) == 2 # should be the case if there's 2 training sessions\n",
647
+ "# union_mask = np.logical_or(rel_masks[0], rel_masks[1])\n",
648
+ "# assert union_mask.sum() > rel_masks[0].sum()\n",
649
+ "# assert union_mask.sum() > rel_masks[1].sum()\n",
650
+ "# print(f'there are {union_mask.sum()} reliable voxels based on the union mask out of {int(final_mask.get_fdata().sum())} voxels in the nsdgeneral roi')\n",
651
+ "# print(f'{(union_mask.sum() / int(final_mask.get_fdata().sum())):.2%} of the voxels in the roi were selected')\n",
652
+ "# path = f'/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/glmsingle_sub-005_task-C/union_mask_from_{session_label}.npy'\n",
653
+ "path = f'{base_glm_single_path}/glmsingle_sub-005_task-C/union_mask_from_ses-01-02.npy'\n",
654
+ "# np.save(f'/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/glmsingle_sub-005_task-C/union_mask_from_{session_label}.npy', union_mask)\n",
655
+ "# print(f'saved union mask to {path}!')\n",
656
+ "union_mask = np.load(path)"
657
+ ]
658
+ },
659
+ {
660
+ "cell_type": "code",
661
+ "execution_count": 21,
662
+ "id": "8f372fed",
663
+ "metadata": {},
664
+ "outputs": [],
665
+ "source": [
666
+ "ses_mask = []\n",
667
+ "\n",
668
+ "for s in ses_list:\n",
669
+ " ses_mask_path = f'{base_glm_single_path}/glmsingle_sub-005_{s}_task-C/sub-005_{s}_task-C_brain.nii.gz'\n",
670
+ " ses_mask.append(nib.load(ses_mask_path))\n",
671
+ " \n",
672
+ " assert np.all(ses_mask[-1].affine == final_mask.affine)\n",
673
+ " assert np.all(ses_mask[-1].shape == final_mask.shape)"
674
+ ]
675
+ },
676
+ {
677
+ "cell_type": "code",
678
+ "execution_count": 22,
679
+ "id": "36d2591a",
680
+ "metadata": {},
681
+ "outputs": [],
682
+ "source": [
683
+ "ses_vox = []\n",
684
+ "vox = None\n",
685
+ "needs_postprocessing = False\n",
686
+ "params = (session, ses_list, remove_close_to_MST, image_names, remove_random_n, vox_idx)\n",
687
+ "\n",
688
+ "if resample_post_glmsingle == True:\n",
689
+ " glm_save_path_resampled = f\"{glmsingle_path}/vox_resampled.nii.gz\"\n",
690
+ " if load_from_resampled_file == True:\n",
691
+ " # resampling was done in this notebook so we can load from file\n",
692
+ " vox = nib.load(glm_save_path_resampled)\n",
693
+ " else:\n",
694
+ " # do resampling here\n",
695
+ " assert os.path.exists(ref_name) and os.path.exists(omat_name), \"need to generate the boldref and omat separately since we don't have access to the functional data here; either do so using flirt on the command line or copy over the glmsingle resampled outputs\"\n",
696
+ " vox = load_preprocess_betas(orig_glmsingle_path, *params)\n",
697
+ " vox = resample_betas(orig_glmsingle_path, sub, session, task_name, vox, glmsingle_path, glm_save_path_resampled, ref_name, omat_name)\n",
698
+ " needs_postprocessing = True\n",
699
+ "\n",
700
+ "if vox is None: \n",
701
+ " for i, s in enumerate(ses_list):\n",
702
+ " # either resampling was done in glmsingle or we aren't resampling \n",
703
+ " ses_vox_path = f'{glmsingle_path}/glmsingle_sub-005_{s}_task-C'\n",
704
+ " assert os.path.exists(ses_vox_path)\n",
705
+ " ses_vox.append(load_preprocess_betas(ses_vox_path, *params))\n",
706
+ " v = nilearn.masking.unmask(ses_vox[i], ses_mask[i])\n",
707
+ " ses_vox[i] = nilearn.masking.apply_mask(v, final_mask)\n",
708
+ " vox = np.concatenate(ses_vox)\n",
709
+ " print(\"applied final brain mask\")\n",
710
+ " print(vox.shape)\n",
711
+ " vox = vox[:, union_mask]\n",
712
+ " print(\"applied union roi mask\")\n",
713
+ " print(vox.shape)\n",
714
+ " \n",
715
+ " \n",
716
+ "if needs_postprocessing == True:\n",
717
+ " vox = apply_mask(vox, avg_mask)\n",
718
+ " vox = vox.reshape(-1, vox.shape[-1]) # flatten the 3D image into np array with shape (voxels, images)\n",
719
+ " print(vox.shape)\n",
720
+ "\n",
721
+ "assert len(vox) == len(image_idx)"
722
+ ]
723
+ },
724
+ {
725
+ "cell_type": "code",
726
+ "execution_count": 23,
727
+ "id": "5aca9065",
728
+ "metadata": {},
729
+ "outputs": [],
730
+ "source": [
731
+ "ses_vox = []\n",
732
+ "vox = None\n",
733
+ "needs_postprocessing = False\n",
734
+ "params = (session, ses_list, remove_close_to_MST, image_names, remove_random_n, vox_idx)\n",
735
+ "\n",
736
+ "if resample_post_glmsingle == True:\n",
737
+ " glm_save_path_resampled = f\"{glmsingle_path}/vox_resampled.nii.gz\"\n",
738
+ " if load_from_resampled_file == True:\n",
739
+ " # resampling was done in this notebook so we can load from file\n",
740
+ " vox = nib.load(glm_save_path_resampled)\n",
741
+ " else:\n",
742
+ " # do resampling here\n",
743
+ " assert os.path.exists(ref_name) and os.path.exists(omat_name), \"need to generate the boldref and omat separately since we don't have access to the functional data here; either do so using flirt on the command line or copy over the glmsingle resampled outputs\"\n",
744
+ " vox = load_preprocess_betas(orig_glmsingle_path, *params)\n",
745
+ " vox = resample_betas(orig_glmsingle_path, sub, session, task_name, vox, glmsingle_path, glm_save_path_resampled, ref_name, omat_name)\n",
746
+ " needs_postprocessing = True\n",
747
+ "\n",
748
+ "if vox is None: \n",
749
+ " for i, s in enumerate(ses_list):\n",
750
+ " # either resampling was done in glmsingle or we aren't resampling \n",
751
+ " ses_vox_path = f'{base_glm_single_path}/glmsingle_sub-005_{s}_task-C'\n",
752
+ " assert os.path.exists(ses_vox_path)\n",
753
+ " ses_vox.append(load_preprocess_betas(ses_vox_path, *params))\n",
754
+ " v = nilearn.masking.unmask(ses_vox[i], ses_mask[i])\n",
755
+ " ses_vox[i] = nilearn.masking.apply_mask(v, final_mask)\n",
756
+ " vox = np.concatenate(ses_vox)\n",
757
+ " print(\"applied final brain mask\")\n",
758
+ " print(vox.shape)\n",
759
+ " vox = vox[:, union_mask]\n",
760
+ " print(\"applied union roi mask\")\n",
761
+ " print(vox.shape)\n",
762
+ " \n",
763
+ " \n",
764
+ "if needs_postprocessing == True:\n",
765
+ " vox = apply_mask(vox, avg_mask)\n",
766
+ " vox = vox.reshape(-1, vox.shape[-1]) # flatten the 3D image into np array with shape (voxels, images)\n",
767
+ " print(vox.shape)\n",
768
+ "\n",
769
+ "assert len(vox) == len(image_idx)"
770
+ ]
771
+ },
772
+ {
773
+ "cell_type": "code",
774
+ "execution_count": 24,
775
+ "id": "a8e1b076",
776
+ "metadata": {},
777
+ "outputs": [],
778
+ "source": [
779
+ "# # get vox into the same shape as the union mask\n",
780
+ "# v = nilearn.masking.unmask(vox, ses_mask) # move back to 3D based on own session mask\n",
781
+ "# final_mask = nilearn.masking.intersect_masks([avg_mask, roi])\n",
782
+ "# vox = nilearn.masking.apply_mask(vox, final_mask) # re-flatten based on final mask so everything is in the same shape now\n",
783
+ "# print(vox.shape)"
784
+ ]
785
+ },
786
+ {
787
+ "cell_type": "code",
788
+ "execution_count": 25,
789
+ "id": "c309fabe",
790
+ "metadata": {},
791
+ "outputs": [],
792
+ "source": [
793
+ "pairs_homog = np.array([[p[0], p[1]] for p in pairs])"
794
+ ]
795
+ },
796
+ {
797
+ "cell_type": "code",
798
+ "execution_count": 26,
799
+ "id": "04d838b7",
800
+ "metadata": {},
801
+ "outputs": [],
802
+ "source": [
803
+ "same_corrs = []\n",
804
+ "diff_corrs = []\n",
805
+ "for isamp, samp in enumerate(vox[pairs_homog]):\n",
806
+ " avg_same_img = []\n",
807
+ " for i in range(samp.shape[0]):\n",
808
+ " for j in range(i, samp.shape[0]):\n",
809
+ " if i != j:\n",
810
+ " avg_same_img.append(np.array([np.corrcoef(samp[i, :], samp[j, :])[0,1]]))\n",
811
+ " \n",
812
+ " same_corrs.append(np.mean(avg_same_img))\n",
813
+ " \n",
814
+ " avg_diff_img = []\n",
815
+ " for isamp_j, samp_j in enumerate(vox[pairs_homog]):\n",
816
+ " if isamp_j != isamp:\n",
817
+ " for i in range(samp_j.shape[0]):\n",
818
+ " for j in range(i, samp_j.shape[0]):\n",
819
+ " if i != j:\n",
820
+ " avg_diff_img.append(np.array([np.corrcoef(samp[i, :], samp_j[j, :])[0,1]]))\n",
821
+ " \n",
822
+ " # print(len(avg_diff_img))\n",
823
+ " diff_corrs.append(np.mean(avg_diff_img))\n",
824
+ "\n",
825
+ "\n",
826
+ "print(len(same_corrs), len(diff_corrs))\n",
827
+ "same_corrs = np.array(same_corrs)\n",
828
+ "diff_corrs = np.array(diff_corrs)\n",
829
+ "\n",
830
+ "\n",
831
+ "plt.figure(figsize=(5,4))\n",
832
+ "plt.title(f\"{sub}_{session} same/diff Pearson corr.\")\n",
833
+ "plt.plot(np.sort(same_corrs),c='blue',label='same')\n",
834
+ "plt.plot(np.sort(diff_corrs),c='cyan',label='diff')\n",
835
+ "plt.axhline(0,c='k',ls='--')\n",
836
+ "plt.legend()\n",
837
+ "plt.xlabel(\"sample\")\n",
838
+ "plt.ylabel(\"Pearson R\")\n",
839
+ "plt.show()"
840
+ ]
841
+ },
842
+ {
843
+ "cell_type": "code",
844
+ "execution_count": 27,
845
+ "id": "3ddc8bdb",
846
+ "metadata": {},
847
+ "outputs": [],
848
+ "source": [
849
+ "vox_pairs = utils.zscore(vox[pairs_homog])\n",
850
+ "plt.figure(figsize=(5,4))\n",
851
+ "plt.title(f\"{sub}_{session} same minus diff difference Pearson corr.\")\n",
852
+ "plt.plot(np.sort(same_corrs) - np.sort(diff_corrs),c='cyan',label='difference')\n",
853
+ "plt.axhline(0,c='k',ls='--')\n",
854
+ "plt.legend()\n",
855
+ "plt.xlabel(\"sample\")\n",
856
+ "plt.ylabel(\"Pearson R\")\n",
857
+ "plt.show()"
858
+ ]
859
+ },
860
+ {
861
+ "cell_type": "code",
862
+ "execution_count": 28,
863
+ "id": "5fd964cd",
864
+ "metadata": {},
865
+ "outputs": [],
866
+ "source": [
867
+ "utils.seed_everything(seed)\n",
868
+ "\n",
869
+ "if train_test_split == 'orig':\n",
870
+ " # train = all images except images that were repeated\n",
871
+ " # test = average of the same-image presentations\n",
872
+ " imageTrain = np.arange(len(images))\n",
873
+ " train_image_indices = np.array([item for item in imageTrain if item not in pairs.flatten()])\n",
874
+ " test_image_indices = pairs\n",
875
+ " print(len(train_image_indices), len(test_image_indices))\n",
876
+ " assert len(train_image_indices) + len(test_image_indices) == len(image_idx)\n",
877
+ "elif train_test_split == 'MST':\n",
878
+ " # non-MST images are the train split\n",
879
+ " # MST images are the test split\n",
880
+ " MST_idx = np.array([v for k,v in image_to_indices.items() if 'MST_pairs' in k])\n",
881
+ " non_MST_idx = [v for k,v in image_to_indices.items() if 'MST_pairs' not in k]\n",
882
+ " non_MST_idx = np.array([z for y in non_MST_idx for x in y for z in x]) # flatten the indices\n",
883
+ " train_image_indices = non_MST_idx\n",
884
+ " test_image_indices = MST_idx.flatten() # MST_idx contains the mapping for the different test sets; test_image_indices has all MST indices combined\n",
885
+ " print(len(train_image_indices), len(test_image_indices))\n",
886
+ " assert len(train_image_indices) + len(test_image_indices) == len(vox)\n",
887
+ "elif train_test_split == 'unique':\n",
888
+ " imageTest = np.arange(len(images))\n",
889
+ " train_image_indices = pairs.flatten()\n",
890
+ " test_image_indices = np.array([item for item in imageTest if item not in pairs.flatten()])\n",
891
+ " print(len(train_image_indices), len(test_image_indices))\n",
892
+ " assert len(train_image_indices) + len(test_image_indices) == len(image_idx)\n",
893
+ "else:\n",
894
+ " raise Exception(\"invalid train_test_split\")\n",
895
+ "\n",
896
+ "# TODO add assertion that verifies file names in train and test don't overlap, guards against repeats\n",
897
+ "\n",
898
+ "for i in train_image_indices:\n",
899
+ " assert i not in test_image_indices"
900
+ ]
901
+ },
902
+ {
903
+ "cell_type": "code",
904
+ "execution_count": 29,
905
+ "id": "98927cca",
906
+ "metadata": {},
907
+ "outputs": [],
908
+ "source": [
909
+ "ses_split = vox[train_image_indices].shape[0] // 2\n",
910
+ "\n",
911
+ "train_mean_s1 = np.mean(vox[train_image_indices][:ses_split], axis=0)\n",
912
+ "train_std_s1 = np.std(vox[train_image_indices][:ses_split], axis=0)\n",
913
+ "train_mean_s2 = np.mean(vox[train_image_indices][ses_split:], axis=0)\n",
914
+ "train_std_s2 = np.std(vox[train_image_indices][ses_split:], axis=0)\n",
915
+ "\n",
916
+ "print('shape of train mean from ses-01:', train_mean_s1.shape)\n",
917
+ "print('shape of train std from ses-01:', train_std_s1.shape)\n",
918
+ "print('shape of train mean from ses-02:', train_mean_s2.shape)\n",
919
+ "print('shape of train std from ses-02:', train_std_s2.shape)\n",
920
+ "\n",
921
+ "\n",
922
+ "vox[:ses_split] = utils.zscore(vox[:ses_split],train_mean=train_mean_s1,train_std=train_std_s1)\n",
923
+ "vox[ses_split:] = utils.zscore(vox[ses_split:],train_mean=train_mean_s2,train_std=train_std_s2)\n",
924
+ "\n",
925
+ "print(\"voxels have been zscored\")\n",
926
+ "print(\"ses-01:\", vox[:ses_split,0].mean(), vox[:ses_split,0].std())\n",
927
+ "print(\"ses-02:\", vox[ses_split:,0].mean(), vox[ses_split:,0].std())\n",
928
+ "print(\"vox\", vox.shape)"
929
+ ]
930
+ },
931
+ {
932
+ "cell_type": "code",
933
+ "execution_count": 30,
934
+ "id": "c7a289d5",
935
+ "metadata": {},
936
+ "outputs": [],
937
+ "source": [
938
+ "# save the mean and std from ses-01 and 02\n",
939
+ "train_test_mean_s1 = np.mean(vox[:ses_split], axis=0)\n",
940
+ "train_test_std_s1 = np.std(vox[:ses_split], axis=0)\n",
941
+ "train_test_mean_s2 = np.mean(vox[ses_split:], axis=0)\n",
942
+ "train_test_std_s2 = np.std(vox[ses_split:], axis=0)\n",
943
+ "print(train_test_mean_s1.shape)\n",
944
+ "assert np.all(train_test_mean_s1.shape == train_test_std_s1.shape)\n",
945
+ "assert np.all(train_test_mean_s1.shape == train_test_mean_s2.shape)\n",
946
+ "assert np.all(train_test_mean_s1.shape == train_test_std_s2.shape)"
947
+ ]
948
+ },
949
+ {
950
+ "cell_type": "code",
951
+ "execution_count": 31,
952
+ "id": "242a0f0c",
953
+ "metadata": {},
954
+ "outputs": [],
955
+ "source": [
956
+ "# for idx in deleted_indices:\n",
957
+ "# # check image names to be deleted match\n",
958
+ "# original_name = vox_image_dict[idx]\n",
959
+ "# matching_indices = [i for i in deleted_indices if vox_image_dict[i] == original_name]\n",
960
+ "# assert all(vox_image_dict[i] == original_name for i in matching_indices), \\\n",
961
+ "# f\"Mismatch in image names for deleted indices {matching_indices}\"\n",
962
+ "\n",
963
+ "# # check image data to be deleted match\n",
964
+ "# base_image = images[matching_indices[0]] # Reference image\n",
965
+ "# for i in matching_indices[1:]:\n",
966
+ "# assert np.array_equal(base_image, images[i]), \\\n",
967
+ "# f\"Mismatch in image data for {vox_image_dict[i]} at index {i}\"\n",
968
+ "\n",
969
+ "# images = images[kept_indices]"
970
+ ]
971
+ },
972
+ {
973
+ "cell_type": "code",
974
+ "execution_count": 32,
975
+ "id": "1644ff68",
976
+ "metadata": {},
977
+ "outputs": [],
978
+ "source": [
979
+ "images = torch.Tensor(images)\n",
980
+ "vox = torch.Tensor(vox)\n",
981
+ "assert len(images) == len(vox)"
982
+ ]
983
+ },
984
+ {
985
+ "cell_type": "code",
986
+ "execution_count": 33,
987
+ "id": "f5eff44d",
988
+ "metadata": {},
989
+ "outputs": [],
990
+ "source": [
991
+ "### Multi-GPU config ###\n",
992
+ "from accelerate import Accelerator, DeepSpeedPlugin\n",
993
+ "\n",
994
+ "local_rank = os.getenv('RANK')\n",
995
+ "if local_rank is None: \n",
996
+ " local_rank = 0\n",
997
+ "else:\n",
998
+ " local_rank = int(local_rank)\n",
999
+ "print(\"LOCAL RANK \", local_rank) \n",
1000
+ "\n",
1001
+ "data_type = torch.float32 # change depending on your mixed_precision\n",
1002
+ "\n",
1003
+ "accelerator = Accelerator(split_batches=False)\n",
1004
+ "batch_size = 8 "
1005
+ ]
1006
+ },
1007
+ {
1008
+ "cell_type": "code",
1009
+ "execution_count": 34,
1010
+ "id": "13696477",
1011
+ "metadata": {},
1012
+ "outputs": [],
1013
+ "source": [
1014
+ "print(\"PID of this process =\",os.getpid())\n",
1015
+ "device = accelerator.device\n",
1016
+ "print(\"device:\",device)\n",
1017
+ "world_size = accelerator.state.num_processes\n",
1018
+ "distributed = not accelerator.state.distributed_type == 'NO'\n",
1019
+ "num_devices = torch.cuda.device_count()\n",
1020
+ "global_batch_size = batch_size * num_devices\n",
1021
+ "print(\"global_batch_size\", global_batch_size)\n",
1022
+ "if num_devices==0 or not distributed: num_devices = 1\n",
1023
+ "num_workers = num_devices\n",
1024
+ "print(accelerator.state)\n",
1025
+ "\n",
1026
+ "# set data_type to match your mixed precision (automatically set based on deepspeed config)\n",
1027
+ "if accelerator.mixed_precision == \"bf16\":\n",
1028
+ " data_type = torch.bfloat16\n",
1029
+ "elif accelerator.mixed_precision == \"fp16\":\n",
1030
+ " data_type = torch.float16\n",
1031
+ "else:\n",
1032
+ " data_type = torch.float32\n",
1033
+ "\n",
1034
+ "print(\"distributed =\",distributed, \"num_devices =\", num_devices, \"local rank =\", local_rank, \"world size =\", world_size, \"data_type =\", data_type)\n",
1035
+ "print = accelerator.print # only print if local_rank=0"
1036
+ ]
1037
+ },
1038
+ {
1039
+ "cell_type": "code",
1040
+ "execution_count": 35,
1041
+ "id": "3076e4cc",
1042
+ "metadata": {},
1043
+ "outputs": [],
1044
+ "source": [
1045
+ "# if running this interactively, can specify jupyter_args here for argparser to use\n",
1046
+ "if utils.is_interactive():\n",
1047
+ " model_name = 'vit-h' # 'sub-001_multi_bs24_MST_rishab_MSTsplit_remove_150_random_seed_0'\n",
1048
+ " print(\"model_name:\", model_name)\n",
1049
+ " \n",
1050
+ " # global_batch_size and batch_size should already be defined in the above cells\n",
1051
+ " # other variables can be specified in the following string:\n",
1052
+ " # jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 --model_name={model_name}\"\n",
1053
+ " batch_size = 24\n",
1054
+ " jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 \\\n",
1055
+ " --model_name={model_name} \\\n",
1056
+ " --no-multi_subject --subj=1 --batch_size={batch_size} \\\n",
1057
+ " --hidden_dim=1024 --clip_scale=1. \\\n",
1058
+ " --no-blurry_recon --blur_scale=.5 \\\n",
1059
+ " --no-use_prior --prior_scale=30 \\\n",
1060
+ " --n_blocks=4 --max_lr=3e-4 --mixup_pct=.33 --num_epochs=30 --no-use_image_aug \\\n",
1061
+ " --ckpt_interval=999 --ckpt_saving --new_test \\\n",
1062
+ " --multisubject_ckpt=None\"\n",
1063
+ " print(jupyter_args)\n",
1064
+ " jupyter_args = jupyter_args.split()"
1065
+ ]
1066
+ },
1067
+ {
1068
+ "cell_type": "code",
1069
+ "execution_count": 36,
1070
+ "id": "d8c4b5e2",
1071
+ "metadata": {},
1072
+ "outputs": [],
1073
+ "source": [
1074
+ "parser = argparse.ArgumentParser(description=\"Model Training Configuration\")\n",
1075
+ "parser.add_argument(\n",
1076
+ " \"--model_name\", type=str, default=\"testing\",\n",
1077
+ " help=\"name of model, used for ckpt saving and wandb logging (if enabled)\",\n",
1078
+ ")\n",
1079
+ "parser.add_argument(\n",
1080
+ " \"--data_path\", type=str, default=\"/weka/proj-fmri/shared/natural-scenes-dataset\",\n",
1081
+ " help=\"Path to where NSD data is stored / where to download it to\",\n",
1082
+ ")\n",
1083
+ "parser.add_argument(\n",
1084
+ " \"--subj\",type=int, default=1, choices=[1,2,3,4,5,6,7,8],\n",
1085
+ " help=\"Validate on which subject?\",\n",
1086
+ ")\n",
1087
+ "parser.add_argument(\n",
1088
+ " \"--multisubject_ckpt\", type=str, default=None,\n",
1089
+ " help=\"Path to pre-trained multisubject model to finetune a single subject from. multisubject must be False.\",\n",
1090
+ ")\n",
1091
+ "parser.add_argument(\n",
1092
+ " \"--num_sessions\", type=int, default=0,\n",
1093
+ " help=\"Number of training sessions to include (if multi_subject, this variable doesn't matter)\",\n",
1094
+ ")\n",
1095
+ "parser.add_argument(\n",
1096
+ " \"--use_prior\",action=argparse.BooleanOptionalAction,default=False,\n",
1097
+ " help=\"whether to train diffusion prior (True) or just rely on retrieval part of the pipeline (False)\",\n",
1098
+ ")\n",
1099
+ "parser.add_argument(\n",
1100
+ " \"--batch_size\", type=int, default=32,\n",
1101
+ " help=\"Batch size can be increased by 10x if only training v2c and not diffusion diffuser\",\n",
1102
+ ")\n",
1103
+ "parser.add_argument(\n",
1104
+ " \"--wandb_log\",action=argparse.BooleanOptionalAction,default=False,\n",
1105
+ " help=\"whether to log to wandb\",\n",
1106
+ ")\n",
1107
+ "parser.add_argument(\n",
1108
+ " \"--resume_from_ckpt\",action=argparse.BooleanOptionalAction,default=False,\n",
1109
+ " help=\"if not using wandb and want to resume from a ckpt\",\n",
1110
+ ")\n",
1111
+ "parser.add_argument(\n",
1112
+ " \"--wandb_project\",type=str,default=\"stability\",\n",
1113
+ " help=\"wandb project name\",\n",
1114
+ ")\n",
1115
+ "parser.add_argument(\n",
1116
+ " \"--mixup_pct\",type=float,default=.33,\n",
1117
+ " help=\"proportion of way through training when to switch from BiMixCo to SoftCLIP\",\n",
1118
+ ")\n",
1119
+ "parser.add_argument(\n",
1120
+ " \"--low_mem\",action=argparse.BooleanOptionalAction,default=False,\n",
1121
+ " help=\"whether to preload images to cpu to speed things up but consume more memory\",\n",
1122
+ ")\n",
1123
+ "parser.add_argument(\n",
1124
+ " \"--blurry_recon\",action=argparse.BooleanOptionalAction,default=True,\n",
1125
+ " help=\"whether to output blurry reconstructions\",\n",
1126
+ ")\n",
1127
+ "parser.add_argument(\n",
1128
+ " \"--blur_scale\",type=float,default=.5,\n",
1129
+ " help=\"multiply loss from blurry recons by this number\",\n",
1130
+ ")\n",
1131
+ "parser.add_argument(\n",
1132
+ " \"--clip_scale\",type=float,default=1.,\n",
1133
+ " help=\"multiply contrastive loss by this number\",\n",
1134
+ ")\n",
1135
+ "parser.add_argument(\n",
1136
+ " \"--prior_scale\",type=float,default=30,\n",
1137
+ " help=\"multiply diffusion prior loss by this\",\n",
1138
+ ")\n",
1139
+ "parser.add_argument(\n",
1140
+ " \"--use_image_aug\",action=argparse.BooleanOptionalAction,default=True,\n",
1141
+ " help=\"whether to use image augmentation\",\n",
1142
+ ")\n",
1143
+ "parser.add_argument(\n",
1144
+ " \"--num_epochs\",type=int,default=120,\n",
1145
+ " help=\"number of epochs of training\",\n",
1146
+ ")\n",
1147
+ "parser.add_argument(\n",
1148
+ " \"--multi_subject\",action=argparse.BooleanOptionalAction,default=False,\n",
1149
+ ")\n",
1150
+ "parser.add_argument(\n",
1151
+ " \"--new_test\",action=argparse.BooleanOptionalAction,default=True,\n",
1152
+ ")\n",
1153
+ "parser.add_argument(\n",
1154
+ " \"--n_blocks\",type=int,default=2,\n",
1155
+ ")\n",
1156
+ "parser.add_argument(\n",
1157
+ " \"--hidden_dim\",type=int,default=1024,\n",
1158
+ ")\n",
1159
+ "parser.add_argument(\n",
1160
+ " \"--seq_past\",type=int,default=0,\n",
1161
+ ")\n",
1162
+ "parser.add_argument(\n",
1163
+ " \"--seq_future\",type=int,default=0,\n",
1164
+ ")\n",
1165
+ "parser.add_argument(\n",
1166
+ " \"--lr_scheduler_type\",type=str,default='cycle',choices=['cycle','linear'],\n",
1167
+ ")\n",
1168
+ "parser.add_argument(\n",
1169
+ " \"--ckpt_saving\",action=argparse.BooleanOptionalAction,default=True,\n",
1170
+ ")\n",
1171
+ "parser.add_argument(\n",
1172
+ " \"--ckpt_interval\",type=int,default=5,\n",
1173
+ " help=\"save backup ckpt and reconstruct every x epochs\",\n",
1174
+ ")\n",
1175
+ "parser.add_argument(\n",
1176
+ " \"--seed\",type=int,default=42,\n",
1177
+ ")\n",
1178
+ "parser.add_argument(\n",
1179
+ " \"--max_lr\",type=float,default=3e-4,\n",
1180
+ ")\n",
1181
+ "\n",
1182
+ "if utils.is_interactive():\n",
1183
+ " args = parser.parse_args(jupyter_args)\n",
1184
+ "else:\n",
1185
+ " args = parser.parse_args()\n",
1186
+ "\n",
1187
+ "# create global variables without the args prefix\n",
1188
+ "for attribute_name in vars(args).keys():\n",
1189
+ " globals()[attribute_name] = getattr(args, attribute_name)\n",
1190
+ " \n",
1191
+ "outdir = os.path.abspath(f'/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2/train_logs/{model_name}')\n",
1192
+ "if not os.path.exists(outdir) and ckpt_saving:\n",
1193
+ " os.makedirs(outdir,exist_ok=True)\n",
1194
+ " \n",
1195
+ "if use_image_aug or blurry_recon:\n",
1196
+ " import kornia\n",
1197
+ " import kornia.augmentation as K\n",
1198
+ " from kornia.augmentation.container import AugmentationSequential\n",
1199
+ "if use_image_aug:\n",
1200
+ " img_augment = AugmentationSequential(\n",
1201
+ " kornia.augmentation.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.3),\n",
1202
+ " same_on_batch=False,\n",
1203
+ " data_keys=[\"input\"],\n",
1204
+ " )\n",
1205
+ " # Define the blurring augmentations\n",
1206
+ " blur_augment = K.RandomGaussianBlur(kernel_size=(21, 21), sigma=(51.0, 51.0), p=1.)\n",
1207
+ " \n",
1208
+ "if multi_subject:\n",
1209
+ " subj_list = np.arange(1,9)\n",
1210
+ " subj_list = subj_list[subj_list != subj]\n",
1211
+ "else:\n",
1212
+ " subj_list = [subj]\n",
1213
+ "\n",
1214
+ "print(\"subj_list\", subj_list, \"num_sessions\", num_sessions)"
1215
+ ]
1216
+ },
1217
+ {
1218
+ "cell_type": "code",
1219
+ "execution_count": 37,
1220
+ "id": "9f6cbde6",
1221
+ "metadata": {},
1222
+ "outputs": [],
1223
+ "source": [
1224
+ "parser = argparse.ArgumentParser(description=\"Model Training Configuration\")\n",
1225
+ "parser.add_argument(\n",
1226
+ " \"--model_name\", type=str, default=\"testing\",\n",
1227
+ " help=\"name of model, used for ckpt saving and wandb logging (if enabled)\",\n",
1228
+ ")\n",
1229
+ "parser.add_argument(\n",
1230
+ " \"--data_path\", type=str, default=\"/weka/proj-fmri/shared/natural-scenes-dataset\",\n",
1231
+ " help=\"Path to where NSD data is stored / where to download it to\",\n",
1232
+ ")\n",
1233
+ "parser.add_argument(\n",
1234
+ " \"--subj\",type=int, default=1, choices=[1,2,3,4,5,6,7,8],\n",
1235
+ " help=\"Validate on which subject?\",\n",
1236
+ ")\n",
1237
+ "parser.add_argument(\n",
1238
+ " \"--multisubject_ckpt\", type=str, default=None,\n",
1239
+ " help=\"Path to pre-trained multisubject model to finetune a single subject from. multisubject must be False.\",\n",
1240
+ ")\n",
1241
+ "parser.add_argument(\n",
1242
+ " \"--num_sessions\", type=int, default=0,\n",
1243
+ " help=\"Number of training sessions to include (if multi_subject, this variable doesn't matter)\",\n",
1244
+ ")\n",
1245
+ "parser.add_argument(\n",
1246
+ " \"--use_prior\",action=argparse.BooleanOptionalAction,default=False,\n",
1247
+ " help=\"whether to train diffusion prior (True) or just rely on retrieval part of the pipeline (False)\",\n",
1248
+ ")\n",
1249
+ "parser.add_argument(\n",
1250
+ " \"--batch_size\", type=int, default=32,\n",
1251
+ " help=\"Batch size can be increased by 10x if only training v2c and not diffusion diffuser\",\n",
1252
+ ")\n",
1253
+ "parser.add_argument(\n",
1254
+ " \"--wandb_log\",action=argparse.BooleanOptionalAction,default=False,\n",
1255
+ " help=\"whether to log to wandb\",\n",
1256
+ ")\n",
1257
+ "parser.add_argument(\n",
1258
+ " \"--resume_from_ckpt\",action=argparse.BooleanOptionalAction,default=False,\n",
1259
+ " help=\"if not using wandb and want to resume from a ckpt\",\n",
1260
+ ")\n",
1261
+ "parser.add_argument(\n",
1262
+ " \"--wandb_project\",type=str,default=\"stability\",\n",
1263
+ " help=\"wandb project name\",\n",
1264
+ ")\n",
1265
+ "parser.add_argument(\n",
1266
+ " \"--mixup_pct\",type=float,default=.33,\n",
1267
+ " help=\"proportion of way through training when to switch from BiMixCo to SoftCLIP\",\n",
1268
+ ")\n",
1269
+ "parser.add_argument(\n",
1270
+ " \"--low_mem\",action=argparse.BooleanOptionalAction,default=False,\n",
1271
+ " help=\"whether to preload images to cpu to speed things up but consume more memory\",\n",
1272
+ ")\n",
1273
+ "parser.add_argument(\n",
1274
+ " \"--blurry_recon\",action=argparse.BooleanOptionalAction,default=True,\n",
1275
+ " help=\"whether to output blurry reconstructions\",\n",
1276
+ ")\n",
1277
+ "parser.add_argument(\n",
1278
+ " \"--blur_scale\",type=float,default=.5,\n",
1279
+ " help=\"multiply loss from blurry recons by this number\",\n",
1280
+ ")\n",
1281
+ "parser.add_argument(\n",
1282
+ " \"--clip_scale\",type=float,default=1.,\n",
1283
+ " help=\"multiply contrastive loss by this number\",\n",
1284
+ ")\n",
1285
+ "parser.add_argument(\n",
1286
+ " \"--prior_scale\",type=float,default=30,\n",
1287
+ " help=\"multiply diffusion prior loss by this\",\n",
1288
+ ")\n",
1289
+ "parser.add_argument(\n",
1290
+ " \"--use_image_aug\",action=argparse.BooleanOptionalAction,default=True,\n",
1291
+ " help=\"whether to use image augmentation\",\n",
1292
+ ")\n",
1293
+ "parser.add_argument(\n",
1294
+ " \"--num_epochs\",type=int,default=120,\n",
1295
+ " help=\"number of epochs of training\",\n",
1296
+ ")\n",
1297
+ "parser.add_argument(\n",
1298
+ " \"--multi_subject\",action=argparse.BooleanOptionalAction,default=False,\n",
1299
+ ")\n",
1300
+ "parser.add_argument(\n",
1301
+ " \"--new_test\",action=argparse.BooleanOptionalAction,default=True,\n",
1302
+ ")\n",
1303
+ "parser.add_argument(\n",
1304
+ " \"--n_blocks\",type=int,default=2,\n",
1305
+ ")\n",
1306
+ "parser.add_argument(\n",
1307
+ " \"--hidden_dim\",type=int,default=1024,\n",
1308
+ ")\n",
1309
+ "parser.add_argument(\n",
1310
+ " \"--seq_past\",type=int,default=0,\n",
1311
+ ")\n",
1312
+ "parser.add_argument(\n",
1313
+ " \"--seq_future\",type=int,default=0,\n",
1314
+ ")\n",
1315
+ "parser.add_argument(\n",
1316
+ " \"--lr_scheduler_type\",type=str,default='cycle',choices=['cycle','linear'],\n",
1317
+ ")\n",
1318
+ "parser.add_argument(\n",
1319
+ " \"--ckpt_saving\",action=argparse.BooleanOptionalAction,default=True,\n",
1320
+ ")\n",
1321
+ "parser.add_argument(\n",
1322
+ " \"--ckpt_interval\",type=int,default=5,\n",
1323
+ " help=\"save backup ckpt and reconstruct every x epochs\",\n",
1324
+ ")\n",
1325
+ "parser.add_argument(\n",
1326
+ " \"--seed\",type=int,default=42,\n",
1327
+ ")\n",
1328
+ "parser.add_argument(\n",
1329
+ " \"--max_lr\",type=float,default=3e-4,\n",
1330
+ ")\n",
1331
+ "\n",
1332
+ "if utils.is_interactive():\n",
1333
+ " args = parser.parse_args(jupyter_args)\n",
1334
+ "else:\n",
1335
+ " args = parser.parse_args()\n",
1336
+ "\n",
1337
+ "# create global variables without the args prefix\n",
1338
+ "for attribute_name in vars(args).keys():\n",
1339
+ " globals()[attribute_name] = getattr(args, attribute_name)\n",
1340
+ " \n",
1341
+ "outdir = os.path.abspath(f'./train_logs/{model_name}')\n",
1342
+ "if not os.path.exists(outdir) and ckpt_saving:\n",
1343
+ " os.makedirs(outdir,exist_ok=True)\n",
1344
+ " \n",
1345
+ "if use_image_aug or blurry_recon:\n",
1346
+ " import kornia\n",
1347
+ " import kornia.augmentation as K\n",
1348
+ " from kornia.augmentation.container import AugmentationSequential\n",
1349
+ "if use_image_aug:\n",
1350
+ " img_augment = AugmentationSequential(\n",
1351
+ " kornia.augmentation.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.3),\n",
1352
+ " same_on_batch=False,\n",
1353
+ " data_keys=[\"input\"],\n",
1354
+ " )\n",
1355
+ " # Define the blurring augmentations\n",
1356
+ " blur_augment = K.RandomGaussianBlur(kernel_size=(21, 21), sigma=(51.0, 51.0), p=1.)\n",
1357
+ " \n",
1358
+ "if multi_subject:\n",
1359
+ " subj_list = np.arange(1,9)\n",
1360
+ " subj_list = subj_list[subj_list != subj]\n",
1361
+ "else:\n",
1362
+ " subj_list = [subj]\n",
1363
+ "\n",
1364
+ "print(\"subj_list\", subj_list, \"num_sessions\", num_sessions)"
1365
+ ]
1366
+ },
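The tail of the cell above promotes every parsed argument to a module-level global so later cells can write `batch_size` instead of `args.batch_size`. A self-contained sketch of that pattern, with two stand-in arguments mirroring the parser's defaults:

```python
import argparse

# Promote parsed args to globals, as the notebook does after parse_args().
parser = argparse.ArgumentParser()
parser.add_argument("--model_name", type=str, default="testing")
parser.add_argument("--seed", type=int, default=42)
args = parser.parse_args([])  # empty argv: defaults only

# create global variables without the args prefix
for attribute_name in vars(args).keys():
    globals()[attribute_name] = getattr(args, attribute_name)
```

This is convenient in a notebook but shadows any existing names, which is why the cell is safe to re-run: each run simply overwrites the same globals.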
1367
+ {
1368
+ "cell_type": "code",
1369
+ "execution_count": 38,
1370
+ "id": "957e3d21",
1371
+ "metadata": {},
1372
+ "outputs": [],
1373
+ "source": [
1374
+ "if ckpt_saving:\n",
1375
+ " # save MST_ID for 2-alternative forced-choice retrieval evaluation \n",
1376
+ " if 'MST' in model_name:\n",
1377
+ " eval_dir = os.environ[\"eval_dir\"]\n",
1378
+ " print('saving MST info in', eval_dir)\n",
1379
+ " # Saving ##\n",
1380
+ " if not os.path.exists(eval_dir):\n",
1381
+ " os.mkdir(eval_dir)\n",
1382
+ "\n",
1383
+ " np.save(f\"{eval_dir}/MST_ID.npy\", MST_ID)\n",
1384
+ " np.save(f\"{eval_dir}/MST_pairmate_indices.npy\", MST_pairmate_indices)\n",
1385
+ "\n",
1386
+ " if remove_random_n:\n",
1387
+ " np.save(f\"{eval_dir}/imgs_to_remove.npy\", imgs_to_remove)\n",
1388
+ "\n",
1389
+ " np.save(f\"{eval_dir}/train_image_indices.npy\", train_image_indices)\n",
1390
+ " np.save(f\"{eval_dir}/test_image_indices.npy\", test_image_indices)\n",
1391
+ " np.save(f\"{eval_dir}/images.npy\", images)\n",
1392
+ " np.save(f\"{eval_dir}/vox.npy\", vox)\n",
1393
+ " \n",
1394
+ " np.save(f'{eval_dir}/train_test_mean_s1.npy', train_test_mean_s1)\n",
1395
+ " np.save(f'{eval_dir}/train_test_std_s1.npy', train_test_std_s1)\n",
1396
+ " np.save(f'{eval_dir}/train_test_mean_s2.npy', train_test_mean_s2)\n",
1397
+ " np.save(f'{eval_dir}/train_test_std_s2.npy', train_test_std_s2)"
1398
+ ]
1399
+ },
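The saving cell above writes each evaluation array with `np.save` so downstream retrieval scripts can `np.load` them unchanged. A round-trip sketch, using a temporary directory and a hypothetical stand-in array rather than the real `eval_dir` contents:

```python
import tempfile
import numpy as np

# np.save / np.load round trip, as used for the eval_dir arrays above.
eval_dir = tempfile.mkdtemp()                 # stand-in for the real eval_dir
train_image_indices = np.arange(10)           # hypothetical index array

np.save(f"{eval_dir}/train_image_indices.npy", train_image_indices)
loaded = np.load(f"{eval_dir}/train_image_indices.npy")
```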
1400
+ {
1401
+ "cell_type": "code",
1402
+ "execution_count": 39,
1403
+ "id": "7fec6e0b",
1404
+ "metadata": {},
1405
+ "outputs": [],
1406
+ "source": [
1407
+ "if ckpt_saving:\n",
1408
+ " # save MST_ID for 2-alternative forced-choice retrieval evaluation \n",
1409
+ " if 'MST' in model_name or True:\n",
1410
+ " eval_dir = os.environ[\"eval_dir\"]\n",
1411
+ " print('saving MST info in', eval_dir)\n",
1412
+ " # Saving ##\n",
1413
+ " if not os.path.exists(eval_dir):\n",
1414
+ " os.mkdir(eval_dir)\n",
1415
+ "\n",
1416
+ " np.save(f\"{eval_dir}/MST_ID.npy\", MST_ID)\n",
1417
+ " np.save(f\"{eval_dir}/MST_pairmate_indices.npy\", MST_pairmate_indices)\n",
1418
+ "\n",
1419
+ " if remove_random_n:\n",
1420
+ " np.save(f\"{eval_dir}/imgs_to_remove.npy\", imgs_to_remove)\n",
1421
+ "\n",
1422
+ " np.save(f\"{eval_dir}/train_image_indices.npy\", train_image_indices)\n",
1423
+ " np.save(f\"{eval_dir}/test_image_indices.npy\", test_image_indices)\n",
1424
+ " np.save(f\"{eval_dir}/images.npy\", images)\n",
1425
+ " np.save(f\"{eval_dir}/vox.npy\", vox)\n",
1426
+ " \n",
1427
+ " np.save(f'{eval_dir}/train_test_mean_s1.npy', train_test_mean_s1)\n",
1428
+ " np.save(f'{eval_dir}/train_test_std_s1.npy', train_test_std_s1)\n",
1429
+ " np.save(f'{eval_dir}/train_test_mean_s2.npy', train_test_mean_s2)\n",
1430
+ " np.save(f'{eval_dir}/train_test_std_s2.npy', train_test_std_s2)"
1431
+ ]
1432
+ },
1433
+ {
1434
+ "cell_type": "code",
1435
+ "execution_count": 40,
1436
+ "id": "f9bb9d1c",
1437
+ "metadata": {},
1438
+ "outputs": [],
1439
+ "source": [
1440
+ "# if running this interactively, can specify jupyter_args here for argparser to use\n",
1441
+ "if utils.is_interactive():\n",
1442
+ " model_name = 'vit-h-MST' # 'sub-001_multi_bs24_MST_rishab_MSTsplit_remove_150_random_seed_0'\n",
1443
+ " print(\"model_name:\", model_name)\n",
1444
+ " \n",
1445
+ " # global_batch_size and batch_size should already be defined in the above cells\n",
1446
+ " # other variables can be specified in the following string:\n",
1447
+ " # jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 --model_name={model_name}\"\n",
1448
+ " batch_size = 24\n",
1449
+ " jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 \\\n",
1450
+ " --model_name={model_name} \\\n",
1451
+ " --no-multi_subject --subj=1 --batch_size={batch_size} \\\n",
1452
+ " --hidden_dim=1024 --clip_scale=1. \\\n",
1453
+ " --no-blurry_recon --blur_scale=.5 \\\n",
1454
+ " --no-use_prior --prior_scale=30 \\\n",
1455
+ " --n_blocks=4 --max_lr=3e-4 --mixup_pct=.33 --num_epochs=30 --no-use_image_aug \\\n",
1456
+ " --ckpt_interval=999 --ckpt_saving --new_test \\\n",
1457
+ " --multisubject_ckpt=None\"\n",
1458
+ " print(jupyter_args)\n",
1459
+ " jupyter_args = jupyter_args.split()"
1460
+ ]
1461
+ },
1462
+ {
1463
+ "cell_type": "code",
1464
+ "execution_count": 41,
1465
+ "id": "d112b218",
1466
+ "metadata": {},
1467
+ "outputs": [],
1468
+ "source": [
1469
+ "parser = argparse.ArgumentParser(description=\"Model Training Configuration\")\n",
1470
+ "parser.add_argument(\n",
1471
+ " \"--model_name\", type=str, default=\"testing\",\n",
1472
+ " help=\"name of model, used for ckpt saving and wandb logging (if enabled)\",\n",
1473
+ ")\n",
1474
+ "parser.add_argument(\n",
1475
+ " \"--data_path\", type=str, default=\"/weka/proj-fmri/shared/natural-scenes-dataset\",\n",
1476
+ " help=\"Path to where NSD data is stored / where to download it to\",\n",
1477
+ ")\n",
1478
+ "parser.add_argument(\n",
1479
+ " \"--subj\",type=int, default=1, choices=[1,2,3,4,5,6,7,8],\n",
1480
+ " help=\"Validate on which subject?\",\n",
1481
+ ")\n",
1482
+ "parser.add_argument(\n",
1483
+ " \"--multisubject_ckpt\", type=str, default=None,\n",
1484
+ " help=\"Path to pre-trained multisubject model to finetune a single subject from. multisubject must be False.\",\n",
1485
+ ")\n",
1486
+ "parser.add_argument(\n",
1487
+ " \"--num_sessions\", type=int, default=0,\n",
1488
+ " help=\"Number of training sessions to include (if multi_subject, this variable doesnt matter)\",\n",
1489
+ ")\n",
1490
+ "parser.add_argument(\n",
1491
+ " \"--use_prior\",action=argparse.BooleanOptionalAction,default=False,\n",
1492
+ " help=\"whether to train diffusion prior (True) or just rely on retrieval part of the pipeline (False)\",\n",
1493
+ ")\n",
1494
+ "parser.add_argument(\n",
1495
+ " \"--batch_size\", type=int, default=32,\n",
1496
+ " help=\"Batch size can be increased by 10x if only training v2c and not diffusion diffuser\",\n",
1497
+ ")\n",
1498
+ "parser.add_argument(\n",
1499
+ " \"--wandb_log\",action=argparse.BooleanOptionalAction,default=False,\n",
1500
+ " help=\"whether to log to wandb\",\n",
1501
+ ")\n",
1502
+ "parser.add_argument(\n",
1503
+ " \"--resume_from_ckpt\",action=argparse.BooleanOptionalAction,default=False,\n",
1504
+ " help=\"if not using wandb and want to resume from a ckpt\",\n",
1505
+ ")\n",
1506
+ "parser.add_argument(\n",
1507
+ " \"--wandb_project\",type=str,default=\"stability\",\n",
1508
+ " help=\"wandb project name\",\n",
1509
+ ")\n",
1510
+ "parser.add_argument(\n",
1511
+ " \"--mixup_pct\",type=float,default=.33,\n",
1512
+ " help=\"proportion of way through training when to switch from BiMixCo to SoftCLIP\",\n",
1513
+ ")\n",
1514
+ "parser.add_argument(\n",
1515
+ " \"--low_mem\",action=argparse.BooleanOptionalAction,default=False,\n",
1516
+ " help=\"whether to preload images to cpu to speed things up but consume more memory\",\n",
1517
+ ")\n",
1518
+ "parser.add_argument(\n",
1519
+ " \"--blurry_recon\",action=argparse.BooleanOptionalAction,default=True,\n",
1520
+ " help=\"whether to output blurry reconstructions\",\n",
1521
+ ")\n",
1522
+ "parser.add_argument(\n",
1523
+ " \"--blur_scale\",type=float,default=.5,\n",
1524
+ " help=\"multiply loss from blurry recons by this number\",\n",
1525
+ ")\n",
1526
+ "parser.add_argument(\n",
1527
+ " \"--clip_scale\",type=float,default=1.,\n",
1528
+ " help=\"multiply contrastive loss by this number\",\n",
1529
+ ")\n",
1530
+ "parser.add_argument(\n",
1531
+ " \"--prior_scale\",type=float,default=30,\n",
1532
+ " help=\"multiply diffusion prior loss by this\",\n",
1533
+ ")\n",
1534
+ "parser.add_argument(\n",
1535
+ " \"--use_image_aug\",action=argparse.BooleanOptionalAction,default=True,\n",
1536
+ " help=\"whether to use image augmentation\",\n",
1537
+ ")\n",
1538
+ "parser.add_argument(\n",
1539
+ " \"--num_epochs\",type=int,default=120,\n",
1540
+ " help=\"number of epochs of training\",\n",
1541
+ ")\n",
1542
+ "parser.add_argument(\n",
1543
+ " \"--multi_subject\",action=argparse.BooleanOptionalAction,default=False,\n",
1544
+ ")\n",
1545
+ "parser.add_argument(\n",
1546
+ " \"--new_test\",action=argparse.BooleanOptionalAction,default=True,\n",
1547
+ ")\n",
1548
+ "parser.add_argument(\n",
1549
+ " \"--n_blocks\",type=int,default=2,\n",
1550
+ ")\n",
1551
+ "parser.add_argument(\n",
1552
+ " \"--hidden_dim\",type=int,default=1024,\n",
1553
+ ")\n",
1554
+ "parser.add_argument(\n",
1555
+ " \"--seq_past\",type=int,default=0,\n",
1556
+ ")\n",
1557
+ "parser.add_argument(\n",
1558
+ " \"--seq_future\",type=int,default=0,\n",
1559
+ ")\n",
1560
+ "parser.add_argument(\n",
1561
+ " \"--lr_scheduler_type\",type=str,default='cycle',choices=['cycle','linear'],\n",
1562
+ ")\n",
1563
+ "parser.add_argument(\n",
1564
+ " \"--ckpt_saving\",action=argparse.BooleanOptionalAction,default=True,\n",
1565
+ ")\n",
1566
+ "parser.add_argument(\n",
1567
+ " \"--ckpt_interval\",type=int,default=5,\n",
1568
+ " help=\"save backup ckpt and reconstruct every x epochs\",\n",
1569
+ ")\n",
1570
+ "parser.add_argument(\n",
1571
+ " \"--seed\",type=int,default=42,\n",
1572
+ ")\n",
1573
+ "parser.add_argument(\n",
1574
+ " \"--max_lr\",type=float,default=3e-4,\n",
1575
+ ")\n",
1576
+ "\n",
1577
+ "if utils.is_interactive():\n",
1578
+ " args = parser.parse_args(jupyter_args)\n",
1579
+ "else:\n",
1580
+ " args = parser.parse_args()\n",
1581
+ "\n",
1582
+ "# create global variables without the args prefix\n",
1583
+ "for attribute_name in vars(args).keys():\n",
1584
+ " globals()[attribute_name] = getattr(args, attribute_name)\n",
1585
+ " \n",
1586
+ "outdir = os.path.abspath(f'./train_logs/{model_name}')\n",
1587
+ "if not os.path.exists(outdir) and ckpt_saving:\n",
1588
+ " os.makedirs(outdir,exist_ok=True)\n",
1589
+ " \n",
1590
+ "if use_image_aug or blurry_recon:\n",
1591
+ " import kornia\n",
1592
+ " import kornia.augmentation as K\n",
1593
+ " from kornia.augmentation.container import AugmentationSequential\n",
1594
+ "if use_image_aug:\n",
1595
+ " img_augment = AugmentationSequential(\n",
1596
+ " kornia.augmentation.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.3),\n",
1597
+ " same_on_batch=False,\n",
1598
+ " data_keys=[\"input\"],\n",
1599
+ " )\n",
1600
+ " # Define the blurring augmentations\n",
1601
+ " blur_augment = K.RandomGaussianBlur(kernel_size=(21, 21), sigma=(51.0, 51.0), p=1.)\n",
1602
+ " \n",
1603
+ "if multi_subject:\n",
1604
+ " subj_list = np.arange(1,9)\n",
1605
+ " subj_list = subj_list[subj_list != subj]\n",
1606
+ "else:\n",
1607
+ " subj_list = [subj]\n",
1608
+ "\n",
1609
+ "print(\"subj_list\", subj_list, \"num_sessions\", num_sessions)"
1610
+ ]
1611
+ },
1612
+ {
1613
+ "cell_type": "code",
1614
+ "execution_count": 42,
1615
+ "id": "4846c60d",
1616
+ "metadata": {},
1617
+ "outputs": [],
1618
+ "source": [
1619
+ "if ckpt_saving:\n",
1620
+ " # save MST_ID for 2-alternative forced-choice retrieval evaluation \n",
1621
+ " if 'MST' in model_name:\n",
1622
+ " if utils.is_interactive():\n",
1623
+ " eval_dir = os.path.join(outdir, \"eval_dir\")\n",
1624
+ " else:\n",
1625
+ " eval_dir = os.environ[\"eval_dir\"]\n",
1626
+ " print('saving MST info in', eval_dir)\n",
1627
+ " # Saving ##\n",
1628
+ " if not os.path.exists(eval_dir):\n",
1629
+ " os.mkdir(eval_dir)\n",
1630
+ "\n",
1631
+ " np.save(f\"{eval_dir}/MST_ID.npy\", MST_ID)\n",
1632
+ " np.save(f\"{eval_dir}/MST_pairmate_indices.npy\", MST_pairmate_indices)\n",
1633
+ "\n",
1634
+ " if remove_random_n:\n",
1635
+ " np.save(f\"{eval_dir}/imgs_to_remove.npy\", imgs_to_remove)\n",
1636
+ "\n",
1637
+ " np.save(f\"{eval_dir}/train_image_indices.npy\", train_image_indices)\n",
1638
+ " np.save(f\"{eval_dir}/test_image_indices.npy\", test_image_indices)\n",
1639
+ " np.save(f\"{eval_dir}/images.npy\", images)\n",
1640
+ " np.save(f\"{eval_dir}/vox.npy\", vox)\n",
1641
+ " \n",
1642
+ " np.save(f'{eval_dir}/train_test_mean_s1.npy', train_test_mean_s1)\n",
1643
+ " np.save(f'{eval_dir}/train_test_std_s1.npy', train_test_std_s1)\n",
1644
+ " np.save(f'{eval_dir}/train_test_mean_s2.npy', train_test_mean_s2)\n",
1645
+ " np.save(f'{eval_dir}/train_test_std_s2.npy', train_test_std_s2)"
1646
+ ]
1647
+ },
1648
+ {
1649
+ "cell_type": "code",
1650
+ "execution_count": 43,
1651
+ "id": "b0d9d4bd",
1652
+ "metadata": {},
1653
+ "outputs": [],
1654
+ "source": [
1655
+ "if ckpt_saving:\n",
1656
+ " # save MST_ID for 2-alternative forced-choice retrieval evaluation \n",
1657
+ " if 'MST' in model_name:\n",
1658
+ " if utils.is_interactive():\n",
1659
+ " eval_dir = os.path.join(outdir, \"eval_dir\")\n",
1660
+ " else:\n",
1661
+ " eval_dir = os.environ[\"eval_dir\"]\n",
1662
+ " print('saving MST info in', eval_dir)\n",
1663
+ " # Saving ##\n",
1664
+ " if not os.path.exists(eval_dir):\n",
1665
+ " os.mkdir(eval_dir)\n",
1666
+ "\n",
1667
+ " np.save(f\"{eval_dir}/MST_ID.npy\", MST_ID)\n",
1668
+ " np.save(f\"{eval_dir}/MST_pairmate_indices.npy\", MST_pairmate_indices)\n",
1669
+ "\n",
1670
+ " if remove_random_n:\n",
1671
+ " np.save(f\"{eval_dir}/imgs_to_remove.npy\", imgs_to_remove)\n",
1672
+ "\n",
1673
+ " np.save(f\"{eval_dir}/train_image_indices.npy\", train_image_indices)\n",
1674
+ " np.save(f\"{eval_dir}/test_image_indices.npy\", test_image_indices)\n",
1675
+ " np.save(f\"{eval_dir}/images.npy\", images)\n",
1676
+ " np.save(f\"{eval_dir}/vox.npy\", vox)\n",
1677
+ " \n",
1678
+ " np.save(f'{eval_dir}/train_test_mean_s1.npy', train_test_mean_s1)\n",
1679
+ " np.save(f'{eval_dir}/train_test_std_s1.npy', train_test_std_s1)\n",
1680
+ " np.save(f'{eval_dir}/train_test_mean_s2.npy', train_test_mean_s2)\n",
1681
+ " np.save(f'{eval_dir}/train_test_std_s2.npy', train_test_std_s2)"
1682
+ ]
1683
+ },
1684
+ {
1685
+ "cell_type": "code",
1686
+ "execution_count": 44,
1687
+ "id": "8f59503d",
1688
+ "metadata": {},
1689
+ "outputs": [],
1690
+ "source": [
1691
+ "def my_split_by_node(urls): return urls\n",
1692
+ "num_voxels_list = []\n",
1693
+ "\n",
1694
+ "if multi_subject:\n",
1695
+ " nsessions_allsubj=np.array([40, 40, 32, 30, 40, 32, 40, 30])\n",
1696
+ " num_samples_per_epoch = (750*40) // num_devices \n",
1697
+ "else:\n",
1698
+ " # num_samples_per_epoch = (750*num_sessions) // num_devices \n",
1699
+ " num_samples_per_epoch = len(train_image_indices)\n",
1700
+ "\n",
1701
+ "print(\"dividing batch size by subj_list, which will then be concatenated across subj during training...\") \n",
1702
+ "batch_size = batch_size // len(subj_list)\n",
1703
+ "\n",
1704
+ "num_iterations_per_epoch = num_samples_per_epoch // (batch_size*len(subj_list))\n",
1705
+ "\n",
1706
+ "print(\"batch_size =\", batch_size, \"num_iterations_per_epoch =\",num_iterations_per_epoch, \"num_samples_per_epoch =\",num_samples_per_epoch)"
1707
+ ]
1708
+ },
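The cell above splits the global batch across subjects and derives iterations per epoch from the split. A sketch of that arithmetic for the single-subject case, with a hypothetical sample count:

```python
# Per-subject batch accounting, mirroring the cell above.
subj_list = [1]                           # single-subject run
train_image_indices = list(range(696))    # hypothetical training-sample count
batch_size = 24

num_samples_per_epoch = len(train_image_indices)
batch_size = batch_size // len(subj_list)  # per-subject batch size
num_iterations_per_epoch = num_samples_per_epoch // (batch_size * len(subj_list))
```

With one subject the division is a no-op; with N subjects each loader yields `batch_size // N` samples per step, which are concatenated back to the full batch during training.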
1709
+ {
1710
+ "cell_type": "code",
1711
+ "execution_count": 45,
1712
+ "id": "5e5ffb53",
1713
+ "metadata": {},
1714
+ "outputs": [],
1715
+ "source": [
1716
+ "train_data = {}\n",
1717
+ "train_dl = {}\n",
1718
+ "\n",
1719
+ "train_data[f'subj0{subj}'] = torch.utils.data.TensorDataset(torch.tensor(train_image_indices))\n",
1720
+ "test_data = torch.utils.data.TensorDataset(torch.tensor(test_image_indices))"
1721
+ ]
1722
+ },
1723
+ {
1724
+ "cell_type": "code",
1725
+ "execution_count": 46,
1726
+ "id": "4c12edab",
1727
+ "metadata": {},
1728
+ "outputs": [],
1729
+ "source": [
1730
+ "num_voxels = {}\n",
1731
+ "voxels = {}\n",
1732
+ "for s in subj_list:\n",
1733
+ " print(f\"Training with {num_sessions} sessions\")\n",
1734
+ " train_dl = torch.utils.data.DataLoader(train_data[f'subj0{s}'], batch_size=batch_size, shuffle=True, drop_last=True, pin_memory=True)\n",
1735
+ "\n",
1736
+ " num_voxels_list.append(vox[0].shape[-1])\n",
1737
+ " num_voxels[f'subj0{s}'] = vox[0].shape[-1]\n",
1738
+ " voxels[f'subj0{s}'] = vox\n",
1739
+ " print(f\"num_voxels for subj0{s}: {num_voxels[f'subj0{s}']}\")\n",
1740
+ "\n",
1741
+ "print(\"Loaded all subj train dls and vox!\\n\")\n",
1742
+ "\n",
1743
+ "# Validate only on one subject\n",
1744
+ "if multi_subject: \n",
1745
+ " subj = subj_list[0] # cant validate on the actual held out person so picking first in subj_list\n",
1746
+ "test_dl = torch.utils.data.DataLoader(test_data, batch_size=24, shuffle=False, drop_last=True, pin_memory=True)\n",
1747
+ "\n",
1748
+ "print(f\"Loaded test dl for subj{subj}!\\n\")"
1749
+ ]
1750
+ },
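Both loaders above pass `drop_last=True`, so the final short batch is discarded and every step sees a full batch. A pure-Python stand-in for that behavior (the real work is done by `torch.utils.data.DataLoader`):

```python
# Sketch of DataLoader's drop_last semantics using plain lists.
def batches(indices, batch_size, drop_last=True):
    n_full = len(indices) // batch_size
    out = [indices[i * batch_size:(i + 1) * batch_size] for i in range(n_full)]
    if not drop_last and len(indices) % batch_size:
        out.append(indices[n_full * batch_size:])  # keep the remainder batch
    return out

test_image_indices = list(range(100))  # hypothetical test-set size
test_batches = batches(test_image_indices, 24)  # 4 full batches; 4 samples dropped
```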
1751
+ {
1752
+ "cell_type": "code",
1753
+ "execution_count": 47,
1754
+ "id": "e0a00122",
1755
+ "metadata": {},
1756
+ "outputs": [],
1757
+ "source": [
1758
+ "## USING OpenCLIP ViT-bigG ###\n",
1759
+ "sys.path.append('generative_models/')\n",
1760
+ "import sgm\n",
1761
+ "from generative_models.sgm.modules.encoders.modules import FrozenOpenCLIPImageEmbedder\n",
1762
+ "# from generative_models.sgm.models.diffusion import DiffusionEngine\n",
1763
+ "# from omegaconf import OmegaConf\n",
1764
+ "\n",
1765
+ "try:\n",
1766
+ " print(clip_img_embedder)\n",
1767
+ "except:\n",
1768
+ " clip_img_embedder = FrozenOpenCLIPImageEmbedder(\n",
1769
+ " arch=\"ViT-bigG-14\",\n",
1770
+ " version=\"laion2b_s39b_b160k\",\n",
1771
+ " output_tokens=True,\n",
1772
+ " only_tokens=True,\n",
1773
+ " )\n",
1774
+ " clip_img_embedder.to(device)\n",
1775
+ "clip_seq_dim = 256\n",
1776
+ "clip_emb_dim = 1664\n",
1777
+ "\n",
1778
+ "# ## USING OPEN AI CLIP ViT-L ###\n",
1779
+ "# import clip\n",
1780
+ "# try:\n",
1781
+ "# print(clip_model)\n",
1782
+ "# except:\n",
1783
+ "# clip_model, preprocess = clip.load(\"ViT-L/14\", device=device)\n",
1784
+ "# preprocess = transforms.Compose([\n",
1785
+ "# transforms.Resize(224, interpolation=transforms.InterpolationMode.BILINEAR),\n",
1786
+ "# transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],\n",
1787
+ "# std=[0.26862954, 0.26130258, 0.27577711]),\n",
1788
+ "# ])\n",
1789
+ "# def clip_img_embedder(image):\n",
1790
+ "# preproc_img = preprocess(image)\n",
1791
+ "# return clip_model.encode_image(preproc_img)\n",
1792
+ "# clip_seq_dim = 1\n",
1793
+ "# clip_emb_dim = 768"
1794
+ ]
1795
+ },
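The `try`/`except` in the cell above is a define-once idiom: the first run raises `NameError` when the embedder doesn't exist yet, so the except branch constructs it; re-running the cell reuses the cached object instead of reloading the weights. A sketch with `build_embedder` as a hypothetical stand-in for the `FrozenOpenCLIPImageEmbedder` constructor:

```python
# Define-once caching via NameError, as in the clip_img_embedder cell.
def build_embedder():
    return object()  # stand-in for the expensive model load

try:
    clip_img_embedder  # existence check, mirrors print(clip_img_embedder)
except NameError:
    clip_img_embedder = build_embedder()
first = clip_img_embedder

try:
    clip_img_embedder  # second "cell run": name now exists, no rebuild
except NameError:
    clip_img_embedder = build_embedder()
```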
1796
+ {
1797
+ "cell_type": "code",
1798
+ "execution_count": 48,
1799
+ "id": "c308f889",
1800
+ "metadata": {},
1801
+ "outputs": [],
1802
+ "source": [
1803
+ "# ## USING OpenCLIP ViT-bigG ###\n",
1804
+ "# sys.path.append('generative_models/')\n",
1805
+ "# import sgm\n",
1806
+ "# from generative_models.sgm.modules.encoders.modules import FrozenOpenCLIPImageEmbedder\n",
1807
+ "# # from generative_models.sgm.models.diffusion import DiffusionEngine\n",
1808
+ "# # from omegaconf import OmegaConf\n",
1809
+ "\n",
1810
+ "try:\n",
1811
+ " print(clip_img_embedder)\n",
1812
+ "except:\n",
1813
+ " clip_img_embedder = FrozenOpenCLIPImageEmbedder(\n",
1814
+ " arch=\"ViT-H-14\",\n",
1815
+ " version=\"laion2b_s32b_b79k\",\n",
1816
+ " output_tokens=True,\n",
1817
+ " only_tokens=True,\n",
1818
+ " )\n",
1819
+ " clip_img_embedder.to(device)\n",
1820
+ "clip_seq_dim = 256\n",
1821
+ "clip_emb_dim = 1280\n",
1822
+ "\n",
1823
+ "# # ## USING OPEN AI CLIP ViT-L ###\n",
1824
+ "# # import clip\n",
1825
+ "# # try:\n",
1826
+ "# # print(clip_model)\n",
1827
+ "# # except:\n",
1828
+ "# # clip_model, preprocess = clip.load(\"ViT-L/14\", device=device)\n",
1829
+ "# # preprocess = transforms.Compose([\n",
1830
+ "# # transforms.Resize(224, interpolation=transforms.InterpolationMode.BILINEAR),\n",
1831
+ "# # transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],\n",
1832
+ "# # std=[0.26862954, 0.26130258, 0.27577711]),\n",
1833
+ "# # ])\n",
1834
+ "# # def clip_img_embedder(image):\n",
1835
+ "# # preproc_img = preprocess(image)\n",
1836
+ "# # return clip_model.encode_image(preproc_img)\n",
1837
+ "# # clip_seq_dim = 1\n",
1838
+ "# # clip_emb_dim = 768"
1839
+ ]
1840
+ },
1841
+ {
1842
+ "cell_type": "code",
1843
+ "execution_count": 49,
1844
+ "id": "af081f8c",
1845
+ "metadata": {},
1846
+ "outputs": [],
1847
+ "source": [
1848
+ "# if running this interactively, can specify jupyter_args here for argparser to use\n",
1849
+ "if utils.is_interactive():\n",
1850
+ " model_name = 'vit-h-MST' # 'sub-001_multi_bs24_MST_rishab_MSTsplit_remove_150_random_seed_0'\n",
1851
+ " print(\"model_name:\", model_name)\n",
1852
+ " \n",
1853
+ " # global_batch_size and batch_size should already be defined in the above cells\n",
1854
+ " # other variables can be specified in the following string:\n",
1855
+ " # jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 --model_name={model_name}\"\n",
1856
+ " batch_size = 24\n",
1857
+ " jupyter_args = f\"--data_path=/scratch/gpfs/ri4541/MindEyeV2/src/mindeyev2 \\\n",
1858
+ " --model_name={model_name} \\\n",
1859
+ " --no-multi_subject --subj=1 --batch_size={batch_size} \\\n",
1860
+ " --hidden_dim=1024 --clip_scale=1. \\\n",
1861
+ " --no-blurry_recon --blur_scale=.5 \\\n",
1862
+ " --no-use_prior --prior_scale=30 \\\n",
1863
+ " --n_blocks=4 --max_lr=3e-4 --mixup_pct=.33 --num_epochs=30 --no-use_image_aug \\\n",
1864
+ " --ckpt_interval=999 --ckpt_saving --new_test \\\n",
1865
+ " --multisubject_ckpt=None --wandb_log\"\n",
1866
+ " print(jupyter_args)\n",
1867
+ " jupyter_args = jupyter_args.split()"
1868
+ ]
1869
+ },
1870
+ {
1871
+ "cell_type": "code",
1872
+ "execution_count": 50,
1873
+ "id": "d5b9cf29",
1874
+ "metadata": {},
1875
+ "outputs": [],
1876
+ "source": [
1877
+ "parser = argparse.ArgumentParser(description=\"Model Training Configuration\")\n",
1878
+ "parser.add_argument(\n",
1879
+ " \"--model_name\", type=str, default=\"testing\",\n",
1880
+ " help=\"name of model, used for ckpt saving and wandb logging (if enabled)\",\n",
1881
+ ")\n",
1882
+ "parser.add_argument(\n",
1883
+ " \"--data_path\", type=str, default=\"/weka/proj-fmri/shared/natural-scenes-dataset\",\n",
1884
+ " help=\"Path to where NSD data is stored / where to download it to\",\n",
1885
+ ")\n",
1886
+ "parser.add_argument(\n",
1887
+ " \"--subj\",type=int, default=1, choices=[1,2,3,4,5,6,7,8],\n",
1888
+ " help=\"Validate on which subject?\",\n",
1889
+ ")\n",
1890
+ "parser.add_argument(\n",
1891
+ " \"--multisubject_ckpt\", type=str, default=None,\n",
1892
+ " help=\"Path to pre-trained multisubject model to finetune a single subject from. multisubject must be False.\",\n",
1893
+ ")\n",
1894
+ "parser.add_argument(\n",
1895
+ " \"--num_sessions\", type=int, default=0,\n",
1896
+ " help=\"Number of training sessions to include (if multi_subject, this variable doesnt matter)\",\n",
1897
+ ")\n",
1898
+ "parser.add_argument(\n",
1899
+ " \"--use_prior\",action=argparse.BooleanOptionalAction,default=False,\n",
1900
+ " help=\"whether to train diffusion prior (True) or just rely on retrieval part of the pipeline (False)\",\n",
1901
+ ")\n",
1902
+ "parser.add_argument(\n",
1903
+ " \"--batch_size\", type=int, default=32,\n",
1904
+ " help=\"Batch size can be increased by 10x if only training v2c and not diffusion diffuser\",\n",
1905
+ ")\n",
1906
+ "parser.add_argument(\n",
1907
+ " \"--wandb_log\",action=argparse.BooleanOptionalAction,default=False,\n",
1908
+ " help=\"whether to log to wandb\",\n",
1909
+ ")\n",
1910
+ "parser.add_argument(\n",
1911
+ " \"--resume_from_ckpt\",action=argparse.BooleanOptionalAction,default=False,\n",
1912
+ " help=\"if not using wandb and want to resume from a ckpt\",\n",
1913
+ ")\n",
1914
+ "parser.add_argument(\n",
1915
+ " \"--wandb_project\",type=str,default=\"stability\",\n",
1916
+ " help=\"wandb project name\",\n",
1917
+ ")\n",
1918
+ "parser.add_argument(\n",
1919
+ " \"--mixup_pct\",type=float,default=.33,\n",
1920
+ " help=\"proportion of way through training when to switch from BiMixCo to SoftCLIP\",\n",
1921
+ ")\n",
1922
+ "parser.add_argument(\n",
1923
+ " \"--low_mem\",action=argparse.BooleanOptionalAction,default=False,\n",
1924
+ " help=\"whether to preload images to cpu to speed things up but consume more memory\",\n",
1925
+ ")\n",
1926
+ "parser.add_argument(\n",
1927
+ " \"--blurry_recon\",action=argparse.BooleanOptionalAction,default=True,\n",
1928
+ " help=\"whether to output blurry reconstructions\",\n",
1929
+ ")\n",
1930
+ "parser.add_argument(\n",
1931
+ " \"--blur_scale\",type=float,default=.5,\n",
1932
+ " help=\"multiply loss from blurry recons by this number\",\n",
1933
+ ")\n",
1934
+ "parser.add_argument(\n",
1935
+ " \"--clip_scale\",type=float,default=1.,\n",
1936
+ " help=\"multiply contrastive loss by this number\",\n",
1937
+ ")\n",
1938
+ "parser.add_argument(\n",
1939
+ " \"--prior_scale\",type=float,default=30,\n",
1940
+ " help=\"multiply diffusion prior loss by this\",\n",
1941
+ ")\n",
1942
+ "parser.add_argument(\n",
1943
+ " \"--use_image_aug\",action=argparse.BooleanOptionalAction,default=True,\n",
1944
+ " help=\"whether to use image augmentation\",\n",
1945
+ ")\n",
1946
+ "parser.add_argument(\n",
1947
+ " \"--num_epochs\",type=int,default=120,\n",
1948
+ " help=\"number of epochs of training\",\n",
1949
+ ")\n",
1950
+ "parser.add_argument(\n",
1951
+ " \"--multi_subject\",action=argparse.BooleanOptionalAction,default=False,\n",
1952
+ ")\n",
1953
+ "parser.add_argument(\n",
1954
+ " \"--new_test\",action=argparse.BooleanOptionalAction,default=True,\n",
1955
+ ")\n",
1956
+ "parser.add_argument(\n",
1957
+ " \"--n_blocks\",type=int,default=2,\n",
1958
+ ")\n",
1959
+ "parser.add_argument(\n",
1960
+ " \"--hidden_dim\",type=int,default=1024,\n",
1961
+ ")\n",
1962
+ "parser.add_argument(\n",
1963
+ " \"--seq_past\",type=int,default=0,\n",
1964
+ ")\n",
1965
+ "parser.add_argument(\n",
1966
+ " \"--seq_future\",type=int,default=0,\n",
1967
+ ")\n",
1968
+ "parser.add_argument(\n",
1969
+ " \"--lr_scheduler_type\",type=str,default='cycle',choices=['cycle','linear'],\n",
1970
+ ")\n",
1971
+ "parser.add_argument(\n",
1972
+ " \"--ckpt_saving\",action=argparse.BooleanOptionalAction,default=True,\n",
1973
+ ")\n",
1974
+ "parser.add_argument(\n",
1975
+ " \"--ckpt_interval\",type=int,default=5,\n",
1976
+ " help=\"save backup ckpt and reconstruct every x epochs\",\n",
1977
+ ")\n",
1978
+ "parser.add_argument(\n",
1979
+ " \"--seed\",type=int,default=42,\n",
1980
+ ")\n",
1981
+ "parser.add_argument(\n",
1982
+ " \"--max_lr\",type=float,default=3e-4,\n",
1983
+ ")\n",
1984
+ "\n",
1985
+ "if utils.is_interactive():\n",
1986
+ " args = parser.parse_args(jupyter_args)\n",
1987
+ "else:\n",
1988
+ " args = parser.parse_args()\n",
1989
+ "\n",
1990
+ "# create global variables without the args prefix\n",
1991
+ "for attribute_name in vars(args).keys():\n",
1992
+ " globals()[attribute_name] = getattr(args, attribute_name)\n",
1993
+ " \n",
1994
+ "outdir = os.path.abspath(f'./train_logs/{model_name}')\n",
1995
+ "if not os.path.exists(outdir) and ckpt_saving:\n",
1996
+ " os.makedirs(outdir,exist_ok=True)\n",
1997
+ " \n",
1998
+ "if use_image_aug or blurry_recon:\n",
1999
+ " import kornia\n",
2000
+ " import kornia.augmentation as K\n",
2001
+ " from kornia.augmentation.container import AugmentationSequential\n",
2002
+ "if use_image_aug:\n",
2003
+ " img_augment = AugmentationSequential(\n",
2004
+ " kornia.augmentation.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.3),\n",
2005
+ " same_on_batch=False,\n",
2006
+ " data_keys=[\"input\"],\n",
2007
+ " )\n",
2008
+ " # Define the blurring augmentations\n",
2009
+ " blur_augment = K.RandomGaussianBlur(kernel_size=(21, 21), sigma=(51.0, 51.0), p=1.)\n",
2010
+ " \n",
2011
+ "if multi_subject:\n",
2012
+ " subj_list = np.arange(1,9)\n",
2013
+ " subj_list = subj_list[subj_list != subj]\n",
2014
+ "else:\n",
2015
+ " subj_list = [subj]\n",
2016
+ "\n",
2017
+ "print(\"subj_list\", subj_list, \"num_sessions\", num_sessions)"
2018
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "id": "925f533f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model = utils.prepare_model_and_training(\n",
+ "    num_voxels_list=num_voxels_list,\n",
+ "    n_blocks=n_blocks,\n",
+ "    hidden_dim=hidden_dim,\n",
+ "    clip_emb_dim=clip_emb_dim,\n",
+ "    clip_seq_dim=clip_seq_dim,\n",
+ "    use_prior=use_prior,\n",
+ "    clip_scale=clip_scale\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 52,
+ "id": "4572d154",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# test on subject 1 with fake data\n",
+ "b = torch.randn((2,1,num_voxels_list[0]))\n",
+ "print(b.shape, model.ridge(b,0).shape)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 53,
+ "id": "fed5fade",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# test that the model works on some fake data\n",
+ "b = torch.randn((2,1,hidden_dim))\n",
+ "print(\"b.shape\",b.shape)\n",
+ "\n",
+ "backbone_, clip_, blur_ = model.backbone(b)\n",
+ "print(backbone_.shape, clip_.shape, blur_[0].shape, blur_[1].shape)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 54,
+ "id": "ca55bf63",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if use_prior:\n",
+ "    from models import *\n",
+ "\n",
+ "    # setup diffusion prior network\n",
+ "    out_dim = clip_emb_dim\n",
+ "    depth = 6\n",
+ "    dim_head = 52\n",
+ "    heads = clip_emb_dim//52 # heads * dim_head = clip_emb_dim\n",
+ "    timesteps = 100\n",
+ "\n",
+ "    prior_network = VersatileDiffusionPriorNetwork(\n",
+ "        dim=out_dim,\n",
+ "        depth=depth,\n",
+ "        dim_head=dim_head,\n",
+ "        heads=heads,\n",
+ "        causal=False,\n",
+ "        num_tokens = clip_seq_dim,\n",
+ "        learned_query_mode=\"pos_emb\"\n",
+ "    )\n",
+ "\n",
+ "    model.diffusion_prior = BrainDiffusionPrior(\n",
+ "        net=prior_network,\n",
+ "        image_embed_dim=out_dim,\n",
+ "        condition_on_text_encodings=False,\n",
+ "        timesteps=timesteps,\n",
+ "        cond_drop_prob=0.2,\n",
+ "        image_embed_scale=None,\n",
+ "    )\n",
+ "\n",
+ "    utils.count_params(model.diffusion_prior)\n",
+ "    utils.count_params(model)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 55,
+ "id": "04a6fed8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']\n",
+ "\n",
+ "opt_grouped_parameters = [\n",
+ "    {'params': [p for n, p in model.ridge.named_parameters()], 'weight_decay': 1e-2},\n",
+ "    {'params': [p for n, p in model.backbone.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 1e-2},\n",
+ "    {'params': [p for n, p in model.backbone.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0},\n",
+ "]\n",
+ "# model.backbone.requires_grad_(False)\n",
+ "\n",
+ "if use_prior:\n",
+ "    opt_grouped_parameters.extend([\n",
+ "        {'params': [p for n, p in model.diffusion_prior.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 1e-2},\n",
+ "        {'params': [p for n, p in model.diffusion_prior.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}\n",
+ "    ])\n",
+ "\n",
+ "optimizer = torch.optim.AdamW(opt_grouped_parameters, lr=max_lr)\n",
+ "\n",
+ "if lr_scheduler_type == 'linear':\n",
+ "    lr_scheduler = torch.optim.lr_scheduler.LinearLR(\n",
+ "        optimizer,\n",
+ "        total_iters=int(np.floor(num_epochs*num_iterations_per_epoch)),\n",
+ "        last_epoch=-1\n",
+ "    )\n",
+ "elif lr_scheduler_type == 'cycle':\n",
+ "    if num_iterations_per_epoch==0:\n",
+ "        num_iterations_per_epoch=1\n",
+ "    total_steps=int(np.floor(num_epochs*num_iterations_per_epoch))\n",
+ "    print(\"total_steps\", total_steps)\n",
+ "    lr_scheduler = torch.optim.lr_scheduler.OneCycleLR(\n",
+ "        optimizer,\n",
+ "        max_lr=max_lr,\n",
+ "        total_steps=total_steps,\n",
+ "        final_div_factor=1000,\n",
+ "        last_epoch=-1, pct_start=2/num_epochs\n",
+ "    )\n",
+ "\n",
+ "def save_ckpt(tag):\n",
+ "    ckpt_path = outdir+f'/{tag}.pth'\n",
+ "    if accelerator.is_main_process:\n",
+ "        unwrapped_model = accelerator.unwrap_model(model)\n",
+ "        torch.save({\n",
+ "            'epoch': epoch,\n",
+ "            'model_state_dict': unwrapped_model.state_dict(),\n",
+ "            'optimizer_state_dict': optimizer.state_dict(),\n",
+ "            'lr_scheduler': lr_scheduler.state_dict(),\n",
+ "            'train_losses': losses,\n",
+ "            'test_losses': test_losses,\n",
+ "            'lrs': lrs,\n",
+ "        }, ckpt_path)\n",
+ "        print(f\"\\n---saved {outdir}/{tag} ckpt!---\\n\")\n",
+ "\n",
+ "def load_ckpt(tag,load_lr=True,load_optimizer=True,load_epoch=True,strict=True,outdir=outdir,multisubj_loading=False):\n",
+ "    print(f\"\\n---loading {outdir}/{tag}.pth ckpt---\\n\")\n",
+ "    # load the requested tag (was hardcoded to 'last.pth', which silently ignored `tag`)\n",
+ "    checkpoint = torch.load(outdir+f'/{tag}.pth', map_location='cpu')\n",
+ "    state_dict = checkpoint['model_state_dict']\n",
+ "    if multisubj_loading: # remove incompatible ridge layer that will otherwise error\n",
+ "        state_dict.pop('ridge.linears.0.weight',None)\n",
+ "    model.load_state_dict(state_dict, strict=strict)\n",
+ "    if load_epoch:\n",
+ "        globals()[\"epoch\"] = checkpoint['epoch']\n",
+ "        print(\"Epoch\",epoch)\n",
+ "    if load_optimizer:\n",
+ "        optimizer.load_state_dict(checkpoint['optimizer_state_dict'])\n",
+ "    if load_lr:\n",
+ "        lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])\n",
+ "    del checkpoint\n",
+ "\n",
+ "print(\"\\nDone with model preparations!\")\n",
+ "num_params = utils.count_params(model)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "id": "0d2a0961",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if local_rank==0 and wandb_log: # only use main process for wandb logging\n",
+ "    import wandb\n",
+ "    import time\n",
+ "\n",
+ "    wandb_project = 'rtmindeye'\n",
+ "    print(f\"wandb {wandb_project} run {model_name}\")\n",
+ "\n",
+ "    # Need to configure wandb beforehand in terminal with \"wandb init\"!\n",
+ "    wandb_config = {\n",
+ "        \"model_name\": model_name,\n",
+ "        \"global_batch_size\": global_batch_size,\n",
+ "        \"batch_size\": batch_size,\n",
+ "        \"num_epochs\": num_epochs,\n",
+ "        \"num_sessions\": num_sessions,\n",
+ "        \"num_params\": num_params,\n",
+ "        \"clip_scale\": clip_scale,\n",
+ "        \"prior_scale\": prior_scale,\n",
+ "        \"blur_scale\": blur_scale,\n",
+ "        \"use_image_aug\": use_image_aug,\n",
+ "        \"max_lr\": max_lr,\n",
+ "        \"mixup_pct\": mixup_pct,\n",
+ "        \"num_samples_per_epoch\": num_samples_per_epoch,\n",
+ "        \"ckpt_interval\": ckpt_interval,\n",
+ "        \"ckpt_saving\": ckpt_saving,\n",
+ "        \"seed\": seed, # SLURM array task ID\n",
+ "        \"distributed\": distributed,\n",
+ "        \"num_devices\": num_devices,\n",
+ "        \"world_size\": world_size,\n",
+ "    }\n",
+ "    print(\"wandb_config:\\n\", wandb_config)\n",
+ "    print(\"wandb_id:\", model_name)\n",
+ "\n",
+ "    # Initialize wandb\n",
+ "    wandb.init(\n",
+ "        id=model_name,\n",
+ "        project=wandb_project,\n",
+ "        name=model_name,\n",
+ "        config=wandb_config,\n",
+ "        resume=\"allow\",\n",
+ "        save_code=True,\n",
+ "    )\n",
+ "\n",
+ "    # Get SLURM job & array ID\n",
+ "    slurm_job_id = utils.get_slurm_job()\n",
+ "    slurm_array_id = seed # seed corresponds to SLURM_ARRAY_TASK_ID\n",
+ "\n",
+ "    # Define SLURM log paths\n",
+ "    log_dir = \"slurms\"\n",
+ "    log_files = [\n",
+ "        f\"{log_dir}/{slurm_job_id}_{slurm_array_id}.out\",\n",
+ "        f\"{log_dir}/{slurm_job_id}_{slurm_array_id}.err\",\n",
+ "    ]\n",
+ "\n",
+ "    # Ensure logs exist before logging them\n",
+ "    for log_file in log_files:\n",
+ "        wait_time = 0\n",
+ "        while not os.path.exists(log_file) and wait_time < 60: # Wait max 60s\n",
+ "            time.sleep(5)\n",
+ "            wait_time += 5\n",
+ "\n",
+ "    # Log SLURM logs as artifacts\n",
+ "    artifact = wandb.Artifact(f\"slurm_logs_{slurm_job_id}_{slurm_array_id}\", type=\"logs\")\n",
+ "    for log_file in log_files:\n",
+ "        if os.path.exists(log_file):\n",
+ "            artifact.add_file(log_file)\n",
+ "\n",
+ "    wandb.log_artifact(artifact)\n",
+ "else:\n",
+ "    wandb_log = False"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 57,
+ "id": "ea0b850a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if local_rank==0 and wandb_log: # only use main process for wandb logging\n",
+ "    import wandb\n",
+ "    import time\n",
+ "\n",
+ "    wandb_project = 'rtmindeye'\n",
+ "    print(f\"wandb {wandb_project} run {model_name}\")\n",
+ "\n",
+ "    # Need to configure wandb beforehand in terminal with \"wandb init\"!\n",
+ "    wandb_config = {\n",
+ "        \"model_name\": model_name,\n",
+ "        \"global_batch_size\": global_batch_size,\n",
+ "        \"batch_size\": batch_size,\n",
+ "        \"num_epochs\": num_epochs,\n",
+ "        \"num_sessions\": num_sessions,\n",
+ "        \"num_params\": num_params,\n",
+ "        \"clip_scale\": clip_scale,\n",
+ "        \"prior_scale\": prior_scale,\n",
+ "        \"blur_scale\": blur_scale,\n",
+ "        \"use_image_aug\": use_image_aug,\n",
+ "        \"max_lr\": max_lr,\n",
+ "        \"mixup_pct\": mixup_pct,\n",
+ "        \"num_samples_per_epoch\": num_samples_per_epoch,\n",
+ "        \"ckpt_interval\": ckpt_interval,\n",
+ "        \"ckpt_saving\": ckpt_saving,\n",
+ "        \"seed\": seed, # SLURM array task ID\n",
+ "        \"distributed\": distributed,\n",
+ "        \"num_devices\": num_devices,\n",
+ "        \"world_size\": world_size,\n",
+ "    }\n",
+ "    print(\"wandb_config:\\n\", wandb_config)\n",
+ "    print(\"wandb_id:\", model_name)\n",
+ "\n",
+ "    # Initialize wandb\n",
+ "    wandb.init(\n",
+ "        id=model_name,\n",
+ "        project=wandb_project,\n",
+ "        name=model_name,\n",
+ "        config=wandb_config,\n",
+ "        resume=\"allow\",\n",
+ "        save_code=True,\n",
+ "    )\n",
+ "\n",
+ "    # Get SLURM job & array ID\n",
+ "    try:\n",
+ "        slurm_job_id = utils.get_slurm_job()\n",
+ "        slurm_array_id = seed # seed corresponds to SLURM_ARRAY_TASK_ID\n",
+ "\n",
+ "        # Define SLURM log paths\n",
+ "        log_dir = \"slurms\"\n",
+ "        log_files = [\n",
+ "            f\"{log_dir}/{slurm_job_id}_{slurm_array_id}.out\",\n",
+ "            f\"{log_dir}/{slurm_job_id}_{slurm_array_id}.err\",\n",
+ "        ]\n",
+ "\n",
+ "        # Ensure logs exist before logging them\n",
+ "        for log_file in log_files:\n",
+ "            wait_time = 0\n",
+ "            while not os.path.exists(log_file) and wait_time < 60: # Wait max 60s\n",
+ "                time.sleep(5)\n",
+ "                wait_time += 5\n",
+ "\n",
+ "        # Log SLURM logs as artifacts\n",
+ "        artifact = wandb.Artifact(f\"slurm_logs_{slurm_job_id}_{slurm_array_id}\", type=\"logs\")\n",
+ "        for log_file in log_files:\n",
+ "            if os.path.exists(log_file):\n",
+ "                artifact.add_file(log_file)\n",
+ "\n",
+ "        wandb.log_artifact(artifact)\n",
+ "\n",
+ "    except Exception as e:\n",
+ "        print(f\"Alert: could not log SLURM logs as wandb artifacts: {e}\")\n",
+ "else:\n",
+ "    wandb_log = False"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }
wandb/run-20250809_151147-vit-h-MST/files/config.yaml ADDED
@@ -0,0 +1,112 @@
1
+ wandb_version: 1
2
+
3
+ model_name:
4
+ desc: null
5
+ value: vit-h-MST
6
+ global_batch_size:
7
+ desc: null
8
+ value: 8
9
+ batch_size:
10
+ desc: null
11
+ value: 24
12
+ num_epochs:
13
+ desc: null
14
+ value: 30
15
+ num_sessions:
16
+ desc: null
17
+ value: 0
18
+ num_params:
19
+ desc: null
20
+ value: 358038808
21
+ clip_scale:
22
+ desc: null
23
+ value: 1.0
24
+ prior_scale:
25
+ desc: null
26
+ value: 30.0
27
+ blur_scale:
28
+ desc: null
29
+ value: 0.5
30
+ use_image_aug:
31
+ desc: null
32
+ value: false
33
+ max_lr:
34
+ desc: null
35
+ value: 0.0003
36
+ mixup_pct:
37
+ desc: null
38
+ value: 0.33
39
+ num_samples_per_epoch:
40
+ desc: null
41
+ value: 1138
42
+ ckpt_interval:
43
+ desc: null
44
+ value: 999
45
+ ckpt_saving:
46
+ desc: null
47
+ value: true
48
+ seed:
49
+ desc: null
50
+ value: 42
51
+ distributed:
52
+ desc: null
53
+ value: false
54
+ num_devices:
55
+ desc: null
56
+ value: 1
57
+ world_size:
58
+ desc: null
59
+ value: 1
60
+ _wandb:
61
+ desc: null
62
+ value:
63
+ python_version: 3.11.13
64
+ cli_version: 0.17.2
65
+ framework: huggingface
66
+ huggingface_version: 4.37.2
67
+ is_jupyter_run: true
68
+ is_kaggle_kernel: false
69
+ start_time: 1754752311
70
+ t:
71
+ 1:
72
+ - 1
73
+ - 5
74
+ - 9
75
+ - 11
76
+ - 41
77
+ - 49
78
+ - 53
79
+ - 55
80
+ - 63
81
+ - 71
82
+ - 79
83
+ - 83
84
+ - 103
85
+ 2:
86
+ - 1
87
+ - 5
88
+ - 9
89
+ - 11
90
+ - 41
91
+ - 49
92
+ - 53
93
+ - 55
94
+ - 63
95
+ - 71
96
+ - 79
97
+ - 83
98
+ - 103
99
+ 3:
100
+ - 5
101
+ - 13
102
+ - 14
103
+ - 16
104
+ - 23
105
+ - 62
106
+ 4: 3.11.13
107
+ 5: 0.17.2
108
+ 6: 4.37.2
109
+ 8:
110
+ - 1
111
+ - 5
112
+ 13: linux-x86_64
wandb/run-20250809_152227-vit-h-MST/files/config.yaml ADDED
@@ -0,0 +1,111 @@
1
+ wandb_version: 1
2
+
3
+ model_name:
4
+ desc: null
5
+ value: vit-h-MST
6
+ global_batch_size:
7
+ desc: null
8
+ value: 8
9
+ batch_size:
10
+ desc: null
11
+ value: 24
12
+ num_epochs:
13
+ desc: null
14
+ value: 150
15
+ num_sessions:
16
+ desc: null
17
+ value: 0
18
+ num_params:
19
+ desc: null
20
+ value: 511732360
21
+ clip_scale:
22
+ desc: null
23
+ value: 1.0
24
+ prior_scale:
25
+ desc: null
26
+ value: 30.0
27
+ blur_scale:
28
+ desc: null
29
+ value: 0.5
30
+ use_image_aug:
31
+ desc: null
32
+ value: false
33
+ max_lr:
34
+ desc: null
35
+ value: 0.0003
36
+ mixup_pct:
37
+ desc: null
38
+ value: 0.33
39
+ num_samples_per_epoch:
40
+ desc: null
41
+ value: 1138
42
+ ckpt_interval:
43
+ desc: null
44
+ value: 999
45
+ ckpt_saving:
46
+ desc: null
47
+ value: true
48
+ seed:
49
+ desc: null
50
+ value: 0
51
+ distributed:
52
+ desc: null
53
+ value: false
54
+ num_devices:
55
+ desc: null
56
+ value: 1
57
+ world_size:
58
+ desc: null
59
+ value: 1
60
+ _wandb:
61
+ desc: null
62
+ value:
63
+ python_version: 3.11.13
64
+ cli_version: 0.17.2
65
+ framework: huggingface
66
+ huggingface_version: 4.37.2
67
+ is_jupyter_run: true
68
+ is_kaggle_kernel: false
69
+ start_time: 1754752947
70
+ t:
71
+ 1:
72
+ - 1
73
+ - 5
74
+ - 9
75
+ - 11
76
+ - 41
77
+ - 49
78
+ - 53
79
+ - 55
80
+ - 63
81
+ - 71
82
+ - 79
83
+ - 83
84
+ - 103
85
+ 2:
86
+ - 1
87
+ - 5
88
+ - 9
89
+ - 11
90
+ - 41
91
+ - 49
92
+ - 53
93
+ - 55
94
+ - 63
95
+ - 71
96
+ - 79
97
+ - 83
98
+ - 103
99
+ 3:
100
+ - 5
101
+ - 13
102
+ - 14
103
+ - 16
104
+ - 23
105
+ 4: 3.11.13
106
+ 5: 0.17.2
107
+ 6: 4.37.2
108
+ 8:
109
+ - 1
110
+ - 5
111
+ 13: linux-x86_64
wandb/run-20250809_152227-vit-h-MST/files/requirements.txt ADDED
@@ -0,0 +1,230 @@
1
+ CoCa-pytorch==0.1.0
2
+ Django==5.2.5
3
+ GitPython==3.1.45
4
+ Jinja2==3.1.6
5
+ MarkupSafe==3.0.2
6
+ PyYAML==6.0.2
7
+ Pygments==2.19.2
8
+ Send2Trash==1.8.3
9
+ accelerate==0.24.1
10
+ aiohappyeyeballs==2.6.1
11
+ aiohttp==3.12.15
12
+ aiosignal==1.4.0
13
+ annotated-types==0.7.0
14
+ antlr4-python3-runtime==4.9.3
15
+ ants==0.0.7
16
+ anyio==4.10.0
17
+ argon2-cffi-bindings==25.1.0
18
+ argon2-cffi==25.1.0
19
+ arrow==1.3.0
20
+ asgiref==3.9.1
21
+ asttokens==3.0.0
22
+ async-lru==2.0.5
23
+ attrs==25.3.0
24
+ autocommand==2.2.2
25
+ babel==2.17.0
26
+ backports.tarfile==1.2.0
27
+ beartype==0.21.0
28
+ beautifulsoup4==4.13.4
29
+ bleach==6.2.0
30
+ braceexpand==0.1.7
31
+ certifi==2025.8.3
32
+ cffi==1.17.1
33
+ charset-normalizer==3.4.3
34
+ click==8.2.1
35
+ clip-anytorch==2.6.0
36
+ clip==0.2.0
37
+ comm==0.2.3
38
+ contourpy==1.3.3
39
+ cycler==0.12.1
40
+ dalle2-pytorch==1.15.6
41
+ debugpy==1.8.16
42
+ decorator==5.2.1
43
+ defusedxml==0.7.1
44
+ diffusers==0.23.0
45
+ docker-pycreds==0.4.0
46
+ einops==0.7.0
47
+ einx==0.3.0
48
+ ema-pytorch==0.7.7
49
+ embedding-reader==1.7.0
50
+ executing==2.2.0
51
+ fastjsonschema==2.21.1
52
+ filelock==3.18.0
53
+ fonttools==4.59.0
54
+ fqdn==1.5.1
55
+ frozendict==2.4.6
56
+ frozenlist==1.7.0
57
+ fsspec==2025.7.0
58
+ ftfy==6.3.1
59
+ gevent==25.5.1
60
+ gitdb==4.0.12
61
+ greenlet==3.2.4
62
+ h11==0.16.0
63
+ h5py==3.10.0
64
+ hf-xet==1.1.7
65
+ httpcore==1.0.9
66
+ httpx==0.28.1
67
+ huggingface-hub==0.34.4
68
+ idna==3.10
69
+ imageio==2.37.0
70
+ importlib_metadata==8.0.0
71
+ importlib_metadata==8.7.0
72
+ inflect==7.3.1
73
+ ipykernel==6.30.1
74
+ ipython==9.4.0
75
+ ipython_pygments_lexers==1.1.1
76
+ ipywidgets==8.1.7
77
+ isoduration==20.11.0
78
+ jaraco.collections==5.1.0
79
+ jaraco.context==5.3.0
80
+ jaraco.functools==4.0.1
81
+ jaraco.text==3.12.1
82
+ jedi==0.19.2
83
+ joblib==1.5.1
84
+ json5==0.12.0
85
+ jsonpointer==3.0.0
86
+ jsonschema-specifications==2025.4.1
87
+ jsonschema==4.25.0
88
+ jupyter-console==6.6.3
89
+ jupyter-events==0.12.0
90
+ jupyter-lsp==2.2.6
91
+ jupyter==1.1.1
92
+ jupyter_client==8.6.3
93
+ jupyter_core==5.8.1
94
+ jupyter_server==2.16.0
95
+ jupyter_server_terminals==0.5.3
96
+ jupyterlab==4.4.5
97
+ jupyterlab_nvdashboard==0.13.0
98
+ jupyterlab_pygments==0.3.0
99
+ jupyterlab_server==2.27.3
100
+ jupyterlab_widgets==3.0.15
101
+ kiwisolver==1.4.8
102
+ kornia==0.8.1
103
+ kornia_rs==0.1.9
104
+ lark==1.2.2
105
+ lazy_loader==0.4
106
+ lightning-utilities==0.15.2
107
+ lxml==6.0.0
108
+ matplotlib-inline==0.1.7
109
+ matplotlib==3.8.2
110
+ mistune==3.1.3
111
+ more-itertools==10.3.0
112
+ mpmath==1.3.0
113
+ multidict==6.6.3
114
+ nbclient==0.10.2
115
+ nbconvert==7.16.6
116
+ nbformat==5.10.4
117
+ nest-asyncio==1.6.0
118
+ networkx==3.5
119
+ nibabel==5.2.1
120
+ nilearn==0.12.0
121
+ notebook==7.4.5
122
+ notebook_shim==0.2.4
123
+ numpy==1.26.4
124
+ nvidia-cublas-cu12==12.4.5.8
125
+ nvidia-cuda-cupti-cu12==12.4.127
126
+ nvidia-cuda-nvrtc-cu12==12.4.127
127
+ nvidia-cuda-runtime-cu12==12.4.127
128
+ nvidia-cudnn-cu12==9.1.0.70
129
+ nvidia-cufft-cu12==11.2.1.3
130
+ nvidia-curand-cu12==10.3.5.147
131
+ nvidia-cusolver-cu12==11.6.1.9
132
+ nvidia-cusparse-cu12==12.3.1.170
133
+ nvidia-ml-py==12.575.51
134
+ nvidia-nccl-cu12==2.21.5
135
+ nvidia-nvjitlink-cu12==12.4.127
136
+ nvidia-nvtx-cu12==12.4.127
137
+ omegaconf==2.3.0
138
+ open-clip-torch==2.24.0
139
+ overrides==7.7.0
140
+ packaging==24.2
141
+ packaging==25.0
142
+ pandas==2.2.0
143
+ pandocfilters==1.5.1
144
+ parso==0.8.4
145
+ pexpect==4.9.0
146
+ pillow==10.2.0
147
+ platformdirs==4.2.2
148
+ platformdirs==4.3.8
149
+ prometheus_client==0.22.1
150
+ prompt_toolkit==3.0.51
151
+ propcache==0.3.2
152
+ protobuf==5.29.5
153
+ psutil==7.0.0
154
+ ptyprocess==0.7.0
155
+ pure_eval==0.2.3
156
+ pyarrow==15.0.2
157
+ pycparser==2.22
158
+ pydantic==2.11.7
159
+ pydantic_core==2.33.2
160
+ pynvml==12.0.0
161
+ pyparsing==3.2.3
162
+ python-dateutil==2.9.0.post0
163
+ python-json-logger==3.3.0
164
+ pytorch-lightning==2.5.2
165
+ pytorch-warmup==0.2.0
166
+ pytz==2025.2
167
+ pyzmq==27.0.1
168
+ referencing==0.36.2
169
+ regex==2025.7.34
170
+ requests==2.32.4
171
+ resize-right==0.0.2
172
+ rfc3339-validator==0.1.4
173
+ rfc3986-validator==0.1.1
174
+ rfc3987-syntax==1.1.0
175
+ rotary-embedding-torch==0.8.9
176
+ rpds-py==0.27.0
177
+ safetensors==0.6.2
178
+ scikit-image==0.25.2
179
+ scikit-learn==1.4.1.post1
180
+ scipy==1.12.0
181
+ sentencepiece==0.2.0
182
+ sentry-sdk==2.34.1
183
+ setproctitle==1.3.6
184
+ setuptools==80.9.0
185
+ six==1.17.0
186
+ smmap==5.0.2
187
+ sniffio==1.3.1
188
+ soupsieve==2.7
189
+ sqlparse==0.5.3
190
+ stack-data==0.6.3
191
+ sympy==1.13.1
192
+ terminado==0.18.1
193
+ threadpoolctl==3.6.0
194
+ tifffile==2025.6.11
195
+ timm==1.0.19
196
+ tinycss2==1.4.0
197
+ tokenizers==0.15.2
198
+ tomli==2.0.1
199
+ torch-fidelity==0.3.0
200
+ torch==2.5.1
201
+ torchmetrics==1.8.1
202
+ torchvision==0.20.1
203
+ tornado==6.5.2
204
+ tqdm==4.66.2
205
+ traitlets==5.14.3
206
+ transformers==4.37.2
207
+ triton==3.1.0
208
+ typeguard==4.3.0
209
+ types-python-dateutil==2.9.0.20250809
210
+ typing-inspection==0.4.1
211
+ typing_extensions==4.12.2
212
+ typing_extensions==4.14.1
213
+ tzdata==2025.2
214
+ uri-template==1.3.0
215
+ urllib3==2.5.0
216
+ vector_quantize_pytorch==1.14.7
217
+ wandb==0.17.2
218
+ wcwidth==0.2.13
219
+ webcolors==24.11.1
220
+ webdataset==0.2.73
221
+ webencodings==0.5.1
222
+ websocket-client==1.8.0
223
+ wheel==0.45.1
224
+ widgetsnbextension==4.0.14
225
+ x-clip==0.14.4
226
+ yarl==1.20.1
227
+ zipp==3.19.2
228
+ zipp==3.23.0
229
+ zope.event==5.1.1
230
+ zope.interface==7.2
wandb/run-20250809_152227-vit-h-MST/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
+ {"subset_0_0_test/loss": 15.603862762451172, "subset_0_0_test/loss_clip_total": 2.6600615978240967, "subset_0_0_test/loss_prior": 0.43146002292633057, "subset_0_0_test/blurry_pixcorr": 0.0, "subset_0_0_test/fwd_pct_correct": 0.30645161867141724, "subset_0_0_test/bwd_pct_correct": 0.25806450843811035, "subset_0_1_test/loss": 20.326326370239258, "subset_0_1_test/loss_clip_total": 2.8700623512268066, "subset_0_1_test/loss_prior": 0.5818755030632019, "subset_0_1_test/blurry_pixcorr": 0.0, "subset_0_1_test/fwd_pct_correct": 0.32258063554763794, "subset_0_1_test/bwd_pct_correct": 0.22580644488334656, "subset_1_0_test/loss": 18.277183532714844, "subset_1_0_test/loss_clip_total": 2.982631206512451, "subset_1_0_test/loss_prior": 0.509818434715271, "subset_1_0_test/blurry_pixcorr": 0.0, "subset_1_0_test/fwd_pct_correct": 0.24193547666072845, "subset_1_0_test/bwd_pct_correct": 0.14516128599643707, "subset_1_1_test/loss": 13.336342811584473, "subset_1_1_test/loss_clip_total": 3.0331695079803467, "subset_1_1_test/loss_prior": 0.34343910217285156, "subset_1_1_test/blurry_pixcorr": 0.0, "subset_1_1_test/fwd_pct_correct": 0.24193547666072845, "subset_1_1_test/bwd_pct_correct": 0.20967741310596466, "train/loss": 6.496718102313102, "train/lr": 1.1999999999999998e-08, "train/num_steps": 7050, "train/fwd_pct_correct": 0.9902482349821862, "train/bwd_pct_correct": 0.9902482349821862, "train/loss_clip_total": 0.014585590695942495, "train/loss_blurry_total": 0.0, "train/loss_blurry_cont_total": 0.0, "train/blurry_pixcorr": 0.0, "train/recon_cossim": 0.8349524990041205, "train/recon_mse": 0.21607108477582324, "train/loss_prior": 0.21607108477582324, "_timestamp": 1754754981.5695724, "_runtime": 2071.16503739357, "_step": 149}
wandb/run-20250809_152227-vit-h-MST/logs/debug-internal.log ADDED
The diff for this file is too large to render. See raw diff
 
wandb/run-20250809_153455-sdxl_turbo-MST/files/diff.patch ADDED
The diff for this file is too large to render. See raw diff
 
wandb/run-20250809_153455-sdxl_turbo-MST/files/output.log ADDED
@@ -0,0 +1 @@
+
wandb/run-20250809_153455-sdxl_turbo-MST/files/requirements.txt ADDED
@@ -0,0 +1,230 @@
1
+ CoCa-pytorch==0.1.0
2
+ Django==5.2.5
3
+ GitPython==3.1.45
4
+ Jinja2==3.1.6
5
+ MarkupSafe==3.0.2
6
+ PyYAML==6.0.2
7
+ Pygments==2.19.2
8
+ Send2Trash==1.8.3
9
+ accelerate==0.24.1
10
+ aiohappyeyeballs==2.6.1
11
+ aiohttp==3.12.15
12
+ aiosignal==1.4.0
13
+ annotated-types==0.7.0
14
+ antlr4-python3-runtime==4.9.3
15
+ ants==0.0.7
16
+ anyio==4.10.0
17
+ argon2-cffi-bindings==25.1.0
18
+ argon2-cffi==25.1.0
19
+ arrow==1.3.0
20
+ asgiref==3.9.1
21
+ asttokens==3.0.0
22
+ async-lru==2.0.5
23
+ attrs==25.3.0
24
+ autocommand==2.2.2
25
+ babel==2.17.0
26
+ backports.tarfile==1.2.0
27
+ beartype==0.21.0
28
+ beautifulsoup4==4.13.4
29
+ bleach==6.2.0
30
+ braceexpand==0.1.7
31
+ certifi==2025.8.3
32
+ cffi==1.17.1
33
+ charset-normalizer==3.4.3
34
+ click==8.2.1
35
+ clip-anytorch==2.6.0
36
+ clip==0.2.0
37
+ comm==0.2.3
38
+ contourpy==1.3.3
39
+ cycler==0.12.1
40
+ dalle2-pytorch==1.15.6
41
+ debugpy==1.8.16
42
+ decorator==5.2.1
43
+ defusedxml==0.7.1
44
+ diffusers==0.23.0
45
+ docker-pycreds==0.4.0
46
+ einops==0.7.0
47
+ einx==0.3.0
48
+ ema-pytorch==0.7.7
49
+ embedding-reader==1.7.0
50
+ executing==2.2.0
51
+ fastjsonschema==2.21.1
52
+ filelock==3.18.0
53
+ fonttools==4.59.0
54
+ fqdn==1.5.1
55
+ frozendict==2.4.6
56
+ frozenlist==1.7.0
57
+ fsspec==2025.7.0
58
+ ftfy==6.3.1
59
+ gevent==25.5.1
60
+ gitdb==4.0.12
61
+ greenlet==3.2.4
62
+ h11==0.16.0
63
+ h5py==3.10.0
64
+ hf-xet==1.1.7
65
+ httpcore==1.0.9
66
+ httpx==0.28.1
67
+ huggingface-hub==0.34.4
68
+ idna==3.10
69
+ imageio==2.37.0
70
+ importlib_metadata==8.0.0
71
+ importlib_metadata==8.7.0
72
+ inflect==7.3.1
73
+ ipykernel==6.30.1
74
+ ipython==9.4.0
75
+ ipython_pygments_lexers==1.1.1
76
+ ipywidgets==8.1.7
77
+ isoduration==20.11.0
78
+ jaraco.collections==5.1.0
79
+ jaraco.context==5.3.0
80
+ jaraco.functools==4.0.1
81
+ jaraco.text==3.12.1
82
+ jedi==0.19.2
83
+ joblib==1.5.1
84
+ json5==0.12.0
85
+ jsonpointer==3.0.0
86
+ jsonschema-specifications==2025.4.1
87
+ jsonschema==4.25.0
88
+ jupyter-console==6.6.3
89
+ jupyter-events==0.12.0
90
+ jupyter-lsp==2.2.6
91
+ jupyter==1.1.1
92
+ jupyter_client==8.6.3
93
+ jupyter_core==5.8.1
94
+ jupyter_server==2.16.0
95
+ jupyter_server_terminals==0.5.3
96
+ jupyterlab==4.4.5
97
+ jupyterlab_nvdashboard==0.13.0
98
+ jupyterlab_pygments==0.3.0
99
+ jupyterlab_server==2.27.3
100
+ jupyterlab_widgets==3.0.15
101
+ kiwisolver==1.4.8
102
+ kornia==0.8.1
103
+ kornia_rs==0.1.9
104
+ lark==1.2.2
105
+ lazy_loader==0.4
106
+ lightning-utilities==0.15.2
107
+ lxml==6.0.0
108
+ matplotlib-inline==0.1.7
109
+ matplotlib==3.8.2
110
+ mistune==3.1.3
111
+ more-itertools==10.3.0
112
+ mpmath==1.3.0
113
+ multidict==6.6.3
114
+ nbclient==0.10.2
115
+ nbconvert==7.16.6
116
+ nbformat==5.10.4
117
+ nest-asyncio==1.6.0
118
+ networkx==3.5
119
+ nibabel==5.2.1
120
+ nilearn==0.12.0
121
+ notebook==7.4.5
122
+ notebook_shim==0.2.4
123
+ numpy==1.26.4
124
+ nvidia-cublas-cu12==12.4.5.8
125
+ nvidia-cuda-cupti-cu12==12.4.127
126
+ nvidia-cuda-nvrtc-cu12==12.4.127
127
+ nvidia-cuda-runtime-cu12==12.4.127
128
+ nvidia-cudnn-cu12==9.1.0.70
129
+ nvidia-cufft-cu12==11.2.1.3
130
+ nvidia-curand-cu12==10.3.5.147
131
+ nvidia-cusolver-cu12==11.6.1.9
132
+ nvidia-cusparse-cu12==12.3.1.170
133
+ nvidia-ml-py==12.575.51
134
+ nvidia-nccl-cu12==2.21.5
135
+ nvidia-nvjitlink-cu12==12.4.127
136
+ nvidia-nvtx-cu12==12.4.127
137
+ omegaconf==2.3.0
138
+ open-clip-torch==2.24.0
139
+ overrides==7.7.0
140
+ packaging==24.2
141
+ packaging==25.0
142
+ pandas==2.2.0
143
+ pandocfilters==1.5.1
144
+ parso==0.8.4
145
+ pexpect==4.9.0
146
+ pillow==10.2.0
147
+ platformdirs==4.2.2
148
+ platformdirs==4.3.8
149
+ prometheus_client==0.22.1
150
+ prompt_toolkit==3.0.51
151
+ propcache==0.3.2
152
+ protobuf==5.29.5
153
+ psutil==7.0.0
154
+ ptyprocess==0.7.0
155
+ pure_eval==0.2.3
156
+ pyarrow==15.0.2
157
+ pycparser==2.22
158
+ pydantic==2.11.7
159
+ pydantic_core==2.33.2
160
+ pynvml==12.0.0
161
+ pyparsing==3.2.3
162
+ python-dateutil==2.9.0.post0
163
+ python-json-logger==3.3.0
164
+ pytorch-lightning==2.5.2
165
+ pytorch-warmup==0.2.0
166
+ pytz==2025.2
167
+ pyzmq==27.0.1
168
+ referencing==0.36.2
169
+ regex==2025.7.34
170
+ requests==2.32.4
171
+ resize-right==0.0.2
172
+ rfc3339-validator==0.1.4
173
+ rfc3986-validator==0.1.1
174
+ rfc3987-syntax==1.1.0
175
+ rotary-embedding-torch==0.8.9
176
+ rpds-py==0.27.0
177
+ safetensors==0.6.2
178
+ scikit-image==0.25.2
179
+ scikit-learn==1.4.1.post1
180
+ scipy==1.12.0
181
+ sentencepiece==0.2.0
182
+ sentry-sdk==2.34.1
183
+ setproctitle==1.3.6
184
+ setuptools==80.9.0
185
+ six==1.17.0
186
+ smmap==5.0.2
187
+ sniffio==1.3.1
188
+ soupsieve==2.7
189
+ sqlparse==0.5.3
190
+ stack-data==0.6.3
191
+ sympy==1.13.1
192
+ terminado==0.18.1
193
+ threadpoolctl==3.6.0
194
+ tifffile==2025.6.11
195
+ timm==1.0.19
196
+ tinycss2==1.4.0
197
+ tokenizers==0.15.2
198
+ tomli==2.0.1
199
+ torch-fidelity==0.3.0
200
+ torch==2.5.1
201
+ torchmetrics==1.8.1
202
+ torchvision==0.20.1
203
+ tornado==6.5.2
204
+ tqdm==4.66.2
205
+ traitlets==5.14.3
206
+ transformers==4.37.2
207
+ triton==3.1.0
208
+ typeguard==4.3.0
209
+ types-python-dateutil==2.9.0.20250809
210
+ typing-inspection==0.4.1
211
+ typing_extensions==4.12.2
212
+ typing_extensions==4.14.1
213
+ tzdata==2025.2
214
+ uri-template==1.3.0
215
+ urllib3==2.5.0
216
+ vector_quantize_pytorch==1.14.7
217
+ wandb==0.17.2
218
+ wcwidth==0.2.13
219
+ webcolors==24.11.1
220
+ webdataset==0.2.73
221
+ webencodings==0.5.1
222
+ websocket-client==1.8.0
223
+ wheel==0.45.1
224
+ widgetsnbextension==4.0.14
225
+ x-clip==0.14.4
226
+ yarl==1.20.1
227
+ zipp==3.19.2
228
+ zipp==3.23.0
229
+ zope.event==5.1.1
230
+ zope.interface==7.2
wandb/run-20250809_153455-sdxl_turbo-MST/files/wandb-metadata.json ADDED
@@ -0,0 +1,1167 @@
+ {
+ "os": "Linux-5.15.0-139-generic-x86_64-with-glibc2.35",
+ "python": "3.11.13",
+ "heartbeatAt": "2025-08-09T15:34:56.942086",
+ "startedAt": "2025-08-09T15:34:55.966691",
+ "docker": null,
+ "cuda": null,
+ "args": [],
+ "state": "running",
+ "program": "<python with no main file>",
+ "codePathLocal": null,
+ "git": {
+ "remote": "https://github.com/PrincetonCompMemLab/real_time_mindEye2",
+ "commit": "a4bdfadf8f0b5e580b93a897978290a2890d5c52"
+ },
+ "email": "torrico.villanueva.cesar.kadir@gmail.com",
+ "root": "/home/ubuntu/real_time_mindEye2",
+ "host": "defiant-holly-hornet-758fccb7c4-hl6gs",
+ "username": "ubuntu",
+ "executable": "/home/ubuntu/rt_mindEye2/bin/python",
+ "cpu_count": 112,
+ "cpu_count_logical": 224,
+ "cpu_freq": {
+ "current": 824.9719598214285,
+ "min": 800.0,
+ "max": 2001.0
+ },
+ "cpu_freq_per_core": [
29
+ {
30
+ "current": 800.0,
31
+ "min": 800.0,
32
+ "max": 2001.0
33
+ },
34
+ {
35
+ "current": 800.0,
36
+ "min": 800.0,
37
+ "max": 2001.0
38
+ },
39
+ {
40
+ "current": 1986.269,
41
+ "min": 800.0,
42
+ "max": 2001.0
43
+ },
44
+ {
45
+ "current": 800.0,
46
+ "min": 800.0,
47
+ "max": 2001.0
48
+ },
49
+ {
50
+ "current": 800.0,
51
+ "min": 800.0,
52
+ "max": 2001.0
53
+ },
54
+ {
55
+ "current": 800.0,
56
+ "min": 800.0,
57
+ "max": 2001.0
58
+ },
59
+ {
60
+ "current": 800.0,
61
+ "min": 800.0,
62
+ "max": 2001.0
63
+ },
64
+ {
65
+ "current": 800.0,
66
+ "min": 800.0,
67
+ "max": 2001.0
68
+ },
69
+ {
70
+ "current": 800.0,
71
+ "min": 800.0,
72
+ "max": 2001.0
73
+ },
74
+ {
75
+ "current": 800.0,
76
+ "min": 800.0,
77
+ "max": 2001.0
78
+ },
79
+ {
80
+ "current": 800.0,
81
+ "min": 800.0,
82
+ "max": 2001.0
83
+ },
84
+ {
85
+ "current": 800.0,
86
+ "min": 800.0,
87
+ "max": 2001.0
88
+ },
89
+ {
90
+ "current": 800.0,
91
+ "min": 800.0,
92
+ "max": 2001.0
93
+ },
94
+ {
95
+ "current": 800.0,
96
+ "min": 800.0,
97
+ "max": 2001.0
98
+ },
99
+ {
100
+ "current": 800.0,
101
+ "min": 800.0,
102
+ "max": 2001.0
103
+ },
104
+ {
105
+ "current": 800.0,
106
+ "min": 800.0,
107
+ "max": 2001.0
108
+ },
109
+ {
110
+ "current": 800.0,
111
+ "min": 800.0,
112
+ "max": 2001.0
113
+ },
114
+ {
115
+ "current": 800.0,
116
+ "min": 800.0,
117
+ "max": 2001.0
118
+ },
119
+ {
120
+ "current": 800.0,
121
+ "min": 800.0,
122
+ "max": 2001.0
123
+ },
124
+ {
125
+ "current": 800.0,
126
+ "min": 800.0,
127
+ "max": 2001.0
128
+ },
129
+ {
130
+ "current": 800.0,
131
+ "min": 800.0,
132
+ "max": 2001.0
133
+ },
134
+ {
135
+ "current": 800.0,
136
+ "min": 800.0,
137
+ "max": 2001.0
138
+ },
139
+ {
140
+ "current": 800.0,
141
+ "min": 800.0,
142
+ "max": 2001.0
143
+ },
144
+ {
145
+ "current": 800.0,
146
+ "min": 800.0,
147
+ "max": 2001.0
148
+ },
149
+ {
150
+ "current": 800.0,
151
+ "min": 800.0,
152
+ "max": 2001.0
153
+ },
154
+ {
155
+ "current": 800.0,
156
+ "min": 800.0,
157
+ "max": 2001.0
158
+ },
159
+ {
160
+ "current": 800.0,
161
+ "min": 800.0,
162
+ "max": 2001.0
163
+ },
164
+ {
165
+ "current": 800.0,
166
+ "min": 800.0,
167
+ "max": 2001.0
168
+ },
169
+ {
170
+ "current": 800.0,
171
+ "min": 800.0,
172
+ "max": 2001.0
173
+ },
174
+ {
175
+ "current": 800.0,
176
+ "min": 800.0,
177
+ "max": 2001.0
178
+ },
179
+ {
180
+ "current": 800.0,
181
+ "min": 800.0,
182
+ "max": 2001.0
183
+ },
184
+ {
185
+ "current": 800.0,
186
+ "min": 800.0,
187
+ "max": 2001.0
188
+ },
189
+ {
190
+ "current": 800.0,
191
+ "min": 800.0,
192
+ "max": 2001.0
193
+ },
194
+ {
195
+ "current": 800.0,
196
+ "min": 800.0,
197
+ "max": 2001.0
198
+ },
199
+ {
200
+ "current": 800.0,
201
+ "min": 800.0,
202
+ "max": 2001.0
203
+ },
204
+ {
205
+ "current": 800.0,
206
+ "min": 800.0,
207
+ "max": 2001.0
208
+ },
209
+ {
210
+ "current": 800.0,
211
+ "min": 800.0,
212
+ "max": 2001.0
213
+ },
214
+ {
215
+ "current": 800.0,
216
+ "min": 800.0,
217
+ "max": 2001.0
218
+ },
219
+ {
220
+ "current": 800.0,
221
+ "min": 800.0,
222
+ "max": 2001.0
223
+ },
224
+ {
225
+ "current": 800.0,
226
+ "min": 800.0,
227
+ "max": 2001.0
228
+ },
229
+ {
230
+ "current": 800.0,
231
+ "min": 800.0,
232
+ "max": 2001.0
233
+ },
234
+ {
235
+ "current": 800.0,
236
+ "min": 800.0,
237
+ "max": 2001.0
238
+ },
239
+ {
240
+ "current": 800.0,
241
+ "min": 800.0,
242
+ "max": 2001.0
243
+ },
244
+ {
245
+ "current": 800.0,
246
+ "min": 800.0,
247
+ "max": 2001.0
248
+ },
249
+ {
250
+ "current": 800.0,
251
+ "min": 800.0,
252
+ "max": 2001.0
253
+ },
254
+ {
255
+ "current": 800.0,
256
+ "min": 800.0,
257
+ "max": 2001.0
258
+ },
259
+ {
260
+ "current": 800.0,
261
+ "min": 800.0,
262
+ "max": 2001.0
263
+ },
264
+ {
265
+ "current": 800.0,
266
+ "min": 800.0,
267
+ "max": 2001.0
268
+ },
269
+ {
270
+ "current": 800.0,
271
+ "min": 800.0,
272
+ "max": 2001.0
273
+ },
274
+ {
275
+ "current": 800.0,
276
+ "min": 800.0,
277
+ "max": 2001.0
278
+ },
279
+ {
280
+ "current": 800.0,
281
+ "min": 800.0,
282
+ "max": 2001.0
283
+ },
284
+ {
285
+ "current": 800.0,
286
+ "min": 800.0,
287
+ "max": 2001.0
288
+ },
289
+ {
290
+ "current": 800.0,
291
+ "min": 800.0,
292
+ "max": 2001.0
293
+ },
294
+ {
295
+ "current": 800.0,
296
+ "min": 800.0,
297
+ "max": 2001.0
298
+ },
299
+ {
300
+ "current": 800.0,
301
+ "min": 800.0,
302
+ "max": 2001.0
303
+ },
304
+ {
305
+ "current": 800.0,
306
+ "min": 800.0,
307
+ "max": 2001.0
308
+ },
309
+ {
310
+ "current": 800.0,
311
+ "min": 800.0,
312
+ "max": 2001.0
313
+ },
314
+ {
315
+ "current": 800.0,
316
+ "min": 800.0,
317
+ "max": 2001.0
318
+ },
319
+ {
320
+ "current": 800.0,
321
+ "min": 800.0,
322
+ "max": 2001.0
323
+ },
324
+ {
325
+ "current": 800.0,
326
+ "min": 800.0,
327
+ "max": 2001.0
328
+ },
329
+ {
330
+ "current": 800.0,
331
+ "min": 800.0,
332
+ "max": 2001.0
333
+ },
334
+ {
335
+ "current": 800.0,
336
+ "min": 800.0,
337
+ "max": 2001.0
338
+ },
339
+ {
340
+ "current": 800.0,
341
+ "min": 800.0,
342
+ "max": 2001.0
343
+ },
344
+ {
345
+ "current": 800.0,
346
+ "min": 800.0,
347
+ "max": 2001.0
348
+ },
349
+ {
350
+ "current": 800.0,
351
+ "min": 800.0,
352
+ "max": 2001.0
353
+ },
354
+ {
355
+ "current": 800.0,
356
+ "min": 800.0,
357
+ "max": 2001.0
358
+ },
359
+ {
360
+ "current": 800.0,
361
+ "min": 800.0,
362
+ "max": 2001.0
363
+ },
364
+ {
365
+ "current": 800.0,
366
+ "min": 800.0,
367
+ "max": 2001.0
368
+ },
369
+ {
370
+ "current": 800.0,
371
+ "min": 800.0,
372
+ "max": 2001.0
373
+ },
374
+ {
375
+ "current": 800.0,
376
+ "min": 800.0,
377
+ "max": 2001.0
378
+ },
379
+ {
380
+ "current": 800.0,
381
+ "min": 800.0,
382
+ "max": 2001.0
383
+ },
384
+ {
385
+ "current": 800.0,
386
+ "min": 800.0,
387
+ "max": 2001.0
388
+ },
389
+ {
390
+ "current": 800.0,
391
+ "min": 800.0,
392
+ "max": 2001.0
393
+ },
394
+ {
395
+ "current": 800.0,
396
+ "min": 800.0,
397
+ "max": 2001.0
398
+ },
399
+ {
400
+ "current": 800.0,
401
+ "min": 800.0,
402
+ "max": 2001.0
403
+ },
404
+ {
405
+ "current": 800.0,
406
+ "min": 800.0,
407
+ "max": 2001.0
408
+ },
409
+ {
410
+ "current": 800.0,
411
+ "min": 800.0,
412
+ "max": 2001.0
413
+ },
414
+ {
415
+ "current": 800.0,
416
+ "min": 800.0,
417
+ "max": 2001.0
418
+ },
419
+ {
420
+ "current": 800.0,
421
+ "min": 800.0,
422
+ "max": 2001.0
423
+ },
424
+ {
425
+ "current": 800.0,
426
+ "min": 800.0,
427
+ "max": 2001.0
428
+ },
429
+ {
430
+ "current": 800.0,
431
+ "min": 800.0,
432
+ "max": 2001.0
433
+ },
434
+ {
435
+ "current": 800.0,
436
+ "min": 800.0,
437
+ "max": 2001.0
438
+ },
439
+ {
440
+ "current": 800.0,
441
+ "min": 800.0,
442
+ "max": 2001.0
443
+ },
444
+ {
445
+ "current": 800.0,
446
+ "min": 800.0,
447
+ "max": 2001.0
448
+ },
449
+ {
450
+ "current": 800.0,
451
+ "min": 800.0,
452
+ "max": 2001.0
453
+ },
454
+ {
455
+ "current": 800.0,
456
+ "min": 800.0,
457
+ "max": 2001.0
458
+ },
459
+ {
460
+ "current": 800.0,
461
+ "min": 800.0,
462
+ "max": 2001.0
463
+ },
464
+ {
465
+ "current": 800.0,
466
+ "min": 800.0,
467
+ "max": 2001.0
468
+ },
469
+ {
470
+ "current": 800.0,
471
+ "min": 800.0,
472
+ "max": 2001.0
473
+ },
474
+ {
475
+ "current": 800.0,
476
+ "min": 800.0,
477
+ "max": 2001.0
478
+ },
479
+ {
480
+ "current": 800.0,
481
+ "min": 800.0,
482
+ "max": 2001.0
483
+ },
484
+ {
485
+ "current": 1200.01,
486
+ "min": 800.0,
487
+ "max": 2001.0
488
+ },
489
+ {
490
+ "current": 1300.0,
491
+ "min": 800.0,
492
+ "max": 2001.0
493
+ },
494
+ {
495
+ "current": 800.0,
496
+ "min": 800.0,
497
+ "max": 2001.0
498
+ },
499
+ {
500
+ "current": 800.0,
501
+ "min": 800.0,
502
+ "max": 2001.0
503
+ },
504
+ {
505
+ "current": 800.0,
506
+ "min": 800.0,
507
+ "max": 2001.0
508
+ },
509
+ {
510
+ "current": 1100.0,
511
+ "min": 800.0,
512
+ "max": 2001.0
513
+ },
514
+ {
515
+ "current": 800.0,
516
+ "min": 800.0,
517
+ "max": 2001.0
518
+ },
519
+ {
520
+ "current": 800.0,
521
+ "min": 800.0,
522
+ "max": 2001.0
523
+ },
524
+ {
525
+ "current": 1300.0,
526
+ "min": 800.0,
527
+ "max": 2001.0
528
+ },
529
+ {
530
+ "current": 1100.0,
531
+ "min": 800.0,
532
+ "max": 2001.0
533
+ },
534
+ {
535
+ "current": 1000.0,
536
+ "min": 800.0,
537
+ "max": 2001.0
538
+ },
539
+ {
540
+ "current": 800.0,
541
+ "min": 800.0,
542
+ "max": 2001.0
543
+ },
544
+ {
545
+ "current": 800.0,
546
+ "min": 800.0,
547
+ "max": 2001.0
548
+ },
549
+ {
550
+ "current": 800.0,
551
+ "min": 800.0,
552
+ "max": 2001.0
553
+ },
554
+ {
555
+ "current": 800.0,
556
+ "min": 800.0,
557
+ "max": 2001.0
558
+ },
559
+ {
560
+ "current": 800.0,
561
+ "min": 800.0,
562
+ "max": 2001.0
563
+ },
564
+ {
565
+ "current": 800.0,
566
+ "min": 800.0,
567
+ "max": 2001.0
568
+ },
569
+ {
570
+ "current": 800.0,
571
+ "min": 800.0,
572
+ "max": 2001.0
573
+ },
574
+ {
575
+ "current": 800.0,
576
+ "min": 800.0,
577
+ "max": 2001.0
578
+ },
579
+ {
580
+ "current": 800.0,
581
+ "min": 800.0,
582
+ "max": 2001.0
583
+ },
584
+ {
585
+ "current": 800.0,
586
+ "min": 800.0,
587
+ "max": 2001.0
588
+ },
589
+ {
590
+ "current": 800.0,
591
+ "min": 800.0,
592
+ "max": 2001.0
593
+ },
594
+ {
595
+ "current": 800.0,
596
+ "min": 800.0,
597
+ "max": 2001.0
598
+ },
599
+ {
600
+ "current": 800.0,
601
+ "min": 800.0,
602
+ "max": 2001.0
603
+ },
604
+ {
605
+ "current": 800.0,
606
+ "min": 800.0,
607
+ "max": 2001.0
608
+ },
609
+ {
610
+ "current": 800.0,
611
+ "min": 800.0,
612
+ "max": 2001.0
613
+ },
614
+ {
615
+ "current": 800.0,
616
+ "min": 800.0,
617
+ "max": 2001.0
618
+ },
619
+ {
620
+ "current": 800.0,
621
+ "min": 800.0,
622
+ "max": 2001.0
623
+ },
624
+ {
625
+ "current": 800.0,
626
+ "min": 800.0,
627
+ "max": 2001.0
628
+ },
629
+ {
630
+ "current": 800.0,
631
+ "min": 800.0,
632
+ "max": 2001.0
633
+ },
634
+ {
635
+ "current": 800.0,
636
+ "min": 800.0,
637
+ "max": 2001.0
638
+ },
639
+ {
640
+ "current": 800.0,
641
+ "min": 800.0,
642
+ "max": 2001.0
643
+ },
644
+ {
645
+ "current": 800.0,
646
+ "min": 800.0,
647
+ "max": 2001.0
648
+ },
649
+ {
650
+ "current": 800.0,
651
+ "min": 800.0,
652
+ "max": 2001.0
653
+ },
654
+ {
655
+ "current": 800.0,
656
+ "min": 800.0,
657
+ "max": 2001.0
658
+ },
659
+ {
660
+ "current": 800.0,
661
+ "min": 800.0,
662
+ "max": 2001.0
663
+ },
664
+ {
665
+ "current": 800.0,
666
+ "min": 800.0,
667
+ "max": 2001.0
668
+ },
669
+ {
670
+ "current": 800.0,
671
+ "min": 800.0,
672
+ "max": 2001.0
673
+ },
674
+ {
675
+ "current": 800.0,
676
+ "min": 800.0,
677
+ "max": 2001.0
678
+ },
679
+ {
680
+ "current": 900.0,
681
+ "min": 800.0,
682
+ "max": 2001.0
683
+ },
684
+ {
685
+ "current": 800.0,
686
+ "min": 800.0,
687
+ "max": 2001.0
688
+ },
689
+ {
690
+ "current": 800.0,
691
+ "min": 800.0,
692
+ "max": 2001.0
693
+ },
694
+ {
695
+ "current": 800.0,
696
+ "min": 800.0,
697
+ "max": 2001.0
698
+ },
699
+ {
700
+ "current": 800.0,
701
+ "min": 800.0,
702
+ "max": 2001.0
703
+ },
704
+ {
705
+ "current": 800.0,
706
+ "min": 800.0,
707
+ "max": 2001.0
708
+ },
709
+ {
710
+ "current": 800.0,
711
+ "min": 800.0,
712
+ "max": 2001.0
713
+ },
714
+ {
715
+ "current": 800.0,
716
+ "min": 800.0,
717
+ "max": 2001.0
718
+ },
719
+ {
720
+ "current": 800.0,
721
+ "min": 800.0,
722
+ "max": 2001.0
723
+ },
724
+ {
725
+ "current": 800.0,
726
+ "min": 800.0,
727
+ "max": 2001.0
728
+ },
729
+ {
730
+ "current": 800.0,
731
+ "min": 800.0,
732
+ "max": 2001.0
733
+ },
734
+ {
735
+ "current": 838.038,
736
+ "min": 800.0,
737
+ "max": 2001.0
738
+ },
739
+ {
740
+ "current": 800.0,
741
+ "min": 800.0,
742
+ "max": 2001.0
743
+ },
744
+ {
745
+ "current": 800.0,
746
+ "min": 800.0,
747
+ "max": 2001.0
748
+ },
749
+ {
750
+ "current": 800.0,
751
+ "min": 800.0,
752
+ "max": 2001.0
753
+ },
754
+ {
755
+ "current": 800.0,
756
+ "min": 800.0,
757
+ "max": 2001.0
758
+ },
759
+ {
760
+ "current": 800.0,
761
+ "min": 800.0,
762
+ "max": 2001.0
763
+ },
764
+ {
765
+ "current": 800.0,
766
+ "min": 800.0,
767
+ "max": 2001.0
768
+ },
769
+ {
770
+ "current": 800.0,
771
+ "min": 800.0,
772
+ "max": 2001.0
773
+ },
774
+ {
775
+ "current": 800.0,
776
+ "min": 800.0,
777
+ "max": 2001.0
778
+ },
779
+ {
780
+ "current": 800.0,
781
+ "min": 800.0,
782
+ "max": 2001.0
783
+ },
784
+ {
785
+ "current": 800.0,
786
+ "min": 800.0,
787
+ "max": 2001.0
788
+ },
789
+ {
790
+ "current": 800.0,
791
+ "min": 800.0,
792
+ "max": 2001.0
793
+ },
794
+ {
795
+ "current": 800.0,
796
+ "min": 800.0,
797
+ "max": 2001.0
798
+ },
799
+ {
800
+ "current": 800.0,
801
+ "min": 800.0,
802
+ "max": 2001.0
803
+ },
804
+ {
805
+ "current": 800.0,
806
+ "min": 800.0,
807
+ "max": 2001.0
808
+ },
809
+ {
810
+ "current": 800.0,
811
+ "min": 800.0,
812
+ "max": 2001.0
813
+ },
814
+ {
815
+ "current": 800.0,
816
+ "min": 800.0,
817
+ "max": 2001.0
818
+ },
819
+ {
820
+ "current": 800.0,
821
+ "min": 800.0,
822
+ "max": 2001.0
823
+ },
824
+ {
825
+ "current": 800.0,
826
+ "min": 800.0,
827
+ "max": 2001.0
828
+ },
829
+ {
830
+ "current": 800.0,
831
+ "min": 800.0,
832
+ "max": 2001.0
833
+ },
834
+ {
835
+ "current": 800.0,
836
+ "min": 800.0,
837
+ "max": 2001.0
838
+ },
839
+ {
840
+ "current": 800.0,
841
+ "min": 800.0,
842
+ "max": 2001.0
843
+ },
844
+ {
845
+ "current": 800.0,
846
+ "min": 800.0,
847
+ "max": 2001.0
848
+ },
849
+ {
850
+ "current": 800.0,
851
+ "min": 800.0,
852
+ "max": 2001.0
853
+ },
854
+ {
855
+ "current": 800.0,
856
+ "min": 800.0,
857
+ "max": 2001.0
858
+ },
859
+ {
860
+ "current": 800.0,
861
+ "min": 800.0,
862
+ "max": 2001.0
863
+ },
864
+ {
865
+ "current": 800.0,
866
+ "min": 800.0,
867
+ "max": 2001.0
868
+ },
869
+ {
870
+ "current": 800.0,
871
+ "min": 800.0,
872
+ "max": 2001.0
873
+ },
874
+ {
875
+ "current": 800.0,
876
+ "min": 800.0,
877
+ "max": 2001.0
878
+ },
879
+ {
880
+ "current": 800.0,
881
+ "min": 800.0,
882
+ "max": 2001.0
883
+ },
884
+ {
885
+ "current": 800.0,
886
+ "min": 800.0,
887
+ "max": 2001.0
888
+ },
889
+ {
890
+ "current": 800.0,
891
+ "min": 800.0,
892
+ "max": 2001.0
893
+ },
894
+ {
895
+ "current": 800.0,
896
+ "min": 800.0,
897
+ "max": 2001.0
898
+ },
899
+ {
900
+ "current": 800.0,
901
+ "min": 800.0,
902
+ "max": 2001.0
903
+ },
904
+ {
905
+ "current": 800.0,
906
+ "min": 800.0,
907
+ "max": 2001.0
908
+ },
909
+ {
910
+ "current": 800.0,
911
+ "min": 800.0,
912
+ "max": 2001.0
913
+ },
914
+ {
915
+ "current": 800.0,
916
+ "min": 800.0,
917
+ "max": 2001.0
918
+ },
919
+ {
920
+ "current": 800.0,
921
+ "min": 800.0,
922
+ "max": 2001.0
923
+ },
924
+ {
925
+ "current": 800.0,
926
+ "min": 800.0,
927
+ "max": 2001.0
928
+ },
929
+ {
930
+ "current": 1400.0,
931
+ "min": 800.0,
932
+ "max": 2001.0
933
+ },
934
+ {
935
+ "current": 800.0,
936
+ "min": 800.0,
937
+ "max": 2001.0
938
+ },
939
+ {
940
+ "current": 800.0,
941
+ "min": 800.0,
942
+ "max": 2001.0
943
+ },
944
+ {
945
+ "current": 800.0,
946
+ "min": 800.0,
947
+ "max": 2001.0
948
+ },
949
+ {
950
+ "current": 800.0,
951
+ "min": 800.0,
952
+ "max": 2001.0
953
+ },
954
+ {
955
+ "current": 800.0,
956
+ "min": 800.0,
957
+ "max": 2001.0
958
+ },
959
+ {
960
+ "current": 800.0,
961
+ "min": 800.0,
962
+ "max": 2001.0
963
+ },
964
+ {
965
+ "current": 800.0,
966
+ "min": 800.0,
967
+ "max": 2001.0
968
+ },
969
+ {
970
+ "current": 800.0,
971
+ "min": 800.0,
972
+ "max": 2001.0
973
+ },
974
+ {
975
+ "current": 800.0,
976
+ "min": 800.0,
977
+ "max": 2001.0
978
+ },
979
+ {
980
+ "current": 800.0,
981
+ "min": 800.0,
982
+ "max": 2001.0
983
+ },
984
+ {
985
+ "current": 800.0,
986
+ "min": 800.0,
987
+ "max": 2001.0
988
+ },
989
+ {
990
+ "current": 800.0,
991
+ "min": 800.0,
992
+ "max": 2001.0
993
+ },
994
+ {
995
+ "current": 800.0,
996
+ "min": 800.0,
997
+ "max": 2001.0
998
+ },
999
+ {
1000
+ "current": 1300.0,
1001
+ "min": 800.0,
1002
+ "max": 2001.0
1003
+ },
1004
+ {
1005
+ "current": 800.0,
1006
+ "min": 800.0,
1007
+ "max": 2001.0
1008
+ },
1009
+ {
1010
+ "current": 800.0,
1011
+ "min": 800.0,
1012
+ "max": 2001.0
1013
+ },
1014
+ {
1015
+ "current": 800.0,
1016
+ "min": 800.0,
1017
+ "max": 2001.0
1018
+ },
1019
+ {
1020
+ "current": 800.0,
1021
+ "min": 800.0,
1022
+ "max": 2001.0
1023
+ },
1024
+ {
1025
+ "current": 800.0,
1026
+ "min": 800.0,
1027
+ "max": 2001.0
1028
+ },
1029
+ {
1030
+ "current": 800.0,
1031
+ "min": 800.0,
1032
+ "max": 2001.0
1033
+ },
1034
+ {
1035
+ "current": 800.0,
1036
+ "min": 800.0,
1037
+ "max": 2001.0
1038
+ },
1039
+ {
1040
+ "current": 800.0,
1041
+ "min": 800.0,
1042
+ "max": 2001.0
1043
+ },
1044
+ {
1045
+ "current": 800.0,
1046
+ "min": 800.0,
1047
+ "max": 2001.0
1048
+ },
1049
+ {
1050
+ "current": 800.0,
1051
+ "min": 800.0,
1052
+ "max": 2001.0
1053
+ },
1054
+ {
1055
+ "current": 800.0,
1056
+ "min": 800.0,
1057
+ "max": 2001.0
1058
+ },
1059
+ {
1060
+ "current": 800.0,
1061
+ "min": 800.0,
1062
+ "max": 2001.0
1063
+ },
1064
+ {
1065
+ "current": 1400.0,
1066
+ "min": 800.0,
1067
+ "max": 2001.0
1068
+ },
1069
+ {
1070
+ "current": 800.0,
1071
+ "min": 800.0,
1072
+ "max": 2001.0
1073
+ },
1074
+ {
1075
+ "current": 800.0,
1076
+ "min": 800.0,
1077
+ "max": 2001.0
1078
+ },
1079
+ {
1080
+ "current": 800.0,
1081
+ "min": 800.0,
1082
+ "max": 2001.0
1083
+ },
1084
+ {
1085
+ "current": 800.0,
1086
+ "min": 800.0,
1087
+ "max": 2001.0
1088
+ },
1089
+ {
1090
+ "current": 800.0,
1091
+ "min": 800.0,
1092
+ "max": 2001.0
1093
+ },
1094
+ {
1095
+ "current": 800.0,
1096
+ "min": 800.0,
1097
+ "max": 2001.0
1098
+ },
1099
+ {
1100
+ "current": 800.0,
1101
+ "min": 800.0,
1102
+ "max": 2001.0
1103
+ },
1104
+ {
1105
+ "current": 800.0,
1106
+ "min": 800.0,
1107
+ "max": 2001.0
1108
+ },
1109
+ {
1110
+ "current": 800.0,
1111
+ "min": 800.0,
1112
+ "max": 2001.0
1113
+ },
1114
+ {
1115
+ "current": 800.0,
1116
+ "min": 800.0,
1117
+ "max": 2001.0
1118
+ },
1119
+ {
1120
+ "current": 800.0,
1121
+ "min": 800.0,
1122
+ "max": 2001.0
1123
+ },
1124
+ {
1125
+ "current": 800.0,
1126
+ "min": 800.0,
1127
+ "max": 2001.0
1128
+ },
1129
+ {
1130
+ "current": 800.0,
1131
+ "min": 800.0,
1132
+ "max": 2001.0
1133
+ },
1134
+ {
1135
+ "current": 800.0,
1136
+ "min": 800.0,
1137
+ "max": 2001.0
1138
+ },
1139
+ {
1140
+ "current": 800.0,
1141
+ "min": 800.0,
1142
+ "max": 2001.0
1143
+ },
1144
+ {
1145
+ "current": 800.0,
1146
+ "min": 800.0,
1147
+ "max": 2001.0
1148
+ }
1149
+ ],
+ "disk": {
+ "/": {
+ "total": 3519.2570304870605,
+ "used": 3068.076229095459
+ }
+ },
+ "gpu": "NVIDIA H100 80GB HBM3",
+ "gpu_count": 1,
+ "gpu_devices": [
+ {
+ "name": "NVIDIA H100 80GB HBM3",
+ "memory_total": 85520809984
+ }
+ ],
+ "memory": {
+ "total": 2015.504482269287
+ }
+ }