<!DOCTYPE html>
<html lang="en">
<head>
<title>Yumin Suh</title>
<meta name="viewport" content="width=device-width"/>
<meta name="description" content="Yumin Suh"/>
<meta charset="UTF-8">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Open+Sans:wght@300&display=swap" rel="stylesheet">
<style>
body {
/* font-family: "Trirong", serif; */
font-family: 'Open Sans', sans-serif;
font-size: 14px;
margin-left: 300px;
margin-right: 300px;
display: inline-table; /* table|inline-table|table-cell */
word-wrap: normal;
}
.sectionTitle{
color: Maroon;
}
h3 {
margin-top: 0px;
margin-bottom: 0px;
}
a {
color: blue;
}
img {
width: 120px;
height: 120px;
object-fit: cover;
}
.row::after {
content: "";
clear: both;
display: table;
}
.column {
float: left;
padding: 15px;
}
</style>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4CSM3ZYBN3"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-4CSM3ZYBN3');
</script>
</head>
<body id="top">
<div id="cv">
<div class="mainDetails">
<div id="name">
<h1>Yumin Suh</h1>
<div class="row">
<div class="column">
<div id="img">
<img src="me.jpg" alt="Yumin Suh" />
</div>
</div>
<div class="column">
<!-- <ul> -->
<br>
E-mail: ysuh (at) atmanity.io<br>
<!-- <a href="https://www.dropbox.com/s/9hpxj7npwl927uz/cv_2.pdf?dl=0">CV</a><br> -->
<a href="YuminSuh_jan2025.pdf">CV</a><br>
<a href="https://scholar.google.com/citations?user=a9k4nwQAAAAJ&hl=en">Google Scholar</a><br>
<a href="https://github.com/yuminsuh">Github</a><br>
<!-- </ul> -->
</div>
</div>
</div>
</div>
<br>
<div id="mainArea">
<section>
<!-- <div class="sectionTitle">
<h1>About Me</h1>
</div> -->
<!-- <div class="sectionContent" style="color: red;">
We are looking for strongly motivated graduate students for 2024 summer internship. Please apply <a href=https://www.appone.com/MainInfoReq.asp?R_ID=5958900&B_ID=83&fid=1&Adid=0&ssbgcolor=FFFFFF&SearchScreenID=1381&CountryID=3&LanguageID=2>here</a> or email me if you are interested in collaborating with us.
</div>-->
<br>
<div class="sectionContent">
I am an AI Research Scientist at <a href="https://www.atmanity.io/">Atmanity</a>. Before joining Atmanity, I was a Senior Researcher at <a href="https://www.nec-labs.com/research/media-analytics/home/">NEC Labs America</a>.
Before joining NEC, I worked as a postdoctoral researcher at Seoul National University under the supervision of <a href="https://cv.snu.ac.kr/index.php/bhhan/">Bohyung Han</a> and <a href="https://cv.snu.ac.kr/index.php/~kmlee/">Kyoung Mu Lee</a>.
I completed my PhD at Seoul National University in 2019 under the supervision of <a href="https://cv.snu.ac.kr/index.php/~kmlee/">Kyoung Mu Lee</a>.
I have a broad interest in computer vision and machine learning.
</div>
</section>
<section>
<div class="sectionTitle">
<h1>Selected Publications (<a href="publications.html" target="_blank" style="color:Maroon;">More</a>)</h1>
</div>
<div class="sectionContent">
<h3>Progressive Token Length Scaling in Transformer Encoders for Efficient Universal Segmentation</h3>
Abhishek Aich, <u>Yumin Suh</u>, Samuel Schulter, Manmohan Chandraker<br>
<b>ICLR 2025 </b> [<a href="https://arxiv.org/pdf/2404.14657v1.pdf">arXiv</a>]
[<a href='https://github.com/abhishekaich27/proscale-pytorch'>Code</a>]
<br><br>
<h3>Improving the Efficiency-Accuracy Tradeoff of DETR-Style Models in Practice</h3>
<u>Yumin Suh</u>, Dongwan Kim, Abhishek Aich, Samuel Schulter, Jong-Chyi Su, Bohyung Han, Manmohan Chandraker<br>
<a href="https://sites.google.com/view/ecv24/program?authuser=0"><b>Efficient Deep Learning for Computer Vision, CVPR Workshop 2024</b></a> [<a href="https://openaccess.thecvf.com/content/CVPR2024W/ECV24/papers/Suh_Improving_the_Efficiency-Accuracy_Trade-off_of_DETR-Style_Models_in_Practice_CVPRW_2024_paper.pdf">paper</a>]
<br><br>
<h3>Generating Enhanced Negatives for Training Language-Based Object Detectors</h3>
Shiyu Zhao, Long Zhao, Vijay Kumar BG, <u>Yumin Suh</u>, Dimitris N. Metaxas, Manmohan Chandraker, Samuel Schulter<br>
<b>CVPR 2024</b> [<a href="https://arxiv.org/pdf/2401.00094v1.pdf">arXiv</a>]
<br><br>
<h3>Taming Self-Training for Open-Vocabulary Object Detection</h3>
Shiyu Zhao, Samuel Schulter, Long Zhao, Zhixing Zhang, Vijay Kumar BG, <u>Yumin Suh</u>, Manmohan Chandraker, Dimitris N. Metaxas<br>
<b>CVPR 2024</b> [<a href="https://arxiv.org/pdf/2308.06412v2.pdf">arXiv</a>]
<br><br>
<h3>Efficient Controllable Multi-Task Architectures</h3>
Abhishek Aich, Samuel Schulter, Amit K. Roy-Chowdhury, Manmohan Chandraker, <u>Yumin Suh</u><br>
<b>ICCV 2023 </b> [<a href="https://arxiv.org/pdf/2308.11744v1.pdf">arXiv</a>]
<br><br>
<h3>OmniLabel: A Challenging Benchmark for Language-Based Object Detection</h3>
Samuel Schulter, Vijay Kumar B G, <u>Yumin Suh</u>, Konstantinos M. Dafnis, Zhixing Zhang, Shiyu Zhao, Dimitris Metaxas<br>
<b>ICCV 2023 (Oral) </b> [<a href="https://arxiv.org/pdf/2304.11463.pdf">arXiv</a>][<a href="https://github.com/samschulter/omnilabeltools">code</a>][<a href="https://www.omnilabel.org/">project</a>][<a href="https://www.omnilabel.org/dataset/download">data</a>]
<br><br>
<h3>Confidence and Dispersity Speak: Characterizing Prediction Matrix for Unsupervised Accuracy Estimation</h3>
Weijian Deng, <u>Yumin Suh</u>, Stephen Gould, Liang Zheng<br>
<b>ICML 2023 </b> [<a href="https://arxiv.org/pdf/2302.01094.pdf">arXiv</a>]
<br><br>
<!--
<h3>Split to Learn: Gradient Split for Multi-Task Human Image Analysis</h3>
Weijian Deng, <u>Yumin Suh</u>, Xiang Yu, Masoud Faraki, Liang Zheng, Manmohan Chandraker<br>
<b>WACV 2023 </b> [<a href="https://openaccess.thecvf.com/content/WACV2023/papers/Deng_Split_To_Learn_Gradient_Split_for_Multi-Task_Human_Image_Analysis_WACV_2023_paper.pdf">Paper</a>]
<br><br>
<h3>Learning Semantic Segmentation from Multiple Datasets with Label Shifts</h3>
Dongwan Kim, Yi-Hsuan Tsai, <u>Yumin Suh</u>, Masoud Faraki, Sparsh Garg, Manmohan Chandraker, Bohyung Han<br>
<b>ECCV 2022 </b> [<a href="https://arxiv.org/abs/2202.14030">arXiv</a>]
<br><br>
-->
<h3>Controllable Dynamic Multi-Task Architectures</h3>
Dripta Raychaudhuri, <u>Yumin Suh</u>, Samuel Schulter, Xiang Yu, Masoud Faraki, Amit K. Roy-Chowdhury, Manmohan Chandraker<br>
<b>CVPR 2022 (Oral) </b> [<a href="https://arxiv.org/pdf/2203.14949v1.pdf">arXiv</a>] [<a href="https://www.nec-labs.com/~mas/DYMU/">Project page</a>]
<br><br>
<!--
<h3>On Generalizing Beyond Domains in Cross-Domain Continual Learning</h3>
Christian Simon, Masoud Faraki, Yi-Hsuan Tsai, Xiang Yu, Samuel Schulter, <u>Yumin Suh</u>, Mehrtash Harandi, Manmohan Chandraker<br>
<b>CVPR 2022</b> [<a href="https://arxiv.org/pdf/2203.03970.pdf">arXiv</a>]
<br><br>
<h3>Cross-Domain Similarity Learning for Face Recognition in Unseen Domains</h3>
Masoud Faraki, Xiang Yu, Yi-Hsuan Tsai, <u>Yumin Suh</u>, Manmohan Chandraker<br>
<b>CVPR 2021</b> [<a href="https://openaccess.thecvf.com/content/CVPR2021/papers/Faraki_Cross-Domain_Similarity_Learning_for_Face_Recognition_in_Unseen_Domains_CVPR_2021_paper.pdf">Paper</a>]
<br><br>
-->
<h3>Learning to Optimize Domain Specific Normalization for Domain Generalization</h3>
Seonguk Seo, <u>Yumin Suh</u>, Dongwan Kim, Geeho Kim, Jong-Woo Han, and Bohyung Han<br>
<b>ECCV 2020</b> [<a href="https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123670069.pdf">Paper</a>]
<br><br>
<h3>Stochastic Class-based Hard Example Mining for Deep Metric Learning</h3>
<u>Yumin Suh</u>, Bohyung Han, Wonsik Kim, and Kyoung Mu Lee<br>
<b>CVPR 2019</b> [<a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Suh_Stochastic_Class-Based_Hard_Example_Mining_for_Deep_Metric_Learning_CVPR_2019_paper.pdf">Paper</a>]
<br><br>
<h3>Part-Aligned Bilinear Representations for Person Re-identification</h3>
<u>Yumin Suh</u>, Jingdong Wang, Siyu Tang, Tao Mei and Kyoung Mu Lee<br>
<b>ECCV 2018</b>
[<a href="https://cv.snu.ac.kr/publication/conf/2018/reid_eccv18.pdf">Paper</a>]
[<a href='https://cv.snu.ac.kr/publication/conf/2018/reid_eccv18_supp.pdf'>Supp</a>]
[<a href='https://github.com/yuminsuh/part_bilinear_reid'>Code</a>]
[<a href='https://cv.snu.ac.kr/~ysuh/REID_ECCV2018_poster_v5.pdf'>Poster</a>]
<br><br>
<!--
<h3>Appearance Dependent Inter-Part Relationship for Human Pose Estimation</h3>
<u>Yumin Suh</u> and Kyoung Mu Lee<br>
<b>APSIPA 2016</b>
<br><br>
<li class='paper'>Ho Yub Jung, <u>Yumin Suh</u>, Gyeongsik Moon, and Kyoung Mu Lee, "Sequential Approach to 3D Human Pose Estimation: Seperation of Localization and Identification of Body Joints", <i>European Conference on Computer Vision (<b>ECCV</b>)</i>, 2016.
</li>
<h3>Discrete Tabu Search for Graph Matching</h3>
Kamil Adamczewski, <u>Yumin Suh</u>, and Kyoung Mu Lee<br>
<b>ICCV 2015</b>
[<a href="https://cv.snu.ac.kr/publication/conf/2015/DTSGM_ICCV2015.pdf">Paper</a>]
[<a href='https://cv.snu.ac.kr/research/~DTSGM/'>Project</a>]
<br><br>
<h3>Subgraph Matching using Compactness Prior for Robust Feature Correspondence</h3>
<u>Yumin Suh</u>, Kamil Adamczewski, and Kyoung Mu Lee<br>
<b>CVPR 2015</b>
[<a href="https://cv.snu.ac.kr/publication/conf/2015/SMCP_CVPR2015.pdf">Paper</a>]
[<a href='https://cv.snu.ac.kr/research/~SMCP/'>Project</a>]
<br><br>
<li class='paper'>Jungmin Lee, <u>Yumin Suh</u>, and Kyoung Mu Lee, "Energy Formulation for Effective Subgraph Matching", <i>Workshop on Image Processing and Image Understanding (IPIU) </i>, 2013.
<h3>Graph Matching via Sequential Monte Carlo</h3>
<u>Yumin Suh</u>, Minsu Cho, and Kyoung Mu Lee<br>
<b>ECCV 2012</b>
[<a href="https://cv.snu.ac.kr/publication/conf/2012/SMCM_ECCV2012.pdf">Paper</a>]
[<a href='https://cv.snu.ac.kr/research/~SMCM/'>Project</a>]
<br><br>
<h3>Roles of Time Hazard in Perceptual Decision Making under High Time Pressure</h3>
Minju Kim, <u>Yumin Suh</u>, Daeseob Lim, Issac Rhim, Kyoung-Whan Choi, and Sang- Hun Lee<br>
<b>i-Perception 2(4), 2011</b>
<br><br>
-->
</div>
</section>
<!-- <section>
<div class="sectionTitle">
<h1>Tech Reports</h1>
</div>
<div class="sectionContent">
<h3>Cross-Modal Prediction Consistency based Self-Training for Unsupervised Domain Adaptation</h3>
Dongwan Kim, Geeho Kim, Seonguk Seo, <u>Yumin Suh</u>, Bohyung Han, Taeho Lee, Jongwoo Han, and Hyejeong Jeon<br>
<b>3rd Place in VisDA-2019 Challenge</b>
[<a href="">Paper</a>]
<br><br>
-->
<!--
<h3>Domainwise Batch Normalization for Unsupervised Domain Adaptation</h3>
Seonguk Seo*, <u>Yumin Suh</u>*, Woong-Gi Chang, Tackgeun You, Suha Kwak, Taeho Lee, and Bohyung Han<br>
<b>ICML 2019 Workshop on Understanding and Improving Generalization in Deep Learning</b>
[<a href="">Paper</a>]
<br><br>
<h3>Holistic planimetric prediction to local volumetric prediction for 3d human pose estimation</h3>
Gyeongsik Moon, Ju Yong Chang, <u>Yumin Suh</u>, Kyoung Mu Lee<br>
<b>arXiv 2017</b>
[<a href="https://arxiv.org/pdf/1706.04758.pdf">Paper</a>]
<br><br>
</div>
</section>
-->
<section>
<div class="sectionTitle">
<h1>Work Experience</h1>
</div>
<div class="sectionContent">
<article>
AI Research Scientist at <b>Atmanity</b>. 2025.1 - present<br><br>
Senior Researcher at <b>NEC Labs America</b>. 2019.12 - 2025.1<br><br>
Postdoctoral researcher at <b>Seoul National University</b>, Seoul, Korea. 2019.4 - 2019.11<br>
Advisors: <a href="https://cv.snu.ac.kr/index.php/bhhan/">Bohyung Han</a>, <a href="https://cv.snu.ac.kr/index.php/~kmlee/">Kyoung Mu Lee</a><br><br>
Research Intern at <b>Microsoft Research Asia (MSRA)</b>, Beijing, China. 2016.9 - 2017.5<br>
Advisors: <a href="https://taomei.me/">Tao Mei</a>, <a href="https://jingdongwang2017.github.io/">Jingdong Wang</a><br><br>
Research Intern at <b>WILLOW, Inria</b>, Paris, France. 2014.12 - 2015.6<br>
Advisors: <a href="https://www.di.ens.fr/~ponce/">Jean Ponce</a>, <a href="http://cvlab.postech.ac.kr/~mcho/">Minsu Cho</a>
</article>
</div>
</section>
<section>
<div class="sectionTitle">
<h1>Academic Services</h1>
</div>
<div class="sectionContent">
<article>
<h3>Area Chair</h3>
ACCV 2022, CVPR 2024-2025
<h3>Senior Program Committee</h3>
IJCAI 2021, AAAI 2022
<h3>Program Committee</h3>
ACCV 2020, CVPR 2019-2023, ECCV 2020-2022, ICCV 2019-2023, ICPR 2020, MM 2017, WACV 2021<br>
AAAI 2020-2021, ICLR 2021-2024, ICML 2020-2024, IJCAI 2022, NeurIPS 2021-2022
<h3>Journal Reviewer</h3>
TPAMI, TIP, TMM, CVIU, MMSJ, AOAS, TCSVT
<!--<h3>Student Volunteer</h3>
ACCV 2012, MM 2018
<br><br>-->
</article>
</div>
<div class="clear"></div>
</section>
</div>
</div>
</body>
</html>