Research Journey (Unfiltered)
If you landed on this page, you are probably interested in knowing more about “How to build a research profile?” or about my journey. Please refer to the publications page for the actual resources, as this page is more about the story behind the publications.
Truth be told, I am not the right guy to get inspiration from. I have an inferiority complex most of the time, and while I don't compare myself with others, I always look up to my peers' accomplishments and try to draw inspiration and push myself harder (sometimes to the point of hurting!) to improve myself and my skills.
That being said, there's a lot of serendipity and perseverance that goes into building a research profile, as it's not everyone's cup of tea. The ability to handle rejections is a true skill that a researcher develops over the course of one's journey. Occasionally, I take a break from all kinds of work. And my friends (who are fellow researchers) always suggest focusing on building skills and doing good work rather than targeting a publication, as good work eventually results in a good paper (or a good patent, or a good research blog post). While I agree with their advice, given this fast-paced field, the sheer amount of pressure and expectations built up around the community makes it even more difficult to stay on top of the field.
I am sorry, but I don't have any particular advice if you are looking for some. Nor am I qualified enough to provide any. I made this page for the following reasons, at a bare minimum:
To serve as a gentle reminder of the journey that I went through to become who I am now.
To show how perseverance and serendipity are very important in building an overall profile.
To be grateful for all the things that happened along my journey, and to serve as motivation.
To keep track of my research journey (and it might help someone out there!!)
Note: I wanted to keep this as the unfiltered, raw version of the truth, which won't be rosy. Obviously, this gets heavily filtered in a professional resume (which I believe is the case for almost everyone).
Publications
Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain)
Story: I was lucky to meet Subba (first author) during NeurIPS 2024, and we collaborated as our research interests coincided.
Truth be told, he managed most of the work along with the second author for the submission to ICLR 2025, which was accepted with ratings 6688 (OpenReview).
Pearls from Pebbles: Improved Confidence Functions for Auto-labeling
Story: I was lucky to come on board while talking with Harit (first author) about a different project. When I joined, most of the work was already done by him, as he built on his previous project, and I helped with the experiments. An interesting note: he was my TA in the first year of my Masters.
Our initial submission was rejected at ICML 2024 with ratings 6664 (OpenReview).
Finally, with major revisions (beefing up experiments and rewriting), it was accepted at NeurIPS 2024 with ratings 46678 (OpenReview).
PabLO: Improving Semi-Supervised Learning with Pseudolabeling Optimization
Under Review
Story: This was also with Harit; we made an initial submission after just 3 months of work, alongside our resubmission of the previous work (Pearls from Pebbles). Unfortunately, this submission to NeurIPS 2024 was rejected with ratings 3345 (OpenReview).
We were unable to resubmit to ICLR 2025 as all the authors were busy with other priorities.
Pretrained Hybrids with MAD Skills
Under Review
Story: My professor Fred suggested that Nick (first author) was working on a project that might be of interest to me (as I had already published my EMNLP 2023 paper on compression with Fred). Luckily, Nick said he needed folks to help with the experimental setup, and I quickly came on board. Truth be told, I wasn't very actively involved during the initial submission due to time constraints and limited knowledge.
Our first submission was rejected at NeurIPS 2024 with ratings 345 (OpenReview). Even we were aware that the work could be strengthened.
The second submission was also rejected at ICLR 2025 with ratings 356 (OpenReview). While we don't agree with all the reviews, that's the luck part of the publication process.
We didn't submit to ICML 2025 as we felt we could substantially strengthen the paper.
Tabby: Tabular Adaptation for Language Models
Under Review
Story: One of Fred's students recommended me to Sonia (first author), who was looking for someone who could help with empirical experimentation. That actually turned out well for me, as this was the first submission where I was involved to the best of my capacity.
Unfortunately, it was rejected at ICLR 2025 with ratings 135 (OpenReview). As we were aware of the shortcomings, it was quite understandable.
RICA^2: Rubric-Informed, Calibrated Assessment of Actions
Story: When I reached out to Prof. Yin about a research study, he put me in touch with one of his students, Abrar (who was a TA for one of my Masters courses), and thus started a long-term collaboration. He was working on action recognition and assessment, and this project took roughly one year to reach the submission stage.
We dropped the NeurIPS 2023 submission at the last minute as we realized the quality of the paper could be improved substantially.
Rejected at CVPR 2024 with ratings 223 (OpenReview).
With substantial revisions, it was accepted to ECCV 2024 with ratings (2 BA, 1 WA, 1 SA).
The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models
Story: My first-author paper at a reputable conference. I had been out of touch with the ML field for approximately 2 years and was truly grateful to Makesh, who introduced me to NLP and LLMs at the right time. We managed to put together a submission in just 5 months of work, with me leading the effort. As Makesh was already in his full-time role, he was able to help me with the writing, which was very helpful as I learned a lot of skills (whatever I used to write in 6 hours, he would come along and polish into a presentable form in just 30 minutes).
This was my first submission with Fred, and it was accepted to EMNLP 2023 Findings with soundness ratings 233 (OpenReview).
This paper helped me establish trust as a researcher and laid the road for the next ones. More importantly, it helped me get an internship at Amazon.
Enhancing Cartographic Design using Artificial Neural Network: A Geometric Approach for Map Generalization
Story: Siddharth and RamaKrishna are my neighbours, and some fun discussions on integrating ML into cartography turned into a presentation at NACIS, a cartography conference.
I have a minor regret that we didn't follow up after the presentation and convert this into a paper.
NITCAD - Developing an object detection, classification and stereovision dataset for autonomous navigation in Indian roads
Third International Conference on Computing and Network Communications (CoCoNet), 2019.
Story: This was part of my Bachelors thesis and my first-ever submission, which got rejected. We made some trivial errors in the submission (like misusing the terms detection vs. classification).
Now that I reflect, this paper, if presented the right way, could have had significant media coverage and impact, as such datasets are rather rare! I believe the study could have been presented much better.
Stereo Vision Based Speed Estimation for Autonomous Driving
18th International Conference on Information Technology (ICIT), 2019.
Story: This was also part of my Bachelors thesis. The author order doesn't truly reflect the contributions, as some of us were planning for a Masters. I don't regret it, as my friend Uma took care of this submission on behalf of all of us.
Android Based Control of Transmission line Robot for Traversing Through Straight line and Crossing of Tower Junctions
International Journal of Innovative Technology and Exploring Engineering (IJITEE), Volume-8 Issue-6, April 2019.
Story: I collaborated with Shruti, a PhD student at the Robotics lab, and worked part-time for approximately 2 years on this project, developing circuits and building an app to control the robot. That finally resulted in a journal publication and a real-field demo.
Now that I reflect, I honestly feel this journal is not reputable, this is not my best work, and the quality of the paper could be improved quite a lot. I might actually have been better off getting involved in other projects as well to expand my learning (e.g., I had to forgo some interesting robotics competitions).