Research Journey (Unfiltered)

Please refer to the publications page for the actual resources; this page is more about the story behind the publications.

If you landed on this page, you are probably interested in knowing more about "How to build a research profile?" or about my journey. Just remember that when you look at someone's resume and feel awe at their achievements, there's a similar journey behind it, but not many document it in this fashion; so I wanted to put my version out there so that someone might be inspired, or at the least informed.

Truth be told, I am not the right guy to get inspiration from. I have an inferiority complex most of the time, and while I don't compare myself with others much, I always look up to my peers, try to get inspired, and push myself harder to improve my skills (and sometimes pushing harder hurts mentally, emotionally and physically!).

That being said, there's a lot of serendipity and perseverance that goes into building a research profile, as it's not everyone's cup of tea. The ability to handle rejections is a true skill that a researcher develops over the course of the journey. Occasionally, I take a break from all kinds of work. My friends (who are fellow researchers) always suggest focusing on building skills and doing good work rather than targeting a publication, as good work eventually results in a good paper (or a good patent, or a good research blog post). While I agree with their advice, given this fast-paced field, the sheer amount of pressure and expectations built up around the community makes it even more difficult to stay on top of the field. In short, stress is real and unavoidable in some situations; read Felix's story from Google DeepMind for a more mature perspective.

I am sorry, but I don't have any particular advice if you are looking for one, other than 'You and your health come first'. Neither am I qualified enough to provide any such advice, as I consider myself inexperienced.

I made this page to serve, at a bare minimum, as:

  • A reflection of the journey I went through to become who I am now.

  • A demonstration of how important perseverance and serendipity are in building an overall profile.

  • A way to be grateful for all the things that happened along my journey, and a source of motivation.

  • Maybe someone out there might get some takeaways/motivation from this page!

Note: As such, I wanted to keep this the unfiltered, raw-truth version, which won't be rosy. Obviously, all of this is heavily filtered in a professional resume (as is the case for almost everyone).

Publications

  • Linguistic Properties and Model Scale in Brain Encoding: From Small to Compressed Language Models
    Story

    • This is the project where Subba and I first interacted, during NeurIPS 2023. He reached out after my workshop presentation, saying he was interested in our EMNLP 2023 analysis and wanted to do similar things in the neuroscience area.

    • The commitments that he and I had at the time meant it took very long to complete and make this work happen!!! I got a job, Subba completed his postdoc, and in the meantime we had 2 other papers along the way (this one will be the 3rd of our collaboration, but the one we discussed first).

    • A classic example of realistic goal-setting and understanding the time it takes to get through a collaborative research project involving people from different backgrounds and priorities.

    • Currently under review

  • From Many Voices to One: Statistically Principled Aggregation of LLM Judges
    Story

    • I got into this project thanks to Dyah from Fred's lab. In an unrelated conversation, when I reached out to her about a project we were working on, she mentioned that Jitian was working on a submission on LLM-as-a-judge and that I might be interested in that work.

    • That started an interaction with Jitian and Changho, and I onboarded onto the project. Most of my work was setting up the code for the data and running experiments. This is one of the projects where I can't comprehend all of the math, and it reminds me of the power of collaboration, as I believe I alone wouldn't have done a project of this type (thanks to the math savvies, Jitian and Changho).

    • Submitted to NeurIPS 2025 and got rejected with ratings 5432 (OpenReview). But it was decently helpful, as we got some feedback to improve the paper and its presentation.

  • Compressed but Compromised? A Study of Jailbreaking in Compressed LLMs
    Story

    • One year of part-time work at my firm resulted in this workshop paper at NeurIPS 2025.

    • How I arrived at this final paper is quite interesting. I worked with a professor in Aug 2024 on understanding compression-based alignment. Unfortunately, that work got discontinued due to other priorities at the time, and I restarted some exploratory work on compression-related speculative decoding in May/June 2025, but that direction was not fruitful as the results were not positive.

    • While exploring that space, I realized some experiments could be done in the previous space itself, which led to this paper. A classic example of how the final paper can be very different from the initial direction, and how inspiration can come from different places at different times.

  • Instruction-Tuned Video-Audio Models Elucidate Functional Specialization in the Brain
    Story

    • A continued collaboration with Subba (first author) following our meeting at NeurIPS 2023.

    • This work is somewhat related to and follows up on some of the findings from our (ICLR) paper, and we submitted it to NeurIPS 2025. The initial scores were 3445, and even with a good rebuttal and final scores of 3555, averaging 4.5/6 (OpenReview responses), it got rejected :(.

    • This (along with R&B) is another example where good scores alone don't guarantee acceptance, and luck plays a role in deciding the fate of a submission at a venue.

    • Currently under review

  • R&B: Domain Regrouping and Data Mixture Balancing for Efficient Foundation Model Training
    Story

    • Albert works with Fred, and in some common discussions I found that he was working on a project area I was interested in. So I pinged him and joined this project.

    • We submitted to COLM 2025 with initial scores of 577 and post-rebuttal scores of 777 (OpenReview responses), but it was unfortunately rejected. This is a classic example of how luck might not favor you despite good ratings!!

    • Currently under review

  • Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain)
    Story

    • I was lucky to meet Subba (first author) during NeurIPS 2023, and we collaborated as our research interests coincided.

    • Truth be told, he managed most of the work, along with the 2nd author, for the submission to ICLR 2025, which got accepted with ratings 6688 (OpenReview).

  • Pearls from Pebbles: Improved Confidence Functions for Auto-labeling
    Story

    • I was lucky to onboard onto this while talking with Harit (first author) about a different project. When I joined, most of the work was already done by him, as he built on his previous project, and I helped with experimentation. An interesting note: he was my TA in the first year of my Masters.

    • Our initial submission was rejected at ICML 2024 with ratings 6664 (OpenReview).

    • Finally, with major revisions (beefing up experiments and rewriting), it went through at NeurIPS 2024 with ratings 46678 (OpenReview).

  • PabLO: Improving Semi-Supervised Learning with Pseudolabeling Optimization
    Story

    • This was also with Harit; we made the initial submission after just 3 months of work, alongside our resubmission of the previous work (Pearls from Pebbles). Unfortunately, this submission to NeurIPS 2024 got rejected with ratings 3345 (OpenReview).

    • We were unable to resubmit at ICLR 2025 as all the authors were busy with other priorities.

    • With major revisions and rewriting (the title was renamed to Rethinking Confidence Scores and Thresholds in Pseudolabeling-based SSL), we submitted to ICML 2025, and the initial reviews were 2223. But the rebuttal went very well (which happens rarely!), the ratings changed to 2333, and it was accepted to ICML 2025 (OpenReview); hurray!!

  • Pretrained Hybrids with MAD Skills
    Story

    • My professor Fred suggested that Nick (first author) was working on a project that might be of interest to me (I had already published my EMNLP 2023 paper on compression with Fred). Luckily, Nick said he needed folks to help with the experimental setup, and I quickly onboarded. Truth be told, I wasn't very actively involved during the initial submission due to time constraints and limited knowledge.

    • Our first submission was rejected at NeurIPS 2024 with ratings 345 (OpenReview). Even we were aware that the work could be strengthened.

    • But the 2nd submission also got rejected, at ICLR 2025, with ratings 356 (OpenReview). While we don't agree with all the reviews, that's the luck part of the publication process.

    • We didn't submit to ICML 2025 as we felt we could substantially strengthen the paper.

    • We ended up submitting to COLM 2025, with initial scores of 4666 and post-rebuttal scores of 5666 (OpenReview), and (luckily) got accepted :). I say lucky because another paper (R&B) with 777 got rejected from COLM 2025.

  • Tabby: Tabular Adaptation for Language Models
    Story

    • One of Fred's students recommended me to Sonia (first author), who was looking for someone to help with empirical experimentation. That actually turned out well for me, as I was involved to the best of my capacity in this project from my first submission.

    • Unfortunately, it was rejected at ICLR 2025 with ratings 135 (OpenReview). As we were aware of the shortcomings, it was quite understandable.

    • With major revisions, we submitted to ICML 2025 and received scores of 2223; during the rebuttal the reviews changed 2223 -> 2233 -> 2223, and it finally got rejected :(. But one thing I personally liked this time was that the AC engaged with our paper and raised very interesting points, which improved its quality!

    • We then submitted to NeurIPS 2025, incorporating the feedback received at ICML, but got rejected with scores 2334 and an average of 3/6.

    • With all 3 major conferences done (while these experiences are expected with large-scale submissions, I was slightly disappointed with some reviews in the past), we submitted to TMLR, which has a different reviewing style: by incorporating feedback from reviewers, we were able to get this one through (OpenReview reviews) :). It's interesting that there are no ratings for TMLR, only feedback, and overall I quite liked this format (even though folks consider it a lower bar compared with ICML, ICLR and NeurIPS, I believe it is a good place to publish, as I have seen engaging conversations that genuinely improved our paper).

  • RICA^2: Rubric-Informed, Calibrated Assessment of Actions
    Story

    • When I reached out to Prof. Yin for research study, he put me in touch with one of his students, Abrar (who was a TA for one of my Masters courses), and thus started a long-term collaboration. He was working on action recognition and assessment, and this project took roughly one year to reach the submission stage.

    • We dropped the submission to NeurIPS 2023 at the last minute as we realized the quality of the paper could be improved substantially.

    • Rejected at CVPR 2024 with ratings 223 (OpenReview).

    • With substantial revisions, it got accepted to ECCV 2024 with ratings (2 BA, 1 WA, 1 SA).

  • LETS Forecast: Learning Embedology for Time Series Forecasting
    Story

    • A continued collaboration with Abrar after the RICA project!

    • We dropped the submission to ICLR 2025 at the last minute, as Yin felt it could be improved a lot and there were minor issues with the results at the time, even though we had pushed hard to finish on time.

    • Post ICLR 2025, we kept improving the paper and submitted to ICML 2025; the ratings were 2233, went to 2333 after the rebuttal, and it got accepted (OpenReview)!!

  • The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models
    Story

    • My first first-author paper at a reputable conference. I had been out of touch with the ML field for approximately 2 years and was truly grateful to Makesh, who introduced me to NLP and LLMs at the right time. We managed to put together a submission with just 5 months of work, with me leading. As Makesh was already working full-time, he helped me with the writing, and that was very helpful as I learned a lot of skills (whatever I would write in 6 hrs, he would come and polish into a presentable form in just 30 min).

    • This was my first submission with Fred, and it went through at EMNLP 2023 Findings with soundness ratings 233 (OpenReview).

    • This paper helped me establish trust as a researcher and laid the road for the next ones. More importantly, it helped me get an internship at Amazon.

  • Enhancing Cartographic Design using Artificial Neural Network: A Geometric Approach for Map Generalization
    Story

    • Siddharth and RamaKrishna are my neighbours, and some fun discussions on integrating ML into cartography turned into a presentation at NACIS, a cartography conference.

    • I regret that we didn't follow up after the presentation and convert this into a paper.

  • NITCAD - Developing an object detection, classification and stereovision dataset for autonomous navigation in Indian roads
    Third International Conference on Computing and Network Communications (CoCoNet), 2019.
    Story

    • This was part of my Bachelors thesis and my first-ever submission, and it got rejected. We made some trivial errors in the submission (like misusing the terms detection vs. classification).

    • Now that I reflect, if I had done it the right way, this would have had significant media coverage and impact, as datasets were somewhat rare at that time! I believe the study could have been presented much better, and the analysis made solid, even back then (transformers came out in 2017 and this was roughly 15 months after that!).

  • Stereo Vision Based Speed Estimation for Autonomous Driving
    18th International Conference on Information Technology (ICIT), 2019.
    Story

    • This was also part of my Bachelors thesis. The author order doesn't truly reflect the contributions, as some of us were planning for Masters. And I don't regret it, as my friend Uma took care of this submission on behalf of us.

  • Android Based Control of Transmission line Robot for Traversing Through Straight line and Crossing of Tower Junctions
    International Journal of Innovative Technology and Exploring Engineering (IJITEE), Volume-8 Issue-6, April 2019.
    Story

    • I collaborated with Shruti, a PhD student at the Robotics lab, and worked part-time for approximately 2 years on this project, developing circuits and building an app to control the robot. That finally resulted in a journal publication and a demo in the field.

    • Now that I reflect, I honestly feel this journal is not reputable, this is not my best work, and the quality of the paper could be improved quite a lot. And I might have learned more if I had gotten involved in other projects in parallel rather than focusing on just one project for roughly 3 years (e.g., I had to forgo some interesting robotics competitions).