About Me

I am Midhun Harikumar. Over my tenure at Adobe, I have progressed from Senior Applied Scientist to Research Scientist, working on cutting-edge developments in generative AI and diffusion models. My work improves the controllability and editability of these models for image manipulation and texture editing.

Experience & Education

For a detailed overview of my experience and education, please visit my LinkedIn profile.

Patents

US Patent 12,322,007 (2025): Systems and methods for color palette optimization
Inventors: Pranav Vineet Aggarwal, Midhun Harikumar, Ajinkya Gorakhnath Kale
Date of Patent: June 3, 2025

US Patent 12,299,939 (2025): Techniques for generating a novel image using tokenized image representations
Inventors: Midhun Harikumar, Pranav Aggarwal, Ajinkya Gorakhnath Kale
Date of Patent: May 13, 2025

US Patent 12,277,630 (2025): Systems and methods for image processing
Inventors: Pranav Vineet Aggarwal, Midhun Harikumar, Ajinkya Gorakhnath Kale
Date of Patent: April 15, 2025

US Patent 12,260,480 (2025): Machine learning-based generation of recommended layouts
Inventors: Sukriti Verma, Pranav Vineet Aggarwal, Peter O'Donovan, Midhun Harikumar, Ajinkya Kale
Date of Patent: March 25, 2025

US Patent 12,008,698 (2024): Learned image representation for text-image localization
Inventors: Midhun Harikumar, Pranav Aggarwal, Baldo Faieta, Ajinkya Kale, Zhe Lin
Date of Patent: June 11, 2024

US Patent 11,934,448 (2024): Keyword localization digital image search
Inventors: Pramod Srinivasan, Zhe Lin, Samarth Gulati, Saeid Motiian, Midhun Harikumar, Baldo Antonio Faieta, Alex Filipkowski
Date of Patent: March 19, 2024

US Patent 11,574,392 (2023): Automatically merging people and objects from multiple digital images to generate a composite digital image
Inventors: Midhun Harikumar et al.
Date of Patent: February 7, 2023

Publications

My research focuses on advancing the field of generative AI, with particular emphasis on diffusion models and their applications in computer vision and graphics. Selected publications include:

TexSliders: Diffusion-Based Texture Editing in CLIP Space

ACM SIGGRAPH 2024
Harikumar, M., Aggarwal, P., Kale, A.
We present a novel framework for semantic texture manipulation that leverages the latent space of pretrained diffusion models together with CLIP embeddings. Our method enables fine-grained control over texture attributes while preserving structural integrity.
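
As a rough illustration of the general idea of editing in CLIP space (a minimal sketch, not the published TexSliders implementation), the snippet below computes a normalized CLIP-space direction between two texture descriptions using the Hugging Face transformers CLIP API; the prompts, the checkpoint name, and the 0.7 slider strength are all illustrative assumptions.

```python
# Minimal sketch: a CLIP-space "texture direction" between two text prompts.
# Not the TexSliders method itself; just the generic direction-editing idea.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def text_embedding(prompt: str) -> torch.Tensor:
    tokens = tokenizer(prompt, return_tensors="pt", padding=True)
    with torch.no_grad():
        emb = model.get_text_features(**tokens)
    return emb / emb.norm(dim=-1, keepdim=True)  # unit-normalize

# Direction from a "source" texture description toward a "target" one.
# Scaling this direction and adding it to a conditioning embedding is one
# generic way to nudge a CLIP-conditioned generator along a semantic axis.
src = text_embedding("a photo of smooth wood")          # illustrative prompt
dst = text_embedding("a photo of rough, weathered wood")  # illustrative prompt
direction = dst - src
edited = src + 0.7 * direction  # 0.7 is an arbitrary "slider" strength
```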

PREDITOR: Text-Guided Image Editing with Diffusion Prior

Preprint, 2024
Harikumar, M., et al.
We propose a novel approach for text-guided image editing that utilizes diffusion priors to achieve semantically meaningful modifications while maintaining image coherence and photorealism.

Enhanced Controllability in Diffusion Models through Feature Disentanglement

International Conference on Machine Learning (ICML), 2024
Harikumar, M., Aggarwal, P., Kale, A., Lin, Z.
We introduce an architecture for diffusion models that separates spatial content and style representations, leading to improved manipulation capabilities and more precise control over generated outputs.
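
As a toy sketch of the general notion of splitting a representation into spatial content and global style (an illustrative stand-in, not the architecture from this paper), the PyTorch module below attaches a spatial content head and a pooled style head to a shared convolutional backbone; all layer sizes are arbitrary.

```python
# Toy content/style disentanglement sketch; layer sizes are placeholders.
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    def __init__(self, channels: int = 64, style_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Content head keeps spatial resolution; style head pools it away.
        self.content_head = nn.Conv2d(channels, channels, 1)
        self.style_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, style_dim)
        )

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        return self.content_head(h), self.style_head(h)

# Content is a spatial feature map; style is a single global vector.
content, style = DisentangledEncoder()(torch.randn(1, 3, 64, 64))
```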

For a comprehensive list of publications and citations, please visit my Google Scholar profile.