AI Researcher · CV · NLP · GNN · RL · Interpretable Models
Advancing novel neural architectures, graph-based learning, and vision systems across research and applied domains. Heading to the University of Zurich (UZH) for my Master's in September 2026.
My research spans Computer Vision, NLP, Graph Neural Networks, Reinforcement Learning, and Interpretable Models — exploring how machines can perceive, understand, and act across complex real-world domains.
A novel neural architecture designed from the ground up for complete interpretability. Traditional activation functions are removed entirely — instead, the model is structured so that each layer naturally and necessarily increases the polynomial degree of the entire network. The result: every single computation in the model is analytically extractable. No black box. No approximations. Every bit of detail, on demand.
Unlike standard networks, where depth adds opacity, each layer here raises expressiveness and tractability simultaneously.
Because the architecture is polynomial by construction, every detail of what the model learned can be extracted analytically — no probing, no surrogate models, no guessing.
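The idea above can be sketched in a few lines. This is a hypothetical toy, not the published PNN design: it assumes each layer is a plain quadratic map (no activation function), so stacking L layers yields an exact polynomial of degree 2^L in the raw input, and the whole network can be expanded symbolically to read off every coefficient.

```python
import sympy as sp
import numpy as np

# Hypothetical sketch: each layer computes y_i = x^T A_i x + b_i^T x + c_i,
# a degree-2 polynomial in its input with no activation function.
# Composing two such layers gives a degree-4 polynomial overall.

rng = np.random.default_rng(0)

def poly_layer(x, A, B, c):
    """Quadratic layer; degree of the network doubles with each such layer."""
    n = len(x)
    return [
        sum(A[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n))
        + sum(B[i][j] * x[j] for j in range(n))
        + c[i]
        for i in range(len(c))
    ]

d = 2
x = sp.symbols(f"x0:{d}")

# Two random quadratic layers (2 -> 2 -> 1), plain Python floats for sympy.
A1, B1, c1 = (rng.normal(size=(2, d, d)).tolist(),
              rng.normal(size=(2, d)).tolist(),
              rng.normal(size=2).tolist())
A2, B2, c2 = (rng.normal(size=(1, 2, 2)).tolist(),
              rng.normal(size=(1, 2)).tolist(),
              rng.normal(size=1).tolist())

h = poly_layer(list(x), A1, B1, c1)
y = poly_layer(h, A2, B2, c2)[0]

# The entire network, expanded into one explicit polynomial: every learned
# relationship is a coefficient you can read directly, no probing needed.
poly = sp.Poly(sp.expand(y), *x)
print(poly.total_degree())  # degree 2**2 = 4 after two quadratic layers
```

Here interpretability falls out of the construction: `poly.coeffs()` lists every coefficient of the full input-output map, which is exactly the "analytically extractable" property described above.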
Applying the PNN framework to model and extract interpretable representations of complex human behavioural patterns.
I'm a final-year B.Tech student in Computer Science (AI & ML) at Chennai Institute of Technology, with a CGPA of 8.2/10. I'll be joining the University of Zurich (UZH) for my Master's in September 2026. My research spans Computer Vision, NLP, Graph Neural Networks, Reinforcement Learning, and Interpretable Models — exploring how machines can perceive, understand, reason, and act across complex real-world problems.
I've had the privilege of conducting research at NIT Tiruchirappalli (Multimodal VQA, IoT, Virtual Surgery) and Universiti Tunku Abdul Rahman, Malaysia (Stock Forecasting, Transformers).
My current original research, White Box Polynomial Neural Networks, centres on a novel architecture built without any traditional activation functions: the model's structure ensures each layer increases the polynomial degree of the entire network, making every learned relationship analytically extractable and fully interpretable.
Outside research: Smart India Hackathon 2025 Finalist and active language learner (German A1 — Goethe-Institut).
A novel architecture that eliminates traditional activation functions entirely. The model is structured so each successive layer raises the polynomial degree of the entire network — depth equals expressiveness equals interpretability. Because the computation is polynomial throughout, every learned relationship is analytically extractable. No black box. Built to explain complex patterns including human behaviour.
Whether you want to discuss AI research, novel architectures, or potential collaborations — or just want to connect before I head to UZH — my inbox is always open.
Actively seeking opportunities in research and applied AI.