I am a technologist, author, and presenter who is passionate about doing software right. Most of my professional experience centers on enhancing and simplifying the developer experience in many areas: real-time systems, enterprise application frameworks, distributed systems, and cloud services. I strongly believe that we—as developers—write too much code, get distracted by incidental complexity, and end up with codebases that are hard to maintain. The result is lower productivity, stifled creativity, and a poor user experience. We can do better.

Declarative Backends

My current focus—Exograph—is a way to build fast, flexible, and secure backends in minutes. Developers express their domain model and authorization rules in a simple, type-safe, declarative language and get a backend ready for traditional and serverless deployment. They can easily add custom business logic and crosscutting concerns such as observability, auditing, and security using TypeScript and WebAssembly. Exograph allows a 10x reduction in lines of code compared to traditional backends. It is written in Rust to ensure a small memory footprint, fast startup, and predictable, ultra-fast execution.

The need for Exograph came from my years of helping companies build their backend systems. Those experiences made me realize that most developers put too much effort into creating backends that could be better expressed using a declarative approach. I also experienced this firsthand while building my own startup, LearnRaga.

Passion for Music

I am an ardent fan of Indian Classical Music (I play the bamboo flute!). I have been listening to it for a long time and volunteering for a local organization, but had made little progress in playing. The initial obstacle was finding a good teacher within a reasonable commuting distance. Eventually, I found a well-qualified teacher who could teach me remotely. Still, writing down compositions without any tools and getting pitch feedback while practicing on my own was time-consuming and frustrating. My wife (who learns vocal music) faced the same obstacles.

Examining our own experiences and talking to others made us realize that aspiring music students don't have the right tools to get started or to advance their practice. We needed a system that uses advances in computer science to deliver a pleasing, frustration-free, and effective way to learn this time-tested art form while recognizing the realities of modern times. Therefore, we co-founded LearnRaga—a unique platform for creating and playing compositions, practicing patterns, and getting real-time pitch feedback. It is helping many music learners begin or rediscover their passion. LearnRaga uses Scala on the backend and frontend, along with Rust/WebAssembly for performance-critical Digital Signal Processing (DSP).
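
To give a flavor of the performance-critical DSP involved, here is a minimal Rust sketch of autocorrelation-based pitch estimation on a short audio frame, the kind of routine a real-time pitch-feedback loop might run. The function, thresholds, and frequency range are illustrative assumptions, not LearnRaga's actual code.

```rust
/// Estimate the fundamental frequency (Hz) of a mono audio frame using a
/// naive autocorrelation search, or return None if no clear pitch is found.
/// Illustrative only; real implementations add windowing, interpolation, etc.
fn estimate_pitch(frame: &[f32], sample_rate: f32) -> Option<f32> {
    let n = frame.len();
    // Search lags corresponding to roughly 80 Hz .. 1000 Hz.
    let min_lag = (sample_rate / 1000.0) as usize;
    let max_lag = (sample_rate / 80.0) as usize;
    if min_lag == 0 || max_lag >= n {
        return None;
    }

    let energy: f32 = frame.iter().map(|x| x * x).sum();
    if energy < 1e-6 {
        return None; // effectively silence
    }

    // Pick the lag whose autocorrelation is strongest.
    let mut best = (0_usize, 0.0_f32);
    for lag in min_lag..=max_lag {
        let corr: f32 = frame[..n - lag]
            .iter()
            .zip(&frame[lag..])
            .map(|(a, b)| a * b)
            .sum();
        if corr > best.1 {
            best = (lag, corr);
        }
    }

    // Require the peak to be a meaningful fraction of the frame's energy.
    if best.0 == 0 || best.1 < 0.3 * energy {
        None
    } else {
        Some(sample_rate / best.0 as f32)
    }
}

fn main() {
    // A 440 Hz sine sampled at 44.1 kHz (~46 ms frame) should report ~440 Hz.
    let sample_rate = 44_100.0_f32;
    let frame: Vec<f32> = (0..2048)
        .map(|i| (2.0 * std::f32::consts::PI * 440.0 * i as f32 / sample_rate).sin())
        .collect();
    println!("{:?}", estimate_pitch(&frame, sample_rate));
}
```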

Cloud Computing

Cloud Foundry Launch

Earlier, I was in a leadership position at VMware. As part of the team that created Cloud Foundry, I played an instrumental role in its architecture and implementation. I built and directed a team of developers with a wide range of expertise. My team was responsible for creating an awesome experience for Cloud Foundry users deploying apps built with various frameworks, runtimes, and services. The team had a unique combination of responsibilities: implementation, working with other groups inside VMware as well as outside partners, and serving as the public face of the Cloud Foundry team. I also created the Spring Cloud Connector project to allow applications to consume cloud services, public and private, in a principled manner.

Spring and Aspect-oriented Programming

I came to VMware through the acquisition of SpringSource, where I was an early employee and a contributor to many areas including, of course, aspect-oriented programming. I was involved in the specification, design, and implementation of many open-source and commercial products. I worked with several clients on the architecture, design, and implementation of enterprise applications, as well as on specific issues such as adding monitoring, improving performance, and increasing availability. This experience would later shape my perspective on developing robust and scalable enterprise applications with as little ceremony as possible.

Just before joining SpringSource, I had published AspectJ in Action, the best-selling book on aspect-oriented programming, which has been lauded by industry experts for its presentation of practical and innovative approaches to solving real-world problems.

Real-Time Distributed Systems

My first substantial experience as a professional software engineer was with real-time operating systems. These systems must be computationally efficient, use minimal memory, and carefully manage concurrency. Years later, when I started working with Rust, this experience made me appreciate its unique power. I spearheaded a team that created a framework for modeling data flow and finite state machines, used to build systems for robotics, factory-floor automation, and even healthcare.
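
As a small illustration of that modeling style, here is a minimal Rust sketch of a device controller expressed as an explicit finite state machine; the states and events are hypothetical and not taken from that framework.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Idle,
    Moving,
    Faulted,
}

#[derive(Debug, Clone, Copy)]
enum Event {
    Start,
    TargetReached,
    Fault,
    Reset,
}

/// Pure transition function: current state + event -> next state.
/// Exhaustive matching means every combination is accounted for.
fn next(state: State, event: Event) -> State {
    match (state, event) {
        (State::Idle, Event::Start) => State::Moving,
        (State::Moving, Event::TargetReached) => State::Idle,
        (_, Event::Fault) => State::Faulted,
        (State::Faulted, Event::Reset) => State::Idle,
        (s, _) => s, // ignore events that don't apply in the current state
    }
}

fn main() {
    let mut state = State::Idle;
    for event in [Event::Start, Event::Fault, Event::Reset, Event::Start] {
        state = next(state, event);
        println!("{:?} -> {:?}", event, state);
    }
    assert_eq!(state, State::Moving);
}
```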

I led a team that created pub/sub messaging middleware for performance- and mission-critical environments such as industrial automation. We didn't use the term at the time, but the product targeted the Internet of Things (IoT). Working through the challenges of distributed systems, where hundreds of devices must communicate with each other without any central single point of failure while still meeting the system's real-time constraints, was a humbling but exhilarating experience. Here I got a taste of real-world distributed systems, which until then I had only tinkered with in a research setting.

Machine Learning Research

I started my journey as a machine learning researcher in the SPANN Lab at the Indian Institute of Technology, Bombay, under Prof. Uday Desai. It gave me a taste of the early thinking in neural networks and how they might be used for a wide range of problems. My research focused on the use of neural networks for time-series forecasting.

Initially, working with neural networks was an exercise in patience. Running on the modern hardware of yesteryear (a single-core, ~100 MHz machine), it often took ~36 hours to train a neural network to a somewhat acceptable error value. This set me on a path to optimize the learning algorithm in various ways. The initial attempts were garden-variety optimizations: minimize allocations, cache expensive computations, and so on. That made some difference. The next step was looking closely at the training process itself, which led to a few techniques that resulted in a published paper. The lab also had many computers connected by a network, which led me to try putting all of those machines to use: the main program distributed parts of the training steps across them and gathered the results. This was my first taste of distributed systems, which would continue to be a theme of my professional career.
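
As a rough illustration of that scatter/gather idea, here is a minimal Rust sketch of a data-parallel gradient step, with threads standing in for the lab's networked machines; the model (a single linear neuron), the data, and the learning rate are made up for the example.

```rust
use std::thread;

/// Gradient of the squared error for a linear model y = w * x over one shard.
fn partial_gradient(w: f64, shard: &[(f64, f64)]) -> f64 {
    shard.iter().map(|&(x, y)| 2.0 * (w * x - y) * x).sum()
}

fn main() {
    // Toy data generated from y = 3x; the true weight is 3.0.
    let data: Vec<(f64, f64)> = (1..=8).map(|i| (i as f64, 3.0 * i as f64)).collect();
    let n = data.len() as f64;
    let mut w = 0.0;
    let lr = 0.005;

    for _ in 0..200 {
        // "Distribute" the step: one worker (thread) per shard of the data.
        let workers: Vec<_> = data
            .chunks(2)
            .map(|shard| {
                let shard = shard.to_vec();
                thread::spawn(move || partial_gradient(w, &shard))
            })
            .collect();

        // Gather the partial gradients and apply a single update.
        let grad: f64 = workers.into_iter().map(|h| h.join().unwrap()).sum();
        w -= lr * grad / n;
    }

    println!("learned w = {:.3}", w); // converges close to 3.0
}
```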