Interview with Raj Jagani, CEO, Tibicle LLP

Featured





This interview is with Raj Jagani, CEO, Tibicle LLP.

For readers on Featured.com, how do you describe your role as CEO of Tibicle and the kinds of AI-powered web and mobile products your team is best known for delivering?

I am the CEO and co-founder of Tibicle, a software development company based in Ahmedabad, India. I have been building software for over 12 years, starting as a backend developer working on ERP systems and directory integrations for companies such as Genea in the US. In 2021, I founded Tibicle with my co-founders, Arjun and Sandip.

Our team of 50+ developers builds web apps, mobile apps, and desktop applications for clients across India, Europe, and the US. The work we get asked about most is AI-powered products:

  • A recruitment app that uses AI to parse resumes, pre-screen candidates through chatbots, and analyze video interviews for sentiment.
  • A multilingual AI chatbot for an IT services firm that improved content discovery and conversions.
  • A customer support app where AI handled 75% of incoming queries within the first month of launch.

We are a build-and-ship team: clients come to us with a problem, and we turn it into a working product.

What were the key inflection points that took you from hands-on developer (.NET/PHP, MVC, MySQL) to leading AI innovation and scalable product delivery in information technology and services?

There were two real shifts. The first happened at Genea. I was a backend developer building directory integrations, connecting platforms like Okta, Azure AD, and Google Workspace using the SCIM protocol. That project forced me to think beyond writing code. I designed architecture, managed how different systems talked to each other, and solved integration problems directly with customers. That changed how I saw my role.
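To make the SCIM work concrete: a minimal sketch of a SCIM 2.0 "create user" payload of the kind those directory integrations exchange. The schema URN comes from RFC 7643; the field values and the helper function are illustrative, not taken from any real Genea integration.

```python
import json

# Minimal SCIM 2.0 "create user" payload, per RFC 7643. In a real
# integration this JSON body is POSTed to the provider's /Users endpoint.
SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"

def build_scim_user(user_name: str, given: str, family: str, email: str) -> dict:
    """Build the JSON body for a SCIM POST /Users request."""
    return {
        "schemas": [SCIM_USER_SCHEMA],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }

payload = build_scim_user("jdoe", "Jane", "Doe", "jane@example.com")
print(json.dumps(payload, indent=2))
```

Because every identity provider (Okta, Azure AD, Google Workspace) speaks this same schema, one payload builder can serve all of them; only authentication and base URLs differ per provider.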

The second was starting MCOOK, a POS system for restaurant management. Building my own product from scratch taught me things no job could: scoping, prioritizing, shipping with limited resources, and handling everything that breaks after launch. MCOOK did not scale the way I wanted, but it gave me the confidence and clarity to start Tibicle in 2021.

At Tibicle, I stopped writing code daily and started focusing on product architecture and team building. We went from a founding team to 50+ people delivering AI-powered apps, SaaS platforms, and mobile products for clients across Europe and the US. The shift was not sudden. Each role added one more layer of thinking beyond the code in front of me.

To ground this in a concrete example, can you walk us through one AI‑infused web experience your team took from prototype to production, including the stack choices you made and one decision you would change in hindsight?

The best example is an AI recruitment app we built at Tibicle. The client wanted to automate their entire hiring pipeline. We took it from discovery workshops through to deployment on both app stores.

The core stack was Node.js on the backend with a Flutter-based mobile frontend. We built an AI engine for resume parsing and matching candidates to open roles, a chatbot that pre-screened applicants and updated their profiles dynamically, and a video interview module where candidates recorded responses that were analyzed for sentiment. We integrated scheduling with Google Calendar and Outlook and added an analytics dashboard tracking time-to-hire, drop-off rates, and diversity data.

One decision I would change: we built the chatbot and video interview as two separate flows. In production, we realized they should have been one continuous candidate experience from the start. Stitching them together later cost us extra development time that proper flow planning upfront would have avoided. I now catch that mistake at the architecture stage before writing any code.

Shifting to mobile, in a recent app you shipped with AI in the loop, how did you balance on‑device UX, latency, and data privacy to reach production quality?

We shipped an AI-based customer support app at Tibicle, where this exact trade-off came up on every sprint call. The AI had to respond fast enough to feel conversational, but we were handling client data that required strict controls.

We kept the AI processing server-side rather than on-device. This was a practical decision: on-device models produced inconsistent performance on lower-end Android devices, and our user base was not limited to flagship phones. Hosting processing server-side kept response quality predictable across devices.

For latency, we optimized the API layer so chatbot responses felt near-instant. The UX trick was simple: we showed typing indicators while the backend processed, so users never stared at a blank screen.

On privacy, we drew on a separate project where we built a mobile app with military-grade encryption for handling large files. That experience taught us to design the data layer with encryption and access controls from day one rather than patch them in later.

Retrofitting privacy into a finished app is always more expensive and more fragile.

As usage grows, what is your go‑to backend pattern for scaling from 1,000 to 100,000 daily users in PHP/.NET and MySQL based on what you learned shipping at scale?

Early in my career, I spent years building PHP and MySQL systems: ERPs, dynamic websites, and full applications on CodeIgniter and other MVC frameworks. They worked fine at low traffic, but problems always appeared when load increased and everything was within one monolithic codebase. My go-to pattern comes down to three moves:

  1. Separate reads and writes. MySQL handles writes well, but reads under heavy load need caching. At Tibicle, we use Redis to take pressure off the database without rearchitecting everything at once.

  2. Break the monolith gradually; do not rewrite from scratch. Identify the heaviest operations and move them into independent services. At Genea, I built integration modules using Node.js alongside the existing system, communicating through RabbitMQ for asynchronous processing. That approach scaled without breaking what was already working.

  3. Separate your infrastructure layers early: database, cache, and queues. If they are still tangled together when traffic grows, no amount of code optimization will save you later.
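The first move, separating reads from writes with a cache, can be sketched as a read-through pattern. Here a plain dict stands in for Redis and another for MySQL so the sketch runs anywhere; in production you would swap in redis-py's `get`/`setex` and a real query. The key names and data are illustrative.

```python
import json

# Read-through cache: reads hit the cache first and fall back to the
# database only on a miss, which is then cached for the next caller.
cache: dict[str, str] = {}                          # stand-in for Redis
database = {"user:42": {"name": "Asha", "plan": "pro"}}  # stand-in for MySQL
db_reads = 0  # counts how often we actually touch the database

def get_user(key: str) -> dict:
    global db_reads
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)       # cache hit: no database pressure
    db_reads += 1
    row = database[key]              # cache miss: one real read
    cache[key] = json.dumps(row)     # populate the cache for next time
    return row

first = get_user("user:42")   # miss: goes to the database
second = get_user("user:42")  # hit: served from the cache
print(db_reads)  # one database read serves both lookups
```

The point of the pattern is visible in the counter: repeated reads of hot keys stop reaching MySQL at all, without any rearchitecting of the write path.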

The biggest lesson from shipping at scale: the jump from 1,000 to 100,000 daily users is not primarily a code problem; it is an architecture problem.

Operationally, beyond task deflection rate, which production metrics and runbook practices do you define before writing code to shape architecture decisions?

At Tibicle, before we write a single line of code, we define three things with the client:

  1. Error rate thresholds
  2. API response time ceilings
  3. Concurrent user limits the system must handle at launch

Everything else in the architecture follows from those numbers.

Error rate is the one people skip. Everyone talks about speed and uptime. But if your API returns a 200 status code with wrong data, your monitoring says everything is fine while users are having a broken experience. We define acceptable error rates per endpoint before development starts and build alerting around that from day one.
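A minimal sketch of per-endpoint error-rate alerting over a sliding window. The endpoints, thresholds, and window size are illustrative, not values from any Tibicle project; note that "error" here is whatever the handler flags as wrong, which can include a 200 response carrying bad data.

```python
from collections import deque

# Per-endpoint error-rate thresholds, defined before development starts.
THRESHOLDS = {"/search": 0.02, "/checkout": 0.001}  # max error fraction
WINDOW = 1000  # sliding window of recent requests per endpoint

_windows: dict[str, deque] = {}

def record(endpoint: str, ok: bool) -> bool:
    """Record one request outcome; return True if the alert should fire."""
    win = _windows.setdefault(endpoint, deque(maxlen=WINDOW))
    win.append(0 if ok else 1)
    error_rate = sum(win) / len(win)
    return error_rate > THRESHOLDS[endpoint]

# 970 good requests followed by 30 flagged ones: 3% > the 2% ceiling.
for _ in range(970):
    record("/search", ok=True)
alerting = False
for _ in range(30):
    alerting = record("/search", ok=False)
print(alerting)
```

Keeping a separate threshold per endpoint matters: a 2% error rate may be tolerable on search but catastrophic on checkout, and a single global number hides that difference.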

For runbook practices, we document what happens when things fail, not just how things work. Every service we deploy has a runbook entry covering:

  • what to check first
  • who gets alerted
  • what the manual fallback is

We started doing this after an incident with a client project where a background sync job failed silently for two days. Nobody noticed because there was no runbook and no alert tied to it.

That experience changed how we set up every project now. Define what failure looks like before you define what success looks like.

From a web mobility perspective, how do you design for seamless movement between web, PWA, and native mobile experiences, drawing on your early AJAX/jQuery background and today’s frameworks?

I started building web apps with jQuery and AJAX when that was how you made a page feel dynamic without a full reload. That background helps more than people think because it taught me to focus on what users experience between interactions: loading states, partial updates, and keeping interfaces responsive. Those problems haven’t changed—only the tools have.

At Tibicle, when a client needs presence across web and mobile, we make the decision early. If the core experience is content-heavy and does not need deep device access, we push for a responsive web app or PWA first. It ships faster and covers both desktop and mobile browsers without requiring separate codebases.

When the product needs camera access, push notifications, offline capability, or heavy local processing, we go native through Flutter. We built apps like Aegis and Gaming Mode this way. Flutter gives us a single codebase for Android and iOS without sacrificing performance.

The mistake I see teams make is deciding web versus native based on preference instead of what the product actually requires. We let the feature set drive the platform decision—not the other way around.

Under the hood, how do you structure transactional MySQL, analytics layers, and embeddings/vector stores so AI features don’t slow core user flows or complicate schema evolution?

The rule we follow at Tibicle is simple: AI processing never touches the same database instance that serves your core user flow. The moment you start running ML queries against your transactional MySQL, your response times become unpredictable.

When we built our AI recruitment app, the resume parsing, sentiment analysis, and candidate matching all ran as separate services. The transactional layer handling user accounts, scheduling, and application status remained on MySQL. AI outputs were processed asynchronously, and the results were written back only after completion. We used message queues to keep those layers communicating without blocking each other. I learned this pattern at Genea, where we used RabbitMQ to handle sync operations between directory integrations without slowing down the main platform.
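The queue-decoupled write-back pattern can be sketched with Python's standard library: the transactional path enqueues work and returns immediately, while a worker runs the slow AI step and writes the result back afterward. Here `queue.Queue` stands in for RabbitMQ, and `score_resume` is a placeholder for the real parsing and matching service.

```python
import queue
import threading

jobs: queue.Queue = queue.Queue()   # stand-in for a RabbitMQ queue
results: dict[str, int] = {}        # the "write-back" store for AI output

def score_resume(text: str) -> int:
    """Placeholder for the actual AI model call."""
    return len(text.split())

def submit_application(app_id: str, resume: str) -> str:
    """Transactional path: enqueue and return without blocking."""
    jobs.put((app_id, resume))
    return "received"  # the status the user sees immediately

def worker() -> None:
    """AI service: drains the queue and writes results back when done."""
    while True:
        app_id, resume = jobs.get()
        if app_id is None:          # shutdown sentinel for this demo
            break
        results[app_id] = score_resume(resume)

t = threading.Thread(target=worker, daemon=True)
t.start()
status = submit_application("app-1", "ten years of backend experience")
jobs.put((None, None))              # stop the worker after the demo job
t.join()
print(status, results)
```

The user-facing call never waits on the model; however long `score_resume` takes, the transactional layer's response time stays flat, which is exactly why the two layers must not share a database instance either.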

For analytics, we keep a separate read layer. Transactional data is synced into it on a schedule rather than in real time. The client dashboard pulling time-to-hire metrics and drop-off rates never competed with the live application flow.

Schema evolution stays manageable because each service owns its own data. Changing how the AI layer stores results does not require migrating your core user tables.

For a founder asking where to start, if you were to add one AI‑powered workflow to an existing web or mobile product, what would your 30–60–90 day plan look like?

Start with whatever your team is doing repeatedly that consumes the most hours — not the flashiest AI use case, but the most wasteful manual process. At Tibicle, every AI project we take on starts with that question.

  1. Days 1–30: Identify that one workflow. Sit with the team that is actually doing the work and map out where the bottleneck sits. Then validate whether an AI solution is even the right fix or if simple automation can handle it. We run discovery workshops at this stage with clients, and half the time the scope changes completely from what the founder originally pitched.

  2. Days 30–60: Build a working prototype against real data — not a demo with dummy inputs. We connect it to the actual production environment in a sandboxed way so the output reflects what users will genuinely experience. This is where you learn whether the AI adds value or just adds complexity.

  3. Days 60–90: Ship it alongside the existing manual flow. Do not replace anything yet; let both run together. Measure whether the AI output matches or beats human quality. Our customer support app client ran it this way before fully switching over.

Thanks for sharing your knowledge and expertise. Is there anything else you'd like to add?

One thing I want founders and enterprise teams to hear: Do not overthink the AI part. The technology is moving fast, but the fundamentals of building good software have not changed. Understand the problem first. Plan the architecture before writing code. Ship something real and measure what happens.

I have been building software for over 12 years. The tools I used at my first job look nothing like what we use at Tibicle today. But the thinking behind good product delivery is exactly the same. Listen to the people who will use it, build what actually solves their problem, and do not add complexity for the sake of sounding impressive.

At Tibicle, we have delivered over 62 projects across Europe, the US, and India with a team of 50+ people and a 90% client retention rate. That retention does not come from fancy pitches. It comes from showing up, doing the work properly, and being honest when something is not the right fit.

If anyone reading this has a product idea sitting in a Google Doc somewhere, stop waiting. Start building.
