This interview is with Daniel Kravets, Technical Lead at Vendict.
Can you introduce yourself and explain what ‘vibe coding’ means to you in the context of software development and AI?
My name is Daniel Kravets. I’ve been in tech since 2011, and for the last four years I’ve been a tech lead at Vendict, an AI-native GRC platform that tackles third-party risk with hallucination-free AI. In short, we help companies audit their vendors, speed up compliance processes, and build trust with clients. We work hard to ensure the AI truly understands the context and doesn’t just make up answers, which is critical when it comes to security and legal accuracy.
In a few words, vibe coding is an approach where you work with an AI assistant not as a tool but as a partner. The term was coined by Andrej Karpathy, with the idea that you can accept a piece of code without reading every line because you trust the overall “vibe” of the solution. In other words, you describe the problem in natural language, the AI suggests a solution, and you focus more on the direction than on each line.
I think the key here is to maintain control. That’s why in my work I always set up “guardrails”: tests, specifications, reviews. With those in place, vibe coding turns from blind trust into an accelerated cycle of “describe – generate – verify.”
How did your journey in software development lead you to explore the intersection of coding and AI? What sparked your interest in this field?
It all started with using GitHub Copilot in 2022. Before then, I was already familiar with and used IDEs that could handle refactoring and auto-generation, so the idea of “code on autopilot” sounded natural. But the real shift happened when agents appeared. When I first tried Cursor, I was amazed not so much by the fact that it wrote code, but by how it understood it. It actually read the project, found the necessary files, and explained the logic. From that moment on, I began experimenting with how to integrate such tools into my daily work, not just to speed up, but to transform the process itself.
You’ve mentioned the concept of ‘guardrails’ when using AI in coding. Can you share a specific example from your work where implementing such guardrails proved crucial?
Yes, without guardrails a project can quickly descend into chaos. One example: we used Cursor to generate API endpoints. The AI wrote the code well, but it started changing contracts between services without synchronizing them. We quickly realized we needed a validation layer: specifications, tests, and automated schema checks. Now our workflow is as follows: first the specification is written (including for the AI), then the code. If the AI deviates, the tests fail. That is a guardrail: a system that prevents the model from going too far without slowing us down.
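As a rough sketch of the idea, a spec-first contract check can be as simple as declaring the expected fields up front and failing the build when a generated endpoint drifts from them. Everything here is illustrative: the field names and the `validate_response` helper are hypothetical, not Vendict's actual stack.

```python
# Hypothetical contract guardrail: the spec is written before the code,
# and any AI-generated change must still satisfy it.
SPEC = {
    "vendor_id": str,
    "risk_score": float,
    "status": str,
}

def validate_response(payload: dict, spec: dict = SPEC) -> list:
    """Return a list of contract violations (empty list = payload conforms)."""
    errors = []
    for field, expected_type in spec.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return errors

# A conforming response passes; a silently changed contract fails.
good = {"vendor_id": "v-42", "risk_score": 0.7, "status": "approved"}
bad = {"vendor_id": "v-42", "risk": 0.7}  # the AI renamed/dropped fields

assert validate_response(good) == []
assert validate_response(bad) != []
```

In practice a check like this would live in the test suite (or be generated from an OpenAPI/JSON Schema spec), so a deviation surfaces as a red build rather than a production incident.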
In your experience, how has AI changed the dynamics of code review processes? Can you describe a situation where AI significantly impacted how your team approached code quality?
AI has significantly changed the code review process. Code now appears faster but requires more thought. While we used to spend 80% of our time writing and 20% reviewing, it’s now the opposite. The review process is now more about the “why” than the “what.”
In one case, the AI generated a working solution but without considering edge cases in the business logic. We kept the auto-generation but added a checklist to the review process with questions: what assumptions the AI made, where context leaks are possible, and what can be verified with tests.
As a result, quality has improved; the focus has simply shifted: the review is now not about checking syntax, but about ensuring common sense.
You’ve noted that product-oriented thinking is becoming increasingly important. How do you balance this with the technical aspects of AI-assisted coding in your projects?
If an engineer used to think, “I’m writing code,” now they think, “I’m building a product with AI.” AI takes over some of the routine work—but that’s precisely why it’s important not to get stuck in code but to focus on value. I often ask myself: Does what I’m doing now really advance the product? The balance here is to avoid falling into either “pure architecture for the sake of architecture” or “let’s generate it and then figure it out.” We solve this through short product loops: small features, quick feedback, and metrics. If the metric is growing, everything is fine; if not, we rethink it.
Can you share a challenging experience where you had to explain AI-generated code vulnerabilities to non-technical stakeholders? How did you approach this communication?
Yes, let’s say you need to explain to sales why an AI feature could behave unpredictably. You don’t talk about models and tokens. You simply say, “Imagine we have a genius intern who writes code quickly but sometimes misjudges the meaning of a task. He might make a mistake if the instructions aren’t clearly formulated.”
Then you show a specific example of how an incorrect prompt led to a log leak, and add, “We’ve implemented filters and tests, and now everything is under control.”
The main thing is not to scare them, but to show that the risk is understood and manageable.
You’ve emphasized the importance of asking better questions when working with AI. Could you walk us through your process of framing a complex coding problem for an AI system?
It all starts with me defining what a “good result” means. I usually write down three things: the goal, the context, and the success criteria. For example: “We need to add log filtering; it’s important not to break the metrics build; success means the tests are green and latency hasn’t increased.”
Then I turn this into a prompt: I provide specifics, a piece of code, and constraints.
After generating the code, I always check it—either with a test or manual analysis. If I see deviations, I refine the prompt.
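The goal/context/success-criteria structure above can be sketched as a tiny helper that assembles the prompt from its three parts. The `build_prompt` function and the example values are purely illustrative, not a tool we actually ship:

```python
def build_prompt(goal: str, context: str, criteria: str) -> str:
    """Assemble a structured prompt: goal, context, success criteria."""
    return (
        f"Goal: {goal}\n"
        f"Context: {context}\n"
        f"Success criteria: {criteria}\n"
    )

prompt = build_prompt(
    goal="Add log filtering by severity level",
    context="Filtering must not break the metrics build pipeline",
    criteria="All tests are green and latency does not increase",
)
print(prompt)
```

The point is not the helper itself but the discipline: if any of the three parts is missing, the prompt is not ready to send.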
AI coding isn’t “write it for me,” but “we solve the problem together, and I define the rules of the game.”
In your educational framework for recruiters, you bridged the gap between technical and non-technical understanding. How might you adapt this approach to educate the public about AI in coding?
It’s the same principle, only broader. People don’t need to know how the attention layer works—they just need to understand what the tool does and its limitations. I would create short workshops or videos where AI is presented not as magic, but as a tool: here’s the task, here’s the prompt, here’s the result, here’s what can go wrong if you ask the wrong question. It’s important to speak in simple terms, but not to oversimplify the meaning. When people understand the mechanics, they’re less afraid and begin to use AI consciously.
Looking ahead, how do you envision the role of human developers evolving as AI becomes more integrated into the software development lifecycle?
Developers are no longer just “people who write code.” They are becoming process designers, solutions architects, and AI mentors. Code is no longer an end, but a means. The primary value is the ability to see connections, formulate hypotheses, and test ideas. AI accelerates everything around us, but it doesn’t replace thinking. And those who learn to combine the speed of machines with the depth of human understanding will define the development of the future.
Thanks for sharing your knowledge and expertise. Is there anything else you’d like to add?
Yes. In short, don’t be afraid to experiment. AI development is like the early internet: lots of noise, lots of hype, but those who learn to use the tools correctly now will be one step ahead in a couple of years. I’d say this: learn to ask questions, keep your critical thinking sharp, and remember—code without meaning is worthless, even if it’s perfectly generated.