This interview is with Eray Altili, Cyber Security Architect at the World Bank.
Can you introduce yourself and highlight your key areas of expertise in the cybersecurity landscape?
With over two decades of experience in cybersecurity, my expertise spans the architecture, implementation, and governance of secure systems for complex, global organizations. I have led large-scale security architecture and transformations at institutions such as the World Bank Group, NATO, IBM, and the United Nations, focusing on hybrid multi-cloud environments, secure software development, and risk management.
My core strengths include designing and operationalizing security architecture and assessment programs that address cloud, application, and infrastructure risks. I specialize in driving DevSecOps adoption, automating security controls, and embedding secure development practices across the software lifecycle. My work has consistently resulted in measurable reductions in vulnerabilities and improved compliance with international standards such as NIST, CIS, OWASP, and GDPR.
I have successfully guided organizations through the adoption of Zero Trust architectures, policy-as-code frameworks, and advanced identity and access management solutions. My approach emphasizes automation, scalability, and the integration of threat modeling to proactively identify and mitigate risks. Additionally, I have developed and implemented AI security and risk management frameworks, enabling secure deployment and governance of emerging technologies like generative AI and MLOps.
Collaboration is central to my methodology—I work closely with cross-functional teams to align security strategies with business objectives, ensuring executive buy-in and effective communication of technical risks. My experience extends to incident response enhancement, third-party risk management, and the delivery of secure cloud-native and microservices architectures.
Through continuous learning and industry engagement, I remain at the forefront of cybersecurity trends, regularly contributing to international forums like the Cloud Security Alliance and to knowledge briefs on topics such as Web3, quantum computing, and AI security. My commitment is to deliver pragmatic, scalable, and forward-looking security solutions that enable organizations to innovate confidently while managing evolving threats.
You’ve mentioned implementing DevSecOps practices in CI/CD pipelines. Can you share a specific challenge you faced during this implementation and how you overcame it?
Integrating security tools into existing CI/CD workflows presented a significant challenge, primarily due to resistance from development teams concerned about pipeline slowdowns and alert fatigue. To overcome this, we embedded security scans directly into the pipeline using GitHub Advanced Security and Azure DevOps, automating static/dynamic analysis, infrastructure-as-code checks, secret scanning, and container vulnerability scanning. This provided real-time feedback to developers without disrupting their workflow. We also implemented granular severity thresholds to prioritize critical vulnerabilities, reducing noise by 60%. By demonstrating how early vulnerability detection accelerated release cycles and cut remediation costs by 40%, we secured team buy-in. The result was a 50% reduction in software vulnerabilities within six months while maintaining deployment velocity.
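To illustrate the severity-threshold idea, here is a minimal Python sketch of a pipeline gate that blocks a build only when findings exceed configurable per-severity limits. The field names, thresholds, and input format are illustrative, not the actual GitHub Advanced Security or Azure DevOps configuration.

```python
# Minimal sketch of a severity-gating step for a CI/CD pipeline.
# Assumes scanner findings have already been normalized to dicts with a
# "severity" field; field names and thresholds are illustrative.

import json
import sys

# Fail the build only above these counts per severity, so low-risk
# findings inform developers without blocking the release.
THRESHOLDS = {"critical": 0, "high": 0, "medium": 10, "low": 50}

def breached_thresholds(findings):
    """Return the severity levels whose counts exceed the allowed limits."""
    counts = {level: 0 for level in THRESHOLDS}
    for finding in findings:
        level = finding.get("severity", "low").lower()
        if level in counts:
            counts[level] += 1
    return {lvl: n for lvl, n in counts.items() if n > THRESHOLDS[lvl]}

if __name__ == "__main__":
    findings = json.load(open(sys.argv[1]))  # normalized scanner output (hypothetical path)
    breached = breached_thresholds(findings)
    if breached:
        print(f"Blocking deploy, thresholds exceeded: {breached}")
        sys.exit(1)
    print("Security gate passed.")
```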
In your experience with intergovernmental organizations, how have you navigated the unique security challenges of multi-cloud environments? Can you provide an example of a strategy that proved particularly effective?
Navigating multi-cloud security in intergovernmental organizations requires addressing complex compliance mandates, legacy system integration, and diverse stakeholder requirements. “Secure by Design,” “Secure by Default,” and “Secure by Operation” are complementary and crucial for building robust cybersecurity into systems and processes.
Starting with “Secure by Default” proved an effective strategy: we implemented a unified policy-as-code framework across the hybrid multi-cloud estate. This automated the enforcement of security baselines and compliance checks, reducing misconfigurations by 30% and improving adherence to NIST, CIS, and GDPR standards by 40%.
In practice, the three principles involve:
Centralized governance and automated compliance: Deploying Prisma Cloud, Azure Policy (Defender for Cloud), AWS Control Tower, and AWS Config to codify security rules (e.g., encryption standards, network segmentation) across every cloud environment and service, current or planned, enables “Secure by Default.” The result is continuous scanning and remediation of non-compliant resources, with real-time dashboards for audit trails.
Threat modeling integration: Embedding threat modeling (using tools like IriusRisk) early in design enables “Secure by Design,” identifying risks 40% faster.
Baseline alerts, playbooks, and runbooks for each component: A proactive, systematic approach that embeds security into the operational fabric of an organization’s IT and cloud environments, making security an inherent part of how systems are managed, monitored, and responded to rather than an afterthought. Continuous monitoring of security posture, performance, and adherence to baselines, with a feedback loop for improving the alerts, playbooks, and runbooks, enables “Secure by Operation.”
This strategy proved transformative, enabling consistent security postures across heterogeneous cloud architectures while meeting the strict regulatory demands of multilateral institutions. The automation also freed teams to focus on strategic initiatives rather than manual audits.
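To make the policy-as-code approach concrete, the following minimal Python sketch shows the core idea of codified baselines evaluated automatically against resource definitions. Real deployments express these rules in Azure Policy, AWS Config, or Prisma Cloud; the resource fields and rule names below are illustrative only.

```python
# Minimal illustration of policy-as-code: security baselines expressed as
# data and evaluated automatically against resource definitions.
# Resource fields and rule names are illustrative, not a vendor schema.

POLICIES = [
    ("storage-encryption-required",
     lambda r: r["type"] != "storage" or r.get("encryption_at_rest", False)),
    ("no-public-network-access",
     lambda r: not r.get("public_access", False)),
    ("tls-1.2-minimum",
     lambda r: r.get("min_tls_version", "1.2") >= "1.2"),
]

def evaluate(resources):
    """Return a list of (resource_id, violated_policy) pairs."""
    violations = []
    for resource in resources:
        for name, check in POLICIES:
            if not check(resource):
                violations.append((resource["id"], name))
    return violations

if __name__ == "__main__":
    # Hypothetical resource inventory pulled from a cloud API or IaC plan.
    sample = [
        {"id": "stg-001", "type": "storage", "encryption_at_rest": True,
         "public_access": True, "min_tls_version": "1.0"},
    ]
    for rid, policy in evaluate(sample):
        print(f"{rid}: violates {policy}")
```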
Zero Trust Architecture is gaining traction. Based on your real-world implementations, what’s one often-overlooked aspect of Zero Trust that organizations should pay more attention to?
One often-overlooked aspect of Zero Trust, based on real-world implementations, is the critical need for continuous, granular policy refinement and automation, driven by deep visibility into application dependencies and data flows. Many organizations treat Zero Trust as a technology purchase or one-time project, focusing on initial setup rather than ongoing operationalization. This leads to issues:
Complexity of Legacy Systems: Undocumented dependencies in existing environments make it hard to define precise “who needs to talk to what” rules.
Lack of Granular Visibility: Traditional tools often miss the application-layer context needed for truly least-privilege access.
Policy Sprawl & Manual Management: Manually managing policies across diverse enforcement points becomes unsustainable, leading to errors and security gaps.
Fear of Breaking Production: Without solid data, the “verify everything” principle can lead to overly broad policies to avoid disruption.
Underestimation of Operational Burden: The ongoing effort to monitor, validate, and adapt policies is frequently underestimated.
Why this is crucial:
True Least Privilege: Without deep insight into application communication, policies remain too broad, undermining Zero Trust’s core goal. Effective conditional access, least privilege, just-in-time access, and segmentation require understanding legitimate communication paths between segments.
Dynamic Policy Enforcement: Modern environments demand dynamic policies that adapt to changes, relying on real-time application behavior data.
Reduced Long-Term Overhead: Automating policy refinement, once dependencies are mapped, significantly reduces manual effort.
Faster Incident Response: Precise policies immediately flag deviations, enabling rapid threat detection and containment by restricting lateral movement.
To address this:
Invest in Application Discovery & Dependency Mapping Tools: Automatically map communications at a granular level.
Implement IAM and Network Observability: Gain deep insights into IAM and network activity, tied to application context.
Adopt Policy-as-Code & Automation: Define policies in code and automate their deployment.
Phased Rollout with Baselines: Start with critical applications, establish “known good” baselines, then gradually tighten policies.
Integrate Security into DevSecOps.
Zero Trust is fundamentally about knowing exactly who and what needs access to where, when, and under which conditions, and then enforcing continuous verification.
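The dependency-mapping and baseline idea can be sketched in a few lines of Python: learn legitimate communication paths during an observation window, then treat anything outside that baseline as a candidate for denial or alerting. The flow records and identifiers below are hypothetical; in practice they would come from cloud flow logs or an observability platform.

```python
# Sketch of deriving least-privilege segmentation rules from observed
# traffic, then flagging deviations. Flow records and application names
# are hypothetical, for illustration of the baselining approach only.

from collections import defaultdict

def build_baseline(flows):
    """flows: iterable of (source_app, dest_app, port) tuples observed
    during a learning window. Returns the set of allowed paths."""
    baseline = defaultdict(set)
    for src, dst, port in flows:
        baseline[(src, dst)].add(port)
    return baseline

def flow_allowed(baseline, src, dst, port):
    """True only if the flow matches the learned least-privilege baseline."""
    return port in baseline.get((src, dst), set())

# Learning window: only these paths were observed as legitimate.
observed = [
    ("web-frontend", "orders-api", 443),
    ("orders-api", "orders-db", 5432),
]
baseline = build_baseline(observed)

# A new flow outside the baseline is denied (or at minimum alerted on).
print(flow_allowed(baseline, "web-frontend", "orders-db", 5432))  # False
print(flow_allowed(baseline, "orders-api", "orders-db", 5432))    # True
```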
You’ve worked on automating threat modeling practices. Can you describe a situation where this automation significantly impacted a project’s outcome, and what lessons did you learn from it?
I automated threat modeling using modern tools and integrated it into the design process. Previously, manual risk assessments delayed projects by weeks and created bottlenecks. Collaboration and automation cut risk-analysis time by 60% and accelerated design-to-deployment by 40%, while improving risk coverage across applications.
The key to success was standardizing mitigation libraries and compliance rules, which ensured consistent, objective assessments and eliminated subjective risk ratings. Automation handled common threats, mitigations, and misconfigurations, while security architects reviewed complex scenarios, balancing efficiency with expert oversight. This reduced alert fatigue and ensured nuanced risks were addressed.
Embedding automated threat modeling early in the design process and integrating it with Azure DevOps led to a 45% reduction in post-deployment misconfigurations and vulnerabilities. The main lesson was that combining standardized frameworks, adaptive automation, and targeted human review transforms threat modeling from a compliance hurdle into a driver of secure innovation and faster delivery.
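As a simplified illustration of rule-driven threat modeling, the Python sketch below matches design metadata against a standardized threat-and-mitigation library, so common risks are flagged automatically while architects review only the exceptions. The component attributes and rules are illustrative, not the actual library used.

```python
# Sketch of rule-driven threat modeling: design metadata is matched
# against a standardized threat/mitigation library so common risks are
# flagged automatically. Attributes and rules are illustrative.

THREAT_LIBRARY = [
    {"when": {"internet_facing": True, "auth": "none"},
     "threat": "Unauthenticated access to exposed endpoint",
     "mitigation": "Require authentication (OIDC/OAuth2) and rate limiting"},
    {"when": {"stores_pii": True, "encryption_at_rest": False},
     "threat": "Sensitive data exposure at rest",
     "mitigation": "Enable encryption at rest with managed keys"},
]

def model_threats(component):
    """Return (threat, mitigation) pairs whose conditions all match."""
    findings = []
    for rule in THREAT_LIBRARY:
        if all(component.get(key) == value for key, value in rule["when"].items()):
            findings.append((rule["threat"], rule["mitigation"]))
    return findings

# Hypothetical component description captured from a design document.
api = {"name": "payments-api", "internet_facing": True, "auth": "none",
       "stores_pii": True, "encryption_at_rest": False}
for threat, mitigation in model_threats(api):
    print(f"{api['name']}: {threat} -> {mitigation}")
```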
With the rise of Generative AI, what’s the most unexpected security vulnerability you’ve encountered, and how did you address it?
The most unexpected vulnerability I encountered was indirect data leakage through generative AI tools, where users inadvertently exposed sensitive information via prompts to LLM systems like ChatGPT. This occurred despite safeguards, as staff used these tools for tasks like code review and document summarization, bypassing traditional security controls.
To address this, I recommend implementing:
1. Approved platforms: Use approved generative AI systems from cloud vendors, implement an AI gateway, and enable guardrails.
2. Strict usage policies: Prohibit input of confidential data into generative AI tools, with automated DLP enforcement scanning prompts in real time.
3. Technical controls: Deploy inline data masking for sensitive fields (PII, IP, credentials) and implement allow-listing for approved AI services.
4. Training program: Educate users on prompt hygiene and data handling through interactive workshops and simulated phishing attacks.
5. Continuous monitoring: Integrate AI usage telemetry and leverage DSPM, AISPM, and SIEM for anomaly detection, reducing exposure incidents.
This approach transforms generative AI from an uncontrolled risk to an accountable, secure enabler.
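As a simplified view of the inline DLP control described in points 2 and 3, the Python sketch below masks sensitive patterns in a prompt before it is forwarded to an approved model behind an AI gateway. The patterns are illustrative and far from exhaustive; production deployments would rely on a dedicated DLP engine.

```python
# Sketch of inline DLP for generative AI prompts: sensitive patterns are
# masked (or the request blocked) before the prompt reaches an approved
# model behind an AI gateway. Patterns are illustrative, not exhaustive.

import re

PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key":     re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scrub_prompt(prompt):
    """Return (masked_prompt, findings), where findings lists matched types."""
    findings = []
    masked = prompt
    for label, pattern in PATTERNS.items():
        if pattern.search(masked):
            findings.append(label)
            masked = pattern.sub(f"[REDACTED-{label.upper()}]", masked)
    return masked, findings

masked, findings = scrub_prompt(
    "Summarize this ticket from jane.doe@example.org, key AKIAABCDEFGHIJKLMNOP")
print(findings)   # ['email', 'aws_key']
print(masked)     # prompt with sensitive fields masked before forwarding
```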
In your leadership roles, how have you successfully bridged the communication gap between technical security teams and non-technical executives? Can you share an anecdote where this made a crucial difference?
During my time at an international military organization, I bridged the communication gap between technical security teams and non-technical executives by translating complex cybersecurity risks into clear business impacts. For example, while leading the modernization of IT infrastructure across the organization, I encountered initial executive hesitation due to perceived complexity and cost. To address this, I leveraged the C3 taxonomy, which connects strategic concepts and mission goals to technology components, capabilities, and processes, improving interoperability and mission alignment. I reframed technical findings as quantifiable risks to mission continuity and operational readiness. By presenting how these security measures would directly reduce the likelihood and impact of mission disruption from the deployed technical components, I secured executive support and funding for the initiative. This alignment led to improved compliance, a measurable reduction in misconfigurations, and enhanced resilience for the organization’s critical infrastructure. The experience reinforced the importance of translating technical outcomes into strategic value, ensuring security is recognized as essential to both mission success and organizational trust.
Looking at the evolving landscape of Industrial Control Systems (ICS) and SCADA security, what’s an emerging threat you’re particularly concerned about, and how are you preparing to address it?
The most concerning emerging threat in ICS/SCADA security is the rapid convergence of IT and OT networks, which exponentially expands attack surfaces while legacy systems remain vulnerable. This is exacerbated by internet-exposed devices: recent research using Shodan found over 110,000 ICS systems exposed online, including 6,500+ programmable logic controllers (PLCs). Attackers like the Cyber Av3ngers group exploit this, as seen in attacks on water utilities via exposed Unitronics PLCs. Critical vulnerabilities in widely deployed systems (e.g., ICONICS SCADA) further enable privilege escalation, data manipulation, or full compromise.
To address this, I prioritize:
Inadequate Authentication: Many SCADA systems lack strong authentication. Implement modern authentication that may include MFA, RBAC, temporary access, or at least strong passwords.
Proprietary Protocols without Encryption: Unencrypted data can be intercepted. Adopt standardized, encrypted protocols (OPC UA, Modbus TCP with TLS, Secure DNP3) based on regular security assessments, and end-to-end encryption.
Unpatched Vulnerabilities: Outdated software poses risks. Implement regular vulnerability assessments, patch management, and system modernization.
Insufficient Network Segmentation: Allows lateral movement for attackers. Design proper network segmentation, deploy firewalls and intrusion detection systems (IDS), use DMZs, and micro-segmentation.
Remote Access without Adequate Security Controls: Especially risky with 5G. Limit and monitor remote access, isolate sessions, and use time-limited credentials.
Third-Party Vendor Risks: External partners can introduce vulnerabilities. Vet and audit vendors, restrict access, use secure data exchange, and enforce security agreements.
Lack of Continuous Monitoring and Incident Response: Leaves breaches undetected. Implement SIEM, clear incident response plans, anomaly detection, and continuous threat hunting.
Deploy network and ICS-specific security systems: Use OT-aware monitoring platforms (e.g., Honeywell SCADAfence, Claroty) to baseline normal ICS operations and flag anomalies (e.g., abnormal command sequences) in real time (see the sketch after this list), while integrating deception technology to trap attackers.
Insufficient Employee Training: Human error is a major factor. Conduct regular cybersecurity training and foster a security-conscious culture.
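Here is a minimal Python sketch of the baselining approach mentioned above: learn the command mix for each device during a known-good window, then flag commands never seen in that baseline. Device names, commands, and thresholds are illustrative; in practice this logic runs on a passive network tap feeding an OT monitoring platform.

```python
# Sketch of baselining ICS/SCADA command traffic and flagging deviations.
# Device names, commands, and the min_seen threshold are illustrative.

from collections import Counter

def learn_baseline(events):
    """events: iterable of (device, command) pairs from a known-good window."""
    return Counter(events)

def detect_anomalies(baseline, events, min_seen=1):
    """Flag commands never (or rarely) seen for a device during learning."""
    alerts = []
    for device, command in events:
        if baseline[(device, command)] < min_seen:
            alerts.append(f"Unexpected command '{command}' on {device}")
    return alerts

# Learning window: mostly reads, with a few legitimate write operations.
known_good = [("plc-01", "READ_COILS")] * 100 + [("plc-01", "WRITE_SINGLE_COIL")] * 5
baseline = learn_baseline(known_good)

# Live traffic containing a command never observed during learning.
live = [("plc-01", "READ_COILS"), ("plc-01", "WRITE_MULTIPLE_REGISTERS")]
for alert in detect_anomalies(baseline, live):
    print(alert)  # flags the write command absent from the baseline
```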
Thanks for sharing your knowledge and expertise. Is there anything else you’d like to add?
You’re most welcome! I’m glad I could provide helpful information. If there’s one final thought I’d add, it’s about the human element in cybersecurity. While we often focus on technology, processes, and sophisticated attacks, the reality is that:
People are the strongest defense: A well-trained, security-aware workforce is your first and often most effective line of defense against social engineering, phishing, and insider threats.
People are the weakest link (if neglected): Conversely, a lack of awareness, training, or adherence to security protocols can open significant vulnerabilities, even in the most technically secure environments.
Therefore, continuous investment in security awareness training, fostering a security-conscious culture, and empowering employees to be proactive defenders is just as critical, if not more so, than any technical control. This includes understanding the “why” behind security policies, not just the “what.”
Thank you for the engaging discussion! If you have any more questions in the future, feel free to ask.