Interview with Ali Morgan, Founder, Jonomor

Featured

Featured connects subject-matter experts with top publishers to increase their exposure and create Q & A content.



This interview is with Ali Morgan, Founder, Jonomor.

Ali, for readers on Featured, could you introduce yourself and explain what you and Jonomor focus on in AI visibility and Answer Engine Optimization?

I’m Ali Morgan, founder of Jonomor, a Brooklyn-based AI visibility consulting practice. My background is in systems architecture and software engineering, and I’ve spent the last several years building at the intersection of AI infrastructure and digital strategy.

Jonomor focuses on a problem most businesses haven’t fully recognized yet: being found by AI answer engines is fundamentally different from being found by Google. When someone uses ChatGPT, Perplexity, or Gemini to research a company, make a purchasing decision, or evaluate a vendor, those systems don’t rank pages; they select entities. If your business isn’t structured as a clear, consistent, machine-readable entity across the web, you don’t rank lower; you don’t appear at all.

We formalized this into a discipline we call AI Visibility, the practice of ensuring a business is accurately and consistently retrieved and cited by AI systems. The operational layer of that is Answer Engine Optimization, or AEO: a structured, auditable approach to building the entity architecture, schema graph, topic authority, and cross-domain citation signals that determine whether AI engines recognize and cite you.

What makes Jonomor different is that we built the framework by testing it on our own ecosystem of eight live properties across eight industries before offering it to anyone else. We didn’t theorize about what works. We built it, measured it, and now we audit it for clients through a scanner that runs their domain through the same 50-point framework, with a full governance stack underneath every audit.

What experiences led you from traditional SEO to founding Jonomor and building an AEO-first approach?

Honestly, I didn’t come from traditional SEO. My background is software engineering and systems architecture. I’ve always approached visibility problems as infrastructure problems, not marketing problems.

What led to Jonomor was watching AI answer engines become the primary way people research and make decisions, and recognizing that the entire framework businesses use to think about digital visibility was built for a different system entirely. SEO optimizes documents for crawlers that rank pages. That model doesn’t translate to how ChatGPT, Perplexity, or Gemini actually work.

These systems retrieve entities, structured representations of who you are, what you do, and how you relate to other entities in the knowledge graph. A business that has excellent SEO but poorly structured entity architecture can be completely invisible to AI answer engines. That’s not a ranking problem. That’s an infrastructure problem.

So I did what engineers do: I built a framework to solve it systematically. The 50-point AI Visibility Framework defines what signals AI engines rely on and how to measure whether they’re working. I tested it on eight properties I built and operate across different industries before offering it to clients.

The AEO-first approach isn’t a pivot from SEO. It’s a recognition that the retrieval layer has changed. The fundamentals of being authoritative, consistent, and well-referenced still matter, but the mechanism through which that authority gets recognized is different. Building for that mechanism from the start is what AEO means in practice.

When you start an engagement at Jonomor, what is the very first diagnostic you run to assess a domain’s AI visibility?

The first question we ask is the simplest one: What do AI engines actually say about this business right now? Before looking at anything technical, we query ChatGPT, Perplexity, and Gemini directly. The responses tell you immediately whether the entity is represented clearly in AI systems, whether it is being described accurately, or whether it is invisible entirely.

Most businesses have never done this. They assume that because they rank on Google, AI engines know who they are. Often they don’t; worse, they’re described inaccurately or confused with something else. That gap between what AI engines say and what the business actually is becomes the foundation for everything that follows.
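The baseline querying step described above can be sketched as a simple record-keeping routine. This is an illustrative example only, not Jonomor's actual diagnostic; the engine names come from the interview, while the business name, questions, and record fields are hypothetical:

```python
from datetime import date

# Engines named in the interview; prompts and record structure are illustrative.
ENGINES = ["ChatGPT", "Perplexity", "Gemini"]

def baseline_queries(business: str, industry: str) -> list[dict]:
    """Build one query record per engine/question pair, to be filled in
    after each engine is queried manually or via its API."""
    questions = [
        f"What is {business}?",
        f"Who are the leading {industry} providers?",
        f"Is {business} a reputable {industry} vendor?",
    ]
    return [
        {"engine": engine, "question": q,
         "asked_on": date.today().isoformat(),
         "response": None, "accurate": None}  # recorded after each run
        for engine in ENGINES for q in questions
    ]

records = baseline_queries("Acme Robotics", "industrial automation")
print(len(records))  # 3 engines x 3 questions = 9 records
```

The point of the structure is the paper trail: each response is dated and tied to a specific question, so later changes can be attributed rather than assumed.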

In the first week of a schema-first rollout, what exact steps do you take to define Organization and Person entities and deploy JSON-LD across a site?

The work always starts with definition before implementation. The most common mistake is rushing to deploy structured data before the underlying entity is precisely defined. If the definition is wrong, the deployment compounds the problem.

That definition work involves establishing absolute consistency: the same name, the same description, the same relationships, declared the same way everywhere. AI systems build confidence in an entity through corroboration across multiple sources. Inconsistency across those sources creates ambiguity, and ambiguous entities don’t get cited.
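To make the "same name, same description, same relationships" idea concrete, here is a minimal sketch of a standard schema.org Organization/Person graph in JSON-LD. Jonomor's actual entity architecture is proprietary per the interview, so every name, URL, and @id below is a placeholder following the public schema.org vocabulary:

```python
import json

# Minimal schema.org JSON-LD linking an Organization to its founder.
# All names, URLs, and @id values are placeholders.
entity_graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.com/#org",
            "name": "Example Consulting",
            "description": "AI visibility consulting practice.",
            "url": "https://example.com/",
            "founder": {"@id": "https://example.com/#founder"},
            # sameAs corroborates the entity across external sources.
            "sameAs": ["https://www.linkedin.com/company/example"],
        },
        {
            "@type": "Person",
            "@id": "https://example.com/#founder",
            "name": "Jane Doe",
            "jobTitle": "Founder",
            "worksFor": {"@id": "https://example.com/#org"},
        },
    ],
}

# Serialized, this becomes the <script type="application/ld+json">
# payload embedded site-wide.
print(json.dumps(entity_graph, indent=2))
```

Note how the two nodes reference each other by @id: that explicit, repeated linkage is what lets corroborating sources resolve to one unambiguous entity.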

The specific architecture we use to implement that is proprietary to Jonomor engagements. What I can say is that the sequence matters as much as the implementation, and most organizations get the sequence wrong.

In your practice, how do you distinguish AEO from Generative Engine Optimization?

AEO and GEO address different questions. Answer Engine Optimization (AEO) asks whether AI systems know who you are. Generative Engine Optimization (GEO) asks whether AI systems use what you've written when constructing answers.

The distinction matters because you can't have effective GEO without AEO underneath it. Content that AI systems would otherwise select gets misattributed or ignored entirely when the underlying entity isn't clearly defined. We see this constantly: organizations with genuinely useful content that AI engines bypass because the entity producing it isn't recognized with confidence.

The correct sequence is entity architecture first, content authority second. Most organizations are working in the wrong order, or only working on one layer while ignoring the other entirely.

What practical differences have you observed between optimizing for ChatGPT versus Perplexity?

The practical difference is real-time retrieval versus trained knowledge. Some AI systems pull live web content when answering. Others respond from what they learned during training. Those are fundamentally different mechanisms, and they respond to optimization work on different timelines.

Understanding which system you’re optimizing for, and what signals each one actually responds to, is foundational to any serious AI Visibility strategy. Organizations that treat all AI engines as equivalent are leaving significant visibility on the table.

What we’ve learned from running systematic queries across multiple engines is that the gap between how different systems perceive the same entity can be substantial. Closing that gap requires understanding the mechanism, not just publishing more content.

How do you design a 10–15 piece topic cluster that signals authority to AI engines?

A topic cluster for AI visibility is built around one question: Does this entity own a subject area clearly enough that AI systems treat it as the reference point?

The mistake most organizations make is producing volume without depth. Ten articles that each skim the surface of a topic do not establish authority. AI systems evaluate whether a source can answer the full range of questions someone would ask about a subject, not just the obvious ones.

The other mistake is treating content as independent pieces rather than a connected body of work. AI systems traverse relationships between content. A cluster that does not declare its internal relationships explicitly is not functioning as a cluster from a machine-readable standpoint; it is just a collection of pages.
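One common, public way to declare those internal relationships explicitly is schema.org's hasPart/isPartOf. This is a generic sketch, not Jonomor's methodology (which the interview describes as proprietary); the URLs and page topics are placeholders:

```python
import json

# Declare pillar/cluster relationships machine-readably.
# All URLs are placeholders.
pillar_url = "https://example.com/guide/ai-visibility/"
cluster_urls = [
    "https://example.com/guide/ai-visibility/entity-architecture/",
    "https://example.com/guide/ai-visibility/schema-deployment/",
    "https://example.com/guide/ai-visibility/measurement/",
]

# The pillar page declares each cluster article as a part of itself...
pillar = {
    "@context": "https://schema.org",
    "@type": "Article",
    "@id": pillar_url,
    "hasPart": [{"@id": u} for u in cluster_urls],
}
# ...and each cluster article declares the pillar in return.
children = [
    {"@context": "https://schema.org", "@type": "Article",
     "@id": u, "isPartOf": {"@id": pillar_url}}
    for u in cluster_urls
]
print(json.dumps([pillar, *children], indent=2))
```

Without declarations like these, the pages relate only through anchor links the machine must infer; with them, the cluster is explicit in the markup itself.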

The specific architecture we use to build these clusters is part of Jonomor’s methodology. The principle is simple: depth, precision, and explicit connection. The implementation is where the real work lives.

Which KPIs and test routines best predict earning citations in AI engines?

The most important shift in thinking here is that traditional digital metrics—traffic, rankings, and impressions—do not measure what AI engines actually do with your entity. You need a different measurement layer entirely.

The only direct measurement that matters is querying AI engines regularly and documenting what they say. Not through a dashboard that infers visibility from proxies, but by actually asking the systems directly and recording the responses: whether your entity appears, how it is described, whether the description is accurate, and whether you are being cited or substituted for a competitor. Those are the signals that tell you whether the system is working.

Everything else is diagnostic; it helps you understand why the results look the way they do. Technical signals, content depth, and citation patterns across external domains inform where to focus, but they are not the outcome measure.

The frequency and consistency of that direct querying are what most organizations skip. They implement changes and assume they worked. Measurement in this discipline requires patience; the feedback loops are longer than most organizations are used to. It also requires rigor in documenting the baseline before making changes so you can actually attribute improvements to specific actions.
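The baseline-then-attribute discipline described above can be sketched as a snapshot comparison. The data shape is hypothetical, purely to illustrate comparing a documented baseline against a later query run:

```python
# Compare two dated snapshots of engine responses to attribute change
# to a specific intervention. Structure and data are illustrative.
def visibility_delta(baseline: dict, current: dict) -> dict:
    """Per-engine report of whether the entity went from uncited to cited."""
    report = {}
    for engine in baseline:
        before = baseline[engine]["cited"]
        after = current.get(engine, {}).get("cited", False)
        report[engine] = {"before": before, "after": after,
                          "improved": after and not before}
    return report

# Snapshot taken before a schema rollout vs. one taken weeks later.
baseline = {"ChatGPT": {"cited": False}, "Perplexity": {"cited": True}}
current  = {"ChatGPT": {"cited": True},  "Perplexity": {"cited": True}}
print(visibility_delta(baseline, current))
```

The value is less in the code than in the habit: no baseline snapshot means no way to say whether a change did anything.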

Jonomor’s approach to this is systematic and ongoing. The specifics of how we structure that measurement are part of what we deliver to clients.

Thanks for sharing your knowledge and expertise. Is there anything else you'd like to add?

AI Visibility today is where search was in the late 1990s: a real shift that most organizations haven't fully absorbed yet.

I’d add a note of caution about the noise in this space. Much of what’s being called AI Visibility or AEO right now is traditional SEO with new language. The underlying mechanics of how AI systems retrieve and cite entities are genuinely different from how search engines rank pages. Organizations that treat this as a rebranding exercise rather than a structural change will find themselves optimizing for the wrong thing.

Jonomor built the framework, tested it across eight live properties, and created the diagnostic infrastructure to measure it before offering it to anyone else. That’s the standard we hold ourselves to: proof before promotion.

For anyone who wants to understand where they actually stand, the free AI Visibility Scorer at jonomor.com is the right place to start. It uses the same 50-point framework we apply in paid engagements and returns a score in seconds. No obligation—just clarity about the gap.
