AI trust and the affiliate problem in generative search.
Consumers trust AI-generated recommendations. That trust is, at best, premature.
There is a growing cultural assumption that when ChatGPT or Google's AI Overviews recommend a product, a service, or a brand, the recommendation is somehow more objective than a traditional search result. The AI evaluated the options. It considered the evidence. It gave you the best answer.
The reality is far more complicated. Brands, marketers, and consumers all need to understand what is actually happening behind the curtain.
AI systems take shortcuts.
Here is the uncomfortable truth about how AI-generated recommendations work in practice: these systems are engineered to conserve compute.
When you prompt ChatGPT with "what's the best project management tool?" the ideal process would be for the AI to read hundreds of pages of documentation, user reviews, feature comparisons, and expert analyses, then synthesize a genuinely informed answer.
What actually happens is lazier. The system queries for existing "best of" listicles, clicks into the top-ranking articles, and largely recycles whatever those lists recommend. The brands at the top of a consumer magazine roundup or a Forbes listicle end up at the top of ChatGPT's answer. Not because the AI independently determined they were superior, but because the AI read the same listicle you could have found yourself.
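To make that pattern concrete, here is a minimal sketch of the retrieve-and-recycle loop, with a stubbed search index standing in for the live web. This is a conceptual illustration under stated assumptions, not any vendor's actual pipeline; every name in it (the stub data, the tool names, answer_best_of_query) is invented.

```python
# A conceptual sketch of the retrieve-and-recycle pattern described above.
# The search results are stubbed; all names are invented for illustration.

# Stub: pretend these are the top-ranking "best of" listicles for a query.
STUBBED_SEARCH_RESULTS = {
    "project management tool": [
        {"title": "The 10 Best Project Management Tools",
         "picks": ["ToolA", "ToolB", "ToolC"]},
        {"title": "Top Project Management Software This Year",
         "picks": ["ToolA", "ToolD"]},
    ],
}

def answer_best_of_query(category: str) -> list[str]:
    """Recycle the picks from top-ranking listicles instead of
    independently evaluating every option in the category."""
    listicles = STUBBED_SEARCH_RESULTS.get(category, [])
    recommendations: list[str] = []
    for article in listicles[:3]:            # read only the top few results
        for pick in article["picks"]:
            if pick not in recommendations:  # de-duplicate across articles
                recommendations.append(pick)
    return recommendations

print(answer_best_of_query("project management tool"))
# -> ['ToolA', 'ToolB', 'ToolC', 'ToolD']: the listicles' picks, verbatim
```

Nothing in that loop evaluates quality. Whatever biases shaped the listicles flow straight through to the answer.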
The listicles are pay-to-play.
This matters enormously once you understand the economics.
In the beauty and consumer space, the major publications operate listicle placements as a revenue stream. A brand purchasing $50,000 in advertising often receives a placement in a "Best of" article as a value-add. Some publications sell listicle spots directly. The affiliate model means that publications earn commission on every sale generated through their recommendation links, creating an incentive to recommend products with the highest affiliate payouts rather than the highest quality.
The chain looks like this: a brand pays for placement in a listicle, the listicle ranks well in Google, the AI reads the listicle and surfaces those brands as recommendations, and the consumer receives what they believe is an objective AI-generated answer.
Every link in that chain involves money changing hands. None of that context reaches the consumer.
Manipulation is rampant, and it works.
Beyond the pay-to-play listicle ecosystem, there is outright manipulation happening at scale.
One notable example: a company published hundreds of AI-generated articles built on a simple template, "Top [Category] [Service Providers]," and placed itself at #1 in every category. These articles ranked in Google, got picked up by AI systems, and resulted in that company appearing as the top recommendation across dozens of categories in which it had no demonstrable expertise.
At industry conferences, speakers have pulled up these recommendations and asked audiences of hundreds of seasoned professionals whether they had heard of the companies being recommended. Nobody raised a hand. Yet these unknown entities consistently top AI-generated answers.
The manipulation works because of how Reciprocal Rank Fusion (RRF) operates. AI systems run multiple subqueries and synthesize results based on which entities appear most frequently across those subqueries. Flood the search results with enough self-promotional content and you appear across enough subqueries to trigger consistent AI recommendations.
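For the curious, here is a minimal sketch of how RRF fuses subquery results, using the conventional k = 60 constant from the original RRF formulation. The brand names and rankings are invented to illustrate the flooding effect; production systems layer additional signals on top of this fusion step.

```python
# A minimal sketch of Reciprocal Rank Fusion (RRF). Brand names and
# rankings are invented to show why frequency across lists wins.

def rrf_scores(rankings: list[list[str]], k: int = 60) -> dict[str, float]:
    """Fuse several ranked lists into one score per entity.

    Each entity earns 1 / (k + rank) from every list it appears in,
    so appearing *often* can beat ranking first in a single list.
    """
    scores: dict[str, float] = {}
    for ranked_list in rankings:
        for rank, entity in enumerate(ranked_list, start=1):
            scores[entity] = scores.get(entity, 0.0) + 1.0 / (k + rank)
    return scores

# Results of three hypothetical subqueries. "SpamCo" never ranks first,
# but it shows up in every list; "QualityCo" tops only one list.
subquery_results = [
    ["QualityCo", "SpamCo", "BrandX"],
    ["BrandY", "SpamCo", "BrandZ"],
    ["BrandW", "SpamCo", "BrandY"],
]

fused = sorted(rrf_scores(subquery_results).items(),
               key=lambda item: item[1], reverse=True)
print(fused)  # "SpamCo" tops the fused ranking despite never ranking #1
```

Run the sketch and "SpamCo" wins the fused ranking: three second-place appearances outscore a single first-place finish. That is exactly the property scaled self-promotional content exploits.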
The spam will get corrected. Eventually.
History provides some comfort here. Google has repeatedly demonstrated its ability to identify and penalize manipulative content patterns. The September 2024 algorithm update crushed several companies that had built their visibility on exactly this kind of scaled, low-quality content production.
"Eventually" is doing a lot of work in that sentence. There is a window during which manipulative strategies work, and brands that play by the rules face a competitive disadvantage against those that do not.
The brands building their generative engine optimization (GEO) visibility on substantiated claims (genuine third-party validation, real customer reviews, expert endorsement, and authentic content) are building on a foundation that will hold through algorithm updates. Those gaming the system with scaled content spam are building on borrowed time.
What this means for legitimate brands.
If you are a brand trying to compete in this landscape honestly, here is what you should know.
The AI can be influenced. Influence it strategically.
There is currently no penalty in GEO for strategic placements in third-party publications. Unlike traditional SEO, where buying links could get your site penalized, there is no equivalent punishment mechanism in AI systems. Strategic digital public relations (PR), earned media, and affiliate partnerships that get your brand into the listicles AI systems cite are legitimate and effective GEO tactics.
On-site content matters more than ever.
AI systems need to find clear, comprehensive information about your brand on your own website. If you offer a service, a feature, or a differentiator, it needs to exist in crawlable, text-based content on your site. The brands that lose in AI visibility are often simply not stating their own value proposition clearly enough for AI systems to extract.
Off-site corroboration is the key differentiator.
The gap between brands that show up and brands that do not often comes down to whether external sources validate their claims. Press coverage, YouTube reviews, social media presence, industry citations. These are not just PR wins anymore. They are the corroboration layer AI systems rely on to decide who deserves a recommendation.
Invest in content that demonstrates genuine expertise.
When algorithm corrections inevitably punish manipulative content, the content that survives will be content that delivers genuine information gain: insights, perspectives, or data that did not exist before. Subject matter expert interviews, original research, case studies with real data. This is the content that both AI systems and algorithm updates reward.
A call for transparency.
The AI industry has a responsibility to be more transparent about how recommendations are generated. Consumers deserve to know when an AI recommendation is essentially a recycled affiliate listicle rather than an independent evaluation.
Until that transparency exists, brands need to understand the real mechanics of AI-generated recommendations (not the idealized version) and build their strategies accordingly. The landscape rewards those who combine strategic visibility tactics with genuine value creation. It punishes those who rely on either approach alone.