The next frontier is not search engine optimization but AI dataset manipulation.
This seems far more dangerous because it is far harder to detect, even for those who run these large language models, let alone for the users who broadly trust their output.
Especially when queries concern broad social issues rather than facts and figures (e.g. "Why are human rights important?"), a polluted or manipulated system can very subtly influence the reader through its wording.
**This was originally posted on Andras Baneth's LinkedIn account.**