
We are now in a technological “Wild West”. An interview with Giuseppe Stigliano, Professor of Marketing, Business Advisor and Author
Brexit supporters claimed that the UK would save £350 million per week after leaving the EU. Disinformation fuelled fears about immigrants and influenced the referendum result. Will it become the norm for voters to make choices based on manipulation and false information? Is there a way to defend against this?
The principle of democracy is not that everyone should have the right to voice an opinion on every topic. Rather, it is the opportunity for citizens to elect representatives who will, with competence and ethics, act in their best interest. The flaw in situations like the Brexit referendum lies in allowing the general public to make decisions on complex issues without the necessary tools or knowledge to properly assess them.
Brexit, for example, was a geopolitical decision of such profound implications that it required a deep understanding of international relations, socioeconomic factors, and long-term strategic vision. These are the domains of experts, not general voters. If we allow unprepared individuals to make such monumental decisions, we are essentially asking them to navigate territory where they may lack both the expertise and the understanding of the far-reaching consequences of their choices.
It’s akin to asking someone without medical knowledge to decide whether they should be hospitalized when unwell, or asking a layperson to represent themselves in court. There are some matters where the public, no matter how well-intentioned, may not be equipped to make decisions that require deep, specialized knowledge. Instead, we need to work hard to enhance the public’s understanding and ability to assess these issues before citizens are asked to vote, whether on specific questions or to elect their representatives.
Disinformation is increasingly powered by artificial intelligence. Armies of bots and internet trolls automatically spread fake news and are highly active on social media, exerting real influence on the opinions of many people. Why is it so difficult to distinguish true information from artificially fabricated content? Do public disputes and the resulting social divisions owe a lot to the increasing use of AI?
We are currently in what can be described as a “Wild West” phase when it comes to the use of emerging technologies, especially AI. People are exploring the full potential of generative AI, often without understanding the long-term consequences or the ethical implications.
In the case of disinformation, the sheer scale at which bots, trolls, and AI-generated content can operate makes it difficult to distinguish truth from fiction. The problem isn’t just the technology itself, but how it is being used to manipulate public opinion. This manipulation is not only about spreading false information but also about amplifying it to a degree where it drowns out credible content.
However, it is important to understand that this phase won’t last. In time, we can expect regulators to step in and set boundaries for what can and can’t be done with AI. Just because technology allows something to happen doesn’t mean it is socially or economically acceptable, and legal frameworks will likely be developed to prevent the misuse of AI for malicious purposes. That said, AI can also be part of the solution. The same tools that are used to exploit information asymmetry can be leveraged to protect individuals by identifying and flagging manipulated content. We already see the development of technologies that can detect when an image, video, or voice message has been altered, and real-time fact-checking platforms are on the rise. These technologies have the potential to counteract disinformation, creating a more informed public and ensuring the accuracy of the information people use to make decisions.
In summary, while the proliferation of misinformation and disinformation in the AI era is indeed a serious risk, we shouldn’t view the current situation as the “new normal.” Nor should we see this as the beginning of a linear trajectory towards inevitable social disintegration. Rather, this is an evolving phase where both the risks and the countermeasures are developing. Misinformation and its societal consequences are something that must be controlled and regulated, but I’m optimistic that, with time, we will be able to build a more resilient and informed society, capable of navigating this new information landscape.
Giuseppe Stigliano – a recognized thought leader, sought-after advisor, and keynote speaker on Marketing, Management, Leadership, Business Transformation, and Corporate Innovation. He is an entrepreneur and manager with over two decades of international experience, having served three times as CEO of international marketing agencies and partnered with more than 300 companies globally.
With a Ph.D. in Marketing and Economics, Giuseppe has co-authored three business books with Philip Kotler: Retail 4.0, Onlife Fashion, and the most recent Redefining Retail. His books have been translated into twelve languages, reaching a total readership of over 120,000 people worldwide.
A member of the Advisory Council of HBR, a columnist at Forbes, and a LinkedIn Top Voice, Giuseppe was recognized by Thinkers50 in 2024 as one of the most inspiring global leaders whose ideas are set to make a significant impact on management thinking. Giuseppe is also an active angel investor and an Adjunct Professor of Marketing at prestigious international universities and business schools. His TEDx talk, How to Become a Marketing Superhero, has garnered over 1 million views worldwide.