
Life in the Age of AI: Who Writes the News Today?

AI in Media
Photo: AI

It’s clear that we’ve crossed the line from being mere observers to becoming active users of AI in both work and everyday life.

We no longer talk about whether AI will take over; we're already using it daily. From software that helps us resolve linguistic dilemmas, to tools that create presentations, caption videos, structure marketing budgets, and process data faster and more precisely than humans can, AI is already doing much of the work for us.

It can even write the news.

Incredible, isn’t it?

Did you know that a large portion of the content you read online is generated using AI tools? Even your favorite websites are equipped with software that helps journalists publish more efficiently, though not necessarily with higher quality.

The transformation the media industry is going through now is similar to the great shift some twenty years ago, when it seemed that “the internet would destroy print media.” Newspapers have since lost much of their circulation, and readership has moved online, yet print media still exist.

Whether AI becomes a threat or an ally to media will depend largely on us, the people working in this industry, and on readers, who ultimately decide which outlet deserves their trust and attention.

I’m often asked what I think about the role of artificial intelligence in media and business. My experience so far tells me that AI is both inevitable and immensely useful in the media industry. It’s transforming it deeply, from content creation to distribution, and at the same time changing how audiences consume information. I believe that those who embrace AI early and use it as a positive tool will gain a strong competitive advantage. Those who ignore new technologies or standards might soon face serious challenges, because right now, as you read this, your competitors are testing a new AI tool.

Media organizations were among the first to introduce AI algorithms into their workflows, always seeking ways to improve quality, quantity, and speed of publishing. That’s why we were ready to welcome ChatGPT, Midjourney, Canva, and others. What lies ahead is the question of how to ensure that AI is used ethically and responsibly, and how to prevent news from becoming trapped in the jaws of algorithms.

When I first observed early experiments with AI tools in newsrooms, there was a lot of uncertainty. The main, unspoken fear was that machines would replace journalists. Yet practice quickly proved otherwise.

Countless tools, from software for data analysis, translation, and text generation to image and video editors, have taken over repetitive and technically demanding tasks, freeing journalists to focus on what is fundamentally human: critical thinking, research, storytelling, and real interaction with people.

AI’s capacity to help journalists process information quickly represents an enormous opportunity for its application in media. Imagine AI as a “stylistic editor” that efficiently condenses news content so algorithms can recognize and promote it more easily in search results.

Examples from leading global media outlets show how AI can be used wisely. The Associated Press has been using AI for years to produce short sports and financial reports, but every story still undergoes human verification. The Guardian uses AI to analyze massive document sets and detect patterns, yet no content is published without editorial oversight.

The BBC has introduced mandatory transparency: readers must be informed if any part of a story was generated with AI assistance. Meanwhile, The New York Times explicitly prohibits using AI to create visuals that could mislead the public.

What all these examples have in common is an insistence on ethics and transparency toward readers. When audiences know that AI has been used, trust remains intact.

Still, the risks are real and cannot be ignored. AI often makes mistakes, relies on biased data, and can spread misinformation. In the wrong hands, it becomes dangerous through deepfakes and other forms of manipulation, raising serious ethical and security concerns.

Another major risk lies in copyright abuse. Today, it's enough to "ask" an algorithm to rewrite someone else's text in a different style and present it as original. Such practices undermine journalistic integrity and raise difficult questions about legal protection, content ownership, and the limits of creative transformation in the AI era. The solution lies in regulation and legislation.

It’s essential to introduce strong fact-checking standards and clear ethical guidelines at both local and national levels. Collaboration between tech companies and media organizations is also key to ensuring that AI-generated content remains accurate and reliable.

AI in Media
Photo: Pexels

Looking ahead, it’s evident that AI will continue to reshape both the media industry and the business world. AI tools will complement existing processes, allowing journalists to focus on higher-value tasks, on doing what they do best: writing the news.

Yet, the key to success lies in ethical use and constant adaptation.

Artificial intelligence becomes more “human” when used in service of people, when it saves time, simplifies processes, and enhances creativity.

AI must be used responsibly and transparently. When readers, users, or clients know that certain content has been created with AI assistance, trust won’t decline. On the contrary, it reinforces the perception that the organization embraces innovation without hiding how it works.

And perhaps the real question for all of us in Serbia is:

Will we establish clear and ethical AI standards in time, or will we wait until the trust of audiences, clients, and employees begins to erode?

Note: No artificial intelligence was used in writing this article.

Interested in working together? Let’s get in touch!