Google's news-generating AI is ready

Google's news-generating AI is ready. It probably ignores the kind of work needed to write an accurate, digestible article for the public: that is how some have commented on Genesis, Google's generative AI, which can reportedly produce newspaper articles simply by being fed the data needed to build the story.

Google has shown the tool to the New York Times and other major news organizations, including the Washington Post and the Wall Street Journal, presumably to gather feedback from industry professionals and refine its capabilities before launch. The tool is still in the testing phase: the basic idea is to give journalists a sort of virtual assistant that automates some tasks so they can focus on other work, but for now it does not seem to have convinced the big names.

Between concern and the risk of disinformation

Some of the people who attended the demonstration simply describe it as "disturbing"; for others, as mentioned at the beginning, the problem lies upstream, in the superficiality with which the tool approaches the world of journalism, effectively ignoring the work that actually goes into building an article. While this AI is not (yet) able to help journalists verify sources, it seems to presume it can structure a newspaper article regardless of the event or topic being reported.

The reality is subtler: every piece of news is different, and the range of nuances a piece can take on in reporting it is so wide and varied that, according to those who have seen Genesis at work, it is simply beyond the machine's reach today. In short, despite the progress these algorithms have made in recent years, humans still seem to have the upper hand.

For Jeff Jarvis, journalist and professor at the City University of New York, the situation is much simpler: tools like this, he says, should only be used if there is certainty that they can provide "factual information and in a reliable manner".

The main concern is that this type of AI could facilitate the spread of disinformation, something to be avoided, especially with a war ongoing and a pandemic just behind us: we know well how false information can trigger a chain of events potentially harmful to society as a whole, so the launch of such a tool will have to be considered carefully.
