Andrea Colamedici invented a philosopher, presented him as a real author and produced a book, secretly created with the help of artificial intelligence, about the manipulation of reality in the digital age.
People were deceived. Accusations followed: of betraying readers’ trust, of bad ethics, even of illegality.
But the man behind it all, Mr. Colamedici, insists it was not a trick; instead, he described it as a “philosophical experiment,” saying it helps to show how artificial intelligence is eroding, “slowly, but inevitably,” our ability to think.
Mr. Colamedici, an Italian publisher, generated, along with two artificial intelligence tools, “Hypnocracy: Trump, Musk, and the Architecture of Reality,” a buzzy text ostensibly written by Jianwei Xun, a philosopher who does not exist.
In December, Mr. Colamedici printed 70 copies of an Italian edition presented as a translation. The book quickly gained wide attention, however, covered by media outlets in Italy, France, Germany and Spain and cited publicly by prominent figures.
“Hypnocracy” describes powerful people using technology to shape perception with hypnotic narratives, lulling the public into a kind of collective trance that may deepen with reliance on artificial intelligence.
The book’s publication came at a time when schools, companies, governments and internet users around the world are wrestling with how, and whether, to use artificial intelligence tools, which tech companies and startups have made widely available. (The New York Times has filed a lawsuit against OpenAI, the maker of ChatGPT, and its partner, Microsoft, claiming copyright infringement of news content. The two companies have denied the suit’s claims.)
Nevertheless, the book also turned out to be a demonstration of its own thesis, played out on unwitting readers.
Mr. Colamedici said the book aimed to show the dangers of the “cognitive apathy” that could develop if thinking is delegated to machines and people do not cultivate discernment.
“I tried to create not just a book, but a performance, an experience,” he said.
Mr. Colamedici teaches what he calls “prompt art,” or how to ask artificial intelligence smart questions and give it effective instructions, at the European Institute of Design in Rome. He said he often sees two opposite responses to tools like ChatGPT: many students want to rely on them exclusively, while many teachers believe artificial intelligence is inherently wrong. Instead, he tries to teach users how to distinguish truth from fabrication and how to work with the tools productively.
Mr. Colamedici argued that the book is an extension of that effort. He said the artificial intelligence tools he used helped him refine ideas, while clues about the fake author, some real and some invented, planted online and in the book itself, intentionally raised red flags meant to prompt readers to ask questions.
The first chapter discusses fake authorship, for example, and the book contains obscure references to Italian culture that were unlikely to come from a young philosopher from Hong Kong, which eventually helped lead one reader to the real author, who was credited as the book’s translator.
Sabina Minardi, an editor at the Italian outlet L’Espresso, picked up on the clues, exposing Jianwei Xun as a fake earlier this month.
Mr. Colamedici then updated the fake author’s biography page and spoke with news outlets, including some that had been deceived by his work. New editions and excerpts printed this month come with postscripts about the truth.
But some who embraced the book at first now reject it, and ask whether Mr. Colamedici acted unethically or broke the European Union’s law on the use of artificial intelligence.
The French news outlet Le Figaro wrote about “l’affaire Jianwei Xun,” explaining that “the problem” with its earlier interview of the Hong Kong philosopher was that “he does not exist.”
The Spanish newspaper El País retracted a report on the book, replacing it with a note that said the book “failed to acknowledge the involvement of artificial intelligence in creating the text, violating the new European artificial intelligence law.”
Article 50 of the European Union’s A.I. Act requires disclosure if someone uses an artificial intelligence system to generate text published for the purpose of “informing the public on matters of public interest,” said Noah Feldman, a law professor at Harvard University who advises technology companies.
“That provision on its face seems to cover the book’s creator, and perhaps anyone republishing its content,” he said. “The law does not come into effect until August 2026, but it is common in the European Union for people and institutions to want to follow laws that seem morally sound even when they do not yet technically apply.”
Jonathan Zittrain, a professor of computer science at Harvard University, said he was more inclined to call Mr. Colamedici’s book “a piece of performance art, or simply marketing, that involved the use of a pen name.”
Mr. Colamedici is disappointed that some have criticized the experiment. But he plans to continue using artificial intelligence to highlight the risks it poses. “This is the moment,” he said. “We are risking our perception. Use it or lose it.”
He said he intends for Jianwei Xun, whom he describes as a collective of human and artificial intelligence, to live on; the fictional philosopher is set to teach a course on artificial intelligence next fall.