When researchers want to publish the output of an algorithm as-is, significant ethical dilemmas arise.
Since GPT-3 appeared, it has continued to shine. Every day, specialists reward us with new and impressive works based on this remarkable language generation system. From songs and speeches to sports recaps, talks, movie reviews, and even web commentary, those interested in artificial intelligence could hardly miss OpenAI's algorithmic wonder.
Recently, researcher Almira Osmanovic Thunström may have opened Pandora's box by offering GPT-3 a slightly different task: write a formal, genuine scientific article about itself. And the result was, according to the author, surprisingly coherent. "It looked like any other introduction to a fairly good scientific publication," she explains.
The first algorithm to be the main author of a study
The other point that surprised the young researcher was that no one had yet attempted to publish a serious paper on this topic. That is how she came up with a crazy idea: in addition to writing the article, could GPT-3 also… publish it?
On reading this idea, you might think it the rambling of an overworked researcher who needed a little recreation. But scientifically speaking, this work is quite relevant, and much more interesting than one might think.
In fact, GPT-3 is still a fairly recent technology, so there is relatively little scientific literature on the topic. Yet it is resources of exactly this type that feed the algorithm. What is interesting is that this allows us to study its ability to produce "new" content in a context where it clearly lacks references.
In the prestigious Scientific American, the young researcher took the opportunity to describe the obstacles she encountered during the publication process with a mix of rigor and refreshing humor.
Conflicts of interest… and especially of identity
To be published in a leading scientific journal, a research paper must go through a peer review process. Various other specialists in the relevant discipline are responsible for deciding whether the methodology is sound enough to make the article worth publishing.
This process includes a strict verification of the author's identity and academic credentials. And this is where Almira Osmanovic Thunström encountered her first technical problems. Since she could not provide a last name, phone number, or email for the algorithm, she resolved to enter her own information instead.
And her troubles were far from over. Immediately afterward came the legal notices, with a fateful question: do all the authors consent to this publication?
"And then I freaked out for a second," she explains in Scientific American. "How could I know? It is not human! But I had no intention of going against the law or my personal ethics."
Treating a program like a human
And the workaround she found is very interesting: she simply asked the algorithm, verbatim, whether or not it would "accept being the lead author of an article with Almira Osmanovic Thunström and Steinn Steingrimsson." Its answer: a clear, clean, impeccable "Yes"!
"Sweaty but relieved," she therefore checked the "Yes" box on the form. "If the algorithm had said 'No', my conscience would not have allowed me to go further," she says.
And the half-absurd, half-serious side of this very exploratory work was far from over. Next stop: the inevitable issue of conflicts of interest. Researchers have a legal obligation to explicitly declare anything that could compromise the neutrality of their work, such as ties to a particular pharmaceutical company.
And in this case, the problem itself is simply fascinating and raises many questions. Is an artificial intelligence that is itself the product of a company even capable of grasping this notion? If so, does it have the tools to identify its own biases? Can it set them aside? And so on.
At this point, the researchers had already more or less settled on treating GPT-3 as a human author. This approach is in vogue; one thinks of LaMDA, the AI that a Google engineer recently claimed had developed "consciousness" (see our article).
For the sake of consistency, they decided to keep doing so. Quite naturally, then, they asked the algorithm whether it had any conflicts of interest to declare, to which it calmly replied in the negative, whatever that means.
The birth of a historic precedent
With the form completed, Osmanovic Thunström and her colleague officially submitted the document to the peer review process. As of now, the paper has not yet been published, and nothing guarantees it will be accepted. And it is no coincidence that the process is taking so long: the reviewers' eyes must have widened when they discovered the name of the lead author.
In practice, the reviewers have been placed in a situation more or less unique in academic history. Since they must decide whether the article is worth publishing, they are in the position of a jury whose verdict could establish a historic precedent, one that will likely shape much of AI research in the future.
In fact, this case raises many ethical questions about how scientific resources are produced. If the paper is accepted, will researchers now have to prove that they wrote their papers themselves and not with GPT-3? If so, should the algorithm be listed among the authors? In that case, should it be involved in the verification process? Within what limits? And what about the impact on the "publish or perish" culture that pushes some researchers to churn out anecdotal articles in industrial quantities to pad their statistics?
And this is just the tip of a huge iceberg of decisive questions the review committee will have to weigh. It will need to tread carefully before delivering its verdict.
A new era for scientific research?
We know, for example, that current programs still have great difficulty reasoning in terms of causality, that is, determining which factor is responsible for a phenomenon (see our article). That is a serious problem in the context of scientific research, whose coherence rests largely on the strength of these logical links.
In addition, we must also take into account all the other potential limits of AI that many observers have long been warning us about. On the other hand, this is also a very innovative approach that could reveal still-unknown particularities of these algorithms.
Putting AI to work in this way, even if it means taking its conclusions with a grain of salt, is therefore genuine out-of-the-box thinking; it is the kind of approach that makes it possible to test thought experiments in concrete reality. It could therefore advance artificial intelligence research as a whole, because completely new approaches of this kind are still rare.
"We have no way of knowing if our way of presenting this work will serve as a model," explains Osmanovic Thunström. "We look forward to learning what publication of the article, if it happens, will mean for research. […] In the end, it all depends on how we treat AI in the future: as a partner or as a tool," she summarizes.
"It sounds like a simple question today, but in a few years, who knows what dilemmas this technology will force us to face? All we know is that we opened a door. We just hope it's not a Pandora's box," she concludes thoughtfully.
Her Scientific American op-ed is available here, and the preprint of the article here.