Journal of Controversial Ideas

(ISSN: 2694-5991) Open Access Journal

Controversial_Ideas 2022, 2(2), 4; doi:10.35995/jci02020004

Article
Will Moralization of Science Lead to “Better” Science?
Yves Gingras
Centre interuniversitaire de recherche sur la science et la technologie (CIRST), Université du Québec à Montréal; gingras.yves@uqam.ca
How to Cite: Gingras, Y. Will Moralization of Science Lead to “Better” Science?. Controversial Ideas 2022, 2(2), 4; doi:10.35995/jci02020004
Received: 29 May 2021 / Accepted: 5 July 2022 / Published: 31 October 2022

Open access: JOURNAL OF CONTROVERSIAL IDEAS is a peer-reviewed open-access journal.

Abstract:
In the fall of 2018, the US National Science Foundation (NSF) implemented a new policy on sexual harassment. A few months later, the National Institutes of Health (NIH) took a further step in the fight against harassment by announcing that researchers accused of harassment, but not yet found guilty, could nonetheless be excluded from the lists of potential reviewers of submitted projects. We also observe a recent tendency to call for the retraction of published peer-reviewed results on the grounds that their conclusions go against the moral convictions of some social groups, even though the results themselves have not been shown to be invalid. It is certainly legitimate to ask whether these kinds of policies and moral critiques, which directly link the practice of science to the moral behavior of scientists in the larger society, are initiating a profound transformation in the relations between science and society by adding to the usually implicit norms governing the scientific community a new form of moralization of the scientists themselves. We analyze these recent events in terms of a new process of moralization of science and ask whether these new rules of conduct may lead to better or more robust science.
Keywords:
norms of science; moralization; autonomy of science
In the fall of 2018, the US National Science Foundation (NSF) implemented a new policy on sexual harassment.1 It states, in essence, that any scientist can have their research grants withdrawn if found guilty of sexual or other forms of harassment. A few months later, another American agency responsible for distributing research grants, the National Institutes of Health (NIH), took a further step in the fight against harassment by announcing that researchers accused of harassment, though not yet found guilty, could nonetheless be excluded from the lists of potential reviewers of submitted projects. As we know, any research project, like any article submitted for publication, is first peer-reviewed to judge its scientific quality. The reports, usually anonymous, are used to decide whether or not to support the project financially or to publish the submitted paper. The tendency to use double-blind review—even triple-blind in some journals—is implicitly based on the idea that only the validity of the science, not the physical or psychological characteristics of the authors, is tested. From this point of view, the NSF and NIH policies are different. Whereas the former is not compatible with the idea of universalism, the NIH's invokes the possibility of a conflict of interest going against the “integrity of the process,” given that the authors of the submitted project are known to the reviewers, who could thus lack objectivity in their evaluation. The NIH explained that a person accused of harassment, usually a man, “could give better scores to proposals from female postdocs to avoid appearing biased, even if the science didn’t deserve that score.” As for the sources of the allegations, they could come, the agency adds, “not only from institutions conducting an investigation, but also from victims or ‘observers.’”2
No one can seriously object to the idea of sanctioning socially reprehensible behavior. However, it is certainly legitimate to ask whether these new NSF and NIH policies, particularly those of the NSF, which directly link the practice of science to the moral behavior of scientists, are initiating a profound transformation in the relations between science and society by adding to the usually implicit norms governing the scientific community a new form of moralization of the scientists themselves. As mentioned, the NSF and NIH policies have different consequences for science. Withdrawing a grant directly affects the production of valid science. In the case of reviewing, the policy simply excludes a person from a task that can be performed by someone else, as is the case when a conflict of interest is detected. In both cases, however, we have the use of a socially arbitrary criterion of “good social behavior” applied to an activity whose specific norms, as we will see, are different from those admitted in the general social sphere.
Scientists usually consider their search for objective knowledge a highly moral activity. But their notion of morality is more philosophical than social. It applies to the world of ideas, not to their actions in everyday social life. Hence, in his autobiography, Albert Einstein insisted that “the essential in the being of a man of my type lies precisely in what he thinks and how he thinks, not in what he does or suffers.”3 This separation of the social from the scientific sphere of action is also found in the mission of all science-funding agencies which, over the past half-century, have essentially focused their work on deciding who should get government money for research, a decision reached by evaluating, usually through peer review, the quality of the researcher and the originality of the research program. Similarly, journal editors aim at accepting or rejecting papers on the sole basis of internal criteria (originality, coherence, validity, etc.), even using double-blind methods—that is, erasing the names of the authors and their institutions—to diminish the possibility of personal bias in this process of evaluation. They thus make no moral inquiry to check whether the person, qua scientist, was, for example, considered racist (like the Physics Nobel Prize laureate William Shockley), anti-Semitic (like the other Physics Nobel Prize laureates Johannes Stark and Philipp Lenard), misogynistic, or what have you. Thus, it has long been implicit and generally accepted that the “republic of science” is a relatively autonomous subset of society with its own rules based on expertise.
This view of science was formalized in the 1940s by the American sociologist Robert K. Merton as the “normative structure of science.” According to Merton,4 science as a social system aimed at generating new and sound knowledge is essentially based on four institutional norms: communalism (knowledge is a public good); disinterestedness (scientists pursue truth, not strictly personal interests); organized skepticism (results must be scrutinized by other scientists before being accepted); and universalism (scrutiny of scientific results should not be influenced by the particular characteristics—religion, race, gender, etc.—of the scientists). This last norm has generally been taken to mean that only objective arguments can be used to evaluate research projects and scientific results. The point here is not that, as humans, scientists never break these rules, but that the rules are taken as implicit regulatory principles within the scientific community and that sanctions exist for when they are violated.
These standards of behavior are generally taken for granted by researchers and only become visible in situations where they are violated. Let us think here of fraud, untimely announcements of discoveries, plagiarism, etc. People thus found “guilty” are denounced and morally sanctioned by the scientific community: their papers may be retracted and they may even lose their job.
In order to underscore how the new moralization of science implicit in the recent NSF and NIH policies is indeed original and transformative, let us recall a few striking examples showing that, though such attempts at moralizing science by linking grants, prizes, or publications to the “good” social and moral behavior of scientists did exist in the past, they were considered inconsistent with the norm that Merton called “universalism.” In hindsight, those examples can be read as failed attempts at the moralization of science. These few examples, to which others could probably be added, also show the volatility of the moral norms now invoked to condemn scientists, and they all suggest that their application could hardly lead to “better” or more valid science.

The Morality of Marie Curie

The controversy involving Marie Curie at the end of 1911, when she had just been awarded the Nobel Prize in Chemistry, shows the dangers of wanting to impose self-proclaimed “good” behavior on scientists in matters related to their private lives.
At precisely the time when the Nobel committee announced the 1911 Nobel Prize in Chemistry, French gossip newspapers had revealed that Marie Curie was having a secret affair with a married man, the well-known physicist Paul Langevin. Scandalized, and speaking in the name of the Nobel Committee, the chemist Svante Arrhenius wrote Marie Curie a letter (dated December 1) asking her not to come to the official ceremony to accept that prestigious award until the accusations against her had been proven unfounded. Surprised, not to say stunned by such a demand, Marie Curie immediately replied (on December 5) that she would indeed be present at the ceremony since “the prize was awarded for [her] discovery of polonium and radium.” Above all, she recalled that “there is no relationship between [her] scientific work and the facts of [her] private life.” She also spontaneously reaffirmed a fundamental standard of science—universalism—by declaring that she “cannot accept the principle that the appreciation of the scientific value of [her] work could be influenced by libel and slander concerning [her] private life.” She concluded by saying that she was convinced that many colleagues agreed with her attitude and confirmed her attendance at the ceremony to receive her medal.

Does the Inventor of Chemical Warfare Deserve a Nobel Prize?

Another very interesting case illustrating the difference between the moral convictions of individual citizens and the institutional norms of science is the public reaction to the decision of the Nobel Committee to award (in 1919) the 1918 Chemistry Prize to Fritz Haber “for the synthesis of ammonia from its elements.” This work played an important role in the manufacture of artificial fertilizers and contributed to the growth of agricultural productivity. Obviously, the Nobel Committee’s decision ignored the well-known fact that Haber, a German scientist, had been active during the war in the creation and use of the first poison gas, ushering in the era of chemical warfare in 1915. Once made public, the decision of the Nobel Foundation immediately aroused indignation, especially in France and Belgium, where thousands of their soldiers had been killed or crippled by chlorine and mustard gas. The New York Times suggested, ironically, that in its wisdom the Nobel Committee should have given its literature prize “to the man who wrote General Ludendorff’s daily communiqués.” Some scientists even declined to attend the ceremony. But the Committee considered that science had to be evaluated only on its own merit and not on the basis of the personal qualities of the scientists who were honored. To recall that principle, the president of the Nobel Foundation opened the ceremony by insisting on the internationality of science. He stated that the Nobel prizes, in science as well as literature, would contribute to “burst the cloud of hatred between people.” Haber himself was surprised to be honored and wrote that it was “a deed of greatness on the part of the Swedish academy to elect three Germans” and that “it may lead to renewed international understanding.”5 The other two scientists he was referring to were Max Planck and Johannes Stark (later found to be a convinced Nazi), winners of the Physics Nobel Prize for 1918 and 1919 respectively.

Should a Murderer Have the Right to Publish Scientific Papers?

As a final, but striking, example of the fact that the norms of the social system of science are closely linked to the search for truth and do not take into consideration the personal and more or less moral character of scientists as persons, let us briefly recall the strange case of the engineering professor Valery Fabrikant, who in 1992 killed four of his colleagues and injured a secretary on the Concordia University campus in Montreal. Serving a life sentence, he nevertheless continued his theoretical research and published many articles in recognized peer-reviewed academic journals, his institutional address indicating his prison cell.
The moral controversy over this case arose when an article submitted in September 1994, and published in January 1996 in the International Journal of Solids and Structures, launched a debate on the ethics of scientific publishing. Upon learning about the existence of this article devoted to the obscure subject of the mathematical analysis of cracks in concrete, the rector of Concordia University complained to the editor of the journal. He considered that, having lost his freedom, Fabrikant did not have the right to publish scientific articles. Troubled by further pressure from the family of one of Fabrikant’s victims, the editor admitted to being “in a quandary.”6 Many of his colleagues had advised him to publish the article because the results were valid. He ultimately refused to publish a second article (later published in another journal) but admitted his decision was arbitrary.
A professor of research ethics had also opposed this censorship, arguing that individual crimes are punished by society and should not influence judgments about the validity of scientific results. A law professor added that “if the content of the article is sound, it should be published,” as “it would be inconsistent with the goals of a university to attempt to suppress knowledge.”7 Interestingly, even a former colleague of Fabrikant admitted to being ambivalent about the situation and said that, while he found it reprehensible that Fabrikant could continue to publish in prison, denying anyone the opportunity to publish valid research results went against a belief deeply rooted in the academic community.
After this incident, the journal that had refused an article by Fabrikant for reasons external to the “republic” of the scientific field finally published another paper by him in 2004. Since then, Fabrikant, while still serving a life sentence, has continued to write scientific papers and, according to bibliometric data from the Web of Science, published nearly sixty articles between 1996 and 2021, scattered across nearly twenty different peer-reviewed journals. And though, from 2003 to 2020, the author’s address identified him as “Prisoner 167932 D,” this has not precluded these papers from being cited over time. His career thus illustrates in a rather extreme manner how norms of conduct within science differ from the usual moral standards of the larger society.

The End of the Republic of Science?

In various ways, the cases described above illustrate how the institutionalized norm of “universalism” prohibits the consideration of the personal, social, and moral characteristics of scientists in assessing the validity and quality of their scientific work. They also show the extent to which personal moral attitudes may differ from the institutionalized values of science, a mismatch that, as we have seen, creates some ambivalence in the minds of scientists.
The recent process of moralization of scientists—and indirectly of science itself as a social endeavor—certainly goes against the ideal of autonomy of the republic of science promoted after World War II and theorized, for instance, by Michael Polanyi. A well-known physical chemist and philosopher of science, Polanyi was a strong proponent of the autonomy of science and opposed all ideological and political influence on it, as well as the idea of central planning of science proposed by his colleague John D. Bernal.8 The emphasis on the autonomy of science has even served as an argument to the effect that sanctioning a scientist for reasons that have nothing to do with the norms of science is, in fact, equivalent to a double punishment: one by the social institutions responsible for civil and criminal law, and the other by the scientific community. As we have seen, the rejection of Fabrikant’s scientific publications on the basis of his criminal conviction was perceived at the time as applying to the scientific community rules that were not considered relevant in this relatively autonomous space focused solely on the validity of contributions to science.
Now, that very idea of a republic of science defining its norms in a relatively autonomous manner from the larger society in order to facilitate the search for truth seems to be giving way to a conception according to which, to produce “good” science, one should also be a “good person” from a “moral” point of view, though the precise content of this new morality is hardly specified. The internal norms and values of science adapted to its specific purpose (the advancement of knowledge) would seem to be no longer considered sufficient to produce valid knowledge. This trend also suggests that validity is no longer sufficient to define legitimate knowledge and that it should, in addition, be consistent with (and be judged by) moral standards defined by subgroups of civil society. This trend is also visible in the imposition of so-called DEI (diversity, equity, and inclusion) language in more and more NSF grant abstracts, as well as—in some organizations—in new evaluation criteria that dictate that research projects and student fellowships should be justified on the basis of the United Nations’ 17 Sustainable Development Goals (SDGs).9 But as the previous examples illustrate, it is not clear how these new norms will contribute to producing better science, since they have not been shown to have any real connection with a specific scientific methodology. Behaviors that violate the moral standards of some social groups do not automatically affect the validity of the results obtained by a scientist, a fact that seems to have become problematic at least for some moral entrepreneurs. The confusion between “is” and “ought,” that is, between what is in fact the case and what one would want to be the case, is also at the core of recent pressures to retract papers whose conclusions go against the moral beliefs of some social groups.10

Scientists Debunked and Others Rehabilitated

To fully understand the complexity of the question of the new moralization of science that we have observed in recent years—especially in the United States—it is also necessary to clearly distinguish between research activity per se and the social positions that scientists may occupy and for which they must meet other kinds of criteria. For example, it is certainly legitimate to ask that, as a representative of or spokesperson for an institution, a person have moral qualities publicly perceived as consistent with the image that the organization wishes to project. It is obvious that having opinions considered incompatible with the image and mission that an institution sets for itself is a sufficient reason to terminate any official association with that person. Since such positions have a symbolic character, the person is, in fact, chosen primarily for his or her prestige and credibility (forms of symbolic capital), which simply vanish in the event of a public controversy. This explains why biologist and Nobel laureate James Watson recently lost his honorary titles from the Cold Spring Harbor Laboratory.11 This institution was of course “proud” to be associated with a Nobel Prize and even gave Watson’s name to a laboratory. The situation changed dramatically when he became a burden after publicly expressing comments generally considered racist, with which no academic institution wants to be associated in any manner.
Moralization can also have retrospective effects and thus affect dead scientists who have been recognized for their contributions to science. As is already the case for former politicians and historical figures, scientists can now see their past scrutinized and their behavior judged according to new moral standards defined by moral entrepreneurs who pressure institutions to erase from public space the names of those they now judge somehow “immoral,” their mere symbolic presence on a painting, a monument, or simply a street name being considered “offensive.”
Hence, in 2015, a Canadian city decided to change the name of the street “Alexis Carrel,” named after the winner of the 1912 Nobel Prize in Physiology or Medicine, to that of a now more acceptable scientific figure (Marie Curie) after some moral entrepreneurs had discovered that the author of the 1935 best-selling essay Man, the Unknown was in favor of eugenics, a fact already well known to historians of science. Those critics seemed to ignore the historical fact that eugenics was a very popular view among scientists at that time and that it is thus quite anachronistic to expect that most scientists would then have been opposed to that belief.
The same Canadian city also erased the name of another Nobel laureate, physicist Philipp Lenard, after discovering (another fact well known to historians of science) that he had been an active Nazi during World War II. They replaced him with Albert Einstein.12 Let us note in passing a possible irony in the choice of these two figures. While Marie Curie was in 1911 considered “immoral” because of her romantic relationship with a married man, today she has rather become a symbol of courage and independence for most women. As for Einstein, his possibly “immoral” behavior has recently begun to be scrutinized by some who think (wrongly, in fact) that he appropriated ideas from his wife Mileva. Some self-proclaimed judges could thus soon criticize the city for having chosen a person who had many mistresses, took little care of his children, seemed rather xenophobic—even outright racist according to some13—and was not always kind to his wife.
It is important to analyze moralization symmetrically and note that it works both ways: it can also have the effect of rehabilitating scientists and giving new public visibility to figures who had remained unknown to the general public despite their important (and generally recognized) contributions to science. The most spectacular case is probably that of the mathematician Alan Turing, whose current public image certainly owes much to the action of moral entrepreneurs. Although he has always been recognized by scientists for his fundamental contributions to mathematics and computer science, he only became a prominent public figure after it had been pointed out that he was homosexual and that, for this reason, he had been convicted in 1952 of “gross indecency” and forced to undergo chemical hormonal treatment. This conviction and harsh treatment may even have contributed to his suicide in 1954. Half a century later, a petition forced the British Prime Minister to apologize in 2009 on behalf of the British government for the measures taken by the State against Turing. A second petition was then filed asking for a full pardon, which was eventually granted by the Queen herself in December 2013.14

Will “Better” Persons Produce “Better” Science?

While it is certainly legitimate to question, on an ethical or ideological basis, the declarations and acts of scientists, the weight which tends to be given to these kinds of denunciations could go against the inherent logic of the production of knowledge.
By deciding that the social behavior of scientists will now affect their chances of continuing to do science—by obtaining research grants or evaluating projects and, one day perhaps, even publishing papers—the NSF and the NIH, as well as other government granting bodies, are extending their mission well beyond their traditional role of gatekeepers, that is to say, guardians of the quality of scientific production. By explicitly opening the frontiers of the scientific field to give legitimacy to the claims of various pressure groups putting forward their own conception of moral purity, these institutions may be entering slippery terrain. While being funded by the NSF or the NIH is seen as a sign of scientific excellence, it seems that one now also has to be perceived as a good moral agent to even get a grant. The obligation to include a DEI statement in grant applications testifies to the emergence of a new form of loyalty oath, reminiscent of and analogous—despite its different content and aims—to the loyalty oaths imposed on American university professors in 1950, at the dawn of the McCarthy era.15 By using their monetary power, these organizations are thus imposing on universities and academic researchers their (temporary) managers’ conception of what is supposed to be a “good life.” More importantly, one may even consider these new rules as extending well beyond these agencies’ explicit mandate to promote the production of valid scientific results.
According to psychologist Paul Rozin, “One factor that seems to encourage ‘success’ [of a moralization campaign] is the association of a stigmatized or marginal group with the activity in question.”16 This assumption is consistent with the current situation, as the focus on harassment (sexual or psychological), as well as on the ill-defined notion of DEI, more often affects women and stigmatized or discriminated-against groups than dominant ones. This situation probably facilitates the acceptance by many scientists of these new moral standards imposed on scientific organizations by self-proclaimed moral entrepreneurs. Many researchers may, indeed, feel guilty of being “privileged” and be tempted to give in to the demands of groups who claim to speak on behalf of all minorities. They can thus easily clear their conscience and continue their work. For research managers, it may also be a question of buying peace to calm down active minorities inspired by a “culture of victimization.”17 And the fact that these policies, however audacious on the part of the NIH, were immediately denounced by some groups as insufficient,18 confirms that the pressure will not abate until the supposed members of the “dominant” group have completely yielded to the demands of moral entrepreneurs, who do not see why science should have any special autonomy to ensure the progress of reason.19
The activity of moral entrepreneurs who try to impose their particular conception of the “good life” on all social activities constitutes, in our opinion, a form of ideological regression that goes against the relative—and always precarious—autonomy of all cultural fields, an autonomy hard won over time against all forms of censorship.
As the road to hell is paved with good intentions, only time will tell whether the current tendency to impose the values of self-proclaimed moral entrepreneurs on all scientists and other creators (artists, writers, etc.) will really contribute to the production of “better” science, better novels, and better movies through the formation of “better” persons. The history of the relationships between the arts, the sciences, and changing moral values and ideologies unfortunately suggests that this is unlikely.

Acknowledgments

The author thanks the two anonymous reviewers for their comments. This paper develops and updates arguments first presented in a French essay published in Savoir/Agir, no. 54 (2020/4): 109–17.
1
News Release 18-082. (September 19, 2018). “NSF announces new measures to protect research community from harassment.” https://www.nsf.gov/news/news_summ.jsp?cntn_id=296610.
2
Kaiser, Jocelyn. (March 27, 2019). “NIH may bar peer reviewers accused of sexual harassment,” Science. https://www.sciencemag.org/news/2019/03/nih-may-bar-peer-reviewers-accused-sexual-harassment.
3
Einstein, Albert. (1949). “Autobiographical notes,” in Paul Arthur Schilpp (ed.), Albert Einstein: Philosopher-Scientist. La Salle, IL: Open Court, vol. 1, p. 33.
4
Merton, Robert K. (1973). The Sociology of Science. University of Chicago Press.
5
Charles, Daniel. (2005). Master Mind: The Rise and Fall of Fritz Haber, the Nobel Laureate Who Launched the Age of Chemical Warfare. HarperCollins, p. 196.
6
Spurgeon, David. (June 6, 1996). “Paper from jailed professor stirs debate over publication.” Nature 381: 458.
7
Ibid.
8
Polanyi, Michael. (1962). “The republic of science: Its political and economic theory,” Minerva 1: 54–73. For details, see Nye, Mary Jo. (2011). Michael Polanyi and His Generation: Origins of the Social Construction of Science. University of Chicago Press.
9
For an analysis of the rise of DEI language in NSF grants, see Rasmussen, Leif. (November 16, 2021). “Increasing politicization and homogeneity in scientific funding: An analysis of NSF grants, 1990–2020,” Center for the Study of Partisanship and Ideology, Report No. 4. For details of the UN’s SDGs, see https://sdgs.un.org/goals.
10
For an analysis of the retraction of papers on moral grounds, see Gingras, Yves. (2022). “Towards a moralization of bibliometrics? A response to Kyle Siler.” Quantitative Science Studies 3(1): 315–18. https://doi.org/10.1162/qss_c_00178.
11
BBC. (January 13, 2019). “James Watson: Scientist loses titles after claims over race.” https://www.bbc.com/news/world-us-canada-46856779.
12
13
Flood, Alison. (June 12, 2018). “Einstein’s travel diaries reveal ‘shocking’ xenophobia.” The Guardian. https://www.theguardian.com/books/2018/jun/12/einsteins-travel-diaries-reveal-shocking-xenophobia.
14
CBC News. (December 23, 2013). “Alan Turing granted royal pardon for gay sex conviction.” https://www.cbc.ca/news/world/alan-turing-granted-royal-pardon-for-gay-sex-conviction-1.2474916.
15
Stewart, George R. (1950). The Year of the Oath: The Fight for Academic Freedom at the University of California. Doubleday.
16
Rozin, Paul. (May 1999). “The process of moralization.” Psychological Science, 10(3): 218–21.
17
Campbell, Bradley and Jason Manning. (2018). The Rise of Victimhood Culture. Palgrave Macmillan.
18
Subbaraman, Nidhi. (June 25, 2020). “NIH’s new sexual-harassment rules are still too weak, say critics.” Nature. https://www.nature.com/articles/d41586-020-01921-5.
19
Bourdieu, Pierre. (1975). “The specificity of the scientific field and the social conditions for the progress of reason.” Social Science Information 14(6): 19–47.