A Poorer Practice: The Consequences of Unethical Research

By Ethan Sim

Ethics are ubiquitous normative statements which delimit societally acceptable behaviour, and thereby advise individual and collective action (Vanclay, Baines & Taylor, 2013). As scientific research primarily aims to benefit society (Rull, 2014), ethical standards, which permit discrimination between beneficial (“right”) and detrimental (“wrong”) research practices, are necessary. However, unethical research, which flouts these standards, can arise in various ways. Such research fails to enhance human knowledge accurately and in a societally acceptable manner, and thereby erodes public trust in science. Consequently, many measures seek to discourage unethical research, although their effectiveness remains controversial. For such efforts to truly succeed, scientists must view them as aids to their research rather than obstacles to be circumvented. 

At a basic level, ethical standards regulate both the conduct and publication of scientific research (Sivasubramaniam et al., 2021). By ensuring researchers adhere to principles such as empiricism and objectivity throughout the research process, ethical standards allow science to accurately enhance human knowledge (Resnik, 2020). The scientific method, which encapsulates the conduct of scientific research (Hanne & Hepburn, 2020), is predicated on empiricism: faithfulness to evidence (Reiss & Sprenger, 2020). To abide by this standard, researchers must report their observations truthfully and completely (Steneck, 2007); however, a research climate which overwhelmingly favours significant results incentivises researchers to engage in unethical practices (Mlinarić, Horvat & Smolčić, 2017), with data fabrication an extreme example (Resnik, 2014). Although traditionally thought to be rare due to its incontrovertibly unethical nature (Koshland, 1987), a meta-analysis by Fanelli (2009) found that almost 2% of scientists across various disciplines admitted to fabricating data, challenging this perception. Apart from outright fabrication, unethical research may involve questionable research practices (QRPs), which exploit ethical ambiguities in research methodology (Artino, Driessen & Maggio, 2019); researchers remain faithful to evidence, but only when the latter is desirable in some way. Such QRPs include p-hacking, the repetition of analyses with the intent of obtaining a significant result for publication (Head et al., 2015), and sample manipulation, where sample sizes are arbitrarily altered to facilitate the attainment of a significant result (Suter, 2020). Because QRPs are ethically “greyer” than outright fabrication, and therefore more defensible, their incidence is more frequent (John, Loewenstein & Prelec, 2012); 42% of ecology researchers admitted to p-hacking, while 4.5% admitted to fabricating data (Fraser et al., 2018). 
Following the completion of the study, papers are subject to the peer-review process, which is predicated on objectivity: freedom from bias (Reiss & Sprenger, 2020). To abide by this standard, peer reviewers must assess a paper purely on its scientific merit, without knowledge of the author’s personal or professional identity (Horbach & Halffman, 2018). However, the nexus between career advancement and research productivity incentivises unethical researchers to subvert this process (Maggio et al., 2019); they may deliberately identify themselves within their manuscripts (Sivasubramaniam et al., 2021), or suggest reviewers more likely to recommend publication (Liang, 2018). Although quantitative evidence on the incidence of peer-review subversion is lacking, the greater tendency for author-suggested reviewers to recommend publication has been documented by numerous studies (Fox et al., 2017). Thus, while ethical standards preserve scientific accuracy in theory, the environment in which research is conducted may undermine their effect in practice. 

Beyond ensuring simple adherence to truth, ethical standards also regulate the relationship between researchers and their research subjects (Ferdowsian et al., 2019). By ensuring that researchers respect research subject autonomy and wellbeing, ethics allow science to enhance human knowledge in a manner congruent with societal values (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (NCPHS), 1978). Respect for autonomy arises from a fundamental moral principle: that all individuals have a right to personal liberty (United Nations, 1948). This is demonstrated by the process of obtaining informed consent: the full disclosure of all relevant aspects of the study to prospective participants, who have the freedom to refuse (Nijhawan et al., 2013). However, to increase participant recruitment, and therefore data for analysis (Kapp, 2006), unethical researchers exploit the information asymmetry between researcher and subject to mislead the latter. For instance, a novel therapeutic regimen may be disguised as standard clinical practice (Sade, 2003), or information pertaining to possible side-effects may be withheld (Cyranoski, 2005). 

Unfortunately, while case studies of informed consent violations are well-known, such violations only come to light following extensive public scrutiny, frustrating attempts to quantify the problem (Weyzig & Schipper, 2008). Respect for wellbeing likewise arises from a fundamental moral principle: the universal right to life (United Nations, 1948). In line with this principle, researchers are expected to “maximise possible benefits and minimise possible harms” (NCPHS, 1978: p.6) to participants. Yet, in practice, conflicts of interest – the intrusion of secondary influences which diverge from the primary aim of knowledge enhancement – may cause researchers to breach this ethical standard (Weinbaum et al., 2019). For example, corporate financial interests may influence study design to the detriment of research participants (Chopra, 2003), with a striking example being the propensity of industry-funded clinical trials – which constitute the “vast majority” of trials (Blazynski & Runkel, 2019: p.8) – to compare novel therapy with placebos, rather than available (and potentially more effective) treatment, potentially causing avoidable harm to the control group (Djulbegovic et al., 2000). Unfortunately, while qualitative evidence is plentiful, the subjectivity of risk assessment (Bernabe et al., 2012) hinders efforts to quantify these unethical practices. Thus, although ethical standards ensure research remains societally acceptable, this role may be frustrated by personal and corporate interests. 

Because ethical standards undergird accurate and societally acceptable science, their violation necessarily generates incorrect or improper research, which lowers public trust in science. Although public trust may seem intangible, it begets the continued provision of public, private, and personal resources for scientific research, and thereby drives scientific progress (Resnik, 2011). An “implicit contract” (Shrader-Frechette, 1994: p.24) between society and scientists therefore exists: society supports scientific research to the extent that it can trust scientists to use its resources in an appropriate manner for human benefit. By breaching this contract, unethical research taints public perception of science in general, and harms the broader scientific enterprise (Shrader-Frechette, 1994). For instance, QRPs are thought to be a leading cause of the replicability crisis – the failure to replicate key studies in numerous scientific fields – simply because QRPs generate incorrect conclusions (Romero, 2019). The effect of these ethical violations on public trust was detrimental: individuals informed about QRPs and the replicability crisis reported lower levels of trust in science (Anvari & Lakens, 2018). Similarly, the failure of investigators to obtain adequate informed consent from members of the Havasupai Indian tribe prior to blood sample collection (Mello & Wolf, 2010) led to a broader distrust of medical research among the tribe (Van Assche, Gutwirth & Sterckx, 2013), and a refusal to participate in further studies (Sterling, 2011). Yet, while the erosive impact of unethical research on public trust in science is indubitable, its importance should not be overstated: although 85% of Americans are sceptical that medical researchers disclose conflicts of interest most of the time, 86% are confident that scientists act in the public interest (Funk et al., 2019). 
This seeming contradiction may arise from public perception of unethical researchers as “a few bad apples” (LaFollette, 2000); this deflects distrust onto these individuals and shields the wider scientific architecture (Chen, 2014). Thus, although unethical research undermines public trust in science, its effect may be ameliorated by public opinion. 

Due to the harmful effect of unethical research on public opinion, a variety of measures aim to prevent its occurrence. Foremost among them is the enshrinement of ethical standards in international agreements, such as the Helsinki Declaration (World Medical Association, 2013), and in national policy documents, such as the Belmont Report (NCPHS, 1978). By translating ethics into expectations for researcher conduct, these documents reduce ethical ambiguity (Vollmer & Howard, 2010) and inform laws which govern ethical conduct in research, such as the United States’ Common Rule, which predicates public funding on approval by an institutional review board (IRB), a body empowered to evaluate the ethicality of research involving human subjects (Meyer, 2020). IRBs themselves constitute the second tier of ethical regulation – they ensure that measures to obtain informed consent are sufficient and that the benefits of such research outweigh the harms, thereby preventing potentially unethical research from being performed (Shaul, 2002). Following study completion, publication processes impose a final barrier to unethical research; the detection of incorrect or improper research practice by peer reviewers often results in editors refusing to publish such work, preventing unethical researchers from furthering their careers (Angelski et al., 2012). Although seemingly comprehensive, the effectiveness of these measures remains controversial: because some forms of unethical research are still factually correct, and therefore beneficial to human understanding, some editors argue that publication, accompanied by an editorial critiquing the work’s unethical aspects, will invite public scrutiny and prevent similar studies from being conducted (Sade, 2003); however, critics assert that such editorials are rarely read, and are therefore a poor deterrent (Hunter, 2012). 
Furthermore, the extent to which researchers internalise the ethical standards these measures codify – and thus, whether these measures truly discourage, rather than merely prohibit, unethical research – is also debatable (Carlson, Boyd & Webb, 2004); some researchers view IRBs and their ethical frameworks as “intrusive, invasive [and] frustrating”, especially when paperwork is burdensome and judgements are opaque (Brown, Spiro & Quinton, 2020), although other studies suggest that these views are uncommon (Labude et al., 2020), and that IRB criticism mainly stems from procedural, rather than conceptual, grounds (Page & Nyeboer, 2017). A potential solution may therefore be to improve IRB efficiency and credibility; this could include greater administrative resource allocation to IRBs (Page & Nyeboer, 2017), or standardised IRB auditing and accreditation programs (Coleman & Bouësseau, 2008). Thus, while a hierarchy of measures exists to prohibit unethical research, they will only be truly effective when researchers identify with their underlying ethics, and perceive them as supporting, rather than obstructing, their endeavours. 

While unethical research may take numerous forms, such research ultimately erodes public trust in science. To preserve this trust, an array of measures seeks to deter unethical research, although their effectiveness, and how they can be improved, remain subjects of debate. Although the necessity of a rigorous ethical framework for scientific research is generally accepted (Brown, Spiro & Quinton, 2020), increasingly innovative efforts to circumvent these measures will persist if an unfavourable research climate discourages researchers from adhering to their underlying moral compasses. Any permanent solution to unethical research must therefore restore the incentive to be ethical, whether via procedure or policy. Only then will the integrity of scientific practice be preserved.

References: 

Angelski, C., Fernandez, C., Weijer, C. & Gao, J. (2012) The publication of ethically uncertain research: attitudes and practices of journal editors. BMC Medical Ethics. 13, 4. Available from: https://doi.org/10.1186/1472-6939-13-4 [Accessed 22nd May 2021] 

Anvari, F. & Lakens, D. (2018) The replicability crisis and public trust in psychological science. Comprehensive Results in Social Psychology. 3 (3), 266–286. Available from: https://doi.org/10.1080/23743603.2019.1684822 [Accessed 22nd May 2021] 

Artino, A., Driessen, E. & Maggio, L. (2019) Ethical Shades of Gray: International Frequency of Scientific Misconduct and Questionable Research Practices in Health Professions Education. Academic Medicine. 94 (1), 76–84. Available from: https://doi.org/10.1097/ACM.0000000000002412 [Accessed 20th May 2021] 

Bernabe, R., van Thiel, G., Raaijmakers, J. & van Delden, J. (2012) The risk-benefit task of research ethics committees: An evaluation of current approaches and the need to incorporate decision studies methods. BMC Medical Ethics. 13, 6. Available from: https://doi.org/10.1186/1472-6939-13-6 [Accessed 21st May 2021] 

Blazynski, C. & Runkel, L. (2019) 2018 Completed Trials: State of Industry-Sponsored Clinical Development. Informa Pharma. Available from: https://pharmaintelligence.informa.com/~/media/informa-shop-window/pharma/2019/files/whitepapers/jn2403-clinical-trials-whitepaper-update_lr.pdf [Accessed 21st May 2021]  

Brown, C., Spiro, J. & Quinton, S. (2020) The role of research ethics committees: Friend or foe in educational research? An exploratory study. British Educational Research Journal. 46 (4), 747–769. Available from: https://doi.org/10.1002/berj.3654 [Accessed 22nd May 2021] 

Carlson, R., Boyd, K. & Webb, D. (2004) The revision of the Declaration of Helsinki: past, present and future. British Journal of Clinical Pharmacology. 57 (6), 695–713. Available from: https://doi.org/10.1111/j.1365-2125.2004.02103.x [Accessed 22nd May 2021] 

Chen, J. (2014) Research Misconduct Policy in Biomedicine: Beyond the Bad Apple Approach. Yale Journal of Biology and Medicine. 87 (2), 224–225. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4031800/ [Accessed 22nd May 2021] 

Chopra, S. (2003) Industry Funding of Clinical Trials: Benefit or Bias? JAMA. 290 (1), 113-114. Available from: https://doi.org/10.1001/jama.290.1.113 [Accessed 21st May 2021] 

Coleman, C. & Bouësseau, M. (2008) How do we know that research ethics committees are really working? The neglected role of outcomes assessment in research ethics review. BMC Medical Ethics. 9, 6. Available from: https://doi.org/10.1186/1472-6939-9-6 [Accessed 22nd May 2021] 

Cyranoski, D. (2005) Consenting adults? Not necessarily… Nature. 435, 138. Available from: https://doi.org/10.1038/435138a [Accessed 21st May 2021] 

Djulbegovic, B., Lacevic, M., Cantor, A., Fields, K., Bennett, C., Adams, J., Kuderer, N. & Lyman, G. (2000) The uncertainty principle and industry-sponsored research. The Lancet. 356 (9230), 635–638. Available from: https://doi.org/10.1016/S0140-6736(00)02605-2 [Accessed 21st May 2021] 

Fanelli, D. (2009) How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLOS ONE. 4 (5), e5738. Available from: https://doi.org/10.1371/journal.pone.0005738 [Accessed 20th May 2021] 

Ferdowsian, H., Johnson, L., Johnson, J., Fenton, A., Shriver, A. & Gluck, J. (2019) A Belmont Report for Animals? Cambridge Quarterly of Healthcare Ethics. 29 (1), 19–37. Available from: https://doi.org/10.1017/S0963180119000732 [Accessed 21st May 2021] 

Fox, C., Burns, C., Muncy, A. & Meyer, J. (2017) Author-suggested reviewers: gender differences and influences on the peer review process at an ecology journal. Functional Ecology. 31 (1), 270–280. Available from: https://doi.org/10.1111/1365-2435.12665 [Accessed 20th May 2021] 

Fraser, H., Parker, T., Nakagawa, S., Barnett, A. & Fidler, F. (2018) Questionable research practices in ecology and evolution. PLOS ONE. 13 (7), e0200303. Available from: https://doi.org/10.1371/journal.pone.0200303 [Accessed 20th May 2021] 

Funk, C., Hefferon, M., Kennedy, B. & Johnson, C. (2019) Trust and Mistrust in Americans’ Views of Scientific Experts. Pew Research Center. Available from: https://www.pewresearch.org/science/wp-content/uploads/sites/16/2019/08/PS_08.02.19_trust.in_.scientists_FULLREPORT-1.pdf [Accessed 22nd May 2021] 

Hanne, A. & Hepburn, B. (Winter 2020 Edition) Scientific Method. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Available from: https://plato.stanford.edu/archives/win2020/entries/scientific-method/ [Accessed 20th May 2021] 

Head, M., Holman, L., Lanfear, R., Kahn, A., Jennions, M. (2015) The Extent and Consequences of P-Hacking in Science. PLOS Biology. 13 (3), e1002106. Available from: https://doi.org/10.1371/journal.pbio.1002106 [Accessed 20th May 2021] 

Horbach, S. & Halffman, W. (2018) The changing forms and expectations of peer review. Research Integrity and Peer Review. 3, 8. Available from: https://doi.org/10.1186/s41073-018-0051-5 [Accessed 20th May 2021] 

Hunter, D. (2012) Editorial: The publication of unethical research. Research Ethics. 8 (2), 67–70. Available from: https://doi.org/10.1177/1747016112445959 [Accessed 22nd May 2021] 

John, L., Loewenstein, G. & Prelec, D. (2012) Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science. 23 (5), 524-532. Available from: https://www.cmu.edu/dietrich/sds/docs/loewenstein/MeasPrevalQuestTruthTelling.pdf [Accessed 20th May 2021] 

Kapp, M. (2006) Ethical and legal issues in research involving human subjects: do you want a piece of me? Journal of Clinical Pathology. 59 (4), 335–339. Available from: https://doi.org/10.1136/jcp.2005.030957 [Accessed 21st May 2021] 

Koshland, D. (1987) Fraud in science. Science. 235 (4785), 141. Available from: https://doi.org/10.1126/science.3798097 [Accessed 20th May 2021] 

Labude, M., Shen, L., Zhu, Y., Schaefer, G., Ong, C. & Xafis, V. (2020) Perspectives of Singaporean biomedical researchers and research support staff on actual and ideal IRB review functions and characteristics: A quantitative analysis. PLOS ONE. 15, e0241783. Available from: https://doi.org/10.1371/journal.pone.0241783 [Accessed 22nd May 2021] 

LaFollette, M. (2000) The Evolution of the “Scientific Misconduct” Issue: An Historical Overview (44535C). Proceedings of the Society for Experimental Biology and Medicine. 224 (4), 211–215. Available from: https://doi.org/10.1177/153537020022400405 [Accessed 22nd May 2021] 

Liang, Y. (2018) Should authors suggest reviewers? A comparative study of the performance of author-suggested and editor-selected reviewers at a biological journal. Learned Publishing. 31 (3), 216–221. Available from: https://doi.org/10.1002/leap.1166 [Accessed 20th May 2021] 

Maggio, L., Dong, T., Driessen, E. & Artino, A. (2019) Factors associated with scientific misconduct and questionable research practices in health professions education. Perspectives on Medical Education. 8, 74–82. Available from: https://doi.org/10.1007/s40037-019-0501-x [Accessed 20th May 2021] 

Mello, M. & Wolf, L. (2010) The Havasupai Indian Tribe Case — Lessons for Research Involving Stored Biologic Samples. New England Journal of Medicine. 363, 204–207. Available from: https://doi.org/10.1056/NEJMp1005203 [Accessed 22nd May 2021] 

Meyer, M. (2020) There Oughta Be a Law: When Does(n’t) the U.S. Common Rule Apply? The Journal of Law, Medicine & Ethics. 48 (1), 60–73. Available from: https://doi.org/10.1177/1073110520917030 [Accessed 22nd May 2021] 

Mlinarić, A., Horvat, M. & Smolčić, V. (2017) Dealing with the positive publication bias: Why you should really publish your negative results. Biochemia Medica (Zagreb). 27 (3), 030201. Available from: https://doi.org/10.11613/BM.2017.030201 [Accessed 20th May 2021] 

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1978) The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. United States Department of Health, Education, and Welfare. Report number: 78-0012. Available from: https://videocast.nih.gov/pdf/ohrp_belmont_report.pdf [Accessed 21st May 2021] 

Nijhawan, L., Janodia, M., Muddukrishna, B., Bhat, K., Bairy, K., Udupa, N. & Musmade, P. (2013) Informed consent: Issues and challenges. Journal of Advanced Pharmaceutical Technology & Research. 4 (3), 134–140. Available from: https://doi.org/10.4103/2231-4040.116779 [Accessed 21st May 2021] 

Page, S. & Nyeboer, J. (2017) Improving the process of research ethics review. Research Integrity and Peer Review. 2, 14. Available from: https://doi.org/10.1186/s41073-017-0038-7 [Accessed 22nd May 2021] 

Reiss, J. & Sprenger, J. (Winter 2020 Edition) Scientific Objectivity. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Available from: https://plato.stanford.edu/archives/win2020/entries/scientific-objectivity/ [Accessed 20th May 2021] 

Resnik, D. (2011) Scientific Research and the Public Trust. Science and Engineering Ethics. 17 (3), 399–409. Available from: https://doi.org/10.1007/s11948-010-9210-x [Accessed 22nd May 2021] 

Resnik, D. (2014) Data Fabrication and Falsification and Empiricist Philosophy of Science. Science and Engineering Ethics. 20 (2), 423–431. Available from: https://doi.org/10.1007/s11948-013-9466-z [Accessed 20th May 2021] 

Resnik, D. (2020) What Is Ethics in Research & Why Is It Important?. Available from: https://www.niehs.nih.gov/research/resources/bioethics/whatis/index.cfm [Accessed 20th May 2021]  

Romero, F. (2019) Philosophy of science and the replicability crisis. Philosophy Compass. 14 (11), e12633. Available from: https://doi.org/10.1111/phc3.12633 [Accessed 22nd May 2021] 

Rull, V. (2014) The most important application of science. EMBO Reports. 15 (9), 919–922. Available from: https://doi.org/10.15252/embr.201438848 [Accessed 20th May 2021] 

Sade, R. (2003) Publication of unethical research studies: the importance of informed consent. The Annals of Thoracic Surgery. 75 (2), 325–328. Available from: https://doi.org/10.1016/S0003-4975(02)04323-0 [Accessed 21st May 2021] 

Shaul, R. (2002) Reviewing the reviewers: the vague accountability of research ethics committees. Critical Care. 6, 121. Available from: https://doi.org/10.1186/cc1469 [Accessed 22nd May 2021] 

Shrader-Frechette, C. (1994) Ethics of Scientific Research. Lanham, Rowman & Littlefield Publishers Inc. Available from: https://books.google.com.sg/books?id=PbgzAAAAQBAJ [Accessed 22nd May 2021] 

Sivasubramaniam, S., Cosentino, M., Ribeiro, L. & Marino, F. (2021) Unethical practices within medical research and publication – An exploratory study. International Journal for Educational Integrity. 17, 7. Available from: https://doi.org/10.1007/s40979-021-00072-y [Accessed 20th May 2021] 

Steneck, N. (2007) Introduction to the Responsible Conduct of Research. (Revised ed.) Washington D.C., U.S. Government Printing Office. Available from: https://ori.hhs.gov/sites/default/files/rcrintro.pdf [Accessed 20th May 2021] 

Sterling, R. (2011) Genetic Research among the Havasupai: A Cautionary Tale. Virtual Mentor. 13 (2), 113–117. Available from: https://doi.org/10.1001/virtualmentor.2011.13.2.hlaw1-1102 [Accessed 22nd May 2021] 

Suter, W. (2020) Questionable Research Practices: How to Recognize and Avoid Them. Home Health Care Management & Practice. 32 (4), 183–190. Available from: https://doi.org/10.1177/1084822320934468 [Accessed 20th May 2021] 

United Nations. (1948) Universal Declaration of Human Rights. Available from: https://www.un.org/en/about-us/universal-declaration-of-human-rights [Accessed 21st May 2021] 

Van Assche, K., Gutwirth, S. & Sterckx, S. (2013) Protecting Dignitary Interests of Biobank Research Participants: Lessons from Havasupai Tribe v Arizona Board of Regents. Law, Innovation and Technology. 5 (1), 54–84. Available from: https://doi.org/10.5235/17579961.5.1.54 [Accessed 22nd May 2021] 

Vanclay, F., Baines, J. & Taylor, C. (2013) Principles for ethical research involving humans: ethical professional practice in impact assessment Part I. Impact Assessment and Project Appraisal. 31 (4), 243–253. Available from: https://doi.org/10.1080/14615517.2013.850307 [Accessed 20th May 2021] 

Vollmer, S. & Howard, G. (2010) Statistical Power, the Belmont Report, and the Ethics of Clinical Trials. Science and Engineering Ethics. 16, 675–691. Available from: https://doi.org/10.1007/s11948-010-9244-0 [Accessed 22nd May 2021] 

Weinbaum, C., Landree, E., Blumenthal, M., Piquado, T. & Gutierrez, C. (2019) Ethics in Scientific Research: An Examination of Ethical Principles and Emerging Topics. RAND Corporation. Report number: RR-2912-IARPA. Available from: https://www.rand.org/content/dam/rand/pubs/research_reports/RR2900/RR2912/RAND_RR2912.pdf [Accessed 21st May 2021] 

Weyzig, F. & Schipper, I. (2008) SOMO briefing paper on ethics in clinical trials. SOMO. Available from: https://www.somo.nl/wp-content/uploads/2008/02/Examples-of-unethical-trials.pdf [Accessed 21st May 2021]  

World Medical Association. (2013) WMA Declaration Of Helsinki – Ethical Principles For Medical Research Involving Human Subjects. Available from: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/ [Accessed 22nd May 2021] 
