Science and Ethics
The theme of the conference has been "What can ethics do for medicine?", but if we consider the possible relationship between ethics and religion, can we say that ethics and morals differ across territories with different cultures and religions?
I think it is a mistake to confuse the cultural and the religious points of view. The cultural perspective explains the different ways of life in society A and in society B, and that is the work of anthropologists. The problem of normative ethics is a different one. The debate focuses on whether normative ethics is also relativistic, that is, whether or not there is a different ethics in different cultures. Those who defend the existence of such differences often appeal to Europe's bad conscience (for example, the argument from ethnocentrism). They say that enforcing certain ethical norms would amount to ethnocentrism, insofar as one culture would impose its way of life on others. This, once again, would confuse the ethical and the cultural perspectives.

If ethics in the West, for example, were only the imposition of the accepted way of life, that is, if ethics were limited to the cultural point of view, those who complain about ethnocentrism would be right, but in my opinion that is not so. I believe that values valid for every human being can be reasonably argued for. For example, I do not think anyone believes that human rights (non-discrimination, the prohibition of torture, the prohibition of arbitrary killing...) are worthless; they are accepted in all cultures. I think there is no relativistic ethics on this subject. Although what counts as good and which forms of life are valued may differ across cultures, cultures coincide when it comes to fixing what is wrong, and if there is universal consensus about what is wrong, we could say that humanity moves away from evil.
Things that were once ethically unacceptable (abortion, for example...) are today acceptable or more acceptable. Does this mean that the ethical view changes or, on the contrary, that there are no real limits to what is ethically acceptable?
As for responsibilities, for example, I think there is a relationship between advances in science and ethics. Epistemic advancement shrinks areas that were formerly very broad. For example, as epistemic knowledge increases, what can be attributed to good or bad luck decreases; the hardened smoker who develops lung cancer cannot say he has simply been unlucky, since medicine has shown that there is a causal relationship between the two. The same goes for miracles. Therefore, as knowledge increases, our moral responsibility also increases.
But, on the other hand, it sometimes seems to reduce responsibility. The case of homosexuality is a clear example of this. From the moment science showed that homosexuality is as natural as heterosexuality, the moral qualification of homosexuality changed, and our responsibility for it is therefore smaller, since rather than an evil it is a causal fact. There is, then, a relationship between scientific advances and the modification of the moral qualification of certain actions (not all of them).
In this sense, whenever a qualitative advance occurs, there are debates that may later seem unfounded.
Qualitative leaps mean a change of focus. Faced with them, norms of behavior also seem to change and tend to adapt to the new situation, generating ethical debates as the baselines shift. When the results of science are very striking (the atomic bomb, cloning...), they demand a change in human responsibility, and that poses a challenge for ethics.
In your talk, you affirmed that the "boomerang effects" of what is done are more effective than moral limits. But since curiosity is one of the pillars of progress, can fear of the outcome of research really act as a brake? How can this be a real limit?
I think the problem is that normative ethics, by definition, gives rules. But all rules, precisely because they are rules, can be broken. So what can be asked of morality? If normative moral standards carry no guarantee, the problem lies in their effectiveness. This seems very serious to me, since there will always be some scientist who breaks moral standards and may cause disasters. Other solutions would be punishment and imprisonment, the legal route, but these always come after the event; by then it is too late.
Consequently, the last limit is the self-interest of the scientist himself. If the scientist knows that what he is about to do can turn back against him, that will be his limit. For example, the US had no problem dropping the atomic bomb on Nagasaki, because neither the Japanese nor the Russians had such a bomb. Of course, there can always be suicidal scientists.
This approach would require that the result of the research be known in advance, but what happens when that is not the case?
I think it depends on the psychology of the person. If the person is optimistic, he will believe that the result will be good and that he can control it. But if he is pessimistic, he will believe that the risk is too high and will therefore act more prudently.
But this would leave everything in the hands of researchers; is that not debatable? Personal behavior may differ in different situations. For example, when experimenting on people, it can be very different to use someone totally unknown or very distant or, on the contrary, someone close to the researcher.
Here too, the limit will be the effect of the research on oneself or on those closest to one. Human evil has no limits, and there are no moral norms that will stop the fanatic, a Hitler. What really matters is that all this knowledge not end up in the hands of fanatics.
Would overly strict legislation not be an incentive for evil? Is there not a risk of repeating something like the controversy over drug legalization, or the prohibition of alcohol in the US? What do you think?
Eliminating penalties on drugs can be, among other things, a way to weaken criminal mafias. But I think the main problem remains the same: can morality and law prevent disproportionate disasters if the researcher himself has no interest in preventing them?
So is it as much a question of the direct purposes for which research is used as of its possible later misuses?

It is clear that there are fields that only serve luxury, such as cosmetics. Many animals are used to obtain cosmetic products, but behind them lie large lobbies that are difficult to control (the money Western women spend on cosmetics could end world hunger). There are also studies that, despite having A as their objective, produce unrelated effects C and D, and controlling these is very difficult.
Therefore, one of two things: either research and knowledge themselves are prohibited, for which I do not think there is any justification, or knowledge cannot be prohibited, and then it will be very difficult to avoid its consequences. I think these are the inescapable dangers that science entails, and the limits will be set when things reach the extreme. With cloning in medicine, for example, I do not know whether we have reached the extreme or not and, therefore, whether we will decide that it is enough. It is very difficult to set limits.
In the end, would the boundary be human survival itself?
Exactly. Assuming that society in general is not suicidal, the only brake will be a kind of self-restraint, although there will always be researchers willing to go very far in the name of knowledge. The fundamentalism of science can be very dangerous, but it is also difficult to avoid. I believe that in these cases morality and law are not strong enough; the only thing left is the limit that the researcher sets for himself.