I have come across two perspectives on how science is dealing with morality. The more elegant of the two, simpler, better grounded, and more intelligible, is Joshua Greene's¹. In his view, morality is, given our evolution as a species, a problem of cooperation. And this problem appears in two forms. In the first, there is a conflict between what is good, advantageous, or harmful for me as an individual versus what is good, advantageous, or harmful for the group I belong to. In the second, the conflict is between my group and another group. His explanation for why this is so follows from how we evolved: as social beings in constant interaction not only with people from the group we identify with and feel we belong to, but also with neighboring groups, with whom we would compete for resources and with whom we have no identification at all. In that sense, our moral instincts evolved to settle this dispute.
This distinction between groups comes through clearly in one of the studies he carried out:
You also see this in the indifference that we have towards far away strangers. Many years ago the philosopher, Peter Singer, posed the following hypothetical: He said, suppose you're walking by a pond and there's a child who's drowning there, and you can wade in and save this child's life, but if you do this you're going to ruin your fancy shoes and your fancy suit. It might cost you $500, $1,000 or more, depending on how fancy you are, to save this child's life. Should you do it, or is it okay to not do it?
Almost everybody says, "No, of course, you'd be a monster if you let this child drown because you were thinking about your clothes." Peter Singer says, okay, well, what about this question: There are children on the other side of the world in the Horn of Africa, let's say, who desperately need food, water, and medicine; not to mention having an education, and some kind of political representation, and so on and so forth. Just to help satisfy some basic needs, keep somebody alive for a better day, let's say you can donate $500, $1,000, something that's substantial, but that you can afford; the kind of money you might spend on a nice set of clothes, and you could save somebody's life to do that. Do you have a moral obligation to do that? Most people's thought is, well, it's nice if you do that, but you're not a monster if you don't do that, and we all do that all the time.
There are two different responses to this problem. One is to say, well, these are just very different. Here, you have a real moral obligation—saving the child who's drowning. Over there, it's a nice thing to do, but you don't have to do it; there's no strong moral demand there. There must be some good explanation for why you have an obligation here, but you don't have an obligation there. That's one possibility.
Another possibility is that what we're seeing here are the limitations of intuitive human morality; that we evolved in a world in which we didn't deal with people on the other side of the world—the world was our group. We're the good guys. They're the bad guys. The people on the other side of the hill—they're the competition. We have heartstrings that you can tug, but you can't tug them from very far away. There's not necessarily a moral reason why we're like this, it's a tribal reason. We're designed to be good to the people within our group to solve the tragedy of the commons, but we're not designed for the tragedy of common sense morality. We're not designed to find a good solution between our well-being and their well-being. We're really about me and about us, but we're not so much about them.
Is that right? First, let me tell you about an experiment. This was done as an undergraduate thesis project with a wonderful student named Jay Musen. We asked people a whole bunch of questions about helping people, and one of the starkest comparisons goes like this: In one version, you're traveling on vacation in a poor country. You have this nice cottage up in the mountains overlooking the coast and the sea, and then a terrible typhoon hits and there's widespread devastation, and there are people without food, medicine, there are sanitary problems, and people are getting sick, and you can help. There's already a relief effort on the ground, and what they need is just money to provide supplies, and you can make a donation. You've got your credit card, and you can help these people. And we ask people, do you have an obligation to help? And a majority, about 60-something percent of the people said yes, you have an obligation to do something to help these people down on the coast below that you can see.
Then we asked a separate group of people the following question: Suppose your friend is vacationing in this faraway place, and you describe the situation as exactly the same, except instead of you there, it's your friend who's there. Your friend has a smart phone and can show you everything that's going on; everything your friend sees, you can see. The best way to help is to donate money, and you can do that from home just as well. Do you have an obligation to help? And here, about half as many people say that you have an obligation to help—about 30 percent.
What's nice about this comparison is that it cleans up a lot of the mess in Singer's original hypothetical. You can say, you're in a better position to help in one case, but not in the other case, or there are a lot of different victims, in one case, there's only one victim. We've equalized a lot of the things there and you still see it as a big difference. It's possible, maybe, that what's really right is to say no, you really have an obligation to help when you're standing there, and you really don't have an obligation when you're at home. That's one interpretation. I can't prove that it's wrong.
One of the ways he argues we should think about and face these conflicts is through a form of utilitarianism. What really matters, and what should be valued, is the quality of people's lives, the experiences they go through. In that sense, one person's well-being is no more important than anyone else's. The solution to these conflicts, then, would be to find ways of weighing the well-being of these groups equally.
There is a philosophy that accords with this, and that philosophy has a terrible name; it's known as utilitarianism. The idea behind utilitarianism is that what really matters is the quality of people's lives—people's suffering, people's happiness—how their experience ultimately goes. The other idea is that we should be impartial; it essentially incorporates the Golden Rule. It says that one person's well-being is not ultimately any more important than anybody else's. You put those two ideas together, and what you basically get is a solution—a philosophical solution to the problems of the modern world, which is: Our global philosophy should be that we should try to make the world as happy as possible. But this has a lot of counterintuitive implications, and philosophers have spent the last century going through all of the ways in which this seems to get things wrong.
Maybe a discussion for another time is: Should we trust those instincts that tell us that we shouldn't be going for the greater good? Is the problem with our instincts or is the problem with the philosophy? What I argue is that our moral instincts are not as reliable as we think, and that when our instincts work against the greater good, we should put them aside at least as much as we can. If we want to have a global philosophy—one that we could always sign onto, regardless of which moral tribes we're coming from—it's going to require us to do things that don't necessarily feel right in our hearts. Either it's asking too much of us, or it feels like it's asking us to do things that are wrong—to betray certain ideals, but that's the price that we'll have to pay if we want to have a kind of common currency; if we want to have a philosophy that we can all live by. That's what ultimately my research is about. It's trying to understand how our thinking works, what's actually going on in our heads when we're making moral decisions. It's about trying to understand the structure of moral problems, and the goal is to produce a kind of moral philosophy that can help us solve our distinctively complex moral problems.
The other way of understanding human morality and how it works is through Sam Harris's proposal², which, in my view, is somewhat confusing. He says that questions about values (the meaning of life, morality, and significance) are questions about the well-being of conscious creatures, and that this well-being can even be translated into facts that can be scientifically understood. Since these brain states are good or bad for the individual experiencing them, there would therefore be right answers to the situations we call moral.
¹ http://edge.org/conversation/deep-pragmatism
² http://en.wikipedia.org/wiki/The_Moral_Landscape and https://www.ted.com/talks/sam_harris_science_can_show_what_s_right