In asking me what's wrong with engineering a child to be less prone to hatred and violence, you're forcing me to go down a path that I follow only with reluctance, because what I say can be easily misunderstood. No one, least of all me, can make the argument that what the world needs is more pain, suffering, and, indeed, hatred and violence. So why shouldn't we get rid of all of these negative emotions, as well as anxiety, remorse, longing, guilt, dissatisfaction, envy, and the like, if not with genetic engineering, then with drugs?
The answer, I think, is that a fully lived human life is not complete without the experience of these emotions, which is why I think a moral philosophy based on the maximization of something called "happiness" is so shallow.
Nietzsche once said that we need to learn to become better haters. Now, Nietzsche is not a good all-around guide on moral questions, but in this respect he was his usual insightful self. You could talk about getting rid of hatred only if all truly hateful things in the world disappeared, and about ending propensities toward violence only if there were no aggressors ready to do violence to you. The "good" emotions that we conventionally associate with happiness are often good only in relation to the negative ones. If we never felt pain, or pathos, we could never feel human sympathy. If we never had unrequited longings or anxieties, we would never strive to reach beyond ourselves, or innovate, or explore. If there were no risk or danger, there would be no courage. There are indeed people who are happily free of the acute fear of death, and we call them sociopaths.
Let me turn the tables and ask you a question—something that's easy to do since I'm getting the last word in this dialogue. What's wrong with the drug soma in Brave New World? If you could pop a pill under whatever circumstances and feel happy, is there anything in your moral-utilitarian philosophy that would tell you it was wrong? If you betrayed a friend, took a bribe, acted contemptibly in the line of duty, wouldn't it be great to be able to overcome those horrible feelings of guilt and self-loathing with a simple medication? You'd be adding to the global maximization of happiness, after all.
Back to more practical matters. I would like to base a lot of future regulation on a distinction between therapy and enhancement, permitting the former while raising barriers to the latter. As you say, the line between the two is very difficult to draw in many circumstances. Is Viagra, for example, a therapy or an enhancement? Or giving a new heart valve to an 80-year-old patient?
Be that as it may, I think that these distinctions are much easier to make in practice than in theory. Your example of Ritalin is a good one. You see the glass as half empty: It's prescribed for a wide variety of people, including many who don't really suffer from attention deficit disorder, and I agree that in our society the domain of the therapeutic has been widening steadily over the years. On the other hand, you can also see the glass as half full: If there were ever a socially constructed disease subject to squishy diagnosis, it is attention deficit disorder. And yet, despite the fact that no one can draw a neat theoretical line between the therapeutic and enhancement uses of Ritalin, we still make the former legal as a prescription drug and ban the latter as an enhancement medicine. We don't like the idea that our home-run hitters may be competing on the basis of who has the better pharmacologist rather than who has the greater natural talent, so we make rules—nonexistent, unfortunately, in professional baseball—to control the abuse of performance-enhancing drugs. These solutions may be sloppy, but I don't think we're better off not trying to put them in place.
In an age when millions of people in Africa and other developing countries are dying from diseases that are either curable (as in the case of malaria) or treatable (as in the case of AIDS), it seems to me quite unjust to have market forces channel large resources into ethically questionable biomedical technologies instead. You don't even have to ban such technologies; all you need to do is raise their cost or force them to jump through much tighter regulatory hoops. It makes sense, after all, that we would permit parents to take greater risks in treating a debilitating genetic disease than in making an otherwise normal kid a little bit more intelligent.
Bob, I'll close here because I know you couldn't possibly disagree. This is not being anti-science or anti-technology, only commonsensical.