When I come across these one-death-vs.-many-deaths thought experiments, I often think of Asimov's laws of robotics. In the original stories, the First (highest) law was that a robot may never harm a human or allow one to come to harm through inaction. In a later novel, a robot derives a Zeroth (i.e., higher) law: that a robot may never harm humanity or allow humanity to come to harm. However, invoking the Zeroth law to harm a human (in violation of the First law) places major stress on the robot's brain. I think that's a good model for how we should view these situations. We may sometimes find it necessary to sacrifice one for many, but for a truly moral person, doing so is painful and aversive precisely because of the recognition of the one's humanity. I worry about those who could make such a sacrifice without a second thought or a trace of guilt.