Karma optimization has a built-in crassness to its connotation. It feels a bit gross to maximize, optimize, and perfect the goal-centred, utilitarian approach that Dev and the Church of the Karma Bureau demand of us. It feels robotic. It feels algorithmic. Like you said, there isn't just a finite set of problems that we can identify, quantify, and conquer to bring us to this utopic land where Dev and the Bureau can retire to the back 9 and a life of beer by the beach. This perfect-world impossibility is a reality we need to actively confront when embarking on any sort of ethical discussion. In the past, this kind of realization has left me in a gridlock of moral paralysis. Even if I optimize my karma, and even if, in the best-case scenario, the whole world optimizes together, this utopic sustainable land will remain a fabricated dreamscape. So I'd ask myself the ever-circular, almost annoyingly nihilistic adolescent question: what's the point? Why even bother? How do I navigate this seemingly impossible ethical world, somehow balancing my selfish interests, without condemning myself to complete self-sacrifice, all managed with no easy black-and-white moral compass to direct my behaviour? The short answer to that question is: I don't know. But it's more of an "I don't think it's possible to know for certain" type of I-don't-know. Before I spring off the diving board to a deeper conclusion to that question, I feel a bit obligated to say that I was definitely oversimplifying the moral duty to charity in my first post. It was a good exercise to try and understand the extreme side of the consequentialist perspective on charitable duty.
Now, why I say it's not possible to know for certain lies in a sad realization about our friends Dev and Slav. Whether you're inspired by a form of empathetic guilt, enriched by the euphoric bubbles of altruistic compassion, or striving to be a "good guy", there isn't a real Karma Bureau accounting for your efforts. There isn't a Dev the Accountant or a Slav the Auditor out there. Taking a deep dive into some cinematic cheese, there is, however, a Dev and a Slav inside your heart. And that internal Dev and Slav should be consulted to calibrate the compass, to figure out what ultimately makes you feel good when it comes to charity.
But how do we actually end up acting on this internal reflection? The deontological perspective, as you brought up in your post, may help us find some answers here. Rooted in this perspective is an embrace of a series of human evolutionary tendencies when it comes to morality. There's a natural admiration of the virtuous individual, with an empathetic ear, a strong sense of duty, and a conviction to give yes-or-no answers to difficult questions. There's a reassurance in the finality and clarity that it gives people. And apart from this role-model "good person" ideal that's easy and natural to strive toward, the deontological perspective gives people clear answers so they can make decisions and act on them. As for those of us in the world of the morally grey, we're stuck in a gridlock of indecision with no exit in sight. Even if the truth is actually grey, how do you become operational and stay out of the purgatory of moral analysis paralysis? My answer here is to embrace the arbitrariness of the moral grey by making a new rule: the "Time-Sensitive-Arbitrary-Deadline-Decision-Making" rule. TSADDM. The name is still a work in progress. But what this means is that I fix some arbitrary deadline, say "one week from today", spend that time continuing the analysis I've been having, and once the deadline arrives, make a decision and follow through with it. Period.