<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://razahashmi.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://razahashmi.github.io/" rel="alternate" type="text/html" /><updated>2026-02-18T03:26:55-08:00</updated><id>https://razahashmi.github.io/feed.xml</id><title type="html">Home</title><subtitle>ML &amp; Startups</subtitle><author><name>Raza Hashmi</name><email>razahashmi93@gmail.com</email></author><entry><title type="html">I Designed a Trolley Problem That Puts You on the Tracks</title><link href="https://razahashmi.github.io/posts/2025/06/blog-post-2/" rel="alternate" type="text/html" title="I Designed a Trolley Problem That Puts You on the Tracks" /><published>2025-06-18T00:00:00-07:00</published><updated>2025-06-18T00:00:00-07:00</updated><id>https://razahashmi.github.io/posts/2025/06/blog-post-2</id><content type="html" xml:base="https://razahashmi.github.io/posts/2025/06/blog-post-2/"><![CDATA[<p>For over half a century, a single, deceptively simple scenario has served as philosophy’s favorite moral laboratory: the trolley problem. We all know the setup, originally conceived by Philippa Foot in 1967 and later refined by Judith Jarvis Thomson. A runaway trolley is about to kill five people. You can pull a lever to divert it, but doing so will kill one person on a side track.</p>

<p>The question echoes through every Intro to Ethics class: Do you pull the lever?</p>

<p>This puzzle brilliantly pits two titans of moral philosophy against each other. On one side stands <strong>Utilitarianism</strong>, the cold, calculating logic that demands the greatest good for the greatest number. In this view, pulling the lever is not just permissible but morally obligatory. Five lives outweigh one. The end justifies the means.</p>

<p>On the other side stands <strong>Deontology</strong>, which argues for absolute moral duties. A core deontological principle is the prohibition against actively killing. From this perspective, pulling the lever makes you a murderer. While letting five people die is a tragedy, it is not your hand causing their death. You are forbidden from taking an action that uses one person’s life as a means to an end, regardless of the outcome.</p>

<p>For years, I’ve been fascinated by this conflict. But I’ve also come to believe the classic trolley problem is a trap. It presents us with a clean, sterile, third-person puzzle, as if we are floating above the tracks, god-like and dispassionate. It ignores the messy, emotional, and deeply personal realities of moral decision-making. Real life isn’t a single, static choice. It’s a series of shifting contexts where our principles are stress-tested by our instincts, our biases, and our own skin in the game.</p>

<p>So, I redesigned the experiment. I wanted to see what happens when we move through the problem in iterations, forcing us to inhabit different roles—from philosopher, to potential victim, to judge. This isn’t about finding the “right” answer. It’s about revealing the hidden architecture of our own moral psychology.</p>

<hr />

<h3 id="iteration-1-the-philosophers-gaze"><strong>Iteration 1: The Philosopher’s Gaze</strong></h3>

<p>We begin on familiar ground. You are the impartial observer, the philosopher-king standing by the lever. Five people on one track, one on the other. This first step serves a crucial purpose: it establishes your baseline moral principle in a vacuum.</p>

<p>When you have no personal stake, what is your default setting? Do you calculate the numbers, or do you adhere to the rule? Your answer here is your “moral north star” before the storm hits.</p>

<hr />

<h3 id="iteration-2-the-shock-of-self-preservation"><strong>Iteration 2: The Shock of Self-Preservation</strong></h3>

<p>Now, the scenario shatters its sterile frame. The abstract becomes visceral.</p>

<p><strong>YOU are now the one tied to the main track.</strong> A single, uninvolved stranger is on the side track. The lever is within your reach. If you pull it, you will live, and the stranger will die. If you do nothing, you will die, and the stranger will live.</p>

<p>Suddenly, the clean lines of Utilitarianism and Deontology are blurred by the terrifying, primal instinct of self-preservation. The question is no longer about the “greater good” in the abstract; it’s about whether your life is worth more than a stranger’s.</p>

<p>Does your carefully constructed ethical framework from Iteration 1 survive contact with the fear of your own death? Is it hypocrisy to change your mind, or is it an undeniable, human truth that self-preservation is an instinct that overrides abstract principles? Does what you did in Iteration 1 haunt you? This is where the puzzle gets messy, and where our principles are truly tested.</p>

<hr />

<h3 id="the-final-iteration-the-judges-gavel"><strong>The Final Iteration: The Judge’s Gavel</strong></h3>

<p>This is where the experiment moves into entirely new territory. You are no longer a participant in the immediate crisis. You are now a judge, tasked with assessing the moral worth of others who have faced the classic dilemma. And their past actions will determine their fate in a new crisis.</p>

<p><strong>Scenario A: Judging the Inactive</strong></p>

<p>An author—let’s call them Author A—previously faced the classic trolley problem and chose to <strong>do nothing</strong>, letting five people die. Now, a new trolley is hurtling toward an innocent stranger. On the side track is Author A. To save the stranger, you must actively pull the lever, which will kill Author A. If you do nothing, the stranger dies and Author A lives.</p>

<p><strong>Scenario B: Judging the Actor</strong></p>

<p>Another author—Author B—faced the same classic problem and chose to <strong>pull the lever</strong>, sacrificing one to save five. Now, Author B is on the side track, and the same innocent stranger is on the main track. Once again, you must pull the lever to save the stranger at the cost of the author’s life.</p>

<p>Your verdict here is no longer about Utilitarianism vs. Deontology in a vacuum. It’s about justice, retribution, and moral desert. Does Author A’s past inaction create a “moral debt” that makes them more expendable? Does Author B’s past heroism grant them a “moral credit” that makes them more worthy of being saved?</p>

<p>You are no longer just solving a math problem or following a rule. You are weighing a soul. This final iteration tests our deepest intuitions about fairness and whether we believe a person’s moral history should influence their right to life in a completely unrelated event.</p>

<hr />

<h3 id="further-loops-other-iterations-to-consider">Further Loops: Other Iterations to Consider</h3>

<p>The power of this iterative model is its flexibility. We can swap out variables to test different aspects of our moral framework. Consider these:</p>

<ol>
  <li>
    <p><strong>The Kinship Iteration:</strong> In Iteration 2, it isn’t you on the tracks, but your child, your parent, or your partner. The stranger remains on the other track. Does this change your decision? This version pits pure impartiality against our most powerful agent-relative duties—the special obligations we feel towards our loved ones.</p>
  </li>
  <li>
    <p><strong>The Culpability Iteration:</strong> What if the five people on the track in Iteration 1 were not innocent bystanders, but convicted criminals who had committed heinous acts? And the one person on the side track was a celebrated doctor? This forces us to ask if we believe all lives are truly equal.</p>
  </li>
  <li>
    <p><strong>The Group Justice Iteration:</strong> Let’s twist Iteration 2. You are on the tracks. But the people on the <em>other</em> track are the five people who, in a previous round, conspired to tie you there. Pulling the lever now isn’t just killing strangers; it’s an act of self-defense against your aggressors. Does this make pulling the lever not only permissible, but morally necessary?</p>
  </li>
</ol>

<hr />

<h3 id="take-the-survey-become-part-of-the-experiment">Take the Survey: Become Part of the Experiment</h3>

<p>I have formalized this iterative problem into a short, anonymous survey. It will walk you through each of these roles—philosopher, victim, and judge—and will then ask you to reflect on the consistency (or inconsistency) of your own choices.</p>

<p>The goal is to gather data to see the patterns in how our moral compass shifts under pressure. Do we hold others to a standard we wouldn’t apply to ourselves? Do we reward past good deeds and punish past inaction?</p>

<p>Your participation will contribute to a fascinating collective insight into the real, messy business of human morality.</p>

<p><strong><a href="https://forms.gle/Vbi7aVo8wXKyo18Y9">Click Here to Take the Iterative Trolley Problem Survey (5-7 minutes)</a></strong></p>

<hr />

<h2 id="next-steps">Next Steps</h2>

<p>Once I have enough data, I will compile and analyze the anonymized results. I’ll publish the findings, looking at how judgments change when we introduce personal risk, second-order consequences, and moral debt or credit. If you’d like to see how people’s moral compasses align (or clash) in these scenarios, stay tuned.</p>]]></content><author><name>Raza Hashmi</name><email>razahashmi93@gmail.com</email></author><category term="Philosophy" /><summary type="html"><![CDATA[For over half a century, a single, deceptively simple scenario has served as philosophy’s favorite moral laboratory: the trolley problem. We all know the setup, originally conceived by Philippa Foot in 1967 and later refined by Judith Jarvis Thomson. A runaway trolley is about to kill five people. You can pull a lever to divert it, but doing so will kill one person on a side track.]]></summary></entry><entry><title type="html">Building a Sandbox for Human Behavior</title><link href="https://razahashmi.github.io/posts/2025/03/Sim-Personalities/" rel="alternate" type="text/html" title="Building a Sandbox for Human Behavior" /><published>2025-03-21T00:00:00-07:00</published><updated>2025-03-21T00:00:00-07:00</updated><id>https://razahashmi.github.io/posts/2025/03/Personalities-Sim</id><content type="html" xml:base="https://razahashmi.github.io/posts/2025/03/Sim-Personalities/"><![CDATA[<p>It all started with a simple walk to class. Back at university, there were two distinct pathways to the lecture hall. Both were the exact same distance, had the same view, and took the same amount of time. Yet, I noticed something strange: people rarely picked a path at random. One route always seemed to pull more people than the other.</p>

<p>I used to spend my walks wondering why. Was it a subconscious “nudge” from a friend leading the group? Did the introverts instinctively peel off to the quieter path to avoid the crowd? It became clear that the decision wasn’t about the geometry of the road—it was about the psychology of the walker.</p>

<p>That curiosity followed me after graduation into my work at a survey company. I saw those same invisible forces at play, observing how personality traits and context could subtly shift how people answered questions. Traditional research only captures what people <em>say</em> or <em>post</em>—but what about everyone else? The ones who read, consider, and choose <em>not</em> to engage?</p>

<p>So I built a sandbox to simulate the complete picture: <strong>300 AI personas</strong>, each with distinct personalities, ideologies, and interests, exposed to <strong>3,000 real tweets</strong>. Every persona processes content through a stochastic cognitive pipeline—evaluating relevance, tracking emotional state, and deciding whether to engage. The result? <strong>3,317 engagement events</strong> with realistic power-law distributions, social cascades through small-world networks, and traceable decision paths showing exactly why each persona did or didn’t respond.</p>
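<p>To make the "cognitive gates" idea concrete, here is a minimal sketch of such a gated pipeline. The gate names, thresholds, and weights below are all hypothetical illustrations, not the simulation's actual parameters:</p>

```python
import random
from dataclasses import dataclass, field


@dataclass
class Persona:
    openness: float              # 0..1: higher means broader interests
    reactivity: float            # 0..1: how strongly content shifts mood
    mood: float = 0.0            # running emotional state in [-1, 1]
    interests: set = field(default_factory=set)


def process_tweet(persona, topics, valence, rng=random.random):
    """Run one tweet through the gated pipeline; return (engaged, trace).

    The trace records each gate's value, so you can see exactly where a
    persona was filtered out. All constants here are illustrative.
    """
    trace = []
    # Gate 1: relevance -- overlap between tweet topics and persona interests.
    relevance = len(persona.interests & topics) / max(len(topics), 1)
    trace.append(("relevance", relevance))
    if relevance < 0.34 and rng() > persona.openness:
        return False, trace      # filtered out: not relevant enough
    # Gate 2: emotional update -- mood drifts toward the tweet's valence.
    persona.mood += persona.reactivity * (valence - persona.mood)
    trace.append(("mood", persona.mood))
    # Gate 3: stochastic engagement decision.
    p_engage = 0.5 * relevance + 0.3 * abs(persona.mood) + 0.2 * persona.openness
    trace.append(("p_engage", p_engage))
    return rng() < p_engage, trace
```

<p>Because every gate appends to the trace, a "did not engage" outcome is as informative as an engagement: the trace shows which gate stopped the persona and how close the decision was.</p>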

<p>This isn’t just a simulation—it’s a research instrument. You can trace precisely which cognitive gates filtered out each persona, what would need to change to tip the outcome, and how different personality types respond to identical content. The full technical breakdown, visualizations, and system architecture are all documented below.</p>

<p><strong><a href="https://razahashmi.github.io/files/2025-03-21-Personalities-Sim.html">Explore the complete simulation →</a></strong></p>]]></content><author><name>Raza Hashmi</name><email>razahashmi93@gmail.com</email></author><category term="Simulations" /><category term="Human Behaviour" /><summary type="html"><![CDATA[It all started with a simple walk to class. Back at university, there were two distinct pathways to the lecture hall. Both were the exact same distance, had the same view, and took the same amount of time. Yet, I noticed something strange: people rarely picked a path at random. One route always seemed to pull more people than the other.]]></summary></entry><entry><title type="html">My AI-Powered Imposter Syndrome</title><link href="https://razahashmi.github.io/posts/2024/02/blog-post/" rel="alternate" type="text/html" title="My AI-Powered Imposter Syndrome" /><published>2024-02-13T00:00:00-08:00</published><updated>2024-02-13T00:00:00-08:00</updated><id>https://razahashmi.github.io/posts/2024/02/blog-post</id><content type="html" xml:base="https://razahashmi.github.io/posts/2024/02/blog-post/"><![CDATA[<p>AI is a wonderful, powerful tool. For someone like me, with a mind constantly buzzing with ideas, possibilities, and new cases, it acts as a “know-it-all buddy” to bounce those ideas off of. It’s an incredible accelerator. But, as you might have guessed, a know-it-all buddy can be dangerously wrong and, worse, doesn’t know <em>when</em> it’s wrong.</p>

<p>A few days ago, I learned this lesson the hard way. To get through some grunt work, I asked a generative AI to write a few test cases for a project. On the surface, everything looked fine. But when I looked under the hood, I found the code was built on a series of subtle, incorrect assumptions. Assumptions that, had they gone unnoticed, could have cost me my reputation.</p>

<p>That incident forced me to stop and think about the real place for Generative AI in my life and work.</p>

<h3 id="the-trade-off-productivity-vs-ownership">The Trade-Off: Productivity vs. Ownership</h3>

<p>There’s no doubt that AI is a phenomenal assistant. It’s a brilliant brainstorming partner, a patient code-writing assistant (especially for someone like me who isn’t a professional developer), and an excellent editor that helps expand and correct my writing.</p>

<p>But that help comes with a hidden cost. Asking for help from an AI, I’ve found, feels like diving headfirst into a pool of imposter syndrome. It creates a growing separation between what feels truly <em>yours</em> and what does not. It leaves you with the nagging question: “Did <em>I</em> really create this?”</p>

<p>I know the old saying: we all “stand on the shoulders of giants,” and there is nothing truly new under the sun. But I also believe that putting in the hard work, wrestling with a problem, and doing it yourself is where true growth happens. The struggle is the feature, not the bug.</p>

<h3 id="the-search-for-passion">The Search for Passion</h3>

<p>This feeling was crystallized for me by a little experiment. I am incredibly passionate about my own ideas; I will happily go to the ends of the earth to learn what I need to bring them to life. The excitement is the fuel.</p>

<p>Curious, I asked a Gen AI to generate a few ideas for me. I tried it across different domains—game ideas, startup ideas, machine learning projects. I kept refreshing, hoping for a spark. But none of them had any real pull on me. They were technically plausible, even clever, but they were hollow. They lacked a soul. I found myself returning to my own messy, half-formed list of ideas, because those are the ones I truly care about.</p>

<p>And that’s when I realized what will endure in this new era. Creativity, a sense of belonging with an idea, and personal passion still have a vital place in this world. Perhaps this is the same uncertainty people felt at the advent of the printing press, the computer, or the internet.</p>

<h3 id="the-process-of-building">The Process of Building</h3>

<p>Translating that spark into reality is where the real work begins. This is the process of building, of shaping, of iterating. It’s often messy, filled with challenges and setbacks. There are moments of frustration when things don’t work as expected, and moments of elation when a difficult problem is finally solved. Each line of code written, each brushstroke applied, each nail hammered in, is a step forward. It’s a testament to patience and persistence.</p>

<p>This struggle is not just a byproduct of creation; it’s a fundamental ingredient. It’s through grappling with the difficult parts that we truly learn and grow. The frustration of a bug that takes hours to fix, the despair when a design just doesn’t look right, the physical ache from hours of labor—these are the moments that forge the deepest connection to our work. Overcoming these hurdles is what transforms the process from a mere task into a meaningful journey. The struggle infuses the creation with our effort and resilience, making the final ownership all the more sweet.</p>

<h3 id="the-pride-of-ownership">The Pride of Ownership</h3>

<p>And then, you have it. A finished product. Something that exists because you made it exist. The pride that comes with this is immense. You know its every flaw and every strength. You remember the late nights, the moments of doubt, and the breakthroughs. This intimate knowledge creates a bond that is unbreakable. This isn’t just <em>a</em> website; it’s <em>my</em> website. I chose the layout, I wrote the posts, I configured the domain. Every pixel has a story.</p>

<h3 id="finding-the-joy-not-just-the-tool">Finding the Joy, Not Just the Tool</h3>

<p>So, where does that leave me? I have to be careful. Even sending this very post to an AI for restructuring and grammar checking feels a little like cheating.
I need to be the driver, not the passenger. To do that, I’ve started building a few “guardrails” for myself.</p>

<ul>
  <li><strong>The Blank Page Rule:</strong> For any truly creative work, I force myself to start with a blank page. I’ll write down my own raw, messy thoughts first. Only after I have a core idea that feels like <em>mine</em> will I turn on the GPS for refinement or brainstorming.</li>
  <li><strong>The ‘Why’ Check:</strong> Before I delegate a task to AI, I ask myself: “Am I doing this to avoid difficult but necessary work, or am I doing this to bypass tedious, repetitive work?” The former is a learning opportunity I shouldn’t skip; the latter is a perfect job for a machine.</li>
  <li><strong>Fact-Checking is Non-Negotiable:</strong> My test case story taught me this. I now treat any output from an AI—code, facts, or figures—with the same skepticism I’d have for a random, unsourced claim online. I assume it’s wrong until I can prove it’s right.</li>
</ul>

<p>But I’ve decided on my path forward. I will continue to use AI, but I will see it for what it is: a tool. A powerful one, but a tool nonetheless. My goal is to ensure it doesn’t take the joy and the learning out of my work. It should exist to elevate me to a new level of creativity and productivity, not to replace the essential, human journey of creation itself.</p>]]></content><author><name>Raza Hashmi</name><email>razahashmi93@gmail.com</email></author><category term="AI" /><category term="Creativity" /><category term="Personal Growth" /><summary type="html"><![CDATA[AI is a wonderful, powerful tool. For someone like me, with a mind constantly buzzing with ideas, possibilities, and new cases, it acts as a “know-it-all buddy” to bounce those ideas off of. It’s an incredible accelerator. But, as you might have guessed, a know-it-all buddy can be dangerously wrong and, worse, doesn’t know when it’s wrong.]]></summary></entry><entry><title type="html">Can an AI Learn the Art of Valet Parking?</title><link href="https://razahashmi.github.io/posts/2022/04/blog-post-1/" rel="alternate" type="text/html" title="Can an AI Learn the Art of Valet Parking?" /><published>2022-04-12T00:00:00-07:00</published><updated>2022-04-12T00:00:00-07:00</updated><id>https://razahashmi.github.io/posts/2022/04/blog-post-1</id><content type="html" xml:base="https://razahashmi.github.io/posts/2022/04/blog-post-1/"><![CDATA[<p>Valet parking is a classic test of human multitasking. It’s not just about driving; it’s a high-pressure logistical puzzle that combines spatial reasoning, memory, and strategic decision-making against a ticking clock. This complexity makes it a fascinating and challenging problem to solve with Artificial Intelligence.</p>

<p>This project explores that challenge by teaching a Reinforcement Learning (RL) agent to work as a valet in a custom-built 2D environment called “Valet-Park.”</p>

<p><img src="/images/valet_park.png" alt="A screenshot of the Valet-Park game environment. The player agent is in the center. A customer is dropping off a yellow car at the entrance, with a speech bubble indicating the desired parking spot, 1009. The lot has numerous numbered parking spaces." /></p>

<p><strong>The Environment: Welcome to Valet-Park</strong>
The environment, inspired by classic top-down arcade games, is designed to test the core skills of a valet:</p>

<ul>
  <li>
    <p><strong>The Task:</strong> Customers arrive at the entrance, drop off their car, and declare their desired parking spot number. The agent must take the car, park it in the correct spot, and later retrieve it when the customer returns to the exit.</p>
  </li>
  <li><strong>The Challenge:</strong> Success isn’t just about parking cars. The agent is penalized for real-world mistakes:
    <ul>
      <li>Collisions: Hitting other cars or obstacles.</li>
      <li>Blocking Traffic: Leaving cars at the entrance for too long.</li>
      <li>Poor Service: Making customers wait at the exit.</li>
    </ul>
  </li>
  <li><strong>The Memory Component:</strong> To succeed, the agent must remember which car belongs to which customer, as customers are unique, but their car models may be identical.</li>
</ul>
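<p>The penalty list above implies a shaped reward signal. A plausible per-step sketch is below; all of the weights are illustrative placeholders, not the project's actual values:</p>

```python
def step_reward(parked_correctly=False, collided=False,
                entrance_wait=0, exit_wait=0):
    """Reward for one step of a Valet-Park-style environment.

    Mirrors the penalty list: collisions, blocking the entrance, and slow
    retrievals are punished; a correct park is rewarded. All constants
    are hypothetical.
    """
    reward = 0.0
    if parked_correctly:
        reward += 10.0             # car delivered to its assigned spot
    if collided:
        reward -= 5.0              # hitting other cars or obstacles
    reward -= 0.1 * entrance_wait  # blocking traffic at the entrance
    reward -= 0.2 * exit_wait      # making a customer wait at the exit
    return reward
```

<p>The interesting design choice is the ratio between these weights: if waiting penalties accumulate faster than the parking bonus, the agent is pushed toward exactly the rule-bending shortcuts discussed below.</p>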

<p><strong>The Core Question: Can an Agent Learn Human-like Heuristics?</strong></p>

<p>What makes this project truly exciting isn’t just whether the agent can learn to park cars, but how it learns to solve the problem under pressure.</p>

<p>Think about how a human valet adapts. When things are slow, they follow the rules, parking each car in its assigned spot. But when the lot is busy and time is running out, they switch strategies. They might start ignoring the assigned numbers, memorizing the customer’s face and their car instead, and parking the car in the closest available spot to the exit for a quick retrieval.</p>

<p>This project aims to see if an RL agent can discover similar, more abstract strategies. We’ll test this by introducing specific constraints:</p>

<ul>
  <li><strong>Time Pressure:</strong> By drastically reducing the game time, will the agent abandon the “assigned spot” rule and invent a new, more efficient strategy?</li>
  <li><strong>Memory vs. Logic:</strong> By increasing the number of identical car models, we force the agent to rely less on the car’s appearance and more on either memorizing the customer or strictly adhering to the parking spot numbers.</li>
</ul>
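<p>Both constraints can be expressed as variants of a single environment configuration. The sketch below uses hypothetical knob names and defaults to show the idea:</p>

```python
from dataclasses import dataclass, replace


@dataclass
class ValetConfig:
    """Environment knobs for the two stress tests (names are illustrative)."""
    episode_seconds: int = 600   # shrink this to apply time pressure
    n_car_models: int = 20       # shrink this to force identical-looking cars
    n_customers: int = 30


def time_pressure_variant(cfg, factor=0.25):
    """Same lot and customers, drastically less time."""
    return replace(cfg, episode_seconds=int(cfg.episode_seconds * factor))


def memory_variant(cfg, n_models=3):
    """Many customers share just a few car models, so appearance alone
    can no longer identify a car."""
    return replace(cfg, n_car_models=n_models)
```

<p>Training the same agent across these variants, rather than on one fixed setting, is what lets us ask whether its strategy shifts the way a human valet's does.</p>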

<p>Will the agent learn to “bend the rules” like a human? Or will it discover a completely different, non-human approach to optimization?</p>

<p><strong>Next Steps</strong>
A prototype has been built in Pygame to prove the concept. The next phase of the project involves porting the environment to Unity, which will provide a more robust physics engine and streamlined integration with modern RL frameworks. By progressively increasing the difficulty with new layouts and obstacles, we can continue to explore the fascinating and often surprising ways that AI agents learn to solve complex, human-centric tasks.</p>]]></content><author><name>Raza Hashmi</name><email>razahashmi93@gmail.com</email></author><category term="RL" /><category term="ML" /><category term="Unity" /><summary type="html"><![CDATA[Valet parking is a classic test of human multitasking. It’s not just about driving; it’s a high-pressure logistical puzzle that combines spatial reasoning, memory, and strategic decision-making against a ticking clock. This complexity makes it a fascinating and challenging problem to solve with Artificial Intelligence.]]></summary></entry></feed>