A Manifesto on Ethical AI

Core Philosophy

My position on AI ethics is evolving in step with AI development itself, but there are some core, pragmatic ideals that I suspect will continue to hold as we are swept up into this new “Era of AI”.

My philosophy revolves around the idea of AI as a Partner, Not a Replacement™—also the name of my training program, which helps creatives and organizations adopt AI in ways that enhance creativity, reduce burnout, and stay aligned with human values. Put simply: AI should amplify human potential, not replace it. Imperfect systems are acceptable—if they’re improving, honest about their limits, and designed to serve the public good. My hope for our collective future is that we learn to use AI boldly, but wisely.

Here are a few of my more specific positions:

AI as a Tool, Not a Decision-Maker

  • AI is fantastic for sparking ideas, drafting content, simplifying tasks, and doing surface-level research—but it should never automate away critical thinking or expertise.
  • Automation should scale human insight, not replace it. When we cut humans out of the loop, we lose nuance, empathy, and discernment.
  • AI should never be used to make significant decisions without meaningful human oversight—especially in policy, hiring, finance, or healthcare.

Guarding Against Corporate Capture

  • Allowing the market to default to corporate-controlled AI systems will widen the gap between giant tech monopolies and small, community-rooted businesses.
  • I support decentralized innovation, open-source ethics, and community-built tools—not just because they’re ethical, but because they’re resilient.
  • People’s data, time, and trust aren’t raw materials to be extracted—they’re assets to be respected. Prioritizing this builds long-term trust in an increasingly data-wary world.
  • AI workflows and products must be built with clear consent mechanisms, transparent usage policies, and accountability baked in from the start.
  • Organizations should be explicit about data usage and give users real agency, especially in sectors like education, advocacy, or public services.
  • We must advocate for stronger regulations against data scraping—even if perfect enforcement is unattainable—to protect individuals during this AI transition.

Rejecting Exploitation & Surveillance Capitalism

  • I believe organizations should never compromise their audience’s data for speed, scale, or trend-chasing.
  • I’m critical of surveillance capitalism and reject the normalization of using AI to mine personal information for profit.
  • As business leaders, we have a responsibility to oppose unethical AI-driven practices like synthetic content flooding or exploitative dynamic pricing models.

Equity, Not Just Efficiency

  • Ethical AI should be used to dismantle systemic inequities, not entrench them.
  • I especially emphasize this in nonprofit, civic, and educational contexts, where trust and impact matter more than flash.

AI Literacy Means Knowing the Risks

Real AI literacy includes understanding the tool’s biases, blind spots, and failure modes—not just its bells and whistles.

I encourage decision-makers to ask, before any AI system gets implemented: “Who benefits? Who is harmed? And who is missing from the room?”