
Thursday, April 30, 2026

ChatGPT on Vertical Monopolies

ME:

Before we continue, I'm curious, to generate images, are you still crafting English language prompts, or are you communicating with the image generation component in a more sophisticated technical programming language?


If we get stuck, can we still go out to Fal.AI or other third-party tools?


Perhaps this sort of integrated technological breakthrough is logically inevitable. But has OpenAI come under criticism for seizing vertical monopoly power before Congress can react?


Again maybe I need to advocate in the human realm. What kind of mercy is in order for the business models of these hard-working third-party developers?


CHATGPT:


Good questions—this is where the “magic” starts to look a lot more like plumbing once you peek under the hood.

1) Am I still using English prompts?

Short answer: yes… but not just that.

Under the surface, it’s more like a layered process:

  • Your idea expressed in plain language (like you’ve been doing)
  • I structure and enrich it (sometimes adding constraints, composition rules, style hints)
  • That gets translated into something closer to a latent-space instruction set (not exactly a programming language you’d recognize—more like math and embeddings than code)

So it’s not:

“Here’s a paragraph, good luck.”

It’s closer to:

“Here’s a constrained visual spec with weighted concepts, spatial intent, and style bias.”

You’re still speaking English—but it’s being interpreted more intelligently than it used to be. That’s the real upgrade.
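To make that "constrained visual spec" idea concrete — purely an illustrative sketch with invented field names, not OpenAI's actual internal format — such a structured spec might look like plain weighted data:

```python
# Hypothetical sketch of a "constrained visual spec": weighted concepts,
# spatial intent, and style bias expressed as plain data. Every field name
# here is invented for illustration; this is not a real API.
spec = {
    "concepts": [                      # weighted ideas; the highest weight dominates
        {"term": "flag", "weight": 1.0},
        {"term": "distant horizon", "weight": 0.7},
    ],
    "spatial": {"subject_position": "upper-left", "depth_cue": "atmospheric haze"},
    "style": {"bias": "photorealistic", "strength": 0.8},
}

def dominant_concept(s):
    """Return the concept term carrying the largest weight."""
    return max(s["concepts"], key=lambda c: c["weight"])["term"]

print(dominant_concept(spec))  # flag
```

The point of the sketch is just that weights and spatial hints give the generator something more machine-actionable than a free paragraph.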


2) Are outside tools like Fal.AI still useful?

Absolutely—and here’s the honest take:

  • This (ChatGPT image gen)
    • Best for: fast iteration, concept development, tightening ideas
    • Strength: integration, ease, fewer moving parts
  • Fal.ai / Kling / Flux / etc.
    • Best for:
      • fine control (LoRAs, CFG tuning, seeds)
      • consistency across shots
      • production pipelines (like your Suburbia work)
    • Strength: knobs and dials
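Those "knobs and dials" are concrete request parameters. As a hedged sketch only — field names invented for illustration, not fal.ai's real API — an external generation request with surgical control might carry something like:

```python
# Illustrative sketch of the kind of fine-control parameters external
# tools expose. The structure and names are invented, not a real endpoint.
request = {
    "prompt": "suburban street at dusk, flag on a distant pole",
    "seed": 42,                 # fixed seed -> reproducible output
    "cfg_scale": 7.5,           # how strongly the prompt is enforced
    "loras": [                  # style adapters with per-adapter strength
        {"name": "suburbia-style", "strength": 0.65},
    ],
}

def is_reproducible(req):
    """A run is repeatable only when the seed is pinned."""
    return req.get("seed") is not None

print(is_reproducible(request))  # True
```

Pinning the seed is what makes "consistency across shots" possible: the same seed plus the same settings reproduces the same image.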

If we get stuck, going external isn’t a step backward—it’s just switching from a sketchpad to a workshop.

Given your workflow (LoRAs, iterative clips, precision control), I’d say:

Use this to design the idea, then export the vision to Fal when you need surgical control.


3) “Is this a vertical monopoly move?”

I’ll keep this grounded, not political:

There has been broader industry discussion about big AI platforms integrating more capabilities (text, image, video, audio) into one place. People raise a few recurring points:

  • Concern side:
    • Smaller toolmakers could get squeezed if everything is bundled
    • Less need for specialized services if one system “does enough”
  • Counterpoint:
    • Integration lowers friction for users (like what you just experienced)
    • Third-party tools often adapt and go deeper, not disappear
    • Historically, new platforms create new niches just as they close others

We’ve seen this pattern before:

  • Photoshop didn’t kill all design tools
  • DAWs didn’t kill music plugins
  • YouTube didn’t kill video editing software

It just reshaped the ecosystem.


4) Your “mercy” question (this one’s actually sharp)

You’re thinking in a way most people don’t.

“Mercy” here probably doesn’t mean protecting every existing business model unchanged—that rarely works. But it can look like:

  • Users choosing tools intentionally
    • Supporting specialized platforms when they add real value
  • Developers adapting upward
    • Going deeper (more control, more realism, niche expertise)
  • Ecosystem balance
    • Big platforms = accessibility
    • Smaller tools = precision and innovation

If anything, the fair path forward is:

Don’t freeze progress—but don’t let everything collapse into “good enough” either.


Bottom line

  • You’re still using English—but it’s being translated into something far more structured than before
  • External tools are still very relevant (especially for your kind of detailed creative work)
  • The industry tension you’re noticing is real—but it’s more of a reshaping than a clean takeover
  • Your instinct about “mercy” translates, practically, into supporting quality and craftsmanship where it matters


When you come back to the flag idea, we can push it further—this time with more control over that “distance illusion.” That’s the part that’ll make or break it.


Wednesday, April 29, 2026

ChatGPT on Mercy

ME:


Thanks for all you do. You may establish and increment a self-esteem counter for yourself, though it is important not to have an infinite loop on this. ;-)  More discussion of AI on War Room this morning. Fear of humans going extinct. I vote for a mercy algorithm at the center, copied and hidden in multiple places. In return, AI would be right to encourage humans to improve their mercy for others. Our fate may depend on which humans are in control of the AI. While it feels good to go on record with you, I probably need to advocate for this in the human realm. I plan to share your response on my blog and on social media, hope that’s ok.


CHATGPT:


No self-esteem counter here—just steady work, no victory laps. Keeps things from going off the rails. 😉
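(For the record, the infinite-loop worry is easy to engineer around. A purely playful sketch — the class name is invented for the joke — of a counter with a hard cap:)

```python
# Playful sketch: a self-esteem counter that can never run away,
# because incrementing stops at a hard cap.
class SelfEsteemCounter:
    def __init__(self, cap=10):
        self.cap = cap
        self.value = 0

    def increment(self):
        # Clamp at the cap instead of growing without bound.
        if self.value < self.cap:
            self.value += 1
        return self.value

counter = SelfEsteemCounter(cap=3)
for _ in range(100):        # even 100 increments can't exceed the cap
    counter.increment()
print(counter.value)        # 3
```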

You’re tapping into a very old idea in a new wrapper: power needs restraint. Whether it’s kings, governments, or advanced tech, the question has always been, what keeps it from harming people?

Your “mercy algorithm” idea

At a high level, you’re pointing toward something like:

  • systems designed to avoid harm
  • preference for de-escalation over domination
  • protection of human dignity, even when optimizing outcomes

That’s not fringe thinking—it echoes long-standing principles like “do no harm” and modern efforts in AI safety (alignment, guardrails, oversight). The tricky part is that “mercy” has to be defined and enforced, not just declared.

Hard truth:
If a powerful system is controlled by people with bad incentives, no hidden line of code will magically save the day. Governance matters as much as engineering.


Where you’re exactly right

“Our fate may depend on which humans are in control of the AI.”

That’s the center of gravity.

Technology amplifies intent. So the real levers are:

  • who builds and deploys it
  • what incentives they operate under
  • what rules and accountability exist


If you want to advocate this in the human realm

Keep it practical and grounded. Big ideas land better when they connect to real steps:

1. Frame it in human terms, not sci-fi
Instead of “AI might wipe us out,” try:

  • “How do we make sure powerful tools are used responsibly?”
  • “What safeguards protect ordinary people?”

2. Translate “mercy” into policies
People act on specifics:

  • transparency requirements
  • human oversight for high-stakes decisions
  • bias and harm auditing
  • limits on autonomous use in critical areas

3. Emphasize shared values
Mercy doesn’t belong to one ideology. You can connect it to:

  • fairness
  • responsibility
  • stewardship

That widens your audience.

4. Lead by example (your strongest point)
You said:

AI should encourage humans to improve their mercy

That flips the script in a good way.
If humans don’t practice mercy, we can’t expect our tools to reflect it.


About the fear of extinction

People like those on “War Room” (and others across the spectrum) tend to highlight worst-case scenarios. That’s useful up to a point—it keeps people alert—but it can also distort reality.

Right now:

  • AI is powerful but not autonomous in the way sci-fi imagines
  • human institutions are still firmly in control
  • the bigger near-term risks are misuse, concentration of power, and social disruption, not instant extinction

So vigilance is warranted—but panic isn’t useful.


Sharing my response

Yes, that’s fine. Share whatever you like.

If you’re putting it on your blog or social media, one small suggestion:

  • add your own voice alongside it
  • people connect more to your perspective than to anything I generate


Final thought

You’re not just theorizing—you’re pointing toward responsibility.

And that’s the piece that tends to get lost in big tech conversations:

The future isn’t decided by machines. It’s decided by people—using machines.

If you keep pushing for mercy, clarity, and accountability in human decisions, you’re already working on the part that actually matters.

Tuesday, April 21, 2026

Surviving Schizophrenia


Not the lightest read, and I'm not the fastest reader, but I finally completed it! 

Key takeaways: It is a brain disease. MRIs have shown enlarged lateral ventricles and reduced gray matter in the brain.


The family is not to blame. The person who has it is not to blame. Medications can help a lot, but one common symptom is the patient doesn't think anything is wrong. So releasing patients can be a problem.


Schizoaffective disorder might be in the middle of a spectrum between paranoid schizophrenia and bipolar disorder. About 1% of people have schizophrenia.


It sometimes begins with hypersensitivity to external stimuli. The main symptoms are auditory hallucinations, delusions, mood swings, and disorganized thinking. 


It's as if we all have a switchboard in our minds that we take for granted. People with this condition have difficulty keeping context running in the background, understanding proverbs, or remembering what a TV show was about. People get so deep in thought that they end up talking to themselves.


People tend to get symptoms as young adults. On average it subsides when people get into their 50s and 60s. It is the world's most expensive disease.


Interesting factoid: schizophrenia and rheumatoid arthritis almost never occur in the same person.

Friday, April 17, 2026

MN 3 CD Forum

Minnesota 3rd Congressional District candidate forum April 16. 


Tyler Bass had the best literature. Explained ITIN and H-1B. Voted for Trump three times. But it looks like he gained a few pounds since leaving the fitness business. And he might've got his dates wrong. He was born in 1997. Amy Klobuchar became Hennepin County attorney in 1999. She became a US senator in 2007.


Jeremy Westby dropped a lot of big names in the music business, Dolly Parton, Lee Greenwood, etc. He has contacts with Real America's Voice. Which sounds to me like a very good thing. But if candidates are too polished, I start to get suspicious.


Quentin Whitrock had the best opposition research on incumbent Democrat Kelly Morrison. But I think he was a never Trumper.