Is “cis” a slur?

You may have heard that Twitter now considers “cis” and “cisgender” to be slurs.

I asked ChatGPT what makes a word a slur. Here’s what it said.

A slur is a derogatory or offensive term used to demean or insult a particular individual or group of people based on their race, ethnicity, gender, sexual orientation, religion, or other characteristics. Slurs are typically intended to belittle, dehumanize, or marginalize the targeted group.

Several factors contribute to a word being considered a slur:

1. Derogatory Intent: Slurs are intentionally used to demean or disparage individuals or communities. The primary purpose of a slur is to insult or offend.

2. Historical Context: Many slurs have a long history of being used to oppress or discriminate against marginalized communities. The historical use and association of a word with discrimination or violence contribute to its classification as a slur.

3. Impact and Offensiveness: Slurs often evoke strong negative emotions and perpetuate harmful stereotypes. They can be deeply hurtful and dehumanizing to those targeted by them, causing psychological and emotional harm.

4. Societal Consensus: The classification of a word as a slur can also depend on societal consensus. The general perception of a term within a community or society, especially among the affected group, plays a significant role in determining whether it is considered a slur.

It is important to note that slurs can vary across cultures, contexts, and time periods. What may be considered a slur in one culture or era might not have the same connotation in another. Additionally, the impact and offensiveness of slurs can evolve over time as societies progress and language norms change.

That seems mostly reasonable, although it misses a key component, namely: what do the people the word is used to describe think of it?

Is “cis” used with derogatory intent? I’m not sure, but I think it’s at least slightly derogatory.

It has no historical context because all this “gender” foolishness started 10 minutes ago.

Is it offensive? I don’t know. Slightly, maybe.

Is there a consensus? How could there be? If you disagree with the trans narrative you get social media murdered.

Here’s what J.K. Rowling has to say about the words, although she doesn’t address whether they are “slurs.”

J.K. Rowling on cis and cisgender

If AI is going to get out of hand …

A lot of people are predicting that AI will get out of hand and create a nightmare scenario. I don’t know if I believe that or not. Computers are pretty amazing, but I suspect there are limits to what they can do.

However, it’s certainly a possibility we shouldn’t ignore.

Given that, what is there to do about it?

Regulation isn’t going to stop it. First of all, the regulators aren’t smart enough. But even if they were, we can’t regulate every country in the world, nor can we get every country to agree to regulations.

People have proposed things like …

  1. Ethical guidelines for AI. Who says AI would follow them?
  2. Oversight bodies. That’s like the regulators. They’re not up to the task.
  3. AI kill switches. Who says AI won’t be able to disable the switch?

The solution in Dune was to destroy all “thinking machines” and kill anybody who tried to make one. Up to and including nuking them.

Is there a better option?

Here’s a long but interesting discussion on the future of AI and humanity with Mo Gawdat, who was some bigwig at Google.

Scientific discourse and the magnification of downstream effects

You all probably know that Jordan Peterson has a very strange diet. He only eats beef.

Let’s say I was curious about that and asked his doctor, “How is Dr. Peterson doing on his all-beef diet?”

The doc could reply with an answer to my question — i.e., he could tell me how this diet is affecting Dr. Peterson.

It’s pretty likely that he would worry I’d assume that if the diet is good for Dr. Peterson, it would be good for me as well. So he might add, “but everybody’s different, and just because it works for Dr. Peterson doesn’t mean it will work for you. In fact, it’s very unlikely that this diet is good for most people.”

That seems to be a reasonable level of caution. Experts have to think about how their advice will be used, and that means sometimes they can’t simply give a straight answer to a straight question.

But what if the doctor was also a climate alarmist who believed humanity needed to move to a plant-based diet? He might simply tack that onto his answer:

“An all-beef diet seems to be working for Dr. Peterson, but it’s not a good thing for the planet because [insert ideological baloney].”

That kind of response is skirting the edges of a “reasonable level of caution.” The man’s a doctor, not an expert on climate, or world economies, or any of the things that go into deciding how the world ought to feed itself.

The next step is far worse. Because the doc believes the alarmist conspiracy theories about how methane from cows will destroy the planet, he decides he has to lie about the effect of the diet on Dr. Peterson.

This is where we get into the “noble lie,” where experts tell us things they know are not true because they’re concerned with how the answer will affect downstream issues.

I think this is happening at an increasing rate. Experts are presenting false claims because they’re afraid of the consequences if the truth gets out. They’re also afraid of the consequences to their own careers if they say something that the ideologues believe will cause harmful downstream effects.

I can see two ways to prevent this.

The first is to re-emphasize basic honesty. The second is to provide something like whistleblower protection for dissenters.