AKA jerk: useful terms for 2025

bastard

dog

clown

joker

skunk

creep

idiot

rat

snake

beast

moron

brute

brat

villain

heel

swine

fool

scumbag

boor

schmuck

louse

toad

cad

bugger

scum

nuisance

so-and-so

bounder

lout

stinker

rotter

pill

reptile

slob

hound

buzzard

cur

dirtbag

bleeder

varmint

vermin

slimeball

cretin

sleazebag

blighter

sleazeball

slime

barbarian

son of a gun

churl

sod

nerd

crud

fink

sleaze

crumb

scuzzball

savage

stinkard

chuff

loudmouth

rat fink

wretch

scoundrel

rogue

caveman

jackass

scab

dolt

oaf

lowlife

vulgarian

imbecile

scamp

roughneck

miscreant

Neanderthal

blockhead

turkey

snob

dork

ninny

booby

dope

goon

doofus

airhead

pest

insolent

nut

nincompoop

snot

schmoe

nitwit

schmo

dink

dweeb

snip

nit

half-wit

snoot

birdbrain

Notes on hallucination

  • Analysts have estimated that chatbots hallucinate as much as 27% of the time, with factual errors present in 46% of their responses.
  • Indeed, ChatGPT and other sophisticated chatbots regularly put out false information. But that information is packaged in such eloquent, grammatically correct prose that it’s easy to accept it as truth.
  • Errors in encoding and decoding between text and internal representations can cause hallucinations. When an encoder learns the wrong correlations between different parts of the training data, the model can produce output that diverges from the input.
  • AI hallucinations are caused by a variety of factors, including biased or low-quality training data, a lack of context provided by the user, or insufficient programming that keeps the model from correctly interpreting information.
  • Northwestern’s Riesbeck said generative AI models are “always hallucinating.” Just by their very nature, they are always “making up stuff.” So eliminating the possibility of these models ever generating false information could be difficult, if not impossible.
  • Temperature is a parameter that controls the randomness of an AI model’s output: it sets the balance between creativity and conservatism in generated text, where a higher temperature increases randomness and a lower temperature makes the output more deterministic. In short: the higher the temperature, the more likely a model is to hallucinate. Companies can let users adjust the temperature to their liking, and set a default that strikes a proper balance between creativity and accuracy (a rough sketch of temperature-scaled sampling follows below).

Whereas no temperature settings are required for humans, who can actually be creative and accurate at the same time. 
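To make the temperature point concrete, here is a minimal, self-contained sketch of temperature-scaled sampling in Python. The vocabulary, logits, and function name are invented for illustration; they don’t correspond to any particular model or vendor API, and real LLM services simply expose temperature as a request parameter rather than something you compute yourself.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from raw logits after temperature scaling.

    A temperature near 0 approaches greedy (argmax) decoding; a temperature
    above 1 flattens the distribution, making unlikely (and potentially
    hallucinated) tokens more probable.
    """
    if temperature <= 0:
        # Degenerate case: fully deterministic, pick the highest logit.
        return max(range(len(logits)), key=lambda i: logits[i])

    # Divide logits by temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw one index according to the resulting distribution.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy vocabulary and logits: the model is fairly confident about "Paris".
vocab = ["Paris", "Lyon", "Atlantis", "banana"]
logits = [4.0, 2.0, 0.5, -1.0]

for t in (0.2, 1.0, 2.0):
    draws = [vocab[sample_with_temperature(logits, t)] for _ in range(1000)]
    share_correct = draws.count("Paris") / len(draws)
    print(f"temperature={t}: 'Paris' sampled {share_correct:.0%} of the time")
```

Running this shows the trade-off the notes describe: at low temperature the model almost always picks its top answer, while at high temperature the long tail, including nonsense like “Atlantis,” gets sampled far more often.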

Sources:
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
https://builtin.com/artificial-intelligence/ai-hallucination
https://www.lighton.ai/blog/llm-glossary-6/turning-up-the-heat-the-role-of-temperature-in-generative-ai-49
https://builtin.com/artificial-intelligence/eliza-effect

So it was

So it was that, after the Deluge, the Fallout, the plagues, the madness, the confusion of tongues, the rage, there began the bloodletting of the Simplification, when remnants of mankind had torn other remnants limb from limb killing rulers, scientists, leaders, technicians, teachers, and whatever persons the leaders of the maddened mobs said deserved death for having helped to make the Earth what it had become. Nothing had been so hateful in the sight of these mobs as the man of learning, at first because they had served the princes, but then later because they refused to join in the bloodletting and tried to oppose the mobs, calling the crowds ‘bloodthirsty simpletons.’

–Walter M. Miller Jr., A Canticle for Leibowitz, 1959.