plantimals on Nostr:
the "abliterated" and "uncensored" models do well. though it would be better to train them without biases in the first place rather than tuning them up after the fact.
as for honesty, sometimes lying is just answering a question for which the correct answer has slid out of the context window.
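That last point can be sketched concretely. Below is a minimal, hypothetical illustration (the helper name `fit_to_window` and the character-count token stand-in are assumptions, not any real API): with a fixed context budget, the oldest turns get dropped, so the fact needed for a correct answer may literally no longer be in the prompt the model sees.

```python
def fit_to_window(messages, max_tokens, count_tokens=len):
    """Keep the most recent messages whose total 'token' count fits the budget.

    count_tokens defaults to len (characters) purely for illustration;
    a real system would use an actual tokenizer.
    """
    kept, total = [], 0
    for msg in reversed(messages):        # walk newest-first
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break                         # older messages slide out here
        kept.append(msg)
        total += cost
    return list(reversed(kept))           # restore chronological order

history = [
    "user: my API key is XYZ-123",        # the fact the correct answer needs
    "assistant: noted!",
    "user: tell me a long story...",
    "assistant: once upon a time..." * 10,
    "user: what was my API key?",
]

window = fit_to_window(history, max_tokens=380)
# The first message no longer fits: the model never sees the key, so
# whatever it answers is confabulation rather than deliberate lying.
print("key still in context:", any("XYZ-123" in m for m in window))
```

The point of the sketch is that truncation happens silently on the way into the model; from the inside, an absent fact and a never-stated fact are indistinguishable.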
Published at 2025-05-31 16:51:58

Event JSON
{
  "id": "c78cedb9aabec42ac54e80b5329c7f7641508a3e19c36b2fec0ac68f5b710041",
  "pubkey": "dd81a8bacbab0b5c3007d1672fb8301383b4e9583d431835985057223eb298a5",
  "created_at": 1748710318,
  "kind": 1,
  "tags": [
    [
      "e",
      "e9a4e9b335b37145deee528aed59bf53d22045eded3a91dc9816f82d27fa9f3a",
      "",
      "root"
    ],
    [
      "p",
      "726a1e261cc6474674e8285e3951b3bb139be9a773d1acf49dc868db861a1c11"
    ]
  ],
  "content": "the \"abliterated\" and \"uncensored\" models do well. though it would be better to train them without biases in the first place rather than tuning them up after the fact.\n\nas for honesty, sometimes lying is just answering a question for which the correct answer has slid out of the context window.",
  "sig": "f3b300769bb47fb8176c7dd2473024249c9ba8ac15fb21905b3656f8d0ad66631d9a2187cd7c97e234c5277c227b7aa589187f0266483170fe8419dab511dcab"
}