February 24, 2025 at 12:00 am
Comments posted to this topic are about the item Trust is a Funny Thing
February 24, 2025 at 4:19 am
From the article:
"Essentially one mistake overrides everything else."
We had a saying in the Navy... "One 'Aw Shit' wipes out a thousand 'Attaboys' ".
And, it doesn't matter if it's for a human, some other form of animal, a machine, or AI.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 24, 2025 at 8:53 am
With AI, I know what it is, I know what it is for, it is a mechanical Turk. I can forgive its inaccuracies because most of the time it is genuinely useful. If it doesn't give me the right answer then it gives me avenues to explore that take me further on. I've got it pinned to the "Trust, but verify" mantra.
The biggest destroyers of trust for me are those who will confidently advance a wrong answer and then exhibit poor behaviours when they are wrong. The poorer the behaviour, the less I trust them, because I am not confident in their motivations.
I have peers whom I trust. Not because they are always right, but because if any of us gives the wrong answer we discuss and debate it and advance all our knowledge. Our motivation is a genuine desire to obtain the correct understanding.
AI states things with supreme confidence. The difference is that it has no motivations. If you tell it that the answer it gave was wrong it doesn't gaslight you or undermine you.
I was reading through the Databricks documentation on "Judges". These are components designed to reduce AI hallucinations. One of them is "Groundedness". Groundedness measures the extent to which an answer comes from the source material versus how much is extrapolated (and possibly hallucinated). There are others that assess whether the information sits within a defined chunk, and how many chunks the information can safely span while still supporting a confident answer. We are already at the point where telling AI that it has the wrong answer can result in a change in how the model will answer in future.
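To make the idea concrete, here is a rough, hypothetical Python sketch of what a groundedness check is getting at. It is not the Databricks implementation (their judges use an LLM to do the scoring); the function name, the token-overlap scoring, and the 0.6 threshold are all made up purely for illustration.

import re

def groundedness_score(answer: str, source_chunks: list[str]) -> float:
    """Fraction of answer sentences whose tokens mostly appear in some source chunk."""
    sentences = [s.strip() for s in re.split(r"[.!?]", answer) if s.strip()]
    if not sentences:
        return 0.0
    chunk_tokens = [set(chunk.lower().split()) for chunk in source_chunks]
    grounded = 0
    for sentence in sentences:
        tokens = set(sentence.lower().split())
        # Count a sentence as grounded if most of its tokens occur in a single chunk.
        if any(len(tokens & chunk) >= 0.6 * len(tokens) for chunk in chunk_tokens):
            grounded += 1
    return grounded / len(sentences)

# Example: half the answer is supported by the source, half is extrapolated.
chunks = ["The orders table is partitioned by month on the order_date column."]
answer = "The orders table is partitioned by month. It also uses columnstore indexes."
print(groundedness_score(answer, chunks))  # 0.5

A real judge asks the same question ("how much of this answer came from the sources?") with a language model rather than token overlap, but the inputs and the pass/fail intent are the same.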
February 24, 2025 at 11:20 am
One thing maybe most of you are not into yet, but which seems to me to be akin to AI, is using subtitles or closed captioning on TV, especially in live broadcasts such as news. Lots of conversation comes across OK, but many critical things such as nouns and proper names get really buggered. And maybe one of the scariest parts is when you see Kamala Harris. Wow, you talk about artificial intelligence...
Rick
Disaster Recovery = Backup ( Backup ( Your Backup ) )
February 24, 2025 at 11:31 am
David, I have to adamantly disagree with your thought that AI 'has no motivation'. I think motivation is one of its greatest dangers. AI is created by people, and people have agendas.
Rick
Disaster Recovery = Backup ( Backup ( Your Backup ) )
February 24, 2025 at 11:46 am
Jeff, another good illustration is how much financial advisors seem to depend more and more on software analysis for investing our money. I've been retired now for 15 years and depend on FAs for handling my assets. You talk about "Aw, shit" moments, try logging in to your accounts at day's end and seeing you lost $4k that day. I have to override their systems by keeping my own cash account to move funds in and out of their AI.
Rick
Disaster Recovery = Backup ( Backup ( Your Backup ) )
February 24, 2025 at 12:38 pm
David, I have to adamantly disagree with your thought that AI 'has no motivation'. I think motivation is one of its greatest dangers. AI is created by people, and people have agendas.
I know where you are coming from. I mean that it has no desire, it has no emotion, and it is not self-aware, and it is a long way off that, according to Grady Booch.
Can it be used for malign intent? Absolutely and especially by those likely to claim that nothing touched the trigger but the Devil's right hand.
February 24, 2025 at 1:24 pm
And here's another one on AI. My wife is constantly sending me links to stuff on Instagram Reels with people dramatically blathering on about something. My usual question is "what are their qualifications behind their opinions?"
Rick
Disaster Recovery = Backup ( Backup ( Your Backup ) )
February 24, 2025 at 5:08 pm
And here's another one on AI. My wife is constantly sending me links to stuff on Instagram Reels with people dramatically blathering on about something. My usual question is "what are their qualifications behind their opinions?"
I found the initial fun from Instagram and Twitter/X content descended into toxicity. I found it had become a source of stress rather than pleasure. I found it impossible to dial down, let alone avoid the content that was causing me stress. I voted with my feet and deleted my account and removed the apps.
I'm in a few Reddit groups because Reddit has specific communities, but even that recommends stuff I have negative interest in.
Decades of change and experimentation just to find out that SQLServerCentral remains one of the best online forums.
February 26, 2025 at 2:27 am
Decades of change and experimentation just to find out that SQLServerCentral remains one of the best online forums.
The system won't let me give that comment the million or so LIKEs that it deserves.
--Jeff Moden
Change is inevitable... Change for the better is not.