The AI/Human Spectrum

  • I have occasionally wondered if the management of our company is just using AI to interact with us workers. I get the same surprised reaction, literally weekly, when I raise an issue they have failed to clear. Groundhog Day: explain it all again, OK, watch this space. Next week, explain it all again🙄

  • Nah, that's just poor communication. Most people suffer from this, including me, which is why I think AI use is ultimately going to be limited.

  • The answer to your question, from where I work, is complicated. I work in a large state government's health department. A lot of our data is covered by HIPAA, and I don't see us feeding that to the public version of ChatGPT or any other public Gen AI, for very good reasons. Maybe we could do it if we installed a Gen AI in-house and kept all the compute internal. But like you said, that's going to cost beaucoup bucks. And that means it isn't going to happen, either.

    Some simple functions, not involving HIPAA data, I can see being replaced by Gen AI on a smaller scale, like a small LLM. I can see that happening in the next few years, but the overriding issue will be cost. If it runs into the millions of dollars, then forget it, it ain't gonna happen.

    At least these are my predictions.

    Kindest Regards, Rod. Connect with me on LinkedIn.

  • My limited experience with AI so far, with regard to code, is that it is not very good. It can, however, provide a good starting point if you have to deal with an unfamiliar language.

    Where I work, AI really comes into its own with the long, convoluted emails from the 'leadership team'. Copy the email into ChatGPT and ask for a summary, then ask for a further summary, and you get something that is vaguely readable. If asked, it will even rewrite the email in iambic pentameter, although the result would not be mistaken for Shakespearean verse. However, if you try to get a sonnet, it produces two very poor verses and then gives up. (This might have something to do with the quality of the source material.)
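    For what it's worth, the copy-and-summarise step can be scripted rather than done by hand. A minimal sketch in Python, assuming the OpenAI client library with an API key in the environment (the model and file names are just illustrations):

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def summarise(text: str) -> str:
            # Ask the model for a short summary of a long email.
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name
                messages=[
                    {"role": "system", "content": "Summarise this email in three short bullet points."},
                    {"role": "user", "content": text},
                ],
            )
            return response.choices[0].message.content

        email = open("leadership_update.txt").read()
        print(summarise(summarise(email)))  # the summary of the summary, as above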

    I have also noticed that voice messages on my 'phone now come with a transcript, which is good enough that I do not have to listen to the message. I presume this is done through some sort of speech recognition software which uses AI.
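    If I had to guess at the mechanics, something like the open-source whisper package would do the job. A rough sketch of the idea, not the 'phone's actual implementation (the file name is hypothetical):

        import whisper  # pip install openai-whisper; also needs ffmpeg installed

        # Load a small pretrained speech-to-text model (downloaded on first use).
        model = whisper.load_model("base")

        # Transcribe a saved voicemail; the "text" field is the message as text.
        result = model.transcribe("voicemail.wav")
        print(result["text"])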

    Overall, AI obviously uses some very clever algorithms, and the results can be useful in assisting people. In the future it might be able to replace some people, but so far I do not see the limited assistance justifying the cost of the research.

  • I tried running one of my old articles through ChatGPT, asking it to improve its readability. I was pleased with the results.

    When writing for SQLServerCentral I use the Yoast plug-in built into the WordPress editor. It tells you what you should do, though knowing what you should do and knowing how it applies to your text are two different things. ChatGPT's recommendations do both. It provides a competent editorial function.

    For coding, the results are a bit hit and miss. It can hallucinate object methods that don't exist. That's like being handed a SQL query such as

    VACUUM FULL WOMBLE sales.Orders;

    WOMBLE being the hallucination.

    I've found it good at producing Terraform variable definitions for example data structures. It is very human in what it does when asked to produce a complex RegEx. But then, ChatGPT was not intended to be a coding LLM!

    I think we're going to hear an awful lot about RAG - Retrieval Augmented Generation. At this stage I'm not 100% sure what it is. It appears to be a way of injecting relevant retrieved content into a pre-trained LLM's prompt, alongside a more context-rich question.

    If anyone could write a good article on what it is, and who does what and where, then I'd love to read it.
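    In the meantime, my rough mental model, as a sketch rather than an article: embed your documents, find the chunk nearest the question, and paste it into the prompt. A minimal illustration in Python, assuming the OpenAI client library (the documents and model names are placeholders):

        from openai import OpenAI

        client = OpenAI()

        # Placeholder knowledge base; in practice these would be chunks of your documents.
        docs = [
            "Orders older than 90 days are archived to cold storage.",
            "Refunds must be approved by a team lead.",
        ]

        def embed(texts):
            # One embedding vector per input string.
            resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
            return [d.embedding for d in resp.data]

        def answer(question: str) -> str:
            # Retrieval: score each chunk against the question. OpenAI embeddings
            # are unit length, so a dot product acts as cosine similarity.
            doc_vecs = embed(docs)
            q_vec = embed([question])[0]
            scores = [sum(a * b for a, b in zip(q_vec, v)) for v in doc_vecs]
            best = docs[scores.index(max(scores))]
            # Augmented generation: inject the retrieved chunk into the prompt.
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": "Answer using only this context:\n" + best},
                    {"role": "user", "content": question},
                ],
            )
            return resp.choices[0].message.content

    The retrieval here is a brute-force nearest-neighbour search; real systems swap in a vector database, but the shape of the trick appears to be the same.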

    When I raise a GitHub pull request, I use the PR description to give as much context as possible about what is going on. I did experiment with Codium AI to see if it could generate the description for me from the code included in the PR. The results were a bit mixed.
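    Codium AI's internals are its own business, but the general pattern I was experimenting with is easy to reproduce: hand the diff to a model and ask for a write-up. A crude sketch, not Codium's actual mechanism (the base branch name is an assumption):

        import subprocess
        from openai import OpenAI

        client = OpenAI()

        # Collect the changes the PR would introduce.
        diff = subprocess.run(
            ["git", "diff", "main...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system", "content": "Draft a pull request description: summary, motivation, and risks."},
                {"role": "user", "content": diff},
            ],
        )
        print(response.choices[0].message.content)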

    I've read a case study where a company operating a call centre was having trouble training up call centre staff due to the pressures of expansion. They used AI to help inexperienced staff get better answers from the internal documentation about particular products. I've been around long enough to be extremely sceptical about tech case studies. They almost always describe the happiest of happy paths, with friction-free adoption and results defying the "No Silver Bullet" hypothesis from Fred Brooks. Occasionally, though, some truth does bleed into the hyperbole. In the call centre case they did mention that it took a few months to train the call centre staff to use the AI system. That rings true.

  • Where I work, we don't have access to any AI, for the typical reason: "It costs money, and we don't want to spend money on anything, ever."

    Kindest Regards, Rod. Connect with me on LinkedIn.
