June 5, 2024 at 12:00 am
Comments posted to this topic are about the item Using AI for the First Draft
June 5, 2024 at 6:59 am
With a few exceptions, I've found that the time gained by generated code is lost to the manual process of debugging and refactoring.
Using an LLM for Java/C#, etc. strikes me as trying to use LLM questions as a declarative language to abstract away the complexity of the underlying language, much as SQL does for interacting with an RDBMS.
June 5, 2024 at 2:03 pm
I've used generative AI, such as GitHub Copilot, at home for analyzing code, both to tell me what a method is doing and where the problems in the code are. I find that helpful, but it doesn't go the whole nine yards. However, it's good enough that I'll continue to use gen AI at home.
Unfortunately, my employer won't pay for it, so at least for the next few years we won't be able to benefit from using it.
Kindest Regards, Rod
Connect with me on LinkedIn.
June 5, 2024 at 2:09 pm
I agree with David Poole. To put it in less formal terminology, I've found that tools like ChatGPT are nothing more than a very sophisticated consensus engine and, sadly, the source of that consensus is the same as what you'd find with a normal search engine search. The consensus frequently overwhelms the correct outlier sources... and some of the answers are actually dangerous in one form or another.
I've been checking ChatGPT 3.5, ChatGPT 4o, Copilot, Gemini, and a few others. They all very confidently fail by producing code that will "silently fail" (produce incorrect answers in some instances with no indication that an error has been made) in response to the question, "In SQL Server (T-SQL), what is the code to convert a DATETIME2 value to DATETIME?" They ALL got it wrong. Some do correctly point out that resolution may be a factor, but the code they produce will provide the incorrect answer in many cases that most people (even "experts" on the Internet) never test for.
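As a minimal sketch of the kind of silent failure being described (assuming the plain CAST that such tools typically suggest, not necessarily the exact code any one of them returned): DATETIME only has a resolution of roughly 1/300 of a second, so a straight CAST rounds the DATETIME2 value and can even roll it into the next day with no error raised.

-- A DATETIME2(7) value just before midnight
DECLARE @dt2 DATETIME2(7) = '2024-06-05 23:59:59.9999999';

-- The "obvious" conversion most tools suggest
SELECT CAST(@dt2 AS DATETIME) AS NaiveCast;
-- Returns 2024-06-06 00:00:00.000: the value silently rolls into the next day
-- because DATETIME rounds to the nearest 1/300 second (.000/.003/.007 ms endings).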
Of course, it's not just that one question.
The really interesting part is how people justify its use to augment coding. The problem is that a lot of programmers who actually don't know how to do something won't know that the very confident-sounding answers of the monster before them are, in fact, wrong.
Of course, that would also be true of people using normal search engines. AI is just a quicker and more expensive manifestation of the same problems. Ironically, that makes it "more human". 😉
--Jeff Moden
Change is inevitable... Change for the better is not.