September 14, 2024 at 12:00 am
Comments posted to this topic are about the item AI Sings the Blues
September 14, 2024 at 8:12 am
I think we are near the peak of the hype cycle for AI.
Last week I got something useful out of ChatGPT. I asked it how to do some data manipulation in Terraform HCL that I didn't know how to do. The answer was bug-ridden, but close enough for me to get the idea and learn while debugging. That, for me, was a win.
Getting the answer I needed took several attempts. I find it somewhat ironic that the phraseology that got me to an acceptable answer was pseudo-SQL.
It seems that people want to use English as a declarative language (ghost of MSSQL 2000), but English is a strange language, full of traps for the unwary.
The idea seems to be that we will replace developers with people who are incredibly good at writing questions in a phraseology that yields effective results. Plus ça change, plus c’est la même chose
September 14, 2024 at 10:35 pm
AI is a powerful tool, but you are the maestro who brings it to life. It’s like having a grand piano at your disposal—shouting "Play!" won't produce a single note. You must sit down and engage with it, refining your prompts and responses in an ongoing, iterative dance. AI can be a brilliant learning partner, though it occasionally hallucinates; your role is to remain vigilant, carefully inspecting and validating its outputs to discern what is real from what is merely illusion.
I used Gemini-1.5-pro with the fabric extract_wisdom_agents command "pbpaste | ExtractWisdomAgents gemini-pro".
Here is the summary:
# SUMMARY
- The speaker discusses their mixed feelings about Generative AI (GenAI) like ChatGPT and CoPilot, acknowledging both the potential benefits and drawbacks.
- They highlight an article, "AI Blues," which explores the decreasing tolerance for AI errors and the potential trade-off between accuracy and creativity in AI models.
- The speaker contemplates the possibility of specialized AI models trained for specific tasks or industries, while also questioning the economic viability of such an approach.
# IDEAS:
- Many people are uncertain about the usefulness of GenAI, approaching it with skepticism.
- "AI Blues" discusses the decreasing tolerance for errors in AI systems.
- There is a potential trade-off: reducing errors in AI might limit its creativity.
- Current AI models are not perceived as highly creative.
- AI could potentially surpass beginners or intermediate users in certain tasks.
- AI models might benefit from more focused and in-depth training.
- Specialized AI models could focus on specific areas like Python programming or marketing.
- Separate models could prioritize accuracy (e.g., error-free Python code) or creativity (e.g., innovative solutions with potential errors).
- Tailored AI models for specific industries like marketing or law could be beneficial.
- The financial viability of training and maintaining numerous specialized AI models is uncertain.
- Current AI revenue generation is questionable, making further investment challenging.
# INSIGHTS:
- The increasing dissatisfaction with AI errors reflects a growing expectation for AI to perform flawlessly, potentially hindering its development.
- The potential trade-off between accuracy and creativity in AI development presents a significant challenge, as both are desirable yet potentially conflicting goals.
- The concept of specialized AI models tailored for specific tasks or industries highlights the need for a more nuanced approach to AI development, moving beyond general-purpose models.
- The economic viability of specialized AI models is a crucial consideration, particularly in light of the current challenges in generating revenue from AI technologies.
# QUOTES:
- "Quite a few of you aren't thrilled or enamored by GenAI (Generative Artificial Intelligence) with things like ChatGPT or CoPilot."
- "I find this piece on AI Blues fascinating to read. It talks about some of the problems with AI and how people are becoming less tolerant of the small errors that AIs make."
- "The thing in the piece that really caught my eye is that training models to make less errors might make them less creative."
- "Perhaps some of the models ought to be trained separately, with some having a more specific focus, like learning Python and producing very few errors, or learning Python to produce interesting solutions to large scale problems even if there are some errors in the code."
# HABITS:
- Seeking out and engaging with thought-provoking content on AI and its implications.
- Critically evaluating the potential benefits and drawbacks of new technologies.
- Considering the ethical and societal impacts of AI development.
- Exploring alternative perspectives and approaches to AI training and deployment.
# FACTS:
- Generative AI, such as ChatGPT and CoPilot, is a rapidly developing field with the potential to impact various industries.
- Training AI models to reduce errors can sometimes limit their creativity and ability to generate novel solutions.
- There is a growing demand for specialized AI models tailored to specific tasks or industries.
# REFERENCES:
- GenAI (Generative Artificial Intelligence)
- ChatGPT
- CoPilot
- AI Blues (article)
- Python (programming language)
# ONE-SENTENCE TAKEAWAY:
While GenAI offers potential, the trade-off between accuracy, creativity, and economic viability requires careful consideration.
# RECOMMENDATIONS:
- Explore the "AI Blues" article to gain a deeper understanding of the challenges and opportunities presented by AI.
- Consider the potential benefits of specialized AI models for specific tasks or industries relevant to your work or interests.
- Engage in discussions and stay informed about the ethical and societal implications of AI development and deployment.
- Experiment with different GenAI tools and platforms to gain firsthand experience with their capabilities and limitations.
- Support research and development efforts focused on creating more robust, transparent, and ethical AI systems.
# AGENT TEAM SUMMARIES:
- The SUMMARY agent team identified the speaker's mixed feelings towards GenAI and their exploration of AI's potential and challenges.
- The IDEAS agent team extracted key concepts from the text, highlighting concerns about AI accuracy, creativity, and economic viability.
- The INSIGHTS agent team synthesized the ideas, emphasizing the tension between AI accuracy, creativity, and the need for specialized models.
- The QUOTES agent team selected key statements reflecting the speaker's thoughts on AI's potential and limitations.
- The HABITS agent team inferred habits from the text, suggesting the speaker is thoughtful, critical, and engaged with AI's implications.
- The FACTS agent team extracted verifiable information about GenAI, AI training, and the demand for specialized models.
- The REFERENCES agent team listed specific AI technologies, concepts, and resources mentioned in the text.
- The ONE-SENTENCE TAKEAWAY agent team condensed the core message into a concise statement about GenAI's potential and challenges.
- The RECOMMENDATIONS agent team provided actionable steps for further exploration, engagement, and responsible AI development.
On Medium.com, I responded to an article:
I reread your article and realized I had overlooked a significant point: the iterative refinement of prompt engineering in your summary.
I am incorporating this into my teaching because textbooks typically provide examples of code, and I am using AI to reverse-engineer the code into propositions and specifications in Markdown language.
Additionally, I wanted to see what I could generate with the prompt: “Create a Periodic Task Scheduling utility similar to CRON using two separate modules.”
Try executing prompts developed through an iterative prompt/response refinement process, and let the magic begin!
Prompt 1 Example:
Create a proposition from the code below and a detailed functional specification in Markdown.
use Northwinds2022TSQLV7;
go
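-- View: one row per order with its total quantity and discounted order total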
create view [Sales].[uvw_OrderValues]
--with schemabinding
as
select O.OrderId,
O.CustomerId,
O.EmployeeId,
O.ShipperId,
O.OrderDate,
O.RequiredDate,
O.ShipToDate,
sum(OD.Quantity) as TotalQuantity,
cast(sum(OD.Quantity * OD.UnitPrice * (1 - OD.DiscountPercentage)) as numeric(12, 2)) as DiscountedTotalAmount
from Sales.[Order] as O
join Sales.OrderDetail as OD
on O.OrderId = OD.OrderId
group by O.OrderId,
O.CustomerId,
O.EmployeeId,
O.ShipperId,
O.OrderDate,
O.RequiredDate,
O.ShipToDate;
go
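-- Rank each customer's orders by discounted total, highest first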
select uov.EmployeeId,
uov.CustomerId,
uov.DiscountedTotalAmount,
row_number() over (partition by uov.CustomerId order by uov.DiscountedTotalAmount desc) as rownum
from Sales.uvw_OrderValues as uov
order by uov.CustomerId,
         uov.DiscountedTotalAmount desc;
Prompt 2 Example:
Create a Periodic Task Scheduling utility similar to CRON using two separate modules and an enumeration for scheduled interval types, start time, reschedule flag, new start time, and queue status. The utility should include an asynchronous await listener on the queue, leveraging a timer queue without polling. Document each module in detail using Markdown files with code hints. Use autodocstring within the code to ensure proper documentation for both the code itself and for tooltip integration.
Prompt for Implementation
Task: Develop a Periodic Task Scheduling utility similar to CRON.
Requirements:
## Modules:
1. Create two distinct modules for the task scheduler.
2. Use an Enum to define the scheduling interval types (e.g., minutes, hours, daily, weekly, monthly, quarterly, semi-annual, yearly).
## Functionality:
1. Include parameters for start time, a reschedule flag, a new start time, and a queued status.
2. Implement an asynchronous await listener associated with the timer queue to handle task scheduling without polling.
## Documentation:
1. Provide detailed Markdown documentation for each module, including code hints.
2. Use autodocstring in the code to document functions, classes, and methods, ensuring the documentation is available for code tooltips and reference.
## Output:
1. Two modules with Markdown documentation and autodocstring integration for comprehensive code explanations.
2. Markdown documentation for both modules that provides detailed user guidance, with detailed explanations as code hints.
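Purely as an illustration of what that prompt describes, here is a minimal single-file sketch in Python using asyncio. The names (IntervalType, ScheduledTask, TaskScheduler), the reduced set of interval types, and the single-module layout are my own simplifications for brevity; the prompt itself asks for two modules and full Markdown documentation. The docstrings stand in for what autodocstring would generate and what editors surface as tooltips.

import asyncio
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional

class IntervalType(Enum):
    """Scheduled interval types (an illustrative subset of the prompt's list)."""
    MINUTES = timedelta(minutes=1)
    HOURS = timedelta(hours=1)
    DAILY = timedelta(days=1)
    WEEKLY = timedelta(weeks=1)

@dataclass(order=True)
class ScheduledTask:
    """A queued task: start time, interval, reschedule flag, optional new start time, queue status."""
    start_time: datetime                     # the only field used for ordering in the timer queue
    name: str = field(compare=False)
    interval: IntervalType = field(compare=False)
    reschedule: bool = field(default=True, compare=False)
    new_start_time: Optional[datetime] = field(default=None, compare=False)
    queued: bool = field(default=True, compare=False)

class TaskScheduler:
    """Timer-queue scheduler: the listener awaits the next deadline instead of polling."""
    def __init__(self) -> None:
        self._queue: list = []               # min-heap ordered by start_time
        self._wakeup = asyncio.Event()       # set whenever a new task might change the next deadline

    def add(self, task: ScheduledTask) -> None:
        """Queue a task and wake the listener so it can re-check the earliest deadline."""
        heapq.heappush(self._queue, task)
        self._wakeup.set()

    async def listen(self) -> None:
        """Run forever, sleeping exactly until the next due task (no polling loop)."""
        while True:
            if not self._queue:
                await self._wakeup.wait()    # idle until something is queued
                self._wakeup.clear()
                continue
            delay = (self._queue[0].start_time - datetime.now()).total_seconds()
            if delay > 0:
                try:
                    # Sleep until the deadline, unless add() signals an earlier task.
                    await asyncio.wait_for(self._wakeup.wait(), timeout=delay)
                    self._wakeup.clear()
                    continue                 # the head may have changed; recompute the deadline
                except asyncio.TimeoutError:
                    pass                     # the deadline arrived
            task = heapq.heappop(self._queue)
            print(f"{datetime.now():%H:%M:%S} running {task.name}")  # a real scheduler would invoke the task here
            if task.reschedule:              # re-queue at new_start_time or one interval later
                task.start_time = task.new_start_time or (task.start_time + task.interval.value)
                task.new_start_time = None
                heapq.heappush(self._queue, task)

async def main() -> None:
    scheduler = TaskScheduler()
    scheduler.add(ScheduledTask(datetime.now() + timedelta(seconds=2), "refresh-stats", IntervalType.MINUTES))
    await scheduler.listen()                 # runs until interrupted

asyncio.run(main())

The asyncio.Event plus wait_for(timeout=...) pair is what gives the "timer queue without polling" behaviour: the listener stays asleep until either the next deadline or a newly added task that moves that deadline earlier.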
Consider using AI to create Python documentation based on auto-docstrings; documentation is often a low priority for developers but is extremely important, and the same pass can create the tooltips at the same time.
Have fun with it!
September 16, 2024 at 1:22 am
AI is something which I firmly believe will have a major impact on our lives and work. Indeed, I see that it already is, when I read of various companies not hiring people in favor of letting some GenAI or ML system do the work instead. So, to say that AI won't have an impact is, IMO, to deny the reality of those who have already been impacted by systems using AI in some form.
Nevertheless, I am finding AI to be a mixed bag. I have used GitHub Copilot, Microsoft Copilot, and ChatGPT to do development, both for personal projects and for some at work. My problem with using some of these at work is that my employer won't spend a dime on any AI. I don't know if they ever will, or if it will be God only knows how many years before they do. Not being able to use it live, at my job, is a major hindrance. For example, my biggest issue at this time is trying to get a new Blazor app I have been working on to work with Windows Authentication, using .NET 8. I checked with GitHub Copilot, and it said that, sure enough, I could do that. Then I spent weeks trying to get it to work, because I could only use AI when I'm home, then take that to the office, then take those results back home, and so on. A lot of bureaucratic nonsense, because my employer won't spend any money on anything. Anyway, I've finally come to the conclusion that it cannot be done using .NET 8. It could with .NET 6, but not .NET 8. This is a situation in which the AI tool responded with great confidence that something could be done when it can't. I wish the authors of these systems would make the AI be honest: if something cannot be done, say so. Maybe offer some other approach, or just say, "Sorry folks, it can't be done" and leave it at that.
On the positive side, I was trying to get an API working in a project. I asked GitHub Copilot for help, and the answer it gave me was both insightful and correct. In under 5 minutes I had a working program, whereas it would have taken me hours, or days, to do it the way I've always done it.
It's a mixed bag. I find that sometimes it can be incredibly useful, as in the second case. But at other times it can lead me down a path where, because of the constraints put on me by my employer, I am handicapped for long periods at a time.
Kindest Regards, Rod
Connect with me on LinkedIn.
September 16, 2024 at 12:00 pm
AI is transforming our world at a pace reminiscent of moving from horse-drawn carriages to the SSC Tuatara—the fastest stock commercial car, blazing ahead at 295 miles per hour. Despite its relatively recent emergence, commercial AI is already reshaping industries and workflows with remarkable speed, embodying both the promise of innovation and the challenges of rapid change.
AI tools like GitHub Copilot, Microsoft Copilot, ChatGPT, and new entrants like Claude.ai, Google Gemini, Ollama.ai, and Hugging Face's expansive LLM repository are pushing the boundaries of what's possible. Yet, they also reveal the growing pains of a disruptive technology that is both an asset and a challenge. For many, AI has already begun to redefine traditional roles, highlighting the need for a Darwinian adaptability that aligns with Moore's Law.
However, AI's double-edged nature is undeniable. These tools can be incredibly efficient, providing rapid solutions that save hours or even days of work, as demonstrated by successful API integrations. But they can also mislead, offering confident but flawed advice, especially when access is restricted or unsupported by employers. This disconnect can lead to frustrating moments—struggling with tech stacks that AI assures will work but ultimately don’t—underscoring the need for AI systems to more clearly communicate their limitations.
In this rapidly evolving landscape, AI is advancing faster than real-world implementation, often slowed by organizational inertia and mixed reliability. The journey from skepticism to seamless integration is ongoing; while the ride may be bumpy, the potential rewards are significant. Just as computers were once seen as job killers only to create countless new opportunities across skill levels, AI presents an invitation to evolve, adapt, and thrive in this new era of work.
September 16, 2024 at 3:26 pm
I think we are near the peak of the hype cycle for AI.
...
The idea seems to be that we will replace developers with people who are incredibly good at writing questions in a phraseology that yields effective results. Plus ça change, plus c’est la même chose
Thank you, but I think we have more time.
More hype is coming. I predict two more Ignite/Re:Invent/Google-whatever cycles
September 16, 2024 at 3:28 pm
AI is a powerful tool, but you are the maestro who brings it to life. It’s like having a grand piano at your disposal—shouting "Play!" won't produce a single note. You must sit down and engage with it, refining your prompts and responses in an ongoing, iterative dance. AI can be a brilliant learning partner, though it occasionally hallucinates; your role is to remain vigilant, carefully inspecting and validating its outputs to discern what is real from what is merely illusion.
...
Consider using AI to create documentation in Python based on auto-documentation, which is often a low priority for developers but is extremely important. It can also simultaneously create tooltips.
Have fun with it!
Interesting description, and I like the opening. However, the problem is that often people aren't partners; they are students expecting the AI to lead, which is a dangerous place to be.
December 20, 2024 at 8:55 am
AI is something which I firmly believe will have a major impact on our lives and work. Indeed, I see that it already is, when I read of various companies not hiring people in favor of letting some GenAI or ML system do the work instead. So, to say that AI won't have an impact is, IMO, to deny the reality of those who have already been impacted by systems using AI in some form.
Nevertheless, I am finding AI to be a mixed bag. I have used GitHub Copilot, Microsoft Copilot, and ChatGPT to do development, both for personal projects and for some at work. My problem with using some of these at work is that my employer won't spend a dime on any AI. I don't know if they ever will, or if it will be God only knows how many years before they do. Not being able to use it live, at my job, is a major hindrance. For example, my biggest issue at this time is trying to get a new Blazor app I have been working on to work with Windows Authentication, using .NET 8. I checked with GitHub Copilot, and it said that, sure enough, I could do that. Then I spent weeks trying to get it to work, because I could only use AI when I'm home, then take that to the office, then take those results back home, and so on. A lot of bureaucratic nonsense, because my employer won't spend any money on anything. Anyway, I've finally come to the conclusion that it cannot be done using .NET 8. It could with .NET 6, but not .NET 8. This is a situation in which the AI tool responded with great confidence that something could be done when it can't. I wish the authors of these systems would make the AI be honest: if something cannot be done, say so. Maybe offer some other approach, or just say, "Sorry folks, it can't be done" and leave it at that.
On the positive side, I was trying to get an API working in a project. I asked GitHub Copilot for help, and the answer it gave me was both insightful and correct. In under 5 minutes I had a working program, whereas it would have taken me hours, or days, to do it the way I've always done it. I also recently used Rewritify, an AI that creates and modifies text; it makes writing for work a lot easier.
It's a mixed bag. I find that sometimes it can be incredibly useful, as in the second case. But at other times it can lead me down a path where, because of the constraints put on me by my employer, I am handicapped for long periods at a time.
AI's confidence in providing answers, even when incorrect or outdated, can be both misleading and time-consuming, as you experienced with the Blazor app and Windows Authentication in .NET 8. These systems are often trained on historical data, which may not always account for the nuances or latest changes in technologies. Improving transparency in AI responses, such as acknowledging uncertainties or suggesting alternative solutions, would greatly enhance their reliability.
Your success with using GitHub Copilot to implement an API quickly demonstrates the potential time-saving benefits when the tool aligns with your needs. Unfortunately, your workplace's reluctance to invest in AI creates a barrier that not only hinders your efficiency but also prevents broader adoption of modern development practices. This speaks to a larger issue of organizations failing to recognize the strategic advantages of AI.
December 20, 2024 at 8:56 am
This was removed by the editor as SPAM
December 20, 2024 at 11:12 am
AI is a tool, and as the maestro, it is our responsibility to validate responses for hallucinations and iteratively refine prompts to achieve our goals. Machine Teaching (MT) plays a crucial role in learning new material, as emphasized in my LinkedIn article, Socrates and the Evolutionary Dialogue: From Ancient Wisdom to Generative AI Through 2,500 Years (https://www.linkedin.com/posts/peter-heller-7422a727_prompt-socratic-activity-7247322700111626241-pPbQ?utm_source=share&utm_medium=member_desktop). This iterative dialogue mirrors the Socratic Method, guiding users to communicate with AI like a person, refining outputs to align with specific needs.
When building a Python database framework, I found Claude.AI excels at coding, while ChatGPT shines at writing tasks. Prefacing prompts with standardized guidance such as "Please adhere to the KISS principle (keep it simple and standard) and Occam's razor" ensures responses align with clear, simple, and well-defined architectures and avoids going off on tangents. This disciplined approach empowers users to harness AI effectively.
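As a small, hedged sketch of what that kind of standardized preface can look like in practice (assuming the openai Python package; the model name and the ask() helper are illustrative, not the author's actual setup):

from openai import OpenAI

# Standing guidance prepended to every request (wording taken from the comment above).
STYLE_GUIDE = (
    "Please adhere to the KISS principle (keep it simple and standard) "
    "and Occam's razor. Avoid going off on tangents."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a prompt with the standardized guidance attached as a system message."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": STYLE_GUIDE},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Design a minimal table layout for a periodic task scheduler."))

The same pattern works with other providers: keeping the guidance in one constant means every prompt in a project starts from the same architectural ground rules.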
Commercial and open-source options like Claude.AI, Gemini, and Ollama.ai (e.g., qwen2.5-coder:32b, llama3.2:latest) enhance productivity across varied domains. These tools excel at documentation, a traditionally tedious task for developers, showcasing their value even for skeptics of Generative AI.
The article AI Blues raises valid concerns about AI’s perceived limitations, including its occasional errors and the trade-off between creativity and precision. However, tightly focused models, such as those for Python programming or specific industries like legal and marketing, illustrate AI's potential for specialization. Despite financial and technical challenges, continued experimentation with AI is essential. It won't replace skilled professionals but can augment productivity and creativity, provided we embrace it with diligence, refinement, and purpose.