When we build software, many of us use the same algorithms to solve problems. We might choose a similar method for a quicksort, a lambda validation, or a regular expression. For database work, your code for a running total (or another common challenge) is likely very similar to that of many other people, at least on the same platform. You might solve a problem differently in SQL Server and Oracle, but within the same type of database, many of us write very similar code.
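As a sketch of what I mean, here's the running-total pattern most of us on SQL Server would write almost identically (the dbo.Sales table and its columns are hypothetical, just for illustration):

```sql
-- A minimal sketch, assuming a hypothetical dbo.Sales table
-- with SaleID, SaleDate, and Amount columns.
-- On SQL Server 2012 and later, most of us reach for the same
-- windowed SUM to compute a running total:
SELECT
    SaleID,
    SaleDate,
    Amount,
    SUM(Amount) OVER (
        ORDER BY SaleDate, SaleID
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS RunningTotal
FROM dbo.Sales
ORDER BY SaleDate, SaleID;
```

On Oracle the windowed SUM looks nearly the same, which is part of the point: for common challenges like this, independently written code tends to converge.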
Actually, many developers might copy and paste an answer from SQL Server Central, Stack Overflow, or another site. I'm not sure whether this is good or bad, as reusing code that solves the same problem is a good idea. Copying it without testing it, however, is bad; the code might not solve your slightly different problem if you don't check it.
In the modern world, if we build software for our business using an AI assistant, could our company be liable if we knew our competitors were using the same AI service? Is this any different from a human developer copying and pasting from SQL Server Central? I don't think it is in many cases, though the same concerns about intellectual property might be present either way.
The concerns over AI seem murky in some sense, especially as the AI might "generate" code that isn't directly available on some public resource. I think the issue is less about sharing an algorithm and more about something that resembles collusion through a shared service. Still, in a hyper-connected world, where many of our applications might take advantage of some service instead of implementing the functionality ourselves, this could be an issue.
I ran across a piece that discusses a lawsuit about a common pricing algorithm being used by different hotels. In this case, it's not that the developers at different hotels used the same code, but rather that the hotels used the same service from one company, which, of course, used the same code for all its customers. Whether or not you think this is a valid lawsuit, it's the type of legal action others might bring if two competitors ended up using the same AI service and developed very similar code that behaved the same way.
I don't think that AIs (at present) can actually develop new algorithms or solve problems in a new way. Instead, they predict the likely solution based on the similar scenarios they've been trained on. In that case, how concerned should we be about getting common solutions in disparate pieces of software? For most of us, I think we'd be pleased to have well-tested (hopefully) code that runs efficiently (again, hopefully) being reused in many places. That would be better for most systems in the world.
What isn't better, at least for some of us, is that humans are becoming more adept at writing prompts and producing software without a lot of specialized expertise. For developers who might be average, or even slightly below average, this could raise concerns about the security of their position, with good reason: labor is one of the most expensive parts of building software.
Ultimately, just as with any other position, the best way to build a safe, secure career is to continue to build your skills and produce value for your employer. That way, it's unlikely any AI will ever outperform you.