It's a significant advantage for developers to ask AI to solve technical problems, get the right answer right away, and move on to the next task, instead of scratching your head for the next five hours wondering "why doesn't it work".
I think an equal advantage lies in getting multiple approaches fleshed out, even if the answer isn't right (as in, it may not compile as-is). That's more than enough of an advantage, because most of a developer's time isn't spent typing code but figuring out how to design and combine things.
E.g. I was working on a Rust problem and asked ChatGPT for a solution. What it provided didn't compile (incorrect functions, etc.), but it gave me the crates to use, a general outline of the functionality, and an approach to combining them - that proved to be more than enough for me to get going (to be clear, I didn't blindly copy the code; I understood it first and wrote tests for the finished product). I think that is where the real advantage lies. I see it as an imperfect but very powerful assistant.
Have you ever tried to solve a difficult technical problem with ChatGPT (any version)? I have. It worked about as well as Google search. Which is to say, I had to keep refining my ask after every answer it gave failed until it ultimately told me my task was impossible. Which isn't strictly true, as I could contribute changes to the libraries that weren't able to solve my problems.
Funny enough, the answers GPT-4 gave were basically taken wholesale from the first Google result from Stack Overflow each time. It's like the return of the I'm Feeling Lucky button.
There are many developers who are unable to do that and need to be spoon-fed. This is the market for ChatGPT and that's why they heavily promote it.
I doubt though that corporations that employ these developers will have any advantage. To the contrary, their code bases will suffer and secrets will leak.
This is very much my experience too. Occasionally ChatGPT can give me something quickly that I wasn't able to find quickly (because I was looking in the wrong place, likely). But most of the time, it's just a more interactive (and excessively verbose) web search. In fact, search tends to be more optimized for code problems; I can scan results and SO answers much faster than I can scan a generated LLM answer.
Use the edit button if you get an incorrect answer. The whole conversation is fed back in as part of the prompt so leaving incorrect text in there will influence all future answers. Editing creates a new branch allowing you to refine your prompt without using up valuable context space on garbage.
That doesn't change anything in terms of the flow. It's still refining the input over and over. This is exactly how searching on google works as well.
For the case I last tested, there was no correct answer. I asked it to do something that is not currently possible within the programming framework I asked it to use. Many people had tried to solve the problem, so ChatGPT followed the same path, since that's what was in its data set, and provided solutions that did not actually solve the problem. There wasn't any problem with the prompts; it's the answers that were incorrect. Having those initial prompts influence the results was desired (and usually is, imo).
I haven't actually seen that advantage in action. That is, I haven't seen a case where an LLM has actually given a solution right away for a problem that would have stumped a dev for multiple hours.
In my workplace, two devs are using ChatGPT -- and so far, neither has exhibited an increase in productivity or code quality.
That's a sample size of two, of course, so statistically meaningless. But given the hype, I expected to see something.
ChatGPT is a godsend for junior developers, but it isn't great at providing coherent answers to more complex or codebase-specific questions. Maybe that will change with time, but right now it's mostly useful as a learning aid.
For complex codebases it's better to use Copilot, since it does the work of providing context to GPT for you. Copilot X will do a lot more, but it's still waitlist signup. You could hack something together yourself using the API. The quickest option is just to paste the relevant code into the chat along with your prompt.
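Hacking something together with the API can be as simple as bundling the relevant files into the prompt yourself. A minimal sketch of that idea (the file paths, model name, and `build_messages` helper here are my own placeholders, not anything from a real tool; the actual API call is shown commented out since it needs an API key):

```python
from pathlib import Path

def build_messages(question: str, files: list[str]) -> list[dict]:
    """Bundle relevant source files into the chat prompt as context."""
    context = "\n\n".join(
        f"// {name}\n{Path(name).read_text()}" for name in files
    )
    return [
        {"role": "system",
         "content": "You are a coding assistant. Answer using the provided files."},
        {"role": "user",
         "content": f"Relevant code:\n{context}\n\nQuestion: {question}"},
    ]

# The actual call would look roughly like this (requires the `openai`
# package and an OPENAI_API_KEY in the environment):
#
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=build_messages("Why does foo() panic?", ["src/foo.rs"]),
# )
```

The obvious caveat is the context window: you have to pick which files to paste, which is exactly the curation work Copilot tries to do for you.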
I tend to use both. Copilot is vastly better for helping scaffold out code and saves me time as a fancy autocomplete, while I use ChatGPT as a "living" rubber duck debugger. But I find that ChatGPT isn't good at debugging anything that isn't a common issue that you can find answers for by Googling (it's usually faster and more tailored to my specific situation, though). That's why I think it's mostly beneficial in that way to junior devs. More experienced devs are going to find that they can't get good answers to their issues and they just aren't running into the stuff ChatGPT is good at resolving because they already know how to avoid it in the first place.
And this gets into why companies are banning it, at least for the time being; developers and especially junior developers in general think nothing of uploading the sum total contents of the internal code base to the AI if it gets them a better answer. It isn't just what it can do right this very second that has companies worried.
It isn't even about the AI itself; the problem is uploading your code base, or whatever other IP, anywhere not approved. If mere corporate policy seems like a fuddy-duddy reason to be concerned, there are also a lot of regulations in a lot of places. Up to this point, while employees had to be educated to some extent, there wasn't an attractive nuisance sitting out on the internet asking to be fed swathes of data with the promise of making your job easier, so it was generally not an issue. Now there is this text box just begging to be loaded with customer medical data, or your internal finance reports, or random data that happens to contain information the GDPR requires special treatment for, even if that wasn't what the employee "meant" to use it for. You can break a lot of laws very quickly with this text box.
(I mean, when it comes down to it, the companies have every motivation for you to go ahead and proactively do all the work to figure out how to replace your job with ChatGPT or a similar technology. They're not banning it out of fear or something.)
It depends heavily on what you do. When working with proprietary/non-public software stacks, or anything requiring knowledge of your internal codebase, ChatGPT is of little help.
"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."
A great quote, but not applicable. The problem with LLMs is that they are non-deterministic, and you can ask the exact correct question and get the wrong answer.
I'm sure they're terrified of competing with the legions of boilerplate generators.