I’d guess it’s mostly the AI autocomplete stuff, i.e. you keep typing until the AI guesses right, then press tab to save keystrokes. LLMs are really bad at writing test cases in my experience; ironically, they can’t do the simple but nuanced computations needed to figure out what the output should be for a given input, or to recognize and test the edge cases.
Oh yeah, it can’t handle anything complicated; it only works on simple modules. And I usually give it pretty detailed instructions on my expected I/O. It basically converts a few sentences of English into dozens of lines of code.
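For example, a spec like “slugify a title: lowercase it, turn runs of punctuation into single hyphens, no leading/trailing hyphens, empty or all-symbol input gives an empty slug” might come back as something like this. (Rough sketch of the kind of output I mean, not actual model output; slugify here is just a made-up example module.)

    # Illustrative sketch: a hypothetical slugify() plus the tests an LLM
    # might generate from a short English spec of the expected I/O.
    import re
    import unittest

    def slugify(text: str) -> str:
        # Lowercase, collapse runs of non-alphanumerics into one hyphen,
        # then strip leading/trailing hyphens.
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

    class TestSlugify(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_collapses_punctuation(self):
            self.assertEqual(slugify("C++ & Rust!"), "c-rust")

        def test_edge_cases(self):
            # The nuance the parent comment is talking about: empty and
            # all-symbol inputs should give an empty slug, not a stray hyphen.
            self.assertEqual(slugify(""), "")
            self.assertEqual(slugify("!!!"), "")

    if __name__ == "__main__":
        unittest.main()

The basic cases it gets right with no trouble; it’s the last two assertions, the ones you have to actually think through, where it tends to guess the expected value wrong unless you spell it out.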