It does often behave better if you tell it "that doesn't seem to be working" and perhaps include the error message.
It is, after all, a language tool. Its function is to produce text that seems real. If you ask it a science question and ask it to provide references in Vancouver format, it can format the references perfectly. They will be from real authors (often ones who have published in the general field), and they will be in real journals for the field. But the titles are entirely fabricated, though plausible. Expect many a scammer to get caught out...

On Sun, 13 Aug 2023, 18:50 Bert Gunter, <bgunter.4...@gmail.com> wrote:

> **OFF TOPIC** but perhaps of interest to some on this list. I apologize in
> advance to those who may be offended.
>
> The byline:
> ********************************
> "ChatGPT's odds of getting code questions correct are worse than a coin
> flip
>
> But its suggestions are so annoyingly plausible"
> *************************************
> from here:
> https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/
>
> Hmm... Perhaps not surprising. Sounds like some expert consultants I've
> met. 😕
>
> Just for amusement. I am ignorant about this and have no strongly held
> views.
>
> Cheers to all,
> Bert