Discussion:
[gentoo-user] has anybody use ChatGPT for programming?
Mark Knecht
2023-03-21 01:00:02 UTC
Permalink
Has anybody used ChatGPT for programming? I think it would be very
handy (fewer bugs) and would mean fewer questions on the mailing list
I used it very early on to write some Python code to read some text
files and place them in NumPy arrays, etc.

It worked, and the files were read correctly, but then they cut off my
access due to too many users and I haven't bothered since.
ezntek
2023-03-21 03:30:01 UTC
Permalink
It's still early tech. I do use ChatGPT sometimes, but I prefer searching, Stack Overflow, and the official docs.
Post by Mark Knecht
Has anybody used ChatGPT for programming? I think it would be very
handy (fewer bugs) and would mean fewer questions on the mailing list
I used it very early on to write some Python code to read some text
files and place them in NumPy arrays, etc.
It worked, and the files were read correctly, but then they cut off my
access due to too many users and I haven't bothered since.
Anna
2023-03-21 13:30:01 UTC
Permalink
Has anybody used ChatGPT for programming? I think it would be very handy (fewer bugs) and would mean fewer questions on the mailing list
I've tried it a bit but ultimately gave up on it.

At least in my experience, its results are wildly inconsistent for
code, especially for anything non-trivial. It can be confidently wrong,
so you still need to know what you want to do and fact-check
the reply...

Trivial things I can usually do myself from the get-go, and for
non-trivial stuff, reading man pages and searching online is usually
faster than trying to fix whatever ChatGPT gives me.

Though, GPT-4 is coming out now, and that may or may not change things.
Rich Freeman
2023-03-21 14:20:01 UTC
Permalink
Post by Anna
Has anybody used ChatGPT for programming? I think it would be very handy (fewer bugs) and would mean fewer questions on the mailing list
At least in my experience, its results are wildly inconsistent for
code, especially for anything non-trivial. It can be confidently wrong,
so you still need to know what you want to do and fact-check
the reply...
I know Home Assistant banned posting AI-generated responses on their
forums when some users decided to create bots that would take
questions from newbies, run them through ChatGPT, and then auto-post
the responses as replies.

The result was mass confusion as often the answers seemed plausible
but contained subtle errors. Dealing with the resulting frustration
and chaos took more volunteer time from those hanging out on the
forums than just dealing with the original questions.

Now, for an individual, running a question through such an AI can be
useful, but you have to realize that it attaches no real measure of
confidence to its answers, and it will make up an answer if
it isn't sure. It is very human-like in that sense, but that includes
all the downsides of humans.

I've used it a little. It can be good for giving you alternative
ideas or suggesting starting points, but you have to realize that,
just like a human, it isn't infallible, and you aren't talking to the
programmer who wrote whatever it is you're trying to deal with. If
you use it for code, I'd treat the output as a suggestion or template,
or, when you already have an approach worked out, see whether it
offers another approach you might want to consider.
--
Rich
Wols Lists
2023-03-22 08:40:01 UTC
Permalink
Post by Rich Freeman
Post by Anna
Has anybody used ChatGPT for programming? I think it would be very handy (fewer bugs) and would mean fewer questions on the mailing list
At least in my experience, its results are wildly inconsistent for
code, especially for anything non-trivial. It can be confidently wrong,
so you still need to know what you want to do and fact-check
the reply...
I know Home Assistant banned posting AI-generated responses on their
forums when some users decided to create bots that would take
questions from newbies, run them through ChatGPT, and then auto-post
the responses as replies.
The result was mass confusion as often the answers seemed plausible
but contained subtle errors. Dealing with the resulting frustration
and chaos took more volunteer time from those hanging out on the
forums than just dealing with the original questions.
The results I've come across suggest this is the norm.

So code APPEARS better but is usually WORSE than code you actually
wrote yourself.

So it's probably good for study, or if you want a template to follow,
but don't blindly use any code. It's only as good as the data it's
trained on, and as a doctor I know said to me "We select doctors from
the general population, and if half the population are below average
what does that say about doctors?".

ChatGPT (for code especially) is probably trained mostly on student output.
Do you really want to be "writing" student-grade code?

Cheers,
Wol
