by liberal japonicus
It seems like it's time to take a step back and see if we can come to some agreement about the use of AI here. This is primarily directed at Charles, which is highly unfortunate, because I don't want to make him an object lesson, but it seems unavoidable.
A while back, Charles fed a comment thread into ChatGPT and then linked to the summary. I was really taken aback by that. The idea of having a conversation and then hearing "well, I just fed everything we said into this machine, and here's what it said you said (oh, and it will continue to use your words and ideas in the future)" was rather off-putting. Not that what I write here wouldn't have ended up in ChatGPT anyway; the whole model is built on scraping the net in its entirety, and I don't have the time or patience to do what would be necessary to ChatGPT-proof ObWi. But it seems to be an escalation to actually feed stuff from here in intentionally and with purpose. So I definitely think that is a step too far, and I would ask that Charles (and anyone else) not do that.
I didn't write this immediately when that happened, but it has been on my mind. But just now, Tony P. asked Charles:
Charles took Tony's request as 'please use a prompt to feed this question into ChatGPT'. I don't think that was what Tony intended (Tony is welcome to clarify this). I think the point was that Rockefeller's wealth was a fact independent of the 'opinions' of Charles, Reason, or ChatGPT, not a request to have ChatGPT explain how Rockefeller's wealth should be considered differently in light of different contexts.
Which has me wondering what precise line to suggest here at ObWi, bearing in mind that I can't force anyone to behave in a particular way, especially in regard to ChatGPT, except by banning them, which would be overkill. So it seems important to discuss parameters.
If I had my druthers, I'd prefer that when people tackle a question, they not simply link to or reprint ChatGPT output. The link is particularly insidious, because it pulls anyone who wants to engage with you into the ChatGPT ecosystem. If someone abstains from clicking, you can believe that they won't engage with your arguments, when they simply want to avoid giving Sam Altman anything. However, I think it is important to acknowledge using ChatGPT, which sounds like a bit of a catch-22.
I should also add that I use ChatGPT quite a bit, for things like 'please give me 10 examples of this grammatical pattern', 'please outline this student's paper', 'please make grammatical corrections to this essay without changing the content or the style'. I understand that this opens me up to the charge of hypocrisy: here I am, feeding student work into ChatGPT and getting in high dudgeon when it's my stuff. Folks are welcome to discuss that; I am a bit uneasy about drawing lines here for other people and not doing it for myself. But there seems to be a difference between exchanging and offering my own opinions about something, as we do here, and teaching writing to second language learners, where I am trying to get more examples or show them how to use the tool.
Trying to boil this down, I make the following suggestions:
-don't simply post ChatGPT (or any other AI) output verbatim with no comment
-if you do feel that the output is better than what you could have written, at least add your own points to it
-if you are going to use an LLM, acknowledge any points that you get from it
-realize that in doing so, you are making those points yours, so saying 'well, that's ChatGPT, not me' is essentially an abdication of responsibility and makes the process of exchanging opinions much more difficult
This is all unfocused, so I'm hoping that others might be able to weigh in so we can come to an understanding of what the boundaries are. I have to wonder: if there were some variant Charles from another reality, one as committed to syndicalism as this reality's Charles is to libertarianism, who employed ChatGPT in the same way, would I come down as hard? On the other hand, it seems that using ChatGPT in this way is a perfect example of why libertarianism is such a mess: a huge amount of sub-rosa assumptions are fed into it, and it miraculously comes up with justifications that seem robust until you start examining those assumptions. When used this way, ChatGPT is just another tool to obscure those assumptions so they are never questioned. Discuss.