
> I think this largely accounts for when it will be useful.

I think this is excessively cynical. For example, I just used ChatGPT to write a job description last night. I'm not particularly interested in copyright laundering or anything of the sort - nobody really cares about copyright on job descriptions (they're mostly pretty similar) unless you're being egregious about it. I could spend a bit more time and just pattern match off other postings, but the LLM can do it better and faster than me. I just provided appropriate prompts for the job characteristics and benefits, and then provided the final editing and discretion.

I think there are lots of similar situations where we (as humans) need text that is essentially boilerplate, but still is expected to be well-enough-written. ChatGPT isn't a copyright evasion tool to me, it's a shortcut for better writing.



For me, when I use the output I am acutely aware of how difficult it was to create the sources it's sampling from.

Behind each line I can imagine a person solving a problem for themselves, writing up their solution, and sharing it. This process is inordinately expensive for each individual: they must be competent with the ideas, techniques, etc. and deploy them in a novel circumstance.

LLMs need no competence with the ideas, indeed, need no ideas or techniques at all. The very same work can be obtained via two radically different algorithms: 1) develop conceptualisations and techniques to generate solutions in the face of novel problems; 2) sample from all such prior attempts. (2) requires many cases of (1) to work -- and that's the copyright laundering, or just theft, if you like.

Who in all their writing on the internet was consenting to train ChatGPT? No one.

This is the innovation. We have Google search. We could easily get 99.5% of whatever ChatGPT generates if we had the gall to copy/paste it. Since we don't, we launder these efforts through a novel interface.

It is an extremely useful tool, but we should be more aware of where its utility comes from: the last twenty years of human labour which has placed all our lives and thoughts on a digital commons, here being replayed back to us as if "ChatGPT wrote it". The AI here has written zilch. Absent all those digital efforts, this is a dumb system with nothing to say.

And if we had the audacity just to copy/paste from ebooks, github, blogs and the like -- we would barely benefit from it at all.


It's funny, but even if the LLM were open source and free, it would still be unethical in terms of not attributing all this effort it is basically stealing.

So that's the first problem to solve. It's not an easy one because, depending on the context, the output could be just generic - you can't attribute the entire universe - but in a specialized context it may well be literally impersonating some expert's output.


You just wrote and typed out two sentences and they were beamed across copper wires and glass tubes, and it was predicated on a mix of:

- concepts you've been taught across a lifetime

- concepts you don't even know exist

- hard labor and pain no one person can even track

An unspeakable mountain of effort from people you'll never know to enable a simple social interaction.

-

No one of us is an island. I think people underestimate how interconnected humanity is, and LLMs have been a mirror that forces them to understand just how little of us exists in isolation, and how much we ourselves are mirrors of knowledge and influences so far removed from us that we can't even acknowledge them if we try.


> I think people underestimate how interconnected humanity is

That is true (and underlies much social dysfunction), but you can't use it to justify a free-for-all.

Notice I mentioned two social inventions that individuals found important for keeping count of that interdependency: money and copyright/attribution.

While faulty tools in many ways, abusing them is not going to lead to anything better. In fact, if actors get away with it, it would be a signal that power now rests with a new type of appropriating oligarchy.


> Behind each line I can imagine a person solving a problem, for themselves and writing up their solution, and sharing it. This process is inordinately expensive for each individual: they must be competent with the ideas, techniques, etc. and deploy them in a novel circumstance.

I'm really worried if you weren't doing this before LLMs. No person in the information era has ever had a thought that wasn't built upon an unimaginable number of people solving problems for themselves, writing up their solutions, and sharing them.

Calling what an LLM does copy/pasting is like claiming you've only ever copy/pasted other people's thoughts.



