
I use LLMs for generating data, for example to fill SQL tables for tests or to build some Python data structures. Transforming data also works really well, for tasks like "take this BibTeX entry and convert it into a citation".
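A minimal sketch of that transformation pattern: build a prompt around the data and hand it to a model. The `call_llm` function here is a hypothetical stand-in (stubbed with a canned reply so the example is self-contained); in practice it would wrap whatever chat-completion client you use.

```python
# Using an LLM as a data transformer: wrap the input in a task prompt.
# `call_llm` is a hypothetical stub, not a real API.

BIBTEX = """@article{knuth1974,
  author  = {Knuth, Donald E.},
  title   = {Computer Programming as an Art},
  journal = {Communications of the ACM},
  year    = {1974}
}"""

PROMPT_TEMPLATE = (
    "Convert the following BibTeX entry into an APA-style citation. "
    "Reply with the citation only.\n\n{entry}"
)

def call_llm(prompt: str) -> str:
    # Stubbed response; a real implementation would call a model here.
    return ("Knuth, D. E. (1974). Computer programming as an art. "
            "Communications of the ACM.")

def bibtex_to_citation(entry: str) -> str:
    return call_llm(PROMPT_TEMPLATE.format(entry=entry))

citation = bibtex_to_citation(BIBTEX)
```

The useful property is that the prompt, not the code, carries the transformation logic, so switching from "APA citation" to "Python dict" is a one-line change.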

I've tried to code an entire iOS app with it but failed. You still need to have knowledge about what you are actually doing. I feel like it can accelerate learning, but in a very narrow way. When I was building my own Android app three years back, I learned a bunch of different things about what I was actually trying to do and also about the whole ecosystem. When I was coding the iOS app I made progress fast, but I felt like I was only learning the thing I set out to learn, not the surrounding knowledge. That sounds good on paper, but I feel like I'm leaving something on the table with this kind of "accelerated learning".

I've also tried autogpt/langchain approaches to automate writing shell commands for me, like "remove all files that begin with abc". It did generate an appropriate command, but if the command failed (say, the files weren't there in the first place), it fell into a loop of endlessly refining the command to get a successful deletion. It even tried going to / to find all such files; talk about paperclips, huh. Truth be told, I haven't played with the idea much, so there is room for improvement.
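One way around that failure loop is to make the action idempotent before the agent ever sees an error: treat "no matching files" as success rather than a failure to retry. A small sketch of that guard (the function name is my own, not from any agent framework):

```python
# Idempotent deletion: removing files that don't exist is not an error,
# so an agent loop has nothing to "fix" by retrying or escalating to /.
import glob
import os

def remove_prefixed(prefix: str, directory: str = ".") -> int:
    """Delete files in `directory` whose names start with `prefix`.

    Returns the number of files removed; zero matches is a valid outcome.
    """
    matches = glob.glob(os.path.join(directory, prefix + "*"))
    for path in matches:
        os.remove(path)
    return len(matches)
```

The same idea at the shell level is `rm -f abc*`-style tolerance: report what happened, but don't surface "nothing to delete" as a failure the agent feels compelled to correct.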

LLMs kinda help, and they are especially good when provided with source material, but I don't really feel like a 1000x super-hacker with VR glasses who spawns hundreds of agents each minute to do different tasks. (I would like to, though.)

I would like to build something with LLMs, but every idea I have either already has an established startup behind it or runs into the argument "you could just do this in ChatGPT". Literally take whatever comes to mind and add "LLM" or "GPT": there is a startup for that.


