Do ChatGPT or other language models help you code more efficiently and faster? Is it worth spending your money on them?
Not really, it’s been pretty useless for me. But I’m also a very senior developer; I’ve been coding for 18 years, so more often than not I’m stuck on a problem much bigger than the best AI can possibly handle, if only in the amount of context needed to figure out what’s wrong.
It’s still much faster for me to just write the code than to explain what I want to an AI. IDE snippets and completion make it super quick. Writing out code is not a bottleneck for me; if anything, I shit out code and shell commands without a thought. It comes out like regular speech.
I’m also at the point where I Google something and either find an answer I wrote myself 5 years ago, or find my own question from 5 years ago that still has zero answers.
I do see my juniors using Copilot a good bit though.
Depends on whether you want to work with existing code. LLMs tend to be good at generating small code snippets, but not at understanding or finding errors in existing code.
I’m pretty sure even if it was helpful they wouldn’t use it out of principle. Shit’s basically plagiarism laundering.
EDIT: Oh you’re talking about devs who use Lemmy, not the Lemmy devs.
Not really. Writing code is the easy part; it’s not the rate-limiting step. The hard part is getting requirements out of customers, who rarely know what they want. I don’t need to push out more code and features faster, and doing so would just turn things into unmaintainable spaghetti.
I might send it a feature list and ask it “What features did they forget?” or “Can you suggest more features?”, or even better, “Which features are the least important for X and can be eliminated?”. In other words, let it do the job of middle management and I’ll just do the coding myself.
Anyway, ChatGPT blocks my country (I’ve confirmed it’s on their end).
I tried to use Copilot but it just kept getting in the way. The advanced autofill was nice sometimes, but it’s not like I’m making a list of countries or some mock data that often…
As far as generated code goes… especially with HTML/CSS/JS frontend code, it consistently output extremely inaccessible code. Which is baffling considering how straightforward the MDN, web.dev, and WCAG docs are. (Then again, LLMs can’t really understand when an inaccessible pattern is used to demonstrate an `onclick` instead of a semantic `<a>`, or to explain aria-* attributes…) It was so bad so often that I don’t use it much for languages I’m unfamiliar with either. If it puts out garbage where I’m an expert, I don’t want to be responsible for it where I have no knowledge.
I might consider trying an LLM that’s much more tuned to a single language or purpose. I don’t really see these generalized ones being popular in the long run, especially once the rose-tinted glasses come off.
I mostly use shell-gpt and ask it trivial questions. Saves me the time of switching to a browser; I have it always running in a tmux pane. As for code, I found it helpful for getting started when writing a piece of functionality, but the actual engineering part should be done manually imo. As for spending money on it, that depends on how much you benefit from it. I spend about 50c on my OpenAI API key, but I know a friend who used ollama (I think with some Mistral derivative) locally on a gaming laptop with decent enough results.
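If you’d rather skip the wrapper, the same trivial-question workflow is only a few lines against the API directly. A rough sketch (the model name here is just an example, pick whatever fits your budget):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, not a recommendation
    messages=[{"role": "user", "content": "How do I find which process is holding port 8080?"}],
)
print(resp.choices[0].message.content)
```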
Hmm, well, for research it can give me good pointers when I’m going into a new field.
For actual coding it’s mostly useless for the moment. It’s not trained to be productive, so it doesn’t know what to focus on and tends to be overly verbose. Its internal model of what’s going on is also quite shaky.
It feels like working with clay: I have to somehow get the code the LLM generates into the shape I need. But it’s like watching a movie in super slow-mo, and the clay is too wet and keeps falling apart.
Furthermore, it cannot handle anything more than relatively low-complexity code. Sure, it can give you a function for drawing a circle, but architecture and code smells are things it doesn’t understand.
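To be concrete, this is the scale of thing it reliably gets right: a self-contained, low-context function (my own sketch, not LLM output):

```python
import math

def circle_points(cx: float, cy: float, r: float, n: int = 64) -> list[tuple[float, float]]:
    """Return n evenly spaced points on a circle of radius r centred at (cx, cy)."""
    step = 2 * math.pi / n
    return [(cx + r * math.cos(i * step), cy + r * math.sin(i * step)) for i in range(n)]
```

Anything whose correctness depends on the rest of the codebase is where it falls apart.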
So after using it for a year I must say that I don’t use it for actual coding. I use it mostly to get an overview of fields I’m not that much into. For example lately I’ve looked into quantum field theory again, and Rust for the first time. I know it spouts a lot of nonsense but I can still get the gist of it.
Still relying on good ol’ Bessie 🧠
Yep! It’s the best autocorrect I’ve ever used, and it does a decent job explaining config files when needed. Just don’t let any unvetted code in, because it can have some quirky bugs.
I’m a new DM (and new to TTRPGs in general). I’m using Bard and ChatGPT to keep track of homebrew stuff.
I’m running an almost completely custom system, adapted to ASOIAF: races (renamed to origins), classes, backgrounds, feats, etc., plus extra mechanics like duelling systems, large-battle simulations, and faction interaction systems. It’s a lot, and I find it easier to have the bot spray solutions at whatever issue I run into, then grab the one that might work and refine it until it sounds fun. I still need a system to keep track of my campaign, though. Tried WorldAnvil and honestly, I don’t need that many tools. Might go back to Notion and keep track of all the factions and characters that way. Gonna be a lot of work though.
> Tried WorldAnvil and honestly, I don’t need that many tools. Might go back to Notion and keep track of all the factions and characters that way. Gonna be a lot of work though.
Obsidian has been great for me to keep track of all my worldbuilding notes for Pathfinder 2e
Obsidian.md? Did you get it from GitHub?
Basically made Stack Overflow useless for me. Great for pasting error messages. I don’t really find it useful for actually writing the code tho, unless it’s standard boilerplate stuff.
Yes and no. I compare it to a graphing calculator: I already know how to graph a parabola by hand, but I don’t want to have to do it over and over. That’s just busy work for me.
LLMs are similar that way. There’s often a lot of boilerplate to get out of the way that’s just busy work to write over and over again. LLMs are great at generating some of that scaffolding.
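A sketch of the scaffolding I mean, with placeholder names throughout; it’s the kind of thing an LLM reliably produces so I don’t have to retype it:

```python
import argparse

def main() -> None:
    # Boilerplate CLI wiring: tedious to write by hand, trivial to generate
    parser = argparse.ArgumentParser(description="Process an input file.")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-o", "--output", default="out.txt", help="where to write results")
    parser.add_argument("-v", "--verbose", action="store_true", help="print progress")
    args = parser.parse_args()

    if args.verbose:
        print(f"reading {args.input}, writing {args.output}")

if __name__ == "__main__":
    main()
```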
LLMs have also become a lot more helpful as Google search has gotten worse over time.
It’s sometimes helpful when working with libraries that are not well documented, or to write some very barebones and not-super-useful tests if I’m that lazy. But I’m not going to let it code for me. The results suck, and I don’t want to become a “prompt engineer”.
ChatGPT will mock up a Python script pretty quickly given a basic English description and reference materials like API docs, sparing me the burden of doing something tedious, but that’s about the extent of its utility for me.
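To give a sense of the kind of mock-up I mean, here’s a hand-written sketch with a made-up endpoint and token standing in for whatever the real API docs describe:

```python
import requests

API_URL = "https://api.example.com/v1/items"  # hypothetical endpoint
API_TOKEN = "your-token-here"                 # hypothetical; never hard-code a real one

def fetch_items(limit: int = 10) -> list[dict]:
    """Fetch up to `limit` items from the (made-up) API, raising on HTTP errors."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for item in fetch_items():
        print(item)
```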
As someone who is just getting started in a new language (Rust), I find it can be very helpful for figuring out why something doesn’t work, or for tips I didn’t know about (even if it gets confused sometimes).
However, for my regular languages and work, I imagine it would be a lot slower.