• 0 Posts
  • 17 Comments
Joined 3 years ago
Cake day: June 12th, 2023


  • There kinda is. It says the goat is about to cross the river and asks what the minimum number of trips is. It’s a trick question, correctly identified by Gemini as such, but there is a question. I guess the more human response is “What the fuck are you talking about?” but for an agent required to do its best to answer questions, I don’t know how to expect much better.



  • I can read your code, learn from it, and create my own code with the knowledge gained from your code without violating an OSS license. So can an LLM.

    Not even just an OSS license. No license backed by law is any stronger than copyright. And you are allowed to learn from or statistically analyze even fully copyrighted work.

    Copyright is just a lot more permissive than I think many people realize. And there’s a lot of good that comes from that. It’s enabled things like API emulation and reverse engineering and being able to leave our programming job to go work somewhere else without getting sued.


  • Yeah I don’t think we should be pushing to have LLMs generate code unsupervised. It’s an unrealistic standard. It’s not even a standard most companies would entrust their most capable programmers with. Everything needs to be reviewed.

    But just because it’s not working alone doesn’t mean it’s useless. I wrote like 5 lines of code this week by hand. But I committed thousands of lines. And I reviewed and tweaked and tended to every one of them. That’s how it should be.





  • I’ve thought about this many times, and I’m just not seeing a path for juniors. Given this new perspective, I’m interested to hear if you can envision something different than I can. I’m honestly looking for alternate views here, I’ve got nothing.

    I think it’ll just mean they start their careers involved in higher-level concerns. It’s not like this is the first time that’s happened. Programming (even just prior to the release of LLM agents) was completely different from programming 30 years ago. Programmers have been automating junior jobs away for decades and the industry has only grown. Because the fact of the matter is that cheaper software, at least so far, has just created more demand for it. Maybe it’ll be saturated one day. But I don’t think today’s that day.


  • Agents now can run compilation and testing on their own so the hallucination problem is largely irrelevant. An LLM that hallucinates an API quickly finds out that it fails to work and is forced to retrieve the real API and fix the errors. So it really doesn’t matter anymore. The code you wind up with will ultimately work.

    The only real question you need to answer yourself is whether or not the tests it generates are appropriate. Then maybe spend some time refactoring for clarity and extensibility.
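
    To make that loop concrete, here’s a rough sketch of the generate-compile-test cycle these agents run. None of this is any particular tool’s API: ask_llm is a stand-in for whatever model client you actually use, and the file and test layout is made up for illustration.

```python
import subprocess


def ask_llm(prompt: str) -> str:
    """Placeholder: wire this up to whatever coding model/agent you actually use."""
    raise NotImplementedError


def agent_loop(task: str, max_attempts: int = 5) -> bool:
    """Generate code, then let compiler/test failures drive the next attempt."""
    feedback = ""
    for _ in range(max_attempts):
        code = ask_llm(f"Task: {task}\n\nErrors from the last attempt:\n{feedback}")
        with open("generated.py", "w") as f:
            f.write(code)
        # A hallucinated API shows up here as an import error or a test failure.
        result = subprocess.run(
            ["python", "-m", "pytest", "tests/"],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            return True  # tests pass; the code you wind up with ultimately works
        feedback = result.stdout + result.stderr  # feed the errors back to the model
    return False
```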


  • There are bad coders and then there are bad coders. I was a teaching assistant through grad school and in the industry I’ve interviewed the gamut of juniors.

    There are tons of new grads who can’t code their way out of a paper bag. Then there’s a whole spectrum up to and including people who are as good at the mechanics of programming as most seniors.

    The former are absolutely going to have a hard time. But anyone beyond that should have the skills necessary to critically evaluate an agent’s output. And any extra time they get to spend on the higher-level discussions going on around them is a win in my book.


  • VoterFrog@lemmy.world to Science Memes@mander.xyz · Look at this. Or don’t. · 16 days ago

    I don’t think it’s wrong, just simplified. You don’t really have to touch the photon, just affect the wave function, the statistical description of the photon’s movement through space and time. Detectors and polarizers, anything that can be used to tell exactly which path the photon took through the slits will do this. Quantum eraser experiments just show that you can “undo the damage” to the wave function, so to speak. You can get the wave function back into an unaltered state but by doing so you lose the which-way information.
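
    For the math-inclined, this is roughly what “affecting the wave function” means in the generic two-path toy model (textbook treatment, not tied to any specific experimental setup):

```latex
% Two paths A and B: the interference fringes come from the cross term
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|A\rangle + |B\rangle\bigr), \qquad
P(x) = \tfrac{1}{2}|\psi_A(x)|^2 + \tfrac{1}{2}|\psi_B(x)|^2
       + \operatorname{Re}\bigl[\psi_A(x)\,\psi_B^{*}(x)\bigr]

% A which-path marker (detector, polarizer, ...) entangles with the photon:
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|A\rangle|m_A\rangle + |B\rangle|m_B\rangle\bigr)

% The cross term now picks up a factor \langle m_B | m_A \rangle. If the marker
% states are distinguishable (\langle m_B | m_A \rangle = 0), the fringes vanish
% without anything "touching" the photon itself.
% Erasing: measure the marker in the basis |\pm\rangle = (|m_A\rangle \pm |m_B\rangle)/\sqrt{2}.
% Each outcome's subensemble shows fringes (or anti-fringes) again,
% but the which-way information is gone.
```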




  • What? I’ve already written the design documentation and done all the creative and architectural parts that I consider most rewarding. All that’s left for coding is answering questions like “what exactly does the API I need to use look like?” and writing a bunch of error handling if statements. That’s toil.


  • Definitely depends on the person. There are people who are getting 90% of their coding done with AI. I’m one of them. I have over a decade of experience and I consider coding to be the easiest but most laborious part of my job, so it’s a welcome change.

    One thing that’s really changed the game recently is RAG and tools with very good access to our company’s data. Good context makes a huge difference in the quality of the output. For my latest project, I’ve been using 3 internal tools. An LLM browser plugin which has access to our internal data and lets you pin pages (and docs) you’re reading for extra focus. A coding assistant, which also has access to internal data and repos but is trained for coding. Unfortunately, it’s not integrated into our IDE. The IDE agent has RAG where you can pin specific files, but without broader access to our internal data, its output is a lot poorer.

    So my workflow is something like this: My company is already pretty diligent about documenting things so the first step is to write design documentation. The LLM plugin helps with research of some high level questions and helps delve into some of the details. Once that’s all reviewed and approved by everyone involved, we move into task breakdown and implementation.

    First, I ask the LLM plugin to write a guide for how to implement a task, given the design documentation. I’m not interested in code, just a translation of design ideas and requirements into actionable steps (even if you don’t have the same setup as me, give this a try. Asking an LLM to reason its way through a guide helps it handle a lot more complicated tasks). Then, I pass that to the coding assistant for code creation, including any relevant files as context. That code gets copied to the IDE. The whole process takes a couple minutes at most and that gets you like 90% there.
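
    If you want to try the guide-then-code split without internal tooling like mine, the shape of it is roughly this. Everything here is illustrative: ask_model stands in for whichever LLM client or assistant you have, and the prompts are just the gist of what I ask for.

```python
def ask_model(prompt: str) -> str:
    """Stand-in for whatever LLM client/coding assistant you actually have."""
    raise NotImplementedError


def plan_then_code(design_doc: str, task: str, context_files: dict[str, str]) -> str:
    # Step 1: ask for an implementation guide, not code. Making the model reason
    # its way through actionable steps first is what lets it handle more
    # complicated tasks.
    guide = ask_model(
        "Given this design doc, write a step-by-step implementation guide "
        "(no code yet) for the task below.\n\n"
        f"Design doc:\n{design_doc}\n\nTask:\n{task}"
    )
    # Step 2: hand the guide, plus any relevant files as context, to the coding
    # assistant. Its output is what gets copied into the IDE.
    files = "\n\n".join(f"### {name}\n{body}" for name, body in context_files.items())
    return ask_model(
        "Follow this implementation guide and produce the code.\n\n"
        f"Guide:\n{guide}\n\nRelevant files:\n{files}"
    )
```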

    Next is to get things compiling. This is either manual or in iteration with the coding assistant. Then before I worry about correctness, I focus on the tests. Get a good test suite up and it’ll catch any problems and let you refactor without causing regressions. Again, this may be partially manual and partially iteration with LLMs. Once the tests look good, then it’s time to get them passing. And this is the point where I start really reading through the code and getting things from 90% to 100%.

    All in all, I’m still applying a lot of professional judgement throughout the whole process. But I get to focus on the parts where that judgement is actually needed and not the more mundane and toilsome parts of coding.


  • I have a son that’s learning to read right now so I’ve got some first hand experience on this. This article is making a lot out of the contextual clues part of the method but consistently downplays or ignores that phonics is still part of what the kids are taught. It’s a bit of a fallback, sure, but my son isn’t being taught to skip words when he can’t figure it out.

    He’s bringing home the kinds of books mentioned in the article. The sentence structure is pretty repetitive and when he comes across a word he doesn’t know he tries to look at the picture to figure out what it is. Sometimes that works and he says the right word. Other times, like when there’s a picture of a bear and the word is “cub” (but I don’t think my son knew what a bear cub was), he still falls back on “cuh uh buh” to figure it out.

    So he still knows the relationship between letters and sounds. He just has some other tools in his belt as well. I can’t say I find that especially concerning.