I’ve been a Java engineer in the web development industry for several years now. I’ve heard many times that X is good because of SOLID principles, or that Y is bad because it breaks SOLID principles, and I’ve had to memorize the “good” way to do everything before interviews, etc. The more I dig into the real reason I’m doing something in a particular way, the harder I find it to keep taking these claims at face value.

One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is just dogma that has drifted far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

What are your opinions on this?

  • FizzyOrange@programming.dev · 8 days ago

    One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

    Sounds like you’ve learned the answer!

    Virtually all programming principles like that should never be applied blindly in all situations. You basically need to develop taste through experience… and caring about code quality (lots of people have experience but don’t give a shit what they’re excreting).

    Stuff like DRY and SOLID are guidelines, not rules.

      • FizzyOrange@programming.dev · 7 days ago

        Even KISS. Sometimes things just have to be complex. Of course you should aim for simplicity where possible, but I’ve seen people fight against better and more capable options just because they weren’t as simple and thus violated the KISS “rule”.

  • douglasg14b@lemmy.world · 8 days ago

    The principles are perfectly fine. It’s the mindless following of them that’s the problem.

    Your take is the same one I see from every new generation of software engineers discovering that principles, patterns and ideas have nuance to them, and who, when they see someone applying a particular pattern without nuance, conclude that this is what the pattern means.

  • JackbyDev@programming.dev · 8 days ago

    YAGNI ("you aren’t/ain’t gonna need it) is my response to making an interface for every single class. If and when we need one, we can extract an interface out. An exception to this is if I’m writing code that another team will use (as opposed to a web API) but like 99% of code I write only my team ever uses and doesn’t have any down stream dependencies.

  • JakenVeina@midwest.social · 9 days ago

    One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

    That one is indeed objective horse shit. If your interface has only one implementation, it should not be an interface. That being said, a second implementation made for testing COUNTS as a second implementation, so context matters.
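
    A minimal Java sketch of that case (all names hypothetical): the interface earns its keep precisely because the test fake is a genuine second implementation.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // The seam exists because tests supply a second implementation.
    interface PaymentGateway {
        boolean charge(String accountId, long amountCents);
    }

    // Production implementation: would call a real payment service.
    class HttpPaymentGateway implements PaymentGateway {
        @Override
        public boolean charge(String accountId, long amountCents) {
            return true; // real HTTP call elided
        }
    }

    // Test implementation: records calls so tests can assert on them.
    class FakePaymentGateway implements PaymentGateway {
        final List<String> chargedAccounts = new ArrayList<>();

        @Override
        public boolean charge(String accountId, long amountCents) {
            chargedAccounts.add(accountId);
            return true;
        }
    }
    ```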

    In general, I feel like OOP principles like these are indeed used as dogma more often than not, in Java-land and .NET-land. There are a lot of legacy applications out there run by folks who’ve either forgotten how to apply these principles soundly, or were never taught to in the first place. But I think it’s a general programming trend rather than a problem with OOP or its ecosystems in particular. Betcha we’ll see similar things with Rust when it reaches the same age.

    • egerlach@lemmy.ca · 9 days ago

      SOLID often comes up against YAGNI (you ain’t gonna need it).

      What makes software so great to develop (as opposed to hardware) is that you can, on the small scale, do design after implementation (i.e. refactoring). That lets you decide whether you need an abstraction after seeing how your new bit fits in.

  • HaraldvonBlauzahn@feddit.org · 8 days ago

    I think OOP is most useful in two domains: device drivers and graphical user interfaces. The Linux kernel is object-oriented.

    OOP might also be useful for data structures. But you can just as well think of them as “data structures with operations that keep invariants” (a concept that is older than OOP).
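
    For instance, a small Java sketch of that older idea (the type is invented here): every public operation preserves the invariant, so callers can never observe a broken state.

    ```java
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Invariant: the internal list is always sorted.
    final class SortedBag {
        private final List<Integer> items = new ArrayList<>();

        void add(int value) {
            int idx = Collections.binarySearch(items, value);
            if (idx < 0) idx = -idx - 1; // binarySearch encodes the insertion point
            items.add(idx, value);       // insert in order: invariant preserved
        }

        int min() {
            if (items.isEmpty()) throw new IllegalStateException("empty bag");
            return items.get(0);         // correct only because the list is sorted
        }
    }
    ```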

      • Guttural@jlai.lu · 5 days ago

        Those are very powerful abstractions for sure, but did you notice how far their implementation is from standard Java OOP?

        That’s because polymorphism at the macro level is a functional concern, not something programmers speak about at conferences.

        One of my biggest gripes with Y2K-style OOP is that its proponents make lots of promises that don’t hold up in practice when you measure the outcomes. One such promise is that writing rigid class hierarchies leads to the potent abstractions you describe.

  • aev_software@programming.dev · 9 days ago

    The main lie about these principles is that they lead to less maintenance work.

    But go ahead and change your database model. Add a field. Then add support for it to your program’s code base. Let’s see how many parts of your well-architected, enterprise-grade software solution you need to change.

    • justOnePersistentKbinPlease@fedia.io · 9 days ago

      Sure, it might be a lot of places, or it might not (a well-designed microservice architecture says hi).

      What proper OOP design does is make the required changes predictable and easy to document, which in turn can make a many-step process faster.

      • Log in | Sign up@lemmy.world · 9 days ago

        I have a hard time believing that microservices can possibly be a well designed architecture.

        We take a hard problem like architecture and communication and add to it networking, latency, potential calling protocol inconsistency, encoding and decoding (with more potential inconsistency), race conditions, nondeterminacy and more.

        And what do I get in return? JSON everywhere? Subteams that don’t feel the need to talk to each other? No one ever thinking about architecture ever again?

        I don’t see the appeal.

        • Guttural@jlai.lu · 5 days ago

          It works for development velocity in huge organizations where teams aren’t closely integrated.

          Defining a contract that a service upholds, and that dependents can write code against, is valuable: teams can move at will as long as the contract is fulfilled.

          I’ll grant you that troubleshooting those systems is harder as a result. In the huge organization I was in, it was even the job of a dedicated non-coding specialist.

          But given the scope, it made a ton of sense.

          • Log in | Sign up@lemmy.world · 5 days ago

            But if the contract were an interface, for example, the compiler would enforce it on both sides, you would get synchronous communication and a common data format for free, and team A would know they’d broken team B’s code because it wouldn’t pass CI, and nothing drastic would happen in production.
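
            A minimal Java sketch of that idea (module layout and names hypothetical): the contract lives in shared code, and a breaking change fails compilation on both sides instead of failing in production.

            ```java
            // Shared contract, compiled into both teams' builds.
            public interface InventoryService {
                int stockLevel(String sku);
            }

            // Team A's implementation: stops compiling if the contract changes.
            class WarehouseInventoryService implements InventoryService {
                @Override
                public int stockLevel(String sku) {
                    return 42; // real lookup elided
                }
            }

            // Team B's consumer: also fails to compile against a changed
            // contract, so breakage is caught in CI rather than production.
            class ReorderJob {
                private final InventoryService inventory;

                ReorderJob(InventoryService inventory) {
                    this.inventory = inventory;
                }

                boolean needsReorder(String sku) {
                    return inventory.stockLevel(sku) < 10;
                }
            }
            ```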

            • Guttural@jlai.lu · edited · 5 days ago

              At that scale, contracts are multiple interfaces, not just one. And C#/Java/what-have-you interfaces are largely irrelevant; we’re talking way broader than that. Think protocols, like REST, RPC…

              • Log in | Sign up@lemmy.world · 5 days ago

                At that scale, contracts are multiple interfaces, not just one.

                Good job, then, that every compiler I can remember from the last 30 years or so can compile more than one file into a project.

                We’re talking past each other. I’m saying that I don’t see how adding networking makes anything simpler, and you’re saying that you need a bunch of network protocols. Why?

                I’m not saying you should never have networking, but then again, I wouldn’t call it a microservices architecture if you’re only using networking where it’s necessary. At that point you just have services, because it’s genuinely a network.

                It’s not microservices unless you’ve unnecessarily added a bunch of networking, and unnecessarily adding a bunch of networking is unnecessarily adding a bunch of complexity that I can’t see makes anything better.

  • Feyd@programming.dev · edited · 9 days ago

    If it makes the code easier to maintain it’s good. If it doesn’t make the code easier to maintain it is bad.

    Making interfaces for everything, or making getters and setters for everything, just in case you change something in the future makes the code harder to maintain.

    This might make sense for a library, but it doesn’t make sense for application code that you can refactor at will. Even if you do have to change something and it means a refactor that touches a lot, it’ll still be a lot less work than bloating the entire codebase with needless indirections every day.

    • Valmond@lemmy.world · 9 days ago

      I remember the recommendation to use a typedef (or #define 😱) for integers, like INT32.

      In case you, like, recompile it on a weird CPU or something, I guess. What a stupid idea. At least where I worked it was dumb; if someone knows any benefits, I’d gladly hear them!

      • SilverShark@programming.dev · 9 days ago

        We had it because we needed to compile for Windows and Linux on both 32- and 64-bit processors. So we defined all our Int32, Int64, uint32, uint64 and so on. There were a bunch of these definitions in the core header file, with #ifndef and such.

        • Valmond@lemmy.world · 9 days ago

          But you can use a 64-bit int on 32-bit Linux, and vice versa. I never understood the benefit of tagging everything. You have to go really far back in time to find a platform where an int isn’t compiled to a 32-bit signed int. There were also already long long and size_t… why make new ones?

          Readability maybe?

          • Consti@lemmy.world · 9 days ago

            Very often you need to choose a type based on the data it needs to hold. If you know you’ll need to store numbers of a certain size, use an integer type that can actually hold them; don’t make it dependent on a platform definition. Always using int can lead to really insidious bugs where a function works on one platform and not on another due to overflow.
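
            Java fixes its integer widths, so the platform issue doesn’t arise there, but the underlying advice (pick the type from the range of the data) still bites; a classic sketch:

            ```java
            public class OverflowDemo {
                public static void main(String[] args) {
                    int days = 25;
                    // int tops out at 2_147_483_647; 25 days of milliseconds
                    // exceeds that, so the 32-bit arithmetic wraps around.
                    int wrong = days * 24 * 60 * 60 * 1000;   // -2134967296
                    long right = days * 24L * 60 * 60 * 1000; // 2160000000
                    System.out.println(wrong);
                    System.out.println(right);
                }
            }
            ```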

            • Valmond@lemmy.world · 9 days ago

              Show me one.

              I mean, I have worked on 16-bit platforms, but nobody would use that code straight out of the box on some other, incompatible platform; it doesn’t even make sense.

              • Guttural@jlai.lu · 5 days ago

                Emulation code where you expect unsigned integers to wrap around instead of being UB is a good example, because it was guaranteed for programmers working on the emulated systems.

                • Valmond@lemmy.world · 5 days ago

                  That’s just how it works and has always worked. You can use an unsigned char on a 64-bit system and it’ll behave like it did on the Commodore 64. I don’t understand what you’re trying to show.

  • termaxima@slrpnk.net · 8 days ago

    99% of code is too complicated for what it does because of principles like SOLID, and because of OOP.

    Algorithms can be complex, but the way a system is put together should never be complicated. Computers are incredibly stupid, and will always perform better on linear code that batches similar operations together, which is not so coincidentally also what we understand best.
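
    As a rough Java illustration of the layout point (a sketch only; no benchmark numbers claimed): the flat array is traversed linearly through contiguous memory, while the object version chases a pointer per element.

    ```java
    // Object-per-element: each Particle is a separate heap allocation,
    // so summing them hops between scattered memory locations.
    class Particle {
        double x;
        Particle(double x) { this.x = x; }
    }

    public class BatchingDemo {
        public static void main(String[] args) {
            int n = 1_000_000;
            Particle[] objects = new Particle[n];
            double[] flat = new double[n];
            for (int i = 0; i < n; i++) {
                objects[i] = new Particle(i);
                flat[i] = i;
            }

            double a = 0, b = 0;
            for (Particle p : objects) a += p.x; // pointer-chasing loop
            for (double v : flat) b += v;        // linear, cache-friendly loop
            System.out.println(a == b);          // same sum, different memory traffic
        }
    }
    ```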

    Our main issue in this industry is not premature optimisation anymore, but premature and excessive abstraction.

    • douglasg14b@lemmy.world · 8 days ago

      This is crazy misattribution.

      99% of code is too complicated because inexperienced programmers make it too complicated, not because of the principles they mislabel and misunderstand.

      Just because I forcefully and incorrectly apply a particular pattern to a problem it is not suited to solve doesn’t mean the pattern is the problem. In this case I, the developer, am the problem.

      Everything has nuance, and you should only use the things that make sense for the problems your project faces.

      Crowbarring a solution to a problem a project isn’t dealing with into that project is going to lead to pain.

      Why this isn’t a predictable outcome baffles me. And why the blame goes to the pattern that was misapplied baffles me even further.

      • termaxima@slrpnk.net · 7 days ago

        No. These principles are supposedly designed to help those inexperienced programmers, but in my experience, they tend to do the opposite.

        The rules are too complicated, and of dubious usefulness at best. Inexperienced programmers really need to be taught to keep things radically simple, and I don’t mean “single responsibility” or “short functions”.

        I mean “stop trying to be clever”.

        • Guttural@jlai.lu · 5 days ago

          Wholeheartedly agree. OOP was supposed to offer guardrails that make it harder to write irremediably bad code. When you measure outcomes in the wild, the opposite is true: traditional OOP code with inheritance makes code hard to adapt and to reuse, as far as I’ve been able to measure.

  • entwine@programming.dev · 9 days ago

    I think the general path to enlightenment looks like this (in order of experience):

    1. Learn about patterns and try to apply all of them all the time
    2. Don’t use any patterns ever, and just go with a “lightweight architecture”
    3. Realize that both extremes are wrong, and focus on finding appropriate middle ground in each situation using your past experiences (aka, be an engineer rather than a code monkey)

    Eventually, you’ll end up “rediscovering” some parts of SOLID on your own, applying them appropriately, and not even realize it.

    Generally, the larger the code base and/or team (which are usually correlated), the more that strict patterns and “best practices” can have a positive impact. Sometimes you need them because those patterns help wrangle complexity, other times it’s because they help limit the amount of damage incompetent teammates can do.

    But regardless, I want to point something out:

    the more these doubts grow, leading me to believe that most of it is just dogma that has drifted far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

    This attitude is a problem. It’s an attitude of ignorance, and it’s an easy hole to fall into, but difficult to get out of. Nobody is “circlejerking OOP”. You’re making up a strawman to disregard something you failed at (e.g. successful application of SOLID principles). Instead, perform some introspection and try to analyze why you didn’t like it, without emotional language. Imagine you’re writing a postmortem for an audience of colleagues.

    I’m not saying to use SOLID principles, but drop that attitude. You don’t want to end up like those annoying guys who discovered their first native programming language, followed a Vulkan tutorial, and now act like they’re on the forefront of human endeavor because they imported a GLTF model into their “game engine” using assimp…

    A better attitude will make you a better engineer in the long run :)

    • iByteABit@programming.dev (OP) · 5 days ago

      I get your points and agree, though my “attitude” is mostly a response to a similar amount of attitude deployed by developers who swear by one principle to the death, and who, when you doubt an extreme usage of these principles, come at you by throwing acronyms instead of providing any logical argument as to why you should always create an interface for everything.

    • marzhall@lemmy.world · 8 days ago

      I dunno, I’ve definitely rolled into “factory factory” codebases where abstraction astronauts have gone to town for a decade on classes that only ever had one real implementation, and seen how far the cargo-culting can go.

      It’s the old saying: “give a developer a tool, they’ll find a way to use it.” Having a distaste for mindless, dogmatic application of patterns is healthy for a dev, in my mind.

  • Azzu@lemmy.dbzer0.com · 9 days ago

    The main thing you are missing is that “loose coupling” does not mean “create an interface”. You can have all concrete classes and loose coupling or all classes with interfaces and strong coupling. Coupling is not about your choice of implementation, but about which part does what.

    If an interface simplifies your code, use interfaces; if it doesn’t, don’t. The dogma of “use an interface everywhere” comes from people who saw good developers use interfaces to reduce coupling without understanding the context in which they were used, and who then thought “hey, so interfaces reduce coupling I guess? Let’s mandate them everywhere!”, which results in interfaces where they aren’t needed, without necessarily reducing coupling at all.
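
    A small Java sketch of the distinction (names invented): an interface can still couple callers to implementation details, while a plain concrete class can be loosely coupled.

    ```java
    import java.sql.ResultSet;
    import java.util.Optional;

    // An interface that still couples callers to the implementation:
    // they must hand it raw SQL, so swapping in a non-SQL backend is
    // painful despite the "loose" interface.
    interface UserStorage {
        ResultSet runUserQuery(String sql);
    }

    // A concrete class with loose coupling: callers depend only on a
    // small, implementation-neutral surface, with no interface at all.
    final class UserDirectory {
        Optional<String> emailFor(String userId) {
            return Optional.empty(); // real lookup elided
        }
    }
    ```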

    • HereIAm@lemmy.world · 8 days ago

      I think a large part of “interfaces everywhere” comes from unit testing and class composition. I had to create an interface for a Time class because I needed to test cases around midnight. It would be nice if testing frameworks allowed you to mock concrete classes (maybe you can? I haven’t looked into it, honestly); it could reduce the number of unnecessary interfaces.
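
      For what it’s worth, Mockito can mock concrete non-final classes out of the box; a minimal sketch with a hypothetical class (final classes additionally need the inline mock maker, which is the default from Mockito 5 onward):

      ```java
      import static org.mockito.Mockito.mock;
      import static org.mockito.Mockito.when;

      import java.time.LocalTime;

      // A concrete class with no interface (hypothetical example).
      class WallClock {
          LocalTime now() {
              return LocalTime.now();
          }
      }

      class MidnightTest {
          void chargeIsDatedCorrectlyAtMidnight() {
              // Mockito subclasses WallClock at runtime; no interface needed.
              WallClock clock = mock(WallClock.class);
              when(clock.now()).thenReturn(LocalTime.MIDNIGHT);
              // ... exercise the midnight edge case with the stubbed clock ...
          }
      }
      ```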

      • Guttural@jlai.lu · 5 days ago

        I’ve had to do that too, for tests specifically as well, and making clocks an interface on the spot was trivial. I did it when I needed it though, and not ahead of time.

        A Time interface is waaaay too broad. Turns out, I only needed something to give me programmable ticks for my tests, which is much narrower in scope than abstracting something as general as time.

        I’d say abstractions designed to support tests need to be very narrow in scope, and focused on solving the problem at hand.
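
        The JDK even ships that narrow abstraction as java.time.Clock, so tests can pin time without a hand-rolled Time interface; a minimal sketch:

        ```java
        import java.time.Clock;
        import java.time.Instant;
        import java.time.LocalTime;
        import java.time.ZoneOffset;

        class ClockDemo {
            public static void main(String[] args) {
                // Production code takes a Clock; tests hand it a fixed one.
                Clock nearMidnight = Clock.fixed(
                        Instant.parse("2024-01-01T23:59:59Z"), ZoneOffset.UTC);

                LocalTime now = LocalTime.now(nearMidnight);
                System.out.println(now); // 23:59:59 on every run: reproducible
            }
        }
        ```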