|
I hear you, and to a degree I agree. I do format my code, and I don't go out of my way to make the generated code cryptic, but I do concentrate on maintaining the generator. If it generates code that needs to be tweaked, then it's the generator that needs to be tweaked, IMO.
I'm running into that right now with a lexer generator I made that I'm using as a pre-build step in another project.
I fixed the lexer generator. =)
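To illustrate the principle (tweak the generator, not its output), here's a minimal, hypothetical sketch in Python - the token spec and the emitter are made up for the example, not my actual lexer generator:

```python
import re

# Hypothetical token spec; the real generator's input format isn't shown here.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]

def generate_lexer_source(spec):
    """Emit Python source for a lexer. If the output is wrong, fix this
    emitter - never hand-edit the generated file."""
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in spec)
    return (
        "import re\n"
        f"_PATTERN = re.compile({pattern!r})\n"
        "def tokenize(text):\n"
        "    for m in _PATTERN.finditer(text):\n"
        "        if m.lastgroup != 'SKIP':\n"
        "            yield (m.lastgroup, m.group())\n"
    )

# A pre-build step would write this source to a file; here we just exec it.
namespace = {}
exec(generate_lexer_source(TOKEN_SPEC), namespace)
tokens = list(namespace["tokenize"]("x = 42"))
```

The point of the sketch: the generated `tokenize` is disposable output, and the generator is the only thing under maintenance.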
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|
I'm of the camp that you don't reach for new tools or one-off implementations that are like nothing you've done in the past.
I'm of the camp that you don't worry about performance until you actually have evidence that you have a performance issue or you really know in advance that performance is the driving factor of a design.
I'm of the camp that you start off simple, write a clean implementation, and if you need to bring in other technologies like AWS lambda, distributed computing, whatever, you can do so once there's evidence that you *need* to do so. Again, unless that need is quantifiable up front and becomes the driving factor for the design.
I *am* biased, and if someone is going to suggest a totally different tool than what's currently in the chest, I want it vetted and I want that person to be responsible for documenting it.
So in writing this, I realized something. In this impromptu design meeting that was pretty much about implementation ideas for a solution, I realized that the requirements are incomplete. And with an incomplete (and by this I don't mean that one needs a *complete* set of requirements, just something sufficient for the major talking points) understanding of the requirements, you can have as many ideas as you want, but they are all pretty much worthless.
modified 26-Nov-19 10:03am.
|
|
|
|
|
I agree somewhat.
Sometimes performance requirements are obvious before you even start coding, e.g. a high-volume website that needs to be sensible with resources. However, I'd be preaching that the perf should be baked in at the architecture level, not the line-by-line optimisation level. I.e. settle on a server framework that's been proven to be fast; bake in caching; make sensible database design decisions. I wouldn't be optimising your sorting routines just yet, though.
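For illustration, "bake in caching" at prototype level can be as simple as a decorator - this is a toy sketch with made-up names, and a real site would lean on the framework's caching layer or something like Redis/memcached instead:

```python
import time
from functools import wraps

def ttl_cache(seconds):
    """Toy time-to-live cache for illustration only."""
    def decorator(fn):
        store = {}  # args -> (expiry time, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]        # still fresh: skip the expensive call
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

calls = {"n": 0}

@ttl_cache(seconds=60)
def product_listing(category):
    calls["n"] += 1                  # stands in for an expensive database query
    return f"rows for {category}"

product_listing("books")
product_listing("books")             # served from cache; the "query" ran once
```

The architectural decision is *that* results are cached and for how long; the implementation behind the decorator can be swapped out later.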
I 100% agree with not bringing in any technology until it's needed. We've had many, many projects fail because they were over engineered / overly complex. Instead of prototyping in Python, or building a monolithic prototype in .NET Core, systems were using literally dozens of frameworks and technologies spread over dozens of web services and 90% of the efforts ended up in DevOps, not Dev. Instead of focusing on the goal, the focus was on the projects.
The one point I will differ with you is that incomplete requirements don't faze me that much. We had a recent project that failed because a year was spent analysing requirements. If we'd just thrown something out there, played around and then worked out what the actual real world requirements were, we'd be far ahead. Instead we tried to predict the future. There's a point where you hold your breath and jump.
cheers
Chris Maunder
|
|
|
|
|
Chris Maunder wrote: is that incomplete requirements don't faze me that much.
Oh, no disagreement there! It was actually one of my points -- let's start with a simple and clean approach, get some datapoints on how well it's working, and then decide if a more involved solution is required.
In my particular case, because performance kept coming up, it was revealing that the amount of handwaving and opinions by everyone could probably be reduced with a couple of well-measured facts - and that a short investigation was probably worthwhile.
It's also somewhat annoying that there is no clear decision maker in this process. (Of course I want that person to be me!) But the way the game is being played is, let's get everyone's opinion and watch them argue / flail / wave unsubstantiated opinions like banners at a jousting competition. I actually very much dislike that approach, but I also recognize I could be very much in the wrong or minority with my dislike.
Sigh. The balance between "we use these technologies, does this problem warrant introducing a new technology?" The balance between how much time do we spend investigating concrete datapoints that can guide us on an implementation vs. try it and see what happens? In this case, it's really a ridiculously simple process that can run autonomously, there's just a couple specific things it needs to do.
It's funny, I actually have a conservative approach with regards to introducing newfangled technology and a rather aggressive approach to "let's leap and see what happens." I think though, that I'm comfortable leaping because I've learned how to write the actual code in a way that, if a course correction is required, it's not usually a big deal. The problem is, I don't trust other people to have that skill!
|
|
|
|
|
Marc Clifton wrote: it seemed that the amount of handwaving and opinions by everyone could be reduced with a couple well-measured facts
There's your problem right there. You're letting facts get in the way of a good story.
cheers
Chris Maunder
|
|
|
|
|
Chris Maunder wrote: incomplete requirements
The reason many developers fear incomplete requirements is that they know they're the ones who lose when the blame game starts.
|
|
|
|
|
|
Marc Clifton wrote: I'm of the camp that you don't reach for new tools or one-off implementations that are like nothing you've done in the past.
Then how do you innovate or advance?
Marc Clifton wrote: I'm of the camp that you don't worry about performance until you actually have evidence that you have a performance issue or you really know in advance that performance is the driving factor of a design.
Performance is always a factor. Do you also not bother about security until that becomes a factor? I know what you're getting at is that you shouldn't "gold plate" things, but some things are often too hard to retro-fit so you should think about them even if they are not explicit requirements.
|
|
|
|
|
F-ES Sitecore wrote: Then how do you innovate or advance?
By investing in R&D. That shouldn't be part of normal production.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Ain't nobody got time fo' that
|
|
|
|
|
Money you mean, since time is for sale.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
F-ES Sitecore wrote: Then how do you innovate or advance?
By making an informed decision (ok, can-of-worms) rather than just someone saying "oh, here's something I've played with that will work perfectly." Riiight. Show me.
I've worked with too many developers that leaped onto Ruby, for example, and it turned out that the only innovation that occurred was how much faster you could screw up a project.
|
|
|
|
|
I agree with almost all of your points, with the possible exception of having a complete set of requirements before starting the design.
Some of the requirements may not be apparent until you have already done part of the design. For example, the original design might have envisioned reading certain personal information from a public database, and immediately discarding it after use. Due to responsiveness and/or reliability constraints, it turned out to be necessary to keep a local cache of this personal information in a local database. This requires following the legal requirements for databases that store personal information, which is a whole new kettle of fish.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Daniel Pfeffer wrote: with the possible exception of having a complete set of requirements before starting the design.
Yes, I agree. I'll edit the post.
|
|
|
|
|
I'm of the camp that if you start off simple and write a clean implementation, performance will not be a problem.
|
|
|
|
|
One approach we've used in the past is the Pugh Matrix, which makes sure that all parties are heard, and that all criteria are taken into consideration.
The Wikipedia article on the Pugh Matrix is somewhat incomplete; I found better references online.
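The scoring itself is trivial - here's a toy Python sketch of a Pugh matrix, where the criteria, weights, and options are all made up for the example:

```python
# Weighted decision criteria (weights are illustrative).
criteria = {"performance": 3, "maintainability": 2, "cost": 1}

# Each option is rated -1 (worse), 0 (same), or +1 (better)
# against a baseline concept, per criterion. All values invented.
options = {
    "keep current stack": {"performance": 0, "maintainability": 0, "cost": 0},
    "AWS Lambda":         {"performance": 1, "maintainability": -1, "cost": 1},
    "rewrite in-house":   {"performance": 1, "maintainability": 1, "cost": -1},
}

def pugh_score(ratings, weights):
    """Weighted sum of the ratings - the heart of the matrix."""
    return sum(weights[c] * ratings[c] for c in weights)

ranked = sorted(options, key=lambda o: pugh_score(options[o], criteria),
                reverse=True)
```

The real value isn't the arithmetic, of course - it's that every party's criterion gets written down and weighted in the open.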
|
|
|
|
|
Hear! Hear!
And keep me from punching they who respond to new projects with, "ooh, this would be a good opportunity to try out New Third-Party Product X which I read about in a blog this morning!"
|
|
|
|
|
Performance goals should be in the requirements.
We're not talking about optimization here, but high level performance goals.
something like, for example :
When the application is launched, it should be responsive in less than X seconds (which could mean deciding what initialization can be postponed instead of done at startup)
or
3D rendering should run at Y frames/second (which could mean asking how we can reduce the complexity of the model to achieve that goal)
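Goals like these can even be turned into automated checks. A toy Python sketch - the budget value and the `start_application` stub are made up for illustration:

```python
import time

STARTUP_BUDGET_SECONDS = 2.0   # the "X seconds" from the requirement (invented here)

def start_application():
    # Stands in for real initialization; deferred work would
    # happen after this returns, not inside it.
    time.sleep(0.05)

t0 = time.perf_counter()
start_application()
elapsed = time.perf_counter() - t0
assert elapsed < STARTUP_BUDGET_SECONDS, f"startup took {elapsed:.2f}s"
```

Run as part of CI, a check like this keeps the high-level goal honest without dragging you into line-by-line optimization.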
I'd rather be phishing!
|
|
|
|
|
I've probably never had a single complete set of requirements...
That's why I add my own - and working mostly on the web, performance is one I always add...
I also do nothing until it is written in emails - even incomplete ones...
"The only place where Success comes before Work is in the dictionary." Vidal Sassoon, 1928 - 2012
|
|
|
|
|
Marc Clifton wrote: and if you need to bring in other technologies like AWS lambda, distributed computing, whatever, you can do so once there's evidence that you *need* to do so
In the case of AWS Lambda (or Azure Functions in my case), I prefer to use them unless I can't.
They're simple, easy, cheap and they scale automatically out of the box.
Many programmers are like "I use the right tool for the job", but when new tools come around they're like "it's just a fad so I'm sticking to what I know."
Even when these tools solve actual problems and gain popularity and maturity, a lot of these "right tool for the job" people refuse to work with new technologies.
I (think I) know of at least a few people here who just outright refuse to work with cloud or containers or even anything that isn't vanilla JS (or jQuery) and HTML.
A tool you know may be right for the job simply because you know it, but if something is gaining traction, like in this case serverless solutions, I'll sure as hell try them out if I think I've got something for it.
If I started "simple" with "what I know" on each new project, I'd still be writing giant WinForms monoliths in VB.NET.
I'm just saying there comes a time when you've got to try something new even when it *now* looks like the old would suffice.
That said, always wait for version 2
|
|
|
|
|
You would have run screaming from the Bank; the manager was forever proposing we use some *NEW* tech that he had read about, or that some sales person had put a flea in his ear about. Drove me nuts and eventually drove me out.
I have seen both the minimal spec project and the one where they attempt to spec every detail out before starting the development. I'm still ambivalent about which one is better.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
|
|
|
|
|
All the LEDs on my washing machine are blinking. According to the official dealer and an independent mechanic, the main board is faulty and needs replacing. Their prices were similar, but I'll come back to that in a bit.
Having replaced many parts in many different computers, one would expect a similar process. Open the hood, unscrew the motherboard, put in the new one you ordered from the site, and boot it. Since the washing machine lacks a display, we can safely assume that this new board does not come with a graphics adapter, sound adapter, integrated network adapter, nor with large amounts of DDR3 RAM.
To my surprise, the price of the logic board was 60% of the total price of a new machine (at the current price - the exact type is still for sale today, so it's not like ordering hardware that's no longer available), and it is built in a way that a regular consumer cannot replace it easily. The logic board inside the washing machine sells for roughly 150 euros - but you'd have to add an hourly rate and travel costs, and then it adds up to 60% of the new price.
All the other hardware still works; it's just a faulty logic board. Still, the independent mechanic gave me the advice to "buy a new machine", since it makes more economic sense. Must have been working too much with computers, as it makes no sense to me at all. It reeks of planned obsolescence: making it hard to repair and easy to break down, to limit the lifespan. Seek out their official website, and you get marketing text like "environmental sustainability" and "fair trade".
The rest of the old machine's hardware would be "recycled", meaning it probably[^] gets dumped somewhere in Africa. Very environmentally friendly and fair. Having a kid burn the plastic off a copper wire is a form of recycling, and that's good for the environment.
Remember how some printers have an internal counter[^] after which they stop working? Same thing
So, why don't we do that with software? After, say, 4000 executions, simply stop the software and demand that the customer buys a complete new license. Let's add to that OriginalGriff's headphone economics - and sell new licenses only in packs of 20. And to top all that, include the words "fair trade" on your website.
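A tongue-in-cheek Python sketch of that printer-style counter (everything here is invented, obviously):

```python
class FairTradeLicense:
    """Satirical sketch of a printer-style execution counter."""

    def __init__(self, limit=4000):
        self.limit = limit   # executions before planned obsolescence kicks in
        self.count = 0

    def run(self):
        self.count += 1
        if self.count > self.limit:
            raise RuntimeError("License exhausted. Please buy a fresh pack of 20.")
        return "did some work"

lic = FairTradeLicense(limit=3)          # tiny limit so the demo is quick
results = [lic.run() for _ in range(3)]  # works three times...
# ...and the fourth call raises RuntimeError
```

A real scheme would of course hide the counter somewhere the customer can't reset it - just like the printers do.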
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
I didn't know Apple made washing machines.
|
|
|
|
|
Sure. The iBleach.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
|