Design and Architecture
What is Google[^]?
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
I'm keeping an eye on this user. Seems potentially spammy to me.
Our company is currently running .NET 4.5 and using VB as the back-end scripting language. All of our forms basically use the classic ASP.NET Web Forms approach. In other words, we are not taking advantage of any of the ASP.NET Core MVC features.
We are getting ready to start our new design process and wanted to get some recommendations on what framework and mythologies/C# version/database methods/Web UI methods to use in our new design.
Any advice would be appreciated.
Thanks,
Steven
The answer is: It Depends!
What skills does your team have / how much new learning are they able to do?
Are you intending to run the old stuff concurrently with the new stuff or is it a big bang approach?
What is the problem domain? That will dictate the best tools for the job.
Are you willing to risk leading edge (which may be buggy) or do you want trailing edge (which may cause upgrade issues in the future) or do you want to play it safe (e.g. go with the next-to-latest version)?
Who are your clients / customers? What will they be most comfortable using?
Are there off-the-shelf packages that can do what you want which could save you a lot of development time?
Stevey T wrote: "recommendations on what framework and mythologies/C# version/database methods/Web UI methods, to use"
Do not use mythologies: they are not real!
Hello,
In my company we had a system on the .NET Framework platform. It was made up of an orchestrator and several REST API services on top of a monolithic Oracle DB with a huge PL/SQL library.
We used distributed transactions for the operation (import of data), because of the rollback possibilities if one request failed, etc. The orchestrator in front of the REST API services (microservices) organized the different requests into one transaction.
We had to upgrade the system to .NET Core, but .NET Core doesn't support DTC any more!
Has anyone had a similar situation, and what did you do about it?
Or does anyone have any comments on this issue/problem? Namely, that if one request fails in a
big operation of many requests, the database data will not be consistent unless a rollback is done,
when we use many "micro"services on top of one monolithic database.
BR
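One common alternative when MSDTC is off the table is the saga pattern: the orchestrator runs each step and, if a later step fails, invokes compensating actions in reverse order. Here is a minimal sketch of that idea; all of the type and method names are hypothetical, not taken from the poster's system:

```csharp
using System;
using System.Collections.Generic;

// Sketch of saga-style orchestration: each step carries a forward
// action and a compensating action that undoes it if a later step
// fails. All names here are hypothetical illustrations.
public record SagaStep(string Name, Action Execute, Action Compensate);

public static class SagaOrchestrator
{
    // Runs the steps in order; if any step throws, the completed
    // steps are compensated in reverse order and the exception is
    // rethrown so the caller knows the operation failed.
    public static void Run(IEnumerable<SagaStep> steps)
    {
        var completed = new Stack<SagaStep>();
        try
        {
            foreach (var step in steps)
            {
                step.Execute();
                completed.Push(step);
            }
        }
        catch
        {
            while (completed.Count > 0)
                completed.Pop().Compensate();
            throw;
        }
    }
}
```

The trade-off is that each REST service has to expose an "undo" operation (e.g. delete the rows it just imported), and consistency becomes eventual rather than guaranteed by the database. Libraries such as MassTransit offer saga support if you would rather not roll your own.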
I think a DTC implies "multiple" "micro services".
A transaction that spans multiple databases is not what I would call micro.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
We are thinking of building an internal Kaggle-like platform to run hackathons in our company. I am wondering what would be the best technologies to use for this and how to go about building such a site. Any help appreciated. Thanks.
So we need to first visit Kaggle to know what you're talking about? And then we come back here to ask you questions about Kaggle?
I have always had the view that duplication is bad. There are always exceptions, but most of the time it seems bad to me. Unit tests are one area where I tolerate it more, but in production code it's something that I rarely find desirable.
Some people duplicate things two or three times and only eliminate the duplication on the third or fourth time. I really can't understand why somebody would blindly follow this rule. I understand the argument that we may not know how to refactor something if there aren't enough instances of the duplication, but I find that is rarely the case. At the least, if you don't know how best to refactor something, then keep it simple. The rule seems crazy, as why would you do something if you know it's bad? Eliminating duplication is usually quick and, in my opinion, usually makes things much easier to read, particularly when you have half as much code to read/understand.
As an example, imagine we want to format a number as a currency. We could have the following: x.ToString("£0.##")
Having it once seems fine, but not really more than once. Surely, as a very simple refactor, something like the following would be better?
Format.AsCurrency(x)
Surely the time saved in readability would outweigh the cost of writing it, and it solves the problem of duplication.
Deliberately duplicating code doesn't make much sense to me, but maybe I've been working with people who have taken things a bit too far?
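As a concrete illustration of the refactor being described, a minimal version of that helper could look like the following. Format and AsCurrency are the names from the post above; the rest is a sketch (the invariant culture is an assumption to keep the output stable):

```csharp
using System.Globalization;

// One canonical home for the currency pattern, so the "£0.##"
// string appears exactly once in the codebase.
public static class Format
{
    private const string CurrencyPattern = "£0.##";

    public static string AsCurrency(decimal x) =>
        x.ToString(CurrencyPattern, CultureInfo.InvariantCulture);
}
```

The helper costs a few lines once, and every call site shrinks to Format.AsCurrency(x).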
The principle of Don't Repeat Yourself (DRY) is one of those areas that is taken too far by some code zealots. Let's take your ToString example here: you notice after a couple of times that you have the same ToString code, so you decide to introduce an AsCurrency method. That seems straightforward enough, but you're working in a large codebase, so you don't notice that the same logic has been added in pieces of the code that you don't visit. Worse still, somebody has done this elsewhere:
<pre>public class FormatConstants
{
    public const string GBP = "£0.##";
}
...
return myItem.ToString(FormatConstants.GBP);
...</pre>
What we're seeing here is that others have attempted to avoid repeating code with varying degrees of success. In all of these cases there is an element of repeated code, because different people have taken different approaches to avoid repeating code. Even if the code doesn't look exactly the same, you are repeating the intent of the code. Now you have introduced yet another way to represent this same conversion. In six months' time, someone else comes along and has to add a currency ToString in a few places, so they refactor their code to avoid adding repeated code. If you're lucky, they have looked through the codebase for other places that do the conversion and picked an already-written one; if they've searched using ToString("£0.##") then they might not have found the match, so they end up adding yet another new way of formatting this one item. What has happened here is that the drive to remove duplication has ended up creating a mess - and this is just with a simple example.
The bottom line is, DRY is a great principle and one that you should try to stick to if it makes sense but you have to accept that, in some cases, you aren't going to achieve it and you shouldn't beat yourself up over it.
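One partial mitigation (a sketch, not a prescription) is to make the canonical helper discoverable where people will actually look: as an extension method on the numeric type, so code completion surfaces it before anyone reaches for ToString or goes hunting for a constants class. The names below are hypothetical:

```csharp
using System.Globalization;

public static class CurrencyExtensions
{
    private const string GbpPattern = "£0.##";

    // Appears in completion on any decimal, which makes the
    // canonical formatter easier to find than a constants class
    // buried elsewhere in the codebase.
    public static string ToGbp(this decimal value) =>
        value.ToString(GbpPattern, CultureInfo.InvariantCulture);
}
```

Discoverability doesn't guarantee anything, but it raises the odds that the next developer finds the existing helper instead of writing a fourth one.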
Contrast duplicating, inlining and premature optimizing.
Or the fact the "pattern" may not yet have fully materialized in the mind.
It's not the first time one has pulled together "similar" code only to realize you just dug a deep pit because you didn't consider the full impact.
Which is very easy in the "agile" environment where they pump out bits of code in ignorance of the big picture.
As for the ToString() example, I use xxxAsSomething() ... which inevitably winds up including xxxAsSomethingDifferent(), etc.
I've now established a pattern that creates functions out of everything ... even "one-timers". And then programming becomes a real drag.