|
Gurigraphics wrote: Visual code is more faster to order and sort these functions. No, not by definition. Build a form using VS' form-designer and find out why.
If you make a claim, make sure it can be backed up. If it isn't, it will be shot down
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
There are 3 different levels:
1-Textual
2-Visual
3-Configurational (Visual and textual blend.)
If the "configurational element" is not encapsulated in a "visual element", we do not have a module, and fast ordering and sorting is not possible.
Plurality needs to be modular, not just visual, textual, or configurational.
|
|
|
|
|
Hello,
I am relatively new to this type of coding, I have done a few years of data science but not much application development. I want to create a program which allows for multiple users to be operating at the same time on a network and submitting information. Think of it as a panel of operators each providing details on different topics. They submit those details individually which sends it along to a master operator for review and final commitment to a MySQL database on the web. Is there a language that would be best suited for this? I have done some work with Java and it was relatively slow communicating with a database. Is this a task for Ruby or Python? Where should I start? Any help would be greatly appreciated!
|
|
|
|
|
My first instinct is WCF[^]. If you have C# experience it isn't too hard to learn, especially with a relatively simple service. If you can guarantee that each user is providing details on separate topics as you describe, you won't even have to handle shared data. You can set up the service to run on any connections you'd like: pipes, HTTP, TCP, etc. If you don't have C# experience it might be a bit much if you're on a deadline, though, and I'm sure other CPers will offer some alternatives.
The "Getting Started Tutorial" and "Basic WCF Programming" should be the only topics you really need to review for this problem as described. ServiceBehavior, ServiceContract, OperationContract, and DataContract are the main things you'll be interested in, besides the basics of how to set up the service itself.
EDIT: This suggestion is mainly if you're looking for a service-oriented approach.
|
|
|
|
|
What sort of "information"?
There are any number of solutions available that allow for "workflow / document" management: uploading; approving; rejecting; committing; sharing; reporting; versioning; etc.
(For example) You can do all that for $5 per month using SharePoint Online.
|
|
|
|
|
kmk513 wrote: Think of it as a panel of operators each providing details on different topics...
That is a basic description of a client server architecture which has been around since perhaps the 60s (at least conceptually). And because of that it has the advantage that there are many books on it which address it indirectly and even specifically.
kmk513 wrote: I have done some work with Java and it was relatively slow communicating with a database
I have decades of experience with Java, C#, C++, and databases. Performance problems come up in the following areas:
1. Requirements - most significant
2. Architecture
3. Design
4. Technology (software and hardware) - least significant
So no, Java is not "slow" in real-world business applications for the vast majority of businesses out there. It certainly would not have any impact on the very general description of your project.
I should note that attempting to switch to a new technology on the presumption that it is "better", while giving up on a known technology, is in fact more likely to lead to problems due to lack of knowledge. It makes for a great way to learn the technology, but lessens the chance of success. There is nothing wrong with learning, and it is a cost that businesses must shoulder for long-term viability, morale, and the ability to make new hires. But the downside must be acknowledged.
|
|
|
|
|
Before I begin to research this idea, I'm hoping someone who knows might be able to tell me if this is worthwhile pursuing, or if it's beyond the scope of all but the most hardcore developer.
Would it be possible for a user-mode application to take advantage of the virtualization instructions in Intel and AMD CPUs in order to create some sort of sandboxed environment in which I could put untrusted code?
I wouldn't be looking to emulate an entire PC the way Virtual Box does, but could I create a very simple virtual machine that might be capable of something useful?
Thank you for any learned input you may have.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Only by using a driver in the kernel; it's only kernel code that can get to the hardware in that way. But isn't there a third-party utility that will do this for you? Just a guess; I have never looked into this.
|
|
|
|
|
Since VirtualBox is open-source, I would start there (breaking down the source code).
|
|
|
|
|
|
Machine virtualization protects a lot but requires a lot of complexity.
Richard Andrew x64 wrote: order to create some sort of sandboxed environment in which I could put untrusted code?
More limited forms exist to do that depending on what you actually want to do.
Both C# and Java have mechanisms that allow plugins that, with care by the app owner, can very tightly control what the code can do. In general I suspect C++ doesn't, but C++/CLI (or whatever it is called now) probably does, in the same way that C# does.
But if you really want machine virtualization then don't even do it yourself. Rather
1. Require an internet API (REST, probably).
2. Document what the API does extensively.
3. Document, to whatever extent you want, how a developer codes to that API and then sets up their own server on one of the vast array of hosting sites now available (AWS, etc.)
Then, in your application, you provide a registration service that allows the other developers to register their server. If you want, add a validation process for their API.
Of course on your end you stringently validate input and output of the calls to those servers via your server. You then implement your business functionality to use those external sites.
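The registration-plus-validation flow described above could be sketched roughly like this. Everything here is invented for illustration (the function names, the `/api/v1/details` path, the payload shape), and the HTTP call is injected as a parameter so the sketch stays self-contained:

```python
# Hypothetical sketch: register external servers, then call them with
# strict validation of both the outgoing payload and the response.
registry = {}

def register_server(developer_id, base_url):
    """Developers register the base URL of their own hosted service."""
    if not base_url.startswith("https://"):
        raise ValueError("only https endpoints accepted")
    registry[developer_id] = base_url

def call_external(developer_id, payload, http_post):
    """Call a registered server; `http_post` is injected (e.g. a real
    HTTP POST in production, a stub in tests)."""
    if developer_id not in registry:
        raise KeyError("server not registered")
    if not isinstance(payload.get("topic"), str):      # validate input
        raise ValueError("payload must carry a 'topic' string")
    result = http_post(registry[developer_id] + "/api/v1/details", payload)
    if not isinstance(result, dict) or "status" not in result:  # validate output
        raise ValueError("malformed response from external server")
    return result
```

The point of injecting `http_post` is that the stringent validation around the call can be exercised without any network at all.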
|
|
|
|
|
This is more of a general design concept, but if I had a class and I intend to have a list of those objects (around 5000), and I need to perform some function on each of them, should I design around a function that is part of the class, or a separate function that takes a collection of these objects? I know there are always situations where one may be better than the other, but when first starting to think about the design, which way would be recommended?
For example:
class Person {
    float HungerLevel;
    public void AdjustHunger(float amount) { HungerLevel += amount; }
}
public void AdjustAllHunger(List<Person> personList, float amount) { foreach (Person p in personList) { p.AdjustHunger(amount); } }
or
class Person {
    public float HungerLevel; // field must be accessible for the caller to touch it
    public void AdjustHunger(float amount) { HungerLevel += amount; }
}
public void AdjustAllHunger(List<Person> personList, float amount) { foreach (Person p in personList) { p.HungerLevel += amount; } }
I would imagine that the second option is more efficient if I need to change all objects in the collection by the same value, but I'm not sure. I guess it could even depend on what language I'm using and how it tries to optimize the code. Does a call to a function have a higher cost than a direct access to an object's variable? Expanding on this example, what if the hunger change were based on another of the object's variables?
class Person {
    float HungerLevel;
    float Metabolism;
    public void AdjustHunger(float amount) { HungerLevel += amount * Metabolism; }
}
public void AdjustAllHunger(List<Person> personList, float amount) { foreach (Person p in personList) { p.AdjustHunger(amount); } }
vs.
class Person {
    public float HungerLevel; // fields must be accessible for the caller to touch them
    public float Metabolism;
    public void AdjustHunger(float amount) { HungerLevel += amount; }
}
public void AdjustAllHunger(List<Person> personList, float amount) { foreach (Person p in personList) { p.HungerLevel += amount * p.Metabolism; } }
I don't have a project right now that deals with this, but the concept popped into my head and I was thinking about how I would start designing it. I could probably set up a test case and try it out with a sample, but is that what most programmers do at the design stage, creating multiple test implementations? Is there some sort of 'design theory' that I would or should be using? I'm self-taught, so I don't know if this is something that is covered in a more structured training environment (so please excuse me if this is a dumb question).
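On the "does a method call cost more than touching the field" question, the honest answer is usually to measure. Here is a rough micro-benchmark sketch in Python (all names invented); the relative numbers will differ by language and optimizer, so treat it as a template for the experiment rather than an answer:

```python
import timeit

class Person:
    def __init__(self):
        self.hunger = 0.0
        self.metabolism = 1.5

    def adjust_hunger(self, amount):
        self.hunger += amount * self.metabolism

people = [Person() for _ in range(5000)]

def via_method(amount):
    # Option 1: each object applies the change itself.
    for p in people:
        p.adjust_hunger(amount)

def via_field(amount):
    # Option 2: the caller reaches into the object's fields directly.
    for p in people:
        p.hunger += amount * p.metabolism

# Both loops leave every Person in the same state; timeit exposes
# whatever per-call overhead the method dispatch adds.
print("method:", timeit.timeit(lambda: via_method(0.1), number=100))
print("field: ", timeit.timeit(lambda: via_field(0.1), number=100))
```

Whatever the numbers say, the usual advice applies: prefer the encapsulated method unless profiling shows the loop is actually a bottleneck.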
|
|
|
|
|
I presume you would need to store the result in a database, so I would do the processing in the database. Define the list, pass it to a stored proc that does the calculation, writes to the database (UPDATE), and returns you the result.
I avoid double handling, i.e. fetch from the database, process the function, and write back to the database. Make the database do its job.
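As a sketch of the single-UPDATE idea (using an in-memory SQLite database purely for illustration; the table and column names are invented):

```python
import sqlite3

# "Let the database do it": one UPDATE statement instead of
# fetch -> modify in application code -> write back row by row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, hunger REAL, metabolism REAL)")
conn.executemany("INSERT INTO person (hunger, metabolism) VALUES (?, ?)",
                 [(10.0, 1.0), (5.0, 2.0)])

# Single round trip: the engine applies the calculation to every row.
conn.execute("UPDATE person SET hunger = hunger + ? * metabolism", (0.5,))
rows = conn.execute("SELECT hunger FROM person ORDER BY id").fetchall()
print(rows)  # -> [(10.5,), (6.0,)]
```

In a real LOB system the UPDATE would live in a stored procedure, but the shape of the win is the same: one statement, no per-row traffic between application and database.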
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
I actually never thought about it being a separate database, as I was thinking more in a video game setting. The thought came up when looking at a park simulation (Planet Coaster) and trying to decide how I would tackle the same situation. Like I said, I don't have a project that has run into this situation, but I'm curious about how I would approach this concept.
In this case the function doesn't really care about the result, just the computation. Is there a word or definition for this concept of 'figuring out how best to store and manipulate data' in programming terminology? I feel as I am floundering when it comes to these thoughts. How to best approach data structure and program structure. I feel like an idiot as I have no idea what words to use when asking about this kind of situation.
|
|
|
|
|
hpjchobbes wrote: I was thinking more in a video game setting
That just shows my prejudice; I'm a LOB developer and EVERYTHING revolves around the database.
I doubt there is a performance difference that is significant. I only split a function out if it is to be reused, or if the method is too complex and splitting it makes sense when supporting the app (there's that LOB thinking again).
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Nobody gets it right the first time; design is an iterative process. A good designer is curious and tries to think of alternatives.
In your case, if you're in the habit of increasing or decreasing "everyone's" welfare by a certain % or fixed amount, you might consider adding a "static" % or amount to the Person class that is always used to affect the HungerLevel, but only if it has a value set by a game rule (in other words, it functions as a global weighting factor).
static float ExtraPain { get; set; } = .1f;
..
float _hunger = 10f;
float HungerLevel { get { return _hunger * (1f + ExtraPain); } }
|
|
|
|
|
hpjchobbes wrote: but is that what most programmers do at the designing stage
Yes.
hpjchobbes wrote: so I don't know if this would be something that is covered in a more structured training environment
I doubt it. The vast majority of business problems cannot be taught, because by the time you have taught one you have created the solution anyway, and there are just too many of them to do that.
If you are interested I suggest the following site to get a real idea of how big projects actually got big. It is always via iteration over time.
highscalability[^]
hpjchobbes wrote: I would imagine that the second option is more efficient
Based on your original description and your examples I would use neither.
First you must load the actual 'algorithm' via some mechanism, and that can fail. So you certainly can't load all of them and then process each. And some of them might fail as you process them; then what?
So, presuming that you want to process as many as possible:
1. Loop
a. Attempt to load current one. If fail log error so it can be identified then proceed to next
b. Process it. If fail log it. If necessary rollback.
c. Continue looping on next one until done.
Variations on the above.
1. If you need to do this once a day and each item is in your control, logging an error might be enough. But if you're doing it once a minute, then you need to tag items that fail with a flag to indicate that they are not processed again until the flag is cleared (manually, by someone who has fixed the problem).
2. Might want to provide notifications of failures, perhaps owners of each item.
3. Might want to have an error that indicates if a 'large' percentage failed since it might indicate a problem with the system and not the items.
4. What if the system is down for a period of time? Do you need to catch up? If so, how?
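The loop described above might look roughly like this in Python; the load/process/log hooks are placeholders for whatever storage and alerting a real system would use:

```python
def process_all(item_ids, load, process, log_error):
    """Process as many items as possible: a failure on one item is
    logged and skipped, so it never stops the rest of the batch."""
    failed = []
    for item_id in item_ids:
        try:
            item = load(item_id)          # a. attempt to load; may fail
        except Exception as exc:
            log_error(f"load failed for {item_id}: {exc}")
            failed.append(item_id)
            continue                      # proceed to the next item
        try:
            process(item)                 # b. process; may fail too
        except Exception as exc:
            log_error(f"process failed for {item_id}: {exc}")
            failed.append(item_id)
    return failed                         # c. caller can flag/notify these
```

The returned list of failures is the hook for the variations above: set "do not retry" flags on them, notify their owners, or alarm if the failure percentage is suspiciously large.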
|
|
|
|
|
I hope this is the right forum to ask my question, so I'm posting here.
At the moment, for all my software deliverables I use MSI packaging (using Flexera InstallShield), so for each new release I create a new InstallShield package and distribute it. I'm looking into the possibility of distribution over the air.
To do that I'd make some changes to the software logic in order to poll a server and download the latest binaries. What I have in mind is a design similar to Windows Update, where my application shows when a new version is available.
My question is: what's the best way to archive new binaries for download? On a single server, for example? And what if that server goes down?
Feedback and comments are really appreciated. Thanks in advance.
If you've never failed... You've never lived...
|
|
|
|
|
Honestly, it depends entirely on the criticality of the updates. How severe a negative impact will users suffer from not being able to update immediately on release? Will an update-server uptime of less than 100% drive consumers away from your product to alternatives?
That said, static space is very cheap, and setting up a mirror or two that your update service can point at wouldn't hurt; it would just cost a little money.
Versioning for a single application can be pretty easy; just make sure that you have a "current" folder that is consistent across mirrors.
mirror1.myapp.com
|-/myAppDataComponent
|  |-/current
|  |-/1
|  |  |-/1  //version 1.1
|  |  |-/2  //version 1.2
|  |  |-...
|  |-/2
|  ...
|-/myAppBusinessComponent
|  |-/current
|  ...
"There are three kinds of lies: lies, damned lies and statistics."
- Benjamin Disraeli
|
|
|
|
|
Out of all of them, I have one application which is a core layer for the others, so that one is critical.
There can be a situation where an application is in use and the underlying layer has an update. That's the critical situation in my case.
My thought is that I need a separate agent to monitor the state of the applications and handle the update.
If you've never failed... You've never lived...
|
|
|
|
|
You might want to look at Squirrel[^] - "It's like ClickOnce but Works".
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
We use clickonce behind the firewall and it works reasonably well.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
That's something new to me. Thanks for the comment; I'll have a look.
If you've never failed... You've never lived...
|
|
|
|
|
IIRC, there is an option within the (paid version of) InstallShield to generate code that will check for an update when you run a program, and notify the user. Have you looked into that?
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack.
--Winston Churchill
|
|
|
|
|
I have a licensed version of InstallShield 2011. Hopefully that feature is supported in it.
I didn't know about that and have never tried it. Isn't it a GUI maker for the installer?
If you've never failed... You've never lived...
|
|
|
|