|
Since VirtualBox is open-source, I would start there (by studying the source code).
|
|
|
|
|
|
Machine virtualization protects a lot, but it also brings a lot of complexity.
Richard Andrew x64 wrote: order to create some sort of sandboxed environment in which I could put untrusted code?
More limited forms of sandboxing exist, depending on what you actually want to do.
Both C# and Java have plugin mechanisms that, with care on the part of the app owner, can very tightly control what the code can do. In general I suspect plain C++ doesn't, but C++/CLI (or whatever it is called now) probably does in the same way that C# does.
But if you really want machine virtualization, then don't even do it yourself. Rather:
1. Require an internet API (probably REST).
2. Document what the API does, extensively.
3. Document, to whatever extent you want, how a developer codes to that API and then sets up their own server on one of the vast array of hosting sites now available (AWS, etc.)
Then in your application you provide a registration service that allows the other developers to register their server. If you want, add a validation process for their API.
Of course, on your end you stringently validate the input and output of the calls to those servers via your server. You then implement your business functionality to use those external sites.
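The register-then-validate step above might be sketched roughly like this. This is a minimal sketch only: the PluginRegistry type, its method names, and the HTTPS-only validation rule are illustrative assumptions, not part of any real framework; a real validation step would call a known endpoint on the registered server and check the response shape.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical registry of third-party plugin servers.
public record PluginServer(string Owner, Uri BaseUrl, bool Validated);

public class PluginRegistry
{
    private readonly Dictionary<string, PluginServer> _servers = new();

    // Developers register the base URL of the server hosting their API.
    public void Register(string owner, Uri baseUrl) =>
        _servers[owner] = new PluginServer(owner, baseUrl, Validated: false);

    // Placeholder validation: here just "must be HTTPS". A real check
    // would exercise the documented API and verify its responses.
    public bool Validate(string owner)
    {
        if (!_servers.TryGetValue(owner, out var s)) return false;
        bool ok = s.BaseUrl.Scheme == Uri.UriSchemeHttps;
        _servers[owner] = s with { Validated = ok };
        return ok;
    }
}
```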
|
|
|
|
|
This is more of a general design question. If I have a class, and I intend to keep a list of those objects (around 5000) and need to perform some function on each of them, should I design around a method that is part of the class, or a separate function that takes a collection of those objects? I know there are always situations where one may be better than the other, but when first starting to think about the design, which way would be recommended?
For example:
class Person {
    public float HungerLevel;
    public void AdjustHunger(float amount) { HungerLevel += amount; }
}
public void AdjustAllHunger(List<Person> personList, float amount) { foreach (Person p in personList) { p.AdjustHunger(amount); } }
or
class Person {
    public float HungerLevel;
    public void AdjustHunger(float amount) { HungerLevel += amount; }
}
public void AdjustAllHunger(List<Person> personList, float amount) { foreach (Person p in personList) { p.HungerLevel += amount; } }
I would imagine that the second option is more efficient if I need to change all objects in the collection by the same value, but I'm not sure. I guess it could even depend on what language I'm using and how it would try to optimize the code? Does a call to a function have a higher cost than access to an object's variable? Expanding on this example, what if the hunger change was based on another of the object's variables?
class Person {
    public float HungerLevel;
    public float Metabolism;
    public void AdjustHunger(float amount) { HungerLevel += (amount * Metabolism); }
}
public void AdjustAllHunger(List<Person> personList, float amount) { foreach (Person p in personList) { p.AdjustHunger(amount); } }
vs.
class Person {
    public float HungerLevel;
    public float Metabolism;
    public void AdjustHunger(float amount) { HungerLevel += amount; }
}
public void AdjustAllHunger(List<Person> personList, float amount) { foreach (Person p in personList) { p.HungerLevel += (amount * p.Metabolism); } }
I don't have a project right now that deals with this, but the concept popped into my head and I was thinking about how I would start designing it. I could probably set up a test case and try it out with a sample, but is that what most programmers do at the design stage - creating multiple test implementations? Is there some sort of 'design theory' that I should be using? I'm self-taught, so I don't know if this is something that is covered in a more structured training environment (so please excuse me if this is a dumb question).
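One practical way to answer the "does a method call cost more than touching the field" question is a quick micro-benchmark along these lines. This is only a sketch (the loop counts and names are arbitrary): in .NET the JIT usually inlines a trivial method like AdjustHunger, so the two loops typically measure nearly the same, and both forms produce identical results.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class Person
{
    public float HungerLevel;
    public void AdjustHunger(float amount) { HungerLevel += amount; }
}

class Program
{
    static void Main()
    {
        var people = new List<Person>();
        for (int i = 0; i < 5000; i++) people.Add(new Person());

        var sw = Stopwatch.StartNew();
        for (int rep = 0; rep < 10000; rep++)
            foreach (Person p in people) p.AdjustHunger(0.1f);   // via method
        sw.Stop();
        Console.WriteLine($"method call:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int rep = 0; rep < 10000; rep++)
            foreach (Person p in people) p.HungerLevel += 0.1f;  // direct field
        sw.Stop();
        Console.WriteLine($"field access: {sw.ElapsedMilliseconds} ms");
    }
}
```

Measuring like this beats guessing; release-build numbers are the only ones worth trusting.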
|
|
|
|
|
I presume you would need to store the result in a database, so I would do the processing in the database: define the list, pass it to a stored procedure that does the calculation, writes to the database (UPDATE), and returns you the result.
I avoid double handling, i.e. fetch from the database, process the function, and write back to the database. Make the database do its job.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
I actually never thought about it being a separate database, as I was thinking more in a video game setting. The thought came up when looking at a park simulation (Planet Coaster) and trying to decide how I would tackle the same situation. Like I said, I don't have a project that has run into this situation, but I'm curious about how I would approach this concept.
In this case the function doesn't really care about the result, just the computation. Is there a word or definition for this concept of 'figuring out how best to store and manipulate data' in programming terminology? I feel as if I am floundering when it comes to these thoughts - how best to approach data structure and program structure. I feel like an idiot, as I have no idea what words to use when asking about this kind of situation.
|
|
|
|
|
hpjchobbes wrote: I was thinking more in a video game setting
That just shows my prejudice: I'm a LOB (line-of-business) developer, and EVERYTHING revolves around the database.
I doubt there is a significant performance difference. I only split a function out if it is to be reused, or if the method is too complex and splitting it makes supporting the app easier (there's that LOB thinking again).
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Nobody gets it right the first time; design is an iterative process. A good designer is curious and tries to think of alternatives.
In your case, if you're in the habit of increasing or decreasing "everyone's" welfare by a certain % or fixed amount, you might consider adding a "static" % or amount to the Person class that is always used to affect the HungerLevel, but only if it has a value set by a game rule (in other words, it functions as a global weighting factor).
static float ExtraPain { get; set; } = .1f;
...
float _hunger = 10f;
float HungerLevel { get { return _hunger * (1f + ExtraPain); } }
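Fleshed out into a complete class, the idea looks like this (a sketch using the names from the thread; the concrete values .1f and 10f are just the ones from the snippet above). Because ExtraPain is static, setting it once changes the effective HungerLevel of every Person at the same time.

```csharp
using System;

class Person
{
    // Global weighting factor, set by a game rule; 0 means "no effect".
    public static float ExtraPain { get; set; } = .1f;

    private float _hunger = 10f;

    // Every read of HungerLevel applies the current global factor.
    public float HungerLevel => _hunger * (1f + ExtraPain);
}
```

Usage: a game rule runs `Person.ExtraPain = .5f;` and every Person's HungerLevel is scaled at once, with no loop over the list at all.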
|
|
|
|
|
hpjchobbes wrote: but is that what most programmers do at the designing stage
Yes.
hpjchobbes wrote: so I don't know if this would be something that is covered in a more structured training environment
I doubt it. The vast majority of business problems cannot be taught, because by the time you could teach one you would already have created the solution, and there are just too many of them for that.
If you are interested, I suggest the following site to get a real idea of how big projects actually got big; it is always via iteration over time:
highscalability[^]
hpjchobbes wrote: I would imagine that the second option is more efficient
Based on your original description and your examples I would use neither.
First you must load the actual 'algorithm' via some mechanism, and that can fail, so you certainly can't just load all of them and then process each. And some of them might fail as you process them - then what?
So, presuming that you want to process as many as possible:
1. Loop:
a. Attempt to load the current one. If it fails, log an error so the item can be identified, then proceed to the next.
b. Process it. If it fails, log it. If necessary, roll back.
c. Continue looping on the next one until done.
Variations on the above:
1. If you need to do this once a day and each item is in your control, logging an error might be sufficient. But if you are doing it once a minute, then you need to tag failed items with a flag indicating that they should not be processed until the flag is cleared (manually, by someone who has fixed the problem).
2. You might want to provide notifications of failures, perhaps to the owners of each item.
3. You might want an error that indicates when a 'large' percentage failed, since that might point to a problem with the system rather than the items.
4. What if the system is down for a period of time? Do you need to catch up? If so, how?
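The load-process-log loop described above can be sketched as follows. This is a minimal sketch only: the BatchRunner type, the use of item ids, and the Console-based "logging" are illustrative stand-ins for whatever loading, processing, and logging mechanisms a real system would have.

```csharp
using System;
using System.Collections.Generic;

class BatchRunner
{
    public int Succeeded, Failed;

    public void Run(IEnumerable<int> itemIds,
                    Func<int, string> load,     // may throw
                    Action<string> process)     // may throw
    {
        foreach (int id in itemIds)
        {
            string item;
            try { item = load(id); }
            catch (Exception ex)
            {
                // Log so the failing item can be identified, then move on.
                Console.WriteLine($"load failed for {id}: {ex.Message}");
                Failed++;
                continue;
            }
            try { process(item); Succeeded++; }
            catch (Exception ex)
            {
                // Roll back here if the processing step has side effects.
                Console.WriteLine($"processing failed for {id}: {ex.Message}");
                Failed++;
            }
        }
    }
}
```

The key property is that one bad item never stops the run; it is logged and counted, and the loop continues with the rest.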
|
|
|
|
|
I hope this is the right forum for my question, so I'm posting here.
At the moment, for all my software deliverables I use MSI packaging (using Flexera InstallShield), so for each new release I create a new InstallShield package and distribute it. I'm looking for a way to distribute over the air.
To do that I would make some changes to the software logic, in order to poll a server and download the latest binaries. What I have in mind is a design similar to Windows Update, where my application shows when a new version is available.
My question is: what's the best way to archive the new binaries for download? On a single server, for example; and what if that server goes down?
Feedback and comments are really appreciated. Thanks in advance.
If you've never failed... You've never lived...
|
|
|
|
|
Honestly, it depends entirely on the criticality of the updates. How severe of a negative impact will users suffer from not being able to update immediately on update release? Will an update server uptime of less than 100% drive consumers away from your product to alternatives?
That said, static space is very cheap, and setting up a mirror or two that your update service can point at wouldn't hurt; it would just cost a little money.
Versioning for a single application can be pretty easy; just make sure that you have a "current" folder that is consistent across mirrors.
mirror1.myapp.com
|
|-/myAppDataComponent
|-/current
|-/1
|-/1/1 //version 1.1
|-/1/2 //version 1.2
|-...
|-/2
...
|-/myAppBusinessComponent
|-/current
...
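With a layout like the one above, the client-side check reduces to comparing the locally installed version against whatever is published under "current". A minimal sketch (the manifest URL and its format, a bare version string fetched from the mirror, are assumptions, not a real API):

```csharp
using System;

class UpdateChecker
{
    // Returns true when the mirror's published version is newer than
    // the locally installed one. System.Version handles
    // "major.minor[.build[.revision]]" strings and defines < and >.
    public static bool UpdateAvailable(string localVersion, string currentVersion)
    {
        return Version.Parse(currentVersion) > Version.Parse(localVersion);
    }
}

// Usage sketch: fetch the version string from an assumed location such as
// https://mirror1.myapp.com/myAppDataComponent/current/version.txt,
// then: bool needsUpdate = UpdateChecker.UpdateAvailable("1.1", fetched);
```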
"There are three kinds of lies: lies, damned lies and statistics."
- Benjamin Disraeli
|
|
|
|
|
Of all of them, I have one application which is a core layer for the others, so that one is critical.
There can be a situation where an application is in use and the underlying layer has an update. That's a critical situation in my case.
My thought is that I need a separate agent to monitor the state of the applications and update them.
If you've never failed... You've never lived...
|
|
|
|
|
You might want to look at Squirrel[^] - "It's like ClickOnce but Works".
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
We use ClickOnce behind the firewall and it works reasonably well.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
That's something new to me. Thanks for the comment; I'll have a look.
If you've never failed... You've never lived...
|
|
|
|
|
IIRC, there is an option within the (paid version of) InstallShield to generate code that will check for an update when you run a program, and notify the user. Have you looked into that?
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack.
--Winston Churchill
|
|
|
|
|
I have a licensed version of InstallShield 2011; I hope that feature is supported in it.
I didn't know that and have never tried it. Isn't it just a GUI builder for the installer?
If you've never failed... You've never lived...
|
|
|
|
|
I've never had to use it myself, but I seem to remember seeing something about it in the documentation of the 2013 release.
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack.
--Winston Churchill
|
|
|
|
|
It seems my version does not support that.
For the moment I'm working on a software solution to evaluate: a separate agent running all the time to manage all the updates, interacting with the servers.
If you've never failed... You've never lived...
|
|
|
|
|
I use a separate service to monitor the "health" of the main app (kiosk) and communicate with a server for "checking in", downloading updates (new installs), and restarting / communicating (shutdown) with the main app. (The service can email that the server is down, and vice versa.)
Adding this logic to the "main app" itself gets a little mind-mending for someone not familiar with the app.
|
|
|
|
|
That's exactly what I thought of: develop a separate agent to monitor all the applications as well as new updates on the server. It will download/copy new binaries to the relevant locations and manage the applications accordingly.
If you've never failed... You've never lived...
|
|
|
|
|
Can anyone give me an idea of how to design a good architecture for a Web API project?
I have to design a Web API project which will serve data in XML or JSON format to our customers.
I plan to design it this way: I have a Web API layer, and the Web API layer will interact with a repository layer to do the CRUD operations. I will use HMAC authentication.
But since I am new to Web API, I do not know whether my thinking is good or not. So would any experienced developer please share ideas on how to design a good architecture for a Web API project? Thanks.
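The layering I have in mind might look roughly like this. This is a sketch only: the controller below is a plain class standing in for a real ASP.NET Web API controller, and the Customer type, interface, and in-memory store are illustrative; HMAC checking would sit in front of the controller as a message handler or middleware.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer { public int Id; public string Name; }

// Repository layer: hides the data store behind an interface so the
// API layer never talks to the database directly.
public interface ICustomerRepository
{
    IEnumerable<Customer> GetAll();
    Customer Get(int id);
    void Add(Customer c);
}

public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _store = new List<Customer>();
    public IEnumerable<Customer> GetAll() => _store;
    public Customer Get(int id) => _store.FirstOrDefault(c => c.Id == id);
    public void Add(Customer c) => _store.Add(c);
}

// API layer: depends only on the repository interface, which keeps it
// testable and lets the storage implementation be swapped out.
public class CustomersController
{
    private readonly ICustomerRepository _repo;
    public CustomersController(ICustomerRepository repo) { _repo = repo; }
    public IEnumerable<Customer> Get() => _repo.GetAll();
    public Customer Get(int id) => _repo.Get(id);
}
```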
tbhattacharjee
|
|
|
|
|
|
I would attempt to "categorize" the intended application first; e.g. CRM; Blog; Forum; etc.
Chances are, there's a "template" out there that can help you; at least through the initial stages.
|
|
|
|
|
After an unsuccessful attempt to get video into an Arduino Due (problems debugging interrupts and descriptors), I am going back to the real world.
I did some work using OpenCV a few years back (in Windows); now I need to master getting video from a USB web cam as an "image" for OpenCV to work on in Linux / Ubuntu.
I have checked libuvc and v4l2.
Which one should I use?
I do not need high-definition video or streaming video.
I would like to start with simple edge detection on a single frame.
Thanks for your time.
Vaclav
|
|
|
|