|
Matt U. wrote: but I need to figure out the most efficient way (both from a development
standpoint as well as from an end-user standpoint)
Then you really need to have some more requirements.
For example will 100,000 users access this?
Will there be 100 billion of these?
Will there be 1000 retail locations attempting to access it?
Questions like that impact the design.
Matt U. wrote: So is there a way I can keep from creating 25 UserControls, one for each type?
Yes.
Is it ideal in terms of your actual business requirements? I have no idea.
But the idea is simple: you have an inventory item with "properties". In the simplest scenarios the business needs for the properties are very simple, so one can store them in a property table.
That table would look like this
Property (primary key id is assumed)
- inventory_id
- name
- type
- value
That table would have a row like the following
<some inventory id>, "Clock Speed", "Decimal", "2.66"
There are many variations on the above. For example, instead of "Clock Speed"/"Decimal" you could have an enumeration table that defines a 'type', and then the property row would tie to that type via the type id.
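To make that concrete, here is a minimal sketch using Python and SQLite (the table name, the inventory id 42, and the type-based conversion are illustrative assumptions, not a prescription):

```python
import sqlite3

# Hypothetical sketch of the property table described above, using SQLite
# for illustration; table and column names are assumptions, not a spec.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE property (
        id           INTEGER PRIMARY KEY,
        inventory_id INTEGER NOT NULL,
        name         TEXT    NOT NULL,
        type         TEXT    NOT NULL,
        value        TEXT    NOT NULL
    )
""")
# The example row from above: a clock-speed property for some inventory item.
conn.execute(
    "INSERT INTO property (inventory_id, name, type, value) VALUES (?, ?, ?, ?)",
    (42, "Clock Speed", "Decimal", "2.66"),
)

# Reading a property back and converting it according to its declared type.
row = conn.execute(
    "SELECT type, value FROM property WHERE inventory_id = ? AND name = ?",
    (42, "Clock Speed"),
).fetchone()
clock_speed = float(row[1]) if row[0] == "Decimal" else row[1]
print(clock_speed)  # 2.66
```

The point is that adding a 26th item type costs a few rows, not a new table or a new UserControl.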
|
|
|
|
|
First of all, thank you for the response.
jschell wrote: For example will 100,000 users access this?
Will there be 100 billion of these?
Will there be 1000 retail locations attempting to access it?
There will only be a handful of people who access this portion of the system.
There will only be roughly 25 item types. However, there may be several variations of each item type. (500GB 5400RPM, 500GB 7200RPM, 750GB 5400RPM, etc.)
There is only one facility which will access this application. It's all in-house/local.
jschell wrote: But the idea is simple: you have an inventory item with "properties". In the simplest scenarios the business needs for the properties are very simple, so one can store them in a property table.
So if I were to implement this idea would I have the following?
Inventory Table
------------------------
- InventoryId
- Manufacturer
- PartNumber
- ItemType
- TotalQty (this is the total qty. of this item in inventory)
- AvailableQty (this is the qty. (must be <= TotalQty) of this item which are available for picking)
Properties Table
------------------------
- InventoryId (linked to Inventory Table)
- Name
- Type
- Value
Am I understanding it correctly? Also, as far as not having that many UserControls, how could I reduce that number? I'm still thinking on that one and trying to figure it out.
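To make sure I'm picturing it right, here is a rough SQLite sketch of those two tables (table names, sample values, and the check constraint are just placeholders I made up):

```python
import sqlite3

# Illustrative sketch of the two-table layout above, using SQLite;
# all names and sample data are assumptions for demonstration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inventory (
        inventory_id INTEGER PRIMARY KEY,
        manufacturer TEXT, part_number TEXT, item_type TEXT,
        total_qty INTEGER, available_qty INTEGER,
        CHECK (available_qty <= total_qty)   -- AvailableQty <= TotalQty
    );
    CREATE TABLE properties (
        inventory_id INTEGER REFERENCES inventory(inventory_id),
        name TEXT, type TEXT, value TEXT
    );
    INSERT INTO inventory VALUES (1, 'Seagate', 'ST500', 'Hard Drive', 10, 7);
    INSERT INTO properties VALUES (1, 'Capacity', 'Text', '500GB');
    INSERT INTO properties VALUES (1, 'Spindle Speed', 'Text', '7200RPM');
""")
# One join returns the item plus all of its variant-defining properties.
rows = conn.execute("""
    SELECT i.part_number, p.name, p.value
    FROM inventory i JOIN properties p ON p.inventory_id = i.inventory_id
    WHERE i.inventory_id = 1
""").fetchall()
for part, name, value in rows:
    print(part, name, value)
```

That way the 500GB/5400RPM vs. 500GB/7200RPM variations are just different property rows, not different tables.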
djj55: Nice but may have a permission problem
Pete O'Hanlon: He has my permission to run it.
|
|
|
|
|
Matt U. wrote: Am I understanding it correctly?
Yes.
However, why are the other fields not properties as well - like 'part number'?
Quantities are probably functional so those wouldn't be properties.
|
|
|
|
|
Well, I figured since the 'Manufacturer' and 'Part Number' fields were common among all items, why not have those fields in the base table. Would there be any real advantages of placing them in the Properties table? Or is it more for separation of purpose?
|
|
|
|
|
Purpose, not commonality, is what drives the design.
Part number might be broken out because you need to drive functionality from it - like queries and/or B2B work. I doubt Manufacturer would have that same need.
However it is still somewhat subjective, especially since your volume is low. You might just keep them there to make it easier for the DBA to investigate stuff.
|
|
|
|
|
I understand, that makes sense. Oh, it will make it easier for me to investigate stuff? LoL. I run the entire show when it comes to designing and developing software here. I design the databases, maintain the servers, write the software, etc. Haha. But I understand your point. Thanks a lot for all of your input.
|
|
|
|
|
In one company I worked at, we coded these characteristics into the part number itself.
For instance, a 4000 series graphics chip could have a speed rating of 24, 32 or 40 MHz.
It would be numbered as 4000-24, 4000-32 and 4000-40.
In the case of your CPU, the part number could be i3-1.4-c2-v2 for an Intel i3, 1.4 GHz, 2-core, version 2 model.
In this case, one need only enter the part number correctly into the system, without worrying about some Tom, Dick or Harry entering all the characteristics in a separate property sheet.
Your disk drive could be numbered DISK-2.5-500-7200 for a 2.5", 500GB, 7200 RPM drive. You can even add SATA, PATA by using an S or a P at the end.
Don't know if this would meet your needs or not.
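To make the idea concrete, here is a rough Python sketch of decoding such a code (the field layout follows my disk example above and is purely an assumption, not any real standard):

```python
# Hypothetical sketch of decoding characteristics from a structured part
# number such as "DISK-2.5-500-7200S"; field order and meanings are assumed.
def parse_disk_part_number(part_number):
    """Split a DISK-<size>-<capacity>-<rpm>[S|P] code into its attributes."""
    prefix, size, capacity, rest = part_number.split("-")
    interface = None
    if rest and rest[-1] in ("S", "P"):
        # Optional trailing letter encodes the interface type.
        interface = {"S": "SATA", "P": "PATA"}[rest[-1]]
        rest = rest[:-1]
    return {
        "family": prefix,
        "size_inches": float(size),
        "capacity_gb": int(capacity),
        "rpm": int(rest),
        "interface": interface,
    }

print(parse_disk_part_number("DISK-2.5-500-7200S"))
```

With a scheme like that, the characteristics live in one well-formed string instead of a separate property sheet.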
|
|
|
|
|
I had already taken that into consideration. However, the part numbers come from corporate. We have no control of them whatsoever so we're a bit limited in that aspect. :-/ I figured it out though, using the separate table as mentioned.
|
|
|
|
|
We used to supply parts to the Itty Bitty Mitty Computer Corporation who wanted to see only their part number on the chips as well as on the shipping documents.
Our Inventory Control system allowed for alternate part numbers so the user could put in their part numbers and the system would still point to the one that we used inside our company.
|
|
|
|
|
Oh, interesting. Well, our current implementation seems to fit our needs. But I'll keep this idea in mind. Thank you, Vivic.
|
|
|
|
|
Is it recommended to use the Microsoft Enterprise Library for heavy-traffic sites?
If not, please suggest some architecture.
|
|
|
|
|
Sandeep Bhatti wrote: Is it recommended to use the Microsoft Enterprise Library for heavy-traffic sites?
It is efficient code, and unless one would like to implement their own logging mechanism, test it, and maintain it, EntLib is the recommended way. The same goes for caching, data access and exception handling.
Sandeep Bhatti wrote: If not, please suggest some architecture.
EntLib is not an architecture. What you'd probably be looking for is the MVC pattern, which is also not an architecture.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Deleted
Nothing is true and everything is permitted!
modified 24-Oct-12 8:07am.
|
|
|
|
|
1. It is considered bad form to ask for reviews of your article. Articles will get reviewed by CodeProject members in their own time.
2. If your application is going to win a prize in the contest then it needs to be all your own work.
One of these days I'm going to think of a really clever signature.
|
|
|
|
|
Saif Al Falah wrote: P.S. You could also rate and comment on my article. That would really help a lot.
That looks to me like asking for a review.
If other people suggest extra features for your app then it is not all your own work. The competition is about innovation, so it is not just coding a solution but coming up with the ideas and design in the first place.
|
|
|
|
|
I was afraid to post this under a general discussion forum, and it looks like this might be the closest fit ... so here goes.
For a couple of years now, I have received unwanted calls from Online Pharmacies to Debt Reduction Services, and I am so tired of telling them I am not interested and/or to remove my name from their list. Cell phone service carriers are useless, for the most part, and usually one has to resort to just blocking the offending call. But, needless to say, the offending callers will just change their number to another one, and the cycle starts over again.
Researching these numbers on the internet usually turns up a list of others complaining about the exact same phone numbers. So, it is not just me. The latest number, (617) 371-4000, appears to be someone spoofing a legitimate investment company, which is sad for the company that actually does own that phone number. There has to be something better I can do to resolve this ... I am a programmer, I am a technician, and I am (reasonably) smart!
I am not a mobile device programmer, though I feel certain that I might be able to bulldoze my way through building an app - though it wouldn't look pretty, and there would be some serious doubt about the functionality of it. So, I am looking to you (the development community) to help create a solution. I am ok with someone else profiting from the ideas I am about to propose, my reward will be in having these calls stopped, and that IS priceless.
I believe that a simple app (Android, iOS, and/or even Windows) could maintain a list of offending phone numbers and, when an inbound caller matches one, answer the call instead of alerting the user ... but act like a fax machine. The app should act completely like a fax machine, even to the point of accepting a fax transmission if one is provided ... just don't do anything with the incoming data. It might even be possible to give the user an extra button for incoming calls ... The normal choices for incoming calls are Answer and Decline; maybe the application could add a third option ... "FAX Spoof"
The people responsible for these spoofing efforts (and I do understand that it might be multiple groups of multiple individuals, who might not even be connected or aware of each other) do not care what you do or say ... They just move on to other numbers, and if necessary move their numbers as well. Cursing at them doesn't work. Phone carriers are useless. I truly believe that the only way to stop it is to eventually have MY number show up in their lists as a useless fax number, and I believe a tool like this would work wonders at bringing that situation about.
|
|
|
|
|
Unwanted calls fall into one of the following
1. Outright fraudulent.
2. Unintentionally fraudulent.
3. Legitimate but abusive.
4. Legitimate but useful.
The first will change telephone numbers often.
2/3 will seldom change numbers, but there are a lot of them.
4 you actually want to receive.
Black listed numbers won't help for 1.
Black listing for 2/3 won't help unless one number repeatedly calls.
Obviously you wouldn't want to black list 4.
White listing won't help if you want 4.
pdelayCA wrote: Cursing at them doesn't work. Phone carriers are useless.
In the US there are various ways to deal with this. You can put your number on do-not-call lists. You can file written complaints. You can sue, at least in some jurisdictions. At least in some cases it is small claims court, which telemarketers would be loath to fight since it is local to the consumer, not the telemarketer.
http://www.npr.org/templates/story/story.php?storyId=7532224[^]
pdelayCA wrote: I am not a mobile device programmer, though I feel certain that I might be able to bulldoze my way through building an app - though it wouldn't look pretty, and there would be some serious doubt about the functionality of it. So, I am looking to you (the development community) to help create a solution.
They already exist.
|
|
|
|
|
My colleagues and I have been maintaining an old client/server application for years and we want/need to utilize new technologies (new compared to the framework 2.0-type stuff we've been doing) to make using/maintaining our application much easier.
The current application is set up as follows:
1) This is a legacy windows desktop application
2) We have thousands of customer companies. Each company can have 1 or more locations. Each location can have 1 instance running per machine, but multiple instances (running on separate machines) can access the data of that location at any time.
3) We have a client database on every computer that stores a copy of the server data (this is to make reads faster). Whenever data is inserted/changed/deleted the operation is performed server-side, then the local db is synched with the server db (we use timestamp comparisons to check for consistency). We also check timestamps before CRUD operations to make sure the local data is up-to-date (since other computers could be working with the same data server-side).
4) Our largest tables have ~300k rows per location
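The timestamp check in (3) amounts to something like this simplified Python sketch (all names, the fake "server" dict, and the integer timestamps are made up for illustration):

```python
# Rough sketch of the timestamp comparison described above: before a CRUD
# operation the client compares its cached row's timestamp with the server's
# and refreshes the local copy when it is stale. Names are assumptions.
def is_local_row_current(local_row, fetch_server_timestamp):
    """Return True when the cached row matches the server's version."""
    server_ts = fetch_server_timestamp(local_row["id"])
    return local_row["timestamp"] == server_ts

def ensure_fresh(local_row, fetch_server_timestamp, fetch_server_row):
    """Re-fetch the row from the server if the local copy is out of date."""
    if is_local_row_current(local_row, fetch_server_timestamp):
        return local_row
    return fetch_server_row(local_row["id"])

# Simulated server state for illustration: the server holds a newer version.
server = {7: {"id": 7, "timestamp": 1002, "data": "new value"}}
local = {"id": 7, "timestamp": 1001, "data": "old value"}

fresh = ensure_fresh(
    local,
    lambda row_id: server[row_id]["timestamp"],
    lambda row_id: server[row_id],
)
print(fresh["data"])  # new value
```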
I've really been interested in the Entity Framework in conjunction with WCF Data Services and I have a couple of questions about how the new app (v2.0) should be setup.
Questions:
1) Should we abandon the use of a local DB? Our customers are required to have a broadband connection, so speed wouldn't be a huge issue, I am concerned though about bandwidth usage.
2) If we should keep the local DB, does WCF support this type of data model? I haven't been able to find any examples of people doing this (which is leading me to believe it's a bad idea)
3) Are there other alternatives to the Entity framework and WCF that would better suit our applications needs?
Just a side note, none of the developers currently working on the software were employed here when the current architecture was developed.
EDIT: I forgot to mention another aspect of our current setup: Each company can have location data on a number of different databases. Each location's data is only on 1 database.
modified 17-Oct-12 18:10pm.
|
|
|
|
|
It seems that no-one is willing to answer you, so I'll give it a shot.
sephus6 wrote: 1) Should we abandon the use of a local DB? Our customers are required to have a broadband connection, so speed wouldn't be a huge issue, I am concerned though about bandwidth usage.
Depends, mostly on the size of the recordsets you fetch. If it's limited to getting the data that's going to be displayed, then broadband is broad enough. If you're going to perform lots of manipulations on the data while moving back and forth, it might be more performant if you could do it locally.
You might want to consider a hybrid; keep the data 'in the cloud' (on your server), and have tables that aren't mutated in a local database. Or synchronize the most used data to a local cache - most reports don't require 'realtime' data, and are quite happy with a database that's synchronized every now and then.
sephus6 wrote: 2) If we should keep the local DB, does WCF support this type of data model? I haven't been able to find any examples of people doing this (which is leading me to believe it's a bad idea)
All the "local" examples treat the server as if it's "remote"; that's due to using WCF for communication. Without WCF, you'd be talking directly to the local database. That might be an option too; it's faster, but makes it harder to share data. I'd recommend it only for the reports, which could use a local (read-only) version of the data.
sephus6 wrote: 3) Are there other alternatives to the Entity framework and WCF that would better suit our applications needs?
An alternative to EF would be NHibernate, but EF is a bit more than just an ORM. I think the tools you choose are the best for the job; with the added suggestion to add EntLib in the mix of ingredients.
|
|
|
|
|
sephus6 wrote: to make using/maintaining our application much easier.
I doubt that will happen.
Since you already have experience with the existing application you know how it works. To re-write it you would need to learn one or more technologies and correctly implement them. Even supposing everything else were equal, while you were learning and correctly implementing the new stuff it would in fact be harder to maintain the application.
If and only if you do everything correctly then in time it might be easier. I doubt however that it would be "much" easier. Although if the original application did not grow in a disciplined way and the new application does then it could be. That however has nothing to do with technology.
sephus6 wrote: 3) We have a client database on every computer that stores a copy of the server
data (this is to make reads faster).
Based on the rest of your post the real reason for this concern was probably network latency. Older databases didn't have problems with read speed.
sephus6 wrote: I am concerned though about bandwidth usage.
That depends on application usage. Exactly what do your users do in their normal work flow? You can't answer the question without that knowledge, nor can you really architect a solution without it.
sephus6 wrote: If we should keep the local DB, does WCF support this type of data model?
If your user base is really using different databases (which is different than a marketing claim that someone could if they wanted to), then you would need to craft a solution to do this yourself. Your current architected solution is still sufficient in that regard.
WCF doesn't provide anything specifically helpful for that.
sephus6 wrote: Are there other alternatives to the Entity framework and WCF that would better
suit our applications needs?
I doubt that can be answered without more specific information. At a minimum it can't be answered without at least some specific information about what exact traffic you anticipate. A medical imaging system is vastly different than a cash register app.
sephus6 wrote: Each company can have location data on a number of different databases. Each
location's data is only on 1 database.
Not exactly sure what that means. I also question if your customers, at least a significant number, actually do that.
If your customers really do that, or if you really want to continue to market it that way (despite no one doing it), then it rules out database replication, which would normally be the way to do this.
|
|
|
|
|
Hi all,
I have taken ownership of a SQL 2008 database that has a huge number of rows with HTML content. I need to implement some new functionality that would let users intelligently search within the HTML of these records.
Any suggestions?
thanks,
Saurabh
|
|
|
|
|
Member 179330 wrote: Any suggestions?
Full-Text Search; it's part of SQL Server.
If you've got a lot of spare time, you could build something similar yourself: fetch all the varchar fields that contain the text, count how often it occurs, add a bonus if the text occurs in a title, and you have an 'intelligent' search.
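A toy Python version of that DIY scoring idea (the weights, the regex-based tag stripping, and the sample documents are arbitrary assumptions; real HTML deserves a proper parser):

```python
import re

# Naive search scoring: strip the HTML, count how often the term occurs,
# and add a bonus when it appears in a <title> or <h1>. Weights are arbitrary.
def score(html, term):
    term = term.lower()
    title_text = " ".join(re.findall(
        r"<(?:title|h1)[^>]*>(.*?)</(?:title|h1)>", html, re.I | re.S)).lower()
    body_text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripper
    occurrences = body_text.count(term)
    title_bonus = 5 if term in title_text else 0
    return occurrences + title_bonus

docs = {
    1: "<h1>Clock speed explained</h1><p>Clock speed measures cycles.</p>",
    2: "<p>Nothing about timing here.</p>",
}
ranked = sorted(docs, key=lambda doc_id: score(docs[doc_id], "clock speed"),
                reverse=True)
print(ranked)  # doc 1 first
```

For anything beyond a toy, though, the built-in Full-Text Search will do this faster and with stemming thrown in.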
|
|
|
|
|
Member 179330 wrote: huge number of rows
Where "huge" means what exactly?
Member 179330 wrote: with HTML content.
Where "html" means what exactly? More specifically, what type of data does the HTML represent? If there are tables, versus pictures, versus blog posts, it impacts what one might need to focus on searching for.
|
|
|
|
|
I am working on a core project, a class library, which acts as a DAL for the client.
The DAL fetches records from the database.
I have heard about MemoryCache, but once I have created a cache I want it to update itself when a new record is added to a table.
Is there a way to recreate my cache when a record is inserted into a table?
|
|
|
|
|
It depends.
If it's ADO.NET and SQL2005 or higher, you can define a "cache dependency" that automatically refreshes cached stuff when it changes in the database. I suspect it relies on the fact that SQL2005 can host the CLR to have a way to swap the ordinary roles (ordinarily we use database servers as... SERVERS, but here the *server* needs to notify a *client*, which simply isn't possible given the definitions of what a client and a server are).
If it's ADO.NET and an older SQL Server version, I believe there's a cache dependency that *might* help. It depends. This one uses a polling model, which means there's a delay involved. Dirty cache reads are still possible. If dirty reads cause serious problems, you can't use this. If seeing old data is OK for a short time after it has really changed, you probably can use it.
Otherwise, you probably have to write the code yourself to keep the cache in sync. If the data is only modified by a single instance of your own application this isn't really difficult to accomplish. If there are multiple instances it gets more complicated, unless the instances are operating on isolated subsets of the data, since you will now need to establish some mechanism to ensure caches are kept in sync. It will also mean a performance hit and will require distributed transactions.
Cache invalidation is one of the hardest problems in all of common practical programming. In many cases, there are better practical approaches. Rather than trying to solve the cache invalidation problem, it may be easier and good enough to combine imperfect caching with optimistic concurrency. Concurrency is an often-ignored aspect of multi-user systems that you should probably address anyway (most apps should, but few do). Depending on how you handle concurrency violations (e.g. you could let the user resolve conflicts) having this in place often makes it acceptable to live with some dirty cache data.
You can use a timeout policy to prevent old data from sticking around for too long. Volatile data should perhaps not be cached at all. The ideal cache item is one that is expensive to get but changes infrequently. And never cache stuff that is inexpensive to get for a very long time. Saving a millisecond makes sense in something that happens often, but whether you save a millisecond every minute or every hour is not going to make any difference to the performance of your system. So in such cases, use a short timeout.
Also don't forget that there are frequently opportunities to influence how expensive stuff actually is to get. Look for opportunities to fetch (or compute) more than one thing at a time. Think about what information the application needs to get to show a much-accessed screen. Or better yet, run some profiling sessions and find out exactly what information the application gets, how often it gets *the same* information, and so on.
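For the timeout policy, here is a minimal sketch (Python used just to illustrate; the class name and the injected clock are assumptions made so the example is deterministic):

```python
import time

# Minimal sketch of a timeout policy: cache entries expire after a per-item
# TTL so volatile data cannot stick around too long. The clock is injectable
# purely so the demo below is deterministic.
class TtlCache:
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._items = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._items[key] = (value, self._clock() + ttl_seconds)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self._clock() >= expires_at:
            del self._items[key]  # expired: caller must re-fetch from source
            return None
        return value

# Fake clock so the example is repeatable.
now = [0.0]
cache = TtlCache(clock=lambda: now[0])
cache.set("report", "expensive result", ttl_seconds=60)
print(cache.get("report"))  # expensive result
now[0] = 61.0               # a minute later...
print(cache.get("report"))  # None: expired, go back to the database
```

Expensive-but-stable data gets a long TTL; cheap or volatile data gets a short one, or none at all.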
|
|
|
|
|